5. OUR APPROACH
i. Tackling income inequality
The Government Economic Strategy has set an ambitious target to deliver greater Solidarity in Scotland by reducing the nation's relatively high levels of income inequalities. Our aim is to reconnect more people to the mainstream economy and provide the opportunities - and incentives - for all to contribute to Scotland's economic growth.
What the evidence says
The evidence points to a number of key drivers of income inequalities in Scotland. These key drivers can be particularly acute for some groups in society. People from minority ethnic backgrounds, disabled people and those with caring responsibilities, for example, can be at a particular disadvantage:
Low educational attainment and a lack of training
A lack of qualifications can severely limit a person's likelihood of accessing, sustaining, and advancing in employment - and, indeed, of earning a decent wage. For adults on low pay or benefits, good quality vocational training and training within the workplace is a well-established route into jobs which provide a better wage.
Substantial differences in life chances, quality of life and social inclusion are evident between those with low levels of literacy and numeracy and others at higher levels. Low level skills are associated with lack of qualifications, poor labour market experience and prospects, poor material and financial circumstances, poorer health behaviours and prospects, and lower social and political participation.
Low pay
The evidence tells us that most of those in the three lowest income deciles who are in work receive low hourly pay. In-work poverty is a very real problem in Scotland and can act as a disincentive to people who are looking to make the transition from benefits into work. Many women are concentrated in low paid employment, and some minority ethnic communities - in particular women from these communities - are disproportionately affected by low pay and occupational segregation, i.e. they are over-represented in traditionally low-pay sectors.
Caring responsibilities and other barriers to work
The evidence tells us that families with caring responsibilities can face particular disadvantages in accessing and sustaining employment. Parents can face difficulties balancing the time burden of care and work - and can lose confidence and skills if they take time out of the labour force to care for their families. A lack of high quality, reliable childcare can also discourage those furthest from the jobs market from taking initial steps towards employability.
A lack of incentives in the benefits system
Evidence suggests that the threat of sudden benefits withdrawal can act as a real disincentive for many people who are looking to move from benefits into work. Currently, the tax credits and benefits system does not provide adequate support for people making these important transitions.
To respond to these root causes of income inequality, the Scottish Government, local government and their partners need to take an approach which:
- Makes work pay - by providing people with the skills and training they need to progress in or into work and realise their potential; by supporting economic development and the creation of better employment opportunities; and by encouraging the enforcement of statutory workers' rights;
- Maximises the potential for people to work - by removing any barriers to their employment, including through the provision of more accessible and affordable childcare, and by learning the lessons from projects such as Working for Families; and
- Maximises income for all - so that everyone - including those who cannot enter the labour market - is well supported by income maximisation services and has a decent living standard, whether or not they are in work.
What more we will do
The approach taken by local authorities in addressing income inequalities through the Single Outcome Agreements will support real improvement across Scotland - but, clearly, more needs to be done. Working with its partners across the broader public sector, the Scottish Government will therefore take further, and more focused, action across the following areas:
Making work pay
- To help more people realise their potential - and to encourage more employers to deliver learning in the workplace - Government will provide additional funding through the Individual Learning Accounts Scotland scheme for in-work learning and ensure that it is targeted at those in the three lowest income deciles.
- The Scottish Government will press the UK Government to transfer responsibility for personal taxation and benefits to Scotland, to allow the development of an approach to equity and boosting economic activity that fits with Scottish circumstances. Specifically, the Government will press for a simplification of the tax credits scheme and the promotion of greater availability of childcare vouchers. Moreover, it will continue to make the case for a single, progressive and accessible system for supporting parents with childcare costs and making work pay for low income parents.
- The creation of stronger, more dynamic and sustainable communities is integral to the work of the enterprise agencies. This is particularly challenging in the fragile areas of the Highlands and Islands and in other rural communities, and both Highlands and Islands Enterprise ( HIE) and Scottish Enterprise will continue to support rural growth businesses and aid rural economic diversification. HIE will tackle the equity challenges through a new Growth at the Edge approach. This will bring together a range of activities - including community capacity building, leadership development, acquisition and development of assets for community benefit, support for business development and cultural initiatives - to enable disadvantaged communities to generate economic growth and create the conditions for population retention and growth.
- The Scottish Government will publish in 2009 an analysis of the scope for further action on income inequality in Scotland through pay across the public sector, taking into account the interaction with the tax and benefits system.
- The Scottish Government will, with the Poverty Alliance, the STUC and Third Sector partners, launch a campaign in 2009-10 to raise awareness of statutory workers' rights in Scotland in relation to the minimum wage, paid sick leave, holidays and maternity or paternity leave.
- The Scottish Government will press the Department for Business, Enterprise and Regulatory Reform and Her Majesty's Revenue and Customs to step up their efforts in Scotland to raise awareness, increase enforcement with employers and ensure that all workers in Scotland get what is rightfully theirs.
Maximising the potential for people to work
- The Scottish Government will work with local authorities to identify and disseminate examples of projects which successfully remove barriers to employment, including the evaluation of the successful Working for Families project.
- We will continue to roll out Workforce Plus and early in 2009 we will launch and facilitate an Employability Learning Network. This will enable employability partnerships within Community Planning Partnerships (CPPs) to learn from the experience and best practice of others in Scotland and elsewhere in supporting the most disadvantaged in the labour market into work.
- We will work with the Third Sector in 2009 to develop initiatives focused on fast-track entry into work, with transitional placements in the third sector and in-work support - all with the aim of addressing the gap that so often exists between those who are out of work and employers.
- We will work with NHS Boards, Jobcentre Plus and others to provide better support for those with mental and physical health needs who are currently receiving benefits but who might be able to join the workforce. Many of Scotland's citizens who currently receive Incapacity Benefit would like to work, and this targeted support will help them make that important step.
- We will work with the Third Sector to ensure that people are equipped with the financial skills that they need to help them manage their money during the transition into work.
- We will set out plans in 2009 for improved employability and skills services to Scotland's black and minority ethnic communities, working with community organisations.
- In line with our commitment in the Government Economic Strategy to improve the life chances of those at risk, we will extend our approach on inclusive employment for people with learning disabilities - so that other disadvantaged groups are able to benefit from this too.
- NHSScotland is one of the largest employers in Scotland. Territorial Boards are required to offer pre-employment training and opportunities for employment for people on benefits, through Health Academies or similar schemes. Many Boards are signing up to Local Employment Partnerships with Jobcentre Plus, committing to providing opportunities for people identified as on benefits and wishing to return to work. Some local authorities are working in partnership with their local health boards to extend the scope of the schemes and efforts will be made to make the approach more widespread. The Scottish Government will work with COSLA in 2009 to promote to local authorities a common public sector recruitment approach to develop pools of appropriate individuals from which smaller public sector recruiters could also draw.
- We will use the evaluation of our 14 existing pilots for those with multiple and complex needs - such as disability or mental health issues - to focus investment on a smaller number of approaches that help people overcome barriers to employment, and we will announce plans for this in 2009.
Maximising income for all
- We will make significant new investment in 2009-10 and 2010-11 in income maximisation work. This will include a focus on benefits uptake for older people and other key groups, building on our existing pilots with Age Concern Scotland, and work to increase people's net disposable income - helping their money go further. We will build on what works and develop new approaches to boost the income of those in poverty or at risk of poverty. This will be linked to implementation of the income maximisation recommendations in the Equally Well report and from the National Fuel Poverty Forum.
ii. Longer-term measures to tackle poverty and the drivers of low income
What the evidence says
The evidence suggests that, while dealing with the root causes of low income, we must also adopt an approach which breaks the inter-generational cycle of poverty and which addresses the following, major longer-term drivers of poverty in our society:
Inequalities in attainment of our children and young people
There is compelling evidence which shows that - despite the best efforts of Government, local authorities and others so far - many children and young people are still held back by social and economic barriers which hamper their development and make it much more likely that they will experience poverty in later life.
Inequality resulting from discrimination and bias
Despite the legislation to provide protection from discrimination, many people still experience disadvantage and limited opportunities because of their gender, race, disability, sexual orientation, faith, age or social background. Whilst huge progress has been made in making society fairer, discrimination still exists and institutions, public bodies, private enterprises and voluntary organisations can sometimes conduct their business in a way that may, unwittingly, disadvantage particular groups of people. The barriers and limited opportunities that arise as a result can lead to poverty and disadvantage.
Health inequalities
The gap in life expectancy in Scotland has increased consistently over the past 10 years. Problems from drug and alcohol abuse, from mental ill health and from other key health problems are far more pronounced amongst our poorer citizens. The distribution of poor health has an impact upon income inequality and can pass from generation to generation.
A lack of good quality, accessible and affordable housing - particularly, within our more deprived areas
It is clear that housing supply must be increased in many parts of Scotland over the longer term if we are to meet the nation's future housing requirements, ensure greater fairness and stability in the housing market, and help regenerate our most disadvantaged communities. In particular a strong supply of affordable housing is essential to support the country's social housing needs and encourage labour mobility from disadvantaged areas to areas with greater demand for labour.
Responding to these longer-term drivers of poverty in our society, therefore, the Scottish Government, Local Government and their partners need to take an approach which:
- Provides all children and young people with the best start in life - by putting parenting at the heart of policy, providing better access to spaces to play, and making every pre-school and school a family learning environment, so that all can realise their potential and avoid poverty in later life.
- Supports the broader effort to deal with the health inequalities in our society - by implementing the recommendations of the Equally Well report, including the development of financial inclusion activity within mainstream public services and promoting the evidence of the health benefits of employment and by taking a holistic approach to social issues such as violence - so current and future generations are able to live healthy working lives that are free from poverty.
- Promotes equality and tackles discrimination - by challenging stereotypes, building on public sector equalities duties, and supporting individuals so that all can reach their potential.
- Delivers good quality affordable housing for all - investing in house-building and protecting the housing stock - so that everyone in Scotland has the opportunity to live in a decent house that they can afford in a place where they can access services and employment.
- Regenerates disadvantaged communities - promoting the lasting transformation of places for the benefit of the people that live there by targeting investment, creating the right environment for private and public investment, and devolving power to the local level.
What more we will do
The approach taken by local authorities in addressing the major, longer-term drivers of poverty through the Single Outcome Agreements will support real improvement across Scotland - but, clearly, more needs to be done. Working with its partners across the broader public sector, the Scottish Government will therefore take further, and more focused, action across the following areas:
Providing children and young people with the best start in life
- We will introduce an early years framework to address many of the root causes of disadvantage through a focus on supporting parents and communities to provide the nurturing, stimulating environment for children. This will involve shifting the focus from crisis intervention to prevention and early intervention.
- By 2010-11, we will put in place arrangements for a weekly allowance to be paid to kinship carers of looked after children. This will be at a rate equivalent to the allowance paid to foster carers - subject to agreement with the DWP that this will not negatively impact on the benefit entitlements of these carers.
- Central to Curriculum for Excellence is the ongoing entitlement for all our young people to develop their skills for learning, skills for life and skills for work in whatever type of provision is best suited to their needs and aspirations. 16+ Learning Choices is our new model for ensuring that every young person has an appropriate, relevant, attractive offer of learning made to them, well in advance of their school-leaving date. We expect this to be a universal offer across Scotland by 2010; a specific focus will be needed by local authorities and their partners on the most vulnerable young people.
- We will build on our abolition of the Graduate Endowment fee by progressing wider plans to ensure that access to higher education is based on the ability to succeed rather than the ability to pay.
Supporting the broader effort to deal with the health inequalities in our society
- We will implement the key recommendations from Equally Well, the report of the Ministerial Taskforce on health inequalities and tackle the shared underlying causes of health inequalities and poverty. This will include the establishment of test sites for the task force's approach to redesigning and refocusing public services, using the best available evidence to inform good practice.
- Violence affects all of Scotland but it does not do so equally. We know that the death rate from assault in the most deprived communities is nearly four times the Scottish average, and over ten times that in the least deprived communities. The Scottish Government will support the Violence Reduction Unit to deliver its 10 Year Violence Reduction Action Plan - launched on 17 December 2007 - in order to significantly reduce violence in Scotland.
Promoting equality and tackling discrimination
We will continue to progress a range of activities to advance equality and to tackle discrimination including:
- Work with the public and third sector and the Equality and Human Rights Commission (EHRC) to embed and progress equality, building on the public sector equality duties.
- Activities to raise public awareness and challenge the stereotypes and attitudes which limit the opportunities for particular groups.
- The development, in concert with the EHRC and the UK Government, of a framework for measuring progress on equality.
- Working with disabled people, COSLA and the EHRC in shaping a programme to improve the opportunities for disabled people to live independently.
- Developing guidance for CPPs on Equality Impact Assessment of Single Outcome Agreements.
We will set out the detail of these proposals with our partners in 2009.
Delivering good quality affordable housing for all
To deliver good quality affordable housing for all, the Scottish Government will implement the approach set out in Responding to the Changing Economic Climate: Further Action on Housing by:
- Providing over £1.5 billion for affordable housing investment across Scotland during the period 2008-11 - with £100 million of that being brought forward to accelerate the building of affordable housing;
- Legislating to exempt new social housing from the Right to Buy, to protect the stock for future generations of tenants;
- Making £25 million available to Councils to encourage them to build new homes for rent;
- Making £250 million available over the period 2008-11 to increase the funding for the Scottish Government's Low-cost Initiative for First Time Buyers (LIFT) programme to help first time buyers get a foot on the property ladder;
- Funding an awareness campaign in 2008-09 to encourage those with financial difficulties to seek advice from the national debt-line to avoid home repossession; and
- Establishing a home owner support fund of £25 million over 2 years to support mortgage to rent and mortgage to shared equity transfers.
Regenerating disadvantaged communities
- We will continue to support the six Urban Regeneration Companies (URCs) throughout Scotland to help transform our most deprived areas, and to lead improvements in employability, educational attainment, community safety and health in those areas.
- Scottish Enterprise will engage with URCs and others delivering projects of a national or regional scale, to make regenerated areas attractive to inward investment and other business opportunities. New businesses and indigenous business growth will create further employment opportunities in high unemployment areas.
- As indicated in our Government Economic Strategy, we will support social enterprise - as part of our wider investment in the third sector - to provide start-up assistance and to provide supported employment to those furthest from the labour market.
- CPPs are seeking to use the Fairer Scotland Fund to accelerate the achievement of real outcomes for the most disadvantaged areas and vulnerable people. We will support them in this process by developing a community regeneration and tackling poverty learning network in 2009 to share best practice across Scotland.
- The 2014 Commonwealth Games will provide individuals, groups and organisations across Scotland with a range of opportunities and have the potential to act as a catalyst for economic, physical, and social regeneration in Scotland. The Games will create an estimated 1,200 new jobs in Scotland, of which 1,000 will be in Glasgow. Glasgow City Council is placing appropriate community benefit clauses in tenders relating to the 2014 Games.
- Preparations for the Games are closely linked to the work of the Clyde Gateway URC in the east end of Glasgow and neighbouring South Lanarkshire, into which we are investing £62 million. The Gateway has the potential to transform one of the most deprived communities in Scotland.
iii. Supporting those experiencing poverty
What the evidence says
Evidence suggests that, to help those experiencing poverty, the Scottish Government, Local Government and our partners must adopt an approach which:
- Delivers a fairer system of local taxation - based on ability to pay, to bring much-needed relief to Scottish household budgets.
- Supports those who face hardship as a result of rising energy prices - by implementing key recommendations from the National Fuel Poverty Forum and developing measures to make our citizens' money go further.
- Puts in place measures to provide greater financial inclusion - to help people avoid falling into hardship, whether as a result of economic downturn, or health, family and personal problems - as well as to address the stigma of poverty, particularly among our children and young people.
What more we will do
The approach taken by local authorities in supporting those experiencing poverty through the Single Outcome Agreements will make a difference to many thousands who are currently experiencing poverty - but, clearly, more needs to be done. Working with its partners across the broader public sector, the Scottish Government will therefore take further, and more focused, action across the following areas:
Replace the Council Tax with a fairer Local Income Tax
- The Scottish Government will legislate to replace the regressive, unfair Council Tax with a fairer system of local taxation, based on ability to pay. This change will help to lift an estimated 90,000 people out of poverty. This will provide a vital financial boost to low and middle-income households across the country as the biggest tax cut in a generation. Eight out of ten families living in Scotland will be better or no worse off, with, for example, the average married couple with children saving £182.00 per year, and the average single pensioner £369.20 per year.
Supporting those who face hardship as a result of rising energy prices
- The Scottish Government re-established the Fuel Poverty Forum to advise us on how best to tackle fuel poverty in future. We will implement their recommendation of a redesigned Energy Assistance Package for the fuel poor. This will provide more help and advice on all aspects of fuel poverty - checking that those vulnerable to fuel poverty are on the best fuel tariff, maximising their income, and improving the energy efficiency of their homes. Energy companies have agreed to work with the Government on providing a package of insulation measures, funded under the Carbon Emissions Reduction Target, to fuel poor households, and the Government will fund enhanced energy efficiency improvements to those households hardest hit by higher fuel bills. We expect many rural homes that are hard to insulate and not on the gas grid will be able to benefit from energy efficiency measures under the new Energy Assistance Package.
- The Scottish Government will press energy companies and UK Ministers to take action to minimise the impact of high fuel prices, particularly on our most vulnerable people.
- The Scottish Government will continue to call for action on fuel prices at a UK level, seeking greater consistency and clarity around the social tariffs being offered by energy companies and pressing the UK Government to reconsider its decision not to put social tariffs on a mandatory legal footing, and for more progress on data sharing which would help energy companies target help at those most in need.
Put in place measures to provide greater financial inclusion and address the stigma of poverty
- The Scottish Government will introduce legislation to extend entitlement to free school meals to all primary school and secondary school pupils whose parents or carers are in receipt of both maximum child tax credit and maximum working tax credit. This will increase entitlement to around an additional 44,000 pupils.
- The Government will introduce legislation to enable local authorities to provide free school meals to P1-P3 pupils by August 2010.
- We will increase availability and usage of money advice services and ensure they are appropriately targeted at and accessible to people from minority ethnic and faith communities, for example by being Sharia compliant for Muslims who seek it.
- For the first time, all young people will be taught how to manage their money and understand their finances as a result of Curriculum for Excellence. To ensure that teachers are adequately supported to deliver financial education the Scottish Government will provide additional support and funding to the Scottish Centre for Financial Education.
- There is strong evidence that problems with health, employment, housing or in the family put people at risk of falling into poverty, and can trigger further problems. Carefully targeted advice and representation can prevent this happening. We will work with advice providers and the Scottish Legal Aid Board to better integrate and so improve advice and support for people at risk of poverty.
- The Scottish Government and COSLA will carefully consider the recommendations of the short life working group on the School Clothing Grant.
- The findings of recent research carried out on behalf of the Scottish Government into the experience of poverty in rural areas and how that may differ from the experience in urban areas will be useful for service providers, including local authorities. We will publish that research and arrange an event at which we can share findings with relevant partners.
iv. Making the Benefits and Tax Credits system work better for Scotland
In the Government Economic Strategy, the Scottish Government pledged that it will continue to make the case for Scotland to have fuller, and eventually full, responsibility for personal taxation and benefits, to allow the development of approaches that better fit with Scottish circumstances. Over the months and years ahead, therefore, the Government will make the case for a benefits and tax credits system which provides security of income, supports transition to employment and allows those who cannot work to live with dignity.
The Scottish Government and Scotland's local authorities believe that Scotland's benefits, tax credits and employment support systems must act to protect our people from poverty and help them fulfil their potential. Irrespective of the administrative arrangement governing tax and benefits, the following key principles must guide benefits and tax credits policy if poverty and income inequalities in Scotland are to be eradicated:
- Individuals must have a strong degree of confidence around the security of their income. This means that the benefits system must be fair, transparent and sympathetic to the challenges faced by people living in poverty.
- The benefits, tax credits and employment support systems must work in harmony to support those who are capable of pulling themselves out of poverty through work. The financial benefits of working for those who can work must be significant, sustained and clearly signposted.
- Successful transitions into employment should never be undermined by financial uncertainty. This means that the system of transitional support must be transparent, responsive, quick and effective.
- For some, work is not possible. It is essential that the benefits system does not relegate such people to a life of disadvantage, financial uncertainty and poverty. Benefits must provide a standard of living which supports dignity, freedom and social unity. This must include female pensioners disadvantaged under the current system for time spent caring for dependents.
- The administration of benefits and tax credits should be as swift, streamlined and customer focused as possible to avoid administrative complexity leading to confusion and uncertainty about entitlement and support, particularly where individuals are trying to make a successful transition back into work.
What more we will do:
To make the benefits and tax credits system work better for Scotland's people, we will:
- Seek to establish a high-level biennial meeting involving Scottish Ministers, COSLA leaders and Ministers from the Department for Work and Pensions, to examine ways of developing and co-ordinating policies that will work in the best interests of Scotland.
- Work in 2009 to develop these principles in the context of the National Conversation, and present a range of policy options for tackling poverty and income inequality in the event of additional fiscal autonomy or independence.
- Encourage, with our partners, local DWP officials to engage in each of Scotland's Community Planning Partnerships, in line with current best practice.
v. Supporting partners and engaging wider society
We can only deliver significant and lasting improvements to the lives of those experiencing poverty through collective action with all parts of Scottish society playing a role. The Scottish Government is committed to supporting our partners in local government and the public sector, but also wider civic society in Scotland, to reduce poverty and income inequality in Scotland.
Support for Community Planning Partnerships
Community Planning is a process which helps public agencies to work together with the community to plan and deliver better services which make a real difference to people's lives. Community Planning Partnerships (CPPs) have been formed across the country to deliver these benefits. Those partnerships were clear in their response to our consultation that the Scottish Government needs to provide them with more guidance on how to tackle poverty and income inequality in their local areas through a Framework such as this one.
The Scottish Government will respond to CPP requests that we provide more information and support to their local level planning efforts, by:
- Developing an online Tackling Poverty Toolkit in 2009 which will set out the national context within which CPPs' work will take place; a series of policy papers setting out the evidence of what works; links to the available data and guidance on interpretation; and a library of best practice examples of successful interventions.
- Establishing learning networks in 2009 to support CPPs to access expertise on community regeneration and tackling poverty.
- Working with the EHRC and COSLA to provide guidance to CPPs on equalities issues, including legal obligations and Equality Impact Assessments.
- Recent guidance for CPPs on preparing Single Outcome Agreements (SOAs) provides advice to CPPs on how this Framework and its sister documents can support their SOAs, and stresses the importance of tracking inequalities as a cross-cutting theme for all SOAs.
Learning from our neighbours
Respondents to our consultation were impressed by those countries which have managed to combine economic growth with lower levels of poverty and income inequality, for example Finland and Norway. The Scottish Government will develop stronger links with all levels of government and public services in these countries and use the resultant learning. The Scottish Government and COSLA will also do more to engage with and learn from the European Anti-Poverty Network.
Supporting the Third Sector
The Third Sector can play an important role in connecting with individuals and communities. Social enterprise can create opportunities for employment and income in areas where the private sector might not choose to operate. The third sector is a key partner, bringing experience of practical issues and multiple and complex need to the design of public services, particularly through their contribution to Community Planning.
We will provide training and funding to support the Third Sector in their contribution to tackling poverty and income inequality. To support the creation of the right environment for growth the Scottish Government has announced that it will:
- Provide training for public sector purchasers to help open all markets to the third sector;
- Invest £30 million in the Scottish Investment Fund. This will support enterprise in the Third Sector through strategic investment in individual organisations in combination with integral business support and management development.
- Invest in a £12 million Third Sector Enterprise Fund aimed at building capacity, capability and financial sustainability in the Third Sector.
- Provide funding, through Firstport, for social entrepreneurs to establish new social enterprises.
- The Scottish Government will continue to support the social economy to increase access to affordable credit and other services offered by Third Sector financial services.
- The Scottish Government and COSLA will work to ensure the structured engagement of the community and voluntary sector with local authorities and the Scottish Government. This will complement existing work to improve the engagement of Third Sector organisations with CPPs.
It is critical that the Third Sector contribution to Community Planning is strengthened - that its voice is heard as SOAs are developed.
We are working with the Third Sector to identify ways we can provide greater support to communities, allowing them to make change happen on their own terms. By harnessing their energy and creativity to identify solutions to local challenges and by giving them responsibility for delivering that change, we can make a lasting impact on poverty and income inequality.
- The Scottish Government and COSLA will publish a community empowerment action plan by April 2009, building on the learning from the use of the National Standards for Community Engagement.
The Private Sector
The Private Sector must play a key role if we are to successfully reduce income inequalities and tackle poverty in Scotland. Regeneration and economic development are dependent on the contribution of this sector.
Private Sector involvement in CPPs must be strengthened. This will allow perspectives and experiences from this sector of the community to be more widely heard than they have been in the past.
- The Scottish Government will seek to further engage the Private Sector in delivering Solidarity at the national level through the work of the National Economic Forum in 2009.
- COSLA will work in 2009 with national representative business organisations to investigate how businesses can be better engaged in the CPP process in all parts of Scotland.
The success of the Framework should be judged by the extent to which it influences investment decisions and action in all parts of the public sector in Scotland, and engages with and supports action by other parts of Scottish society.
Information on becoming involved in local CPPs for individuals, businesses and the Third Sector can be found at http://www.scotland.gov.uk/library5/localgov/cpsg-00.asp and at Scotland's Community Planning website: http://www.improvementservice.org.uk/community-planning
Citizens in Scotland will be able to track progress on poverty and inequality at a national level through the Scotland Performs website: http://www.scotland.gov.uk/About/scotPerforms/performance
The Scottish Government will consider with COSLA how The Poverty Alliance's National Forum can be best used to inform the national debate on progress with poverty and inequality.
National Outcomes and Indicators
Progress against the National Outcomes we have agreed with our partners, for example:
"We have tackled the significant inequalities in Scottish society."
can be tracked through our basket of National Indicators, and local indicators adopted by CPPs. Those National Indicators include the following, which are relevant to efforts to tackle poverty and income inequality in Scotland:
- Improve people's perceptions of the quality of public services delivered;
- Increase the proportion of school leavers (from Scottish publicly funded schools) in positive and sustained destinations ( FE, HE, employment or training);
- Increase the proportion of schools receiving positive inspection reports;
- Reduce the number of working age people with severe literacy and numeracy problems;
- Decrease the proportion of individuals living in poverty;
- 60% of school children in primary 1 will have no signs of dental disease by 2010;
- Increase the proportion of pre-school centres receiving positive inspection reports;
- Increase the social economy turnover;
- Increase the average score of adults on the Warwick-Edinburgh Mental Wellbeing scale by 2011;
- Increase Healthy Life Expectancy at birth in the most deprived areas;
- Reduce alcohol related hospital admissions by 2011;
- Reduce mortality from coronary heart disease among the under 75s in deprived areas;
- All unintentionally homeless households will be entitled to settled accommodation by 2012;
- Reduce overall reconviction rates by two percentage points by 2011;
- Increase the rate of new housebuilding.
Local partners have also developed a range of relevant local indicators as part of the first Single Outcome Agreement process. These include:
- Reduce work-related benefit claimants per 1,000 of the population;
- Reduce under-16-year-old pregnancies per 1,000;
- Increase percentage of adults rating neighbourhood as 'very good' or 'fairly good';
- Increase percentage of social housing above quality standard;
- Reduce percentage of children in benefit dependent households;
- Increase number of affordable homes.
These indicators and others give us a strong platform from which to observe progress and drive change. Reducing poverty and inequality will support a narrowing of the gap in outcomes between the poorest and most affluent members of society across a range of areas - including health and education - but it must also be driven by a narrowing of that gap. We will therefore continue to work with our partners, including the Improvement Service, to develop indicators which drive improvements fastest for our most deprived citizens.
Information on each Local Authority's Single Outcome Agreement can be found at: http://www.improvementservice.org.uk/component/option,com_docman/Itemid,43/task,cat_view/gid,561/
Copies of this Framework are available from:
Scottish Government - Tackling Poverty Team
Area 2F (South)
0131 244 0064 | <urn:uuid:a82eebef-3e2a-4e3a-a903-be607c59d2d8> | CC-MAIN-2019-47 | https://www.gov.scot/publications/achieving-potential-framework-tackle-poverty-income-inequality-scotland/pages/5/ | s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496668716.69/warc/CC-MAIN-20191116005339-20191116033339-00377.warc.gz | en | 0.945879 | 7,270 | 2.65625 | 3 |
If you are looking for eco-friendly materials, there are many options available. With so many brands marketing their materials as sustainable and renewable, I always wonder which materials are genuinely the most eco-friendly.
An eco-friendly material is one that is not harmful to the environment. I’ve done some research and created a handy reference guide to the most eco-friendly materials available.
1. Bamboo Fiber
Bamboo is considered one of the most renewable resources on the planet due to its ability to grow rapidly in various climates throughout the world and its natural antibacterial properties, which remove the need for chemicals or pesticides.
As a textile, bamboo is created from the pulp of bamboo grass. These natural fibers are then mechanically combed out and spun into yarn. As a result, you have a fabric with a very soft feel similar to linen.
Bamboo textiles can also be created with an intense chemical process in which bamboo leaves and fibers are cooked with strong chemicals and then hardened into fine strands and spun into yarn. These strands result in a very soft and silky fabric known as “rayon from bamboo”. This process is not very eco-friendly as it involves the use of toxic chemicals that can be harmful to the environment when not disposed of properly.
Some manufacturers have opted for a less harmful chemical process called “lyocell”, in which the less-toxic chemicals are recovered, recycled and used again.
Bamboo-derived textiles may not be the perfect option for a fully sustainable and eco-friendly material, but no fabric is perfect. Polyesters and acrylics are fossil-fuel based and require plenty of water, chemicals, and energy to produce. Cotton is available in both conventional and organic versions, but both are water- and chemical-intensive crops.
2. Bamboo Hardwood
When compared to sustainable European hardwoods, bamboo is fast-growing and will reach maturity in five years, whereas other woods can take up to 25 years. Bamboo is also self-regenerating and will replant itself after harvesting.
Most bamboo is grown and manufactured in China, which can reduce its carbon footprint. It is commonly thought that manufacturing bamboo in China and then shipping it west to Europe or North America compromises its eco-friendly properties. This can be a slight disadvantage, but when hardwood is felled it is shipped to China to be manufactured, then shipped back west to be sold, meaning it travels twice as much as bamboo.
When looking for truly eco-friendly bamboo, make sure the material is treated according to environmental standards. The Forest Stewardship Council (FSC) certification ensures that the harvest of bamboo and timber preserves the natural biodiversity and ecology of the forest.
3. Cork
Cork has naturally buoyant and waterproof properties. As a naturally harvested material, only the bark of the cork tree is taken, which means the tree can keep on living and providing oxygen to the environment.
Some cork trees can live up to 300 years! One cork tree is enough to supply many generations, as the tree's bark can be harvested every 9 years after the age of 25. With cork sequestering CO2 from the atmosphere, every time the bark is harvested the tree absorbs more CO2 to aid in its regeneration. Cork trees that are regularly harvested can store 3-5 times more CO2 than unharvested trees. This makes cork an excellent sustainable and renewable resource.
Cork is a biodegradable and easily recyclable resource. During the manufacturing process, any cork waste that is produced can be recycled to make other cork products and will not pollute the environment.
Cork isn’t just used to top wine bottles! In addition to household items and personal accessories, cork can be made into flooring, wall coverings and adapted into textiles for fashion. It is one of the best alternatives to leather and plastics.
4. Teak
Teak is a hardwood from Southeast Asia that is a sustainable timber commonly used for both indoor and outdoor furniture, along with patios and decking. Teak is prized over pine and oak due to the natural oils and rubber found inside the tight grain of the wood. Because of these natural oils, teak has naturally weather-resistant properties that prevent the wood from dry rotting and protect it from potentially damaging parasites.
In Indonesia, where teak is produced on the island of Java, the government has strict regulations for teak grown on plantations. Only a predetermined number of trees can be felled each year, with the requirement that one tree is replanted for each one lost.
Teak is known for its durability and elegance, but it comes with an expensive price tag. In comparison to pine and oak, teak will save you from annual waterproofing and upkeep, lasting for many years.
5. Bioplastic Compostables
Bioplastics are materials made from sugar cane fibers, corn, and potato starch. They are an eco-friendly alternative to petroleum-based plastics, which can take hundreds of years to biodegrade in the environment. Since bioplastics are generated from sustainable sources, they break down naturally instead of adding more plastic waste to the environment.
Bioplastic compostable products will look and feel just like regular plastic. Products such as compostable cutlery, tableware, straws, cups, plastic bags, and packaging are commonly made from bioplastics. Some multinational corporations, such as Coca-Cola and Heinz, have opted to produce a percentage of their packaging using bioplastics in an effort to be more environmentally friendly. This may be the right step forward.
6. Hemp
Hemp is an eco-friendly material that is replacing plastic-based materials for both clothing and home decor. It is grown free of pesticides or fertilizers and will yield more than 250% more fiber than cotton and 500% more pulp fiber than forest wood per acre. Cotton uses 50% more water than hemp in order to be turned into fabric, which makes hemp a much more sustainable option.
As a fabric, hemp is more porous than cotton and allows your skin to breathe better. It is resistant to mildew and will soften with age, making it an ideal fabric for clothing and linens. Hemp farming today is very limited, so it can be difficult to find 100% hemp clothing, but mixtures of hemp and cotton can still make high-quality apparel.
In addition to fabrics, hemp is used to make foods such as oils, seeds, protein powder, milk, butter, and even beer! Hemp comes from that infamous green plant family, but because of stigma it is often overlooked as a sustainable and renewable resource.
7. Organic Cotton
The “fabric of our lives” is used for nearly everything we come in contact with on a daily basis, from our clothes and bedsheets to even some of the food we eat. Growing conventional cotton can have a huge environmental impact due to the use of toxic pesticides and fertilizers. Organic cotton is grown without these harmful chemicals and requires less water, so the environmental impact of organic cotton farming is much lower.
Any cotton sold as organic in the United States must meet strict governmental regulations. Unfortunately, less than 1% of all cotton grown is organic but today more and more brands are committing to using organic cotton during production; the retailer H&M is the world’s largest buyer of organic cotton.
When looking for an eco-friendly organic cotton option, avoid dyes and go for textiles in the natural shades that cotton is grown in: light brown, cream, and pale green. Organic cotton can cost more than conventional cotton due to its farming and manufacturing processes, but it may be worth the extra price to help promote a sustainable and eco-friendly industry.
8. Soybean Fabric
Soybean fabric is a renewable resource as it is made from a by-product of soy foods such as tofu and soybean oil. It has a soft texture comparable to silk when it drapes and can be used for many textiles in the home. It’s also a cruelty-free alternative to silk and cashmere production, which both involve the use of animals.
Soy fabric does undergo an extensive process to turn the plant into a fabric. This requires breaking down the proteins in the soybean with heat and chemicals, which are then filtered into fibers and spun into long strands. A positive is that the chemicals in this process are recycled in a closed loop, but some can be harmful to the humans exposed to them.
Soybeans are used in many of the ingredients of the foods we eat today, with nearly 80% coming from GMO (genetically modified) soybeans. How the soybeans are grown is a factor in whether the fabric is environmentally friendly or not. GMO soy requires a large amount of water and pesticides for cultivation. If you want a more eco-friendly and sustainable option, look for fabric, garments, or yarn made from organic soy.
9. Recycled Glass
Recycled glass can be melted down into different forms of glass or glass fiber. When glass bottles arrive in the recycling facility, they are broken and crushed up into tiny pieces, sorted and cleaned, then prepared to be mixed with raw materials like sand, soda ash, and limestone. Combined with these raw materials, glass pieces are melted and molded into new glass bottles and jars.
Glass produced from recycled glass is melted at lower temperatures thus lowering energy requirements for production compared to glass produced directly from raw materials. Recycling glass also reduces the amount of glass waste that will end up in landfills.
Glass from food and beverage containers is 100% recyclable. Some states in the US even offer a small amount of money, usually 5-10 cents per bottle, as an incentive. Be aware that window glass, ovenware, Pyrex, and crystal are produced through a different process, so they are not as easily recyclable.
Glass can also be easily upcycled or reused in a creative way. Save your bottles and jars and turn them into reusable storage containers in the kitchen or garage. Some companies are even using recycled glass bottles to create mosaic-style countertops.
10. Recycled Paper
Recycling paper is one of the easiest ways to have a positive impact on the environment. It hasn’t always been popular as the idea of reducing the number of trees felled didn’t catch on until the late 20th century. It wasn’t until 1993 that more paper was recycled than thrown away.
Recycling paper products can have a large impact on the environment as it reduces the amount of “virgin paper” (paper that comes directly from trees) produced. For every ton (907 kg) of paper that is recycled, 17 trees are saved. This is enough to significantly reduce the amount of greenhouse gases in the atmosphere and keep paper out of our landfills.
It’s worth noting that the recycling process is not perfect. Through the process of removing ink, some recycling facilities end up dumping 22% of the weight of the recycled paper as sludge into the environment. Recycled paper also needs to be bleached, which can cause the release of toxic chemicals. Some (but not all) companies are opting for chlorine-free bleach in order to reduce their environmental impact.
Most curbside recycling programs allow you to recycle most paper, including newspaper, white office paper, and mixed-color paper. There are many eco-friendly household paper products that are made from recycled paper. Look for these options next time you are buying toilet paper, paper towels, or napkins. You can also reduce your use and reuse your paper when possible.
11. Recycled Polyester Plastic
PET stands for Polyethylene Terephthalate, which is a plastic resin and a form of polyester. This is identified as #1 PET on the resin identification code (RIC) classification. The number and initials are written inside the recycling symbol with “chasing arrows” and can be found on product labels or directly molded into the plastic.
Most PET plastic containers are used to package food, water, soft drinks, salad dressings, oil, toiletries, and cosmetics. PET is a popular choice for manufacturers because it is a safe, inexpensive, lightweight, durable, and most importantly a recyclable plastic.
After consumption, PET plastic is crushed and shredded into tiny flakes which are then reprocessed to make new PET bottles or packaging. They can also be spun into polyester fiber and made into textiles such as clothing, carpets, bags, and furnishings.
Unfortunately, only 25% of PET bottles in the United States are actually recycled. This number could change in the future, with many municipalities now offering curbside pickup for recyclable plastic. Keep in mind that PET plastic cannot be reused and is manufactured with the intent of single use only.
12. Felt
It’s not just for your children’s arts and crafts projects. Felt is a low-impact, eco-friendly and 100% biodegradable textile.
Wool felt is non-woven and made by condensing and pressing wool fibers together while they are wet. The result is a soft fabric with a fine and dense texture that is often used by artists and craftspeople. Wool felt is also used for its beauty in fashion and is a well-insulating fabric.
Synthetic felt can be made from recycled PET plastic bottles (see above) and turned into furniture or wall paneling with an excellent acoustic performance. Felt can also be made into a more rugged and durable material used in construction for roofing or siding on houses.
13. Reclaimed Wool
Reclaimed sheep’s wool is an intelligent technology that is gaining some steam in the fashion industry. During the manufacturing of sheep’s wool products, leftover scraps are collected and recycled. They are first sorted by color, cut into small pieces, and then pulled apart so the fibers can be re-spun into yarn and then weaved into fabric. The result is a fabric that is made from 100% recycled materials.
Reclaimed wool may not be commonplace in the fashion industry yet, but some small brands and manufacturers have taken a step towards sustainability and lowering the environmental impact of wool production.
14. Stainless Steel
Stainless steel is a long-lasting, durable and 100% recyclable material. The average stainless steel object contains nearly 60% recycled material and does not contain any potentially harmful chemicals, making it food-safe.
Stainless steel can be used for pipes in drinking water systems and is effective at preventing bacterial growth and minimizing corrosion. In addition to its shine and beauty, it serves as an eco-friendly alternative to many commonly used plastic materials and appliances in the kitchen such as refrigerators, storage containers, bins, and kitchenware.
A very popular alternative for portable food and drink containers, stainless steel products will stand the test of time. Just because a product is shiny metal does not mean it is stainless steel. Be sure to check with the brand to ensure that it is indeed stainless steel. Most high-quality brands will have the material listed on the product itself. Look for #304 or 18/8 food-grade stainless steel to make sure your product is food-safe.
15. Aluminum
Aluminum is one of the most eco-friendly and sustainable building materials and the most recycled industrial metal in the world. Often used as a beverage container, the average aluminum can contains 70% recycled metal and can be back in use within 6 weeks of entering a recycling plant.
Aluminum cans are also lightweight and can be stacked to ship efficiently, lowering carbon emissions throughout logistics and supply chains. Aluminum cans are produced in a “closed loop” recycling chain and can be recycled infinitely.
When compared to the reusable plastic bottle, an aluminum bottle will be more durable and last much longer. Do your homework and make sure your aluminum bottle has a non-toxic water-based interior coating and is BPA-free to prevent any toxic chemicals from leaching into your beverage.
16. BPA-Free Plastic
By now, most people are aware of the dangers of BPA in plastics. BPA, or bisphenol A, is a chemical that has been used to make certain hard plastics and resins since the 1960s. BPA is found in polycarbonate plastics and epoxy resins. These polycarbonate plastics are often used in food and beverage containers such as water bottles.
BPA has been linked to a number of health concerns, including altered hormone levels in children, brain and behavior problems, and even cancer. Recently, many state governments have taken action restricting the sale of products with BPA. Many consumers are also aware of this potential danger and choose safer options, so brands have followed suit.
If using plastic, especially for food or drink, it’s advised to stay away from BPA. Most companies that produce high-quality products will have “BPA-Free” listed as a feature. Do your research to be sure. Avoid plastics that are marked #3 or #7 according to the resin identification code (RIC), as these have the highest possibility of containing BPA. You can also avoid BPA by switching from plastic to an alternative material such as glass or stainless steel.
17. Recycled Rubber
Did you know rubber can actually be reclaimed and made into sidewalks? Old tires that were once destined for an eternity in the landfill are shredded and made into paneling that fits together to form a rubber sidewalk. Not only is it softer and more comfortable underfoot, but when growing trees are nearby, the rubber will flex and rise around the roots. This prevents tree removal due to roots tearing up concrete pathways. Reclaimed rubber can also be used for playground surfacing, sports surfaces, and outdoor interlocking floor tiles.
There are two forms of rubber: natural and synthetic. Natural rubber is harvested from certain plants as a liquid called latex. Synthetic rubber is derived from petroleum and goes through a chemical process during production. Natural rubber is non-toxic and the more eco-friendly option.
Recycling rubber saves energy and ultimately reduces greenhouse gas emissions. Recycling just 4 tires reduces CO2 by about 323 lbs (146 kg), which is equivalent to the emissions from about 18 gallons (68 L) of gasoline. Using recycled rubber in molded products produces a carbon footprint roughly 20 times smaller than that of virgin plastic resins.
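As a quick sanity check on the unit conversions above (the CO2 and gasoline figures themselves are quoted from the article, not re-derived here), using the standard factors 1 lb ≈ 0.4536 kg and 1 US gallon ≈ 3.785 L:

$$323\ \text{lb} \times 0.4536\ \tfrac{\text{kg}}{\text{lb}} \approx 146\ \text{kg}, \qquad 18\ \text{gal} \times 3.785\ \tfrac{\text{L}}{\text{gal}} \approx 68\ \text{L}$$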
18. Reclaimed Wood
Always keep your options open when purchasing new wood products. Instead of buying household items such as flooring and furniture made from virgin timber, opt for reclaimed wood as an eco-friendly alternative.
This option saves energy, keeps wood out of landfills and incinerators, and reduces timber harvesting, keeping more trees standing. Old wood can be reclaimed from many sources you may not think of, such as storm-damaged trees, old shipping pallets, unwanted furniture, and razed buildings.
In the past, reclaimed wood was mostly a choice for the eco-conscious and those willing to spend money. Now more than ever, people with a thrifty, DIY mindset are transforming old wood into new and creative uses that bring character into the home. Not only will this save you money, but it will also give you a sense of accomplishment if you decide to embark on a DIY reclaimed wood project.
19. Clay Brick
Clay brick is all-natural, made simply from water and clay from the earth, and it is completely recyclable. Old bricks can build new buildings, while crushed bricks can pave streets or provide natural mulch for soil. If a brick ends up in a landfill, it requires no special handling, releases no toxic chemicals into the environment, and can return naturally to the earth.
Clay brick is energy efficient, making it a heating and cooling system’s best friend. It absorbs and releases thermal energy: in the summer it soaks up heat, keeping a house cooler, and in the winter it holds internal heat longer to provide warmth.
As one of the oldest materials used by humans, clay brick is a durable and reliable building material lasting up to 100 years. It reduces the need for replacement and for additional resources during construction, and it has a lower environmental impact during production than metals and many other raw materials.
20. Eco-Friendly Paints and Finishes
VOCs (volatile organic compounds) are extremely hazardous and pose dangers to our health by off-gassing into the air. They can cause eye, nose, and throat irritation, headaches, dizziness, respiratory impairment, and even memory loss.
VOCs are found in many household products, including cleaners, paints, and finishes. If you can smell the chemicals, chances are the product contains VOCs. Be careful about your exposure, especially indoors, where ventilation is limited.
The good news is that many brands now produce low-VOC paints and finishes that are more eco-friendly. They emit a much lighter odor, minimizing irritation. These tend to be water- or oil-based paints that contain no chemical solvents and have no emissions during production. Some brands have even developed a paint made from milk protein, a super eco-friendly option!
If you plan to paint or finish any DIY projects in your home, go non-toxic and avoid exposure to VOCs by choosing an eco-friendly option.
Materials to Avoid
- Non-organic cotton – do you have a piece of clothing labeled 100% cotton? Check again. The average "100% cotton" item actually contains around 75% cotton; the remaining 25% is, you guessed it, a cocktail of chemicals, binders, and resins used in the manufacturing process. Look for clothing labeled "organic cotton," with at least 95% of the fibers organically grown.
- Acrylic fabric – a synthetic fabric whose production uses carcinogenic chemicals. Because of these toxic chemicals, it is neither biodegradable nor recyclable. Not an eco-friendly choice.
- Polyester fabric – virgin polyester production requires crude oil (petroleum), making it a non-renewable resource with a high environmental impact during production. As a fabric, polyester has been reported to pose health risks and can be harmful to your skin. The only positive is that polyester can be made by recycling PET plastic bottles.
- BPA plastic – as a toxic and harmful chemical (see above), BPA should be avoided. Watch out for plastics marked #3 or #7 under the resin identification code (RIC), as these have the highest likelihood of containing BPA.
These are some of the most eco-friendly materials that you will commonly see brands marketing as "good" for the environment. Be an educated and proactive consumer: always do your homework before purchasing to find out how a material or product is manufactured and exactly what its environmental impact is. I hope this guide has been informative and helps you make eco-friendly choices for the future and the betterment of our environment.
The knee joint is the largest and most complicated joint in the human body. Bony structures, the capsule, the menisci, and the ligaments provide static stability of the knee joint, while the muscles and tendons are responsible for its dynamic stabilization. The menisci are fibrocartilage structures that cover two-thirds of the tibial plateau joint surface. The main functions of the meniscus are load sharing and load transmission across the tibiofemoral joint, shock absorption, helping to nourish the cartilage by facilitating distribution of the joint fluid, and contributing to joint congruity by increasing joint stability and joint contact surface area. The menisci are frequently injured structures. The incidence of acute meniscal tears is 60 per 100,000, and they are more common in males. Trauma-related tears are common in patients under 30 years of age, whereas degenerative complex tears increase in patients over 30 years of age. There may be no significant history of trauma, especially in degenerative meniscus tears. Sports injuries are the most prominent cause of meniscal tears: soccer carries the greatest risk of creating a meniscus lesion, followed by athletics, American football, and skiing. Repair is indicated for peripheral tears, where the blood supply is good. For central tears, where there is no blood supply, the treatment is meniscectomy. In this review, we summarize the diagnosis, etiology, and treatment methods of meniscal tears.
- meniscus tears
The knee is a joint that is exposed to frequent injury during sports activities. Direct impacts, forced movements, or repetitive overload can cause anatomical damage. The menisci are formed from fibrocartilage and act as shock absorbers. Their main tasks are providing load transfer, increasing joint surface contact area and joint stability, and contributing to proprioception [1, 2]. The annual incidence of meniscal tears is 60–70 per 100,000 people. The most common pathology associated with meniscal tears is anterior cruciate ligament (ACL) rupture.
Today, in addition to professional athletes, many people participate in sports as a hobby. Increasing interest in sports with a high risk of injury, such as skiing, snowboarding, and mountain biking, has increased the frequency of traumatic meniscal tears [6, 7]. The decision-making process is difficult in professional athletes. Approximately 40% of all sports injuries involve the knee joint, and meniscus injuries account for 14.5% of these injuries. The most at-risk period is between the ages of 20 and 29. The male-to-female ratio of meniscus problems in sports injuries is 2–4:1 [8, 9]. The ratio of medial to lateral meniscus injuries across all age groups has been reported as 3:1. However, lateral meniscus tears are more common in young professional athletes. By age distribution, medial meniscus tears are more likely in athletes under 30 years of age and lateral tears in those over 30. In an epidemiological study of National Basketball Association (NBA) players, 87.8% of meniscal tears were isolated and 12.2% were associated with ligament injuries, most often of the ACL. Acute ACL injuries are more often associated with lateral meniscus tears, and chronic ACL injuries with medial meniscus tears. Body mass index (BMI) has been identified as a risk factor in professional basketball players; a BMI above 25 in particular has been reported to increase the risk of tears, more so in the lateral meniscus. Higher physical activity during play was also more strongly associated with the lateral meniscus. In an epidemiological study of athletic knee injuries, the distribution of 836 medial meniscus injuries by sport was examined: soccer accounted for 32.7%, skiing 22.4%, tennis 7.8%, handball 5.4%, and cycling 3.5%. Of 284 lateral meniscus injuries, 34.5% occurred in soccer, 19% in skiing, 9.8% in handball, 6.6% in tennis, and 3.5% in cycling. In gymnasts and dancers performing lateral movements, and in tennis and jogging, the risk of medial meniscus injury is reported to be greater. Most injuries occur during competition and are thought to be caused by a faulty warm-up or by overloading. The 10–19-year age group is the second most frequent period for lateral meniscus injuries in athletes; the rapid and variable physiology of the growth years is thought to increase meniscus injuries in this age group. Nowadays, with a better understanding of the biomechanics and functions of the meniscus, tissue preservation has become the mainstay of treatment. Exposure to high levels of physical activity at a relatively early age puts athletes at risk of degenerative arthrosis.
The diagnosis of a symptomatic meniscal tear can usually be suspected from the patient's history. Common complaints are pain on loading and during deep flexion, which typically begins after knee swelling or forced flexion. On physical examination, joint line tenderness, the McMurray test, and the Apley test are the most commonly used findings and tests. Magnetic resonance imaging (MRI) can diagnose approximately 95% of cases. Because MRI can also detect meniscal tears in asymptomatic individuals, treatment decisions should be made by combining the imaging with the patient's clinical findings, not from the MRI result alone. Many features should be taken into account when deciding on surgical treatment of meniscus tears; the factors that influence the choice of surgical technique include the patient's complaints, age, tear size and morphology, and associated pathologies.
Total meniscectomy was used extensively in the pre-arthroscopic era and caused many athletes to lose their sporting careers. Partial meniscectomy has also been shown to cause irreversible damage to the joint cartilage in the long term. Since the 1980s, the development of arthroscopic techniques and a better understanding of the meniscal blood supply, and thus of its healing potential, have led to the repair of suitable tears. Longitudinal tears, usually within the peripheral 25% of the meniscus, are suitable for repair in young, active people. With the understanding that the menisci are indispensable for knee health, the indications for repair, especially of lateral meniscus tears, have been expanded.
Initially, conventional suturing techniques were described as inside-out and outside-in repairs. With the variety of meniscal fixators (meniscus fixation devices) now available, all-inside repair has reduced the possibility of neurovascular injury and shortened operation times.
In biomechanical comparisons, however, conventional sutures have shown markedly superior strength to meniscal fixators in many studies.
When performing arthroscopic surgery, care should first be taken to protect the meniscus tissue. Accompanying lesions should be evaluated carefully, especially the frequently associated ACL problems, and all problems should be addressed together as part of a holistic approach to treatment. These injuries cause serious morbidity in the short term when not properly treated, and in the long term they may lead to degenerative changes in the knee joint resulting in osteoarthritis.
The treatment of meniscus injuries is therefore very important. Today it is understood that the meniscus should be preserved as much as possible, and current treatment methods are being applied and developed on the basis of this principle.
In this article, we aim to present the latest developments in diagnosis, treatment, and follow-up of meniscus injuries in the light of the literature.
2. Meniscus tear
Meniscus tears are the result of traumatic, degenerative, or congenital pathologies. Loads exceeding the normal endurance limit may result in a tear, and in degenerative menisci tears may occur even at normal loads. Traumatic tears usually occur in active people between 10 and 40 years of age, while degenerative tears generally occur in people over 40. Such degenerative tears are often associated with other degenerative changes in the cartilage and bone of the knee.
3. Management of treatment
The accelerated degenerative changes seen in meniscus-deficient knees, and the key role the menisci play in knee function, have led to a focus on preservation of the meniscus. As early as 1948, Fairbank showed that total meniscectomy accelerated radiological changes in the knee. This led to a shift toward partial meniscectomy.
There is no randomized controlled trial showing that arthroscopic meniscus repair has a long-term benefit for joint protection. However, good results to date suggest that this may reduce the incidence of early degenerative changes .
According to DeHaven, not all meniscus tears cause clinical symptoms. It has been shown that asymptomatic meniscus tears that remain stable and biomechanically functional can heal spontaneously.
Clinical studies have not established whether meniscal injury or articular cartilage damage develops first. A recent study by Christoforakis evaluated 497 consecutive knee arthroscopies in patients with meniscal tears. Complex and horizontal tears were found significantly more often in knees with Outerbridge grade III or IV articular cartilage damage, and these tear types were associated with greater joint damage than other types of tears. Nevertheless, the results do not answer whether the meniscus tear or the articular degeneration occurred first.
The general approach in young patients is to investigate actively with clinical and radiological examinations, including X-ray and MRI. If there is a tear, or one is strongly suspected, arthroscopy and meniscus-preserving surgery are recommended. Non-operative treatment is used in patients with suspected degenerative tears; it is well documented that debridement of a degenerated meniscus does not always provide long-term relief.
3.1 Non-operative treatment
Small peripheral tears in young patients can be treated without surgery; the difficulty is deciding whether the tear is stable or not. Weiss et al. retrospectively reviewed 3612 arthroscopic procedures for meniscus lesions. They found 80 (2.2%) meniscus tears that were considered stable and were left untreated. Only six of these patients presented for arthroscopy again because of meniscal symptoms. The authors suggest that stable vertical peripheral tears have a high healing potential.
Physiotherapy has been shown to benefit patients with degenerative meniscus tears. In a recently published randomized controlled trial, patients who underwent surgical debridement in addition to physiotherapy showed no better results than those who received physiotherapy alone.
Some patients with degenerative meniscal tears recover after a single corticosteroid injection into the knee. Corticosteroids are the first-line treatment for degenerative meniscus in the absence of locking symptoms.
Because of athletes' high functional expectations and the need for an early return to sports, however, non-operative treatment is preferred only in selected cases.
In the red-red zone, stable, incomplete longitudinal tears smaller than 1 cm may be suitable for conservative treatment. Bucket-handle, radial, parrot-beak, and oblique tears, as well as degenerative and complex tears, are not suitable for conservative methods. Conservative treatment can, however, be used as a temporary measure during the season in athletes, who are frequently asymptomatic.
Patients should be selected carefully when deciding on conservative treatment, and abnormal stresses should be avoided in the early period after the tear. The development of cartilage lesions in a young professional athlete with a radial tear of the lateral meniscus after aggressive rehabilitation aimed at an early return to sport shows that this approach is not harmless. It should be kept in mind that unrepaired meniscus tears may cause cartilage lesions through the mechanical problems they create, even when they are asymptomatic. Surgical treatment should therefore be prioritized, especially in athletes.
3.2 Operative management
3.2.1 Total meniscectomy
With the development of arthroscopic techniques and a better understanding of biomechanics, the importance of the meniscus has become clear, and treatment has shifted toward preservation of meniscal tissue. Total meniscectomy is rarely performed today.
3.2.2 Open repair
It was one of the first ways to repair meniscus tears . It is now used to fix the meniscus as part of the management of tibial plateau fractures.
3.2.3 Arthroscopic repair
The high expectations and career concerns of athletes have made meniscus repair even more important. Tears in the red-red zone can often be repaired successfully because of the region's good blood supply. Discussion about repairs in the red-white zone is still ongoing; in one study, acceptable midterm and long-term results after repair of red-white zone tears in 22 athletes were promising.
When deciding on meniscus repair in professional athletes, the likelihood of healing must be taken into account, with a target of about 90% success. Considering the possible risks, caution is warranted when repairing red-white zone tears in athletes. Repair of white-white zone tears is sometimes considered an indication today, but arthroscopic repair in this zone should not be considered in athletes.
A meniscal tear is present in 60% of patients with ACL rupture. When the ligament is not reconstructed, the meniscal tear does not heal and becomes more complex. For this reason, repair should be performed early and in the same session as the ligament reconstruction.
Although it is accepted that the repair site improves within about 6–8 weeks, the process actually lasts longer, and athletes cannot return to competitive activities before 3 months. As stated in Forriol's study, healing of the repaired meniscus depends on two basic elements: extrinsic blood circulation and the intrinsic repair capacity provided by the synovial fluid and the fibrocartilage itself. Histological studies of meniscus repair are based on animal experiments and cannot be fully extrapolated to the human repair process; therefore, the relationship between tissue healing and return to activity is based mostly on clinical observation.
Success rates after repair vary. Pujol et al. reported success rates of between 5 and 43% for meniscus repair in basketball players. According to Stein et al., at 8-year follow-up the rate of return to pre-injury activity was 96.2% in the repair group and 50% in the meniscectomy group. Paxton et al. found a failure rate of 3.7% after meniscectomy and 20.7% after repair; despite the higher reoperation rate, better long-term clinical results were reported for meniscus repair.
A current meta-analysis comparing meniscectomy and repair likewise reports that repair is characterized by better functional scores and lower failure rates. Reoperation depends not only on the technique but also on the skill of the surgeon, the tear itself, the age of the athlete, the level of activity, and the rehabilitation program applied. In a study evaluating the results of repair in athletes, failure was reported in 36.4% of medial meniscus repairs and 5.6% of lateral meniscus repairs.
Reoperation rates are higher for medial meniscus repairs. This is attributed to the lower mobility of the medial meniscus and to the greater load on the medial compartment; late repair of medial tears has also been implicated as a cause of failure.
One study of 42 elite athletes in which meniscus repair was recommended aggressively reported a 24% failure rate after repair; 67% of the cases were medial and 33% lateral meniscus tears, with a mean follow-up of 8.5 years.
The success of repair of complete radial tears of the lateral meniscus is low. However, in the studies of Haklar et al., successful results were obtained in approximately half of the patients, with return to sports. Nevertheless, it should be discussed with these athletes that they may be candidates for meniscus transplantation in the future.
The surgeon must also make every effort to repair radial root tears of the medial or lateral meniscus in athletes. If the circumferential fibers are completely ruptured and repair is not performed, the meniscus becomes nonfunctional. Therefore, primary repair of complete radial tears should be the first aim, especially in young athletes.
Radial tears in the posterior horn of the meniscus have a better prognosis because of the region's blood supply.
The failure to achieve consistently successful results with today's repair techniques has led to the search for new methods. The success of repair of meniscus tears combined with ACL reconstruction is thought to be due to the effect of growth factors and multipotent cells released from the bone marrow. Similarly, synovial abrasion, trephination, mechanical stimulation, fibrin clot, and platelet-rich plasma (PRP) applications are all aimed at the same goal.
The growth factors released after mechanical stimulation and trephination contribute positively to meniscus healing; Ochi et al. showed that these mediators reached their highest level in the joint 14 days after mechanical stimulation.
Trephination can be used successfully for complete tears of the posterior horn of the lateral meniscus or for complete longitudinal tears of less than 1 cm. Successful results with trephination of vertical, peripheral, nondegenerative tears are reported in the literature.
In a recent study of the effect of PRP on meniscus repairs, no significant difference was found in functional scores. Haklar et al. used microfracture to create an effect similar to that seen with ACL reconstruction, so that multipotent cells would also contribute positively to healing at the repair site.
Studies have shown that smoking has a negative effect on the results of meniscus repair .
For successful results, it is important to combine vertical mattress sutures, placed from inside to outside whenever possible, with the microfracture method.
The presence of opposing views in the literature shows that there is still no consensus on rehabilitation and return to sports after repair. With a conservative approach, return to sports takes as long as 3–6 months, whereas with an aggressive approach it can be as short as 10 weeks. While limited, conservative rehabilitation has traditionally been recommended until the meniscus has healed, recent biomechanical studies report that early weight-bearing is not harmful. Animal experiments have even shown that blood flow to the repair site increases with mobilization.
In a randomized controlled trial by Lind et al., functional scores were evaluated together with MRI and arthroscopy; the failure rate was 28% in the restricted rehabilitation group and 36% in the unrestricted rehabilitation group.
As a result, we can say that the trend toward accelerated rehabilitation in the current studies is promising. In practice, the location of the tear, its size, the quality of the meniscus, and the stability of the repair affect the rehabilitation to be applied to the athlete . Neuromuscular control is very important in current rehabilitation . The individual needs and sports-specific approaches of the athlete should not be ignored in rehabilitation .
Meniscal tears in young athletes pose great challenges for the orthopedic surgeon. A high activity level and a long career expectancy mean that repair should be attempted whenever conditions allow, and the higher healing potential of young athletes compared with adults is an important advantage.
The orthopedic surgeon may be pressed to let the athlete or the club guide treatment planning, and the athlete's desire for an early return to sports can put pressure on the physician. When accompanying ligamentous injuries are treated, the subsequent rehabilitation period gives the meniscus repair the time it needs to heal. With isolated meniscus tears, however, the expectation of an early return to sports may push the physician toward meniscectomy. The right decision is to give priority to anatomical and functional meniscus repair, taking the athlete's expectations into account while remaining unaffected by such pressures.
3.2.4 Meniscal rasping
Meniscal rasping is used to clean the torn edges of the meniscus to stimulate bleeding. It is indicated in patients with stable, longitudinal tears in the vascular region of the meniscus. In the case of unstable knee or avascular region ruptures, this treatment is not appropriate.
3.2.5 Meniscal suturing
Tears in the red-red zone or red-white zone can be repaired. Traditionally, longitudinal tears are the most suitable for suturing and healing. The most important condition for good healing is a stable knee; meniscus repair in an unstable knee results in treatment failure.
In contrast, a stable knee with normal kinematics does not apply unnecessary shear force to the meniscus repair. Recently, positive results have also been reported for the repair of full-thickness radial tears. These results have not been confirmed in randomized controlled trials, but case reports appear positive. Repairs in the avascular region carry a risk of failure. Meniscus repair performed together with ACL reconstruction has shown better healing rates than repair in knees with an intact, stable ACL.
3.2.6 Meniscal suturing techniques
Various techniques for the repair of meniscus have been described.
3.2.6.1 Outside-in meniscal suturing techniques
This was the first arthroscopic repair technique described, and it is now the least used method. It is suitable for tears in the anterior and middle thirds of the meniscus; repair of the posterior third is not possible with this technique.
The most important advantages of the outside-in repair method are that it provides easy access to tears of the anterior third, which are difficult to reach by other methods, and that it does not require an additional posteromedial or posterolateral incision to protect the neurovascular bundle. Its most important disadvantage is the difficulty of reaching tears extending into the posterior third.
3.2.6.2 Inside-out meniscal suturing techniques
In this technique, sutures are placed with needles passed through a single- or double-lumen, specially angled cannula. It can be applied to tears in any region but is most suitable for tears in the posterior and middle thirds. With this method, which is accepted as the gold standard in meniscus repair, the desired number and type of sutures can be placed easily in any region of the meniscus.
The most important disadvantages of the method are the need for a second posteromedial or posterolateral incision to prevent the needles exiting the capsule from causing neurovascular injury, the need for an experienced assistant, and the need for special instrumentation.
Repair of tears near the posterior insertion of the meniscus is difficult and dangerous with the inside-out technique; for this type of tear, Morgan described the all-inside suturing technique.
4. Meniscus fixators
Implants called "meniscus fixators" have been developed because of the difficulties of suturing techniques, which in some cases require additional incisions and carry a risk of neurovascular complications. These implants, manufactured as arrows, hooks, anchors, screws, or staples, are either biodegradable or permanent.
The most important advantage of the fixators is that they are technically very easy to use. Additional advantages include a very low rate of neurovascular complications, no need for additional incisions, access to meniscus tears in hard-to-reach areas, "all-inside" repair, no need for an assistant, and no need for arthroscopic knots. There is generally no problem visualizing the lateral compartment, although medial repair in very narrow knees can be difficult.
However, the fixators have serious disadvantages. Their mechanical strength is only one-half to one-third that of a vertical suture.
Another problem with meniscus fixators is the risk that rigid implants will damage the articular cartilage. This problem arises especially with implants whose heads protrude and are not fully embedded in the meniscus body.
5. Methods for improving the healing
Methods for improving healing in tears extending to the nonvascular area have been described. Some authors recommend applying one or more of these methods in all isolated tears, regardless of the area in which they are located. These methods are described below.
5.1 Fibrin clot technique
The patient's venous blood is stirred with a glass rod, and the resulting paste-like clot is placed between the edges of the tear. Since Arnoczky demonstrated in dogs that such a clot contains chemotactic and mitogenic factors and has a positive effect on healing, this technique has also been applied in humans.
5.2 Trephination technique
This method is based on the principle of opening radial tunnels in the meniscus body so that the peripheral vascular structures can reach the avascular region. Zhang et al. showed that trephination combined with suturing was more effective than suturing alone for avascular tears of the goat meniscus [68, 69].
5.3 Synovial abrasion
It is based on the principle that rasping the synovial tissue around the tear with a curette provokes a hemorrhagic and inflammatory response that contributes to the healing process.
5.4 Synovial flap transfer
Animal experiments have shown that better repair tissue forms when a vascularized, pedicled flap of synovium is interposed in the tear area. However, this technique has not been widely used.
5.5 Tissue adhesives
An ideal tissue adhesive should be tissue-compatible and biodegradable, bond well, cause minimal tissue reaction, and be affordable.
The tissue adhesives currently used in clinical practice are of limited use because none of them combines all of these features.
5.6 Growth factors in meniscus repair
It is known that fibrin clots placed in meniscus tears increase the healing potential of these lesions. Meniscal fibrochondrocytes have been shown to produce matrix and to proliferate when exposed to the mitogenic and chemotactic factors present in a wound hematoma. In fibrochondrocyte cell culture, platelet-derived growth factor (PDGF) has been shown to stimulate the proliferation of these cells.
Researchers showed that PDGF alone could not initiate meniscus repair in the central region of the meniscus .
The effect of endothelial cell growth factor (ECGF) on the healing potential of meniscal injuries has also been investigated and was reported to be small.
6. Rehabilitation in patients with meniscus repair
The debate in the literature concerns the rehabilitation protocols that should be applied after isolated meniscus repair. There is no consensus on knee motion, weight-bearing, brace use, or return to sports. More conservative protocols involve 4–6 weeks of partial weight-bearing, knee motion increased gradually under brace control, and a 6-month ban on deep squatting and sports. In contrast, aggressive protocols recommend immediate weight-bearing, unrestricted knee motion, and return to sports as soon as muscle strength is regained, as long as the patient can tolerate it.
In 95 patients treated with either aggressive or conservative protocols, there was no difference in failure rates. That study allowed full range of knee motion and permitted return to sports once pain and effusion had resolved. Since rehabilitation is not the only factor affecting the success of repair, the results of different series are difficult to compare. The generally accepted opinion is that rehabilitation after repair using only meniscus fixators should be somewhat more conservative.
7. Meniscus scaffolds
Scaffolds can be used as salvage interventions for irreparable meniscus tears and in athletes who have undergone segmental meniscectomy. The porous, absorbable structure should support the formation of meniscus-like tissue while providing adequate biomechanical strength for the joint.
In a European multicenter study, 52 patients who had undergone partial meniscectomy received a polyurethane scaffold. MRI was performed at 3 months in 81.4% of patients, and arthroscopic evaluation at 12 months showed integration of the scaffold with native meniscus tissue in 97.7% of cases. Zaffagnini et al. applied a scaffold in 43 patients after lateral meniscectomy. Functional improvement was evident by the sixth postoperative month, and by the twelfth month knee swelling and fatigue had decreased to optimal levels. At 24-month follow-up, 58% of the cases had returned to their pre-injury activity level, and patient satisfaction was 95%.
However, it is recommended that full weight-bearing be avoided for 6–8 weeks after meniscus scaffold application. This causes muscle atrophy, especially in athletes, and rehabilitation is often inadequate to prevent it.
Cell-seeded scaffolds have now been introduced, and the benefit of cell-free scaffolds has been questioned. The factors affecting the success of the procedure include the chronicity of the injury, body mass index, and other accompanying knee problems. Long-term studies of scaffold applications, especially in athletes, are needed.
8. Meniscus transplantation
Meniscus transplantation has been proposed to prevent the development of arthrosis in young patients whose meniscus has been completely removed and who have no axial malalignment or arthritic changes. The structures used for meniscus replacement in experimental and clinical studies include autografts, allografts, xenografts, synthetic polymer implants, and carbon fiber and polyurethane implants.
It remains doubtful, however, whether the structures used for meniscus replacement can prevent the long-term development of arthritis in the knee.
8.1 Allograft transplantation
Allograft transplantation is applied in athletes with functional deficiency and pain after subtotal or total meniscectomy. After meniscectomy, chondral lesions develop early, especially in the lateral compartment under the influence of abnormal load distribution. Rehabilitation of an athlete who develops a chondral lesion is more difficult, and arthrosis frequently develops in the late period. For transplantation to succeed, it is important that the articular cartilage surface is smooth, stable, and close to normal, and that the BMI is below 30. In a recent meta-analysis, good and excellent results were reported in 84% of cases after transplantation.
In another recent study of 12 professional footballers, successful results were reported in 92% of cases after transplantation, and at 36 months 75% were reported to be continuing their professional sports careers.
Studies and discussions on transplantation still continue, with short-term to midterm results being positive . There is a rare risk of infection . The delay in returning to sports due to the long healing process is the biggest obstacle to the technique. Currently, randomized controlled long-term studies are needed .
It should be kept in mind that, in athletes who have undergone meniscectomy during their careers and who are symptomatic, this intervention can be applied, or transplantation postponed, until after their professional careers.
Meniscus injuries constitute a large part of the workload of orthopedic surgeons. Current management has progressed toward preservation of the meniscus. Although much progress has been made in meniscus transplantation, it has still not become a routine procedure.
In young athletes, greater efforts should be made to preserve the meniscus, while in a professional athlete long-term treatments may be postponed until the end of the career. Radial tears of the lateral meniscus body and anterior junction are particularly important in athletes and need to be treated early. In complete radial tears, repair should be attempted despite the lower healing rate, but it should be noted that these patients may become transplant candidates later on.
Sushi Terminology and Pronunciation Guide
Over the years we have tried to make this the most comprehensive list of terminology for foods one might find at a sushi restaurant. While we have worked hard, this list is by no means complete, and we would love to hear any suggestions for additions of items one might find in a Japanese restaurant. We have aimed to include items found in sushi, sashimi, courses, beverages, and accompaniments. You may notice that some of these items are links; click on these links to go to a page devoted to that particular item to learn more about it and view images. Please contact us with anything you would like us to include that we may have missed.
Abura Bouzu - ah-boo-rah boh-zoo) or Abura Sokomutsu (ah-boo-rah soh-koh-moo-tsoo) – This is escolar (oilfish), sometimes called Shiro Maguro, although it is not tuna and should not be confused with that fish. Bright white in color and quite fatty, this fish is not always easy to find. Due to its high levels of fatty esters, it may cause digestive issues in some individuals, and for that reason it has been prohibited in Japan since the 1970s. If your body can tolerate it, the creamy texture and clean taste can be quite appealing. Note that while escolar is sometimes sold under the name Abura Bouzu, Abura Bouzu proper is in fact a different fish, the "skilfish."
Aburage - ah-boo-rah-ah-geh)-Fried tofu pouches usually prepared by cooking in sweet cooking sake, shoyu, and dashi. Used in various dishes, in Miso Shiru and for Inari Zushi.
Aemono - ah-eh-moh-noh) -Vegetables (sometimes meats) mixed with a dressing or sauce.
Agari - ah-gah-ree) – A Japanese sushi-bar term for green tea.
Agemono - ah-geh-moh-noh) – Fried foods — either deep-fat fried or pan-fried.
Ahi - aaa-hee) – Yellowfin Tuna.
Aji - ah-jee) - Horse mackerel or Jack mackerel (less fishy tasting than Spanish mackerel). Purportedly this is not actually a mackerel but a member of the jack family. It is small – about 6" in length – and it is usually filleted and served marinated in vinegar.
Aji-no-moto - ah-jee-no-moh-toh) – Monosodium glutamate (MSG).
Aka miso - ah-ka-mee-soh) – Red soy bean paste.
Akagai - ah-ka-gah-ee) – Pepitona clam, red in colour, not always available.
Akami - ah-kah-me) – the leaner flesh of tuna from the sides of the fish. If you ask for ‘maguro’ at a restaurant you will get this cut.
Ama Ebi - ah-mah-eh-bee) – Sweet Shrimp, Red Prawns. Always served raw on nigirizushi. Sometimes served with the deep-fried shells of the shrimp. Eat the shells like you would crayfish.
An - ahn) – Sweetened puree of cooked red beans. Also called Anko, though not to be confused with monkfish, which is also called Anko; contextually the difference will be apparent to Japanese speakers.
Anago - ah-nah-goh) – Salt water eel (a type of conger eel) pre-cooked (boiled) and then grilled before serving, less rich than unagi (fresh water eel).
Ankimo - ahn-kee-moh) - Monkfish liver, usually served cold after being steamed or poached in sake.
Anko - ahn-koh)- Monkfish.
Aoyagi - ah-oh-yah-gee) – Round clam. Also called Hen Clam.
Awabi - ah-wah-bee) – abalone.
Ayu - ah-yoo) – Sweetfish. A small member of the trout family indigenous to Japan, usually grilled.
Azuki - ah-zoo-kee) – Small red beans used to make an. Azuki connotes uncooked form.
Beni shoga - beh-nee shoh-gah)- Red pickled ginger. Used for Inari Zushi, Futomaki, and Chirashizushi, but not for Nigirizushi.
Bonito - bo-nee-toh) – See Katsuo (kah-tsoo-oh).
Buri - boo-ree) – Yellowtail. Hamachi refers to the young yellowtail and Buri are the older ones.
Buri Toro - boo-ree toh-roh) – Fatty Yellowtail. The belly strip of the yellowtail. Incredibly rich with a nice buttery flavour.
Butaniku - boo-ta-nee-koo) – Pork. Buta means pig.
California Roll - maki) A California roll is an American-style roll created in California for the American palate. It usually consists of kamaboko (imitation crab meat) and avocado, sometimes including cucumber.
Chikuwa - chee-koo-wah) – Browned fish cake with a hole running through its length.
Chirashi-zushi - chee-ra-shee-zoo-shee) – translates as "scattered sushi", a bowl or box of sushi rice topped with a variety of sashimi.
Chutoro - choo-toh-roh) – The belly area of the tuna along the side of the fish between the Akami and the Otoro. Often preferred because it is fatty but not as fatty as Otoro.
Daikon - Dah-ee-kohn) – giant white radish, usually served grated as garnish for sashimi.
Dashi - dah-shee) – Basic soup and cooking stock usually made from, or from a combination of Iriko (dried Anchovies), Konbu (type of Kelp) and Katsuobushi (dried bonito flakes). However any form of stock can be called “dashi”.
Donburi - dohn-boo-ree) – A large bowl for noodle and rice dishes. Also refers specifically to a rice dish served in such a large bowl with with the main items placed on top of the rice, Examples include Tendon (Tenpura Donburi) and Unadon (Unagi Donburi).
Ebi - eh-bee) – Shrimp. Not the same as Sweet Shrimp, as Ebi is cooked, while Ami Ebi is served in raw form.
Edamame - eh-dah-mah-meh) – Young green soybeans served steamed and salted and usually still in the pod.
Fugu - foo-goo) – Fugu is puffer fish which is a delicacy, though its innards and blood contain extremely poisonous tetrodotoxin. In Japan only licensed fugu chefs are allowed to prepare fugu or puffer fish.
Fuki - foo-kee) – Fuki is a Japanese butterbur plant which contains a bitter substance called "fukinon" (a kind of ketone compound), but upon blanching fukinon is easily washed out from its petioles (edible parts) and is prepared for an excellent Japanese vegetable dish.
Futo-Maki - foo-toh-mah-kee) – Big, oversized rolls.
Gari - gah-ree) – Pickled ginger (the pink or off-white stuff) that comes along with Sushi.
Gobo - goh-boh) – Long, slender burdock root.
Gohan - goh-hahn) – Plain boiled rice.
Goma - goh-mah) – Sesame seeds.
Gunkan-maki - goon-kahn-mah-kee) – Battleship roll. This is where the maki is rolled to form a container for the liquid or soft neta. Used for oysters, uni, quail eggs, ikura, tobiko, etc.
Gyoza - gi-yoh-zah) – A filled wonton dumpling that has been either fried or boiled.
Ha-Gatsuo - ha gat-soo-oh) – Skipjack tuna. This meat is similar to bonito but is a lighter, pinker product.
Hamachi - hah-mah-chee) – Young yellowtail (amberjack); worth asking for if not on the menu.
Hamaguri - hah-mah-goo-ree) – Hard shell Clam. Includes American littlenecks and cherrystones.
Hamo - hah-moh) – Pike Conger Eel. Indigenous to Japan.
Hanakatsuo - hah-nah-kah-tsoo-oh) – Dried bonito fish, shaved or flaked. Usually sold in a bag. Also called Katsuobushi (bonito flakes).
Harusame - hah-roo-sah-meh) – Thin, transparent bean gelatin noodles.
Hashi - hah-shee) – Chopsticks. Also called O-Hashi.
Hatahata - hah-tah-hah-tah) – Sandfish. Indigenous to Northern Japan.
Hijiki - hee-jee-kee) - Black seaweed in tiny threads.
Hikari-mono - hee-kah-ree-mo-no) - A comprehensive term for all the shiny fish. However usually refers to the shiny oily fish, such as Aji, Iwashi, Sanma, Kohada.
Himo - hee-moh) - The fringe around the outer part of any clam.
Hirame - hee-rah-meh) – Generally speaking this name is used for many types of flat fish, specifically fluke or summer flounder. The name for winter flounder is really "karei" (kah-ray), but often restaurants do not discriminate between fluke or summer flounder when one asks for hirame. Some restaurants call halibut "hirame," however the actual Japanese word for halibut is "ohyo" (oh-yoh).
Hocho - hoh-choh) - General Japanese term for cooking knives. Can be classified as Traditional Japanese style (Wa-bocho) or Western style (yo-bocho)
Hokkigai - hohk-kee-gah-ee) - Surf Clam (also called Hokkyokugai). Sort of a thorn-shaped piece, with red coloring on one side.
Hotate-Gai - hoh-tah-teh-gah-ee) – Scallops.
Ika - ee-kah) – Squid. As sushi or sashimi the body is eaten raw and the tentacles are usually served parboiled then grilled or toasted.
Ikura - ee-koo-rah) – salmon roe. (FYI, Ikura means ‘How much?’ in Japanese) The word Ikura is shared with the Russian word “Ikra” meaning salmon roe.
Inada - ee-nah-dah) - Very young yellowtail.
Inari-Zushi - ee-nah-ree-zoo-shee) – [see an image] – Aburage stuffed with sushi rice.
Kaibashira - kah-ee-bah-shee-rah) – Large scallops, actually the adductor muscle of a giant clam, though often scallops are served; much like cooked scallops but more tender and sweeter. Kobashira are small scallops and, like kaibashira, may or may not come from scallops or other bivalves.
Kajiki - kah-jee-kee) – Billfish including Swordfish and Marlins. Swordfish specifically is called Me-Kajiki or Kajiki-Maguro.
Kaki - kah-kee) – Oysters.
Kamaboko - kah-mah-boh-ko) – Imitation crab meat (also called surimi) usually made from pollack. Generally used in California rolls and other maki, it’s not the same thing as "soft shell crab."
Kampyo - kahn-piyoh) – Dried gourd. Unprepared is a light tan color. Prepared it’s a translucent brown. It comes in long strips, shaped like fettuccine.
Kani - kah-nee) – Crab meat. The real stuff. Always served cooked, much better if cooked fresh but usually cooked and then frozen.
Kanpachi - kahn-pa-chi) – Greater Amberjack. This is similar to hamachi, but this is actually a different fish (and is not Yellowtail or the Japanese Amberjack).
Karasu Garei - kah-rah-soo gah-ray) – Literally translated this means "cow flounder" and is the term for Atlantic halibut.
Karei - kah-reh-ee) – Winter flounder.
Katsuo - kah-tsoo-oh) – Bonito. It is usually found in sushi bars on the West Coast because it lives in the Pacific Ocean, and doesn’t freeze very well. Sometimes confused with Skipjack Tuna, which is incorrect as Skipjack Tuna is called "ha-gatsuo."
Katsuobushi - kah-tsoo-oh boo-shi) - Bonito flakes. Smoked and dried blocks of skipjack tuna (katsuo) that are shaved and uses usually to make dashi, or stock.
Kazunoko - kah-zoo-noh-koh) – herring roe, usually served marinated in sake, broth, and soy sauce, sometimes served raw, kazunoko konbu.
Kohada - koh-hah-dah) – Japanese shad (or young punctatus, it’s Latin species name). Kohada is the name when marinated and used as sushi neta. Prior to this the fish is called Konoshiro (ko-no-shee-roh).
Kuro goma - koo-roh-goh-mah) – Black sesame seeds.
Maguro - mah-goo-roh) – Tuna, which is sold as different cuts for the consumer, listed below in order of increasing fattiness:
- Akami (ah-kah-me) - the leaner flesh from the sides of the fish. If you ask for 'maguro' at a restaurant you will get this cut.
- Chu toro (choo-toh-roh) - The belly area of the tuna along the side of the fish between the Akami and the Otoro. Often preferred because it is fatty but not as fatty as Otoro.
- O toro (oh-toh-roh) - The fattiest portion of the tuna, found on the underside of the fish.
- Toro (toh-roh) is the generic term for the fatty part of the tuna (either chutoro or otoro) versus the 'akami' portion of the cut.
Maki-zushi - mah-kee-zoo-shee) – The rice and seaweed rolls with fish and/or vegetables. Most maki places the nori on the outside, but some, like the California and rainbow rolls, place the rice on the outside.
Makisu - mah-kee-soo) – Mat made of bamboo strips to create make-zushi.
Masago - mah-sah-goh) – capelin (smelt) roe, very similar to tobiko but slightly more orange in colour, not as common as tobiko in North America (though often caught here). Capelin, shishamo, is also served grilled (after being lightly salted) whole with the roe in it as an appetizer.
Matoudai - mah-toh-dai) – John Dory.
Mirin - mee-rin) – Sweet rice wine for cooking.
Mirugai - mee-roo-ghai) – Geoduck or horseneck clam, slightly crunchy and sweet.
Miso - mee-soh) – Soy bean paste.
Moyashi - moh-yah-shee) – Bean sprouts.
Murasaki - moo-rah-sah-kee) – meaning “purple” an old “sushi term” for Shoyu.
Namako - nah-mah-koh) – Sea cucumber. This is much harder to find in North America than in Japan. As a variation, the pickled/cured entrails, konowata (koh-noh-wah-tah), can be found for the more adventurous diners. The liver, anago no kimo (ah-nah-goh noh kee-moh) is served standalone as well.
Nasu - nah-soo) – Eggplant. Also called Nasubi.
Natto - naht-toh) – Fermented soy beans. (Not just for breakfast anymore.) Very strong smell and taste, and slimy; most people don't like it. Order it once, if for no other reason than to see the confused look of the chef.
Negi - neh-gee) – Green Onion. Scallion. Round onion is called Tama-negi.
Neta - neh-tah) – The piece of fish that is placed on top of the sushi rice for nigiri.
Nigiri-zushi - nee-ghee-ree-zoo-shee) - The little fingers of rice topped with wasabi and a filet of raw or cooked fish or shellfish. Generally the most common form of sushi you will see outside of Japan.
Nori - noh-ree) – Sheets of dried seaweed used in maki.
Ocha - oh-chah) – Tea.
Odori ebi - oh-doh-ree-eh-bee) - (‘Dancing shrimp’)- large prawns served still alive.
Ohyo - oh-hyoh) – Pacific halibut, sometimes incorrectly labeled "dohyo." Atlantic halibut is called Karasu Garei.
Ono - oh-noh) Wahoo. As much fun to catch as to eat, ono (Hawaiian for ‘delicious’) has a very white flesh with a delicate consistency, similar to a white hamachi (yellowtail).
Oshi-zushi - oh-shww-zoo-shee) – Sushi made from rice pressed in a box or mold.
Oshibako - oh-shee-bah-koh) - Used for pressing the sushi to make Oshi-zushi.
Oshibori - oh-shee-boh-ree) – The wet towel one cleans one’s hands with before the meal.
Oshinko - oh-shin-ko) - A general term for the many and varied pickled vegetables that are not uncommon at the table in Japanese dining, and often found at sushi-ya. They include, but are not limited to, pickled burdock root, daikon, cabbage, carrots, and many others.
Otoro - oh-toh-roh) – The fattiest portion of the tuna, found on the underside of the fish.
Ponzu - pohn-zoo) – Sauce made with soy sauce, dashi and Japanese citron, such as Yuzu or Sudachi.
Ramen - rah-mehn) – Chinese-style noodles served in broth in a bowl; traditional Japanese "fast food." 'Instant' ramen, made by extrusion and often bought in packets for easy preparation, was invented in the 1960s and is now found worldwide. Today Cup Ramen, which is even easier to make, is popular around the globe.
Roe - Fish eggs) Generally, flying fish, smelt, and salmon roe are available in all sushi restaurants. "Roe" is a generic name; see the individual entries (ikura, kazunoko, masago, tobiko) for specific types.
Saba - sah-bah) - mackerel, almost always served after being salted and marinated for a period ranging from a couple of hours to a few days, so really cooked. In this form it is called Shime-Saba (shee-meh-sah-bah). Raw mackerel (nama-saba) is sometimes served but it must be extremely fresh as it goes off quickly.
Sake - sah-keh) – Rice wine. Pronounced ‘sah-keh’ not “sah-key.” Served both hot and cold depending on the brand type. Some people love it, some people hate it.
Sake - sah-keh) – Salmon. To avoid confusion, some people say Sha-ke.
Sansho - sahn-shoh) - Japanese pepper. A must with most Unagi dishes.
Sashimi - sah-shee-mee) – Raw fish fillets sans the sushi rice.
Sazae - sah-zah-eh) – Type of conch, not found in the US.
Shari - shah-ree) – Sushi Meshi (sushi rice). A sushi bar term.
Shiokara - shee-oh-kah-rah) – A dish made of the pickled and salted internal organs of various aquatic creatures. It comes in many form such as ‘Ika no Shiokara’ (squid shiokara), shrimp, or fish.
Shirako - shee-rah-koh) – The milt sac of the male codfish.
Shirataki - shee-rah-tah-kee) – Translucent rubbery noodles.
Shiro goma - shee-roh-goh-mah) – White sesame seeds.
Shiro maguro - shee-roh mah-goo-roh) – (‘White Tuna’) Sometimes called ‘Bincho Maguro’ or ‘Bin-Naga Maguro.’ This is often either Escolar or white albacore tuna. It doesn’t handle as well and can change color (though doesn’t change in taste or quality) so it is not as common as other tunas. It will usually not be on the menu, and if available, must be asked for (or listed as a ‘special’). It is not unusual to find Escolar (oilfish) labeled as shiro maguro, however in quantity, this particular fish can have a laxative effect on some people. Recently, Black Marlin is also being served as ‘white tuna.’
Shiro miso - shee-roh-mee-soh) – White soy bean paste.
Shiromi - shee-roh-mee) – This is the general term for any white fish, and if one asks for shiromi the itamae will serve whatever white fish may be in season at the time.
Shiso - shee-soh) – The leaf of the Perilla plant. Used frequently with in makizushi and with sashimi. The sushi term is actually Ooba (oh-bah).
Shitake - shee-tah-keh) – A type of Japanese mushroom, usually available dried.
Shoga - shoh-gah) – Ginger root. Usually grated.
Shoyu - shoh-yoo) – Japanese soy sauce.
Shumai - shoo-mai) – Another type of dumpling (compare gyoza); it is always steamed.
Soba - soh-bah) – Buckwheat noodles.
Somen - soh-mehn) – White, threadlike wheat noodles.
Spam - yes, SPAM!) – a sushi you can get in Hawaii (maybe Japan too), an acquired taste, perhaps.
Su - soo) – Rice vinegar.
Suimono - soo-ee-moh-noh) – Clear soup.
Surimi - soo-ree-mee) – Imitation crab meat (also called kamaboko (kah-mah-boh-koh)) usually made from pollack. Generally used in California rolls and other maki, it’s not the same thing as "soft shell crab." Although “surimi” is used outside of Japan, most Japanese people use the term Kani-Kama, short for Kani-Kamaboko.
Sushi - soo-shee)- Technically refers to the sweetened, seasoned rice. The fish is sashimi. Wrap the two together in portions and sell it as sushi, and the name still refers to the rice, not the fish. Sushi is the term for the special rice but it is modified, in Japanese, to zushi when coupled with modifiers that describe the different styles of this most popular dish. In Japan when one says “sushi” they are referring to the whole package, the sushi rice plus the neta. And this holds true for all kinds of sushi. When one wants to say “sushi rice” they say “sushi-meshi.” Also, in Japan when someone suggests going out for sushi, they are referring specifically to nigirizushi.
Suzuki - soo-zoo-kee) – sea bass (of one species or another, often quite different).
Tai - tah-ee) – porgy or red snapper (substitutes, though good); real Japanese tai is also sometimes available.
Tairagi - tah-ee-rah-gah-ee) - The razor shell clam.
Tako - tah-koh) – Octopus, cooked.
Tamago yaki - tah-mah-goh-yah-kee) – Egg omelet, sweet and, hopefully, light. A good test of a new sushi restaurant: if it's overcooked and chewy, go somewhere else. In Japan it is the trademark of each chef, and potential customers will often ask for a taste of the tamago in order to judge the chef's proficiency.
Tarabagani - tah-rah-bah-gah-ni) – King Crab (the real thing, as opposed to kanikama, which is the fake crab leg made from surimi).
Tataki - tah-tah-kee) - Tataki is a Japanese term which may mean seared on the outside only (as in Katsuo) or chopped up and served in its raw form (as in Aji).
Temaki-zushi - the-mah-kee-zoo-shee) - Hand rolled cones of sushi rice, fish and vegetables wrapped in seaweed. Very similar to maki.
Tempura - tem-poo-rah) – Seafood or vegetables dipped in batter and deep fried.
Tobiko - toh-bee-koh) – flying-fish roe, red and crunchy, often served as part of maki-zushi but also as nigiri-zushi commonly with quail egg yolk (uzura no tamago) on top uncooked.
Tofu - toh-foo) – Soybean curd.
Tori - toh-ree) – Chicken.
Torigai - toh-ree-gah-ee) – Japanese cockle, black and white shell fish, better fresh but usually frozen (and chewier as a result).
Toro - toh-roh) – Fatty Tuna. There are several different types of tuna you can order in a sushi restaurant. It comes in many different grades which are from best to, well, not worst, o-toro, chu-toro, toro, and akami (which has no fat content).
Udon - oo-dohn) – Wide noodles made from wheat.
Unagi - oo-nah-gee) – Eel (Freshwater) – grilled, and brushed with a teriyaki-like sauce, richer than salt water eel.
Uni - oo-nee) – Sea Urchin. If you are lucky you won’t like it, if not you have just developed an expensive habit. The most expensive (start saving now) is red in color, the least is yellow, luckily they taste the same. Lobsters eat sea urchin as a mainstay of their diet.
Usukuchi shoyu - oo-soo-koo-chee-shoh-yoo) - Light Japanese soy sauce.
Wakame - wah-kah-meh) – Dried lobe-leaf seaweed in long, dark green strands.
Wasabi - wah-sah-bee) – Japanese ‘Horseradish.’ This is the small lump of green stuff that looks sort of like clay. Best done in extremely small doses. The actual rhizome is not related to American Horseradish except by name, but unfortunately, the ‘wasabi’ most often served is not real wasabi, but powdered and reconstituted American Horseradish with food coloring. Real wasabi is difficult to find in most restaurants, but is sometimes available upon request (and worth it, even with a surcharge, in my opinion). It is quite different in appearance (slightly more vegetal in color and obviously a ground up lump of rhizome, not powder) as well as taste. Real wasabi has a hotness that does not linger, and compliments and enhances the flavor of sushi rather well.
Yakumi - yah-koo-mee) – A generic term for condiments. | <urn:uuid:078a4bae-688a-4c6a-96ca-cca4f5ade758> | CC-MAIN-2019-47 | https://www.sushifaq.com/sushi-sashimi-info/sushi-terminology/ | s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496668772.53/warc/CC-MAIN-20191116231644-20191117015644-00257.warc.gz | en | 0.88275 | 6,589 | 2.671875 | 3 |
Patricia M. DeMarco
December 21, 2018
Presentation is on video by Blue Lens, LLC : https://youtu.be/iJrSADqS9pA
Our beautiful, fragile, resilient Living Earth provides everything we need to survive and thrive. All the living things on the planet have co-evolved, forming an interconnected web of life with a life support system that provides oxygen-rich air, fresh water, and fertile ground. The functions of the living earth – ecosystem services – support life with elegance and simplicity, following the laws of Nature. These laws have been discovered gradually over many years, but they still hold many mysteries. The laws of Nature – the principles of chemistry, physics, biology, ecology – are not negotiable, whether humans acknowledge them or not. The human enterprise has brought the delicate balance of the natural world under acute stress in modern times. Overpopulation, resource extraction for minerals and materials, fossil fuel combustion, and hyper-consumption now threaten the stability of our existence. Global warming from the accumulation of greenhouse gases, especially carbon dioxide and methane, and global pollution, especially from plastics, now threaten life on Earth as we have known it.
If the goal of our entire civilization is to achieve sustainability for future generations, some adjustments must be made in the way people relate to the natural world. In a sustainable condition, “people meet the needs of today without compromising the ability of future generations to meet their own needs.” This condition of sustainability has a balance among environmental values, social and cultural values, and economic values. In today’s civilization, the economic values far outweigh environmental and social values, leading to an increase in environmental degradation and social inequity. The damages are distributed unevenly around the globe, with those least responsible for creating the problem most affected by the results of climate change. The children, non-human living things, and those unborn of the next generation will pay the heaviest price for decisions made today. Therefore, this is not a technology issue; rather it is a moral and ethical issue: are our decisions going to preserve the wealth and privilege of the fossil industry corporations, or will our decisions move to preserve a viable planet for our children?
Nothing characterizes modern life so well as plastic – long-lasting, resilient, malleable, diverse in applications and uses. We find plastic everywhere from food containers to personal care products, structural materials, fibers and finishes. Whether single-use products like plastic bags or structural materials like car dashboards or PVC pipe, all plastics are made from materials found in fossil fuels: natural gas and petroleum. The plastics industry burst forth in the decade following the end of World War II, when the industrial might amassed for munitions turned to domestic products like fertilizer, herbicides, pesticides, and plastics. Worldwide, 8.3 billion tons of plastic have been produced since then, and over half of that material has been discarded as waste. The problem is only expected to grow as plastic production increases exponentially – from a mere two million metric tons annually in 1950 to more than 300 million metric tons today, and a projected cumulative total of 33 billion metric tons by 2050.
Plastics are made of long-lived polymers; they do not break down easily in the environment, neither in landfills nor in the oceans. Plastics are not readily broken down by biological systems: they are indigestible and provide no nutrition when introduced into food chains. Nearly all the plastic ever made is still in the biosphere. Worldwide, factories produce 400 million tons of plastic per year, with plastic bottles produced at a rate of 20,000 per second. Globally, 60% of all plastics ever produced were discarded and are accumulating in landfills or in the natural environment. Americans discard 33.6 million tons of plastic a year; only 6.5% of plastic is recycled for re-use, and 7.7% is burned in trash-to-energy facilities. Until 2018, most American recycled plastic was collected as mixed waste and sent to China for processing. However, China is no longer accepting material with more than 1% contamination for recycling. So, most of the plastic waste generated in America is now destined for landfill. Worldwide, plastic demand is expected to drive petroleum and natural gas production for use as feed stocks for many decades, especially to serve growing markets in Asia.
The ubiquitous contamination of the earth from man-made plastics presents a system problem. We need to seek a systematic solution. The problem of global pollution from plastics has three components: 1. Economic issues; 2. Environmental and health issues; and 3. Ethics issues.
1. Economic issue: The entire plastics enterprise is based on taking fossil derived raw material extracted from natural gas and petroleum deep underground, refining the products and producing polymers, forming the polymers into the desired product, distributing and trucking to the wholesale and retail operations for a product that is often used once and discarded.
Image from PA Department of Community and Economic Development https://dced.pa.gov/key-industries/plastics/
This system only works economically when fossil fuels are valued at a relatively low price, and no cost is imposed on the discarded or wasted material. This approach is entirely incompatible with a sustainable society. In many situations, the fossil extraction and production phases are heavily subsidized, and the single-use products are inexpensive to the users, or their costs are unseen, as with plastic packaging or plastic bags at the retail check-out counter. In 2014, the UN Environment Programme estimated the natural capital cost of plastics, from environmental degradation, climate change and health, to be about $75 billion annually. As of 2018, the hydraulic fracturing sector of the oil and gas industry continued its nine-year streak of cash losses. In the third quarter of 2018, a cross-section of 32 publicly traded fracking-focused companies spent nearly $1 billion more on drilling and related capital outlays than they generated by selling oil and gas. The fracking industry is anticipating that the ultimate sale of gas liquids as feed stock for plastic production in refineries, such as the proposed Shell Appalachia project, will generate positive revenue from sales of plastic pellets for the production of consumer goods, many of them single-use packaging like plastic bottles, bags and flatware.
2. Environment and Health Issue
Many plastics and by-products of their production are directly toxic to humans and other living things. Some are also disruptors of endocrine functions, for example by mimicking hormone activity in ways that over-stimulate or suppress normal hormone functions. Such compounds have been associated with reproductive and developmental disorders, obesity, reduced fertility, and neurologic disorders. Over 80,000 synthetic chemicals are in common commercial use; of these, only 200 have been directly tested for health effects. It is nearly impossible to avoid exposure to chemical contaminants: BPA is found in plastic packaging and containers for food; phthalates, parabens, and plastic microbeads are common in consumer products; fire retardants in upholstery, curtains, and electronics expose people to PBDEs (poly-brominated di-phenyl ethers); and residues of long-abolished chemicals like PCBs (used in insulating oil for transformers and banned in 1979) continue to contaminate the food chain.
Should we be concerned? Industry advocates argue that there is no “proof of harm” that any particular chemical caused a specific instance of illness or disease. But human epidemiology studies are uniquely challenging because individuals respond differently to the same exposure, and the effects can vary widely for children, the elderly, and especially the developing fetus. Furthermore, people are not exposed to one chemical at a time but experience a stew of myriad chemicals, some without their knowledge. The use of animal models, where some of the variables can be controlled, presents problems as well, especially in court, where the industry defense can argue that animals are not exactly like humans, and reasonable doubt prevents a clear ruling of harm. The burden of proof is on the consumer, and the case is rarely successful.
People are exposed as minute quantities of potentially harmful materials are magnified through the food chain. Observations in the field conflict with rosy promotion of the benefits of plastics. Attempts to move legislation to protect consumers and prevent widespread exposures to questionable materials become bogged down in a regulatory quagmire. Citizen action groups use information campaigns and argue for better testing, but as industry experts infiltrate the regulatory agencies, the credibility of government agencies is eroding.
3. Ethics Issue
The entire matter of global pollution, especially from plastics products and the by-products associated with their production, is a question of moral commitment to preserve the life support systems of the Earth, or to allow destruction of the living part of the planet for the sake of short-term profit for a very few corporate interests. It is really a matter of asserting the right for life to EXIST! The surge in plastic use, especially single-use plastics like plastic bags for purchased items, developed as a consumer convenience. But we are seeing now the unintended consequences of convenience. But is it really from convenience that we see 48 tons of garbage, mostly plastic containers and packaging, left in the parking lot after a concert? Is this convenience, or is it really a consumer sense of entitlement and total oblivious disregard for the consequences of their actions? The freedom to act as we wish without the sense of responsibility for the consequences of our actions yields chaos. As we see the cumulative effects of single-use plastics in the environment, in fish and sea creatures, and even in human bodies, we must begin to question the obligation to control this material at its source. Recognizing that the source is a fossil-based feed stock, the need to re-think plastic reaches a higher plane. Are we killing our planet for convenience?
As horrific images begin to filter into the media, people are beginning to move from awareness to action. The plastic problem will not go away without fundamental changes in expectations and the reality of packaging and single-use materials. According to a United Nations Environment Programme study: “To get the plastics problem under control, the world has to take three primary steps. In the short term, society needs to significantly curtail unnecessary single-use plastic items such as water bottles, plastic shopping bags, straws and utensils. In the medium term, governments need to strengthen garbage collection and recycling systems to prevent waste from leaking into the environment between the trash can and the landfill, and to improve recycling rates. In the long run, scientists need to devise ways to break plastic down into its most basic units, which can be rebuilt into new plastics or other materials.” Three kinds of solutions present good options for re-thinking how we develop, use and dispose of plastic: 1. Restructure the value system; 2. Use green chemistry to prevent environmental and health harms; 3. Take precaution in protecting living systems.
1. Restructure the Value System
To the consumer, and to many manufacturers, plastic looks cheap. The price of the oil or natural gas liquids used as feed stocks for plastic is way too low compared to the actual cost to extract, refine, process and transport the plastic products. And it is cheaper to produce plastic from virgin material than from recycled plastics, because recycled material needs to be cleaned and sorted and is difficult to define precisely. Plastic was designed to melt at temperatures lower than metals, so metal molds can be used repeatedly to shape plastic into products, conserving the capital needed for the machinery while using a relatively cheap ingredient. Oil and natural gas have significant price supports for extraction and production embedded in the laws, tax treatments, and land uses that have supported the supremacy of mineral rights since 1837. These subsidies have kept the apparent cost of fossil-based products artificially low. The system is set up to reward manufacturers, in the form of profits, for producing products, but to impose the cost of disposing of the waste on taxpayers. The system gives economic incentives for turning raw (fossil) material into trash as rapidly as possible. Thus, the cost of the entire life cycle of the plastic is not included in the price of the product. If the full life cycle cost of the extraction, production and disposal or recapture of the plastic were included in the price the consumer sees, plastics would not seem so inexpensive, and there would be a greater incentive to avoid waste. In a circular model of materials management, incentives for designing products to be re-used or recaptured and re-purposed would reduce the waste.
Plastics also seem inexpensive because much of the cost of their production and use is not counted at all. The Gross Domestic Product, one of the most common measures of the economy, does not include the value of services provided by the living earth… essential things like producing oxygen, regenerating fresh water, and providing food, fuel and fiber from natural materials. The Gross National Product as measured for the global economy is about $19 trillion (US dollar equivalent), while the services provided by ecosystems have a value of $33 trillion globally. By comparison, the global plastics industry is valued at $1.75 trillion, growing at an expected 3% annually. The degradation of ecosystems, and our habit of ignoring the value of essential services we take for granted, have allowed products like fossil fuels and plastics derived from fossil origins to seem cheap when, in fact, their use is destroying the priceless life support system of planet Earth. The artificially cheap price of plastics has contributed to the hyper-consumption that is clogging our landfills and oceans with wasted materials that may never completely break down to innocuous components. One large part of the solution would be to adjust the value calculation to reflect the true cost.
2. Use Green Chemistry to prevent environmental and health harms
Just as plastics were engineered to resist breaking down, materials can be designed to serve useful functions without the biological and physical characteristics that make plastics a problem when they interact with living systems. Risk to health and to the environment is a function of the inherent hazard and the exposure to the hazard. The current regulatory system that controls environmental and health risks from chemicals and materials is based on limiting the amount of exposure, or emissions into the environment. Thus, even very toxic materials can be deemed “safe” if they are limited to a very small release. Under this system, over 5.2 billion pounds of toxic or hazardous material is emitted into the air and water by permit each year. Green chemistry takes the approach of designing chemicals and materials to have inherently benign characteristics. Thus, the risk is reduced by reducing or eliminating the inherent hazard itself, instead of trying to limit the exposure.
Green chemistry uses the kinds of processes found in nature: ambient temperature and pressure, catalysts and enzymes, biological processes, and non-toxic ingredients and by-products. Creative application of green chemistry principles has produced exciting innovations and has the potential to change the way we produce and use materials. Green chemistry uses bio-mimicry as an inspiration for making new materials. Using catalysts that simulate the processes of living systems to break down organic chemical contaminants has proven productive. Using plant-based feed stocks instead of fossil resources has produced many innovations in both pharmaceutical applications and materials. The whole field of bio-plastic is emerging with very promising innovations using algae, hemp, and bamboo. Designing for benign, or even helpful, effects on the natural world will revolutionize materials management. The waste stream is part of the cost. The circular economy that can emerge offers productive and sustainable ways to meet the need for materials without increasing the burden on living systems from materials that they cannot use or break down.
3. Precaution in protecting living systems
The problem of global pollution from plastics will not go away without specific and deliberate intervention both from individuals and from governments. An ethic that places value on retaining and re-using materials that will not degrade in the environment must replace the expectation of convenience regardless of the true cost. The demand for convenience has come at a terrible price for the oceans, for the health and well-being of millions of creatures, including people. For the millions of people for whom using plastic is the only choice for clean water, or single-servings of essential items of food or sanitation, the systemic problems of wealth distribution must be addressed. There is an obligation upon the industrialized societies to resolve the material problem created initially as a by-product of industrialization. Making massive amounts of plastics without considering the implications of their disposal places an ethical burden upon the producers to protect the living systems that are being choked by the waste. Waste has become a cultural norm of modern life, but it is not a condition that can persist if survival of life on the planet is to be sustained.
The regulatory system must also be adjusted to require independent testing for health and biological effects in advance of mass production, not only after consumer complaints materialize. The burden of proof of safety must rest on the producer, not on the consumer. It is critical to protect workers from chronic exposures and to evaluate by-products and wastes for the potential to cause harm as well.
Consumers have a role to play in moving both the markets and the regulatory infrastructure of plastic. In re-thinking plastic, we can refuse single-use plastics. Ask yourself at the point of purchase how materials will be disposed of. Plan ahead when shopping to take a reusable bag, water bottle, and cutlery with you. When in a restaurant, before the server brings anything just say, “No plastic, please,” and you will not have a plastic straw. You can carry bamboo or re-usable straws with you easily. While some situations may be challenging, it can become a joint family activity to seek creative ways to avoid plastic in everyday functions. (You may find helpful suggestions here: https://myplasticfreelife.com/plasticfreeguide/)
It is important for consumers to communicate to manufacturers and stores that the excessive plastic used in packaging everything is not acceptable. Challenge the grocery manager who wraps individual vegetables in shrink-wrap. Ask for less packaging, and bring your own for as many items as you can. Obtain re-usable containers for storing produce and other foods at home instead of plastic wrap, bags or single-use containers. For things like yogurt or other dairy products, re-use the plastic containers for storage, or re-purpose them as take-out containers or for craft projects. A little preventive thinking can eliminate much of the single-use waste stream: skip the K-Cups and use a single-serve brass insert instead; get out of the habit of buying beverages in plastic bottles; cook real food to avoid excess packaging; and choose bulk food items.
Re-use and re-purpose as many items as possible. It is becoming fashionable again to use real dishes and glassware and cutlery. These need not be heirloom porcelain to be effective, and dishwashers run without the heat element use fewer resources than the extraction, production and disposal of plastic goods. Choose quality, forever items. You can swap or re-design clothing and visit consignment shops, especially for things that you will wear infrequently.
It is more important than ever to recycle correctly. Many recycling requirements have changed recently, as mixed waste streams are harder to separate into usable product lines. It is most important to avoid polyvinyl chloride (PVC), Styrofoam and styrene, because these are the most difficult to recycle, and they have a 450-year life in the landfill. These materials are especially noxious when they reach the ocean, washed down into the rivers and carried through the waterways to the sea. Remember that the Mississippi River drainage covers more than one third of the U.S. land area. Clean plastic for recycling and separate it from non-recyclable trash; cross-contamination will disqualify an entire load. Recycle electronics at a recapture facility where the components are recovered and returned to the production cycle. These services are not always free, but the cost is an important part of moving to a circular economy.
Finally, to protect the living systems of the planet, it will be important for consumers to support policies that require less packaging, establish markets and procedures for recovery and re-use of materials, and align the value to reflect the true life-cycle cost of the plastic burden on the Earth.
The Moral Imperative
America operates under the banner of freedom, but has not embraced the concept that freedom without taking responsibility for consequences yields chaos. Technology used without accountability and wisdom yields disaster. We are seeing all around us today the unintended consequences of convenience. It is time to take responsibility for the trash. Everyone can dispose of plastic responsibly; litter kills. We can connect to the natural world and recognize its true value to our life, our survival, and the dependence we have as humans on all the other living things with which we share this time and space. We can find the courage to defend and protect the living Earth.
Citations and Sources:
DeMarco, Patricia. “Listening to the Voice of the Earth.” Pathways to Our Sustainable Future – A Global Perspective from Pittsburgh. 2017. (University of Pittsburgh Press, Pittsburgh). Pages 13 to 35.
Andrea Thompson. “Solving Microplastic Pollution Means Reducing, Recycling – And Fundamental Re-thinking.” Scientific American. November 12, 2018. https://www.scientificamerican.com/article/solving-microplastic-pollution-means-reducing-recycling-mdash-and-fundamental-rethinking1/?utm_source=newsletter&utm_medium=email&utm_campaign=policy&utm_content=link&utm_term=2018-11-12_featured-this-week&spMailingID=57769378&spUserID=MzUxNTcwNDM4OTM1S0&spJobID=1521540986&spReportId=MTUyMTU0MDk4NgS2 Accessed December 18, 2018.
Laura Parker. “China’s Ban of Plastic Trash Imports Shifts Waste Crisis to Southeast Asia and Malaysia.” National Geographic. November 16, 2018. https://www.nationalgeographic.com/environment/2018/11/china-ban-plastic-trash-imports-shifts-waste-crisis-southeast-asia-malaysia/ (On China’s refusal of mixed plastic waste.)
UNEP. 2014. Valuing plastics: the business case for measuring, managing and disclosing plastic use in the consumer goods industry. United Nations Environment Programme. https://wedocs.unep.org/rest/bitstreams/16290/retrieve
Clark Williams-Derry. “Nine-Year Losing Streak Continues for US Fracking Sector.” Sightline. December 5, 2018. www.sightline.org.
Irfan A. Rather et al. “The Sources of Chemical Contaminants in Food and their Health Implications.” Frontiers in Pharmacology. 2017. 8:830. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5699236/
Sam Levin and Patrick Greenfield. “Monsanto Ordered to Pay $289 million as Jury Rules Weedkiller Caused Man’s Cancer.” The Guardian. August 11, 2018. https://www.theguardian.com/business/2018/aug/10/monsanto-trial-cancer-dewayne-johnson-ruling
UNEP. 2014. Valuing plastics: the business case for measuring, managing and disclosing plastic use in the consumer goods industry. United Nations Environment Programme. https://wedocs.unep.org/rest/bitstreams/16290/retrieve
Costanza, R., R. de Groot, L. Braat, I. Kubiszewski, L. Fioramonti, P. Sutton, S. Farber, and M. Grasso. 2017. “Twenty years of ecosystem services: how far have we come and how far do we still need to go?” Ecosystem Services. 28:1-16.
Clare Goldsberry. “Global market for plastic products to reach $1.175 trillion by 2020.” PlasticsToday. December 17, 2017. https://www.plasticstoday.com/author/clare-goldsberry Accessed December 19, 2018.
For an explanation of Green Chemistry Principles see Paul Anastas and John Warner. 12 Design Principles of Green Chemistry. American Chemical Society. https://www.acs.org/content/acs/en/greenchemistry/principles/12-principles-of-green-chemistry.html
DeMarco, Patricia. 2017. “Preventing Pollution.” Pathways to Our Sustainable Future – A Global Perspective from Pittsburgh. (University of Pittsburgh Press, Pittsburgh, PA). Pages 140-169.
Lord, R. 2016. Plastics and sustainability: a valuation of environmental benefits, costs and opportunities for continuous improvement. Trucost and American Chemistry Council. https://plastics.americanchemistry.com/Plastics-and-Sustainability.pdf
A detailed description of the circular economy can be found in EMF, 2013. Towards a circular economy – opportunities for the consumer goods sector. Ellen MacArthur Foundation. https://www.ellenmacarthurfoundation.org/assets/downloads/publications/TCE_Report-2013.pdf
[Updated January 9, 2019]
The heartworm (Dirofilaria immitis) is a nematode or roundworm named for its place of residence, inside the heart. To understand the challenge of controlling this parasite, you have to understand its life cycle, a complex and slightly bizarre process.
Living in a dog’s pulmonary arteries, adult heartworms mate. Each female (which can reach sizes of up to 11 inches) produces thousands of offspring called microfilariae, each less than 1/800th of an inch, which circulate in the blood. (A dog who hosts breeding adults may have as many as 10-15 microfilariae circulating in each drop of his blood.) The microfilariae cannot develop any further unless the dog meets up with mosquitoes, which are essential to the parasite’s development.
Heartworm microfilariae conduct the next stage of their life cycle in the mosquito. When a mosquito drinks blood from an infected dog, it obviously drinks in some of the microfilariae, which undergo changes and are called larvae at this stage. They molt twice in the next two to three weeks (and are referred to as L1, L2, and L3 as they molt and change) after entering the mosquito. Then they must leave the mosquito and find a new host or they will die. The larvae move to the mosquito’s mouth, positioned so the insect’s next bite allows them to migrate into the victim’s subcutaneous tissue.
If the mosquito bites an animal that does not host heartworm, the larvae die. If the mosquito bites an animal that heartworms thrive in, the larvae molt again (within three to four days) and take the form (called L4 or “tissue larvae”) they will use to migrate through the dog’s body to reach the circulatory system. It takes the L4 about 60 – 90 days to reach the heart, where, in three to four months, they become adults and begin to reproduce.
Heartworm is most successful when hosted by dogs, red wolves, coyotes, red foxes, and grey foxes.
Wild and domestic felids (cats), ferrets, wolverines, and California sea lions also host heartworm, but its life cycle differs significantly in these species. Few of these animals host adult heartworms that produce microfilariae, which generally die within a month of circulating in the animal’s blood. Also, adult heartworms live for much shorter periods in these animals, compared with five to seven years in the dog.
Humans, bears, raccoons, and beavers are considered “dead-end hosts.” The larval form of the heartworm, transmitted by mosquito bites, do not reach adulthood in dead-end hosts.
Facts about Heartworm (D. immitis)
For heartworms to thrive and become endemic in an area, they require:
1) a susceptible host population (dogs),
2) a stable reservoir of the disease (dogs infected with breeding adults and circulating microfilariae),
3) a stable population of vector species (mosquitoes), and
4) a climate that supports the life cycle of the heartworm’s reservoir species (dogs and wild canids), its vector species (mosquitoes), and the heartworm’s larval stages within the mosquito. The development of the larvae within the mosquito requires temperatures of about 80 degrees F or above for about two weeks.
Only female mosquitoes suck blood and transmit heartworm microfilariae.
The mosquito is the only known vector for the transmission of heartworm (except for scientists who inject research animals with heartworm-infected blood). Heartworm shows a preference for certain species of mosquito. However, during its spread throughout the United States it has adapted well to using other mosquito species as vectors. It has also adapted to a wider temperature range for development of larval stages in the mosquito.
Adult heartworms prefer to live in the heart and in veins leading to it, but they have been found in dogs’ liver, trachea, esophagus, stomach, feces, eye, brain, spinal cord, and vomit.
For more information about heartworm, see the American Heartworm Society’s website.
Effects of Heartworm Infestation in Dogs
The number of adult worms in a dog’s body and the dog’s activity level determine the severity of heartworm disease symptoms.
Sedentary dogs with 25 or fewer worms often remain symptom-free. A dog with a mild heartworm infestation – only a couple of heartworms – may exhibit a mild, occasional cough.
Active dogs with the same number of heartworms and most dogs with 50 or more worms have moderate to severe symptoms. Moderate symptoms may include a cough, exercise intolerance, and abnormal lung sounds.
A dog whose heart is clogged with worms may develop the same symptoms as one with congestive heart failure, including fatigue, a chronic cough, and fluid retention (edema). If adult worms fill the right side of the heart, increased venous pressure can damage liver tissue, alter red blood cells, and lead to blood disorders that cause a sudden, dramatic collapse called caval syndrome. Severe symptoms include a cough, exercise intolerance, difficulty breathing, abnormal lung sounds, radiographs showing visible changes to the pulmonary artery, liver enlargement, temporary loss of consciousness because of impaired blood flow to the brain, edema in the abdominal cavity, and abnormal heart sounds.
Resistance to Heartworm
However, as we have reported before, the immune systems of some dogs seem to be able to wage war, to a limited extent, on the heartworm. An immune-mediated response is thought to be responsible for some dogs’ ability to remove microfilariae from the dog’s circulatory system. And a very healthy dog may be able to outlive a light infestation of adult worms.
This makes sense from a biological perspective. Parasites want to continue their own life cycle; if they kill their host, they kill themselves. Nature’s plan for parasites generally calls for a few hosts to die – thus weeding out weaker individuals of the host species – but for more hosts to live in great enough numbers to provide the parasite with a stable environment.
In order for this to occur, says Richard Pitcairn, DVM, Ph.D., author of the best-selling book, Natural Health for Dogs and Cats, animals in the wild (coyotes, wolves, foxes, and other wild canines) develop a natural resistance to heartworm. “They get very light infestations and then become immune,” he explains. What few people realize, says Dr. Pitcairn, is that many domestic dogs display the same response.
According to Dr. Pitcairn, an estimated 25 to 50 percent of dogs in high-heartworm areas are able to kill (through an immune-mediated response) any microfilariae in their bodies, so they cannot pass along the developing microfilariae. “Also, after being infested by a few heartworms, most dogs do not get more of them even though they are continually bitten by mosquitoes carrying the parasite. In other words, they are able to limit the extent of infestation. All of this points to the importance of the health and resistance mounted by the dogs themselves.”
Despite the extensive use of heartworm preventive drugs, the rate of heartworm infestation in dogs in high-heartworm areas has not declined in 20 years. To Dr. Pitcairn, this statistic proves that drug treatments are not the answer.
Conventional Veterinary Treatments for Heartworm Infection
Years ago, there was only one option offered by conventional veterinary medicine for treating a dog harboring adult heartworms – an arsenic-based drug called Caparsolate that was administered several times intravenously. The treatment was effective, but fraught with potential side effects. Caparsolate has to be injected carefully into the dog’s veins; if even a minute amount comes in contact with muscle or other tissue, it causes horrible wounds accompanied by massive tissue sloughing.
In 1995, Rhone-Merieaux introduced Immiticide, a drug that is injected into spinal muscles, which has quickly replaced Caparsolate as the treatment of choice. This drug, now made by Merial, also presents some side effects, including irritation at the injection site, pain, swelling, and reluctance to move, but none quite as dramatic as the tissue-sloughing danger of Caparsolate.
Post-treatment symptoms are similar with both the old and the new treatment. The drugs kill the adult heartworms, and the dead and decaying worms must work their way out of the dog’s circulatory system. The dead worms are carried in the bloodstream to the lungs, where they are gradually reabsorbed. Depending on the dog’s health and the total number of worms in his system, this can be a mild or a violent process. Dogs usually cough, gag, and vomit, experience fever and lung congestion, and are understandably depressed and lethargic.
Both treatments require the dog to be kept as quiet as possible (preferably caged) for the first few days. All increases in heart rate and respiration force a greater amount of dead worm fragments into circulation. If too many particles flood into the lungs at once, they can block the blood vessels to the lungs and cause death.
Both the Immiticide and the Caparsolate treatments are contraindicated (not recommended) for the most severely infested dogs with Caval syndrome. Ten to 20 percent of dogs with a high worm burden will die as a result of the Immiticide treatment. The number seems grim until you consider that even without treatment, dogs with that level of infestation suffer a much slower, progressively debilitating death. If a heart radiograph, antigen test, or the dog’s symptoms suggest that the infestation is very severe, the dog can undergo a modified treatment protocol, consisting of a single injection, which kills the weaker worms, followed by two more injections a month later.
A dog that experiences difficulties may require extended veterinary care, including administrations of fluids, steroids to reduce any fever or inflammation and help quell the coughing, and supportive therapies for the liver.
After the adult heartworms are killed, the next step in conventional treatment is to kill any microfilariae that are still in circulation. Since the microfilariae cannot mature without an intermediate host (time spent in a mosquito), you’d think you could skip this step. However, at the doses used to prevent further heartworm infections, preventive drugs can kill circulating microfilariae at a dangerously rapid rate, potentially causing shock and the subsequent death of the dog, so the microfilariae must be cleared under veterinary supervision before monthly preventives are started. The veterinarian will dispense a filaricidal drug and monitor the dog until tests show that the microfilariae are absent from the blood, usually within one to two weeks.
Conventional Veterinary Heartworm Preventive Drugs
All commercial heartworm preventive medicines are targeted to kill heartworm in the “tissue larval” (L4) life stage (after the larvae have been deposited in the dog by an infected mosquito). For this reason, the preventives must be administered within the window between the time the dog is bitten by an infected mosquito and the time the larvae reach the dog’s circulatory system – about 63-91 days. The preventives do not affect the heartworms once they reach the circulatory system. To provide a safe overlap of time, guaranteeing that any larvae deposited in a dog by an infected mosquito are killed before they reach the next, invulnerable life stage, most of the preventives are given every 30 days.
All the preventives listed below kill some (but not all) heartworm microfilariae. If a dog hosts adult, breeding heartworms, has microfilariae in his bloodstream, and receives a preventive, the subsequent die-off of the microfilariae may cause the dog to suffer from labored respiration, vomiting, salivation, lethargy, or diarrhea. These signs are thought to be caused by the release of protein from dead or dying microfilariae. This is why the makers of all of the preventives recommend that dogs receive a heartworm test – to rule out the possibility that the dog hosts adults and/or microfilariae – before receiving a heartworm preventive.
The U.S. Food and Drug Administration has approved four oral, one injectable, and one topical medication for use as heartworm preventives. These include:
Filaribits (diethylcarbamazine citrate), manufactured by Pfizer, is widely considered to be the safest preventive, causing the fewest adverse reactions and deaths. Filaribits Plus adds oxibendazole, which targets hookworm, whipworm, and ascarid infections. Filaribits and Filaribits Plus are given to the dog daily.
Heartgard Chewables or Heartgard Tabs (ivermectin), made by Merial, is given monthly. Heartgard Plus Chewables adds pyrantel, which controls ascarids and hookworms.
Interceptor (milbemycin), by Novartis, is similar to ivermectin and also controls hookworm, roundworm, and whipworm. Novartis’ other heartworm preventive is Sentinel (milbemycin plus lufenuron). The added ingredient controls fleas by inhibiting the development of flea eggs (it does not kill adult fleas).
ProHeart (moxidectin), by Fort Dodge, is a macrocyclic lactone related to ivermectin that is given monthly. Fort Dodge also makes ProHeart6, an injectable form of moxidectin that is administered by a veterinarian every six months. This formulation allows the moxidectin to be time-released, affecting heartworm larvae for a period of six months following injection.
Revolution (selamectin), from Pfizer, is not an oral drug but a topical preparation that is applied monthly. It also kills fleas, ear mites, and Sarcoptes scabiei (the mite that causes sarcoptic mange) as well as the American Dog Tick, and prevents flea eggs from hatching.
The following data are extracted from the 1987-2000 Adverse Drug Experience (ADE) Summary published by the Food and Drug Administration’s Center for Veterinary Medicine (FDA CVM). The summaries include all adverse drug reaction reports submitted to the CVM which the CVM has determined to be “at least possibly drug-related.”
In reviewing these reports, the CVM takes into consideration “confounding factors such as dosage, concomitant drug use, the medical and physical condition of the animals at the time of treatment, environmental and management information, product defects, extra-label uses, etc.”
The CVM warns readers that these complex factors cannot be fully addressed in its summaries, which are intended only as a general reference to the type of reactions that veterinarians, animal owners, and others have voluntarily reported to the FDA or the manufacturer after drug use.
Also, the drugs or drug combinations listed below are not necessarily the products mentioned above.
|DRUG|# OF REACTIONS|# OF DOGS DIED|TOP 5 SIGNS AND % OF DOGS WHO DISPLAYED THEM|
|Diethylcarbamazine|187|7|vomiting (32%), depression/lethargy (15%), diarrhea (12%), anorexia (6%), collapse (4%)|
|Diethylcarbamazine/oxibendazole|1033|128|vomiting (27%), increased alanine aminotransferase (liver enzyme)/blood outside the vascular system (hemorrhage, 25%), increased alkaline phosphatase (liver or bone enzyme)/blood outside the vascular system (hemorrhage, 22%), anorexia (18%), depression/lethargy (18%)|
|Ivermectin|681|134|depression/lethargy (31%), vomiting (26%), ataxia (loss of muscle coordination, 23%), mydriasis (prolonged dilatation of the pupil, 18%), death (13%)|
|Ivermectin/pyrantel|209|30|vomiting (22%), depression/lethargy (17%), diarrhea (16%), death (11%), anorexia (9%)|
|Milbemycin|460|67|depression/lethargy (34%), vomiting (31%), ataxia (12%), death (12%), diarrhea (11%)|
|Milbemycin/lufenuron|400|14|vomiting (31%), depression/lethargy (23%), diarrhea (18%), pruritus (itching, 16%), anorexia (13%)|
|Moxidectin|283|51|ataxia (56%), convulsions (22%), depression/lethargy (18%), trembling (17%), recumbency (lying down, won’t get up, 16%)|
|Selamectin (topical)|1716|67|vomiting (17%), depression/lethargy (13%), diarrhea (13%), anorexia (9%), pruritus (9%)|
Note: Figures for the injectable form of moxidectin are not yet available. ProHeart 6 (injectable moxidectin) was released to market in late 2001.
Problems with Heartworm Preventives
There is no doubt that preventive drugs have protected millions of dogs that may have otherwise become infected with heartworm. However, a small percentage of dogs treated with commercial preventives do suffer from mild to serious side effects. And many veterinarians, faced with a sick dog with no changes in its routine except a recent administration of heartworm preventive, are reluctant to consider the possibility that a veterinarian-developed and -prescribed drug may cause illness. In some cases, in fact, these drugs are the last thing veterinarians seem to consider.
This was certainly the case in March 2000, when Terri Eddy of Rincon, Georgia, asked her veterinarian for a heartworm preventive for Sage, her two-year-old Australian Shepherd. Sage had been spayed, was an indoor dog, and had no bleeding or clotting disorders. Eddy’s veterinarian recommended Revolution, a topical medication that is used to kill heartworm larvae, fleas, the American Dog Tick, ear mites, and the mites that cause sarcoptic mange.
Two days after Revolution was applied to Sage, the young Aussie developed a cough. Three days after that, she became quiet, didn’t want to play, developed bruising, and whimpered in pain. Eddy took Sage back to the veterinarian, and asked whether the Revolution could have caused Sage’s signs of distress. The veterinarians at the practice agreed that Sage’s symptoms, including blood in the whites of her eyes, could not have been caused by Revolution; they speculated that Sage must have ingested rat poison and/or suffered a blow to the head.
Eddy, a nurse, felt that neither diagnosis was correct, and Sage did not respond to the treatment provided.
The next day, Eddy took Sage to an emergency clinic when the dog lost her balance, could not stand, and began vomiting blood. At the clinic, she began having seizures. A few hours later at a specialist’s clinic, she continued to have seizures and bled into the orbits of her eyes. The following morning, she died. Eddy was told that another dog had died the previous month at the same clinic with identical symptoms after being treated with Revolution.
An autopsy on Sage showed low platelets and intracranial hemorrhage from a toxin. “No dog should ever suffer the way Sage did,” says Eddy. “I encourage all owners to approach this product with caution.”
Alternative Options for Treating A Dog’s Heartworm
Sage’s story is an extreme example of what can go wrong when toxic drugs are used, and, of course, dogs with severe heartworm infestation suffer, too. However, dog guardians have many heartworm prevention options available to them – certainly more than either using the most toxic chemicals or going without any protection whatsoever from heartworm.
Many veterinarians, holistic and conventional, take a conservative approach to heartworm preventives and other medicines. Rather than reaching for the most complicated combination drug on the market, a dog owner can focus on one threat at a time, and only when that threat is imminent. For example, in most parts of the country, mosquitoes are a seasonal danger, so an owner could safely discontinue heartworm preventives when mosquitoes are not present. If a dog is suffering from a second parasite, such as ear mites, an owner can address that issue separately, and with the least-toxic preparation available, rather than turning to a multi-target drug.
Another approach is to keep careful records of your administration of preventive drugs, and stretch the time period between applications from the recommended 30 days to something a bit longer – thus reducing the number of doses per season a dog will receive. It takes heartworm larvae a minimum of 63 days after being deposited in a dog’s body by an infected mosquito to develop into a juvenile worm that cannot be affected by preventive drugs. It’s critical, then, to make sure the dog receives a preventive drug within that period, even allowing for some overlap. Some owners give their dogs preventives every 45 – 50 days, rather than every 30 days, sparing their dogs one or two doses per mosquito season. Obviously the success of this approach absolutely depends on the owner’s reliable record-keeping and administration.
Still other guardians make their preventive decisions based on the incidence of heartworm in their part of the country. A person who lives in an area with lots of heartworm cases and a long mosquito season may make different decisions than a person living in an area where veterinarians rarely or never see heartworm cases.
And then there are the guardians who forego conventional preventives in favor of alternative approaches.
Fighting Heartworm by Fighting Mosquitoes (Without Toxins)
The most effective way to avoid biting insects is to reduce their population, and the latest weapons in the war against mosquitoes – as well as no-see-ums, biting midges, sand flies, and black flies – are machines that pretend to be people. The Mosquito Magnet emits a plume of carbon dioxide, warmth, and moisture in combination with octenol, a natural attractant that lures biting insects. A vacuum pulls them into a net, where they dehydrate and die. According to the maker, two months of continuous use causes local mosquito populations to collapse. The Mosquito Magnet comes in three models powered by electricity or propane, each protecting 3/4 to 1 acre. The machines cost $500 to $1300.
Altering the Mosquito’s Environment
Low-tech mosquito control methods are important, too. Remove buckets, tires, and other objects that collect and hold rainwater; empty and refill birdbaths every few days; and maintain screens on doors, windows, and porches. “Mosquito fish” (Gambusia affinis), tiny fish that eat mosquito larvae, can be added to ponds, rain barrels, and other potential mosquito nurseries. They are available from some garden stores, agricultural extension offices, and fish & game departments.
Check your local organic garden supply store for Bacillus thuringiensis israelensis (BTI), a biological control product that is added to standing water to prevent mosquito larvae from maturing.
Agnique MMF is an environmentally friendly product that covers ponds and other standing water with an invisible film that smothers mosquito larvae and drowns egg-laying adults. Agnique MMF spreads rapidly, is safe for recreational and drinking water, and remains effective for 10 to 14 days.
Arbico is a mail order company in Tucson, Arizona, that sells organic gardening supplies and biological insect control products, including BTI, plus battery-operated mosquito inhibitors. Arbico also sells fly parasites and all kinds of other organic pest control products.
Infected Dog Recovers Without Conventional Treatment:
While some dog guardians focus on finding alternative heartworm preventives, others find themselves in the unfortunate situation of needing alternative treatment for their dog’s heartworm infection. That was the case with Georgia resident Robin Sockness Snelgrove, the guardian of a small mixed-breed dog named Bandit.
In January 2000, at the age of 10, Bandit developed signs of a heartworm infection, including a chronic cough and loss of appetite. Snelgrove’s veterinarian diagnosed a moderate to severe infection. Concerned about Bandit’s age and serious potential side effects, Snelgrove declined the option of conventional treatment. The veterinarian offered steroids to make Bandit more comfortable – and Snelgrove began investigating alternative treatments.
Snelgrove contacted a friend who raises dogs holistically, and followed her friend’s suggestions for a herbal treatment program. This included using products made by Nature’s Sunshine, including two artemisias (mugwort and sweet Annie, or annual wormwood) and several other herbs in combination with black walnut* to kill the heartworms and their microfilariae; coenzyme Q10, hawthorn, garlic, and cayenne to strengthen the heart and help prevent clotting; and yucca to help relieve Bandit’s cough.
The cough continued intermittently for four or five months before diminishing. “Then, almost overnight, he came back to life and started acting like a puppy again,” says Snelgrove. She kept Bandit on the herbs for a year before going back to the veterinarian for another heartworm test. “The vet couldn’t believe he was still alive,” she says, “but here he was, with a shiny new coat and full of energy.” Snelgrove says Bandit has tested negative for heartworm for the last two years, during which he has taken the same herbal products on a maintenance schedule.
After Snelgrove posted Bandit’s story on her Web site, eight people put their heartworm-positive dogs on the same program. “So far two are completely cured with negative heartworm tests to prove it,” she says, “and the others are improving.”
Snelgrove appreciates the seriousness of heartworm disease, and says she would rather prevent it than have to treat it. “But what I’ve learned from all this,” she says, “is that a diagnosis of heartworm infection doesn’t mean having to choose between expensive, dangerous treatments and letting your dog die. There are other options.”
*Controversy over black walnut preventive: Some holistic veterinarians have reported having some success using black walnut capsules or extracts as heartworm preventives and even as a treatment for adult heartworm infections. In recent years, perhaps because more people have been trying this approach, more reports have surfaced of black walnut’s shortcomings as a preventive, with some dogs testing positive for heartworm despite their owners’ use of black walnut. Has black walnut been over-promoted as an alternative to conventional veterinary heartworm preventives?
If a dog eats commercial pet foods, receives annual vaccinations, is exposed to pesticides and other chemicals, and has taken prescription drugs, her impaired immune system may fail to discourage heartworms. In addition, poor-quality herbal preparations or good-quality products that have been damaged by exposure to heat, light, and air won’t help her. Because most powders lose their effectiveness quickly, tinctures (alcohol extracts) are usually a better choice than capsules, but even a freshly made tincture that wasn’t aged long enough or did not contain enough plant material may be too weak to help.
One way to protect dogs from heartworm and other parasites with black walnut is to buy the best products you can find (Gaia and HerbPharm are excellent brands) from a store that receives frequent shipments. Freshness matters when products are stored under fluorescent lights or exposed to sunlight or heat.
For additional protection from heartworm, intestinal worms, fleas, and mosquitoes, add garlic and other parasite-repelling herbs to your dog’s dinner. Several products designed for pets contain wormwood and other artemisias, noni, neem, rue, thyme, the white rind of pomegranates, or cloves.
Building Your Dog’s Immune Competence
Stephen Blake, DVM, of San Diego, California, is a holistic veterinarian who consults with preventive-drug-adverse clients all over the country, including areas where heartworm is endemic.
“Many of my clients either never used conventional heartworm preventives or quit using them decades ago,” says Dr. Blake. “Today’s preventives are much improved, but they still can cause adverse side effects. Some dogs develop autoimmune disorders when heartworm chemicals alter normal cells so that the body considers them foreign and attacks them. The drug’s active ingredients wind up in the liver, where they may cause a form of hepatitis, or the drug might affect some other part of the body. The end result is that in trying to prevent heartworm, you might lose the patient to an autoimmune complex, liver failure, or the failure of whatever organ was most damaged by the drug.
“Sometimes the damage caused by heartworm preventive medication is so subtle,” he says, “that no one makes the connection. It could show up as slightly reduced energy, a picky appetite, skin problems, ear infections, or any number of benign chronic conditions that the dog didn’t have before it went on the medication. Several of my patients had symptoms like this that went away when their owners discontinued the medication. That’s when I realized that the risk of damage from preventive drugs was greater than the risk of heartworm, and I started to focus on nutrition and natural preventives instead.”
Dr. Blake monitors patients with heartworm blood tests every six months. “Negative test results reassure clients,” he says, “but even if a dog tests positive, it doesn’t mean the dog is going to die. This is a common misconception. If the dog’s test was negative six months ago, a positive result probably indicates the presence of just a few heartworms rather than a large number. In that case, nutritional and herbal supplements, dietary improvements, and other holistic strategies can help the dog eliminate adult worms and prevent microfilariae from maturing.”
Dr. Blake is fond of citing a study conducted several years ago at Auburn University Veterinary Medical School by Dr. Ray Dillon, who attempted to infect impounded stray dogs with heartworm by injecting them with blood containing 100 microfilariae. At the end of the study, each of the dogs had only three to five heartworms.
In contrast, Dr. Dillon found that when dogs bred for research were given 100 microfilariae, they typically developed 97 to 99 adult worms. “That’s a huge difference,” says Dr. Blake. “The stray dogs were from a control facility in Mississippi, which is a heartworm-endemic area, and no one was giving them heartworm protection medication. These dogs had developed their own resistance to heartworm in order to survive, which they probably did by manufacturing antibodies that prevented the heartworm larvae from maturing.”
To improve a dog’s overall health in order to help him repel and eliminate heartworms, Dr. Blake recommends improving the diet (more protein, better-quality protein, and a gradual transition to raw food), digestive support (colostrum, digestive enzymes, and probiotics such as acidophilus), clean water (filtered or bottled), ample exposure to unfiltered natural light outdoors (something he believes kept the stray dogs healthy), and the elimination of everything that weakens the canine immune system. This includes pesticide treatments for fleas or ticks, vaccinations, exposure to garden chemicals, and most prescription drugs.
“It isn’t necessary to fear every mosquito,” says Dr. Blake, “or to equate every positive heartworm test with a death sentence. Mother Nature has given your dog plenty of defense weapons that will work fine if you keep chemicals and inadequate nutrition from interfering. When I first stopped using heartworm prevention medicines, I went through stages of using homeopathic nosodes, herbs, and natural repellents in their place. I no longer use any of those replacements because I believe a dog’s best protection comes from a clean, toxin-free life-style and good nutrition.”
Heartworm-Infected Mosquitoes: A Spreading Threat
The first description of heartworm in dogs appeared 155 years ago in the October 1847 Western Journal of Medicine and Surgery. But until the late 20th century, America’s canine heartworm was a regional illness, with most cases occurring in the Southeast. Dogs living in Rocky Mountain and Western states rarely contracted heartworm, and if they did, it was because they picked it up while traveling through areas in which heartworm was endemic, or permanently established.
Warm summer temperatures, conditions that favor mosquitoes, and an increasingly mobile canine population have contributed to the spread of heartworm. Mosquitoes thrive in swampy areas and wherever they have access to standing water. Sometimes natural disasters such as storms or floods spread heartworm by expanding the mosquitoes’ habitat. Other factors that contribute to heartworm infestation include the agricultural irrigation of previously dry land or the installation of swimming pools, ponds, and fountains.
Wendy C. Brooks, DVM, at the Mar Vista Animal Medical Center in Los Angeles, California, is keeping a close eye on heartworm infections in areas thought to be safe from the parasite. Consider Salt Lake City, Utah, historically a low-risk area for heartworm.
“A beautification project led to the planting of new trees throughout the city,” says Dr. Brooks. “The following year, these trees were pruned for the first time, leaving thousands of knot holes throughout Salt Lake City. This suited Aedes sierrensis, the ‘tree hole mosquito,’ just fine. Soon heartworm cases began appearing. Salt Lake City is now considered as endemic an area for heartworm as Texas, Louisiana, or Florida. Planting trees throughout a city is hardly a major climatic event, but it was enough to establish heartworm and its mosquito vector in a new area.”
Between 1996 and 1998, researchers at the University of California at Davis School of Veterinary Medicine reviewed the heartworm tests of 4,350 dogs in 103 cities in Los Angeles County. Eighteen dogs tested positive, or 1 in 250. The result startled veterinarians not only because it was unexpected but because the infection rate was as high for dogs that had never traveled as it was for dogs that had, and 50 percent of the infected dogs were “indoor” dogs, which are considered less susceptible to heartworm than dogs that live outdoors. Age, sex, and coat length were ruled out as risk factors.
“Veterinarians in Southern California do not usually test for heartworm,” says Dr. Brooks, “but we’re beginning to. In areas with swimming pools, reservoirs, lakes, ponds, and other mosquito-friendly environments, heartworm is infecting our dogs.”
Is Alaska next? Thanks to global warming, mosquitoes have appeared in Barrow, the northernmost city in North America, and the mosquito-friendly Kenai Peninsula southwest of Anchorage reached heartworm-incubating temperatures in May.
Making Decisions on Heartworm Treatment
The success of alternative approaches for preventing or treating heartworm – or any other condition, for that matter – depends upon a complex multitude of factors. One should not simply replace conventional medications with “natural” remedies and expect miracles to happen; this is the sort of ill-considered approach that often fails and gives alternative medicine a bad reputation.
Instead, dog guardians who are concerned about the risks of conventional prevention or treatment drugs should consult with a holistic veterinarian and look into a “whole dog” heartworm prevention program. This should include a review of and improvements in the dog’s diet, overall health status, exposure to toxins, and stress levels. Local conditions should also be taken into account, including the incidence of mosquitoes and of heartworm in any areas that you and your dog frequent.
The decisions of whether or not to use natural or conventional preventives, and how and whether to treat a heartworm infection are not easy to make – but they are your choices. Find a veterinarian who will support and help you protect your dog according to your dog-care philosophy.
1. Inquire about the prevalence of heartworm in any areas where you and your dog frequent.
2. Rigorously employ a protection program (any protection is better than none) that suits your dog-care philosophy.
3. Have your dog tested for heartworm infection annually. The competence of your dog’s immune system is critical for protecting him from heartworm.
4. Use immune boosters such as an improved diet, pure water, reduced exposure to toxins, etc. | <urn:uuid:37806a48-eb73-4f7c-81cb-28d276fa3482> | CC-MAIN-2019-47 | https://www.whole-dog-journal.com/health/heartworm-treatment-options/ | s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496671260.30/warc/CC-MAIN-20191122115908-20191122143908-00138.warc.gz | en | 0.941265 | 7,925 | 3.890625 | 4 |
Welcome to LinuxQuestions.org, a friendly and active Linux Community.
You are currently viewing LQ as a guest. By joining our community you will have the ability to post topics, receive our newsletter, use the advanced search, subscribe to threads and access many other special features. Registration is quick, simple and absolutely free. Join our community today!
Note that registered members see fewer ads, and ContentLink is completely disabled once you log in.
If you have any problems with the registration process or your account login, please contact us. If you need to reset your password, click here.
Having a problem logging in? Please visit this page to clear all LQ-related cookies.
Introduction to Linux - A Hands on Guide
This guide was created as an overview of the Linux Operating System, geared toward new users as an exploration tour and getting started guide, with exercises at the end of each chapter.
For more advanced trainees it can be a desktop reference, and a collection of the base knowledge needed to proceed with system and network administration. This book contains many real life examples derived from the author's experience as a Linux system and network administrator, trainer and consultant. They hope these examples will help you to get a better understanding of the Linux system and that you feel encouraged to try out things on your own.
Click Here to receive this Complete Guide absolutely free.
By MasterC at 2003-07-11 03:55
Part 1: What ProFTPD is, and what it isn't.
Proftpd is a linux FTP server. It has the same basic functionality as ANY other FTP server out there. It recieves connections into the server in the form of a
file request, and then tranfers that via File Transport Protocol (FTP) back to the requester.
Proftpd is well documented should you need to further enhance your basic configuration file. This means that if you go to Proftpd's Home Page that you will find more information to specialize or customize your configuration to your needs.
Proftpd is NOT an http server. You can access FTP servers via a browser, but this does not mean that the server you are accessing is an HTTP server. The main difference to note is that FTP is specifically for File Transfer, where HTTP (HyperText Transport Protocol) is for broadcasting a wide array of information in a coded (HTML or PHP usually) format.
Part 2: Obtaining ProFTPD.
You can obtain ProFTPD straight from the applications home page: http://proftpd.org Here you can grab either the latest stable version (1.2.8 as of this writing) or you can grab the latest "release candidate" meaning it's not officially stable, but may have more features, and possibly less bugs than the current "stable" release. As always, you can grab the CVS version of this software in their CVS repository: http://proftpd.org/cvs.html CVS is the truly "latest and greatest" but will also not work on many systems. CVS is usually not the way to go unless you are an active developer with the project, or if you must have a feature only included in the CVS version (until it makes the stable release sometime down the road).
Other options for obtaining include FTP mirrors for your distribution (distro) where packaged versions may be available. Official mirrors listed on Proftpd's homepage are also a great place to grab from. If you use Gentoo, Proftpd in it's latest stable release is always available through portage. Debian has it's packages available through apt-get as well. Proftpd is a very common, strong FTP, so it's likely your distro has made packages for it. Ideally though, and in this document, we will use the source. This should be the case whenever feasible because of the amazing flexibility in compiling your own applications. For those just starting out, and/or needing to simply "get things working" RPM's and .deb are an excellent way to do this.
Where should I put this file? Is there a "standard" place on linux where I should place the files I download?
Yes, with a sometimes...
It's quite a hot topic, and a very arguable one at that. You are encouraged to place files wherever you feel the most comfortable, and where it is likely you'll remember where they are at. It's also advisable to use a place where all users will be able to write to, so system wide user directories, such as /home and /usr are where I'll suggest to keep them. On several distros I've used (such as Mandrake) they have even created a location in /usr/src for source files, and /usr/src/RPM for RPM files. This may be what you will want to look into for your file(s) storage, or feel free to arbitrarily choose your own location. Should you choose to use /usr/src you will need to give the users write privileges, as well as own the directory to the group users (although you may find other options work great as well). To do this, we will need to have root privileges, so open up your favorite terminal (I'll be using xterm and bash) and type:
This will send the command to switch users, and specifically switch to root. We will then be prompted for root's password:
And type it in, as always cASe SenSItiVe in linux.
Once in, we will then proceed with the setting up of permissions for users on /usr/src
First up, we'll need to ensure the directory exists:
ls -l /usr/src
It is VERY likely this will be there. Most systems kernel compiles go here, and many applications know this and use it when creating drivers. In the rare instance /usr/src does not exist, we will create it:
We now want to make sure group rights are writable:
chmod 770 /usr/src
Which will allow owner (currently root) and group (currently root) full read/write and execute permissions. This is necessary on this directory because:
root should [almost] always have full rights on files;
Your group will need to be able to read the directory, write to it, and list file contents (the result of adding execute to a directory).
We will now need to change the group ownership to that of the users group on the system, on most distros this is the 'users' group ;-) Ingenious. So we will do that with chown:
chown root.users /usr/src
(*Note: You can use <i>Either</i> a dot ( . ) <i>OR</i> a colon ( : ) when specifying user.group on a chown)
Now your users and root should be able to place files in that directory without recieving warnings that you don't have the rights to. Log out of root and proceed onto the next section as an unprivileged user (any user on the system other than root).
Part 3: Unpacking and configuring ProFTPD
For the rest of this helper, I will assume you have:
Obtained the latest source (tar.bz2 1.2.8) from the official HomePage;
Saved it (as an unprivileged user) to /usr/src
So when I refer to /usr/src/proftpd-1.2.8 you should use whatever directory you have chosen.
Let's get into the directory now containing the bzipped tarball (tar.bz2):
And using tar we will both extract it from the compressed tarball, and unpack the tarball into it's directory:
tar xvjf proftpd-1.2.8.tar.bz2
What this will do is unpack tar into a directory called proftpd-1.2.8 which we will then need to enter:
Now you can view all the files in the directory using ls:
Ensure there are files, specifically you can check the README and INSTALL files that are included with quite a few project tarballs, to view them use a reading application such as less:
This will allow you to scroll freely through those 2 text files which describe functions of the software, and sometimes how to install them (usually generically).
Once we've verified these 2 files exist, we will move onto configuring the Makefile. The Makefile is a file that is created from the configure process (configure is actually simply a script that will verify applications locations, and can be very in depth or very simple depending on the needs of the application it's configuring) that will eventually build the the necessary executables that proftpd will need to run. First you should always check a configure script's help file:
./configure --help | less
What we are doing there is calling the configure script from it's current directory (. means use this directory I am in and ./ means run this executable file) and piping it's output to something we can easily use to scroll up and down the screen, less. Pipes are VERY useful; I won't go into much detail with them as they deservant of their own small helper, but I will say, learn about them, use them, love them. Nuf' said.
Reading over the options always helps you to see an applications capabilities and to see it's requirements if they are not discussed in the README or INSTALL files. If you wish to use a feature, these are usually indicated with a --enable-feature attached to the configure script as I will show below. The same goes for disabling a feature. Should you decide something isn't necessary for your purposes, and you would like to ensure it's not included as a function in your final product, you can usually specify --disable-feature to do so. The configuration help (./configure --help) should have more information on that. After you are finished viewing all your options with the configure script, we will then execute it with those we wish to use:
It can be as complex as you'd like, or as simple as:
Which often works great on standard system setups. But should you wish more from your application, or just more control over exactly what is and isn't used, it's available. For information on each option, you should search the documentation from the projects website, and/or http://google.com/linux for answers. In my example I've chosen to enable 2 security featues for checking for username/password authentications. Security is always something to consider when setting up anything on your system, especially a server.
Part 4: The Makefile and the make options
Once the configure script has completed successfully (if there are errors, these are usually due to not having required dependencies, satisfy them, and then re-run the script) you can then run:
which will go through the Makefile that is created from configure's output. If you want to change things that configure found, such as the install_user (which can also
be specified prior to ./configure along with install_group to setup default ownership and permissions) variable, you can edit it then. It's not always a great idea to tweak this file, and I suggest not to. Simply run the 'make' and sit back and watch your application happily compile. Once finished without errors, you now have a working proftpd executable. You can run it from the current directory for testing if you'd like. If you didn't change the install_user option above, then this will need to be done as root (as shown above use the 'su -' option to change to root again) and then:
What these options will do is run proftpd (/usr/src/proftpd-1.2.8/proftpd) by calling it with it's full path since likely it's not in the system PATH (and rightly shoudn't be yet); and it will also call a specific conf file (configuration file) by using the -c option. If you don't receive an error after starting it, try to ftp to yourself:
Login as your default user, use your password when requested, and you should be in. Standard console commands work, try 1 just to make sure you are in:
And you should see a list of files.
We will exit the ftp now:
Since everything is working out, we can safely choose to install this now, so while we are still root:
You now have propftd installed, and you can smile proudly! :-)
Having an application installed is only half the battle, especially when talking about a server. There is more to it than simple install and run (usually). In the next section(s) we'll talk about the ProFTPD configuration file.
Part 5: The Basic configuration file (proftpd.conf)
Several different style 'sample' configurations can be found inside the source directory of proftpd-1.2.8/sample-configurations At this point I start to assume you know a little bit more about your wants/needs and desires for your FTP server. If all you want is "for it to work" you can probably just use the basic.conf as your configuration file. To do this, you only need to copy it to your configuration files directory (/etc usually). You will need to be root to do this, so let's get back to root with 'su -' :
And now, again as root, you'll need to start it. You have several ways of doing this, but for ease of this document, we will start it straight from the command line, no
t as a daemon service (such as inetd). So open up your terminal and 'su -' to root and:
/usr/sbin/proftpd -c /etc/proftpd.conf
And make sure it's running by:
Once you've verified it's working, you are done! Well, for basic FTP you are done, and you should now have a working FTP server on your linux box!
Test it from outside connections, ensure you make IPTables rules that allow port 21 to be open (this is the standard port that FTP runs on) and enjoy your new server. This does allow anonymous access, but to actually allow this type of access more must be done.
Part 6: Basic Anonymous Access
First, you will need to open up, in your favorite text editor, /etc/ftpusers If this file does not exist, you will need to create one. Simple syntax, so do not worry about this part, just add the users that you don't want people to be able to login as, this should include root at the very least, an example of the ftpusers file could be:
# ftpusers This file describes the names of the users that may
# _*NOT*_ log into the system via the FTP server.
# This usually includes "root", "uucp", "news" and the
# like, because those users have too much power to be
# allowed to do "just" FTP...
# Version: @(#)/etc/ftpusers 3.00 02/25/2001 volkerdi
# Original Author: Fred N. van Kempen, <firstname.lastname@example.org>
# The entire line gets matched, so no comments or extra characters on
# lines containing a username.
# To enable anonymous FTP, remove the "ftp" user:
# End of ftpusers.
This is taken from Slackware 8.1 should anyone wonder :-) As noted in the comments in that file, be sure to not have "ftp" listed if you want to allow anonymous. If you do not want anonymous however, simply add that user into the file just below "news". We will assume you do want anonymous and move on with setting that up.
Anonymous FTP users will default to using the directory /home/ftp as their home directory. This is the action most people will want, so we will leave that alone. We will need to create the user, and that will need to be done as root, be very careful when adding users, you do not want to use existing uid's otherwise your new "user" will be masquerading as an already existing one. On we go with creating the ftp user. As root ( su - ) type:
What this will do is:
Create a user named ftp (the last argument) with a uid of 5555 (usually safe to assume that's not used yet, you can choose another uid if you are sure it's not already used, to check, you can open /etc/passwd up with your favorite text reader ( less ) and view already used uid's), a default group of ftp, a home directory of /home/ftp and a shell as rbash. No that's not a typo, rbash is the chosen shell, it's restricted bash for those users who need limited or no shell. If you don't have rbash on your system, it's created as easily as this (as root):
ln -s /bin/bash /bin/rbash
And that's it. Now you've got a restricted bash shell on your machine, you should feel even cooler. We will now need to create the group ftp so that your ftp user actually belongs to something. To do that, we will use groupadd:
groupadd -g 5555 ftp
Last thing we need to do is create the users home directory and own it to them. In this case it's the ftp, but that makes nothing different from any other user, so again, as root type:
Now everything should be ready to go, and your ftp user should be happy as can be. Basic anonymous should now be working, complete and good to go.
Part 7: Adding to your Basic proftpd.conf file
Here we will discuss more in depth options in your conf file, specifically whether "to chroot, or not chroot", a bit on <Directory> directives, and other various parts of the file. So let's dig in!!
Here is the basic.conf file (in the sample-configurations directory) which we'll use as our example:
# This is a basic ProFTPD configuration file (rename it to
# 'proftpd.conf' for actual use. It establishes a single server
# and a single anonymous login. It assumes that you have a user/group
# "nobody" and "ftp" for normal operation and anon.
ServerName "ProFTPD Default Installation"
# Port 21 is the standard FTP port.
# Umask 022 is a good standard umask to prevent new dirs and files
# from being group and world writable.
# To prevent DoS attacks, set the maximum number of child processes
# to 30. If you need to allow more than 30 concurrent connections
# at once, simply increase this value. Note that this ONLY works
# in standalone mode, in inetd mode you should use an inetd server
# that allows you to limit maximum number of processes per service
# (such as xinetd).
# Set the user and group under which the server will run.
# To cause every FTP user to be "jailed" (chrooted) into their home
# directory, uncomment this line.
# Normally, we want files to be overwriteable.
# A basic anonymous configuration, no upload directories. If you do not
# want anonymous users, simply delete this entire <Anonymous> section.
# We want clients to be able to login with "anonymous" as well as "ftp"
UserAlias anonymous ftp
# Limit the maximum number of anonymous logins
# We want 'welcome.msg' displayed at login, and '.message' displayed
# in each newly chdired directory.
# Limit WRITE everywhere in the anonymous chroot
Most of these can be referenced here: http://www.proftpd.org/docs/directives/configuration_full.html
I'll go into what I've found to be the most common questions: DefaultRoot (not shown above) directive is used to "chroot" a user to a specific directory. This is commonly ~ which is also known as (AKA) the user's home directory. For example, if you have user "donald" on your system, it is highly likely their home directory is: /home/donald Therefor, if you have the DefaultRoot directive in your conf file, and you follow it with~: DefaultRoot ~
Then when they FTP into your server, they'll be automatically placed into their home directory and will not be able to go above it (folders inside their home directory will be accessible, simply not other folders in /home or lower). This is known as "chrooting" the user. It makes their root directory ( / ) the directory specified. If you wanted a common DefaultRoot for everyone who FTP's into your server, you would specify that directory. Replacing ~ above with the example you want them in. For example, if you wanted to only allow FTP into /var/ftp you would have: DefaultRoot /var/ftp
**Remember, this directory must exist, otherwise there will be no one able to get into your system. Furthermore, you should also realize no files exist outside this directory. This will essentially be the root directory ( / ). You cannot soft-symlink here since the file will basically be, not there ( /home/user is not inside /var/ftp ).
I'll quickly touch on Logging. I strongly suggest you specify a location to log your FTP information to, I personally like the /var/log/proftpd.log and /var/log/xferlog as my 2 logfiles, and to have them, the only thing you have to add to your proftpd.conf file is: SystemLog /var/log/proftpd.log
(*Note, after any changes to your /etc/proftpd.conf file, be sure to restart your process. That can be done (if you are using inetd - see below for discussion on this)
you: ps aux | grep inetd
kill -HUP 1234
Where 1234 is the PID you get from 'ps aux | grep intetd'
For standalone, simply substitute 'grep inetd' with 'grep ftp'
We'll move onto <Directory> Directives.
Most often it seems people are unable to either create or delete files. This can usually be fixed within the <Directory> directives. Here's an example:
If you want to be able to write and delete from a specific directory (in this example we assume you have a DefaultRoot of /var/ftp and a subdirectory in there called /pub):
The last part of this section, we'll discuss using inetd vs standalone.
Inetd is known as the SuperServer daemon of the system. It can take control of all server requests and further request the servers as required for each request. What this means is you don't have to have several servers running constantly when they are not in use. Instead INETD will run, waiting to discover new requests for your server (in this case FTP) and will call your /usr/sbin/proftpd to use it. This is an excellent way to reduce the load on your system, especially if you are running 2 or more servers that will wait until a connection is made to begin working. It's also a great idea to use inetd because the FTP user is serviced immediately. Whereas if run in standalone mode, proftpd will be run and waiting for connections, and then when it recieves one, will then spawn child processes and this child process will serve all connections for the new process.
Basically, use inetd if possible, in the rare instance you cannot, standalone will hopefully be sufficient.
Hopefully this covered most common questions when setting up ProFTPD . If you have anymore questions, please feel free to post them up at LinuxQuestions.org and be sure to be as specific as possible. Include as much information as possible, paying extra special attenti
on to any error messages you find. And always, check your logs. | <urn:uuid:fccda6ba-1d4b-4494-8f52-c05f987d4f1b> | CC-MAIN-2019-47 | https://www.linuxquestions.org/linux/answers/Networking/From_beginning_to_end_ProFTPD?s=3b9945f0d939de2dca6ea352884ac6c0 | s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496668594.81/warc/CC-MAIN-20191115065903-20191115093903-00297.warc.gz | en | 0.92286 | 5,143 | 2.703125 | 3 |
When we became followers of Jesus Christ there were many practices we put behind us; lying, immorality, drunkenness, brawling, etc. We accepted the clear teaching of Holy Scripture that a believer should not make such habits a practice in his life. Many, however, feel that the bible is not so clear in condemning the believer’s participation in the celebration of Halloween. They would say that it is a “gray area” where each man must be convinced in his own mind. Is this true? Let us examine, from a biblical and historical perspective, what the bible has to say about the revelry of October 31st.
Reference materials are in general agreement about the origins of the practices of Halloween:
Now a children’s holiday, Halloween was originally a Celtic festival for the dead, celebrated on the last day of the Celtic year, Oct. 31. Elements of that festival were incorporated into the Christian holiday of All Hallows’ Eve, the night preceding All Saints’ (Hallows’) Day. (4)
Customs and superstitions gathered through the ages go into the celebration of Halloween, or All Hallows Eve, on October 31, the Christian festival of All Saints. It has its origins, however, in the autumn festivals of earlier times.
The ancient Druids had a three-day celebration at the beginning of November. They believed that on the last night of October spirits of the dead roamed abroad, and they lighted bonfires to drive them away. In ancient Rome the festival of Pomona, goddess of fruits and gardens, occurred at about this time of year. It was an occasion of rejoicing associated with the harvest; and nuts and apples, as symbols of the winter store of fruit, were roasted before huge bonfires. But these agricultural and pastoral celebrations also had a sinister aspect, with ghosts and witches thought to be on the prowl.
Even after November 1 became a Christian feast day honoring all saints, many people clung to the old pagan beliefs and customs that had grown up about Halloween. Some tried to foretell the future on that night by performing such rites as jumping over lighted candles. In the British Isles great bonfires blazed for the Celtic festival of Samhain. Laughing bands of guisers (young people disguised in grotesque masks) carved lanterns from turnips and carried them through the villages.
In ancient Britain and Ireland, October 31 was celebrated as the end of summer. In later centuries it was the opening of the new year and was the occasion for setting huge bonfires on hilltops to drive away evil spirits. The souls of the dead were supposed to revisit their homes on that day, and the annual fall festival acquired sinister connotations, with evil spirits, ghosts, witches, goblins, black cats, and demons wandering about. (3)
Halloween, name applied to the evening of October 31st, preceding the Christian feast of Hallowmas, Allhallows, or All Saints’ Day. The observances connected with Halloween are thought to have originated among the ancient Druids, who believed that on that evening, Saman, the lord of the dead, called forth hosts of evil spirits. The Druids customarily lit great fires on Halloween, apparently for the purpose of warding off all these spirits. Among the ancient Celts, Halloween was the last evening of the year and was regarded as a propitious time for examining the portents of the future. The Celts also believed that the spirits of the dead revisited their earthly homes on that evening. After the Romans conquered Britain, they added to Halloween features of the Roman harvest festival held on November 1 in honor of Pomona, goddess of the fruits of trees.
The Celtic tradition of lighting fires on Halloween survived until modern times in Scotland and Wales, and the concept of ghosts and witches is still common to all Halloween observances. Traces of the Roman harvest festival survive in the custom, prevalent in both the United States and Great Britain, of playing games involving fruit, such as ducking for apples in a tub of water. Of similar origin is the use of hollowed-out pumpkins, carved to resemble grotesque faces and lit by candles placed inside. (1)
So, according to secular sources, the traditions of Halloween are based upon the worship of false gods, contact with the dead, foretelling the future, and communing with evil spirits. Does the bible have anything to say about these practices?
The worship of false gods is condemned numerous times in both the Old and New Testaments and is emphasized so strongly that it is the very first of the commandments given to Moses on Mt. Sinai.
In Exodus 20:2-3 the Lord writes with His own hand:
“I am the LORD your God, who brought you out of Egypt, out of the land of slavery. You shall have no other gods before me.”
In Deuteronomy 11:16 He warns the Israelites:
“Be careful, or you will be enticed to turn away and worship other gods and bow down to them.”
The Psalmist warns in Psalm 81:9:
“You shall have no foreign god among you; you shall not bow down to an alien god.”
John even closes the “Love Letter” of 1 John with the admonition to “…keep yourselves from idols” (1 John 5:21).
The passage in the bible that most directly addresses the customs mentioned above is Deuteronomy 18:9-14, where we read:
“When you enter the land which the LORD your God gives you, you shall not learn to imitate the detestable things of those nations. There shall not be found among you anyone who makes his son or his daughter pass through the fire, one who uses divination, one who practices witchcraft, or one who interprets omens, or a sorcerer, or one who casts a spell, or a medium, or a spiritist, or one who calls up the dead. For whoever does these things is detestable to the LORD; and because of these detestable things the LORD your God will drive them out before you. You shall be blameless before the LORD your God. For those nations, which you shall dispossess, listen to those who practice witchcraft and to diviners, but as for you, the LORD your God has not allowed you to do so.”
We see here about the most inclusive list of the activities upon which Halloween was established that can be found anywhere in the bible, and the practitioners thereof are labeled “detestable.” Those habits are, in fact, the very reason the Pagan nations were driven out of the Promised Land.
In Amos 5:14 the Lord tells Israel, “Seek good, not evil, that you may live. Then the LORD God Almighty will be with you, just as you say he is.” He goes on in the next verse to say, “Hate evil, love good.” Note the emphasis (added) on the statement “just as you say He is.” Even though one may be making a profession of faith, Amos is clearly saying that the Lord Almighty is not with those who are actually seeking evil, instead of good. Peter reminds us of this when he says “…the eyes of the Lord are on the righteous and His ears are attentive to their prayer, but the face of the Lord is against those who do evil” (1 Peter 3:12).
At this point some may say, “But all of that was ages ago. None of that significance remains. It is now a harmless kids holiday, isn’t it?” Let’s see just what significance, if any, there is in the modern holiday of Halloween.
Rowan Moonstone (a pseudonym), a self-described witch, has written a pamphlet entitled “The Origins of Halloween,” in which he seeks to defend Halloween from the “erroneous information” contained in “woefully inaccurate and poorly researched” Christian tracts on the subject. Following are excerpts from the question and answer style article.
1. Where does Halloween come from?
Our modern celebration of Halloween is a descendent of the ancient Celtic fire festival called “Samhain”. The word is pronounced “sow-in,” with “sow” rhyming with cow. (2)
2. What does “Samhain” mean?
The Irish English dictionary published by the Irish Texts Society defines the word as follows: “Samhain, All Hallowtide, the feast of the dead in Pagan and Christian times, signalizing the close of harvest and the initiation of the winter season, lasting till May, during which troops (esp. the Fiann) were quartered. Faeries were imagined as particularly active at this season. From it the half year is reckoned. also called Feile Moingfinne (Snow Goddess). The Scottish Gaelis Dictionary defines it as “Hallowtide. The Feast of All Soula. Sam + Fuin = end of summer.” Contrary to the information published by many organizations, there is no archaeological or literary evidence to indicate that Samhain was a deity. The Celtic Gods of the dead were Gwynn ap Nudd for the British, and Arawn for the Welsh. The Irish did not have a “lord of death” as such. (2)
Okay, it is possible that the name of the god and the name of the celebration got mixed up in someone’s research. Note that he still admits it was a “feast of the dead.” He then describes its significance:
4. What does it have to do with a festival of the dead?
The Celts believed that when people died, they went to a land of eternal youth and happiness called Tir nan Og. They did not have the concept of heaven and hell that the Christian church later brought into the land. The dead were sometimes believed to be dwelling with the Fairy Folk, who lived in the numerous mounds or sidhe (pron. “shee”) that dotted the Irish and Scottish countryside. Samhain was the new year to the Celts. In the Celtic belief system, turning points, such as the time between one day and the next, the meeting of sea and shore, or the turning of one year into the next were seen as magickal times. The turning of the year was the most potent of these times. This was the time when the “veil between the worlds” was at its thinnest, and the living could communicate with their beloved dead in Tir nan Og. (2)
11. What other practices were associated with this season?
Folk tradition tells us of many divination practices associated with Samhain. Among the most common were divinations dealing with marriage, weather, and the coming fortunes for the year. These were performed via such methods as ducking for apples, and apple peeling. Ducking for apples was a marriage divination. The first person to bite an apple would be the first to marry in the coming year. Apple peeling was a divination to see how long your life would be. The longer the unbroken apple peel, the longer your life was destined to be. In Scotland, people would place stones in the ashes of the hearth before retiring for the night. Anyone whose stone had been disturbed during the night was said to be destined to die during the coming year. (2)
So from the pen of a defender of the holiday we find that pretty much all that has been said about the holiday by the encyclopedia’s cited earlier, with the possible exception of the faulty association of god status on the name Samhain, is true. Toward the end of the article Mr. Moonstone makes what seems to be, for our purposes, the most telling statement of all:
14. Does anyone today celebrate Samhain as a religious observance?
Yes. Many followers of various pagan religions, such as Druids and Wiccans, observe this day as a religious festival. They view it as a memorial day for their dead friends, similar to the national holiday of Memorial Day in May. It is still a night to practice various forms of divination concerning future events. Also, it is considered a time to wrap up old projects, take stock of ones life, and initiate new projects for the coming year. As the winter season is approaching, it is a good time to do studying on research projects and also a good time to begin hand work such as sewing, leather working, woodworking, etc. for Yule gifts later in the year. (2)
So, according to a witch, for Druids and Wiccans the day still holds religious significance. It is a festival during which “various forms of divination” are practiced. This position is supported in the following article. A witch is giving tips to other homeschooling (!) Witches at a website entitled “Halloween: October Festival of the Dead”.
Origins: All Hallow’s Eve, Halloween or Samhain once marked the end of grazing, when herds were collected and separated for slaughter. For farmers, it is the time at which anything not made use of in the garden loses its’ life essence, and is allowed to rot. Halloween is the original new year, when the Wheel of the Year finishes: debts are paid, scores settled, funereal rites observed and the dead put to rest before the coming winter. On this night, the veil between our world and the spirit world is negligible, and the dead may return to walk amongst us. Halloween is the night to ensure that they have been honored, fed and satisfied–and is the best time of the year for gaining otherworldly insight through divination and psychic forecasting. Recognition of the unseen world and the ordinary person’s access to it, as well as the acceptance of death as a natural and illusory part of life is central to the sacred nature of this holiday. (5)
Note her use of the present tense to describe the various aspects of Halloween. Of special interest is the term “sacred nature of this holiday.” Further down the webpage, in an article entitled “Elemental Homeschooling,” she gives the following suggestions for how to enlighten your children on (and about) Halloween:
As much fun as it is for children to get great bags of sweets at Halloween, the origins of this time of year are sacred and meaningful. It is the time when nature appears to die, so it becomes natural to consider those who have passed away to the spirit world. Bring out pictures of your ancestors and re-tell the old family stories to those who haven’t heard them yet. Remind yourself where you come from. Water is the element of Autumn, and the fluidity of emotion is most apparent in the Fall. We retreat within, burrow down into our homes in order to stay warm for the coming winter. We look within, and easily seek inner communication. Halloween is the perfect time to link the deepening of emotion with finding new ways to search for interior wisdom. Likewise, this is a fun and exciting holiday: theatrics, costuming, and acting out new personas express our ability to change. Here are some ideas for integrating this holy day with home schooling lessons.
Methods of inner communication with divination tools: tarot, palmistry, astrology, dream journaling … ? archetypes: fairy tales, storytelling the Dark Ages, the medieval era, issues about superstition and eternal truths, skeletons: the skeletal system, organs, anatomy …issues about death, persecution (using the Burning Times as a beginning point for older children), mysteries, the spirit world night: nocturnal animals, bodies of water: rivers, lakes, ocean, ponds … (5)
Once again she uses the present tense and describes Halloween as a “Holy Day.” She also advocates many of the activities specifically condemned by Deuteronomy 18:9-12. Obviously, there is a lot more to Halloween than some costumed kids gathering a stomach ache worth of candy. It is clearly a festival of the Kingdom of Darkness.
The scripture has a lot to say about participating in such activities:
“Do not be yoked together with unbelievers. For what do righteousness and wickedness have in common? Or what fellowship can light have with darkness? What harmony is there between Christ and Belial? What does a believer have in common with an unbeliever? What agreement is there between the temple of God and idols? For we are the temple of the living God. As God has said: ‘I will live with them and walk among them, and I will be their God, and they will be my people.” “Therefore come out from them and be separate, says the Lord. Touch no unclean thing, and I will receive you” (2 Corinthians 6:14-17).
1 Thessalonians 5:22 says to “avoid every kind of evil.”
Jesus said it best in John 3:19-21:
“This is the verdict: Light has come into the world, but men loved darkness instead of light because their deeds were evil. Everyone who does evil hates the light, and will not come into the light for fear that his deeds will be exposed. But whoever lives by the truth comes into the light, so that it may be seen plainly that what he has done has been done through God.”
Finally, Paul gives us an idea for a costume to be worn on Halloween (or any) night in Romans 13:12:
“The night is nearly over; the day is almost here. So let us put aside the deeds of darkness and put on the armor of light.”
READ IT HERE >>
There will certainly be people who will still rationalize ways to participate, at some level, in the festivities of Halloween. To this the Lord replies in Proverbs 3:7 “Do not be wise in your own eyes; fear the LORD and shun evil,” and Proverbs 8:13 “To fear the LORD is to hate evil; I hate pride and arrogance, evil behavior and perverse speech.” Will we seek to push the boundaries of our faith to see just how far we can go? Or will we seek to serve the Lord with all our hearts, souls, minds, and strength? “Woe to those who call evil good and good evil, who put darkness for light and light for darkness, who put bitter for sweet and sweet for bitter” (Isaiah 5:20).
The Lord equates Spiritual maturity with the ability to discern good and evil. Paul wrote to the Corinthians that they should “stop thinking like children. In regard to evil be infants, but in your thinking be adults” (1 Corinthains 14:20). The author of Hebrews makes it even more clear when he says “But solid food is for the mature, who by constant use have trained themselves to distinguish good from evil” (Hebrews 5:14).
For those who would still insist that they can participate in such activities with a clear conscience, there is another aspect to think about: the example you are to those around you.
“So whether you eat or drink or whatever you do, do it all for the glory of God. Do not cause anyone to stumble, whether Jews, Greeks or the church of God– even as I try to please everybody in every way. For I am not seeking my own good but the good of many, so that they may be saved” (1 Corinthains 10:31-33).
It is curious to note that in the same breath that Paul says “Love must be sincere” he says “Hate what is evil; Cling to what is good” (Romans 12:9). If we have sincere love for our brethren we will do all that we can to set a good example and not be a stumbling block to them.
“Do not allow what you consider good to be spoken of as evil. For the kingdom of God is not a matter of eating and drinking, but of righteousness, peace and joy in the Holy Spirit, because anyone who serves Christ in this way is pleasing to God and approved by men. Let us therefore make every effort to do what leads to peace and to mutual edification. Do not destroy the work of God for the sake of food. All food is clean, but it is wrong for a man to eat anything that causes someone else to stumble. It is better not to eat meat or drink wine or to do anything else that will cause your brother to fall. So whatever you believe about these things keep between yourself and God. Blessed is the man who does not condemn himself by what he approves. But the man who has doubts is condemned if he eats, because his eating is not from faith; and everything that does not come from faith is sin.”
1 Corinthians 8:7-13:
“But not everyone knows this. Some people are still so accustomed to idols that when they eat such food they think of it as having been sacrificed to an idol, and since their conscience is weak, it is defiled. But food does not bring us near to God; we are no worse if we do not eat, and no better if we do. Be careful, however, that the exercise of your freedom does not become a stumbling block to the weak. For if anyone with a weak conscience sees you who have this knowledge eating in an idol’s temple, won’t he be emboldened to eat what has been sacrificed to idols? So this weak brother, for whom Christ died, is destroyed by your knowledge. When you sin against your brothers in this way and wound their weak conscience, you sin against Christ. Therefore, if what I eat causes my brother to fall into sin, I will never eat meat again, so that I will not cause him to fall.”
The weak or new brother who sees or hears of one of us participating in Halloween may be led or feel pressured to participate himself, even though he does not have a clean conscience about the activity. For him then the activity is clearly sin, because it does not come from faith. This brother would have been pushed toward this sinful state by your indulgence.
Consider another aspect of this; who among us is weaker than our children? Can we take the risk of them seeing us participating, however marginally, in an activity rife with occultism? Jesus had harsh words for those who would cause such little ones to stumble! (Matthew 18:6) We work so hard at protecting them from the evil world around them, will we then be guilty of corrupting them for the sake of a celebration of that very evil? “Do not be misled: Bad company corrupts good character” (1 Corinthians 15:33).
The best thing we can do for our relationship with Jesus is devote ourselves entirely to Him.
“…Let us throw off everything that hinders and the sin that so easily entangles and let us run with endurance the race marked out for us. Let us fix our eyes on Jesus, the author and perfecter of our faith” (Hebrews 12:1b-2a).
How fixed on Jesus can our eyes be if we are spending a night, or even an evening, thinking on darkness? So lets press on to know the Lord!
GET OUR HALLOWEEN TRACTS >>
1. “Halloween” in Funk & Wagnall’s New Encyclopedia, 29 vols. (Rand McNally, 1990), 12:348-349.
2. “The Origins of Halloween” by Rowan Moonstone (Online Source)
3. Compton’s Interactive Encyclopedia
4. Grollier Multimedia Encyclopedia
5. “Halloween: October Festival of the Dead” by Jill Dakota (Online source) | <urn:uuid:320c74a3-df42-4b9c-a45f-7cb7c20da9b4> | CC-MAIN-2019-47 | https://www.goodfight.org/articles/cults-occult/christian-response-halloween/ | s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496667767.6/warc/CC-MAIN-20191114002636-20191114030636-00018.warc.gz | en | 0.962466 | 5,014 | 2.859375 | 3 |
Titus Livius or Livy (59 BCE - 17 CE): Roman historian, author of the authorized version of the history of the Roman republic.
A large part of Livy's History of Rome since the Foundation is now lost, but fortunately we have an excerpt, called the Periochae, which helps us reconstruct the general scope. This translation was made by Jona Lendering.
From Book 46
[46.1] King Eumenes [II Soter of Pergamon], who had taken an ambiguous stance during the Macedonian war, came to Rome.
[46.2] To prevent him appearing to be considered an enemy, if he was not permitted to enter, or acquitted, if he was admitted, a general law was passed that no king could be permitted to enter Rome.
[46.3] Consul Claudius Marcellus subdued the Alpine Gauls, consul Gaius Sulpicius Gallus the Ligurians.
[46.4] Envoys of king Prusias complained that Eumenes ravaged their territory and said that he conspired with Antiochus [IV Epiphanes] against the Roman people.
[46.5] At their request, an alliance was concluded with the Rhodians.
[46.6] [164 BCE] The censors performed the lustrum ceremony.
[46.7] 337,022 citizens were registered.
[46.8] The first man in the Senate was Marcus Aemilius Lepidus.
[46.9] When king Ptolemy [VI Philometor] was expelled from his kingdom by his younger brother [Ptolemy VIII Euergetes Physcon], envoys were sent to the latter, and the former was restored.
[46.10] When Ariarathes [IV Eusebes], king of Cappadocia, was dead, his son Ariarathes [V Philopator] accepted the kingdom and renewed the friendship with the Roman people through envoys.
[46.11] It [book 46] also contains an account of various battles with various outcomes against the Ligurians, Corsicans, and Lusitanians, and an account of the turmoil in Syria after the death of Antiochus [IV Epiphanes; 164], who left behind a son named Antiochus [V Eupator], a mere boy.
[46.12] Together with his tutor Lysias, this boy Antiochus was killed by Demetrius [I Soter], the son of Seleucus [IV Philopator], who had been a hostage at Rome, had secretly [fled] from Rome because he had not been released, and was accepted in this kingdom.
[46.13] Lucius Aemilius Paullus, who had defeated Perseus, died.
[46.14] Although he had brought back immense treasures from Hispania and Macedonia, his scrupulousness had been so great that when an auction was conducted, the dowry of his wife could hardly be repaid.
[46.15] The Pomptine marshes were drained by consul Cornelius Cethegus, to whom this task had been assigned, and converted into arable land.
From Book 47
[47.1] Praetor Gnaeus Tremellius was fined, because he had illegally opposed pontifex maximus Marcus Aemilius Lepidus. The claims of the religious authorities were stronger than those of the magistrates.
[47.2] A law against bribery was passed.
[47.3] The censors performed the lustrum ceremony.
[47.4] 328,316 citizens were registered.
[47.5] The first man in the Senate was Marcus Aemilius Lepidus.
[47.6] A treaty was negotiated between the two Ptolemaean brothers. One was to rule Egypt, the other Cyrene.
[47.7] King Ariarathes [V Philopator] of Cappadocia, who had been expelled from his kingdom on the initiative and with troops of king Demetrius [I Soter] of Syria, was restored by the Senate.
[47.8] A delegation was sent by the Senate to settle a border dispute between Massinissa and the Carthaginians.
[47.9] Consul Gaius Marcius [Figulus] fought against the Dalmatians, at first unsuccessfully, later with more luck.
[47.10] The reason for going to war was that they had attacked the Illyrians, allies of the Roman people; consul Cornelius Nasica subdued the Dalmatians.
[47.11] Consul Quintus Opimius subdued the Transalpine Ligurians, who had attacked two towns of the Massiliots, Antipolis and Nicaea.
[47.12] It [book 47] also contains an account of several unsuccessful campaigns in Hispania by various commanders.
[47.13] In the five hundred and ninety-eighth year after the founding of the city, the consuls began to enter upon their office on 1 January.
[47.14] The cause of this change in the date of the elections was a rebellion in Hispania.
[47.15] Envoys sent to negotiate between the Carthaginians and Massinissa said they had seen lots of timber in Carthage.
[47.16] Several praetors were charged with peculiation and condemned.
From Book 48
[48.1] [154 BCE] The censors performed the lustrum ceremony.
[48.2] 324,000 citizens were registered.
[48.3] The causes of the Third Punic War are described.
[48.4] It was said that a very large Numidian army, commanded by Arcobarzanes, son of Syphax, was on Carthaginian soil, and Marcus Porcius Cato argued that although this force was ostensibly directed against Massinissa, it was in fact against the Romans, and that consequently, war had to be declared.
[48.5] Publius Cornelius Nasica defended the opposite, and it was agreed that envoys were to be sent to Carthage, to see what was going on.
[48.6] They rebuked the Carthaginian Senate because it had, contrary to the treaty, collected an army and timber to build ships, and proposed to make peace between Carthage and Massinissa, because Masinissa was evacuating the contested piece of land.
[48.7] But Hamilcar's son Gesco, a riotous man who occupied an office, provoked the populace to wage war against the Romans, so that when the [Carthaginian] Senate announced it would comply with the Roman wishes, the envoys had to flee to escape violence.
[48.8] When they reported this, they made the [Roman] Senate, already hostile towards the Carthaginians, even more hostile.
[48.9] Marcus Porcius Cato gave his son, who had died during his praetorship, a cheap funeral according to his means (because he was poor).
[48.10] Andriscus, who pretended persistently that he was the son of Perseus, the former king of Macedonia, was sent to Rome.
[48.11] Before he died, Marcus Aemilius Lepidus, who had been chosen as first among the senators by six pairs of censors, ordered his sons that they should carry his bier to the pyre covered with linens without purple, and they were not to spend more than a million for the remainder: the imagesnote[Of his ancestors.] and not the expenditure should enhance the funerals of great men.
[48.12] There was an investigation of poisonings.
[48.13] The noble women Publilia and Licinia were accused of murdering their husbands, former consuls; after the hearing, they assigned real estate as bail to the praetor, but were executed by a decision of their relatives.
[48.14] Gulussa, the son of Massinissa, told that a levy was conducted in Carthage, a navy was being built, and that without any doubt, they were preparing for war.
[48.15] When Cato argued that war should be declared, and Publius Cornelius Nasica said that it was better to do nothing too fast, it was decided to send ten investigators.
[48.16] When consuls Lucius Licinius Lucullus and Aulus Postumius Albinus recruited their army with great strictness and favored no one with an exemption, they were imprisoned by the tribunes of the plebs, because they were unable to obtain exemptions for their friends.
[48.17] The Spanish War had been waged unsuccessfully and resulted in such a great confusion among the Roman citizens that no one wanted to go there as tribune or commander, but Publius Cornelius [Scipio] Aemilianus came forward and said he would accept any kind of military task to which he should be assigned.
[48.18] This example gave everyone an appetite for war.
[48.19] Although Claudius Marcellus appeared to have pacified all Celtiberian nations, his successor consul Lucullus subdued the Vaccaeans and Cantabrians and several other hitherto unknown nations in Hispania.
[48.20] Here, tribune Publius Cornelius Scipio Aemilianus, the son of Lucius [Aemilius] Paullus, and the grandson of [Publius Cornelius Scipio] Africanus (although by adoption), killed a barbarian challenger, and added an even greater danger when the town of Intercatia was stormed,
[48.21] because he was the first to climb the wall.
[48.22] Praetor Servius Sulpicius unsuccessfully fought against the Lusitanians.
[48.23] The envoys returned from Africa with Carthaginian ambassadors and Massinissa's son Gulussa, saying they had seen how an army and navy were built in Carthage, and it was decided to ask for opinions [of all senators]
[48.24] While Cato and other influential senators argued that an army should immediately be sent to Africa, Cornelius Nasica said that it still did not seem to be a justified war, and it was agreed to refrain from war if the Carthaginians would burn their ships and dismiss their army; if they did less, the next pair of consuls should put the Punic War on the agenda.
[48.25] When a theater, contracted for by the censors, was built, Publius Cornelius Nasica was the author of a senatorial decree that this building, which was so useless and dangerous for the public morals, should be destroyed; for some time, the people had to stand to watch theatrical performances.
[48.26] When the Carthaginians declared war upon Massinissa and broke the treaty, they were beaten by this man (who was ninety-two years old and accustomed to eat and enjoy dry bread without a relish) and incurred a war against the Romans.
[48.27] Itnote[Book 48.] also contains an account of the situation in Syria and the war waged between its kings
[48.28] In this turmoil, the Syrian king Demetrius [I Soter] was killed.
From Book 49
[49.1] The beginning of the Third Punic War was in the six hundred and second year after the founding of Rome, and came to an end five years after its beginning
[49.2] Between Marcus Porcius Cato and Scipio Nasica, of which the former was the most intelligent man in the city and the latter considered to be the best man in the Senate, was a debate of opposing opinions, in which Cato argued for and Nasica against war and the removal and sack of Carthage.
[49.3] It was decided to declare war on Carthage, because the Carthaginians had, contrary to the treaty, ships, because they had sent an army outside their territory, because they had waged war against Massinissa, an ally and friend of the Roman people, and because they had refused to receive in their city Massinissa's son Gulussa (who had been with the Roman envoys).
[49.4] Before any troops had boarded their ships, Utican envoys came to Rome, to surrender themselves and everything they owned.
[49.5] This embassy was received as a good omen by the senators, and as a bad omen in Carthage.
[49.6] The games of Dis Pater took place at the Tarentum, in accordance with the [Sibylline] Books. Similar festivities had taken place hundred year before, at the beginning of the First Punic War, in the five hundred and second year since the founding of the city.
[49.7] Thirty envoys came to Rome to surrender Carthage.
[49.8] Cato's opinion prevailed that the declaration of war was to be maintained and that the consuls, as had been agreed, would proceed to the front.
[49.9] When they had crossed into Africa, they received the three hundred hostages they had demanded and all the weapons and war engines that were in Carthage, and demanded on the authority of the Senate that the Carthaginians rebuilt their city on another site, which was to be no less than fifteen kilometers from the sea. These offensive demands forced the Carthaginians to war.
[49.10] The beginning of the siege and the attack of Carthage were organized by consuls Lucius Marcius [Censorinus] and Manius Manilius.
[49.11] During the siege, two tribunes rashly broke through a carelessly defended wall and suffered greatly from the inhabitants, but were relieved by Scipio Orfitianus [Africanus].
[49.12] With the help of a few cavalry, he also relieved a Roman fort that had been attacked by night, and he received the greatest glory from the liberation of Roman camps which the Carthaginians, sallying in full force from the city, vigorously attacked.
[49.13] Besides, when the consul (his colleague had returned to Rome for the elections) led the army against Hasdrubal (who had occupied with many troops an inaccessible pass), he convinced the consul first not to attack on this inaccessible place.
[49.14] However, the opinions of the others, who were jealous of his intelligence and valor, prevailed, and he entered the pass himself,
[49.15] and when - as he had predicted - the Roman army was defeated and routed and two subunits were besieged by the enemy, he returned with a few cavalry squadrons, relieved them, and brought them back unharmed.
[49.16] In the Senate, his valor was praised by even Cato, a man whose tongue was better suited for criticism, but now said that the others fighting in Africa were mere spirits, whereas Scipio was alive; and the Roman people received him with so much enthusiasm that most districts elected him as consul, although his age did not allow this.
[49.17] When Lucius Scribonius, a tribune of the plebs, proposed a law that the Lusitanians, who had surrendered to the Roman people but had been sold [into slavery] by Servius [Sulpicius] Galba in Gaul, would be liberated, Marcus Porcius Cato supported him energetically.
[49.18] (His speech still exists and is included in his Annals.)
[49.19] Quintus Fulvius Nobilior, who had often been assailed by Cato in the Senate, spoke for Galba; and Galba himself, seeing that he was about to be condemned, embracing his two young sons and the son of Sulpicius Gallus, whose guardian he was, spoke so pitiably in his own defense, that the case was abandoned.
[49.20] (Three of his speeches still exist: two against tribune Libo in the Lusitanian case, and one against Lucius Cornelius Cethegus, in which he admits that during a truce, he had massacred the Lusitanians near his camp because, as he explains, he had found out that they had sacrificed a man and a horse, which according to their custom meant that they were preparing an attack.)
[49.21] A certain Andriscus, a man of the lowest kind, pretending to be a son of king Perseus, changed his name into Philip, and secretly fled from the city of Rome, to which king Demetrius [I Soter] of Syria had sent him, precisely because of this lie; many people were attracted by his false story (as if it were true), he gathered an army and occupied all of Macedonia, whether the people wanted it or not.
[49.22] He told the following story: born as the son of king Perseus and a courtesan, he had been handed over for education to a certain Cretan, so that, in this situation of war against the Romans, some scion of the royal stock would survive.
[49.23] Without knowledge of his family and believing that the man who taught him was his father, he had been educated at Adramyttion until he was twelve years old.
[49.24] When this man fell ill and was close to the end of his life, he finally told Andriscus about his origin and gave his "mother" a writing that had been sealed by king Perseus, which she should give the boy when he reached maturity, and the teacher added that everything had to be kept secret until that moment.
[49.25] When he reached maturity, Andriscus received the writing, from which he learned that his father had left him two treasures.
[49.26] Until then he had only known that he was a foster son and had been unaware about his real ancestry; now his foster mother told him about his lineage and begged him to avoid being assassinated by departing from the country before the news reached [king] Eumenes [II Soter of Pergamon], an enemy of Perseus.
[49.27] Frightened and hoping to obtain assistance from Demetrius, he went to Syria, where he had declared for the first time who he was.
From Book 50
[50.1] Thessaly, which the false Philip wanted to invade and occupy with his armies, was defended by Roman envoys and Achaean allies.
[50.2] King Prusias [II the Hunter] of Bithynia, a man full of the lowest moral defects, was killed by his son Nicomedes [Epiphanes], who received help from king Attalus [II] of Pergamon, but had a second son (who is said to have had one single bone growing in place of his upper teeth).
[50.3] When the Romans sent three envoys to negotiate peace between Nicomedes and Prusias, of which the first had many scars on his head, the second was gouty, and the third was considered to have a slow mind, Marcus [Porcius] Cato said that this was embassy without head, feet, and brains.
[50.4] In Syria, which had until then had a king [Alexander I Balas] who was equal to that of Macedonia in ancestry but to Prusias in laziness and slowness, and who took his ease in kitchens and brothels, Hammonius ruled, and he murdered all friends of the king, and queen Laodice, and Demetrius' son Antigonus.
[50.5] More than ninety years old, king Massinissa of Numidia died, a remarkable man.
[50.6] He was so vigorous that among the other youthful exploits that he performed during his final years, he was still sexually active and begot a son when he was eighty-six.
[50.7] He left his undivided kingdom to his three sons (Micipsa the eldest, Gulussa, and Mastanabal, who was well-versed in Greek literature), and ordered them to divide it according to the instructions of Publius [Cornelius] Scipio Aemilianus, who accordingly assigned the part of the kingdom they were to rule.
[50.8] The same man persuaded Phameas Himilco, the commander of the Carthaginian cavalry and a man of valor who was important to the Carthaginians, to join the Romans with his squadron.
[50.9] From the three envoys that were sent to Massinissa, Marcus Claudius Marcellus drowned during a tempest at sea.
[50.10] In their Senate room, the Carthaginians killed Hasdrubal, a grandson of Massinissa who served them as general, because they believed he was a traitor. Their suspicion was based on his relation to the Romans' ally Gulussa.
[50.11] When Publius [Cornelius] Scipio Aemilianus ran for aedile, he was elected consul by the people.
[50.12] Because he could not lawfully be made consul as he was under age, there was a big struggle between the people, who campaigned for him, and the senators, who resisted him for some time, but eventually the law was repealed and he was made consul.
[50.13] Manius Manilius stormed several cities in the neighborhood of Carthage.
[50.14] After the false Philip had massacred praetor Publius Juventius with his army in Macedonia, he was defeated and captured by Quintus Caecilius, and Macedonia was subdued again. | <urn:uuid:8a903a27-8200-463e-9eb3-62a3105dc41a> | CC-MAIN-2019-47 | https://www.livius.org/sources/content/livy/livy-periochae-46-50/ | s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496670448.67/warc/CC-MAIN-20191120033221-20191120061221-00457.warc.gz | en | 0.987906 | 4,666 | 2.90625 | 3 |
Who Invented the Shopping Mall?
Modern shopping malls are so common that we forget they’ve only been around for about a half century. Here’s the story of how they came to be…and the story of the man who invented them, Victor Gruen—the most famous architect you’ve never heard of.
In the winter of 1948, an architect named Victor Gruen got stranded in Detroit, Michigan, after his flight was cancelled due to a storm. Gruen made his living designing department stores, and rather than sit in the airport or in a hotel room, he paid a visit to Detroit’s landmark Hudson’s department store and asked the store’s architect to show him around. The Hudson’s building was nice enough; the company prided itself on being one of the finest department stores in the entire Midwest. But downtown Detroit itself was pretty run-down, which was not unusual for an American city in that era. World War I (1914–18), followed by the Great Depression and then World War II (1939–45), had disrupted the economic life of the country, and decades of neglect of downtown areas had taken their toll.
The suburbs were even shabbier, as Gruen saw when he took a ride in the country and drove past ugly retail and commercial developments that seemed to blight every town.
The combination of dirt-cheap land, lax zoning laws, and rampant real estate speculation had spawned an era of unregulated and shoddy commercial development in the suburbs. Speculators threw up cheap, (supposedly) temporary buildings derisively known as “taxpayers” because the crummy eyesores barely rented for enough money to cover the property taxes on the lot. That was their purpose: Land speculators were only interested in covering their costs until the property rose in value and could be unloaded for a profit. Then the new owner could tear down the taxpayer and build something more substantial on the lot. But if the proliferation of crumbling storefronts, gas stations, diners, and fleabag hotels were any guide, few taxpayers were ever torn down.
The unchecked growth in the suburbs was a problem for downtown department stores like Hudson’s, because their customers were moving there, too. Buying a house in suburbia was cheaper than renting an apartment downtown, and thanks to the G.I. Bill, World War II veterans could buy them with no money down.
Once these folks moved out to the suburbs, few of them wanted to return to the city to do their shopping. The smaller stores in suburban retail strips left a lot to be desired, but they were closer to home and parking was much easier than downtown, where a shopper might circle the block for a half hour or more before a parking space on the street finally opened up.
Stores like Hudson’s had made the situation worse by using their substantial political clout to block other department stores from building downtown. Newcomers such as Sears and J. C. Penney had been forced to build their stores in less desirable locations outside the city, but this disadvantage turned into an advantage when the migration to the suburbs began.
As he drove through the suburbs, Gruen envisioned a day when suburban retailers would completely surround the downtown department stores and drive them out of business.
When Gruen returned home to New York City, he wrote a letter to the president of Hudson’s explaining that if the customers were moving out to the suburbs, Hudson’s should as well. For years Hudson’s had resisted opening branch stores outside the city. It had an image of exclusivity to protect, and opening stores in seedy commercial strips was no way to do that. But it was clear that something had to be done, and as Hudson’s president, Oscar Webber, read Gruen’s letter, he realized that here was a man who might be able to help. He offered Gruen a job as a real estate consultant, and soon Gruen was back driving around Detroit suburbs looking for a commercial strip worthy of the Hudson’s name.
The only problem: There weren’t any. Every retail development Gruen looked at was flawed in one way or another. Either it was too tacky even to be considered, or it was too close to downtown and risked stealing sales from the flagship store. Gruen recommended that the company develop a commercial property of its own. Doing so, he argued, offered a lot of advantages: Hudson’s wouldn’t have to rely on a disinterested landlord to maintain the property in keeping with Hudson’s image. And because Gruen proposed building an entire shopping center, one that would include other tenants, Hudson’s would be able to pick and choose which businesses moved in nearby.
Furthermore, by building a shopping center, Hudson’s would diversify its business beyond retailing into real estate development and commercial property management. And there was a bonus, Gruen argued: By concentrating a large number of stores in a single development, the shopping center would prevent ugly suburban sprawl. The competition that a well-designed, well-run shopping center presented, he reasoned, would discourage other businesses from locating nearby, helping to preserve open spaces in the process.
FOUR OF A KIND
Oscar Webber was impressed enough with Gruen’s proposal that he hired the architect to create a 20-year plan for the company’s growth. Gruen spent the next three weeks sneaking around the Detroit suburbs collecting data for his plan. Then he used the information to write up a proposal that called for developing not one but four shopping centers, to be named Northland, Eastland, Southland, and Westland Centers, each in a different suburb of Detroit. Gruen recommended that the company locate its shopping centers on the outer fringes of existing suburbs, where the land was cheapest and the potential for growth was greatest as the suburbs continued to expand out from downtown Detroit.
Hudson’s approved the plans and quietly began buying up land for the shopping centers. It hired Gruen to design them, even though he’d only designed two shopping centers before and neither was actually built. On June 4, 1950, Hudson’s announced its plan to build Eastland Center, the first of the four projects scheduled for development.
Three weeks later, on June 25, 1950, the North Korean People’s Army rolled across the 38th parallel that served as the border between North and South Korea. The Korean War had begun.
Though Victor Gruen is credited with being the “father of the mall,” he owes a lot to the North Korean Communists for helping him get his temples of consumerism off the ground. He owes the Commies (and so do you, if you like going to the mall) because as Gruen himself would later admit, his earliest design for the proposed Eastland Center was terrible. Had the Korean War not put the brakes on all nonessential construction projects, Eastland might have been built as Gruen originally designed it, before he could develop his ideas further.
Those early plans called for a jumble of nine detached buildings organized around a big oval parking lot. The parking lot was split in two by a sunken four-lane roadway, and if pedestrians wanted to cross from one half of the shopping center to the other, the only way to get over the moat-like roadway was by means of a scrawny footbridge that was 300 feet long. How many shoppers would even have bothered to cross over to the other side?
Had Eastland Center been built according to Gruen’s early plans, it almost certainly would have been a financial disaster. Even if it didn’t bankrupt Hudson’s, it probably would have forced the company to scrap its plans for Northland, Westland, and Southland Centers. Other developers would have taken note, and the shopping mall as we know it might never have come to be.
ALL IN A ROW
Shopping centers the size of Eastland Center were such a new concept that no architect had figured out how to build them well. Until then, most shopping centers consisted of a small number of stores in a single strip facing the street, set back far enough to allow room for parking spaces in front of the stores. Some larger developments had two parallel strips of stores, with the storefronts facing inward toward each other across an area of landscaped grass called a “mall.” That’s how shopping malls got their name.
There had been a few attempts to build even larger shopping centers, but nearly all had lost money. In 1951 a development called Shoppers’ World opened outside of Boston. It had more than 40 stores on two levels and was anchored by a department store at the south end of the mall. But the smaller stores had struggled from the day the shopping center opened, and when they failed they took the entire shopping center (and the developer, who filed for bankruptcy) down with them.
Gruen needed more time to think through his ideas, and when the Korean War pushed the Eastland project off into the indefinite future, he got it. Hudson’s eventually decided to build Northland first, and by the time Gruen started working on those plans in 1951, his thoughts on what a shopping center should look like had changed completely. The question of where to put all the parking spaces (Northland would have more than 8,000) was one problem. Gruen eventually decided that it made more sense to put the parking spaces around the shopping center, instead of putting the shopping center around the parking spaces, as his original plans for Eastland Center had called for.
WALK THIS WAY
Gruen then put the Hudson’s department store right in the middle of the development, surrounded on three sides by the smaller stores that made up the rest of the shopping center. Out beyond these smaller stores was the parking lot, which meant that the only way to get from the parking lot to Hudson’s—the shopping center’s biggest draw—was by walking past the smaller shops.
This may not sound like a very important detail, but it turned out to be key to the mall’s success. Forcing all that foot traffic past the smaller shops—increasing their business in the process—was the thing that made the small stores financially viable. Northland Center was going to have nearly 100 small stores; they all needed to be successful for the shopping center itself to succeed.
Northland was an outdoor shopping center, with nearly everything a modern enclosed mall has…except the roof. Another feature that set it apart from other shopping centers of the era, besides its layout, its massive scale, and the large number of stores in the development, was its bustling public spaces between the rows of stores. In the past, developers who had incorporated grassy malls into their shopping centers did so with the intention of giving the projects a rural, almost sleepy feel, similar to a village green.
Gruen, a native of Vienna, Austria, thought just the opposite was needed. He wanted his public spaces to blend with the shops to create a lively (and admittedly idealized) urban feel, just like he remembered from downtown Vienna, with its busy outdoor cafés and shops. He divided the spaces between Hudson’s and the other stores into separate and very distinct areas, giving them names like Peacock Terrace, Great Lakes Court, and Community Lane. He filled them with landscaping, fountains, artwork, covered walkways, and plenty of park benches to encourage people to put the spaces to use.
If Northland Center were to open its doors today, it would be remarkably unremarkable. There are dozens, if not hundreds, of similarly-sized malls all over the United States. But when Northland opened in the spring of 1954, it was one-of-a-kind, easily the largest shopping center on Earth, both in terms of square footage and the number of stores in the facility. The Wall Street Journal dispatched a reporter to cover the grand opening. So did Time and Newsweek, and many other newspapers and magazines. In the first weeks that the Northland Center was open, an estimated 40,000-50,000 people passed through its doors each day.
DON’T LOOK NOW
It was an impressive start, but Hudson’s executives still worried. Did all these people really come to shop, or just look around? Would they ever be back? No one knew for sure if the public would even feel comfortable in such a huge facility. People were used to shopping in one store, not having to choose from nearly 100. And there was a very real fear that for many shoppers, finding their way back to their car in the largest parking lot they had ever parked in would be too great a strain and they’d never come back. Even worse, what if Northland Center was too good? What if the public enjoyed the public spaces so much that they never bothered to go inside the stores? With a price tag of nearly $25 million, the equivalent of more than $200 million today, Northland Center was one of the most expensive retail developments in history, and nobody even knew if it would work.
Whatever fears the Hudson’s executives had about making back their $25 million investment evaporated when their own store’s sales exceeded forecasts by 30 percent. The numbers for the smaller stores were good, too, and they stayed good month after month. In its first year in business Northland Center grossed $88 million, making it one of the most profitable shopping centers in the United States. And all of the press coverage generated by the construction of Northland Center made Gruen’s reputation. Before the center was even finished, he received the commission of a lifetime: Dayton’s department store hired him to design not just the world’s first enclosed shopping mall but an entire planned community around it, on a giant 463-acre plot in a suburb of Minneapolis.
Southdale Center, the mall that Victor Gruen designed for Dayton’s department store in the town of Edina, Minnesota, outside of Minneapolis, was only his second shopping center. But it was the very first fully enclosed, climate-controlled shopping mall in history, and it had many of the features that are still found in modern malls today.
It was “anchored” by two major department stores, Dayton’s and Donaldson’s, which were located at opposite ends of the mall in order to generate foot traffic past the smaller shops in between. Southdale also had a giant interior atrium called the “Garden Court of Perpetual Spring” in the center of the mall. The atrium was as long as a city block and had a soaring ceiling that was five stories tall at its highest point.
Just as he had with the public spaces at Northland, Gruen intended the garden court to be a bustling space with an idealized downtown feel. He filled it with sculptures, murals, a newsstand, a tobacconist, and a Woolworth’s “sidewalk” café. Skylights in the ceiling of the atrium flooded the garden court with natural light; crisscrossing escalators and second story skybridges helped create an atmosphere of continuous movement while also attracting shoppers’ attention to the stores on the second level.
The mall was climate controlled to keep it at a constant spring-like temperature (hence the “perpetual spring” theme) that would keep people shopping all year round. In the past shopping had always been a seasonal activity in harsh climates like Minnesota’s, where frigid winters could keep shoppers away from stores for months. Not so at Southdale, and Gruen emphasized the point by filling the garden court with orchids and other tropical plants, a 42-foot-tall eucalyptus tree, a goldfish pond, and a giant aviary filled with exotic birds. Such things were rare sights indeed in icy Minnesota, and they gave people one more reason to go to the mall.
With 10 acres of shopping surrounded by 70 acres of parking, Southdale was a huge development in its day. Even so, it was intended as merely a retail hub for a much larger planned community, spread out over the 463-acre plot acquired by Dayton’s. Just as the Dayton’s and Donaldson’s department stores served as anchors for the Southdale mall, the mall itself would one day serve as the retail anchor for this much larger development, which as Gruen designed it, would include apartment buildings, single-family homes, schools, office buildings, a hospital, landscaped parks with walking paths, and a lake.
The development was Victor Gruen’s response to the ugly, chaotic suburban sprawl that he had detested since his first visit to Michigan back in 1948. He intended it as a brand-new downtown for the suburb, carefully designed to eliminate sprawl while also solving the problems that poor or nonexistent planning had brought to traditional urban centers like Minneapolis. Such places had evolved gradually and haphazardly over many generations instead of following a single, carefully thought-out master plan.
The idea was to build the Southdale Center mall first. Then, if it was a success, Dayton’s would use the profits to develop the rest of the 463 acres in accordance with Gruen’s plan. And Southdale was a success: Though Dayton’s downtown flagship store did lose some business to the mall when it opened in the fall of 1956, the company’s overall sales rose 60 percent, and the other stores in the mall also flourished.
But the profits generated by the mall were never used to bring the rest of Gruen’s plan to fruition. Ironically, it was the very success of the mall that doomed the rest of the plan.
LOCATION, LOCATION, LOCATION
Back before the first malls had been built, Gruen and others had assumed that they would cause surrounding land values to drop, or at least not rise very much, on the theory that commercial developers would shy away from building other stores close to such a formidable competitor as a thriving shopping mall. The economic might of the mall, they reasoned, would help to preserve nearby open spaces by making them unsuitable for further commercial development.
But the opposite turned out to be the case. Because shopping malls attracted so much traffic, it soon became clear that it made sense to build other developments nearby. Result: The once dirt-cheap real estate around Southdale began to climb rapidly in value. As it did, Dayton’s executives realized they could make a lot of money selling off their remaining parcels of land—much more quickly, with much less risk—than they could by gradually implementing Gruen’s master plan over many years.
From the beginning Gruen had seen the mall as a solution to sprawl, something that would preserve open spaces, not destroy them. But his “solution” had only made the problem worse—malls turned out to be sprawl magnets, not sprawl killers. Any remaining doubts Gruen had were dispelled in the mid-1960s when he made his first visit to Northland Center since its opening a decade earlier. He was stunned by the number of seedy strip malls and other commercial developments that had grown up right around it.
REVERSAL OF FORTUNE
Victor Gruen, the father of the shopping mall, became one of its most outspoken critics. He tried to remake himself as an urban planner, marketing his services to American cities that wanted to make their downtown areas more mall-like, in order to recapture some of the business lost to malls. He drew up massive, ambitious, and very costly plans for remaking Fort Worth, Rochester, Manhattan, Kalamazoo, and even the Iranian capital city of Tehran. Most of his plans called for banning cars from city centers, confining them to ring roads and giant parking structures circling downtown. Unused roadways and parking spaces in the center would then be redeveloped into parks, walkways, outdoor cafés, and other uses. It’s doubtful that any of these pie-in-the-sky projects were ever really politically or financially viable, and none of them made it off the drawing board.
In 1968 Gruen closed his architectural practice and moved back to Vienna…where he discovered that the once thriving downtown shops and cafés, which had inspired him to invent the shopping mall in the first place, were now themselves threatened by a new shopping mall that had opened outside the city.
He spent the remaining years of his life writing articles and giving speeches condemning shopping malls as “gigantic shopping machines” and ugly “land-wasting seas of parking.” He attacked developers for shrinking the public, non-profit-generating spaces to a bare minimum. “I refuse to pay alimony for these bastard developments,” Gruen told a London audience in 1978, in a speech titled “The Sad Story of Shopping Centers.”
Gruen called on the public to oppose the construction of new malls in their communities, but his efforts were largely in vain. At the time of his death in 1980, the United States was in the middle of a 20-year building boom that would see more than 1,000 shopping malls added to the American landscape. And were they ever popular: According to a survey by U.S. News and World Report, by the early 1970s, Americans spent more time at the mall than anyplace else except for home and work.
Today Victor Gruen is largely a forgotten man, known primarily to architectural historians. That may not be such a bad thing, considering how much he came to despise the creation that gives him his claim to fame.
Gruen does live on, however, in the term “Gruen transfer,” which mall designers use to refer to the moment of disorientation that shoppers who have come to the mall to buy a particular item can experience upon entering the building—the moment in which they are distracted into forgetting their errand and instead begin wandering the mall with glazed eyes and a slowed, almost shuffling gait, impulsively buying any merchandise that strikes their fancy.
Victor Gruen may well be considered the “father of the mall,” but he didn’t remain a doting dad for long. Southdale Center, the world’s first enclosed shopping mall, opened its doors in the fall of 1956, and by 1968 Gruen had turned publicly and vehemently against his creation.
So it would fall to other early mall builders, people such as A. Alfred Taubman, Melvin Simon, and Edward J. DeBartolo Sr., to give the shopping mall its modern, standardized form, by taking what they understood about human nature and applying it to Gruen’s original concept. In the process, they fine-tuned the mall into the highly effective, super-efficient “shopping machines” that have dominated American retailing for nearly half a century.
BACK TO BASICS
These developers saw shopping malls the same way that Gruen did, as idealized versions of downtown shopping districts. Working from that starting point, they set about systematically removing all distractions, annoyances, and other barriers to consumption. Your local mall may not contain all of the following features, but there should be much here that looks familiar:
- It’s a truism among mall developers that most shoppers will only walk about three city blocks—about 1,000 feet—before they begin to feel a need to head back to where they’d started. So 1,000 feet became a standard length for malls.
- Most of the stairs, escalators, and elevators are located at the ends of the mall, not in the center. This is done to encourage shoppers to walk past all the stores on the level they’re on before visiting shops on another level.
- Malls are usually built with shops on two levels, not one or three. This way, if a shopper walks the length of the mall on one level to get to the escalator, then walks the length of the mall on the second level to return to where they started, they’ve walked past every store in the mall and are back where they parked their car. (If there was a third level of shops, a shopper who walked all three levels would finish up at the opposite end of the mall, three city blocks away from where they parked.)
- Another truism among mall developers is that people, like water, tend to flow down more easily than they flow up. Because of this, many malls are designed to encourage people to park and enter the mall on the upper level, not the lower level, on the theory they are more likely to travel down to visit stores on a lower level than travel up to visit stores on a higher level.
THE VISION THING
- Great big openings are designed into the floor that separates the upper level of shops from the lower level. That allows shoppers to see stores on both levels from wherever they happen to be in the mall. The handrails that protect shoppers from falling into the openings are made of glass or otherwise designed so that they don’t obstruct the sight lines to those stores.
- Does your mall’s decor seem dull to you? That’s no accident—the interior of the mall is designed to be aesthetically pleasing but not particularly interesting, so as not to distract shoppers from looking at the merchandise, which is much more important.
- Skylights flood the interiors of malls with natural light, but these skylights are invariably recessed in deep wells to keep direct sunlight from reflecting off of storefront glass, which would create glare and distract shoppers from looking at the merchandise. The wells also contain artificial lighting that comes on late in the day when the natural light begins to fade, to prevent shoppers from receiving a visual cue that it’s time to go home.
A NEIGHBORLY APPROACH
- Great attention is paid to the placement of stores within the mall, something that mall managers refer to as “adjacencies.” The price of merchandise, as well as the type, factors into this equation: There’s not much point in placing a store that sells $200 silk ties next to one that sells $99 men’s suits.
- Likewise, any stores that give off strong smells, like restaurants and hair salons, are kept away from jewelry and other high-end stores. (Would you want to smell cheeseburgers or fried fish while you and your fiancé are picking out your wedding rings?)
- Have you ever bought milk, raw meat, or a gallon of ice cream at the mall? Probably not, and there’s a good reason for it: Malls generally do not lease space to stores that sell perishable goods, because once you buy them you have to take them right home, instead of spending more time shopping at the mall.
- Consumer tastes change over time, and mall operators worry about falling out of fashion with shoppers. Because of this, they keep a close watch on individual store sales. Even if a store in the mall is profitable, if it falls below its “tenant profile,” or average sales per square foot of other stores in the same retail category, the mall operator may refuse to renew its lease. Tenant turnover at a well-managed mall can run as high as 10 percent a year.
HERE, THERE, EVERYWHERE
Malls have been a part of the American landscape for so long now that a little “mall fatigue” is certainly understandable. But like so much of American culture, the concept has been exported to foreign countries, and malls remain very popular around the world, where they are built not just in the suburbs but in urban centers as well. They have achieved the sort of iconic status once reserved for airports, skyscrapers, and large government buildings. They are the kind of buildings created by emerging societies to communicate to the rest of the world, “We have arrived.” If you climb into a taxicab in almost any major city in the world, be it Moscow, Kuala Lumpur, Dubai, or Shanghai, and tell the driver, “Take me to the mall,” he’ll know where to go.
KILL ’EM MALL
America’s love-hate relationship with shopping malls is now more than half a century old, and for as long as it has been fashionable to see malls as unfashionable, people have been predicting their demise. In the 1970s, “category killers” were seen as a threat. Stand-alone stores like Toys “R” Us focused on a single category of goods, offering a greater selection at lower prices than even the biggest stores in the mall could match. They were soon followed by “power centers,” strip malls anchored by “big box” stores like Walmart, and discount warehouse stores like Costco and Sam’s Club. In the early 1990s, TV shopping posed a threat, only to fizzle out…and be replaced by even stronger competition posed by Internet retailers like Amazon.
By the early 1990s, construction of new malls in the United States had slowed to a crawl, but this had less to do with competition than with rising real estate prices (land in the suburbs wasn’t dirt cheap anymore), the savings and loan crisis (which made construction financing harder to come by), and the fact that most communities that wanted a mall already had one…or two…or more.
Increasing competition from other retailers and bad economic times in recent years have also taken their toll, resulting in declining sales per square foot and rising vacancy rates in malls all over the country. In 2009 General Growth Properties, the nation’s second-largest mall operator, filed for bankruptcy; it was the largest real estate bankruptcy in American history.
But mall builders and operators keep fighting back, continually reinventing themselves as they try to keep pace with the times. Open-air malls are remade into enclosed malls, and enclosed malls are opened to the fresh air. One strategy tried in Kansas, Georgia, and other areas is to incorporate shopping centers into larger mixed-use developments that include rental apartments, condominiums, office buildings, and other offerings. Legacy Town Center, a 150-acre development in the middle of a 2,700-acre business park north of Dallas, for example, includes 80 outdoor shops and restaurants, 1,500 apartments and townhouses, two office towers, a Marriott Hotel, a landscaped park with hiking trails, and a lake. (Sound familiar?)
In other words, developers are trying to save the mall by finally building malls just the way that Victor Gruen wanted to build them in the first place.
This article is reprinted with permission from Uncle John’s Heavy Duty Bathroom Reader.
The woman known to history as Bess of Hardwick was born Elizabeth Hardwick in 1527, one of five daughters in a family of minor gentry. For perspective, this was about six years before the birth of Queen Elizabeth I, making the two women contemporaries. During this particularly chaotic time in English history, those who sought power had to make careful choices about whom to support, knowing that men like Cardinal Wolsey and Thomas Cromwell could be favourites one day and executed for treason the next. For anyone to thrive in this culture spoke to their cleverness, resiliency, and good luck. And for a woman like Bess to succeed – without royal pedigree, without powerful allies – demonstrates the depth of ambition, intelligence, and ruthlessness she must have possessed.
This isn’t a story about a woman victimized by men who winds up executed. Bess of Hardwick’s story is one of the resiliency of a self-made woman in a time when she was set up by societal conventions to fail. This is a story with a happy ending.
Bess and her family lived in a modest manor in Hardwick, their ancestral land in the county of Derbyshire. Her father died when she was very young, leaving a very small dowry to be split among Bess and her sisters. With her brother set to inherit the Hardwick lands, it was up to the girls to make themselves appealing matches for prospective husbands. With that in mind, Bess was sent at age 12 to serve in the household of her distant relation Lady Zouche at nearby Codnor Castle. The purpose of this sort of appointment was to allow young men and women from less-notable families the opportunity to meet influential people in order to improve their stations. Lady Zouche was in service to Queen Jane Seymour at the time that Bess came into service for her. This likely meant that Bess got to travel with her mistress to and from the royal court of Henry VIII, giving her an insider’s view of the turmoil and intrigue that went on there.
While in service to Lady Zouche, Bess made the acquaintance of teen aristocrat Robert Barley. The pair married in 1543 when she was about sixteen years old, and Barley died about a year later. As his widow, she should have been entitled to a portion of his family’s estate but the Barley family initially refused to provide it to her. Bess pursued the matter in a series of court battles and finally was awarded about thirty pounds a year as her widow’s dower. She was still not wealthy, but much better off than she had been as a child of the Hardwick household. With these funds, and her experience with Lady Zouche, Bess sought to elevate her position yet higher.
In 1545, she was placed in a position in the household of Lady Frances Grey, the Marchioness of Dorset. Frances was the daughter of Henry VIII’s sister, Mary Tudor, and was the mother of three girls: Lady Katherine, Lady Mary, and Lady Jane Grey. These girls were all about ten years younger than Bess, and they seem to have become good friends, with Bess acting like a cool older-sister figure to them. Due to the Grey family’s connection to Henry VIII, Bess would again have gotten the opportunity to spend time at royal court. During her time in this household, she met a man named Sir William Cavendish, who would become her second husband.
Cavendish was a definite catch for Bess. He was the Treasurer of the King’s Chamber, a highly influential role in Henry VIII’s court, had been widowed twice before, and had two daughters who were about Bess’s age (Cavendish was about 42 years old to Bess’s 20). His rank meant that Bess was now given the title of Lady Cavendish, and his wealth meant that she was now able to entirely change her lifestyle. They were married in 1547, a few years after Henry VIII had dissolved the monasteries as part of the Protestant Reformation. This meant that all of the highly valuable land previously owned by religious orders could now be taken over by rich people in search of land, and Cavendish and Bess were powerful enough that they got first pick of which property they wanted. It had to have been at Bess’s suggestion that they wound up claiming property in Derbyshire near where she’d grown up. You know Bess just loved knowing that all the people she grew up with could see her being rich and fancy. They began to build a home that’s still around today to visit, called Chatsworth House.
Bess gave birth to eight children during her time with Cavendish, six of whom survived infancy. Her older husband died in 1557, leaving the now 30-year-old Bess once more a widow. While Cavendish had lived a lavish lifestyle and owned property, he had died with considerable debt to the crown for which Bess was now responsible. During this time, the country entered a state of newfound chaos as the crown passed quickly from the boy king Edward VI to Bess’s former friend Lady Jane Grey and finally to Queen Mary I. Mary I died shortly after Bess had been widowed, and Elizabeth I was crowned to widespread concern and uncertainty. Bess knew she had to find a new husband, someone even richer than Cavendish, to help her pay off her debts and to help raise her star even higher among the other courtiers. She looked to the court of Elizabeth I and decided her best option was a guy named Sir William St. Loe.
Just to make it very clear, each of Bess’s husbands was more politically powerful and wealthy than the previous, and St. Loe was truly a catch for her: he was Elizabeth I’s Captain of the Guard and Chief Butler of England, which was apparently a very prestigious thing to be. Like Cavendish, he owned lots of prestigious estates, so Bess was able to continue her new passion for building and flipping castles (but not selling them to anyone; she kept them all for herself). And like Cavendish, he had daughters from a previous relationship but unlike most other people, he had an extremely problematic brother. St. Loe’s brother, Edward, wanted nothing more than to inherit all of the St. Loe estates and money. He hoped that since no sons had been born to his brother, then he would be the main heir. So he saw Bess as a threat, because she was still young enough (32) to have a son, and if there was a new Baby St. Loe, Edward wouldn’t inherit anything. And so… he decided to kill her.
So, shortly after hosting a visit by Edward and his maybe-also-evil wife Margaret, Bess became very very very ill. Everyone immediately assumed that Edward had poisoned her: St. Loe believed it to be true, as did his mother. Bess’s condition improved and she didn’t die thank goodness, and an investigation found that Edward had been working with a necromancer! But nobody was sent to jail because Bess hadn’t died and really, people back then fell ill for a number of hygiene-related reasons, so they all decided to just move on. Except for Edward, who was still determined to cut Bess out of St. Loe’s will and make himself the only heir. He brought his brother to court, and the verdict there was that Edward’s wife Margaret would inherit the manor known as Sutton Court as long as she lived. St. Loe and Bess had his will rewritten so that Bess would inherit all of his money, leaving nothing for Edward.
During this time period, Bess was also spending lots of time at the royal court as she’d been appointed Lady of the Bedchamber by Queen Elizabeth I. This was a hugely important role, as it meant she had access to the Queen on a daily basis — all the better to know what was going on, and to lobby for her personal interests. Bess was older than most of the other ladies in waiting, who included her former friend Lady Katherine Grey. In 1561, Katherine became involved in a scandal and turned to her childhood friend Bess for advice. The thing is that Katherine — who, because of her royal grandmother, was a potential heir to the throne — had secretly fallen in love with and married the Earl of Hertford (who was himself also an heir to the throne). Now, ever since the whole Lady Margaret Douglas-Thomas Howard scenario of a few decades earlier, the rule was that any heir to the throne had to get royal permission to get married, and if they skipped that step, it was treason. So Katherine was in trouble because she’d not only gotten married without permission but was also now eight months pregnant and didn’t know what to do.
Bess was like, “Get out of here, Lady Katherine Grey, I want nothing to do with your drama,” because she knew this was not going to end well. But Elizabeth found out that Bess knew about it, and charged her as an accomplice. Bess spent 31 weeks in the Tower of London being questioned, but left there still on such good terms with the Queen that she received a royal gift for New Year’s and Elizabeth agreed to waive all of the debt Bess still owed on behalf of her second husband, Cavendish. Bess spent much of her time overseeing the construction at Chatsworth House while St. Loe hung out being a butler etc. at royal court. But in 1564, Bess was called back to London because her husband had become very ill.
St. Loe had already died by the time Bess got there, and she was fairly certain he’d been poisoned by his horrible brother, Edward. But Edward didn’t know that the will had already been changed, cutting him out of inheritance altogether. St. Loe’s daughters were unhappy not to inherit anything from their father (remember, the property went to Edward’s wife and the money went to Bess). It wasn’t a good look on Bess, who looked a bit like a black widow/golddigger, and Edward took her to court to contest the terms of the will. Nothing changed, though; Margaret St. Loe still got the house, and Bess still got the money.
One assumes that, glad to have all that behind her, Bess headed off to hang out in Derbyshire and oversee yet more construction projects, because she was now a real estate/castle-building maven. She was wealthy and influential and could have chosen to live out the rest of her life as a widow, but that was not Bess’s style. She returned to royal court in 1566 and everyone started gossiping about if and when she’d take a new husband. Bess was so rich she could have her pick of anyone, and a year later it was announced that she had become engaged, with royal permission, to George Talbot, the Earl of Shrewsbury. Talbot was the richest man in England, and marrying him meant Bess now had the title of Countess of Shrewsbury.
George was already the father of seven children from a previous marriage, and was basically her equal in terms of income and influence. Part of their marital arrangement also included the marriage of four of their children to one another: Bess’s teenage daughter Mary was married to George’s teenage son Gilbert, and Bess’s teenage son Henry was married to George’s young daughter Grace. There were caveats listed in these arrangements that if any of the children died before their marriages were consummated (given the young age of Mary and Grace, that wouldn’t happen for a few years), the marriages would then be moved onto the next younger sibling in each family. I mean, I guess that’s just how things were done back then?
So things were going great! With her new title came new lands and money, and Bess set about building even more amazing palaces. From the letters that survive, she and George clearly adored one another. Theirs was a marriage they’d both chosen to enter into because they were both so rich nobody (other than the Queen) could tell them what to do, so they were equally rich, equally powerful, and made a Tudor-era supercouple. What could possibly go wrong?? Well…
Into this extremely rich and marriage-focused family came Hurricane Mary, Queen of Scots. If you’ve forgotten her whole deal, the short version is this: Mary QofS was a cousin of Elizabeth I, and a lot of people thought she should be the queen of England instead of Elizabeth. Mary’s plans to take over England were derailed by two truly horrible husbands, one of whom she’d (allegedly) helped conspire to blow up before running off with the second one. She was now on the run from a Scottish jail, because the Scots didn’t like her blowing-up-husbands habit and had made her abdicate as Queen of Scotland. So she wound up in England, hoping her cousin/rival Elizabeth would help her out (despite Mary having actively tried to remove Elizabeth from power like five minutes ago). Elizabeth wasn’t sure what to do with her for obvious reasons; Mary’s motivations seemed shady as hell. But Elizabeth couldn’t just execute her, because if she did, then others would think it was OK to execute queens, and they might try and go for Elizabeth next. So finally, Elizabeth decided to put Mary under house arrest in the home of Bess and George and their weirdo intermarried Brady Bunch of a family.
Now. Mary was 26 years old when she arrived; Bess was 42. This isn’t a huge age gap all things considered, but just something to keep in mind. Having Mary sent to live with her was a huge honour for Bess, as it showed that Elizabeth trusted her. But it also really sucked, because Elizabeth decided not to cover Mary’s upkeep, so Bess and George wound up paying for all of Mary’s living expenses, which included: wages for her sixteen personal servants, thirty carts to transport Mary’s stuff between different properties as she was moved around, and food for her personal chefs to prepare thirty-two options at every meal. So like: the costs were not insubstantial. Chatsworth House was finished by then, so that’s where Mary spent a lot of her time, in an apartment now known as the Queen of Scots room. And what did she do all day? Well, both she and Bess were really skilled at embroidery and they worked together on a series of panels known as the Oxburgh Hangings. You can view these in person if you go and visit Oxburgh Hall! You can tell who stitched which as Mary’s have the initials MS on them and Bess’s have ES on them (because her official name at that time was Elizabeth Shrewsbury).
But while it was all stitching and girl talk for the first little while, Bess was finally like, “Queen Liz I, how long is your cousin going to stay with me, like… a few years? Five years? What’s the deal?” And Elizabeth was like, “Well, I was thinking about fifteen years,” except that conversation never happened. The years just went by, and suddenly it had been fifteen years and Mary’s expenses were bankrupting Bess and George, and her ongoing psychological warfare had permanently ruined Bess and George’s marriage. But honestly, is it any wonder that, between the political tensions and financial strain, Bess and George’s marriage began to fall apart? Like not to blame Mary per se, but these two were her prison guards and all she had to do all day was embroider when what she really wanted was to take over England, so can you blame her for pitting Bess and George against each other?
To keep her mind off of her imploding marriage and her crafty houseguest-prisoner, Bess got busy figuring out the most advantageous marriages for her remaining single children. Her pal Lady Margaret Douglas, aka the mother-in-law of Mary Queen of Scots (and the mother of the ODIOUS LORD DARNLEY), was like, “What if we marry my non-Darnley son, Charles, to your cute daughter Elizabeth?” And Bess was like, “Hell yes, let’s do this.” Because even though Bess was literally the richest woman in England other than the Queen, she was herself not royal and none of her children were married to royals. Margaret Douglas was royal-adjacent, which meant that any children Charles Stuart and Elizabeth Cavendish had together would be possible heirs to the throne. Because by now Elizabeth I was in her 40s and people were starting to figure out she wasn’t ever going to get married or have children. So all of the maybe-heirs, people like Charles Stuart, were suddenly way more valuable.
Now, as we all remember from the whole Lady Katherine Grey scenario, sort-of-heirs had to get royal permission before getting married. But Bess and Lady Margaret were like, “Let’s just skip that step,” and arranged a quickie marriage for Charles and Elizabeth. Which basically meant that all four of them (Bess, Lady Margaret, Charles, and Elizabeth) had just committed treason. But by the time the Queen found out about it, it was too late to annul anything — the younger Elizabeth was pregnant! And the child she had with Charles would become another heir to the throne. What to do?? Well, the Queen threw Lady Margaret in the Tower of London and put Bess and her daughter under house arrest. Bess was like, “Oh no, I have to stay inside my huge palace house I’ve built for myself, the place where I’d be anyway? Woe is me, etc.”
In the midst of all this drama, Charles Stuart died of old-timey reasons, leaving Bess and her daughter Elizabeth in a Gilmore Girls scenario with a new baby. The new maybe-heir baby was named Arbella Stuart, and her life was VERY INTERESTING due to her being born from a treasonous marriage and being a possible candidate for Queen of England and Scotland. She deserves her own essay and, rest assured, I will write one soon, so I’ll sort of skip over her stuff here so you aren’t spoiled for her feature.
During all of this, don’t forget, Mary Queen of Scots was still sitting around Chatsworth House. She’d grown tired of just doing embroidery and ruining Bess’s marriage, and had started (allegedly) scheming with some others to (allegedly) have Elizabeth assassinated. When Elizabeth found out about this, she transferred Mary from Bess’s property to a more prison-esque environment. Finally, Bess was free! And then a few years later, George (who had been living separately from her for quite a while now anyway) died, leaving Bess 63 years old, the richest woman in England, with the title Dowager Countess of Shrewsbury.
Her daughter Elizabeth also died young, leaving Bess as the primary role model and guardian of her maybe-royal granddaughter, Arbella Stuart. I picture this as sort of a Miss Havisham scenario, with Bess teaching her granddaughter from an early age everything she needed to know about being an expert schemer and manipulator. They lived in Bess’s latest construction, a new palace built in Derbyshire that she called Hardwick Hall. Since she was grooming Arbella to be the next Queen, this estate was just as fancy as, if not fancier than, the places where Elizabeth I lived, and had so many windows that people made up the rhyme “Hardwick Hall, more glass than wall.” If you’re curious, it’s still open to the public to visit!
But while Bess used all of her cleverness to leverage Arbella’s position, she hadn’t counted on Arbella inheriting Bess’s exact same very high amount of stubbornness and awesomeness. So despite all of Bess’s scheming, Arbella wound up running away to marry a man she’d chosen for herself, with an assist from her Uncle Henry. At this point, Bess effectively disowned her and removed her from her will and started calling Henry, her own son, “Horrible Henry.” It was a whole thing and at one point Arbella tried to escape dressed like a man and again: don’t worry, I’ll write about it another time. But just know that the battle of wills between Bess of Hardwick and Arbella Stuart was one for the ages.
Despite us not knowing the day or month of her birth (even the year is an educated guess), it’s sort of nice to know that Bess was so important that we know the precise time of her death: 5pm on February 13th, 1608. She was about 81 and, presumably, died of the sort of old-timey diseases that you’d catch at that age in that time period. She was buried in Derbyshire, the place where she’d grown up in obscurity but wound up the most famous woman in town.
POSTSCRIPT: Although her granddaughter Arbella never became Queen, Bess of Hardwick did wind up an ancestor to royalty. Bess’s son William’s line leads to Elizabeth Bowes-Lyon, aka the mother of Queen Elizabeth II. And through her, Bess is an ancestor of Queen Elizabeth II and all of her heirs, including the adorably opinionated toddler Princess Charlotte.
Further Reading And Viewing
A BBC mini-series called Mistress of Hardwick aired in 1972, though most episodes are now lost. It must have been really great though, with noted historian Alison Plowden credited as the writer, and the series winning a Writers Guild Award. You can still read a book Plowden wrote to tie-in with the mini-series, also called Mistress of Hardwick. In the upcoming movie Mary Queen of Scots, Bess will be portrayed by the preternaturally elegant Gemma Chan.
Bess’s story is so dramatic, and she’s so clearly such a dynamic character, it’s no wonder there have been several novels written about her. The most recent book is the 2013 novel Venus in Winter: A Novel of Bess of Hardwick by Gillian Bagwell. She’s also a lead character in The Other Queen by Philippa Gregory, which focuses on the time Mary spent living with Bess and George. | <urn:uuid:5c6045c3-4bfd-40a6-94d4-25b8dfaf459d> | CC-MAIN-2019-47 | https://annfosterwriter.com/2018/08/27/bess-of-hardwick/ | s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496671411.14/warc/CC-MAIN-20191122171140-20191122200140-00018.warc.gz | en | 0.987813 | 4,874 | 3.078125 | 3 |
American Anthropological Association Response to OMB Directive 15
Race and Ethnic Standards for Federal Statistics
A Brief History of OMB Directive 15
The Statistical Policy Division, Office of Information and Regulatory Affairs, of the Office of Management and Budget (OMB) determines federal standards for the reporting of "racial" and "ethnic" statistics. In this capacity, OMB promulgated Directive 15: Race and Ethnic Standards for Federal Statistics and Administrative Reporting in May, 1977, to standardize the collection of racial and ethnic information among federal agencies and to include data on persons of Hispanic origins, as required by Congress. Directive 15 is used in the collection of information on "racial" and "ethnic" populations not only by federal agencies, but also, to be consistent with national information, by researchers, business, and industry as well.
Directive 15 described four races (i.e., American Indian or Alaskan Native, Asian or Pacific Islander, Black, and White) and two ethnic backgrounds (of Hispanic origin and not of Hispanic origin). The Directive's categories allowed collection of more detailed information as long as it could be aggregated to the specified categories.
Directive 15 was not clear regarding whether the race or origins of persons was to be determined by self-identification or by others, e.g., interviewers. Research has shown substantial differences of racial/ethnic identification by these two methods.
Directive 15 noted the absence of "scientific or anthropological" foundations in its formulation. Directive 15 did not explain what was meant by "race" or "origin," or what distinguished these concepts. However, the race and ethnicity categories of the Directive are used in scientific research and the interpretation of the research findings is based often on the "variables" of race and ethnicity.
Since Directive 15 was issued 20 years ago, the United States population has become increasingly diverse. Criticism that the federal race and ethnic categories do not reflect the Nation's diversity led to a review of Directive 15. Formal review began in 1993 with Congressional hearings, followed by a conference organized at the request of OMB by the National Academy of Sciences. OMB then instituted an Interagency Committee for the Review of Racial and Ethnic Standards, and appointed a Research Subcommittee to assess available research and conduct new research as a basis for possible revision of the Directive.
Among the guidelines for the review, OMB stated that ". . . the racial and ethnic categories set forth in the standard should be developed using appropriate scientific methodologies, including the social sciences." The guidelines noted, too, that "the racial and ethnic categories set forth in the standards should not be interpreted as being primarily biological or genetic in reference. Race and ethnicity may be thought of in terms of social and cultural characteristics as well as ancestry." However, the distinction between the concepts of race and ethnicity was, again, not clarified.
The recommendations from the Interagency Committee were published by OMB in the Federal Register July 9, 1997 (Vol. 62, No. 131: 36847-36946), with a request for public comment by September 8, 1997. The recommendations included (1) maintaining the basic racial and ethnic categories from the 1977 Directive; and (2) collecting race and ethnicity data through two separate questions (p. 36943), with ethnicity collected first. The minimum designations for "race" were: "American Indian or Alaskan Native," "Asian or Pacific Islander," "Black or African-American," and "White." The minimum designations for "ethnicity" were: "Hispanic origin," or "not of Hispanic origin." To account for multiple races, OMB recommended that respondents be allowed to report "More than one race."
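For agencies and researchers who maintain data systems keyed to these recommendations, the two-question format amounts to a simple validation and aggregation rule. The short sketch below (in Python) is purely illustrative and is not part of the Directive or of this response; the function and field names are hypothetical, and only the category labels are taken from the recommendations described above.

    # Illustrative only: one way a data system might record responses under the
    # 1997 recommendation (ethnicity asked first; one or more races may be reported).
    MINIMUM_ETHNICITIES = {"Hispanic origin", "Not of Hispanic origin"}
    MINIMUM_RACES = {
        "American Indian or Alaskan Native",
        "Asian or Pacific Islander",
        "Black or African-American",
        "White",
    }

    def record_response(ethnicity, races):
        """Validate one respondent's answers against the minimum categories.

        `races` is a list because the recommendation allows reporting more than
        one race; more detailed categories would be aggregated to these minimum
        designations before federal reporting.
        """
        if ethnicity not in MINIMUM_ETHNICITIES:
            raise ValueError("Unknown ethnicity category: %r" % ethnicity)
        if not races or any(r not in MINIMUM_RACES for r in races):
            raise ValueError("Race selections must come from the minimum set: %r" % (races,))
        return {
            "ethnicity": ethnicity,
            "races": sorted(set(races)),
            "more_than_one_race": len(set(races)) > 1,
        }

    # Example: a respondent of Hispanic origin reporting two race categories.
    print(record_response("Hispanic origin", ["White", "American Indian or Alaskan Native"]))

Because the categories themselves have been revised over time, any such mapping would need to track the standard in force at the time of collection.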
History and Problems with the Concept of "Race": A Biological Perspective
Anthropologically speaking, the concept of race is a relatively recent one. Historically, the term "race" was ascribed to groups of individuals who were categorized as biologically distinct. Rather than developing as a scientific concept, the current notion of "race" in the United States grew out of a European folk taxonomy or classification system sometime after Columbus sailed to the Americas. Increased exploration of far-away lands with people of different custom, language, and physical traits clearly contributed to the developing idea. In these pre-Darwinian times the observed differences--biological, behavioral and cultural--were all considered to be products of creation by God. It was in this intellectual climate that the perceived purity and immutability of races originated. Perceived behavioral features and differences in intellect were inextricably linked to race and served as a basis for the ranking, in terms of superiority, of races.
Early natural history approaches to racial classification supported these rankings and the implications for behavior. For example, in the 18th century, Carolus Linnaeus, the father of taxonomy and a European, described American Indians as not only possessing reddish skin, but also as choleric, painting themselves with fine red lines and regulated by custom. Africans were described as having black skin, flat noses and being phlegmatic, relaxed, indolent, negligent, anointing themselves with grease and governed by caprice. In contrast, Europeans were described as white, sanguine, muscular, gentle, acute, inventive, having long flowing hair, blue eyes, covered by close vestments and governed by law.
In the 1800s, the first "scientific" studies of race attempted to extract the behavioral features from the definition of race. However, racist interpretation remained. For example the origin of racial variation was interpreted as degeneration of the original "Caucasian" race (the idea of a Caucasian race is based on the belief that the most "perfect" skulls came from the Caucasus Mountains). Degeneration explained the development of racial differences and racial differences explained cultural development. Biology and behavior were used to gauge the degree of deterioration from the original race. Measures of intellect were an important part of these early studies. In some cases, the degree of facial prognathism, bumps on the skull as interpreted by phrenology, cranial index, and cranial capacity were used as measures of intelligence. IQ is just the latest in the list of these so-called "definitive" features used to rank races.
The clearest data about human variation come from studies of genetic variation, which are clearly quantifiable and replicable. Genetic data show that, no matter how racial groups are defined, two people from the same racial group are about as different from each other as two people from any two different racial groups.
One of the basic principles about genetic transmission in families is that different variants are transmitted to different offspring independently. The more generations of mixing, the more likely such heterogeneity in geographic origin of genes within the same person will be. Fixed sets of traits are not transmitted across generations as many people assume. Rules like the "one drop of blood" rule show clearly how vague and social, rather than biological, are categorical terms for people.
Modern humans (Homo sapiens) appear to be a fairly recent and homogenous species. Regardless of ancestral geographic origins, humans maintain a high degree of similarities from a biological perspective. Admixture, even among and between highly isolated populations, has resulted in widespread, worldwide distribution of genes and thus human variation.
It is because people often share cultural identity and geographic ancestry that "race" or a system of terms for grouping people carries some information that can be useful for biomedical purposes (as in assigning resources for disease screening). For example, sickle cell hemoglobin is a health risk associated with black or African-descended populations and PKU or phenylketonuria is a health risk associated with white or European-descended populations. Despite being loaded with historical or colloquial connotations, such terminology may in practice be about as effective as any other questionnaire-based way to define categories of people that capture at least limited biological outcomes.
"Race" as a concept is controversial because of the numerous instances in human history in which a categorical treatment of people, rationalized on the grounds of biology-like terms, have been used. Common examples of this include arguments about which "race" is more intelligent, better at mathematics or athletics, and so on. The ultimate use of categorical notions of race have occurred to achieve political ends, as in the Holocaust, slavery, and the extirpation of American Indian populations, that, while basically economic in motivation, has received emotional support and rationale from biological language used to characterize groups. The danger in attempting to tie race and biology is not only that individuals are never identical within any group, but that the physical traits used for such purposes may not even be biological in origin.
The American Anthropological Association recognizes that classical racial terms may be useful for many people who prefer to use such terms proudly about themselves. The Association wishes to stress that if biological information is not the objective, biological-sounding terms add nothing to the precision, rigor, or factual basis of information being collected to characterize the identities of the American population. In that sense, phasing out the term "race," to be replaced with more correct terms related to ethnicity, such as "ethnic origins," would be less prone to misunderstanding.
Social and Cultural Aspects of "Race" and "Ethnicity"
Race and ethnicity both represent social or cultural constructs for categorizing people based on perceived differences in biology (physical appearance) and behavior. Although popular connotations of race tend to be associated with biology and those of ethnicity with culture, the two concepts are not clearly distinct from one another.
While diverse definitions exist, ethnicity may be defined as the identification with population groups characterized by common ancestry, language and custom. Because of common origins and intermarriage, ethnic groups often share physical characteristics which also then become a part of their identification--by themselves and/or by others. However, populations with similar physical appearance may have different ethnic identities, and populations with different physical appearances may have a common ethnic identity.
OMB Directive 15 views race and ethnicity as distinct phenomena and appropriate ways to categorize people because both are thought to identify distinct populations. Although this viewpoint may capture some aspects of the way most people think about race and ethnicity, it overlooks or distorts other critical aspects of the same process.
First, by treating race and ethnicity as fundamentally different kinds of identity, the historical evolution of these category types is largely ignored. For example, today's ethnicities are yesterday's races. In the early 20th century in the US, Italians, the Irish, and Jews were all thought to be racial (not ethnic) groups whose members were inherently and irredeemably distinct from the majority white population. Today, of course, the situation has changed considerably. Italians, Irish, and Jews are now seen as ethnic groups that are included in the majority white population. The notion that they are racially distinct from whites seems far-fetched, possibly "racist." Earlier in the 20th century, the categories of Hindu and Mexican were included as racial categories in the Census. Today, however, neither would be considered racial categories.
Knowing the history of how these groups "became white" is an integral part of how race and ethnicity are conceptualized in contemporary America. The aggregated category of "white" begs scrutiny. It is important to keep in mind that the American system of categorizing groups of people on the basis of race and ethnicity, developed initially by a then-dominant white, European-descended population, served as a means to distinguish and control other "non-white" populations in various ways.
Second, by treating race and ethnicity as an enduring and unchanging part of an individual's identity, OMB and the Census ignore a fundamental tension and ambiguity in racial and ethnic thinking. While both race and ethnicity are conceptualized as fixed categories, research demonstrates that individuals perceive of their identities as fluid, changing according to specific contexts in which they find themselves.
Third, OMB Directive 15, Census and common sense treat race and ethnicity as properties of an individual, ignoring the extent to which both are defined by the individual's relation to the society at large. Consider, for example, the way that racial and ethnic identity supposedly "predict" a range of social outcomes. The typical correlation is that by virtue of being a member of a particular racial or ethnic group, imprisonment, poor health, poverty, and academic failure are more likely. Such an interpretation, while perhaps statistically robust, is structurally and substantively incomplete because it is not the individual's association with a particular racial or ethnic group that predicts these various outcomes but the attribution of that relationship by others that underlies these outcomes. For instance, a person is not more likely to be denied a mortgage because he or she is black (or Hispanic or Chinese), but because another person believes that he or she is black (or Hispanic or Chinese) and ascribes particular behaviors with that racial or ethnic category.
Current OMB Directive 15 policy and federal agency application of the Directive that does not take into account the complexities of racial and ethnic thinking is likely to create more problems than it resolves. Racial and ethnic categories are marked by both expectations of fixity and variation, both in historical and individual terms. Attempts to "hone" racial categories by expanding or contracting the groups listed in Directive 15 and on the Census form or by reorganizing the order in which questions are posed, will continue to miss important aspects of how people actually think about race and ethnicity. Similarly, treating race as an individual rather than relational property almost certainly compromises the value of the data collected. Finally, by ignoring the differences between self- and other- strategies for identification, Directive 15 and the Census application creates a situation where expectations about the nature of the data collected are violated by the way most people use common sense to interpret those same questions.
Overlap of the Concepts of Race, Ethnicity and Ancestry
A basic assumption of OMB Directive 15 is that persons who self-identify or identify others by race and ethnicity understand what these concepts mean and see them as distinct. Recent research by the US Bureau of Census and other federal agencies, supported by qualitative pretesting of new race and ethnicity questions and field tests of these new question formats, has demonstrated that for many respondents, the concepts of race, ethnicity and ancestry are not clearly distinguished. Rather, respondents view race, ethnicity and ancestry as one and the same.
It should be pointed out that the race and ethnicity categories used by the Census over time have been based on a mixture of principles and criteria, including national origin, language, minority status and physical characteristics (Bates, et al, 1994.) The lack of conceptual distinction discussed below is not exclusive to respondents, but may represent misunderstandings about race and ethnicity among the American people. Hahn (1992) has called for additional research to clarify the popular uses of these concepts.
The following outlines some of the evidence for the lack of clear distinctions between the concepts:
First, respondent definitions of the concepts. Cognitive pretesting for the Race and Ethnicity Targeted Test and the Current Population Survey Race Supplement suggest that, except for some college-educated respondents who saw the terms as distinct, respondents define all of the concepts in similar terms. Gerber and de la Puente (1996) found that respondents tended to define race in terms of family origins. Thus, common definitional strategies included: "your people," "what you are," and "where your family comes from." These concepts were invoked also to define the term "ethnic group" when it appeared in the same context. Many respondents said that "ethnic group" meant "the same thing" as "race." In subsequent discussions, the term "ethnic race" was frequently created by respondents as a label for the global domain. McKay and de la Puente (1995) found, too, that respondents did not distinguish between race and ethnicity, and concluded that many respondents are unfamiliar with the term "ethnicity." for example, several respondents assumed that a question containing the term "ethnicity" must be asking about the "ethical" nature of various groups. They concluded that the terms "race," "ethnicity," "ancestry" and "national origin . . . draw on the same semantic domain."
Second, perceived redundancy of race and ethnicity questions. In most Federal data collections, Hispanic origin is defined as an "ethnicity" and is collected separately from "race." In most recent tests, the Hispanic origin question precedes the race question. Both Hispanic and non-Hispanic respondents tend to treat the two questions as asking for essentially the same information.
For example, when Hispanic and non-Hispanic respondents are asked what the Hispanic ethnicity question means, they often say that it is asking about "race." Respondents often comment on this perceived redundancy, and wonder aloud why the two questions are separate. Non-Hispanic respondents attempt to answer the ethnicity question by offering a race-based term, such as "Black" or "White." (McKay and de la Puente, 1995.)
In addition, many Hispanic respondents regard the term "Hispanic" as a "race" category, defined in terms of ancestry, behavior as well as physical appearance (Gerber and de la Puente, 1996; Rodriguez and Corder-Guzman, 1992; Kissim and Nakamoto, 1993). They therefore tend to look for this category in the race question, and when they do not find it there, they often write it in to a line provided for the "some other race" category. More than 40% of self-identified Hispanics have not specified a race or ethnic category in the 1980 or 1990 Census. Census Bureau research has shown that over 97% of the 10 million persons who reported as "Other race" in 1990 were Hispanic (U.S. Census Bureau, 1992.)
Third, multiethnic and multiracial identifications are frequently not distinguished. Some respondents who identify as "multiracial" offer only ethnic groups to explain their backgrounds. For example, McKay et al. (1996) found that some individuals who defined themselves as "multiracial" offered two ethnicities, such as "German and Irish" as an explanation. The authors concluded that such reporting "presents the overlapping of the semantic categories of race and ethnicity. . . ."(p. 5). Other respondents in the same research who identify with only a single race category subsequently mention an additional "race" category when answering the ancestry question.
1. The American Anthropological Association supports the OMB Directive 15 proposal to allow respondents to identify "more than one" category of "race/ethnicity" as a means of reporting diverse ancestry. The Association agrees with the Interagency Committee's finding that a multiple reporting method is preferable to adoption of a "multiracial" category. This allows for the reflection of heterogeneity and growing interrelatedness of the American population.
2. The American Anthropological Association recommends that OMB Directive 15 combine the "race" and "ethnicity" categories into one question to appear as "race/ethnicity" until the planning for the 2010 Census begins. The Association suggests additional research on how a question about race/ethnicity would best be posed.
As recommended by the Interagency Committee, the proposed revision to OMB Directive 15 would separate "race" and "ethnicity." However, the inability of OMB or the Interagency Committee to define these terms as distinct categories and the research findings that many respondents conceptualize "race" and "ethnicity" as one in the same underscores the need to consolidate these terms into one category, using a term that is more meaningful to the American people.
3. The American Anthropological Association recommends that further research be conducted to determine the term that best delimits human variability, reflected in the standard "race/ethnicity," as conceptualized by the American people. Research indicates that the term "ethnic group"is better understood by individuals as a concept related to ancestry or origin sought by OMB Directive 15 than either "race" or "ethnicity." While people seldom know their complete ancestry with any certainty, they more often know what ethnic group or groups with which to identify. It is part of their socialization and daily identity. Additionally, there are fewer negative connotations associated with the term "ethnic group."
4. The proposed revision to OMB Directive 15 advocates using the following categories to designate "race" or "ethnicity": "American Indian or Alaskan Native," "Asian or Pacific Islander," "Black or African-American," "White," "Hispanic origin," "Not of Hispanic origin." Part of the rationale for maintaining these terms is to preserve the continuity of federal data collection.
However, the "race" and "ethnicity" categories have changed significantly over time to reflect changes in the American population. Since 1900, 26 different racial terms have been used to identify populations in the US Census. Preserving outdated terms for the sake of questionable continuity is a disservice to the nation and the American people.
The American Anthropological Association recommends further research, building on the ongoing research activities of the US Bureau of the Census, on the terms identified as the population delimiters, or categories, associated with "race/ethnicity" in OMB Directive 15 in order to determine terms that better reflect the changing nature and perceptions of the American people. For example, the term "Latino" is preferred by some populations who view "Hispanic" as European in origin and offensive because it does not acknowledge the unique history of populations in the Americas. OMB may want to consider using the term "Hispanic or Latino" to allay these concerns.
5. The American Anthropological Association recommends the elimination of the term "race" from OMB Directive 15 during the planning for the 2010 Census. During the past 50 years, "race" has been scientifically proven to not be a real, natural phenomenon. More specific, social categories such as "ethnicity" or "ethnic group" are more salient for scientific purposes and have fewer of the negative, racist connotations for which the concept of race was developed.
Yet the concept of race has become thoroughly--and perniciously--woven into the cultural and political fabric of the United States. It has become an essential element of both individual identity and government policy. Because so much harm has been based on "racial" distinctions over the years, correctives for such harm must also acknowledge the impact of "racial" consciousness among the U.S. populace, regardless of the fact that "race" has no scientific justification in human biology. Eventually, however, these classifications must be transcended and replaced by more non-racist and accurate ways of representing the diversity of the U.S. population.
This is the dilemma and opportunity of the moment. It is important to recognize the categories to which individuals have been assigned historically in order to be vigilant about the elimination of discrimination. Yet ultimately, the effective elimination of discrimination will require an end to such categorization, and a transition toward social and cultural categories that will prove more scientifically useful and personally resonant for the public than are categories of "race." Redress of the past and transition for the future can be simultaneously effected.
The American Anthropological Association recognizes that elimination of the term "race" in government parlance will take time to accomplish. However, the combination of the terms "race/ethnicity" in OMB Directive 15 and the Census 2000 will assist in this effort, serving as a "bridge" to the elimination of the term "race" by the Census 2010.
Bates, Nancy, M. de la Puente, T. J. DeMaio, and E. A. Martin
1994 "Research on Race and Ethnicity: Results From Questionnaire Design Tests." Proceedings of the Bureau of the Census' Annual Research Conference. Rosslyn, Virginia. Pp 107-136.
Gerber, Eleanor, and Manuel de la Puente
1996 "The Development and Cognitive Testing of Race and Ethnic Origin Questions for the Year 2000 Decennial Census." Proceedings of the Bureau of the Census' 1996 Annual Research Conference. Rosslyn, Virginia.
1992 "The State of Federal Health Statistics on Racial and Ethnic Groups." Journal of the American Medical Association 267(2):268-271. March.
Kissim, E., E. Herrera and J. M. Nakamoto
1993 "Hispanic Responses to Census Enumeration Forms and Procedures." Report prepared for the Bureau of the Census. Suitland, MD.
McKay, Ruth B., and Manuel de la Puente
1995 "Cognitive Research on Designing the CPS Supplement on Race and Ethnicity." Proceedings of the Bureau of the Census' 1995 Annual Research Conference. Rosslyn, Virginia. Pp 435-445.
McKay, Ruth B., L. L. Stinson, M. de la Puente, and B. A. Kojetin
1996 "Interpreting the Findings of the Statistical Analysis of the CPS Supplement on Race and Ethnicity." Proceedings of the Bureau of the Census' 1996 Annual Research Conference. Rosslyn, Virginia. Pp 326-337.
1989 "Two Hundred Years and Counting: The 1990 Census." Population Bulletin 44(1). April.
Rodriguez, C. E., and J. M. Cordero-Guzman
1992 "Place Race in Context." Ethnic Racial Studies Vol. 15, Pp 523-543.
U.S. Bureau of Census
1973 Population in U.S. Decennial Censuses: 1790-1970.
1979 Twenty Censuses: Population and Housing Questions: 1790-1980.
1992 Census Questionnaire Content, 1990. CQC-4 Race.
American Anthropological Association | <urn:uuid:1aae72b4-2777-4983-81d4-765fde98055f> | CC-MAIN-2019-47 | http://www.learner.org/workshops/primarysources/census/docs/ombd.html | s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496667333.2/warc/CC-MAIN-20191113191653-20191113215653-00137.warc.gz | en | 0.951719 | 5,258 | 2.6875 | 3 |
This makes us wondering whether software is reliable at all, whether we should use software in safety-critical embedded applications. With processors and software permeating safety critical embedded world, the reliability of software is simply a matter of life and death. Are we embedding potential disasters while we embed software into systems? Electronic and mechanical parts may become "old" and wear out with time and usage, but software will not rust or wear-out during its life cycle. Software will not change over time unless intentionally changed or upgraded.
Software Reliability is an important to attribute of software quality, together with functionality, usability, performance, serviceability, capability, installability, maintainability, and documentation. Software Reliability is hard to achieve, because the complexity of software tends to be high. While any system with a high degree of complexity, including software, will be hard to reach a certain level of reliability, system developers tend to push complexity into the software layer, with the rapid growth of system size and ease of doing so by upgrading the software.
For example, large next-generation aircraft will have over one million source lines of software on-board; next-generation air traffic control systems will contain between one and two million lines; the upcoming international Space Station will have over two million lines on-board and over ten million lines of ground support software; several major life-critical defense systems will have over five million source lines of software.
Emphasizing these features will tend to add more complexity to software. Software failures may be due to errors, ambiguities, oversights or misinterpretation of the specification that the software is supposed to satisfy, carelessness or incompetence in writing code, inadequate testing, incorrect or unexpected usage of the software or other unforeseen problems. Hardware faults are mostly physical faults , while software faults are design faults , which are harder to visualize, classify, detect, and correct. In hardware, design faults may also exist, but physical faults usually dominate.
In software, we can hardly find a strict corresponding counterpart for "manufacturing" as hardware manufacturing process, if the simple action of uploading software modules into place does not count. Therefore, the quality of software will not change once it is uploaded into the storage and start running. Trying to achieve higher reliability by simply duplicating the same software modules will not work, because design faults can not be masked off by voting.
A partial list of the distinct characteristics of software compared to hardware is listed below [Keene94]:. Over time, hardware exhibits the failure characteristics shown in Figure 1, known as the bathtub curve. Period A, B and C stands for burn-in phase, useful life phase and end-of-life phase. A detailed discussion about the curve can be found in the topic Traditional Reliability. Software reliability, however, does not show the same characteristics similar as hardware. A possible curve is shown in Figure 2 if we projected software reliability on the same axes.
One difference is that in the last phase, software does not have an increasing failure rate as hardware does. In this phase, software is approaching obsolescence; there are no motivation for any upgrades or changes to the software. Therefore, the failure rate will not change. The second difference is that in the useful-life phase, software will experience a drastic increase in failure rate each time an upgrade is made.
The failure rate levels off gradually, partly because of the defects found and fixed after the upgrades. Revised bathtub curve for software reliability. The upgrades in Figure 2 imply feature upgrades, not upgrades for reliability. For feature upgrades, the complexity of software is likely to be increased, since the functionality of software is enhanced. Even bug fixes may be a reason for more software failures, if the bug fix induces other defects into software. For reliability upgrades, it is possible to incur a drop in software failure rate, if the goal of the upgrade is enhancing software reliability, such as a redesign or reimplementation of some modules using better engineering approaches, such as clean-room method.
A proof can be found in the result from Ballista project, robustness testing of off-the-shelf software Components. Since software robustness is one aspect of software reliability, this result indicates that the upgrade of those systems shown in Figure 3 should have incorporated reliability upgrades. Since Software Reliability is one of the most important aspects of software quality, Reliability Engineering approaches are practiced in software field as well. Software Reliability Engineering SRE is the quantitative study of the operational behavior of software-based systems with respect to user requirements concerning reliability [IEEE95].
A proliferation of software reliability models have emerged as people try to understand the characteristics of how and why software fails, and try to quantify software reliability. Over models have been developed since the early s, but how to quantify software reliability still remains largely unsolved.
Interested readers may refer to [RAC96] , [Lyu95]. As many models as there are and many more emerging, none of the models can capture a satisfying amount of the complexity of software; constraints and assumptions have to be made for the quantifying process. Therefore, there is no single model that can be used in all situations. No model is complete or even representative. One model may work well for a set of certain software, but may be completely off track for other kinds of problems. The mathematical function is usually higher order exponential or logarithmic.
Software modeling techniques can be divided into two subcategories: The major difference of the two models are shown in Table 1. Difference between software reliability prediction models and software reliability estimation models.
With processors and software permeating safety critical embedded world, the reliability of software is simply a matter of life and death. The initial quest in software reliability study is based on an analogy of traditional and hardware reliability. As discussed in National Research Council , to adequately test software, given the combinatorial complexity of the sequence of statements activated as a function of possible inputs, one is obligated to use some form of automated test generation, with high code coverage assessed using one of the various coverage metrics proposed in the research literature. A primary limitation is that there can be a very large number of states in a large software program. The use of fault seeding could also be biased in other ways, causing problems in estimation, but there are various generalizations and extensions of the technique that can address these various problems. The Clean Coder Robert C. For details, see Jelinksi and Moranda
Using prediction models, software reliability can be predicted early in the development phase and enhancements can be initiated to improve the reliability. Representative estimation models include exponential distribution models, Weibull distribution model, Thompson and Chelson's model, etc. The field has matured to the point that software models can be applied in practical situations and give meaningful results and, second, that there is no one model that is best in all situations.
Only limited factors can be put into consideration. By doing so, complexity is reduced and abstraction is achieved, however, the models tend to specialize to be applied to only a portion of the situations and a certain class of the problems. We have to carefully choose the right model that suits our specific case.
Furthermore, the modeling results can not be blindly believed and applied. Measurement is commonplace in other engineering field, but not in software engineering. Though frustrating, the quest of quantifying software reliability has never ceased. Until now, we still have no good way of measuring software reliability. Measuring software reliability remains a difficult problem because we don't have a good understanding of the nature of software. There is no clear definition to what aspects are related to software reliability.
We can not find a suitable way to measure software reliability, and most of the aspects related to software reliability.
Even the most obvious product metrics such as software size have not uniform definition. It is tempting to measure something related to reliability to reflect the characteristics, if we can not measure reliability directly. The current practices of software reliability measurement can be divided into four categories: Software size is thought to be reflective of complexity, development effort and reliability. But there is not a standard way of counting.
This method can not faithfully compare software not written in the same language. The advent of new technologies of code reuse and code generation technique also cast doubt on this simple method. Function point metric is a method of measuring the functionality of a proposed software development based upon a count of inputs, outputs, master files, inquires, and interfaces.
The method can be used to estimate the size of a software system as soon as these functions can be identified. It is a measure of the functional complexity of the program. It measures the functionality delivered to the user and is independent of the programming language. It is used primarily for business systems; it is not proven in scientific or real-time applications. Complexity is directly related to software reliability, so representing complexity is important.
Complexity-oriented metrics is a method of determining the complexity of a program's control structure, by simplify the code into a graphical representation. Representative metric is McCabe's Complexity Metric. Detailed discussion about various software testing methods can be found in topic Software Testing.
Researchers have realized that good management can result in better products. Research has demonstrated that a relationship exists between the development process and the ability to complete projects on time and within the desired quality objectives. Costs increase when developers use inadequate processes. Higher reliability can be achieved by using better development process, risk management process, configuration management process, etc. Based on the assumption that the quality of the product is a direct function of the process, process metrics can be used to estimate, monitor and improve the reliability and quality of software.
ISO certification, or "quality management standards", is the generic reference for a family of standards developed by the International Standards Organization ISO. The goal of collecting fault and failure metrics is to be able to determine when the software is approaching failure-free execution. Minimally, both the number of faults found during testing i. Test strategy is highly relative to the effectiveness of fault metrics, because if the testing scenario does not cover the full functionality of the software, the software may pass all tests and yet be prone to failure once delivered.
Usually, failure metrics are based upon customer information regarding failures found after release of the software. The failure data collected is therefore used to calculate failure density, Mean Time Between Failures MTBF or other parameters to measure or predict software reliability. Before the deployment of software products, testing, verification and validation are necessary steps.
Software testing is heavily used to trigger, locate and remove software defects. Software testing is still in its infant stage; testing is crafted to suit specific needs in various software development projects in an ad-hoc manner. Various analysis tools such as trend analysis, fault-tree analysis, Orthogonal Defect classification and formal methods, etc, can also be used to minimize the possibility of defect occurrence after release and therefore improve software reliability.
After deployment of the software product, field data can be gathered and analyzed to study the behavior of software defects. Software Reliability is a part of software quality.
It relates to many areas where software quality is concerned. Markov models require transition probabilities from state to state where the states are defined by the current values of key variables that define the functioning of the software system. Using these transition probabilities, a stochastic model is created and analyzed for stability.
A primary limitation is that there can be a very large number of states in a large software program. For details, see Whittaker In this model, fault clustering is estimated using time-series analysis. For details, see Crow and Singpurwalla In these models, if there is a fault in the mapping of the space of inputs to the space of intended outputs, then that mapping is identified as a potential fault to be rectified. These models are often infeasible because of the very large number of possibilities in a large software system.
For details, see Bastani and Ramamoorthy and Weiss and Weyuker It is quite likely that for broad categories of software systems, there already exist prediction models that could be used earlier in development than performance metrics for use in tracking and assessment. It is possible that such models could also be used to help identify better performing contractors at the proposal stage. Further, there has been a substantial amount of research in the software engineering community on building generalizable prediction models i.
Given the benefits from earlier identification of problematic software, we strongly encourage the U. Department of Defense DoD to stay current with the state of the art in software reliability as is practiced in the commercial software industry, with increased emphasis on data analytics and analysis. When it is clear that there are prediction models that are broadly applicable, DoD should consider mandating their use by contractors in software development.
A number of metrics have been found to be related to software system reliability and therefore are candidates for monitoring to assess progress toward meeting reliability requirements. These include code churn, code complexity, and code dependencies see below. We note that the course on reliability and maintainability offered by the Defense Acquisition University lists 10 factors for increasing software reliability and maintainability:.
These factors are all straightforward to measure, and they can be supplied by the contractor throughout development. Metrics-based models are a special type of software reliability growth model that have not been widely used in defense acquisition. The purpose of this section is to provide an understanding of when metrics-based models are applicable during software development.
The validation of such internal metrics requires a convincing demonstration that the metric measures what it purports to measure and that the metric is associated with an important external metric, such as field reliability, maintainability, or fault-proneness for details, see El-Emam, Software fault-proneness is defined as the probability of the presence of faults in the software.
Failure-proneness is the probability that a particular software element will fail in operation. The higher the failure-proneness of the software, logically, the lower the reliability and the quality of the software produced, and vice versa.
Using operational profiling information, it is possible to relate generic failure-proneness and fault-proneness of a product. Research on fault-proneness has focused on two areas: While software fault-proneness can be measured before deployment such as the count of faults per structural unit, e. Five types of metrics have been used to study software quality: The rest of this section, although not comprehensive, discusses the type of statistical models that can be built using these measures.
Code churn measures the changes made to a component, file, or system over some period of time. The most commonly used code churn measures are the number of lines of code that are added, modified, or deleted. Other churn measures include temporal churn churn relative to the time of release of the system and repetitive churn frequency of changes to the same file or component. Several research studies have used code churn as an indicator.
Munson and Elbaum observed that as a system is developed, the relative complexity of each program module that has been altered will change. They studied a software component with , lines of code embedded in a real-time system with 3, modules programmed in C. Code churn metrics were found to be among the most highly correlated with problem reports.
Another kind of code churn is debug churn, which Khoshgoftaar et al. They studied two consecutive releases of a large legacy system for telecommunications that contained more than 38, procedures in modules. Discriminant analysis identified fault-prone modules on the basis of 16 static software product metrics. Their model, when used on the second release, showed type I and type II misclassification rates of Using information on files with status new, changed, and unchanged, along with other explanatory variables such as lines of code, age, prior faults as predictors in a negative binomial regression equation, Ostrand et al.
Their model had high accuracy for faults found in both early and later stages of development. In a study on Windows Server , Nagappan and Ball demonstrated the use of relative code churn measures normalized values of the various measures obtained during the evolution of the system to predict defect density at statistically significant levels. The top three recommendations made by their system identified a correct location for future change with an accuracy of 70 percent. Code complexity measures range from the classical cyclomatic complexity measures see McCabe, to the more recent object-oriented metrics, one of which is known as the CK metric suite after its authors see Chidamber and Kemerer, McCabe designed cyclomatic complexity.
Cyclomatic complexity is adapted from the classical graph theoretical cyclomatic number and can be defined as the number of linearly independent paths through a program. The CK metric suite identifies six object-oriented metrics:. The CK metrics have also been investigated in the context of fault-proneness.
They found the first five object-oriented metrics listed above were correlated with defects while the last metric was not. Subramanyam and Krishnan present a survey on eight more empirical studies, all showing that object-oriented metrics are significantly associated with defects. Early work by Pogdurski and Clarke presented a formal model of program dependencies based on the relationship between two pieces of code inferred from the program text.
They proposed an alternate way of predicting failures for Java classes. Rather than looking at the complexity of a class, they looked exclusively at the components that a class uses. For Eclipse, the open source integrated development environment, they found that using compiler packages resulted in a significantly higher failure-proneness 71 percent than using graphical user interface packages. Zimmermann and Nagappan built a systemwide code dependency graph of Windows Server and found that models built from social network measures had accuracy of greater than 10 percentage points in comparison with models built from complexity metrics.
Defect growth curves i. And Biyani and Santhanam showed that for four industrial systems at IBM there was a very strong relationship between development defects per module and field defects per module. This approach allows the building of prediction models based on development defects to identify field defects. They found that the models built using such social measures revealed 58 percent of the failures in 20 percent of the files in the system.
Studies performed by Nagappan et al. In predicting software reliability with software metrics, a number of approaches have been proposed. Logistic regression is a popular technique that has been used for building metric-based reliability models. The general form of a logistic regression equation is given as follows:. In the case of metrics-based reliability models, the independent variables can be any of the combination of measures ranging from code churn and code complexity to people and social network measures.
Another common technique used in metrics-based prediction models is a support vector machine for details, see Han and Kamber, For a quick overview of this technique, consider a two-dimensional training set with two classes as shown in Figure In part a of the figure, points representing software modules are either defect-free circles or have defects boxes.
A support vector machine separates the data cloud into two sets by searching for a maximum marginal hyperplane; in the two-dimensional case, this hyperplane is simply a line. There are an infinite number of possible hyperplanes in part a of the figure that separate the two groups. Support vector machines choose the hyperplane with the margin that gives the largest separation between classes.
Part a of the figure shows a hyperplane with a small margin; part b shows one with the maximum margin. Support vector machines thus compute a decision boundary, which is used to classify or predict new points. One example is the triangle in part c of Figure The boundary shows on which side of the hyperplane the new software module is located. In the example, the triangle is below the hyperplane; thus it is classified as defect free. Separating data with a single hyperplane is not always possible. Part d of Figure shows an example of nonlinear data for which it is not possible to separate the two-dimensional data with a line.
In this case, support vector machines transform the input data into a higher dimensional space using a nonlinear mapping. In this new space, the data are then linearly separated for details, see Han and Kamber, Support vector machines are less prone to overfitting than some other approaches because the complexity is characterized by the number of support vectors and not by the dimensionality of the input. See text for discussion. Other techniques that have been used instead of logistic regression and support vector machines are discriminant analysis and decision and classification trees.
Drawing general conclusions from empirical studies in software engineering is difficult because any process is highly dependent on a potentially large number of relevant contextual variables. Consequently, the panel does not assume a priori that the results of any study will generalize beyond the specific environment in which it was conducted, although researchers understandably become more confident in a theory when similar findings emerge in different contexts. Given that software is a vitally important aspect of reliability and that predicting software reliability early in development is a severe challenge, we suggest that DoD make a substantial effort to stay current with efforts employed in industry to produce useful predictions.
There is a generally accepted view that it is appropriate to combine software failures with hardware failures to assess system performance in a given test. However, in this section we are focusing on earlier non-system-level testing in developmental testing, akin to component-level testing for hardware. The concern is that if insufficient software testing is carried out during the early stages of developmental testing, then addressing software problems discovered in later stages of developmental testing or in operational testing will be much more expensive.
As discussed in National Research Council , to adequately test software, given the combinatorial complexity of the sequence of statements activated as a function of possible inputs, one is obligated to use some form of automated test generation, with high code coverage assessed using one of the various coverage metrics proposed in the research literature. This is necessary both to discover software defects and to evaluate the reliability of the software component or subsystem. However, given the current lack of software engineering expertise accessible in government developmental testing, the testing that can be usefully carried out, in addition to the testing done for the full system, is limited.
Consequently, we recommend that the primary testing of software components and subsystems be carried out by the developers and carefully documented and reported to DoD and that contractors provide software that can be used to run automated tests of the component or subsystem Recommendation 14 , in Chapter This includes information technology systems and major automated information systems. If DoD acquires the ability to carry out automated testing, then there are model-based techniques, including those developed by Poore see, e.
Finally, if contractor code is also shared with DoD, then DoD could validate some contractor results through the use of fault injection seeding techniques see Box , above.
International Series in Software Engineering. Free Preview. © Software Defect and Operational Profile Modeling. Authors: Kai-Yuan Cai. Software Defect and Operational Profile Modeling. Authors Part of the The Kluwer International Series in Software Engineering book series (SOFT, volume 4).
However, operational testing of a software system can raise an issue known as fault masking, whereby the occurrence of a fault prevents the software system from continuing and therefore misses faults that are conditional on the previous code functioning properly. Therefore, fault seeding can fail to provide unbiased estimates in such cases.
The use of fault seeding could also be biased in other ways, causing problems in estimation, but there are various generalizations and extensions of the technique that can address these various problems. They include explicit recognition of order constraints and fault masking, Bayesian constructs that provide profiles for each subroutine, and segmenting system runs. One of the most important principles found in commercial best practices is the benefit from the display of collected data in terms of trend charts to track progress. Along these lines, Selby demonstrates the use of analytics dashboards in large-scale software systems.
Analytics dashboards provide easily interpretable information that can help many users, including front-line software developers, software managers, and project managers. These dashboards can cater to a variety of requirements: Several of the metrics shown in the figure, for example, the trend of post-delivery defects, can help assess the overall stability of the system.
Selby states that organizations should define data trends that are reflective of success in meeting software requirements so that, over time, one could develop statistical tests that could effectively discriminate between successful and unsuccessful development programs.
Analytics dashboards can also give context-specific help, and the ability to drill down to provide further details is also useful: A high percentage of defense systems fail to meet their reliability requirements. This is a serious problem for the U. Department of Defense DOD , as well as the nation. Those systems are not only less likely to successfully carry out their intended missions, but they also could endanger the lives of the operators.
Furthermore, reliability failures discovered after deployment can result in costly and strategic delays and the need for expensive redesign, which often limits the tactical situations in which the system can be used. Finally, systems that fail to meet their reliability requirements are much more likely to need additional scheduled and unscheduled maintenance and to need more spare parts and possibly replacement systems, all of which can substantially increase the life-cycle costs of a system. Beginning in , DOD undertook a concerted effort to raise the priority of reliability through greater use of design for reliability techniques, reliability growth testing, and formal reliability growth modeling, by both the contractors and DOD units.
To this end, handbooks, guidances, and formal memoranda were revised or newly issued to reduce the frequency of reliability deficiencies for defense systems in operational testing and the effects of those deficiencies. Reliability Growth evaluates these recent changes and, more generally, assesses how current DOD principles and practices could be modified to increase the likelihood that defense systems will satisfy their reliability requirements.
This report examines changes to the reliability requirements for proposed systems; defines modern design and testing for reliability; discusses the contractor's role in reliability testing; and summarizes the current state of formal reliability growth modeling. The recommendations of Reliability Growth will improve the reliability of defense systems and protect the health of the valuable personnel who operate them. Based on feedback from you, our users, we've made some improvements that make it easier than ever to read thousands of publications on our website.
Jump up to the previous page or down to the next one. Also, you can type in a page number and press Enter to go directly to that page in the book. Switch between the Original Pages , where you can read the report as it appeared in print, and Text Pages for the web version, where you can highlight and search the text.
To search the entire text of this book, type in your search term here and press Enter. Ready to take your reading offline? Click here to buy this book in print or download it as a free PDF, if available. Do you enjoy reading reports from the Academies online for free? Sign up for email notifications and we'll let you know about new publications in your areas of interest when they're released. | <urn:uuid:635c0d41-170a-485c-a03f-69752bf1daf4> | CC-MAIN-2019-47 | http://domaine-solitude.com/plugins/journals/software-defect-and-operational-profile-modeling-international-series-in-software-engineering.php | s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496665985.40/warc/CC-MAIN-20191113035916-20191113063916-00497.warc.gz | en | 0.941759 | 5,545 | 2.984375 | 3 |
Nitya Aryasomayajula, Riona Chen, Abby Dillon, Yuying Fan, Vaibhav Gupta, Bonita Huang, Apurva Joshi, Emily Liu, Peter Wilson, Julia Wu, Kelly Xu, Olivia Yang
Acton-Boxborough Regional High School, Acton MA
Submitted on 28 April 2018; Revised on 25 May 2018; Published on 22 October 2018
With help from the 2018 BioTreks Production Team.
Plastic pollution is a major environmental problem that disturbs the health and biodiversity of terrestrial and aquatic ecosystems. As of now, the primary way to handle plastics is mechanical recycling, in which plastics are sorted, melted, and extruded into new products. However, only 9% of the currently existing 8.3 billion metric tons of plastics have been recycled (Parker, 2017). Plastic pollution is a particular concern in oceans because gyres – circulating bodies of water – trap plastic products into growing mounds. Moreover, marine organisms ingest plastic that is floating in the ocean, which subsequently creates health problems for them. Our ultimate goal is the bioremediation of areas where plastic has built up. We plan to treat such areas by genetically engineering the yeast species Saccharomyces cerevisiae to break down polyethylene terephthalate (PET) plastic into the environmentally benign molecules ethylene glycol and terephthalate, by using the enzymes PETase and MHETase from Ideonella sakaiensis. We will verify that both of our proteins are expressed extracellularly and that our kill switch and limiting system are effective at controlling the growth of S. cerevisiae.
Keywords: plastic pollution, gyres, polyethylene terephthalate, Ideonella sakaiensis
Authors are listed in alphabetical order. Aaron Mathieu and Anne Burkhardt mentored the group. Please direct all correspondence to firstname.lastname@example.org.
As massive amounts of plastics are produced each year, plastic buildup in oceans and landfill sites is increasingly becoming a major problem for the environment. If steps are not taken to combat this accumulation, it will continue to harm terrestrial and aquatic life. We plan to use genetically engineered Saccharomyces cerevisiae to produce the enzymes PETase and MHETase, both originally found in the bacterium Ideonella sakaiensis (Yoshida et al., 2016). PETase will be used to break down PET plastic into another compound, MHET, which will subsequently be catabolized by MHETase into ethylene glycol and terephthalate.
We plan to use yeast to secrete PETase, an enzyme which hydrolyzes the PET polymer into mono(2-hydroxyethyl) terephthalic acid (MHET). The yeast will then hydrolyze MHET via MHETase into the environmentally benign monomers ethylene glycol and terephthalic acid. To determine whether PETase and MHETase are produced and able to break down PET plastic, we will introduce the modified yeast to PET plastic and observe whether degradation occurs. In the future, we will add a green fluorescent protein (GFP) gene into our system to reduce the time required to detect successful expression. For more information on the addition of GFP, please see the “Future steps” section below.
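The underlying chemistry can be summarized by the simplified scheme below. Each ester cleavage consumes one water molecule; minor PETase side products reported in the literature (such as bis(2-hydroxyethyl) terephthalate) are omitted here for clarity.

```latex
% Simplified two-step enzymatic hydrolysis of PET
\begin{align*}
\text{PET}_{n} + \text{H}_2\text{O} &\xrightarrow{\ \text{PETase}\ } \text{PET}_{n-1} + \text{MHET} \\
\text{MHET} + \text{H}_2\text{O} &\xrightarrow{\ \text{MHETase}\ } \text{terephthalic acid} + \text{ethylene glycol}
\end{align*}
```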
We chose the yeast strain S. cerevisiae as our system chassis because of its ability to secrete proteins extracellularly. This ability will allow PET plastics to be broken down into benign organic components directly in an affected environment, so that the progress of the treatment can be directly monitored and adjusted as necessary.
Both DNA sequences begin with the standard constitutive promoter BBa_J63005. Because the quantity of plastic to be degraded may vary, the amount of protein needed to complete the degradation will vary as well. Therefore, by keeping the system ‘on’ by default, we can produce as much PETase and MHETase as needed. We plan to stop the production of PETase and MHETase after the plastic degradation process is complete by eliminating the S. cerevisiae population using two independent control mechanisms: an externally triggered kill switch and a genetically encoded growth limiter. For more information on these mechanisms, please see the “Safety” section below. We will not use an inducible system due to the difficulty of detecting PET plastic directly. For the PETase system (Figure 1), the constitutive promoter is followed by the standard ribosome binding site BBa_B0034. The next component of the sequence is the PETase coding sequence (BBa_K2010000), whose product degrades PET plastic into MHET. We will finish the sequence with a standard terminator (BBa_J63002).
Figure 1. PETase system.
We plan to produce a nearly identical, separate system for the production of MHETase (Figure 2). This system will parallel the PETase system, the only difference being that the MHETase coding sequence (CDS) will replace the PETase CDS. The order for the MHETase system will be as follows: constitutive promoter BBa_J63005, ribosome binding site BBa_B0034, the MHETase CDS (BBa_K2110014), and the terminator BBa_J63002.
Figure 2. The MHETase system.
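For reference, the part order of the two cassettes can also be written out programmatically. The sketch below is only an illustrative summary of the BioBrick identifiers listed above; it does not represent actual sequence assembly.

```python
# Illustrative summary of the two expression cassettes described above.
# The identifiers are the BioBrick part codes referenced in the text.

PETASE_SYSTEM = [
    ("promoter",   "BBa_J63005"),   # constitutive promoter
    ("rbs",        "BBa_B0034"),    # ribosome binding site
    ("cds",        "BBa_K2010000"), # PETase coding sequence
    ("terminator", "BBa_J63002"),   # standard terminator
]

MHETASE_SYSTEM = [
    ("promoter",   "BBa_J63005"),
    ("rbs",        "BBa_B0034"),
    ("cds",        "BBa_K2110014"), # MHETase coding sequence
    ("terminator", "BBa_J63002"),
]


def describe(name: str, system: list) -> None:
    """Print the part order of a cassette."""
    print(name + ": " + " -> ".join(part_id for _, part_id in system))


describe("PETase system", PETASE_SYSTEM)
describe("MHETase system", MHETASE_SYSTEM)
```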
As per our plan for the use of this system, we intend to release it into the environment. Naturally, this means that it must have extensive safety controls to prevent the uncontrolled spread of genetically modified organisms. To this end, we propose two independent control mechanisms: a kill switch to permit the immediate elimination of the yeast if necessary, and a fail-safe growth limiter to place a hard limit on the number of replications the yeast can undergo.
Since our yeast cells will be released into the environment, we designed a kill switch: a final safeguard mechanism for shutting down our organism. The kill switch would be activated by the addition of preprotoxins, a class of toxin molecules to which only yeast cells are susceptible. Preprotoxins kill susceptible cells in a dose-dependent manner, either by inducing apoptosis or via necrotic pathways. Overall, preprotoxins are well suited for the kill switch because they would not affect plants or animals in the environment, only yeast (Reiter et al., 2005).
The cytotoxic effectiveness of these toxins stems from their ability to disrupt the plasma membrane or essential nuclear processes, which eventually results in cell death. The toxins bind to two receptor molecules in the S. cerevisiae cell wall: the β-1,6-D-glucan receptor and the α-1,6-mannoprotein receptor. The toxins K1 and K28 then kill the yeast cells in a receptor-mediated process (Breinig et al., 2002).
The first pathway involves the binding of the K1 toxin to β-1,6-D-glucan in the cell wall, which brings the toxin to the plasma membrane. There it binds the membrane receptor Kre1p, and this binding induces the formation of cation-selective ion channels that disrupt membrane function, eventually culminating in cell death.
The second pathway involves the binding of the toxin to the α-1,6-mannoprotein receptor, which allows the K28 toxin to enter the cell. From there, K28 moves from the cytoplasm to the nucleus, where it shuts down DNA synthesis. This ultimately results in apoptosis, as the arrest of DNA synthesis blocks the cell cycle and, over time, leads to the degradation of the cell (Zhang et al., 2006).
Overall, preprotoxins would function as an effective kill switch because they not only kill yeast cells through two different pathways, but also act specifically on yeast, and would therefore be environmentally sound in aquatic environments. Other kill switch possibilities include the addition of ammonium, acetic acid, or other antifungal compounds. However, adding any of these compounds to a body of water could drastically impact other organisms, because the resulting toxicity or acidity may inhibit their metabolic activity. Thus, the use of preprotoxins would be the best way to eradicate the modified yeast cells with minimal environmental damage.
As we intend to release our yeast into the environment, there is a significant possibility that some cells could escape from the intended bioremediation area, even if we attempted to activate the kill switch. Therefore, we will incorporate a fail-safe limit on the number of replications the yeast can undergo in the wild. We plan to use the growth limiter developed by the 2014 Cooper Union iGEM team, which operates through the elimination of telomerase activity, as yeast without telomerase can undergo only a limited number of replications before senescence (Jay et al., 2016).
Yeast with nonfunctional copies of two genes, EST2 and RAD52, cannot extend their telomeres and therefore irreversibly pass into senescence after a limited number of replications (LeBel et al., 2009). However, we still wish to permit indefinite replication in laboratory settings. We will therefore delete the yeasts' native copies of EST2 and RAD52 and reintroduce them under the positive control of galactose. When the yeast are grown on galactose media in the lab, they will be able to replicate indefinitely; once released into the wild, where galactose is not available to induce EST2 and RAD52, they will be limited to a finite number of divisions before senescence.
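To illustrate the effect of this fail-safe, the short sketch below compares the growth of a division-capped population with uncapped growth. The cap of 60 generations, the starting cell number, and the assumption that senescent cells simply stop dividing (rather than dying) are placeholder assumptions for illustration only, not measured values.

```python
# Minimal sketch: escaped cells that senesce after a fixed number of divisions
# versus cells that can divide indefinitely (e.g., when grown on galactose).
# The cap of 60 generations and the starting population are illustrative
# assumptions, not measurements; senescent cells are assumed to persist
# without dividing further.

MAX_DIVISIONS = 60      # assumed generations before senescence
START_CELLS = 1_000     # assumed number of escaped cells


def population(generations: int, capped: bool) -> int:
    """Population size after the given number of doubling generations."""
    effective = min(generations, MAX_DIVISIONS) if capped else generations
    return START_CELLS * 2 ** effective


for g in (10, 60, 100, 200):
    print(f"generation {g:>3}: "
          f"capped {population(g, True):.2e} cells, "
          f"uncapped {population(g, False):.2e} cells")
```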
Unfortunately, we were unable to test our systems in a laboratory setting because both PETase and MHETase BioBrick parts were out of stock in the iGEM parts registry. If we could acquire the aforementioned coding sequences necessary to complete both systems, we would perform DNA extraction and purification using standard procedures, such as a minipreparation and spin column purification.
Our first goal is to test the efficacy of this system, and if it is able to degrade PET. Future experiments will aim to study the rate at which our system can degrade PET, and the optimal aquatic environmental conditions for yeast protein expression, including pH.
The optimal pH for PET film hydrolysis is 9 while the pH of freshwater lakes is 6.5–8.5, so PETase activity could potentially be suboptimal in a freshwater lake. Tests for the optimum temperature for our system must also be conducted, as well as determining the minimum operating temperatures. While the optimal temperature for yeast performance is 37°C, the optimal temperature of PET hydrolysis is 40°C. Both of these conditions are likely to be different from the temperature of a body of water, which my fluctuate as a result of seasonal changes and vary by geographical location.
Proof of Concept
A possible configuration to test for the successful production of PETase and MHETase is the use of a reporter protein that consecutively follows the protein of choice via a constitutive promoter (see Figures 3). Green fluorescent protein (GFP) is commonly used as a reporter protein to indicate that an initial gene has been correctly expressed. If a GFP gene is inserted properly into another organism, it will be able to act as a visual tag to show the expression of other genes. However, it is important to note that there is a possibility of PETase being expressed but GFP failing to be expressed and vice versa. Nevertheless, correct assembly of this system will typically allow GFP to function effectively as a reporter protein: GFP should only be allowed to be properly expressed if the first protein (PETase or MHETase) is expressed as well. Alternatively, tests such as SDS-PAGE or Western Blotting may be used to validate the presence of the proteins without altering the current system plans (Figures 1 and 2).
|Figure 3. A) PETase with GFP indicator. B) MHETase with GFP indicator.|
Fortuitously, testing the actual breakdown of PET, our end goal, is quite feasible, so a GFP indicator or other tests may be unnecessary. We hope to test for the successful expression of PETase and MHETase by letting the modified yeast “digest” very thin PET plastic. Using a scanning electron microscope, catabolism of the PET plastic can be verified if increasing ruggedness and holes are observed in the surface of the PET plastic (Tianjin iGEM team 2016). However, the additional use of a GFP indicator is worth consideration as it may act as a quicker indicator of protein expression than plastic surface degradation.
The following experiments would be vital for maximizing the efficiency of our system.
We would firstly need to develop the time frame needed for protein expression, as well as determine the exact rate of plastic degradation for a variety of sizes of PET plastic sheets. An additional experiment could be the concentration of protein needed to break down PET. Our system must also be tested for degradation rates with different densities of PET plastic.
Another area for future exploration is the addition of tags to boost transcription efficiency. Extensive testing has been previously conducted by the Harvard 2016 iGEM team and others on this subject, but PETase and MHETase modified with transcription tags have often been unsuccessful in transcription and translation. Further research into boosting promoter-binding affinity that results in improved rates of protein expression is also required. This will be necessary to determine if our system is (or can be made to be) competitive with current mechanical collection and recycling methods.
In addition, we must prove that we can extracellularly express both proteins. This is crucial for the actual implementation of our system for bioremediation, so that the system can operate freely in the environment until it is no longer desired, at which point the kill switch or inhibitory mechanism will be triggered. Yeasts have a wide array of secretory expression tags, so we plan on performing extensive testing to identify the tag that allows for the maximum expression rates of PETase and MHETase.
As any organism genetically engineered for eventual release into the environment requires extremely robust controls, we will need to complete extensive testing of both of our control systems for the modified yeast: the kill switch and the growth limiter.
For the growth limiter, we will need to validate three things. Firstly, if we successfully add EXT2 and RAD52 back to our growth-limited yeast, the yeast will then grow indefinitely in the lab. Secondly, the yeast must pass into senescence after an appropriate period of time. Thirdly, we must validate that this control remains evolutionarily stable. The first two requirements are easy to verify by simply growing the yeast for a period of time with and without galactose, and comparing the duration of time before yeast populations significantly decline. However, verifying evolutionary stability over a long period of time and for large population sizes of yeast will be difficult, especially as our growth limiter relies on DNA damage. Therefore, our validation plan is to grow up a very large population of yeast, and see if any are able to inactivate the growth control. If they are, we will add in a second inducible promoter under the control of a different transcription factor, so that a double mutation is required to cause control failure. Once the growth limiter can pass this test, we will find a completely different way of preventing telomere lengthening, place it under inducible control, and subject it to the same tests so that we will have two independent growth limiters to ensure safety in the final system.
Though our team has yet to test this system, other synthetic biology teams have proven successful in the expression and verification of the catabolic properties of PETase and MHETase on PET plastics (for example, see the 2016 Tianjin iGEM team’s “Plasterminator” page). In addition, several iGEMs have constructed the PETase and MHETase nucleotide sequences into BioBrick parts, some even including secretion systems for the proteins (Harvard 2016 iGEM team). However, none of these BioBrick parts for PETase and MHETase are currently in stock or demonstrate reproducible fidelity. Thus, there is great capacity for outreach and collaborative possibilities. Our final option is the manual synthesis of these genes according to NCBI or UniProt sequences. Nonetheless, we plan on the eventual construction of our system in order to test its feasibility and efficacy.
Our previous design for the PETase and MHETase systems combined both PETase and MHETase genes in the coding sequence region (CDS) of the system (Figure 5). However, it came to our attention that there is a high likelihood of the protein being misfolded if the encoding sequence is too long. Thus, we decided to separate the combined genes into two separate CDS systems. Initially, we had planned on implementing both the PETase and MHETase systems into one strand of nucleotides (Figure 6). After further feedback from peers, we revised the system so that the PETase and MHETase systems are entirely independent on separate strands. This is simply due to the increased efficiency and agency over the functioning model. With separate strands, the amounts of PETase and MHETase can be more specifically monitored and tailored to suit the treatment area. In addition, this will allow for more efficient protein degradation, as we will first release PETase only, allowing the enzyme to fully metabolize PET into MHET, before proceeding with the addition of MHETase. Thus, no incomplete or “transition” metabolites between PET and MHET should be present in the treated area with the proposed system (see Figures 4).
|Figure 4. Models 1 (A) and 2 (B).|
A future possibility for our design systems is the mass production of these two proteins by biotechnology companies. If this were the case, the proteins could be expressed in a more rapidly growing chassis, such as E. coli, then extracted and purified using standard means, such as detergents and spin columns. This could allow for a greater ability to upscale the production of PETase and MHETase. In addition, there would be finer control over the quantities of PETase and MHETase produced, allowing for more precise treatment of polluted areas. This would also eliminate the risk of genetically modified organisms escaping, as well as reducing the toxic potential of any chemicals we might use as a kill switch for the constitutive system.
A second variant of the system could instead use an inducible promoter. However, there is no currently existing promoter that can selectively and accurately detect PET plastic. There is a further risk of environmental toxicity from the added transcription factor, as well as the additional cost of producing the transcription factor.
A significant area of concern is the toxicity of the kill switch systems. For example, preprotoxins are an excellent kill switch because they selectively induce apoptosis in yeast cells. However, further research needs to be conducted on the presence and ecological niche of naturally occurring yeasts in aquatic environments to ensure as little species displacement and ecosystem disruption as possible. Ideally, a less malignant or more selective chemical kill switch will be discovered.
In addition, it is unknown whether a mutation for shorter telomeres in a yeast plasmid could conjugate and spread to other non-target organisms. In such a case, it may be best to perform plastic degradation in a closed, controlled environment away from aquatic ecosystems.
Finally, there is some concern over the environmental toxicity of ethylene glycol and terephthalate, and their effects on aquatic life forms and water quality. Although the Environmental Protection Agency (EPA) has not classified ethylene glycol as a carcinogen, it has been shown to be fetotoxic and linked to detrimental kidney and liver effects in rodents (EPA ethylene glycol hazard summary). This could be a concern if the byproducts were to be ingested by aquatic organisms, potentially manifesting in the biomagnification of harmful toxins in humans from eating seafood.
We would like to thank Dr. Natalie Kuldell for her time and expertise, as well as the opportunity to present our design ideas and receive feedback from our peers in BioBuilder at the Final Presentation Session in LabCentral. We would also like to thank our mentor, Anne Burkhardt, for her advice and suggestions.
Ajo-Franklin, C. Part:BBa_J63005: Experience [Internet] IGEM: Register of Standard Biological Parts. Cambridge (MA): The International Genetically Engineered Machine Competition. 2006 Oct 11 [cited 2018 Apr 18]. Available from: https://bit.ly/2Ir8dcp.
Breinig, F, Tipper, DJ, Schmitt, MJ. Kre1p, the plasma membrane receptor for the yeast K1 viral toxin. Cell 2002 Feb 8; 108(3):395–405.
Coghlan, A. Bacteria found to eat PET plastics could help do the recycling. [Internet] New Scientist. 2016 March 10 [Cited 2018 Apr 18]. Available from: https://bit.ly/21rmAOY.
Fondriest.com (n.d.) pH of Water [Internet]. Fundamentals of environmental measurements. Fondriest Environmental Inc [Cited 2018 Apr 18]. Available from: https://bit.ly/2PMfNBB.
Parker, L. A whopping 91% of plastic isn’t recycled. [Internet] National Geographic. 2017 July 19 [Cited 2018 Apr 18]. Available from: https://bit.ly/2z2oCCI.
Reiter, J, Eva H, Frank M, Schmitt, MJ. Viral killer toxins induce caspase-mediated apoptosis in yeast. J Cell Biol. 2005 Jan 21;168(3):353.
Ribeiro, GF, Côrte-Real, M, Johansson, B. Characterization of DNA damage in yeast apoptosis induced by hydrogen peroxide, acetic acid, and hyperosmotic shock. Mol Biol Cell. 2006 Aug 9;17(10).
UniProt. A0A0K8P6T7 (PETH_IDESA) [Internet] (n.d.) [Cited 2018 Apr 18]. Available from: https://bit.ly/2y2oXC3.
Yoshida, S, Kazumi H, Toshihiko T, Ikuo T, Hironao Y, Yasuhito M, et al. A bacterium that degrades and assimilates poly(ethylene terephthalate). Science 2016 Mar 11;351(6278):1154–96.
Zhang, NN, Dudgeon, DD, Paliwal, S, Levchenko, A, Grote, E, Cunningham, KW. Multiple signaling pathways regulate yeast cell death during the response to mating pheromones. Mol Biol Cell. 2016 May 31; 17(8). | <urn:uuid:0fb828bc-f587-4f9e-9f25-f4733b6ce035> | CC-MAIN-2019-47 | http://biotreks.org/e201805/ | s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496669755.17/warc/CC-MAIN-20191118104047-20191118132047-00417.warc.gz | en | 0.920613 | 5,096 | 3.1875 | 3 |
The basic equipment you need to record sound are a microphone, some electronics to amplify and digitise the signal and a means of storing the recording. It’s also a good idea to monitor what you are recording for which you will need headphones. A smartphone or voice recorder will incorporate all of these but they are not usually good enough to pick up the quieter sounds of wildlife in the field. Pretty soon you will want to construct your own system from the constituent parts. Most of the equipment on the market was designed for the music or film industries, whose needs are often a little different from ours, so we need to pick and choose carefully. Below we run through a recordist’s basic equipment to help you with those choices.
A microphone is any device that converts sound waves into electrical signals. There are many ways of achieving that. You’ll read about dynamic, ribbon, carbon, piezoelectric and countless other types but much the most commonly used and practical for wildlife recording are condenser microphones. We’ll only deal with those here. A condenser (or capacitor) is a simple electrical component consisting of two sheets of conducting material separated by insulating material and holding an electrical charge between them. The electrical signal a condenser microphone produces results from the change in voltage between the two sheets that occurs when sound waves distort them.
The means by which the voltage across the condenser is set up is the basis of the difference between two main types of condenser microphone. In the conventional type the voltage is simply generated by a power supply. In the second type, the electret microphone, the voltage is electrostatic, set up and “baked in” when the device is manufactured. You’d think the charge would soon leak away but, in fact, it is so effectively sealed within insulating material that it is good for hundreds of years. Electret microphones are small, robust and cheap, and modern ones are remarkably sensitive and with low self-noise – perfect for a beginner dipping in a toe without committing to great expense. They do not, however, have the dynamic range or the capacity to capture the detail that a conventional condenser microphone has. Many factors determine the quality of a microphone including its robustness for field use, the diaphragm size (bigger more detailed) and impedance (lower the better); in general you get what you pay for.
One particular curse of the condenser microphone is its susceptibility to humidity and condensation. This does not necessarily destroy the mic but will incapacitate it until it is dried out. A third variant, the RF condenser microphone, has been introduced to reduce this problem. While these also employ a condenser to detect sound waves it is utilised in a different way: they set up a rapidly oscillating electrical circuit whose frequency is determined by the size of the gap between the condenser sheets. Thus sound modulates the frequency rather than the amplitude of the voltage, which makes them far more robust for use in the field. Further circuitry is required to generate the oscillation and convert the frequency change to a voltage change that the recorder can understand, all of which makes these microphones quite a lot more expensive.
Power and cables. Most microphones have a built-in amplifier circuit required to convert their tiny voltage changes into something strong enough to be reliably passed
In most cases the signal travelling between the microphone and the recorder is weak and will require considerable boosting by the recorder’s pre-amp. The cables themselves can be long and could act as an aerial picking up any electrical interference from mains hum to Radio Moscow, which will, of course, also be amplified by the pre-amp. To prevent this problem XLR cables are balanced. This involves not just shielding and twisting of the wires to minimise interference, but also a cunning piece of circuitry that subtracts the noise. Two copies of the signal are passed down separate wires, in one of which the voltage has been reversed. As both signals pass along the cable both suffer the same interference. Then, at the receiving end, the reserved copy is flipped back to its original polarity. Now the signal in both wires is of the same polarity while the polarity of the noise is reversed so combining them neatly cancels out the noise. Unbalanced cable of the sort often used to provide PiP is not so protected.
Frequency response. We are lucky because, unless you intend to record bats or elephants, most wildlife uses the same range of frequencies as the human ear detects. Actually, this is probably no coincidence because the human ear is adapted to hear most of what is going on in the natural world. And, in any case, the sounds of the natural world we find interesting are obviously those we can hear: ultra- and infra-sound, scientifically fascinating though they may be, do not form part of those majestic dawn choruses, or spine-tingling calls echoing through the forest, that we find so attractive. So most microphones will pick up an appropriate range of frequencies. We also share with most other microphone users the desire for a flat response that doesn’t emphasise some frequencies over others. So, in this respect, what we want is usually available commercially. But again, you get what you pay for: more expensive microphones tend to have a flatter response over a broader range of frequencies.
Microphone directionality. Not all microphones pick up sound equally from all directions. Those that do are known as “omnidirectional”, or “omnis”. Other common types are cardioid (essentially one-sided), super- and hyper-cardioid (or shotgun – reject sound from all but a narrow angle in the direction that the mic is pointed), bidirectional (or figure-of-eight – picks up sound on either side of the plane of the microphone but rejects sound in its plane).
Microphones that pick up sound from a particular direction obviously allow you to record a particular subject while rejecting noise from elsewhere. Long, shotgun microphones are the best at this. They are not, however, a panacea for the problem of environmental noise. Noise will reach the microphone both because the microphone’s off-axis rejection is only partial and because noise is reflected off objects in the environment.
Directional microphones do not amplify sounds from the direction in which they are pointed; they only reject sounds from other directions. A parabolic reflector with a microphone set at its focal point does amplify sound from a narrow direction. The larger the dish the greater will be the amplification, though clearly there is a trade-off with practicality (about 56cm is found to be a good compromise). The angle within which the signal is amplified is narrow so it is a good idea to use a transparent dish to avoid obscuring your view of the subject. A parabolic dish offers an effective and simple means of isolating a songbird’s song and can be used to make excellent recordings. The beginner will appreciate the instant success this brings. Dishes do, however, “colour” the sound: they amplify higher frequencies much more than lower ones and any sound with a wavelength below that of the width of the dish will not be amplified at all. This problem is even more pronounced for off-axis sounds, which can sound quite distorted and strange. Commercially available dishes can be a little expensive for the beginner but DIY solutions are possible.
Stereo. Isolating a single singing bird’s song is not everyone’s objective. Stereo recording can provide a far fuller, more immersive sound picture of a whole habitat. They are also capable of separating out different sounds that in a mono recording may blur together. For this reason noise can seem less intrusive, although generally harder to avoid, than in a mono recording.
The brain uses two clues to work out the direction a particular sound is coming from: phase and amplitude. A sound coming from your left will arrive slightly earlier and sounds louder in your left ear. There are numerous stereo techniques but all utilise timing and/or intensity differences. Here are some commonly used techniques:
Spaced pair (or A/B configuration): two omni or cardioid microphones set 1-3 meters apart.
Coincident pair (or X/Y configuration): two cardioid microphones set at an angle (often 90°) to one another and as close together as possible.
Mid-side (M/S) arrangement: a cardioid microphone facing the subject and a figure-of-eight (or bidirectional) microphone at right angles and as close as possible to this. The use of the fig-of-8 mic and need to decode the signals seem awkward but are repaid by allowing you to adjust the balance between the main subject and the surrounding ambience. The Blumlein configuration is similar but involves two figure-of-eight microphones.
ORTF (Office de Radiodiffusion Télévision Française) technique: two cardioid microphones separated by 17cm and facing at an angle of 110° to on another.
Binaural dummy head: a dummy head with anatomically correct (silicone) ears within each of which is place a small omni microphone.
All of these can produce effective stereo sound images but each has its pros and cons. A problem common to all techniques that involve microphones that are spaced out is that, when combined as a mono track, they suffer interference: some wavelengths cancel out while others are reinforced. (Similar problem occurs when the listener is not equidistant from stereo speakers.) Those techniques that use coincident microphones (X/Y and M/S) do not suffer this, although they do lose all timing information. The rather bizarre binaural head is in some ways the most realistic, especially when recordings are played back through headphones.
Wind. The slightest breeze will cause an awful roar on any unprotected microphone. Many microphones come with an open-pore foam windshield that will deal with light winds. But more wind than that requires the mic to be enclosed in a cage surrounded with open-weave, furry material. These “blimps” can be astonishingly expensive to buy. Fortunately, equally effective, homemade alternatives can be constructed for a fraction of the cost. A wire birdfeeder, covered with a stocking or faux fur and the microphone secured in place with elastic bands works well. The important point is that the covering should be acoustically transparent across all wavelengths.
A vast and ever-changing range of portable recording devices is available. Given that the recorder is probably the most important and expensive piece of kit the recordist will own, this can be quite daunting. Before deciding the most appropriate for you it is necessary to understand the many functions of the recorder.
Input from microphone. All but the very cheapest recorders include sockets to attach external microphones. These, as we have seen above, come in two sorts: stereo jack sockets for unbalanced cable and XLR for balance cable. The cheaper recorders will only provide the former with plug-in-power (PiP) while more expensive devices will also provide two or more three-pin XLR sockets with phantom power. Many also have a third input socket, “line in”, which expects an already-amplified input and skips the device’s pre-amp.
Preamplifier. For the quiet sounds of wildlife this is often where the game is won or lost. At the cheaper end of the market, preamp noise is often a much greater problem than that from the microphone. The further you turn up the pre-amp gain, the noisier it becomes. Thus even a microphone with a good signal-to-noise ratio, if it is quiet, can be unusable due to the noise introduced by the preamp. Fortunately, recorders with very clean pre-amps are becoming increasingly affordable. This area is changing rapidly so seek advice on what are currently the best.
Digitising. Once suitably amplified the signal is digitised: it is sampled thousands of times per second and its value at each sample recorded as a digital number. It is now essentially immune to degradation or acquiring further noise. The frequency of sampling (the “sampling rate”) and the precision with which each sample’s value is recorded (the “bit depth”) determine the quality of the recording. Common settings are those used for audio CDs: 44,100 samples per second (44.1 kHz) and 16 bits per sample (allowing 65,536 intensity levels). Some recorders will allow greater sampling rates and bit depths for improved definition but, of course, these produce larger data files. Fiddling with these values is something for the experts and need not bother the beginner.
Storage. Almost all digital recorders these days store the digitised signal on standard SD (Secure Digital) cards like those used in cameras and other digital devices. If you’d like to make lengthy recordings you may want to replace the SD card that comes with the recorder with something larger, but before buying a whopping one check that the recorder can read it. At the standard sampling and bit depth 1 hour’s stereo recording will occupy approximately 0.6 GB. To save space most recorders will allow you to store the recording in compressed (usually MP3) format. While this inevitably results in some loss of information, nevertheless huge space savings can be achieved with no discernable loss of quality.
Controls and display. Even the simplest of recorders will have innumerable settings. All have some sort of menu system to control such things as the sampling rate and bit depth, which input channels to record from and whether to record a stereo or mono signal, whether to compress the data, whether to automatically control the gain (rarely advisable for our purposes!), to switch on “pre record” (i.e. continually record a buffer of a few seconds to capture sounds immediately before the record button is pressed) and setting which type of batteries are being used (this mostly so that the battery life display is accurate) and the time and date (don’t forget this whenever replacing the batteries). Each device has its own system of buttons and knobs to navigate around the menu, which can be a little fiddly and counterintuitive if you aren’t used to it. Matters are often made worse by the quality of the display; it is worth checking that this can be read outdoors in daylight. More sopisticated machines have more options so, to make life easier, often provide a number of “presets” which, once set up, allow a whole suite of setting to be selected at once. There will often also be several knobs that allow you to adjust the preamp gain and the headphone volume. Finally there will be three standard buttons to start recording, stop and replay. All in all, recorders can be quite complicated to use so it is essential that you spend some time getting to know all its ways before taking it out into the field.
Batteries and battery life. Almost all portable recorders run on AA batteries (one or two use a Li-ion battery). These may either be disposable or rechargeable. Modern nickel metal hydride (Ni MH) rechargeable batteries store so much energy (up to 2800mAh) and hold their charge so well that it is hard to see why anyone would ever bother with single-charge batteries. However, be warned that they have a small fire risk and rechargeable batteries should never be carried in hold luggage on an aeroplane. Recorders themselves vary enormously in the power they require: some will record for 24 hours without draining the batteries while others, especially when phantom power is used, barely lasts half an hour. This means obviously that some recorders simply are not suitable for lengthy, unattended recording. And it is always worth carrying plenty of fully charged spares. Many recorders can be powered from additional, external sources, perhaps the most practical of which for field recording being Li-ion power packs usually used to charge mobile phones.
Once you have your recording safely digitised and on your computer you will want to polish it up a little. It may need amplifying again and noisy or uninteresting bits trimmed off. You might also attempt a little filtering (perhaps taking out frequencies below 100 Hz) to reduce wind or traffic noise. All of this can easily be done with free software available on the Internet. More ambitious cleaning or mastering of the recording is something of a black art and requires more sophisticated and expensive software. There are different schools of thought about how far down that road wildlife sound recordists should go. For some interfering with the “natural” recording is an anathema; for others what the microphone captures and what the ear hears are two different things and returning the former to the latter is an essential objective. Certainly over-use of filtering or noise reduction can simply sound weird. But, done well, who wouldn’t prefer a wildlife recording with that yapping dog or passing motorcycle removed? And what harm is there in sometimes enhancing natural sounds to make them more dramatic or emphatic? In the end it’s up to you and how scientific or artistic you want to be.
In all probability your first recordings will be disappointing, spoilt by hisses and roars, and bumps and rustles. You need to learn the when, where and how to record; to take account of conditions and the habits of the creature you hope to record. How far away and how busy are the nearest roads? Each tree leaf or reed spear rustles quietly but in a wood or reed bed with more than a breath of wind millions combine to produce a din that may overwhelm your recording. A good recording requires planning and ears that hear and assess all the sounds in the environment.
Plan as you might, environmental noise is almost impossible to avoid completely. It can be mitigated to some extent with a shotgun microphone or parabolic reflector, but these, as we’ve seen, have limitations. Better to place your microphone as close to the subject as possible without disturbing it. That’s surprisingly easy to do with some small songbirds but most require a little cunning and patience. Observe your subjects and get to know their habits. Then, on a calm day before the traffic gets going, set up your microphone where you know they will return, withdraw, hide and watch and wait with your recorder on the end of a long cable. Or, simply leave your kit recording unattended. Although it can be frustrating afterwards to work out who was doing what, this is an excellent method to capture the natural soundscape of a habitat. You’ll be amazed what goes on when you aren’t there! Obviously your presence can be inhibiting but what about the presence of your microphone placed so close to your subject? A furry wind muff can look a lot like a predatory mammal, so toss a piece of camouflaged scrim over it.
Sometimes there is no alternative to pointing a handheld microphone at a bird encountered by chance. That’s when you discover how noisy you are: all that breathing, coughing and stomach rumbling. Your bones creak – everyone’s do. It helps to cradle your mic in a “pistol grip” to isolate it from handling noise, but you still need to be ultra still and careful. What about that cable swinging about between the mic and the recorder? The sound of everything it touches is transmitted directly to the mic. For a scientific study you may want quantity rather than perfection and handheld microphones are the only practical method. A few yards of cable between you and the microphone make a huge difference but it does curb spontaneity.
There are many more lessons to learn. A large part of the fun of sound recording is in overcoming the challenges posed when trying to capture natural sounds. When you’ve come up with a new solution of your own, you can head over to the WSRS Forum and tell other recordists about it. | <urn:uuid:f649cdb5-2e6c-4af3-9810-06e393e05dba> | CC-MAIN-2019-47 | https://www.wildlife-sound.org/index.php?option=com_content&view=article&id=235&Itemid=162 | s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496670389.25/warc/CC-MAIN-20191120010059-20191120034059-00016.warc.gz | en | 0.94721 | 4,120 | 3 | 3 |
Knowledge of rabies and dog-related behaviors among people in Siem Reap Province, Cambodia
Tropical Medicine and Health volume 46, Article number: 20 (2018)
The rabies incidence and the dog population in Cambodia are much higher than in nearby countries. Knowledge and behaviors related to rabies and dogs are considered contributing factors in rabies infection control in the community; however, such information from rural Cambodia is limited. This cross-sectional study aimed to assess knowledge and experiences related to rabies, as well as dog-related behaviors, among people in Siem Reap Province, and to identify factors associated with adequate knowledge.
Four-stage sampling was employed to identify villages and households. In total, 360 respondents were interviewed using a structured questionnaire. Data were descriptively summarized and logistic regression was performed to estimate odds ratios of adequate knowledge related to rabies for respondents’ characteristics.
Only 9.7% of respondents had adequate knowledge of rabies. Of the respondents, 86.9 and 18.3% had heard of or seen a suspected rabid dog and a suspected rabid human, respectively. More than two-thirds (70.6%) of households had at least one dog, and the ratio of the dog population to the human population was 1:2.8. Only a few owners had vaccinated their dogs, used a cage, or tied up their dogs. Visiting a health center was the first treatment choice of respondents when bitten by a dog. However, post-exposure prophylaxis (PEP) was not commonly expected as a treatment by respondents. Those with higher education were more likely to have adequate knowledge than those with no education (adjusted OR 12.34, 95% CI 2.64–57.99, p < 0.01). Farmers and non-poor families were also less likely to have adequate knowledge than those in other occupations and poor families (adjusted OR 0.30, 95% CI 0.12–0.76, p = 0.01, and adjusted OR 0.13, 95% CI 0.04–0.47, p < 0.01, respectively).
A high dog population, inadequate knowledge of rabies, low recognition of human rabies, and poor dog management were found to be serious challenges for controlling rabies. Health education related to rabies should be introduced, particularly targeting farmers, who frequently encounter stray dogs but have little knowledge of rabies risk factors and signs. At the same time, PEP delivery and dog management should be improved.
Rabies is an ancient viral zoonotic disease that can be transmitted to humans from infected animals such as dogs, cats, and other wildlife. Among them, dogs are the most important rabies reservoir; 96% of reported human rabies cases are caused by dog bites [1, 2]. It is estimated that over 59,000 people die of rabies annually worldwide, and the majority of cases occur in Asia and Africa [3,4,5,6,7,8]. Although the fatality rate of rabies infection is nearly 100%, rabies can be prevented by appropriate vaccination of high-risk people in advance (pre-exposure prophylaxis or Pre-EP) and of dog bite victims (post-exposure prophylaxis or PEP). More than 15 million people worldwide receive PEP annually to prevent rabies deaths. The global community has a strong commitment to eradicating rabies worldwide by the year 2030.
In order to prevent human deaths from rabies, several strategies are implemented, including increasing access to Pre-EP for people at risk and to PEP for dog bite victims, and promoting awareness and knowledge related to rabies in the community through on-site health education or mass media. Along with dog registration, dog vaccination against rabies is also needed [3, 11, 12]. To implement the above-mentioned measures, collaboration is required among the public health sector, the veterinary health sector, communities, and others. It is recommended that the public health sector strengthen the national rabies policy, including rabies control, and comprehensively coordinate rabies surveillance. It is also important for the veterinary health sector to develop a dog management policy. Communities need to change their attitudes toward and behaviors related to dogs, and development partners should support or facilitate technical and financial processes for all sectors listed above. Public education about rabies is neither easy nor simple. The willingness to participate in health education and awareness of rabies among people is limited. Health promotion and education are most likely to succeed through the cooperation of human and animal health authorities.
Cambodia is one of the countries most heavily burdened by rabies. According to a study conducted by the Institute Pasteur of Cambodia, the estimated incidence in 2007 was 5.8/100,000 population (95% CI 2.8–11.5), corresponding to an estimated 810 human rabies deaths (95% CI 394–1,607) in the whole country. The same study found that the ratio of the dog population to the human population was 1:3, and by that estimate the dog population in Cambodia could be 5 million. The rabies incidence and the number of dogs in Cambodia are much higher than in nearby countries in Southeast Asia. Although the number of rabies patients is smaller than that of other common diseases such as malaria, dengue fever, and acute respiratory infection, the estimated number of deaths was greater than those from these other infectious diseases because of the extremely high fatality rate of rabies.
Basic knowledge of the disease and its treatment is important for rabies infection control in the community. In addition, personal experience related to rabies and dogs, as well as dog-related behaviors such as feeding and managing them, are also considered contributing factors. Data on the current situation are necessary for both health authorities and communities to improve community knowledge and behaviors. However, such information available in Cambodia comes only from Phnom Penh and Kandal Province, not from rural areas. The objectives of this study were to assess knowledge of rabies, experiences related to rabies and dogs, and dog-related behaviors among people in Siem Reap Province, and to identify factors associated with adequate knowledge.
Study design and setting
A cross-sectional study was carried out in Siem Reap Province, Cambodia, from December 24, 2013 to January 13, 2014. Multi-stage cluster sampling was undertaken. First, 3 of the 12 districts in Siem Reap Province were randomly selected. In the second stage, two communes were randomly selected from each district. In the third stage, two villages were randomly selected from each commune. In this way we obtained 12 villages in total from the 926 villages within Siem Reap Province. According to the 2013 provincial health report, a total of 13,685 people lived in the 2,383 households of the 12 selected villages. The average population per village was around 1,140 (ranging from 582 to 2,130 people per village), and the average number of households per village was approximately 198 (ranging from 96 to 375 households per village). Among the 12 villages selected, two were located in a mountainous area while the other 10 were in a lowland rice field area. The nearest village was approximately 15 km from Siem Reap town, while the farthest was approximately 75 km away. The roads to all villages were passable at that time because it was the dry season.
Thirty respondents were chosen from each village. Because households are usually located alongside the main road, the target households were systematically sampled along the road by using a specified interval for each village: the number of households in the village divided by 30. The first households for the fourth stage were selected from either side of the village boundary where the main road passes. When no eligible respondents were available at the selected house, the next house was selected. Only one respondent was interviewed in each target household, and the primary target of the interview in each selected household was the head of the household. If the household head was not available at that time, a member of the household aged 18 years or older was accepted as the respondent. In total, 360 respondents were obtained.
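As an illustration of the fourth-stage selection, the sketch below (not the authors' code; the function name, the 1-based positions, and the example village size are assumptions) computes the sampling interval and the household positions an interviewer would visit when walking from the village boundary along the main road.

```python
# Minimal sketch of the fourth-stage systematic household sampling:
# 30 households per village, taken along the main road at a fixed interval
# (number of households in the village divided by 30).
def systematic_sample(n_households: int, n_target: int = 30) -> list[int]:
    """Return 1-based household positions, walking in from the village boundary."""
    interval = n_households / n_target        # e.g. 198 households / 30 = 6.6
    return [int(round(i * interval)) + 1 for i in range(n_target)]

# Example: a village of about the average size reported above (198 households)
print(systematic_sample(198))   # [1, 8, 14, 21, ...] -> 30 positions
```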
A structured questionnaire was developed based on previous studies [13, 17,18,19], and modified to suit the local context by consulting with experts. The questionnaire included the following items: (1) characteristics of respondent (age, gender, marital status, education, occupation, monthly income, and dog ownership), (2) knowledge related to rabies, (3) experience related to rabies and dogs, and (4) dog-related behaviors.
Four interviewers were trained by the principal investigator for 2 days before data collection. Health centers provided logistic support and the Village Health Supporting Group (VHSG) guided interviewers in each village. A face-to-face interview was conducted when verbal consent was obtained from an eligible respondent.
Data entry and analysis
Data analyses were performed in the following steps. First, respondents' characteristics, knowledge, experience, and behaviors were descriptively summarized. Chi-squared tests were performed to investigate associations between dog ownership and each respondent characteristic. To evaluate the level of rabies-related knowledge, correct replies were counted and their distribution described. Respondents who answered all six knowledge questions correctly were classified as having adequate knowledge; all others were classified as having inadequate knowledge. Logistic regression analyses were then performed to estimate crude and adjusted odds ratios (ORs) of adequate knowledge for respondents' characteristics, which were mutually adjusted and presented with 95% confidence intervals (CIs). P values less than 0.05 were considered statistically significant. The Epi Info 7 software program, developed by the Centers for Disease Control and Prevention (CDC), USA, was used for data entry, data management, and analyses.
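The following sketch illustrates, but does not reproduce, this analysis plan in Python rather than Epi Info; the data frame, variable names, and prevalences are invented for demonstration only and are not study data.

```python
# Illustrative analysis sketch: chi-squared test plus logistic regression
# yielding odds ratios (OR) with 95% confidence intervals (CI).
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy.stats import chi2_contingency

np.random.seed(0)
df = pd.DataFrame({                       # toy data for 360 "respondents"
    "adequate_knowledge": np.random.binomial(1, 0.10, 360),
    "owns_dog":           np.random.binomial(1, 0.70, 360),
    "higher_education":   np.random.binomial(1, 0.125, 360),
    "farmer":             np.random.binomial(1, 0.80, 360),
})

# Chi-squared test of association between dog ownership and a characteristic
chi2, p, dof, _ = chi2_contingency(pd.crosstab(df["owns_dog"], df["farmer"]))

# Mutually adjusted odds ratios of adequate knowledge
X = sm.add_constant(df[["higher_education", "farmer"]])
fit = sm.Logit(df["adequate_knowledge"], X).fit(disp=0)
print(np.exp(fit.params))       # adjusted ORs
print(np.exp(fit.conf_int()))   # 95% CIs
```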
Approval for the study was obtained from the National Ethics Committee for Health Research of the National Institute of Public Health, the Cambodian Ministry of Health. The Siem Reap Provincial Health Department, which is the local health authority, provided official permission to conduct research in the area. All respondents were orally informed of the study objectives and procedures. They were also assured that their responses would be kept anonymous and confidential.
Socio-demographic characteristics of respondents
Among the 360 selected households, the mean number of people per family was 5.4. Of the respondents, 73.3% were female while 26.7% were male. The mean age was 37 years (ranging from 18 to 84 years). Most of the respondents (81.7%) were currently married. Only 12.5% of the respondents had secondary education or above, while nearly one-third had never received any formal education whatsoever. Almost four of five respondents were farmers with or without extra work. The median monthly income per household was about 129 USD (ranging from 50 USD to 1,200 USD). More than two-thirds of households had at least one dog (Table 1). The average number of dogs per household was 2.0 (maximum 12) including households without a dog, and 2.8 excluding households without a dog. The total dog population was 704 among the 360 households. The ratio of the dog population to the human population was 1:2.8. There was no evidence of an association between the investigated socio-demographic characteristics and dog ownership (Table 2).
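The reported dog and household figures are mutually consistent, as the short check below shows (a sketch only; the numbers are taken directly from the text and Table 1, and the variable names are illustrative).

```python
# Worked check of the dog/household/human arithmetic reported above.
households          = 360
mean_household_size = 5.4
total_dogs          = 704
share_owning_dog    = 0.706                                        # 70.6% of households

humans            = households * mean_household_size               # ~1944 people
dogs_per_hh_all   = total_dogs / households                        # ~2.0 (all households)
dogs_per_owner_hh = total_dogs / (households * share_owning_dog)   # ~2.8 (dog-owning households)
humans_per_dog    = humans / total_dogs                            # ~2.8 -> ratio 1:2.8
print(round(dogs_per_hh_all, 1), round(dogs_per_owner_hh, 1), round(humans_per_dog, 1))
```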
Knowledge related to rabies
As shown in Table 3, of the six main questions pertaining to knowledge related to rabies, more than 80% of respondents said that rabies was a transmittable disease. Among them, nearly all (98.6%) responded that rabies could be transmitted through a dog bite. Less than two-thirds of the respondents knew that rabies could be prevented. Awareness of the dog rabies vaccine was much lower than awareness of the human rabies vaccine. Although more than two-thirds of respondents answered that rabies is fatal, 21.1% believed that rabies could be cured. Thirty-five (9.7%) respondents correctly replied to all six questions. Eighty (22.2%), 81 (22.5%), 72 (20.0%), 36 (10.0%), and 29 (8.1%) respondents correctly replied to 5, 4, 3, 2, and 1 question(s), respectively. Twenty-seven (7.5%) did not give any correct replies.
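For clarity, the classification rule described in the Methods can be written as follows. This is a sketch only; the item names are illustrative stand-ins for the six questionnaire items, whose exact wording is not reproduced here.

```python
# Scoring sketch: adequate knowledge = all six knowledge items answered correctly.
KNOWLEDGE_ITEMS = [
    "transmissible", "preventable", "dog_vaccine_exists",
    "human_vaccine_exists", "fatal", "not_curable",
]

def knowledge_score(answers: dict) -> int:
    """Count of correctly answered items (0-6)."""
    return sum(bool(answers.get(item)) for item in KNOWLEDGE_ITEMS)

def adequate_knowledge(answers: dict) -> bool:
    return knowledge_score(answers) == len(KNOWLEDGE_ITEMS)
```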
Among those who knew that rabies could be prevented, the most frequent responses on how to prevent it were vaccination of humans and avoiding contact with dogs. Only eight respondents suggested vaccinating dogs. Many of them did not know how to prevent rabies (Fig. 1). Among the respondents who knew that a human rabies vaccine was available, the most frequently suggested place to obtain it was a health center (Fig. 2).
Experience related to rabies and dogs
Of the respondents, 86.9% had heard of or seen a suspected rabid dog (Table 4). Among them, 99.4% had heard of suspected rabid dogs from other people. No one mentioned TV, radio, newspapers, school, or posters/leaflets as information sources. Figure 3 shows that foaming at the mouth, being aggressive, and biting other dogs or people were frequently suggested by the respondents as clinical signs of suspected rabid dogs. In contrast to the high proportion of those who had heard of or seen a suspected rabid dog, only about one-fifth of the respondents had ever heard of or seen human rabies. Over two-fifths of the respondents (41.9%) had a family member who had been bitten by a dog (Table 4).
Behaviors related to dogs
Among the dog owners, only 44.4% of respondents liked dogs. When asked about the purpose of keeping a dog, respondents answered that it was mainly for house security or protection. Table 5 shows that only two households had already vaccinated their dogs against rabies, and seven households had caged or tied up their dogs. Most of the respondents (84.7%) felt afraid when seeing stray dogs on the road or dogs kept on others' premises. Nearly a fourth of respondents said they would not seek treatment if bitten by a dog.
Among the 276 respondents who said they would seek treatment after being bitten by a dog, 114 (52.2%) would go to a health center (Fig. 4). Their expected treatments were wound dressing (51.1%), anti-tetanus vaccine (47.1%), antibiotics (29.0%), and anti-rabies vaccine (21.7%), as shown in Fig. 5. Some respondents who would not seek any treatment said they would use a traditional dog bite remedy of sticking rice on the wound. Some respondents also said they would kill the suspected rabid dog.
Socio-demographic factors associated with adequate knowledge
Logistic regression indicated that respondents aged 30–39 years were significantly more likely to have adequate knowledge related to rabies than respondents aged less than 30 years (adjusted OR 3.48, 95% CI 1.07–11.42, p = 0.04). People with higher education had significantly greater odds of adequate knowledge than people with no education (adjusted OR 12.34, 95% CI 2.64–57.99, p < 0.01). Farmers and households whose monthly income was 150 USD or more were less likely to have adequate knowledge than the respective reference groups (adjusted OR 0.30, 95% CI 0.12–0.76, p = 0.01, and adjusted OR 0.13, 95% CI 0.04–0.47, p < 0.01, respectively) (Table 6).
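As a numerical aside, the adjusted ORs and CIs reported here relate to the underlying regression coefficients by OR = exp(β) and 95% CI = exp(β ± 1.96·SE). The snippet below back-calculates β and SE from the published interval for higher education purely as an illustration of that relationship; it is not the fitted model.

```python
# Back-calculating the log-odds coefficient and its standard error from a
# reported OR interval (higher education: OR 12.34, 95% CI 2.64-57.99).
import math

ci_low, ci_high = 2.64, 57.99
beta = (math.log(ci_low) + math.log(ci_high)) / 2            # ~2.52 (log-odds)
se   = (math.log(ci_high) - math.log(ci_low)) / (2 * 1.96)   # ~0.79

print(round(math.exp(beta), 2))                  # ~12.37, vs. reported OR 12.34 (rounding)
print(round(math.exp(beta - 1.96 * se), 2),      # 2.64
      round(math.exp(beta + 1.96 * se), 2))      # 57.99
```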
This study found that more than four-fifths of the respondents knew that rabies is a transmittable disease, and nearly two-thirds said it could be prevented. Although the majority of people knew that it is fatal, some respondents considered it curable. Many respondents had heard of or seen a suspected rabid dog, but far fewer had heard of or seen a suspected rabid human. The dog population was high; however, dog management was still poor. The study also found that respondents with higher education were more likely to have adequate knowledge than those with no education, and that farmers and non-poor families were less likely to have adequate knowledge than those in other occupations and poor families.
The dog:human ratio in this study was almost the same as the figure reported in 2007, which implies that little intervention had taken place over the decade. In Cambodia, dogs are called "village security" in the local language, and people keep dogs to protect their houses. According to findings from the same survey not presented in this article, even though more than half of the respondents answered "no" to the question asking whether they liked dogs, almost all (97.8%) respondents said the purpose of owning a dog was house security (data not presented). It can be assumed that house security outweighed the fear of rabies or, as shown in the results on knowledge related to rabies, that people in the community were not aware of the severity of rabies relative to the importance of house security.
A study in south-central Bhutan that used similar questions reported higher knowledge among participants than our study. However, the Bhutan study was conducted in a commercial center, and the socio-economic status of its respondents was presumably higher than that of ours. A post-intervention study conducted in Sri Lanka also demonstrated higher knowledge among respondents than our study after they had received health education materials such as leaflets and posters. A study conducted in urban slums in India showed less knowledge than our study in Siem Reap, Cambodia.
The majority of respondents in our study replied that the rabies vaccine was available at health centers or referral hospitals. The national immunization program, however, does not deliver this vaccine to health centers and referral hospitals. This misunderstanding might have arisen because respondents thought the rabies vaccine was one of the routine immunization vaccines. Most respondents said that they would seek treatment at health centers or referral hospitals after they or their family members were bitten by a dog, and they expected those places to provide treatment services. Anti-rabies vaccination, however, was not expected by many respondents. This implies that people were unaware of the effectiveness and availability of PEP.
In the Cambodian language, the term Chkai-Chkot refers to a dog disease, but there is no specific term for human rabies. To express human rabies, another word meaning "disease" is usually added before Chkai-Chkot. This may cause confusion and lead people to think that rabies is a disease only of dogs. To avoid this confusion, we asked the respondents separate questions about hearing of or seeing rabies in dogs and in humans. Studies in Sri Lanka, south-central Bhutan, and India, in which the question did not specify either dog rabies or human rabies, indicated that a high proportion of respondents had heard of or seen rabies (94.5, 89.6, and 74.1%, respectively) [13, 18, 19]. Findings from this study also showed lower proportions than a previous study in Cambodia, which indicated that 93.2 and 43.5% of respondents had heard of or seen rabies in dogs and humans, respectively. Because that previous study was conducted in Phnom Penh and Kandal Province (urban and peri-urban areas), the respondents' higher education, better living conditions, and access to the PEP center might explain the difference in awareness of rabies.
In this study, most of the respondents who had heard of or seen a suspected rabid dog had learned about it from others, but some had seen a suspected rabid dog themselves. This implies that suspected rabid dogs appeared and were well known in the community. However, rabies cases among humans were not frequently mentioned, which might be one reason why respondents were unaware of human rabies and lacked knowledge about it. Farmers were less likely to know about rabies than persons in other occupations. They work outdoors and may frequently encounter stray dogs, some of which might be rabid; their unawareness might have been due to a lack of health information. Therefore, farmers' knowledge of rabies must be increased. Contrary to our expectations, non-poor families were less likely to have adequate knowledge. In their circumstances, they might pay too little attention to the disease, or they may simply have less interaction with animals.
A study of rabies awareness in eight Asian countries (Indonesia, China, India, the Philippines, Pakistan, Thailand, Sri Lanka, and Bangladesh) indicated that respondents obtained most of their information about rabies and its prevention from relatives or neighbors. The study also suggested that few of the respondents had obtained rabies information from the government authorities of these countries. Although people obtain knowledge related to rabies from relatives or neighbors, it may sometimes be inaccurate or unclear. Public agencies must therefore disseminate precise and practical information related to rabies as widely as possible.
There are some limitations in this study. Firstly, we employed systematic sampling along the main road. This was appropriate for the Siem Reap setting; however, some houses that were not on the main road might have been missed. Secondly, when no eligible respondent was available at the selected house, we skipped to the next house, and when the household head was not available, we interviewed another adult household member. Although this strategy was practical, it may have caused selection bias. Thirdly, we chose only one person from each household; however, different household members might have had different knowledge and experience. Fourthly, personal experience and behaviors were self-reported. In addition, we could not provide clear definitions for some of the terms used in the questionnaire; for example, there was no clear definition of a "suspected rabid dog" or a "stray dog." These issues may have caused information bias, including recall bias. Lastly, because face-to-face interviews were conducted by trained local interviewers in the local language, we believe that little misunderstanding due to language occurred. However, some questions were difficult to ask people in the community, as discussed above. This may also have caused information bias.
In conclusion, we found a high dog population, inadequate knowledge of rabies, low recognition of human rabies, and poor dog management. It was also found that although PEP was not available at health centers or referral hospitals (public health services), people were not aware of this. All of these factors could contribute to a high rabies burden in Siem Reap Province.
Based on the findings of this study, it is recommended that the authorities (provincial, district, commune, and village) directly instruct dog owners to keep their dogs caged or tied up and vaccinated, and to reduce the number of stray dogs in the community. The most effective method of rabies prevention after a dog bite is PEP, which was not well known. The Ministry of Health and the National Immunization Program should provide free PEP to dog bite victims through the existing routine vaccination channels. Health education should be developed and disseminated, particularly targeting high-risk groups such as farmers. At the same time, compulsory dog vaccination along with dog registration should be implemented to achieve the WHO goal of eliminating rabies by the year 2030.
Abbreviations
CDC: Centers for Disease Control and Prevention
VHSG: Village Health Supporting Group
WHO: World Health Organization
World Health Organization: What is rabies? http://www.who.int/rabies/about/en/. Accessed 11 May 2018.
Gongal G, Wright AE. Human rabies in the WHO Southeast Asia region: forward steps for elimination. Adv Prev Med. 2011;2011:383870.
World Health Organization: Rabies. Key facts. 2018. http://www.who.int/en/news-room/fact-sheets/detail/rabies. Accessed 11 May 2018.
Wasay M, Malik A, Fahim A, Yousuf A, Chawla R, Daniel H, et al. Knowledge and attitudes about tetanus and rabies: a population-based survey from Karachi, Pakistan. J Pak Med Assoc. 2012;62:378–82.
Dzikwi AA, Ibrahim AS, Umoh JU. Knowledge, attitude and practice about rabies among children receiving formal and informal education in Samaru, Zaria, Nigeria. Glob J Health Sci. 2012;4:132–9.
Fu ZF. The rabies situation in Far East Asia. Dev Biol (Basel). 2008;131:55–61.
Si H, Guo ZM, Hao YT, Liu YG, Zhang DM, Rao SQ, et al. Rabies trend in China (1990-2007) and post-exposure prophylaxis in the Guangdong province. BMC Infect Dis. 2008;8:113.
Matibag GC, Kamigaki T, Kumarasiri PV, Wijewardana TG, Kalupahana AW, Dissanayake DR, et al. Knowledge, attitudes, and practices survey of rabies in a community in Sri Lanka. Environ Health Prev Med. 2007;12:84–9.
World Health Organization. Rabies vaccines and immunoglobulins: WHO position: summary of 2017 updates (WHO/CDS/NTD/NZD/2018.04). Geneva: WHO; 2018.
Food and Agriculture Organization of the Unite Nations: Zero by 30: The global strategic plan to prevent human deaths from dog-transmitted rabies by 2030. 2017. http://www.fao.org/3/a-i7874e.pdf. Accessed 11 May 2018.
Totton SC, Wandeler AI, Zinsstag J, Bauch CT, Ribble CS, Rosatte RC, et al. Stray dog population demographics in jodhpur, India following a population control/rabies vaccination program. Prev Vet Med. 2010;97:51–7.
World Health Organization, Food and Agriculture Organization of the United Nations, World Organisation for Animal Health. Global elimination of dog-mediated human rabies: report of the rabies global conference, 10–11 December 2015 (WHO/HTM/NTD/NZD/2016.02). Geneva: WHO; 2016.
Matibag GC, Ohbayashi Y, Kanda K, Yamashina H, Kumara WR, Perera IN, et al. A pilot study on the usefulness of information and education campaign materials in enhancing the knowledge, attitude and practice on rabies in rural Sri Lanka. J Infect Dev Ctries. 2009;3:55–64.
Kilic B, Unal B, Semin S, Konakci SK. An important public health problem: rabies suspected bites and post-exposure prophylaxis in a health district in Turkey. Int J Infect Dis. 2006;10:248–54.
Wilde H, Khawplod P, Khamoltham T, Hemachudha T, Tepsumethanon V, Lumlerdacha B, et al. Rabies control in South and Southeast Asia. Vaccine. 2005;23:2284–9.
Ly S, Buchy P, Heng NY, Ong S, Chhor N, Bourhy H, et al. Rabies situation in Cambodia. PLoS Negl Trop Dis. 2009;3:e511.
Lunney M, Fevre SJ, Stiles E, Ly S, San S, Vong S. Knowledge, attitudes and practices of rabies prevention and dog bite injuries in urban and peri-urban provinces in Cambodia, 2009. Int Health. 2012;4:4–9.
Herbert M, Riyaz Basha S, Thangaraj S. Community perception regarding rabies prevention and stray dog control in urban slums in India. J Infect Public Health. 2012;5:374–80.
Tenzin DNK, Rai BD, Changlo TS, Tsheten K, et al. Community-based study on knowledge, attitudes and perception of rabies in Gelephu, South-central Bhutan. Int Health. 2012;4:210–9.
Soeung S, Grundy J, Biggs B, Boreland M, Cane J, Samnang C, et al. Management systems response to improving immunization coverage in developing countries: a case study from Cambodia. Rural Remote Health. 2004;4:263.
Dodet B, Goswami A, Gunasekera A, de Guzman F, Jamali S, Montalban C, et al. Rabies awareness in eight Asian countries. Vaccine. 2008;26:6344–8.
Tack DM, Blanton JD, Holman RC, Longenberger AH, Petersen BW, Rupprecht CE. Evaluation of knowledge, attitudes, and practices of deer owners following identification of a cluster of captive deer with rabies in Pennsylvania in July 2010. J Am Vet Med Assoc. 2013;242:1279–85.
We are grateful to the staffs of the Technical Bureau at Siem Reap Provincial Health Department for their generous assistance in the data collection and also to the health center staffs and VHSG who facilitated data collection in all villages. Our sincere gratitude to the National Institute of Public Health, the Cambodian Ministry of Health, and the local administrative authority, the Siem Reap Provincial Health Department for official permission to conduct this study. Our special thanks to all the respondents who generously took the time to participate in the study. We also would like to thank all staffs of the Department of Healthcare Administration, Nagoya University Graduate School of Medicine who have always so generously facilitated our study. This study was based on a SS’s master thesis for the Young Leaders’ Program (Healthcare Administration Course) of Nagoya University, which was financially supported by the Ministry of Education, Culture, Sports, Science and Technology, Japan.
Availability of data and materials
The dataset generated and analyzed during the current study are available from the first author on reasonable request.
Ethics approval and consent to participate
Approval for the study was obtained from the National Ethics Committee for Health Research of the National Institute of Public Health, the Cambodian Ministry of Health. All respondents were orally informed of the study objectives and procedures before participation.
Consent for publication
Consent for publication was obtained from study participant during data collection.
The authors declare that they have no competing interests.
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
About this article
Cite this article
Sor, S., Higuchi, M., Sarker, M.A.B. et al. Knowledge of rabies and dog-related behaviors among people in Siem Reap Province, Cambodia. Trop Med Health 46, 20 (2018) doi:10.1186/s41182-018-0102-0
- Post-exposure prophylaxis
- Rural population | <urn:uuid:89ca5e6c-1436-4016-8735-fe1957139378> | CC-MAIN-2019-47 | https://tropmedhealth.biomedcentral.com/articles/10.1186/s41182-018-0102-0 | s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496671106.83/warc/CC-MAIN-20191122014756-20191122042756-00378.warc.gz | en | 0.964217 | 6,425 | 2.734375 | 3 |
NOTE: Please remember that at various points in the course we will be asking you to study your way through different lists of passages on your own. The purpose of these worksheet assignments will be to supply you with necessary background material and context needed to make accurate and objective conclusions. We will help by providing you with special worksheets to assist in your research. Just do the best you can and don’t worry if you have difficulty doing it the first time or two. We will go through each verse with you so that you will be able to see the point of the passage.
In our last few lessons we looked at a general view of what the Bible is and how to study it. The rest of the material of this course will be devoted to a study of material found in the Bible that speaks about Jesus Christ. As we travel along the way we have tried very hard to apply the ideas and principles of Bible study which you have learned so that aside from learning new facts about Jesus, you will gain more experience in studying the Bible for yourself.
To help you gain a better picture of how the various findings of our study fit into place, we are going to make extensive use of a type of illustration called a TIMELINE which, as its name implies, is a simple line which represents the passing of time. The earlier time begins on the left end of the line and the later times are seen as one moves to the right. On this timeline we will mark different events in history that are an important part of our study so that you can see how each event fits together with other events of that period of history. The timeline represented below provides a general view of the entire history of the universe—from THE BEGINNING down through history until THE END of time.
Our study begins with an incident that happened during the life of Jesus. (Please read John 8:57-58 and remember to begin reading a few verses back so that you will be able to see the overall context of what is happening.) In this scene, Jesus was explaining to the religious leaders that he had come to them from the Father and that even their very famous ancestor, Abraham, looked forward to Christ’s coming. When Jesus mentioned the name of Abraham, the people seemed to think that Jesus was saying that he actually KNEW Abraham personally. They reacted to this by pointing out to Jesus that he was not even 50 years old and so how could he possibly know Abraham? To this, Jesus made a very surprising reply, “Before Abraham was born, I existed.” This astonished the crowd. By judging the reaction of the people it seems that they thought he was crazy! After all, they KNEW Jesus was the son of Mary and Joseph, and they also knew that he was born in the small town of Bethlehem some 30 years earlier. Suppose you were taking a history class in school and one day your teacher began class by saying, “Napoleon Bonaparte was my good friend and we used to sit for hours discussing his views on the topic of revolution.” Wouldn’t you also think he was crazy?
The situation we see in John chapter 8 is very interesting! Don’t you think? What was Jesus saying here? He hinted to the people that he seemed to have some sort of PRE-EXISTENCE. In other words, Jesus seemed to be saying that he was alive long BEFORE he was born in Bethlehem! This claim made the people feel upset and it invites us to look deeper into the origin of Jesus. Where did he come from and when did his time begin?
Let us begin by reading Luke 2:1-7 and taking note of the background and context. What significant detail do we learn about Jesus from this passage? It records the historical fact that Jesus—the man who we know was crucified on a cross some 30 years later—was not a living and breathing human being before Luke chapter 2! This is a fact and cannot be denied because the birth and childhood of Jesus were HISTORICAL FACTS! Local people saw Jesus as a baby and they saw him as a boy growing up. It was these two facts of history that stood behind the objections of the crowd in John 8. They KNEW him, and so how could Jesus possibly expect anyone to accept the idea that he was actually alive during the time of Abraham? And yet Jesus seemed to be implying that very thing when he said, “Before Abraham was born, I existed.” If you will refer back to the timeline above, you will see that Abraham lived some 2,000 years before Jesus walked on the earth! Considering this fact, it was no wonder the people were upset. This seeming contradiction creates a problem, which we will need to investigate further.
The origin of Jesus Christ has been a source of controversy for centuries and there have been many attempts throughout history to explain where he came from. During the 4th Century (the 300s) there was a fellow, named ARIUS, who came up with the theory that Jesus originated as the very first thing that God created “in the beginning”. The text of the Bible tells us in several places that Jesus was “born” from God. Arius took this wording to mean that Jesus was “brought into existence” by God. Since he believed that Jesus was “born” from God then he could not have existed before the Creation like God did. Therefore, according to Arius’ theory, Jesus was a created being—the first of God’s creations! Arius’ belief, known as ARIANISM, became very well accepted among the people. For several decades during his time, his theory was the “official belief” of Christianity. Some time later, things changed and most people abandoned Arius’ idea in favor of other explanations.
In the 16th Century (the 1500’s), another idea, called SOCINIANISM, became very popular. This idea stated that Jesus came into existence at birth, just an ordinary man, and that he had no pre-existence at all. It goes on to state that because Jesus was perfectly obedient and succeeded in his mission, God transformed him into a God. If we simplify the idea it basically means that Jesus began his existence as an ordinary human, just like you and I. He lived and died as a human being. Then, after his death and because he was successful in his mission, God changed Jesus into a God! This is very different from the idea of Arius. While Socinianism had some believers, this explanation also was ultimately rejected because it did not include a pre-existence.
In our time today, there are many different views circulating about the origin and existence of Jesus. For example, the JEHOVAH’S WITNESSES group believes that Jesus was God’s first creation. In this way they are very similar to ARIANISM. However, they teach that Jesus was actually Michael the Archangel living in a human body. They believe that Michael was the first thing God created. They teach that Michael assumed a human existence—in the form of Jesus Christ—and that he accomplished his assigned task as Jesus Christ. After his assignment was complete he returned to his original existence as Michael back in heaven—where he remains to this day.
Another interesting idea from our time comes from the MORMON CHURCH (the Church of Jesus Christ of Latter Day Saints). They believe that before his birth Jesus was one of the millions of “pre-existent spirits”—which also included all people living today and even the demons! They believe that Jesus’ spirit came into a body when he was born in Bethlehem and at the end of his life he became a God. They also believe that you and I came to exist here in our bodies in the same way and that we, like Jesus, can also become Gods.
If we compare these ideas are they the same? The existence of these conflicting views, and seeing how popular some of them are TODAY, shows how important this topic really is. Where DID Jesus come from? What DID Jesus mean in John 8? There is a need for us to study the Bible and see what evidence comes out that might help us to unravel all the confusion.
As is usually the case when major confusion is found in religion, much of the problem here has come about because of people misusing passages from the Bible. Do you remember our discussion about CONTEXT in Bible study back in Lesson 4?
(Please keep in mind that our goal in this lesson is not to prove or disprove anything regarding Jesus being God. We are only seeking to determine what Jesus was saying when he made the statement, “before Abraham was I existed”.)
If you remember back to our lesson about “How To Study The Bible”, we learned that conclusions are made by locating and combining the information from the passages which have something to say about our topic. What we need now are passages from the Bible that have some information about Jesus’ origin. We have provided you with a supplementary assignment, which is titled “Worksheet for Lesson 5”. It includes a list of verses that speak about the origin of Jesus. Take each verse and read it carefully. Put the verse into context—as we have learned to do from our past discussions—and use the space on the worksheet to record your findings. Remember that in this lesson, we are looking for specific TIME references about Jesus and his origin. After you finish your own research then return here and we will go through the verses together and compare notes.
Welcome back. We hope that you were able to complete your assignment. There will be other worksheets for you to research through later in the course. We will now go through the verses together and see how you did.
The first passage in the worksheet is Micah 5:2.
“2 But you Bethlehem Ephrathah, you are the smallest town in Judah. Your family is almost too small to count. But the “Ruler of Israel” will come from you for me. His beginnings are from ancient times, from the days of eternity.”
The first thing we must investigate is whether this passage ought to be in a list of passages that apply to our study about Jesus. This is accomplished in a two step process. The first is looking for any clues that might make a connection between what it says and Jesus. This passage seems to predict that someone will be born in Bethlehem and that the origin of this person is described as being “from the days of eternity”. “Bethlehem” seems to be a key word in the passage. Moving backward and forward in the passage does not seem to be able to give us any help. However, when you consider the fact that this same phrase if found in the New Testament book of Matthew it suddenly becomes very important. When he introduces Jesus to his readers in Matthew 2:6 he quotes this and indicates that it was speaking about Jesus. The fact that Matthew quotes the Micah passage is the proof we need to apply it to our study. Therefore, Jesus was the person being spoken of, the one who would 1) be born in Bethlehem and 2) have an origin from ancient days! The passage has to mean that the origin of Jesus would predate his birth in Bethlehem. In more simple terms, the point here seems to be that Jesus was alive before his birth in Bethlehem!
The next verse in the worksheet is John 1:1-3 and 14.
“1 Before the world began, the Word was there. The Word was there with God. The Word was God. 2 He was there with God in the beginning.3 All things were made through him. Nothing was made without him.”
and then verse 14…
“14 The Word became a man and lived among us. We saw his glory—the glory that belongs to the only Son of the Father. The Word was full of grace and truth.”
This is a difficult passage to understand when one first looks at it because the main character being mentioned here is someone who is called “the WORD” by the writer. He says that this WORD person: 1) was with God in the beginning; 2) that he also was God; and 3) that he created everything that was created. This is interesting because the writer is definitely speaking of a male person because he used the word “he”, but who is this person he calls “WORD”? To get the context all we need to do is continue reading down to verse 14 of the chapter and there we receive information that helps us discover the identity of the person. We find that the Word is actually Jesus, because Jesus fits the description, “the son who came from the father”. So, what do we learn about the origin of Jesus from this passage? The time reference here is “in the beginning”. The context surrounding the reference “the beginning” would seem to mean the Creation beginning. This is because of the reference to the things that were made by the Word. The beginning was definitely a long time before Jesus was born in the town of Bethlehem!
Next in the list we look at John 1:30.
“30 This is the one I was talking about. I said, ‘A man will come after me, but he is greater than I am, because he was living before me—he has always lived.’”
This is a very important passage because of the identity of the person speaking. If you go back to verse 29 you will find that the speaker is John “the Baptist”. According to what we learn from Luke chapters 1 and 2, John “the Baptist” is an older relative of Jesus. We know that the mother of John and the mother of Jesus were related. (See: Luke 1:36) The main point we learn from these chapters is that John was born before Jesus was born. Yet, in John 1:30, John “the Baptist” looks at Jesus and makes a very clear statement that Jesus “ranks before me because he WAS living before me”. In speaking these words, John seems to be pointing out that he believed that Jesus existed before he did! This would be a historical impossibility because everyone who knew both John and Jesus KNEW that John was older than Jesus. Therefore, John must have been speaking about a pre-existence of Jesus!
The next passage is John 3:13 and 31.
“13 The only one that has even gone up into heaven is the one who came down from heaven—the Son of Man”
“31 The one (Jesus) that comes from above is greater than all other people. The person that is from the earth belongs to the earth. That person talks about things that are on the earth. But the one that comes from heaven is greater than all other people.”
If you look carefully at verses 10 through 13 and then at verses 31 through 36, you will find numerous statements which help us identify that it is Jesus who is being referred to in these passages. Both sections mention him as having come down to the earth FROM HEAVEN. To say that a person CAME FROM a certain place means that he first existed IN THAT place before leaving to go to the new location. This statement about Jesus implies that he was alive in heaven before coming down to the Earth.
This same idea is mentioned in John 6:41-42.
“41 The Jews began to complain about Jesus. They complained because Jesus said, “I am the bread that comes down from heaven.” 42 The Jews said, “This is Jesus. We know his father and mother. Jesus is only Joseph’s son. How can he say, ‘I came down from heaven?’”
The context of this passage is pretty clear. The Jews seem to think that Jesus is saying that he was alive in heaven and that he came down from there. From what they are saying it seems pretty clear that the idea of Jesus coming down from heaven was something that was being taught in public. Of course, Jesus was not crazy and thus, again, we find evidence that Jesus experienced a pre-existence before coming down to the earth.
The next passage in the worksheet is John 16:27-28.
“27 No! The Father himself loves you. He loves you because you have loved me. And he loves you because you have believed that I came from God. 28 I came from the Father into the world. Now I am leaving the world and going back to the Father.”
Again, the context here is very clear. Jesus reveals the entire picture to us. He states that he came from the Father, arrived here on the earth, and is about to return to the Father. This provides us with even more evidence showing that Jesus was alive, in heaven, before he was alive on the earth!
Next we have John 17:5.
“5 And now, Father, give me glory with you. Give me the glory I had with you before the world was made.”
This is perhaps the earliest time reference. In a conversation with the Father, Jesus mentions his being with the Father “before the world was made”.
Another passage in our worksheet is Philippians 2:5-11.
“5 In your lives you must think and act like Christ Jesus. 6 Christ himself was like God in everything. Christ was equal with God. But Christ did not think that being equal with God was something he must keep. 7 He gave up his place with God and agreed to be like a servant. He was born to be a man and became like a servant. 8 And when he was living as a man, he humbled himself by being fully obedient to God. He obeyed even when that caused him to die. And he died on a cross. 9 Christ obeyed God, so God raised Christ to the most important place. God made the name of Christ greater than any other name. 10 God did this because he wants every person to bow for the name of Jesus. Every person in heaven, on earth, and under the earth will bow. 11 Every person will confess, ‘Jesus Christ is Lord.’ When they say this, it will bring glory to God the Father.”
This passage explains more detail about what Jesus mentioned in John 16:27-28. Jesus is first described as having “equality” with the Father. Then it says that he “gave up” that position in order to “be born” on the earth. Then, after accomplishing a mission, he returned to heaven. (This passage will be very important for future lessons in this course.)
The passage in Philippians, together with the one found in John 16, show the full cycle of Jesus’ existence.
Our last verse in the worksheet list was Revelation 22:16. We are going to look at this passage from the Revised Standard Version of the Bible because it has a very literal translation of the original sentence found in the text.
“16 I, Jesus, have sent my angel to give you this testimony for the churches. I am the root and the offspring of David, and the bright Morning Star.”
Jesus is speaking in this passage. (He was the one who gave the message of the Book of Revelation.) In this closing statement, he describes himself as being both the “root” and the “offspring” of David. In speaking this way, Jesus is using the word root in a figurative way. In context, the word “root” refers to the ancestor of a person. It is the opposite of the word offspring, which refers to a descendant of a person. To better understand what Jesus is saying here it might help if you think of the illustration of a tree. A tree gets its origin from its roots. A tree then produces its fruit as its offspring. In this statement, Jesus seems to be saying that he is both the root and the fruit of David! He claims to be both the ancestor of David and the descendant of David! How can this be? We know that Jesus physically lived on the earth hundreds of years after David died and so how could Jesus claim to be David’s root (living before David) AND offspring (living after David)? The evidence from our study of this lesson is here for you to consider. You must draw your own conclusion.
As we finish this lesson let’s take an inventory of the evidence and see what conclusion we can draw.
Jesus was not an ordinary fellow, as far as his origin is concerned because even though he was born into the world just like you and I, Jesus was alive long before he was born to his parents, Mary and Joseph! The evidence that we found traces the origin of Jesus back before the time of “the beginning”. In fact, we did not find any indication that there was ever a time when Jesus DID NOT exist! As a closing thought, before you go to answer the test questions for this lesson, please read carefully the statement made about Jesus in Hebrews 13:8.
The Bible Study Center
C. C. Regis Building
N. Bacalso Avenue, Corner Eucalyptus
Basak San Nicholas
6000 Cebu City, Philippines
(63) (32) 414-6311
(63) (927) 482-6921
Monday through Friday:
1:00 PM - 7:00 PM | <urn:uuid:f3f81300-e625-4d14-bf5f-9602c597b3e7> | CC-MAIN-2019-47 | https://biblestudycenter.net/BCC-JMAN/JMAN5.html | s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496668644.10/warc/CC-MAIN-20191115120854-20191115144854-00178.warc.gz | en | 0.983706 | 4,485 | 3.96875 | 4 |
Multiple star systems are not the majority in the Universe, even if in our Galaxy only 30 percent of stars are single, like our Sun. Massive stars tend to be more “family-oriented” than low-mass stars. For example, only 15 to 25 percent of M class stars are in binary or multiple systems comparing to 2/3 of G class stars, and G class makes only 7 percent of stars we see. The reason high-mass stars are often in multiples while low-mass ones are not is due to differences in how they form.
Star formation usually occurs in dense turbulent clouds of molecular Hydrogen. The natal environment influences star formation through the complex interplay of gravity, magnetic fields, and supersonic turbulence. Observations suggest that massive stars form through disk accretion in direct analogy to the formation of low-mass stars. However, several aspects distinguish high and low-mass star formation despite the broad similarity of the observed outflow and ejection phenomena.
Surprisingly, the nearby Taurus-Auriga star-forming region has a very high fraction of binaries for G stars and probably even for M stars. This produced a thought that most stars might be forming as multiples, but later these systems are broken apart.
High-mass vs. low-mass stars and the multiplet frequency
Low-mass star systems are observed to have a much lower binary fraction than higher mass stars (Lada 2006). In the recent simulations of a turbulent molecular cloud destined to form a small cluster of low-mass stars several classes of systems were produced: isolated stars, binaries and multiples formed via the fragmentation of a turbulent core, and binaries and multiples formed via the fragmentation of a disk; however, dynamical filament fragmentation is the dominant mechanism forming low-mass stars and binary systems, rather than disk fragmentation. (Offner et al. 2010). Loosely bound companions can be stripped by close encounters within the protocluster. Although previous studies suggested that multiple star formation via the fragmentation of a disk was limited to large mass ratio systems, recent work such as Stamatellos & Whitworth (2009) and Kratter et al. (2010a) has shown that when disks continue to be fed at their outer edges, the companions can grow substantially. Protostellar disks that are sufficiently massive and extended to fragment might be an important site for forming brown dwarfs and planetary mass objects (Stamatellos et al. 2011). Cores that form brown dwarfs would have to be very small and very dense to be bound (Offner et al. 2008).
Protostellar cores with a mass of a few tenths to a few hundreds of solar appear to be more turbulent, with gas within generally moving at higher velocities. Those are more prone to fragmentation, giving birth to binaries or multiple stars. Such massive star forming sites are rare and thus tend to be farther from Earth (> 400 pc) than low-mass star forming regions (around 100 pc). High-mass star formation occurs in clusters with high stellar densities. In addition, massive stars destroy their natal environment via HII regions. Their accretion disks are deeply embedded in dusty envelopes and ultra-compact HII regions become visible only after star formation is nearly complete. Observations of HII regions produced by massive stars are a prime tool for extragalactic astronomers to determine the star formation rate and abundances in galaxies.
Massive stars are shorter-lived and interact more energetically with their surrounding environment than their low-mass counterparts; they reach main sequence faster and begin nuclear burning while still embedded within and accreting from the circumstellar envelope. They also can be potential hazards to planetary formation: many low-mass stars are born in clusters containing massive stars whose UV radiation can destroy protoplanetary disks.
The temperature structure of the collapsing gas strongly affects the fragmentation of star-forming interstellar clouds and the resulting stellar initial mass function (IMF). Radiation feedback from embedded stars plays an important role in determining the IMF, since it can modify the outcome as the collapse proceeds (Krumholz et al. 2010). Radiation removes energy, allowing a collapsing cloud to maintain a nearly constant, low temperature as its density and gravitational binding energy rise by many orders of magnitude. Radiative transfer processes can be roughly broken into three categories: thermal feedback, in which collapsing gas and stars heat the gas and thereby change its pressure; force feedback, in which radiation exerts forces on the gas that alter its motion; and chemical feedback, in which radiation changes the chemical state of the gas (e.g. by ionizing it), and this chemical change affects the dynamics (Krumholz 2010).
In this image surface column densities L= 0.1, M=1.0, H=10.0 g cm-2 are shown. As density increases, the suppression of fragmentation increases: (L) small cluster, no massive stars, depleted disks; (M) massive binary with 2 circumstellar disks and large circumbinary disk; (H) single large disk with single massive star.
The effects of radiative heating depend strongly on the surface density of the collapsing clouds, which determines effectiveness of trapping radiation and accretion luminosities of forming stars. Surface density is also an important factor in binary and multiple star formation. Higher surface density clouds have higher accretion rates and exhibit enhanced radiative heating feedback, diminished disk fragmentation and host more massive primary stars with less massive companions (Cunningham et al. 2011).
Observations indicate most massive O-stars have one or more companions; binaries are common (> 59%) (Gies 2008). Massive protostellar disks are unstable to fragmentation at R ≥ 150AU for a star mass of 4 or more solar masses (Kratter & Matzner 2006) and cores with masses above 20 solar will form a multiple through disk fragmentation (Kratter & Matzner 2007). Radiation pressure does not limit stellar masses, but the instabilities that allow accretion to continue lead to small multiple systems (Krumholz et al. 2009).
The upper limit on the final companion star frequency in the system can be estimated (Bate 2004).
Primordial stars and their legacy
The similar pattern appears in formation of previously believed to be solitary primordial stars. Recent studies suggest that loners were rather an exception, than the rule.
First massive stars were probably accompanied by smaller stars, more similar to our Sun. Due to encounters with their neighbors some of the small stars may have been ejected from their birth group before they had grown into massive stars. This could indicate primordial stars with a broad range of masses: short-lived, high mass stars capable of enriching the cosmic gas with the first heavy chemical elements and produced first black holes that are alive and well today, and long-lived, low-mass stars which could survive for billions of years and maybe even to the present day.
Are single stars really single?
Again, similar fragmentation happens in the massive protoplanetary discs to produce gas giants, when a gas patch in protoplanetary disk collapses directly into gas giant planet (1 Jupiter mass or larger) due to gravitational instability. Gravitational instabilities can occur in any region of a gas disk that becomes sufficiently cool or develops a high enough surface density, be it a star or a planet formation site. Kratter et al. 2009 suggests that planets formed that way might be failed binaries.
In 1970 Stephen Dole performed planet accretion simulation which produced interesting results. In our Galaxy, the average separation of binary components is about 20 AU, corresponding roughly to the orbital distances of the jupiter-mass gas giants in our solar system (Jupiter and Saturn have often been called “failed stars”; however, both probably contain rocky cores, so they are definitely planets). By increasing the density of the initial protocloud an order of magnitude higher than before, Dole’s program generated larger and larger jovians. Eventually in one high-density run, a class K6 orange dwarf star appears near Saturn’s present orbit, along with two superjupiters and a faint red dwarf further sunward. No terrestrials were formed.
This study suggested that Jovians (brown dwarfs can be mentioned here too now) multiply at the expense of terrestrials. An increase of one critical parameter – the nebular density – resulted in the generation of binary and multiple star systems, and close companionship might lead to eventual exclusion of terrestrial worlds.
But things appear to be more complicated.
Terrestrial planets in multiple star systems
Multiple star systems provide a complicated mix of conditions for planet formation, because the accretion potentially involves material around each star in addition to material around the group. These locations can provide opportunities as well as hazards.
Binary systems can have circumprimary (around the more massive star), circumsecondary (around the less massive star), and circumbinary (around both stars) disks, compared to likely routine planet formation sites around single stars. In widely spaced binaries you could even have protoplanetary discs around the two.
If there are no tidal effects, no perturbation from other forces, and no transfer of mass from one star to the other, a binary system is stable, and both stars will trace out an elliptical orbit around the center of mass of the system.
A multiple star system is more complex than a binary and may exhibit chaotic behavior. Many configurations of small groups of stars are found to be unstable, as eventually one star will approach another closely and be accelerated so much that it will escape from the system. This instability can be avoided if the system is hierarchical. In a hierarchical system, the stars in the system can be divided into two smaller groups, each of which traverses a larger orbit around the system’s center of mass. Each of these smaller groups must also be hierarchical, which means that they must be divided into smaller subgroups which themselves are hierarchical, and so on.
For a certain range of stellar separations, the presence of a companion star will clearly impact the formation, structure, and evolution of circumstellar disks and any potential planet formation. Global properties such as initial molecular cloud angular momentum, stellar density, the presence of ionizing sources and/or high mass, and so on, may all influence disk and thereby planet formation.
The maximum separation of bound systems is related to the stellar density. The denser clusters, in which most stars form, contain a lower fraction of bound multiple systems, comparable to the fraction found among field stars.
The binary star systems that host planets are very diverse in their properties and binary binary semimajor axes ranging from 20 AU to 6400 AU. In case where orbits are eccentric, the binary periastron can be as small as 12 AU, and important dynamical effects are expected to have occurred during and after planet formation.
In a circumbinary disk strong tidal interactions between the binary and disk are almost always expected, significantly affecting planet formation.
In a circumstellar disk with separations of a few to several tens of AU, the tidal torques of the companion star generate strong spiral shocks, and angular momentum is transferred to the binary orbit. This in turn leads to disk truncation, determining a “planet-free” zone (at least for formation). Subsequent dynamical evolution in multiple systems could still bring planets into this region.
For a circumstellar disk in a binary system, which is not influenced by strong tidal forcing, the effect of the companion star will be modest, unless the orbital inclinations are such that the Kozai effect becomes important.
Math Box 1 – The Truncation Radius
The truncation radius rt of the disk depends on the binary semimajor axis ab, its eccentricity eb, the mass ratio q = M2/M1 (M1, M2 denote the masses of the primary and secondary stars, respectively), and the viscosity v of the disk. For typical values of q = 0.5, eb = 0.3 and disk Reynold’s number of 10^5, the disk will be truncated to a radius of rt = 1/3ab.
For a given mass ratio q and semimajor axis ab an increase in eb will reduce the size of the disk while a large v will increase the disk’s radius. Not only will the disk be truncated, but the overall structure and density stratification may be modified by the binary companion.
In a circumbinary disk, the binary creates a tidally-induced inner cavity. For typical disk and binary parameters (e.g., eb = 0.3, q = 0.5) the size of the cavity is = 2.7 * ab.
Numerical studies of the final stages of terrestrial planet formation in rather close binaries with separations of only 20–30 AU, that involve giant impacts between lunar-mass planetary embryos, show that terrestrial planet formation in such systems is possible, if there was a possibility for planetary embryos to form.
Systems with higher eccentricity or lower binary separation are more critical for planetesimal accretion. The effects of such eccentric companion include planetesimal breakage and fragmentation because of the increased relative velocities; the circumprimary planet forming disc truncation to smaller radii, causing the removal of material that may be used in the formation of terrestrial planets; destabilization of the regions where the building blocks for these objects may exist.
For binaries with separation less than 40 AU, only very low eccentricities allow planetesimal accretion to proceed as in the standard single-star case. On the contrary, only relatively high eccentricities (at least 0.2 in the closest 10AU separation and at least 0.7 for star system semimajor at 40AU) lead to a complete stop of planetesimal accretion.
A binary companion at 10 AU limits the number of terrestrial planets and the extent of the terrestrial planet region around one member of a binary star system.
Larger periastra (> 20AU) in solar-type binary star systems with terrestrial planets formation allow the stability of Jovian planets near 5 AU. These binary star/giant planets systems effectively support volatile delivery to the inner terrestrial region.
Approximately 40–50% of binaries are wide enough to support both the formation and the long-term stability of Earth-like planets in orbits around one of the stars. Approximately 10% of main sequence binaries are close enough to allow the formation and long-term stability of terrestrial planets in circumbinary orbits. According to this, a large number of systems can be habitable, given that the galaxy contains more than 100 billion star systems, and that roughly half remain viable for the formation and maintenance of Earth-like planets.
Math Box 2 –
Stability of the satellite-type orbit, where the planet moves around one stellar component (S-Type Orbits).
In this equation, ac, the critical semimajor axis, is the upper limit of the semimajor axis of a stable S-type orbit, ab and eb are the semimajor axis and eccentricity of the binary, and mu = M2/(M1+M2). S-type orbits in binaries with larger secondary stars on high eccentricities are less stable. The +- signs define a lower and an upper value for the critical semimajor axis which correspond to a transitional region that consists of a mix of stable and unstable orbits.
Stability of the planet-type orbit, where the planet surrounds both stars in a distant orbit (P-Type Orbits).
For circular binaries, this distance is approximately twice the separation of the binary, and for eccentric binaries (with eccentricities up to 0.7) the stable region extends to four time the binary separation. A critical semimajor axis below which the orbit of the planet will be unstable is given by
Similar to S-type orbits, the +- signs define a lower and an upper value for the critical semimajor axis ac, and set a transitional region that consists of a mix of stable and unstable orbits.
Habitable zones in binary star systems
HZs in binaries depend the binaries’ orbital elements and the actual amount of radiation arriving at an orbiting planet. The analytical estimate on the extent of the HZs includes the radiation field of the binary as a function of spectral types, orbital parameters, as well as the relative orbital phase and calculations of the RMS (root-mean-square) and Min-Max distances of the inner and outer borders of the habzones in P-Type and S-Type configurations.
In a binary-planetary system, the presence of the giant planet enhances destabilizing effect of the secondary star. The Jovian planet perturbs the motion of embryos and strengthens their radial mixing and the rate of their collisions by transferring angular momentum from the secondary star to these objects.
Systems with close-in giant planets may require massive protoplanetary disks to ensure that while planetesimals and protoplanets are scattered as giant planets migrate, terrestrial bodies can form and be stable. Systems with multiple giants also present a great challenge to terrestrial planet formation since the orbital architectures of such systems may limit the regions of the stability of smaller objects.
Four different types of orbits are possible for a terrestrial planet in a binary system that hosts a Jovian planet: the terrestrial planet is inside the orbit of the giant planet; the terrestrial planet is outside the orbit of the giant planet; the terrestrial planet is a Trojan of the primary (or secondary) or the giant planet; the terrestrial planet is a satellite of the giant planet.
When numerically studying the dynamics of a terrestrial planet in a binary planetary system, integrations have to be carried out for a vast parameter-space. These parameters include the eccentricities, semimajor axes, and inclinations of the binary and the two planets, the mass-ratio of the binary, and the ratio of the mass of the giant planet to that of its host star. The angular variables of the orbits of the two planets also add to these parameters.
Except for a few special cases, the complexities of these systems do not allow analytical solutions of their dynamics, and require extensive numerical integrations. Those special cases are: binaries with semimajor axes larger then 100 AU in which the secondary star is so far away from the planet-hosting star that its perturbative effect can be neglected; binaries in which the giant planet has an orbit with a very small eccentricity (almost circular); binaries in which, compared to the masses of the other bodies, the mass of the terrestrial planet is negligible.
Instability is not the only hazard in multiple systems. The difference in masses and lifetimes can pose serious problems for life, especially in relatively close binaries or multiples.
Binary or multiple systems might be hosts to several generations of planets; life might arise and be wiped out several times in system’s lifetime. Here’s how such binary system might evolve: while both stars are on the main sequence and in close proximity to each other, small and close-in first generation of planets forms; eventually one star evolves from the main sequence into the red giant and the two stars spread further apart while stellar material blown off from the red giant builds a protoplanetary disk around the other star and second generation planets form; the second star eventually goes red giant giving the first star, which is now white dwarf, a protoplanetary disk which could create a third generation of planets.
Each generation of planets is built from stellar material with a sequentially increasing metallicity as the material is recycled within each star’s fusion processes. In this case it becomes possible for old stars, even those which formed as low metal binaries, to develop rocky planets later in their lifetimes.
However, not always changing environment might be a threat, like in case of this old gas giant PSR B1620-26 b.
If such planet hosted habitable satellites, and host stars remained warm and safe enough to support life, inhabitants might not been affected much by dramatic changes. Or maybe they would. But that is another story.
# Pretty much everything in this article is presently under active research. To learn more about multiple star systems and habitable planets in them, try Planets in Binary Star Systems by Nader Haghighipour, 2010; or Multiple Stars across the H-R Diagram (you can read it online), 2005.
## Have fun and more exciting stuff to follow. | <urn:uuid:a9c2f631-684b-4146-bddb-0647a30eaff8> | CC-MAIN-2019-47 | http://jenomarz.com/designing-a-planetary-system-extension/ | s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496671106.83/warc/CC-MAIN-20191122014756-20191122042756-00377.warc.gz | en | 0.917262 | 4,230 | 4.0625 | 4 |
Posted March 11, 2006, 11 a.m.
By Giles Reaves
So you finally finished recording all your vocal tracks, but unfortunately you didn't get one take that was perfect all the way through. You're also wondering what to do about some excessive sibilance, a few popped "P"s, more than a few pitchy lines and some words that are almost too soft to be heard - don't worry, there's hope! And hey, welcome to the world of vocal editing.
A Little History...
Since the beginning of musical performance, singers (and instrumentalists) have craved the possibility of re-singing that one "if only" note or line. You know the one: "if only I had hit that pitch, if only I had held that note out long enough, if only my voice hadn't cracked", etc. With the advent of early recording technologies, these 'if only' moments were now being captured, and performers were forced to face reliving those 'if only' moments forever! One 'if only' moment could ruin an entire take.
With the rise of analog tape recording in the mid-20th century comes splice editing. Now you can record the same song twice, and choose the first half of one take and the second half of another. Next comes multi-track recording, where you don't even have to sing the vocal with the band!
Multi-track recording introduced punching in and out, which allowed re-recording of just the "if only" moments on an individual track. But more importantly as it relates to the subject at hand, multi-track recording also introduced the idea of recording more than one pass or 'take' of a lead vocal, leading to what is now known as "vocal comping". More on that in just a bit.
But before we get into the nitty-gritty, here's a brief outline of the typical vocal editing process for lead and background vocals. Of course, much of this is subject to change according to production direction, or the vocalist's skills and past experience.
- Recording: This ranges from getting the first take, to punching in on a take, to recording multiple takes for comping.
- Comping: Combining various takes into one final track, tweaking edits to fit, crossfading if needed.
- Basic Cleaning: Listen in solo one time through. Typical tasks include removing the obvious things like talking, coughing, mouth 'noises' etc., checking all edits/crossfades, fading in/out where necessary.
- Performance Correction: Timing and pitch correction takes place after you have a solid final comp track to work with.
- Final Prep: this includes everything from basic compression/EQ, to de-essing, reducing breaths, filtering out Low Frequencies, etc.
- Leveling: During the final mix, automating the vocal level (if needed) to sit correctly in the mix throughout the song.
Note that at many stages along the way you will be generating a new 'master vocal' file (while still holding on to the original files, just in case!). For example, let's say you record 4 vocal 'takes' which become the current 'masters'. Then you comp those takes together to create a new "Comp Master" vocal track, and then you tune/time the Comp Master and sometimes create a "Tuned Vocal Master" track (which is then EQ'd and compressed to within an inch of its life while simultaneously being drowned in thick, gooey FX, all before being unceremoniously dumped into what we like to call the mix).
Recording Vocals for Comping
In order to comp a vocal, you must first have multiple vocal tracks to choose from. Recording comp tracks can be slightly different from recording a 'single take' vocal. For one thing, you don't have to stop when you make a mistake — in fact, many times a performer gets some great lines shortly after making a mistake!
I tend to ask for multiple 'full top to bottom' takes from the vocalist, to preserve the performance aspects and to help keep things from getting over-analytical. Then I use comping to work around any mistakes and 'lesser' takes, choosing the best take for each line. Often the vocalist will be involved with the comping choices, so be prepared to be a good diplomat (and don't be too hard on yourself if you're comping your own vocals)!
How many tracks?
This will be different for every singer, but for comping I generally suggest recording around three to five tracks. Any less and I don't feel that I have enough choices when auditioning takes — any more and it becomes difficult to remember how the first one sounded by the time you've heard the last take.
When recording tracks that I know will be comped, I usually let the singer warm up for a few takes (while setting levels and getting a good headphone mix) until we get a 'keeper' take that is good enough to be called 'take one'. From there, simply continue recording new takes until you feel you have enough material to work with. If you find yourself on take seven or eight and you're still not even getting close, it may be time to take a break!
In Reason, when tracking vocals for future comping, you simply record each 'take' on the same track. With 'tape' recording this would erase the previous take, but with 'non-destructive' recording you are always keeping everything (with the newest take lying on 'top' of the previous take). When you enter Comp Mode, you will see each take just below the Clip Overview area (with the newest take above the older takes). The 'takes' order can easily be rearranged by dragging them up or down. Double-click on any 'take' to make it the 'active take' (it will appear in color and in the Clip Overview, and this is the take you will hear if you hit play). Now comes the fun part.
Vocal Takes in Comp Mode
To combine or 'comp' different parts of different takes together, use the Razor tool as a 'selector' for the best lines/words. After creating cut lines with the Razor tool, you can easily move them earlier or later by dragging the 'Cut Handles' left or right. You can delete any edit by deleting the Cut Handle (click on it and hit the 'delete' key). Create a crossfade by clicking/dragging just above the Cut Handle. Silence can be inserted by using the Razor to make a selection in the "Silence" row, located below the Clip Overview and above the Comp Rows.
Comping (short for compositing): picking and choosing the best bits from among multiple takes, and assembling them into one continuous 'super take'.
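Under the hood, a comp is essentially an edit list: which take covers which span of time, with short crossfades at the joins. Purely as an illustration of the concept (this is not how Reason stores or renders anything, and the function names here are made up), here is a minimal Python/NumPy sketch, assuming all takes are already recorded as equal-length mono float arrays:

```python
import numpy as np

def equal_power_fade(n):
    """Equal-power fade-out and fade-in curves, n samples long."""
    t = np.linspace(0.0, 1.0, n)
    return np.cos(t * np.pi / 2), np.sin(t * np.pi / 2)

def render_comp(takes, edit_list, sr, xfade_ms=10):
    """takes: list of equal-length mono arrays (the recorded passes).
    edit_list: list of (take_index, start_sec, end_sec), in song order.
    Returns one 'super take' with a short equal-power crossfade at each join."""
    xfade = int(sr * xfade_ms / 1000)
    comp = np.zeros_like(takes[0])
    # Lay each chosen segment into the comp
    for take_idx, start, end in edit_list:
        s, e = int(start * sr), int(end * sr)
        comp[s:e] = takes[take_idx][s:e]
    # Smooth each join by blending the outgoing and incoming takes
    for (prev, _, _), (nxt, start, _) in zip(edit_list, edit_list[1:]):
        cut = int(start * sr)
        lo, hi = cut - xfade // 2, cut + xfade // 2
        fade_out, fade_in = equal_power_fade(hi - lo)
        comp[lo:hi] = takes[prev][lo:hi] * fade_out + takes[nxt][lo:hi] * fade_in
    return comp
```

In the DAW you never think of it in these terms, of course - dragging a Cut Handle is just moving a segment boundary, and dragging above it is adjusting the crossfade length.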
Now that you have your vocal tracks recorded, how do you know which parts to use? I've approached this process differently through the years. Previously, I'd listen to each take in its entirety, making arcane notes on a lyric sheet along the way — this was how others were doing it at the time that I was learning the ropes. More recently I've taken another approach that makes more sense to me and seems to produce quicker, smoother, and better comps.
Currently, my auditioning/selection process consists of listening to one line at a time, quickly switching between the different takes and not stopping for discussion or comments. This is the basic technique you will see me demonstrate in our first video (see below).
Now it's time for a little thing I like to call a Video Detour. Enjoy De-tour (a-hem). Follow along in this 'made for internet' production as I comp the first verse of our demo song "It's Too Late" (by singer/songwriter Trevor Price).
Note: watch your playback volume - the music at the top comes in soft, but it gets louder when the vocals are being auditioned.
Comping a Vocal using Reason's "Comp Mode"
The three most common issues with vocals are pitch, timing, and level/volume. All three are easy to correct with today's digital tools and just a little bit of knowledge on your part.
After comping, I usually move on to correcting any timing issues. You may also jump straight into dealing with any tuning issues if you prefer. Often times there isn't a lot of timing work that needs to be done on a lead vocal. But when you start stacking background vocals (BGVs) things can get 'messy' very quickly. User discretion is advised.
In our next video example (it's coming, I promise), I will show you how to line up a harmony vocal track with the lead vocal. I will use the lead vocal as the timing reference, moving the harmony track to match the lead. Since you can only see one track at a time when editing, I use the playback cursor (Song Position Pointer in Reason) to 'mark' the lead vocal's timing; then, when I edit the harmony track, I use this reference point to line it up with the lead vocal.
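If you're curious what "lining up" looks like numerically, the offset between two performances of the same line can be estimated by cross-correlation: slide one part against the other and keep the lag where they agree most. Here's a hedged sketch of that general technique (my own illustration, not anything Reason does behind the scenes), assuming both parts are mono NumPy arrays of the same length and sample rate:

```python
import numpy as np

def estimate_offset(lead, harmony, sr, max_shift_ms=200):
    """Return the offset in seconds between two same-length takes.
    A positive result means the harmony is late and should be nudged earlier."""
    max_lag = int(sr * max_shift_ms / 1000)
    # Compare amplitude envelopes, so the different pitches of the two parts don't matter
    env_lead = np.abs(lead)
    env_harm = np.abs(harmony)
    lags = range(-max_lag, max_lag + 1)
    scores = [np.dot(env_lead[max_lag:-max_lag],
                     env_harm[max_lag + lag:len(env_harm) - max_lag + lag])
              for lag in lags]
    best = lags[int(np.argmax(scores))]
    return best / sr
```

In practice you'll still fine-tune by ear - breaths and consonants matter more than a mathematically "best" offset - but the idea is the same.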
I will also use the following editing techniques:
Trim Edit, where you simply trim either end of a selected clip to be shorter or longer as desired, which will expose or hide more or less of the original recording that is inside the clip.
Time Stretch (called Tempo Scaling in Reason), where you use a modifier key [Ctrl](Win) or [Opt](Mac) when trimming an audio clip, allowing you to stretch or shrink any clip (audio, automation, or MIDI) which changes the actual length of the audio within the clip.
Clip Sliding (my term), where (in Comp Edit mode) you use the Razor to isolate a word or phrase, and you slide just that clip right or left to align it - using this technique allows you to slide audio forward or backwards in time without leaving any gaps between the clips!
OK, thanks for waiting - here's the video:
Vocal tuning is possibly an entire subject in itself, as everyone has their own take on it. Of course, it's always best to 'get it right the first time' if you can. But sometimes you are forced to choose between an initial performance that is emotionally awesome (but may have a few timing or pitch flaws), and one that was worked to death (but is perfect in regards to pitch and timing). If only you could use the first take with all its emotion and energy. Well now you can!
Neptune Pitch Adjuster on the lead vocal
In Reason, using Neptune to naturally correct minor pitch issues is about as simple as it gets. The following video demonstrates using Neptune for simple pitch correction, as well as using it in a few more advanced situations.
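For the curious, the arithmetic behind "snap to the nearest semitone" correction is simple, even though a real pitch corrector like Neptune does far more work (tracking the pitch smoothly over time, preserving formants, letting you scale the correction amount, and so on). A rough sketch of just the target-pitch math, assuming you already have a detected frequency in Hz:

```python
import math

A4 = 440.0  # reference tuning

def nearest_semitone_hz(freq_hz):
    """Snap a detected frequency to the closest equal-tempered semitone."""
    semitones_from_a4 = 12.0 * math.log2(freq_hz / A4)
    return A4 * 2.0 ** (round(semitones_from_a4) / 12.0)

def cents_off(freq_hz):
    """How far (in cents) the sung note is from the nearest semitone."""
    target = nearest_semitone_hz(freq_hz)
    return 1200.0 * math.log2(freq_hz / target)

# Example: a slightly flat A sung at 435 Hz
print(nearest_semitone_hz(435.0))   # -> 440.0
print(round(cents_off(435.0), 1))   # -> about -19.8 cents flat
```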
Vocal "Rides" (as they are called for 'riding the fader/gain') have been common from almost the beginning of recording itself. In rare cases, you may have to actually ride the vocal while recording it(!) - this is the way it was done back with 'direct to disk' and 'direct to two-track' recordings. But luckily you can now do these 'rides' after the vocal is recorded, or you can even draw in these moves with a mouse (with painstaking detail, if you are so inclined). Most of the time I use a combination of both techniques.
The basic idea with vocal rides is to smooth out the overall vocal level by turning up the soft parts and turning down the loud parts (in relation to the overall mix). The end game is to get the vocal to sit ‘evenly' at every point in the song, in a way that is meaningful to you. Or as I like to say, to get the vocal to ride ON the musical wave, occasionally getting some air but never diving too far under the musical water.
Great engineers learn the song line by line and ‘perform' precision fader moves with the sensitivity and emotion of a concert violinist. It really can be a thing of beauty to watch, in an audio-geeky sort of way. For the rest of us, just use your ears, take your time, and do your best (you'll get better!).
There's no right or wrong way to edit vocal levels, only a few simple rules to follow: Obviously, you don't want to ever make an abrupt level change during a vocal (but you can have somewhat abrupt automation changes between words/lines), and you don't want to be able to actually hear any changes that are being made. All level rides should ideally sound natural in the end.
As for techniques, there are three approaches you can take in Reason. The most familiar is probably Fader Automation, which can be recorded in real-time as you ‘ride' the fader. You can also draw in these moves by hand if you prefer. Additionally, you can do what I call "Clip Automation", which involves using the Razor to create new clips on any word, breath or even an "S" that is too loud or too soft. Since each separate clip has it's own level, you simply use the Clip Level control to make your vocal ‘ride'. Alternatively, you can use the clip inspector to enter a precise numeric value, increase/decrease level gradually in a ‘fine tune' way, or simultaneously control a selection of clips (even forcing them all to the same level if desired).
The ‘pros' to Clip Automation are that it is fast, you can see the waveform change with level changes, you can see the change in decibels, and you can adjust multiple clips at once. The main con is that you can't draw a curve of any sort, so each clip will be at a static level. All I know is it's good to have options, and there's a time and place for each technique!
Using "Clip Automation" to reduce multiple "S"s on a Vocal Track
As a 'fader jockey' myself, I prefer to begin vocal rides with a fader (real or on-screen). From there I'll go into the automation track and make some tweaks, or to perform more 'surgical' nips and tucks (if needed) on the vocal track. It's these smaller/shorter duration level changes that are more often ideally created with a mouse rather than a fader. Reducing the level of a breath or an "S" sound come to mind as good examples of 'precision' level changes that benefit from being drawn by hand.
Leveling the vocal must ultimately be done in context, which means while listening to the final mix that the vocal supposed to be is 'sitting' in (or 'bed' it is supposed to 'lay' on, or choose your own analogy!). This is because you are ultimately trying to adjust the vocal level so that it 'rides' smoothly 'on' the music track at all times (ok, so I'm apparently going with a railroad analogy for now), which doesn't necessarily imply that it should sit at a static level throughout the song.
You would think that a compressor would be great at totally leveling a vocal, but it can only go so far. A compressor can and will control the level of a vocal above a certain threshold, but this doesn't necessarily translate into a vocal that will sit evenly throughout a dynamic mix. Speaking of compression, this is probably a good time to mention that all processing (especially dynamics) should be in place before beginning the vocal riding process, as changing any of these can change the overall vocal level (as well as the level of some lines in relation to others). Bottom line - do your final vocal rides (IF needed) last in the mixing process.
Let's begin - set your monitors to a moderate level and prepare to focus on the vocal in the mix. Oftentimes I prefer smaller monitors or even mono monitoring for performing vocal rides - you gotta get into the vocal 'vibe' however you can.
Things to look for:
Before you get into any actual detail work, listen to the overall vocal level in the mix throughout the entire song. Sometimes you will have a first verse where the vocal may actually be too loud, or a final chorus that totally swallows up the vocal. Fix these 'big picture' issues first before moving on to riding individual lines and words.
When actually recording the fader moves (as in the video), I'll push the fader up or down for a certain word and then I will want it to quickly jump back to the original level. In the "Levels" video, you will see me hit 'Stop' to get the fader level to jump back to where it was before punching in. The reason why I'm doing it this way is that if you simply punch out (without stopping) the fader won't return to it's original level (even though it's recording it correctly). Long story short, it's the quickest way I found to create my desired workflow, and it works for me (although it may look a bit weird at first)!
Often times you will find that it is the last word or two in a line that will need to be ridden up in level (sometimes the singer has run low on air by the end of a line). Also watch for the lowest notes in a vocal melody - low notes require more ‘air' to make them as loud as the higher notes, so they can tend to be the quieter notes in a vocal track. Another thing to listen for are any louder instruments that may ‘mask' the vocal at any time - sometimes the fix is to raise the vocal, other times you can get better results by lowering the conflicting instrument's level momentarily. In extreme cases, a combination of both may be required!
Other problems that rear their heads from time to time are sibilance, plosives, and other 'mouth noises'. These can all be addressed by using creative level automation, or by using a device more specifically for each issue - a 'de-esser' for sibilance, a High Pass Filter for plosives, for example.
Now, enjoy a short video interlude demonstrating the various techniques for vocal level correction, including the fader technique as well as automation techniques including break-point editing, individual clip level adjustments, and some basic dynamic level control concepts including de-essing and multi-band compression.
Controlling Vocal Levels in Reason.
Multi-bands for Multi Processes
I will leave you with one final tip; you can use a multi-band compressor on a vocal track to deal with multiple issues at once. The high band is good for a bit of 'de-essing', the mid band can be set as a 'smoother' to only 'reduce' gain when the singer gets overly harsh sounding or 'edgy', and the lower band can be used to simply smooth the overall level of the 'body' of the vocal. If there are four bands available, you can turn the level of the bottom-most band totally off, thus replicating a high pass filter for 'de-popping' etc. Additionally, adjusting the level of each band will act like a broad EQ!
Setting the crossover frequencies with this setup becomes more important than ever, so take care and take your time. Remember you are actually doing (at least) four different processes within a single device, so pay attention not only to each process on it's own but to the overall process as a whole. When it works, this can be the only processor you may need on the vocal track.
Multi-band Compressor as 'Multi Processor'
...all of the techniques in this article, however helpful they can be, are not always required - do I even need to remind you all to 'use your ears' at all times? Using vocal rides as an example, I've mixed two songs in a row (by the same artist), one where the vocal automation looked like a city skyline and the very next mix where the vocal needed no automation whatsoever!
As always; "listen twice, automate once"!
Annex Recording and Trevor Price (singer/songwriter) for the use of the audio tracks.
Giles Reaves is an Audio Illusionist and Musical Technologist currently splitting his time between the mountains of Salt Lake City and the valleys of Nashville. Info @http://web.mac.com/gilesreaves/Giles_Reaves_Music/Home.html and on AllMusic.com by searching for “Giles Reaves” and following the first FIVE entries (for spelling...). | <urn:uuid:ce0f5136-c4f3-4788-8cbb-d63f209de883> | CC-MAIN-2019-47 | https://www.reasonstudios.com/blog/vocal-production-and-perfection | s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496668787.19/warc/CC-MAIN-20191117041351-20191117065351-00138.warc.gz | en | 0.952728 | 4,294 | 2.734375 | 3 |
The following text originally appeared in the catalogue published on the occasion of the exhibition Really Useful Knowledge, curated by What, How & for Whom/WHW and organized and produced by the Museo Nacional Centro De Arte Contemporanea Reina Sofía.
The catalogue offers a further occasion to the contributing artists to reflect on the intrinsic pedagogical dimension of their work—in some cases openly affirming the emancipatory power of art, in other cases critically reflecting on knowledge production, with different artistic and political strategies to resist and confront the neoliberal impact on education (or other historical systems of oppression), to reclaim space for self-organized learning processes, to turn to aesthetics to reconfigure poetic and intimate relations, to chart new experiments around commons and communities, or to investigate the past, the present and the desired.
In WHW’s research-oriented curatorial approach, the space of the exhibition itself and the capacity of the museum are engaged as educational dispositif, challenged in their institutional role in knowledge production and distribution, and invited to become both catalysts for ideas and reflection, and sites for public debate and civic participation.
In this introduction, What, How & for Whom/WHW offers an overview of the work of the different artists who took part in the show, representing the artists’ concerns and ideas around education, and a diverse array of contemporary discourses and pedagogical artistic practices.
Photos: Joaquín Cortés & Román Lores / M.N.C.A.R.S
The notion of “really useful knowledge” originated with workers’ awareness of the need for self-education in the early nineteenth century. In the 1820s and 1830s, workers’ organizations in the United Kingdom introduced this phrase to describe a body of knowledge that encompassed various “unpractical” disciplines such as politics, economics, and philosophy, as opposed to the “useful knowledge” proclaimed as such by business owners, who some time earlier had begun investing in the advancement of their businesses by funding the education of workers in “applicable” skills and disciplines such as engineering, physics, chemistry, or math. Whereas the concept of “useful knowledge” operates as a tool of social reproduction and a guardian of the status quo, “really useful knowledge” demands changes by unveiling the causes of exploitation and tracing its origins within the ruling ideology; it is a collective emancipatory, theoretical, emotional, informative, and practical quest that starts with acknowledging what we do not yet know.
Although its title looks back to the class struggles of capitalism’s early years, the present exhibition is an inquiry into “really useful knowledge” from a contemporary perspective, positing critical pedagogy and materialist education as crucial elements of collective struggle. The exhibition is set against the backdrop of an ongoing crisis of capitalism and the revolts and attempts to oppose it at the structural level. In examining ways in which pedagogy can act as an integral part of progressive political practices, Really Useful Knowledge looks into the desires, impulses, and dilemmas of historical and current resistance and the ways they are embodied in education as a profound process of self-realization. The exhibition considers relations between usefulness and uselessness, knowledge and nescience, not as binary oppositions but as dialectical and, first and foremost, as dependable on the class perspective.
Conceived at the invitation of the Museo Nacional Centro de Arte Reina Sofía, the exhibition was shaped in a dialogue with the museum’s curatorial and educational team and is inevitably influenced by the discussions and experiences of the local context. The devastating effects of austerity measures in Spain have been confronted by numerous collective actions in which the forms of protest and organized actions fighting to reclaim hard-won rights have gradually transformed into formal or informal political forces based on principles of the commons and the democratization of power. Through these processes issues pertaining to the wide field of education became a prominent part of the social dynamic—from initiatives for empowerment through self-education, to the reconfigured locus of the university and the role of students in the current social battles, to the struggles to defend public education.
It is not by accident that Really Useful Knowledge includes numerous collective artistic positions. Although disclaimers about collective work have been issued on many occasions—beyond the lures of productiveness and mutual interest, working together is not a guarantee for change, positive or negative—it is a prerequisite for social transformation. In recent years a number of collectives have again come to the forefront of social change by building new systems for renegotiating and redistributing power relations in all spheres of life. Several of the collectives that take part in the exhibition explore its potential as a site for colearning and a tool for reaching out. The group Subtramas has included organizations and activists from all over Spain in a project developed in dialogue with the exhibition. Social actors such as self-education groups, occupied spaces, independent publishers, collective libraries, activists groups, social centers, theorists, poets, LGBT activists, and feminists will take part in assemblies, readings, discussions, and various public actions.
The activist and feminist collective Mujeres Públicas engages with various issues connected to the position of woman in society. One of their permanent causes is the political struggles around abortion legislation in Latin America. The group’s project for the exhibition gathers the recent material from their actions and protests in public space.
Chto Delat? initiate interventions examining the role of art, poetics, and literature in educational situations and integrate activism into efforts to make education more politically based. Their work Study, Study and Act Again (2011–) functions as an archival, theatrical, and didactic space, created to establish interaction with visitors to the exhibition. Many of the publications included in the Chto Delat? installation are published by the Madrid based activist collective and independent publishing house Traficantes de Sueños, who have also organized the continuous education project Nociones Comunes (Common Notions) on a number of topical questions, including the status of labor; geopolitics; and connecting grass-roots activists, militant researchers, citizens, and students with theorists and economists. The work by Argentinean artistic duo Iconoclasistas (Pablo Ares and Julia Risler) uses critical mapping to produce resources for the free circulation of knowledge and information. Their maps, built through collective research and workshops, summarize the effects of various social dynamics, such as the colonization of South America, the history of uprisings on the continent, and the urban developments brought about by neoliberal politics.
Works can only enter into real contact as inseparable elements of social intercourse. It is not works that come into contact, but people, who, however, come into contact through the medium of works.
— M. Bakhtin and P. M. Medvedev, The Formal Method in Literary Scholarship
Really Useful Knowledge explores the possibility of art initiating encounters and debate between people, works, structures, tools, objects, images, and ideas, embarking from two crucial notions—materialist pedagogy arising from the Marxist interpretations of Walter Benjamin’s cultural and political analysis; and critical pedagogy. The exhibition looks at diverse procedural, nonacademic, antihierarchical, grass-roots, heterodox educational situations primarily occupied with the transformative potentials of art, testing the role of images in that process. Without attempting to provide an “overview” of the various educational projects and practices of recent years, many of which use the rhetoric of education as a displaced politics and whose most visible outcome has been an inflation of the discursive realm and “pedagogical aesthetic,” the exhibition looks into the educational process as an existing and integral (but not to be taken for granted) part of the exhibition genre and the original role of the museum.
By considering teaching and learning as reciprocal active processes, Victoria Lomasko has developed Drawing Lesson (2010–), a project in which, as a volunteer for the Center for Prison Reform, she has been giving drawing lessons to the inmates of juvenile prisons in Russia. Lomasko developed her own methodology of empowering the socially oppressed by employing images to strengthen analytical thinking and empathy. Working closely with organizations for the rights of immigrants, Daniela Ortiz developed Nation State II (2014), a project engaged with the issue of immigration, specifically with the integration tests required for obtaining residency permits. Revealing this test as a mechanism for the further exclusion and extension of colonial dominance over illegal workers coming mostly from ex-colonies, Nation State II collaborates with immigrants in creating the tools needed to learn the critical information they require when obtaining their rights. At the same time, the project develops a critical analysis of immigration legislature in Spain.
Really Useful Knowledge develops through a number of recurring themes revolving around the relationship between the artist and social change, the dialectic embedded in the images and visual realm that can generate political action, and the tension between perceived need for active involvement and insistence on the right of art to be “useless.” In Cecilia Vicuña’s What Is Poetry to You?—filmed in 1980 in Bogotá—the artist asks passers-by to respond to the question posed in the work’s title. The answers offer personal definitions of poetry that are opposed to racial, class, and national divisions; and the collective voice emerges that delineates a direction for emancipation and articulates socialist ideas through art. While relying on research into military technology and operations as in many of his works, in Prototype for a Non-functional Satellite (2013) Trevor Paglen creates a satellite that functions as a sculptural element in the gallery space, its very “uselessness” serving to advocate for a technology divorced from corporate and military interests. Similarly, the Autonomy Cube (2014) that Paglen developed in collaboration with computer researcher and hacker Jacob Appelbaum problematizes the tension between art’s utilitarian and aesthetic impulses. While visually referencing Hans Haacke’s seminal work of conceptual art, Condensation Cube (1963–1965), the Autonomy Cube offers free, open-access, encrypted, Internet hotspot that route traffic over the TOR network, which enables private, unsurveilled communication.
Carole Condé and Karl Beveridge’s series of photographs Art Is Political (1975) employs stage photography to relate social movements with a field of art. The series combines dancers’ bodies in movement with Yvonne Rainer’s choreography and Chinese agitprop iconography, with each photograph composing one letter of the sentence “Art Is Political.” The tensions and contradictions pertaining to the possibility of reconciling high art and political militancy figure also in Carla Zaccagnini’s Elements of Beauty (2014), a project that examines protest attacks on paintings in UK museums carried out by suffragettes in the early twentieth century. By outlining the knife slashes made on the paintings, Zaccagnini retraces them as abstract forms, while the accompanying audio guides provide fragmented information on the suffragettes’ court trials. One hundred years after those iconoclastic attacks, Zaccagnini’s work poses uncomfortable questions about where we would put our sympathies and loyalties today and how we know when we have to choose.
Like highways, schools, at first glance, give the impression of being equally open to all comers. They are, in fact, open only to those who consistently renew their credentials.
— Ivan Illich, Deschooling Society
How societies define and distribute knowledge indicates the means by which they are structured, what is the dominant social order, and degrees of inclusion and exclusion. Artists have often attempted to analyze the way in which the education system acts as the primary element for maintaining social order and the potential for art to develop progressive pedagogy within existing systems. Work Studies in Schools (1976–1977) by Darcy Lange documents lessons in the classrooms of three schools in Birmingham, England. The project uses the promise of video’s self-reflectivity and interactivity in its early years to expose class affiliation and the ways in which education determines future status in society, touching upon a range of subjects that would soon be swept away by Thatcherite ideology. While working as a teacher of visual arts in a high school in Marrakesh, artist Hicham Benohoud took group photographs of his pupils in the carefully posed manner of tableaux vivants. The Classroom (1994–2002) creates surrealist juxtapositions of pupils’ bodies, educational props, and strange objects, while students’ readiness to adopt the curious and uneasy postures opens up themes of discipline, authority, and revolt. En rachâchant (1982), a film by Danièle Huillet and Jean-Marie Straub, humorously looks into dehierarchizing the educational process by showing schoolboy Ernesto, who insistently and with unshakable conviction refuses to go to school. Two Solutions for One Problem (1975) by Abbas Kiarostami, a short didactic film produced by the Iranian Centre for the Intellectual Development of Children and Young Adults, is a simple pedagogical tale of cooperation and solidarity that shows how two boys can resolve the conflict over a torn schoolbook through physical violence or camaraderie. In Postcards from a Desert Island (2011) Adelita Husni-Bey employs earlier pedagogical references, such as works by Francesc Ferrer i Guàrdia or Robert Gloton. For the children of an experimental public elementary school in Paris, the artist organized a workshop in which the students built a society on a fictional desert island. The film shows the children’s self-governance quickly encountering political doubts about decision-making processes and the role of law, echoing the impasses we experience today, but it also shows the potential and promise of self-organization.
Looking into ideological shifts that change how the relevance of particular knowledge is perceived, marxism today (prologue) (2010) by Phil Collins follows the changes brought about by the collapse of the German Democratic Republic (GDR) in the lives of three former teachers of Marxism-Leninism, a compulsory subject in all GDR schools that was abolished along with state socialism at the time of German reunification. The teaching of Marxism-Leninism, as described by the interviewed teachers, comes across as an epistemological method and not just a state religion whose dogmas were promulgated by a political authority. This recounting of the teachers’ lives complicates the success story of German unification, which sees the absorption of this aberrant entity back into the Bundesrepublik as a simple return to normality. In use! value! exchange! (2010), Collins reclaims the relevance of Marxist education for the present day by filming a symbolic return in which one of the former teachers gives a lesson on basic concepts of surplus value and its revolutionary potential to the clueless students of the University of Applied Sciences, previously the prestigious School of Economics, where she taught before the “transition.” The students’ ignorance of the most basic of the contradictions Marx discovered in capitalism—between use value and exchange value—is indicative of the present moment in which capitalism stumbles through its deepest economic crisis in eighty years.
Tracing the history of public education in most cases reveals an admixture of paternalistic idealism attempting to overcome social fears that, until the nineteenth century, had discouraged the education of the poor, and a clear agenda of worker pacification through the management of social inclusion. And yet, as Silvia Federici and George Caffentzis note, “In the same way as we would oppose the shutting down of factories where workers have struggled to control work and wages—especially if these workers were determined to fight against the closure—so we agree that we should resist the dismantling of public education, even though schools are also instruments of class rule and alienation. This is a contradiction that we cannot wish away and is present in all our struggles.”
The regressive tendencies of neoliberalism prompted a general retreat from the ideologies of social change, steering education further toward the function of legitimizing a deeply oppressive social order. But those engaged in the contemporary “battle for education” must shed all nostalgia for the progressive strategy of welfare provision associated with the “golden age” decades of European capitalism—a strategy that fostered social mobility within the prevailing economic structure and attempted limited educational reforms governed by the humanistic faith in education as the development of “people’s creative potential.” They must also be cautious about betting on the emancipatory hopes that have been inscribed in the affective and communicative possibilities of immaterial labor, because in the contemporary regime touted as the knowledge society, work has become a form of internalized vocation leading to creative self-fulfillment, while innermost thoughts and creative drives have been turned into activities productive for capital. The contemporary “battle for education” has to address new social inequalities and conflicts triggered by distribution and access to knowledge and must assess the effects that knowledge as the basis of capital reproduction has on the totality of knowledge workers’ existence.
History breaks down in images not into stories.
— Walter Benjamin, The Arcades Project
Several works in the exhibition use the principles of collecting, accumulating, and reorganizing images or objects and assembling them into sequences in order to challenge the impulses of reification and to test the ability of images to “defin[e] our experiences more precisely in areas where words are inadequate.” Many works constitute informal assemblies or archives aimed at revealing the ways in which images operate, thus making the very process of viewing more politically aware. Photographs by Lidwien van de Ven zoom into the hidden details of notorious public political events, implicating the viewer in their content. Since the 2012, the artist has been capturing the complex dynamic between the revolutionary pulses of social transformation and the counterrevolutionary resurgence in Egypt. Depicting the contested period of the Egyptian political uprising through visual fragments, van de Ven portrays the oscillations of the very subject of the revolution.
Several works in the exhibition deal with the modernist legacy and the present-day implications and reverberations of culture having been used as a Cold War instrument. Starting from a reference to the iconic exhibition Family of Man, first organized at the Museum of Modern Art in New York in 1955 and later circulated internationally, Ariella Azoulay’s installation The Body Politic—A Visual Universal Declaration of Human Rights (2014) deconstructs the notion of human rights as a post-WWII construction based on individualism, internationalism, humanism, and modernity that at the same time also contributed to the formation of the hegemonic notion of otherness. By reworking the original display of Family of Man, Azoulay shows the cracks in its representation system and asks what kind of humanism we need today to restore the conditions for solidarity. The visual archive of Lifshitz Institute (1993/2013) by Dmitry Gutov and David Riff centers on rereading the works of Russian aesthetic philosopher Mikhail Lifshitz, one of the most controversial intellectual figures of the Soviet era. Opening in Moscow by D. A. Pennebaker documents impressions of the American National Exhibition organized by the U.S. government in 1959 in order to propagate the American way of life. By portraying the rendezvous of Muscovites and American advanced technology, it shows a propaganda machine gone awry: while the exhibition attempted to lure the audience with a “promised land” of consumerism, the documentary presents differences as well as similarities between American and Russian working-class life.
If the pertinence of the Cold War for the present day manifests itself through the recent revival of Cold War rhetoric that serves as a cover for military and nationalist drumbeats whose noise is making up for a suspension of democracy, the legacy of colonial rule is as vigorous today as it was in 1962, when Jean-Paul Sartre memorably diagnosed the situation in “Sleepwalkers,” (1962) an essay about the behavior of Parisians on the very day the Algerian ceasefire was signed: “Colonialism over there, fascism here: one and the same thing.”
Originally produced for Algerian state television, How Much I Love You (1985) by Azzedine Meddour is an ingenious mixture of the genres of educational film, propaganda, and documentary. Meddour uses excerpts from advertising and propaganda films found in colonial archives, expertly edited with a distressingly joyous soundtrack and turned on their head in an ironic chronicle of colonial rule and the French role in the Algerian War of Independence. The installation Splinters of Monuments: A Solid Memory of the Forgotten Plains of Our Trash and Obsessions (2014) by Brook Andrew includes a wide assortment of objects: artworks from the Museo Reina Sofía collections, artworks borrowed from the Museo Nacional de Antropología i Museo de América, records from local community archives, original Aboriginal human skeletons used for medical purposes, and paraphernalia such as postcards, newspapers, posters, rare books, photographs, and smaller objects. Their juxtaposition challenges hegemonic views on history, art, gender, and race. The possibility of renegotiating relations of colonialism and power through engaged acts of viewing and by bringing a hybrid social imaginary to the symbolic site of the museum is also explored by This Thing Called the State (2013) and EntreMundos [BetweenWorlds] (2013) by Runo Lagomarsino, works that rely on historical narratives related to the colonial conquests of Latin America and the question of migration. Looking into how society relates to its past and projects its identity, Lagomarsino borrows a collection of retablo votive paintings commissioned by Mexican migrants after their successful illegal crossing of the border to the United States.
There is not only such a thing as being popular, there is also the process of becoming popular.
— Bertolt Brecht, Against Georg Lukács
Really Useful Knowledge reiterates the necessity of producing sociability through the collective use of existing public resources, actions, and experiments, either by developing new forms of sharing or by fighting to maintain existing ones now under threat of eradication. Public Library: Art as Infrastructure (www.memoryoftheworld.org) (2012–) by Marcell Mars is a hybrid media and social project based on ideas from the open-source software movement, which creates a platform for building a free, digitized book repository. In that way, it continues the public library’s role of offering universal access to knowledge for each member of society. However, despite including works that investigate the progressive aspects of complex new technologies and their potential to reach a wide public, the exhibition avoids idealizing them, because the technological leap for some has been paralleled by dispossession and an increase in poverty for others. The project Degenerated Political Art, Ethical Protocol (2014) by Núria Güell and Levi Orta uses the financial and symbolic infrastructure of art to establish a company in a tax haven. With help from financial advisors, the newly established “Orta & Güell Contemporary Art S.A” is able to evade taxes on its profits. The company will be donated to a local activist group as a tool for establishing a more autonomous financial system, thus using the contradictory mechanisms of financial capitalism as tools in the struggle against the very system those tools were designed to support.
The exhibition also looks into artistic practices in which social and communal messages are conveyed through folk or amateur practices, insisting on the importance of popular art—not as an ideologically “neutral” appreciation and inclusion of objects made by children, persons with mental illness, or the disadvantaged, but because it creates new forms of sociability, because it is popular in the Brechtian sense of “intelligible to the broad masses,” and because it communicates between presently ruling sections of society and “the most progressive section of the people so that it can assume leadership.” Ardmore Ceramic Art Studio is an artists’ collective founded in 1985 in the rural area of Ardmore in South Africa. As a reaction to official government silence on AIDS, the artists made ceramics that, in addition to commemorating fellow artists lost to AIDS, explain how the disease spreads and the possible methods of protection. Expressing important ideas related to HIV prevention, this didactic pottery is used as a far-reaching tool for raising awareness. Primitivo Evanán Poma is an artist from the village of Sarhua in the Peruvian Andes populated by indigenous people, many of whom migrated to Lima during the second half of the twentieth century due to economic hardship and the devastating effects of the “internal conflict” of 1980–2000. Art produced with the Association of Popular Artists of Sarhua uses the pictorial style of their native village to address social concerns and point out the many-sided discrimination of indigenous people in Lima, thus becoming a catalyst for building community self-awareness and solidarity.
In his film June Turmoil (1968), Želimir Žilnik documents student demonstrations in Belgrade in June 1968, the first mass protests in socialist Yugoslavia. Students were protesting the move away from socialist ideals, the “red bourgeoisie,” and economic reforms that had brought about high unemployment and emigration from the country. The film ends with a speech from Georg Büchner’s revolutionary play Danton’s Death (1835), delivered by stage actor Stevo Žigon—one of the many prominent public figures and artists who joined the protest in solidarity with the students’ cause. The film’s finale testifies to the centrality of education and knowledge to the socialist worldview and shows how the barriers separating “high” and “low” culture can be broken in crucial moments of political radicalization.
The question of the reach of popular art and its relation to high culture and art institutions can often be observed through the position of the autodidact and by resisting the authority of formal education and the ever-increasing professionalization of the art field. Beyond the refusal to follow the customary and accepted paths to the career of art-professional, the approach of developing knowledge through self-education and peer learning offers the possibility of building one’s own curriculum and methodology, as well as moving away from ossified and oppressive intellectual positions. Trained as a painter, in the early 1930s Hannah Ryggen taught herself to weave tapestries to comment on the political events of her time, such as the rise of fascism, the economic crisis of 1928 and its devastating effects on people’s lives, Benito Mussolini’s invasion of Ethiopia, the German occupation of Norway, and the Spanish Civil War. Using “traditional” techniques, she created a powerful body of politically progressive work imbued with pacifist, communist, and feminist ideas. Since the mid-1970s, Mladen Stilinović has been developing artistic strategies that combine words and images, using “poor” materials to engage the subjects of pain, poverty, death, power, discipline, and the language of repression. His pamphlet-like, agit-poetic works offer laconic commentary on the absurdity and crudity of power relations and the influence of ideology in contemporary life.
People get ready for the train is coming
— Curtis Mayfield, “People Get Ready”
Bringing to the fore a number of works that center on the question of political organization and art’s capability to produce imagery able to provoke strong emotional responses, the exhibition affirms the role of art in creating revolutionary subjectivity and explores how forms of popular art reflect the ideas of political movements, evoking the original meaning of the word propaganda, which can be defined as “things that must be disseminated.” The work by Emory Douglas included in the exhibition was created for The Black Panther, the newspaper of the Black Panther Party published during their struggle against racial oppression in the United States from 1966 until 1982. A number of artistic and propaganda activities were integrated into the Black Panther Party program, and as their minister of culture Douglas produced numerous posters and newspaper pages with strong political messages against police brutality and for every person’s equal rights to basic housing, employment, free education, and guaranteed income.
During the antifascist and revolutionary People’s Liberation War in Yugoslavia (1941–1945), numerous expressions of Partisan art contributed to the creation of a new revolutionary subjectivity and the articulation of revolutionary struggle, in the process changing the notion of art and the understanding of its autonomy. The Mozambican Institute by Catarina Simão researches the film archives of the Mozambican Liberation Front, or FRELIMO. As a part of their struggle against Portuguese colonial rule, and in an attempt to fight illiteracy, FRELIMO created the Mozambican Institute in Dar es Salaam in 1966 to enable study outside of the educational framework organized by colonial rule. Working with the remains of the institute’s film archive kept in Maputo, Simão reinterprets and researches this heritage in which political struggle intersected with radical educational and artistic ideas.
Many new models and alternatives to the current social system have been proposed, but applying what we already know on the individual and collective level is much more challenging than acquiring that knowledge. Really Useful Knowledge affirms the repoliticization of education as a necessary condition for recovering politics and pedagogy as a crucial element of organized resistance and collective struggles. The exhibition brings together artistic works imbued with ideas that reconfigure social and intimate relations, and it attempts to create an interchange of convictions and histories in order to infect viewers with the works’ proposals, convictions, and dilemmas.
Cited in Raymond Williams, “The Uses of Cultural Theory,” New Left Review 158 (July–August 1986).
Silvia Federici and George Caffentzis, “Notes on the Edu-factory and Cognitive Capitalism,” The Commoner, no. 12 (Spring/Summer 2007).
John Berger, Ways of Seeing (London: BBC & Penguin Books, 2008), 33.
Jean-Paul Sartre, “Sleepwalkers,” in Colonialism and Neocolonialism (New York: Routlege, 2001), 73.
Bertolt Brecht, “Against Georg Lukács,” in Aesthetics and Politics: The Key Texts of the Classic Debate within German Marxism (London: Verso, 2002), 81.
The song “People Get Ready” by Curtis Mayfield from 1965 became an emblematic protest song of various civil rights and revolutionary movements in the 60’s and 70’s in the US. The original spiritual message embodied in Mayfield’s lyrics: “People get ready, there’s a train a comin’ (…) Don’t need no ticket, you just thank the Lord” was transformed by Black Panthers’ R&B band The Lumpen into the rendition: “We said people get ready; Revolution’s come; Your only ticket; Is a loaded gun”.
What, How & for Whom
What, How & for Whom (WHW) is a curatorial collective formed in 1999. WHW organizes a range of production, exhibition, and publishing projects, and since 2003, they have been directing city-owned Gallery Nova in Zagreb. What? How? and For Whom? are the three basic questions of every economic organization, and are fundamental to the planning, conception, and realization of exhibitions, and the production and distribution of artworks, and the artist’s position in the labor market. These questions formed the title of WHW’s first project, in 2000 in Zagreb, dedicated to the 152nd anniversary of the Communist Manifesto, and became the motto of WHW’s work and the name of their collective. | <urn:uuid:04d349f7-f730-4d02-99df-33914e013348> | CC-MAIN-2019-47 | https://artseverywhere.ca/2017/02/02/really-useful-knowledge/ | s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496668561.61/warc/CC-MAIN-20191115015509-20191115043509-00418.warc.gz | en | 0.941888 | 6,668 | 2.96875 | 3 |
Fruit and Vegetable Consumption in Europe
Last updated: 10 January 2012
Fruit and vegetables are important elements of a healthy, balanced diet, be it as part of a main meal or as a snack. They provide us with vitamins, minerals and fibre, some energy (mainly in the form of sugar), as well as certain minor components - often referred to as phytochemicals or secondary plant products - which are potentially beneficial for our health. Epidemiological studies have shown that high intakes of fruit and vegetables are associated with a lower risk of chronic diseases; particularly cardiovascular disease1-3, but also type 2 diabetes4, and certain cancers, i.e. those of the mouth, pharynx, larynx, oesophagus, stomach and lungs5.
A majority of European citizens associate a healthy diet with fruit and vegetable consumption, and many of them believe that their diet is healthy6. But is this true? Do people in Europe actually get the amounts of fruit and vegetables recommended for good health? Aiming to answer this question, this review also looks deeper into what factors influence fruit and vegetable consumption in Europe, and what are the best intervention approaches to increase it. Lastly, we will have a glance at on-going European initiatives around fruit and vegetable consumption.
First of all, we need to understand which foods and drinks fall into the category of fruit and vegetables, how much we are recommended to have of these and why it might be difficult to obtain reliable and comparable data on fruit and vegetable consumption.
Definitions of fruit and vegetables
How are fruit and vegetables defined? It might seem like a simple question, but it is actually quite complicated to derive an all-embracing definition. Tomatoes and lettuce, apples and strawberries may be easy to identify as vegetables and fruits, respectively. But how about potatoes? And is fruit juice equal to fruit? Then there are pulses and nuts, which are also plant foods that may or may not be categorised in these food groups. This is important to keep in mind when performing dietary surveys in order to know what is actually being measured.
The definition of fruit and vegetables also varies between countries. Some countries (e.g. Austria, Belgium, Denmark, Iceland, Netherlands, Portugal, Spain and Sweden) have not included potatoes and starchy tubers, following the same principle as the World Health Organization (WHO), whereas the Norwegian recommendations, for example, include potatoes. Juice is sometimes excluded from the fruit and vegetable recommendations (e.g. Belgium, Spain), sometimes included with limitations, e.g. counting as a maximum of one portion (Denmark, the Netherlands and Sweden), and fully included in other countries (e.g. Iceland and Norway). Austria and Portugal do not provide any specification regarding juice.7
Varying definitions of which foods belong to fruit and vegetables present a barrier to comparing data from different studies. This is a major issue when trying to estimate fruit and vegetable consumption in Europe. Given that many national authorities regularly perform surveys of fruit and vegetable intake, standardising the survey methodology would vastly improve data comparability across different countries.
Measuring fruit and vegetable intake
There are different ways to measure food consumption. Food diaries and dietary recalls (i.e. interviews and questionnaires) are means to obtain information on what individuals eat. Household spend and average food supply based on national statistics may also be used to assess consumption.
Different methods capture different aspects of food consumption, and their accuracy varies. Hence, data obtained with different methods are not directly comparable. National Authorities have typically selected methods for their dietary surveys without international comparability in mind8.
The lack of comparable data on dietary intake will be tackled by the EU Menu, a pan-European dietary survey by the European Food Safety Authority (EFSA) that uses standardised data collection methods. The 5 year survey will start at the beginning of 20129.
Definitions of fruit and vegetables are not only important to obtain accurate and comparable data on consumption, but they are also crucial for intake recommendations and what their effect will be on population intakes.
WHO recommends eating ≥400 g per day of fruits and vegetables, not counting potatoes and other starchy tubers such as cassava10. In Europe, the recommendations vary between countries. In general, these are in line with the WHO recommendation, but some countries recommend higher amounts e.g. ≥600 g per day in Denmark7.
2. Fruit and vegetable consumption in Europe
What do food supply data say?
The Food and Agriculture Organization of the United Nations (FAO) provides data on food consumption based on agricultural statistics, which indicate food supply patterns at the national level.
According to the FAO data, the vegetable supply (excluding potatoes and pulses) in Europe has increased over the last four decades. It also shows a north-south gradient; in Northern Europe the vegetable supply is lower than in Southern Europe. For example, in Finland the average supply is 195 g per person per day, which corresponds to 71 kg per person per year, whereas Greece has an average supply of 756 g per person per day (276 kg per person per year)11.
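As a side note on the arithmetic behind these figures, the per-year values are simply the per-day supply multiplied by 365 days and converted from grams to kilograms. The sketch below is purely illustrative and reuses the Finland and Greece figures quoted above; it is not part of the FAO data set.

```python
# Convert average daily vegetable supply (g per person per day) into annual supply
# (kg per person per year), using the FAO-based figures quoted in the text above.
daily_supply_g = {"Finland": 195, "Greece": 756}

for country, grams_per_day in daily_supply_g.items():
    kg_per_year = grams_per_day * 365 / 1000  # 365 days per year, 1000 g per kg
    print(f"{country}: {grams_per_day} g/day ≈ {kg_per_year:.0f} kg/year")

# Expected output:
# Finland: 195 g/day ≈ 71 kg/year
# Greece: 756 g/day ≈ 276 kg/year
```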
What do household food consumption data say?
National Authorities regularly collect data on food consumption at household level through household budget surveys. Efforts have been made to compile and modulate these data - from a number of European countries (collected at different time points) - to enable comparison.
Household data show that total vegetable consumption (excluding potatoes and pulses) varied from 284 g per day in Cyprus to 109 g per day in Norway. These countries had also the highest and lowest recorded intakes, respectively, of fresh vegetables. Interestingly, Cyprus had the lowest consumption (4 g per day) of processed vegetables (frozen, tinned, pickled, dried and in ready meals, but excluding potatoes). The consumption of processed vegetables was highest in Italy at 56 g per day12.
Based on household food data on fruit and vegetable consumption it has been suggested that household availability of fruit and vegetables is satisfactory in some Southern European countries and that in a number of countries the availability of fruits is higher than that of vegetables11.
What do dietary survey data say?
EFSA has compiled national food consumption data based on dietary surveys in order to assess food intake in Europe. Adjustments of the compiled data allow for a certain level of comparison13.
Figure 1 - Mean fruit & vegetable intake per country (in grams per day), excluding juices13
These data reveal that the mean vegetable intake (including pulses and nuts) in Europe is 220 g per day. Mean fruit intake is 166 g per day, implying that the average consumption of fruit and vegetables is 386 g per day. The data further show that the vegetable consumption is higher in the South than in the North of Europe and that the regions with the highest intake of fruits are those of Central and Eastern Europe followed by those in the South13.
Only in Poland, Germany, Italy and Austria the recommendation of consuming ≥400 g of fruit and vegetables per day was met. When fruit and vegetable juices were included, Hungary and Belgium also reached the recommended amount11. It is worth noting that the database only contains data from one Southern European country, namely Italy (Figure 1).
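To make the comparison with the WHO recommendation explicit, the minimal sketch below adds the mean vegetable and fruit intakes quoted above and checks the total against the 400 g per day threshold. It is an illustrative calculation using the rounded values from the text, not a re-analysis of the EFSA database.

```python
# Compare the mean European intakes quoted above with the WHO recommendation
# of at least 400 g of fruit and vegetables per day (excluding potatoes and starchy tubers).
WHO_RECOMMENDATION_G = 400

mean_vegetables_g = 220  # including pulses and nuts, as in the text
mean_fruit_g = 166       # excluding juices

total_g = mean_vegetables_g + mean_fruit_g
print(f"Mean combined intake: {total_g} g/day")                        # 386 g/day
print(f"Meets WHO recommendation: {total_g >= WHO_RECOMMENDATION_G}")  # False
print(f"Average shortfall: {WHO_RECOMMENDATION_G - total_g} g/day")    # 14 g/day
```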
There are only limited data on European children’s fruit and vegetable consumption, but one study suggests that 6-24% of European children reach the WHO recommendation7. The average vegetable intake was estimated to be 86 g per day, and the average fruit intake 141 g per day. When fruit and vegetables are combined, the highest intakes are seen in Austria and Portugal and the lowest in Iceland and Spain. The type of vegetables consumed differed according to geographical location: in the North, consumption of raw vegetables was higher, while vegetable soups were the main sources of vegetables in the South.
Insufficient fruit and vegetable intakes in Europe
The WHO estimates that in more than half of the countries of the WHO European Region the consumption of fruit and vegetables is lower than 400 g per day, and that in one third of the countries the average intake is less than 300 g per day8. EFSA’s analysis based on national dietary surveys suggests that the recommended amount is reached in only 4 of the participating EU Member States11.
Disease burden related to low fruit and vegetable intakes
According to the estimates above, a majority of Europeans do not meet the WHO recommendations for fruit and vegetable intake. As eating recommended amounts of these foods helps to ensure health and prevent disease, poor intakes would be expected to negatively impact on health.
To get an idea of the magnitude of the problem, attempts have been made to estimate the contribution of low fruit and vegetable consumption to the burden of disease. The most recent analysis in the European Union (EU) dates from 1997. At the time, it was estimated that 8.3% of the burden of disease in the EU-15 could be attributed to inadequate nutrition, with low intakes of fruit and vegetables being the cause for 3.5% of the disease burden14. WHO has estimated that 2.4% of the burden of disease in the WHO European Region was attributable to low intakes of fruit and vegetables in 2004 (Table 1)15.
Table 1 - Top 10 health risk factors and their estimated relative contribution to the burden of disease (from15)
| Risk factor | Burden of disease (%) |
| --- | --- |
| 1. Tobacco use | |
| 2. Alcohol use | |
| 3. High blood pressure | |
| 4. Overweight and obesity | |
| 5. High cholesterol | |
| 6. Physical inactivity | |
| 7. High blood glucose | |
| 8. Low fruit and vegetable intake | 2.4 |
| 9. Occupational risks | |
| 10. Illicit drug use | |
Most of the benefit of consuming fruit and vegetables comes from a reduction in cardiovascular disease risk, but fruit and vegetables may also reduce the risk of certain cancers16.
WHO has estimated that insufficient intake of fruit and vegetables causes around 14% of gastrointestinal cancer deaths, about 11% of ischaemic heart disease deaths and about 9% of stroke deaths worldwide15.
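As background on how such "attributable" percentages are usually derived (hedged here, since the WHO estimates rest on more detailed exposure distributions than this simple case), the standard single-exposure population attributable fraction formula is sketched below with purely hypothetical input values; neither the prevalence nor the relative risk shown is a WHO figure.

```python
# Population attributable fraction (Levin's formula) for a single binary risk factor:
#   PAF = p * (RR - 1) / (1 + p * (RR - 1))
# where p is the prevalence of the exposure (here: low fruit and vegetable intake)
# and RR is the relative risk of the disease among the exposed.
def population_attributable_fraction(prevalence: float, relative_risk: float) -> float:
    excess = prevalence * (relative_risk - 1)
    return excess / (1 + excess)

# Hypothetical illustration only (not WHO inputs): if 60% of a population had a low intake
# carrying a relative risk of 1.3 for ischaemic heart disease, roughly 15% of that disease
# burden would be attributable to low intake.
paf = population_attributable_fraction(prevalence=0.6, relative_risk=1.3)
print(f"PAF ≈ {paf:.0%}")  # PAF ≈ 15%
```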
As for dietary habits in general, a wide range of factors influence fruit and vegetable consumption; factors in our physical, social and cultural environment as well as personal factors, such as taste preferences, level of independence, and health consciousness. Many of these factors change throughout life.
Income and education
There are many studies supporting a relation between income level and fruit and vegetable intake; low-income groups tend to consume lower amounts of fruit and vegetables than higher income groups17. But why is that?
High costs may negatively impact on fruit and vegetable intake levels18. This does not only concern low income groups. Also people with higher incomes perceive price as a barrier to consumption of these foods. However, it tends to be more of a concern among those with smaller revenues19. Thus, affordability is likely to be only one of several factors mediating the effect of income level on fruit and vegetable consumption.
Better educated adults show higher vegetable consumption. Besides the financial aspect just mentioned – higher education generally means higher income – this could be related to greater knowledge and awareness of healthy eating habits in those with higher education levels. It is also likely that certain values, ideals and social influence linked to education and income levels influence our eating behaviours, including fruit and vegetable consumption20.
Gender and age
In general, girls and women consume larger amounts of fruit and vegetables than do boys and men17, 21-23. This also seems to be the case for pre-school children24; the gender difference is thus already apparent at an age when nutrition knowledge is unlikely to have any impact.
There is no simple answer to the question why females eat more fruit and vegetables than males. Social structures linked to the traditional roles of men and women in society could be one explanation22. It has also been suggested that girls like fruit and vegetables more than do boys and hence they eat more of them. Why that is, however, remains unclear21.
Age also appears to influence fruit and vegetable consumption. In children and adolescents, consumption tends to decrease with age23. In adults, the relation is reversed, i.e. intake levels increase with age. Possible explanations include higher income and knowledge with age, and social habits and cues, e.g. the type of social activities people take part in, social eating habits, ideals related to food, and the time devoted to cooking20.
Accessibility and availability
The availability of a variety of attractively displayed fruit and vegetables all year round positively affects fruit and vegetable consumption, particularly among people of higher socioeconomic status19. Similarly, availability of and access to fruit and vegetables in the home is important for consumption in both children and adults19, 23, 25. On the other hand, a lack of, or limited supply of, fruit and vegetables (e.g. little variety offered in canteens or local shops, and poor quality) has been reported to be an obstacle to consumption of such foods18.
Family factors and social support
Social support appears to enhance fruit and vegetable consumption26 and family factors influence fruit and vegetable intake in children, adolescents and adults.
In adults, particularly in men, being married positively impacts on the amounts of fruit and vegetables consumed19, 22. Women seem to have a positive influence on their husbands’ intake frequency, amounts and variety of the fruit and vegetables eaten19. In general, family factors seem to be stronger determinants in men than in women. This is thought to be related to their traditional roles in the household; women handle health-related issues and more commonly shop and prepare food than do men19, 22.
Children’s fruit and vegetable intake levels are related to how much their parents consume24. There is also a relationship between family rules and children’s vegetable intake. Pressure to eat fruit and vegetables does not have any positive effect on intake in children. However, consumption can be enhanced when parents are good role-models and encourage children to eat fruit and vegetables27. Family meal patterns, in particular shared family meals, also improve fruit and vegetable consumption in children23, 24. Home availability and other factors in the shared environment as well as genetic pre-disposition (inborn food preferences) could explain the link between parents’ and children’s intake levels24.
Dietary habits learnt in childhood seem to be predictive for intake levels in adulthood19. The earlier children are introduced to vegetables the more likely they are to have higher consumption levels at pre-school age24. People who eat a lot of fruit and vegetables in childhood remain good consumers28.
Food preference is one of the factors related to fruit and vegetable consumption23, 25. When starting to eat solid food the child may initially not seem to like certain foods, but repeated exposure may improve this. As many vegetables have a slightly bitter taste, the child may need to try them more often than other foods before accepting them.
Parents using pressure and rewards to make their children eat fruit and vegetables may not be very successful. Typically such strategies result in even stronger aversions. Giving children a variety of foods, tastes and textures, being patient and repeatedly serving foods they initially seem to dislike, being a role model, and encouragement are far better strategies29.
Although to a large extent developed during childhood30, food preferences change over time and may be modified also in adulthood. As for children, repeated exposure may reduce food neophobia, i.e. being reluctant to try new foods, in adults as well31.
To what extent nutritional knowledge and awareness of recommendations influence what we eat is widely discussed and explanations to why certain groups eat more healthily than others have been sought. Among the psychosocial factors, nutritional knowledge is one of the strongest predictors for fruit and vegetable consumption26. The lack of skills to prepare fruit and vegetables for consumption is another factor which could constitute an obstacle to purchase and consumption18.
There often seem to be gender differences in nutritional knowledge, with women being more knowledgeable than men. Men also tend to be less aware of dietary recommendations and the risks linked to unhealthy dietary habits32, whereas women are more likely to associate a healthy diet with eating more fruit and vegetables6.
Psychological factors, attitudes, beliefs and perceived barriers
Attitudes and beliefs towards fruit and vegetables have an impact on consumption levels26. There is evidence that self-efficacy (belief in one’s own ability to perform tasks, attain goals etc.) is a strong predictor for fruit and vegetable intake in adults23, 26. Self-esteem also positively impacts vegetable intake20 as does perceived healthiness of fruits and vegetables32.
The vast majority of the citizens in the EU consider what they eat good for their health, 20% even declare that their eating habits are very healthy. A majority of Europeans believe that it is easy to eat a healthy diet and that eating a healthy diet means eating more fruit and vegetables6. Considering what we know about Europeans’ dietary habits and their fruit and vegetable intake, this may appear surprising. However, it has been suggested that one important barrier to fruit and vegetable consumption is that people actually believe their diet is satisfactory17.
Lack of time and control over what they eat are the two main reasons Europeans give to explain the difficulty of eating a healthy diet6. Time constraints to eat fruit and vegetables represent a complex issue. For example, there are indications that fruit is often considered convenient food whereas vegetables are not. For Europeans, irregular working hours and a busy lifestyle are perceived as barriers to vegetable consumption. Low consumers of fruit and vegetables consider convenience factors, such as time available for preparation of food and shopping, availability of shops and simplicity of preparation and cooking, of higher importance for their intake than high consumers31.
Increasing vegetable intakes
In 2006, 1 in 5 Europeans reported having changed their diet over the last year. Of these more than half indicated that they had increased their fruit and vegetable intake. Weight management and health maintenance were the major reasons for diet changes. Increased fruit and vegetable intake was reported by fewer in the Mediterranean region than elsewhere. On the other hand, as many as 70% of the individuals in Denmark and Slovenia who had changed their diet reported having increased their consumption of these foods. People in countries with relatively high fruit and vegetable consumption could be more likely to consider their intake of fruit and vegetables as sufficient6.
4. Interventions – what is effective?
The factors influencing fruit and vegetable intake are numerous and linked to each other in complex ways. As a consequence, changing consumption patterns remains a challenge, particularly at population level. Different intervention programmes addressing low fruit and vegetable consumption have adopted different strategies, with variable success.
Dietary habits and preferences largely form during childhood and hence many initiatives for increasing fruit and vegetable consumption target children. The sad truth is that despite a large number of interventions and intense efforts, impact on consumption levels has been rather limited18. Some elements of success can be identified, though.
Most often the projects aimed at increasing fruit and vegetable consumption in children are school-based. Implementing programmes in schools ensures wide participation and gives the opportunity to combine different types of activities, such as traditional classroom-based learning, school gardening, cooking classes and feeding33.
For maximum effect, school-based interventions should consist of a number of different activities. The more intense and multi-faceted the intervention, the higher the increase in intakes34. Skill-building activities, like cooking classes, are more effective than passive learning approaches18, 33. Duration is also important, with programmes running at least one year being the most effective 33.
Distributing fruit and vegetables as well as involving parents, teachers and peers also improves the results of school-based interventions. Involving parents is of great importance since parental intakes, encouragement and home availability of fruit and vegetables are factors with strong influence on children’s consumption (35). Active encouragement by food-service personnel in school canteens, training and involvement of peer leaders and the use of cartoon characters are as well positive elements in fruit and vegetable intervention programmes for children. Making fruit and vegetable messages a part of existing school subjects may also help18, 33.
In fruit and vegetable interventions for adults the strategies with the greatest impact on intake have included some kind of face-to-face counselling. The problem is that individual approaches are very resource-demanding and therefore hardly applicable in population-wide interventions. Individually-tailored printed or computer-based information, may serve as a good alternative to face-to-face counselling as the messages can be adopted to individual needs, attitudes etc.
Adults are often targeted at the workplace. To be effective, such interventions must consist of a number of different strategies, which often makes them costly. Collaboration with the company managers as well as with other stakeholders is also necessary to make workplace interventions successful. It appears to be difficult to recruit and retain participants in such projects, which might be the reason why, so far, the success of worksite interventions has been limited. The time demands and efforts required from workers and managers are considered barriers to their success18. Another important strategy is to establish supportive structures that will sustain efforts in the long run. Involving workers in planning and running of the programme, addressing the existing barriers and integrating the workers’ broader social context by targeting also their families, neighbourhoods etc. are other means to achieve better outcome34.
There are also broader, community-based fruit and vegetable programmes. The effectiveness of these has often been difficult to assess18. However, some elements for the success of community-based interventions have been identified. As for school- and worksite-based programmes a multi-component strategy seems to be the way to go for increasing intakes of fruit and vegetables18, 36. Clear fruit and vegetable messages, involvement of the family and using a theoretical framework as the basis of the intervention have also been demonstrated to be advantageous. Flexibility and participation of the target population in the intervention design also promotes better outcome, and the duration of the programme is important36.
The average effect obtained by interventions aiming at increasing adults’ fruit and vegetable consumption is around half a serving more per day18.
5. Initiatives across Europe
National nutrition policies
Given the contribution of low fruit and vegetable consumption to the burden of disease, action at national level towards increasing fruit and vegetable consumption has become common.
Most Western and Nordic European countries address insufficient intakes in their national nutrition policies and include fruit and vegetable promotion as one of their objectives. Equally, in Southern European countries, despite having intake levels closer to the recommended amounts, fruit and vegetable goals are part of their nutrition policies8. One example of strategies implemented at national level to enhance fruit and vegetable intake of the general population is the 5-a-day campaign, which is run in a number of European countries. In Denmark, where fruit and vegetable intake is also rather low, there is a 6-a-day campaign11.
The EU School fruit scheme
Increasing fruit and vegetable consumption is one of the goals identified in the European Commission's White Paper on Nutrition from 200737, which among other things addresses childhood obesity in Europe. In the concluding remarks of the White Paper, it is stated that a 'School Fruit Scheme would be a step in the right direction'. This has become reality and an EU-wide scheme to provide fruit and vegetables to school children started in the school year 2009/201028.
The 'School Fruit Scheme” takes into account several of the aspects identified as factors of success in other school-based programmes: it is a long-term programme providing fruit and vegetables for free, encouraging children to make fruit and vegetable consumption part of their lifestyle. It is involving children, teachers and parents. Additionally it will involve partners from public health, education and agriculture sectors and its effectiveness will continuously be monitored to allow improvements of the strategies throughout the programme (28).
The “School Fruit Scheme' is partly financed by the European Commission, but participating countries have to contribute as well. The funds from the Commission are aimed at encouraging additional activities, within or in addition to existing programmes. Besides provision of fruit and vegetables, awareness-raising and educational activities will take place to teach children the importance of good eating habits28.
National initiatives promoting fruit and vegetable consumption
There are already national initiatives in place aiming at increasing fruit and vegetable consumption in children28. Examples of such programmes are:
6. Fruit and vegetable consumption in Europe – summary
Despite various issues limiting the possibilities to assess fruit and vegetable intake on a European level, there are some consistent findings on consumption patterns in Europe:
- A majority of Europeans do not reach WHO recommendations on vegetables and fruit consumption (≥ 400 g per day).
- Consumption varies, with higher intakes in Southern compared to the Northern regions.
Fruit and vegetable consumption patterns are determined by a wide range of factors:
- Age, gender and socio-economic status – the influence of these seems to be mediated by other factors, e.g. food preferences, knowledge, skills and affordability.
- Personal factors, e.g. self-efficacy, self-esteem, perceived time constraints, personal values and perception of the healthiness of one’s own diet.
- Social environment - social support, social cues and meal patterns and atmosphere at meal time etc. influence food preferences and attitudes towards fruit and vegetables, thus determining our food choices and dietary behaviours.
Increasing fruit and vegetable consumption is a priority for international organisations as well as national governments, which has resulted in many initiatives. There are certain elements that have been shown to improve the results of intervention programmes. Among these are:
- Multi-component strategies addressing both personal factors such as knowledge and skills, as well as the physical and social environment by e.g. increasing the availability of fruits and vegetables and addressing attitudes and practices not only in the defined target group, but also in their social networks.
- Support and involvement of decision makers and representatives of the target population in programme planning and running in order to create support and ownership and to develop strategies that are accepted by the target group.
- Programme duration of at least 12 months.
- Mirmiran P, et al. (2009). Fruit and vegetable consumption and risk factors for cardiovascular disease. Metabolism 58(4):460-468.
- Hung HC, et al. (2004). Fruit and vegetable intake and risk of major chronic disease. Journal of the National Cancer Institute 96(21):1577-1584.
- Rissanen TH, et al. (2003). Low intake of fruits, berries and vegetables is associated with excess mortality in men: the Kuopio Ischaemic Heart Disease Risk Factor (KIHD) Study. Journal of Nutrition 133(1):199-204.
- Harding AH, et al. (2008). Plasma vitamin C level, fruit and vegetable consumption, and the risk of new-onset type 2 diabetes mellitus: the European prospective investigation of cancer--Norfolk prospective study. Archives of Internal Medicine 168(14):1493-1499.
- World Cancer Research Fund (WCRF) Panel (2007). Food, Nutrition, Physical Activity, and the Prevention of Cancer: A Global Perspective. World Cancer Research Fund: Washington, DC
- European Commission (2006). Health and food. Special Eurobarometer 246 / Wave 64.3 – TNS Opinion & Social. European Commission: Brussels.
- Yngve A, et al. (2005). Fruit and vegetable intake in a sample of 11-year-old children in 9 European countries: The Pro Children Cross-sectional Survey. Annals of Nutrition and Metabolism 49:236-245.
- World Health Organization (2006). Comparative analysis of nutrition policies in the WHO European Region. WHO: Copenhagen, Denmark.
- European Food Safety Authority (2010). The EU Menu. [accessed March 2010].
- World Health Organization (2008). WHO European Action Plan for Food and Nutrition 2007-2012. WHO: Copenhagen, Denmark.
- Elmadfa I, et al. (2009). European Nutrition and Health Report 2009. Forum Nutrition 62:1-405.
- The DAFNE databank. [accessed March 2010]
- European Food Safety Authority (2008). Concise Database summary statistics - Total population. [accessed March 2010]
- Pomerleau J, et al. (2003). The burden of disease attributable to nutrition in Europe. Public Health Nutrition 6:453-461.
- World Health Organization (2009). Global Health Risks Summary Tables. WHO: Geneva, Switzerland.
- World Health Organization(2009). Global Health Risks. WHO: Geneva, Switzerland.
- Dibsdall LA, et al. (2003). Low-income consumers' attitudes and behaviour towards access, availability and motivation to eat fruit and vegetables. Public Health Nutrition 6:159-168.
- World Health Organization (2005). Effectiveness of interventions and programmes promoting fruit and vegetable intake. WHO: Geneva, Switzerland.
- Kamphuis CB, et al. (2007). Perceived environmental determinants of physical activity and fruit and vegetable consumption among high and low socioeconomic groups in the Netherlands. Health Place 13:493-503.
- Elfhag K, et al. (2008). Consumption of fruit, vegetables, sweets and soft drinks are associated with psychological dimensions of eating behaviour in parents and their 12-year-old children. Public Health Nutrition 11:914-923.
- Bere E, et al. (2008). Why do boys eat less fruit and vegetables than girls? Public Health Nutrition 11:321-325.
- Friel S, et al. (2005). Who eats four or more servings of fruit and vegetables per day? Multivariate classification tree analysis of data from the 1998 Survey of Lifestyle, Attitudes and Nutrition in the Republic of Ireland. Public Health Nutrition 8:159-169.
- Rasmussen M, et al. (2006). Determinants of fruit and vegetable consumption among children and adolescents: a review of the literature. Part I: Quantitative studies. International Journal of Behavioural Nutrition and Physical Activity 3:22.
- Cooke LJ, et al. (2004). Demographic, familial and trait predictors of fruit and vegetable consumption by pre-school children. Public Health Nutrition 7:295-302.
- Bere E, Klepp KI. (2004). Correlates of fruit and vegetable intake among Norwegian schoolchildren: parental and self-reports. Public Health Nutrition 7:991-998.
- Shaikh AR, et al. (2008). Psychosocial Predictors of Fruit and Vegetable Consumption in Adults: A Review of the Literature. American Journal of Preventive Medicine 34:535-543.e11.
- Pearson N, et al. (2009). Family correlates of fruit and vegetable consumption in children and adolescents: a systematic review. Public Health Nutrition 12:267-283.
- European Commission. DG Agriculture and Rural Development. School Fruit Scheme. [accessed July 2011]
- Benton D. (2004). Role of parents in the determination of the food preferences of children and the development of obesity. International Journal of Obesity Related Metabolic Disorders 28:858-869.
- Havermans RC, et al. (2010). Increasing Children's Liking and Intake of Vegetables through Experiential Learning. In: Bioactive Foods in Promoting Health. pp. 273-283. [RR Watson and VR Preedy, editors]. San Diego: Academic Press.
- Pollard J, et al. (2002). Motivations for fruit and vegetable consumption in the UK Women's Cohort Study. Public Health Nutrition 5:479-586.
- Baker AH, Wardle J. (2003). Sex differences in fruit and vegetable intake in older adults. Appetite 40:269-275.
- Knai C, et al. (2006). Getting children to eat more fruit and vegetables: A systematic review. Preventive Medicine 42:85-95.
- Sorensen G, et al. (2004). Worksite-based research and initiatives to increase fruit and vegetable consumption. Preventive Medicine 39 Suppl 2:S94-100.
- Kristjansdottir AG, et al. (2009). Children's and parents' perceptions of the determinants of children's fruit and vegetable intake in a low-intake population. Public Health Nutrition 12:1224-1233.
- Ciliska D, et al. (2000). Effectiveness of Community-Based Interventions to Increase Fruit and Vegetable Consumption. Journal of Nutrition Education 32:341-352.
- European Commission (2007). White Paper on A Strategy for Europe on Nutrition; Overweight and Obesity related health issues. COM(2007) 279 final , 30 May 2007. European Commission: Brussels. | <urn:uuid:2a23b684-b0c0-4a6f-81a0-567adc4bd8da> | CC-MAIN-2019-47 | https://www.eufic.org/en/healthy-living/article/fruit-and-vegetable-consumption-in-europe-do-europeans-get-enough | s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496670743.44/warc/CC-MAIN-20191121074016-20191121102016-00297.warc.gz | en | 0.945536 | 6,741 | 3.421875 | 3 |
Friday, 30 November 2018
Thursday, 29 November 2018
Wednesday, 28 November 2018
Tuesday, 27 November 2018
Monday, 26 November 2018
"We begin with the story of Nana Asma’u, the daughter of Uthman don Fodio, who was not only a renowned scholar of her time, but a poet, a political and social activist, and a creative intellectual. She is considered to be one of the greatest women of 19th century Islamic communities. She was born in 1793 in modern-day Nigeria. A princess with an impressive lineage, she was named after a hero in Islamic heritage—Asma, the daughter of Abu Bakr, who was a strong woman in her support of Islam. She was raised in a supportive Islamic household, having not only memorized the Qur’an, but extensively learned the Islamic sciences and four languages as well.
Asma’u believed in women having a role in society and she led the women of her time by example throughout her life. One of her greatest achievements was compiling the extensive collection of writings of her father after he passed away when she was 27. The degree of respect the scholarly community had for Asma’u is seen here because they chose her to complete such a monumental task. Not only did this job require someone trustworthy, but also someone who was familiar with his writings and was well-versed in the Islamic sciences.
When she was a mother of two and pregnant with her third child, Asma’u completed the translation of the Qur’an in her native tongue and also translated her father’s work into the various dialects of the community. This shows her concern for her community and her desire to bring the knowledge of the Qur’an and Islam to her people.
Asma’u saw a dire need for the teachings of Islam to reach the women in her community and beyond the Sokoto region. She saw that women were absent from the circles of knowledge and stayed in their homes as they tended to their familial duties. Asma’u came up with a brilliant idea to not only teach these women but to teach them in the comfort of their homes. It was then that she gathered knowledgeable women in her community and trained them as teachers. This group, known as jajis, traveled to neighboring communities to bring Islamic knowledge to secluded women. This movement was called the Yan-taru movement, which means “those who congregate together” and “sisterhood”. Asma’u taught the jajis to use lesson plans, poetry, and creative mnemonic devices in their teachings.
Nana Asma’u, by the grace and guidance of Allah (swt), revolutionized the way her community learned Islam. She brought the knowledge of the religion to the people in an easy to remember fashion and wrote in their language. Her legacy is a legacy of scholarship and activism, and her name is still used today in West Africa."
From the Muslimah's Renaissance page on FB.
Friday, 23 November 2018
Wednesday, 21 November 2018
Tuesday, 20 November 2018
In a bid to restore the 'honour' of their tribe, two men in Bhong area of Rahim Yar Khan allegedly murdered two sisters after they had reportedly been abducted and subjected to rape by some influential landlords.
According to local police, the deceased sisters were first abducted and raped by a group of men and later fell victim to their own relatives' wrath when they returned home.
The abduction purportedly took place in Basti Gulab Khan of Mouza Muhammad Murad Dahir area on Sunday when Shah Mureed Kharu and Ali Dost Kharu, the two accused, allegedly kidnapped the siblings, one of whom was 18, and the other 20.
The victims' father claimed in the first information report (FIR) that the suspects had abducted his daughters and abused them before releasing them a day later.
The father said that he was also fearful that the members of their tribe could kill his daughters upon their return for having "smeared the tribe's name".
When the girls returned home following their ordeal, the father said, their uncle Muhammad Saleem and another relative Shah Nawaz allegedly strung a noose around their necks and dragged them to the nearby fields.
The father said that by the time he reached the crime scene, the girls had already been murdered.
The dead bodies were shifted to Sadiqabad THQ Hospital for autopsies.
Monday, 19 November 2018
Friday, 16 November 2018
Wednesday, 14 November 2018
Monday, 12 November 2018
Friday, 9 November 2018
Thursday, 8 November 2018
In April 2018 – within The Comment Awards’ one year period of consideration for writing – Melanie Phillips wrote an extraordinary article encapsulating this ideology. In it, she threatened that the entire Muslim world would be “destroyed”.
The piece, written in the form of “an open letter to the Muslim world” (archived here) addresses all Muslims and ‘Islam’ as a homogenous bloc of barbarians comprising a wholesale obstacle and inherent threat, by its innate nature, to Western “progress and modernity” – by which the Muslim world will eventually be destroyed.
In this regard, there would be little if anything in the letter that Spencer and his ilk would disagree with (even Phillip’ staunch pro-Israel line would be endorsed by Spencer who supports Israel’s new nation-state law and sees Israel as an example of the kind of ‘ethno-state’ he wants to create in the US).
But Phillips’ writing is careful. Like Islamist hate preacher Anjem Choudary, whose public pronouncements meticulously incited to hatred but only barely within the letter of the law allowing him to continue for decades, Phillips’ language seems designed to be vague enough to slip into legal ‘acceptability’.
You people like killing Jews
The letter begins, “Dear Muslim world”, and moves rapidly into arguing that the entirety of the latter is engaged in a conspiratorial war on Western modernity aimed at destroying “the Jews”, particularly those trying to live in Israel. Phillips blames not Hamas, but the “Muslim world” as a whole for killing 26,000 Jews, including military casualties:
“More than 26,000 dead—with most of the military casualties consisting of Israel’s precious young who must be conscripted to defend their country—purely because there are people determined to prevent the Jews from living in their own ancestral homeland. But you know all about that because you are the people killing them.”
Note the language. The “Muslim world” is equated with a whole “people”, who are de facto culpable in trying to destroy the Jewish people:
“You are the people who have been trying to destroy the Jewish homeland for the better part of a century. Look how hard you’ve tried. You’ve used war. You’ve used terrorism. You’ve used the Palestinian Arabs as pawns. You’ve used the diplomatic game. You’ve used economic boycotts.”
You people have a culture of honour and shame
Phillips goes on, attributing to the “Muslim world” as a people an inherent anti-Semitism rooted not in violent extremism, but in core Islamic teachings—a view held by the likes of ISIS and endorsed by far-right zealots such as Spencer and Tommy Robinson. Not only that, but she insists that the Muslim people suffer wholesale from a “culture of honor and shame” which further reinforces this deep-rooted “hatred of the Jews” (for now, we will merely remark in passing that this flies in the face of the historical record and Islam’s most authentic theological readings):
“We understand why you hate Israel. Paranoid hatred of the Jews is embedded in your religious texts. Moreover, since you believe that any land ever occupied by Muslims becomes Muslim land in perpetuity—and since the very idea of the Jews being your equals in ruling their own land is anathema to you—your culture of honor and shame means that you cannot accept a Jewish state in a region you claim as your own.”
You people know that your religion is not about peace
Phillips goes on to equate her barbaric readings of Islam with a “holy war” being waged by the Muslim world on the West:
“For all the terrible violence and mayhem you have unleashed in the cause of Islamic holy war, your purpose is ultimately defensive. You realize that, in its freedom for the individual and particularly for women, modernity poses a mortal threat to Islam. Unlike the ignorant West, you know that Islam does not mean peace. It means submission. Modernity means submission can no longer be enforced. Which is why, in its seventh-century form at least, Islam is on the way out.”
You people want to destroy the Jews
She does pause to acknowledge that there are an “increasing number of the Arab young, who are on Twitter and Facebook” who “don’t want to fight the unending battles of the seventh century.” But she then goes on to racialise the Israel-Palestine conflict and demonise all Palestinians wholesale:
“Of course, none of this means the Palestinian Arabs are about to abandon their war to destroy Israel… But the unstoppable force of modernity is meeting the immoveable object of Islam, and modernity will win.”
The problem here is not with Phillips’ critiquing Islam. Even if she is completely wrong, which arguably she is, the problem is that she racialises her barbaric depiction of Islam by constructing Muslims – literally addressed as ‘you people’ – as largely intentional vehicles for this inherent barbarism:
“Which is why you believe you have to stop modernity. Which is why you are at war with the West. And which is also why you see the Jews as your enemy of enemies because you believe they are behind absolutely everything to do with modernity. Destroy the Jews, you imagine, and you will defeat modernity.”
But it’s okay because you people will be destroyed by Israel and Western modernity
She closes her piece with the following genocidal double threat:
“If you finally were to decide to end your war against us in Israel, finally decide that you love your children more than you hate us, finally decide that instead of trying to destroy Israel you want it to help you accommodate to modernity, you will find our hands extended in friendship. But if you try to remove us from the earth, we will destroy you.
Dear Muslim world, wake up and smell the coffee. The Jewish people has defied all the odds over and over again, and will continue to do so. You may break our hearts by killing our loved ones, but you won’t break us. Progress and modernity will destroy you instead.”
One threat is conditional and tangibly military (if the Muslim world doesn’t end the war on Israel, it will be destroyed by Israel); and the other is unconditional (either way, the Muslim world will be destroyed by Western progress). Read plainly, Phillips’ reference to both physical and cultural forms of destruction of the entire Muslim world has deeply unnerving and seemingly genocidal connotations.
Imagine if I had written similar words as an ‘open letter to the Jewish world’, threatening that either Muslims would ‘destroy the Jewish world’ if it did not cease its war on Muslims, or ‘the Jewish world’ would be inevitably ‘destroyed’ by the advance of superior Muslim culture. I would be seen, rightly, as a not-so-closet Nazi.
For those that like to assume there are no consequences for such language, this is worth bearing in mind when considering that far-right terrorist Anders Breivik was an avid fan of Phillips, and quoted her approvingly in his manifesto.
Phillips is not doing journalism with pieces like this. She is simply spouting the same brand of bullshit that gives the Spencers, Robinsons and Breiviks of this world a hard on.
I’m not “offended” by this bullshit – I am maligned, marginalised and demonised by this bullshit.
Wednesday, 7 November 2018
Tuesday, 6 November 2018
Their open fondness is built on two premises:“If Trump is pro-Israel, then he can’t be an anti-Semitic, white nationalist” is the logic that underpins this new right-wing orchestrated talking point, but anyone who follows the machinations of the Israel Lobby and its cadre of Zionist organizations and individuals knows only too well that far-right, white nationalist, and even avowed neo-Nazis have long been courted as allies in their fight to permanently erase Palestinians from the ever expanding Israeli controlled territory.
In fact, Israel not only weaponizes anti-Semitism to provide cover for its brutal security state apparatus, but also it was European anti-Semitism that created the Israeli state in the first place. When Theodor Herzi, the founding architect of the “Jewish state,” brought forward his idea for creating a state in Palestine for exiled European Jews in 1896, prominent Jewish intellectuals dismissed his idea, claiming it undermined Jews who had assimilated successfully in European societies.
Dejected but not defeated, Herzi enlisted the help of the Chaim Weizmann, a prominent British Jewish figure, who, in turn, won support for Herzi’s proposed white European settler colonial project by recruiting the British Foreign Secretary Arthur Balfour, an avowed white supremacist.
“We have to face the facts,” Balfour said. “Men are not born equal, the white and black races are not born with equal capacities: they are born with different capacities which education cannot and will not change.”
Balfour also enacted anti-immigration laws that were designed to restrict and prevent Jews migrating to Britain. In many ways, Balfour’s ban on Jews was the 100 year precedent to Trump’s ban on Muslims.
In November 1917, the Balfour Declaration laid the groundwork for the future state of Israel, stating that, “His Majesty’s government view with favour the establishment in Palestine of a national home for the Jewish people, and will use their best endeavors to facilitate the achievement of this object.”
Balfour thus became a hero among Zionist Jews, who were only to happy to ignore his demonstrable record of avowed white supremacy and anti-Semitism, which brings us to where we are with Trump and white supremacists today.
During the past year, Trump has deployed anti-Semitic tropes, retweeted anti-Semitic posts, and has given cover to anti-Semitic, neo-Nazi agitators, going so far to label thousands of Nazi flag waving, “Sieg Heil” saluting thugs as “very fine people.”
In return, America’s Jew haters have praised Trump for his “honesty” and his defence of white America. What is telling, however, is the same anti-Semitic hate groups and individuals also support Trump’s decision to validate Israel’s war crime, recognizing Jerusalem as Israel’s capital.
You see, while Nazis and white supremacists might still hate Jews, they simultaneously also love the apartheid Israeli state.
Their open fondness is built on two premises:
Muslims have replaced Jews as the number one target for European white supremacists, and Israel’s abusive mistreatment of a majority Muslim population inspires them greatly, and
They love that Israel is everything they dream of: a fascist ethnocratic brute that suppresses a non-white indigenous population.
Across Europe and the US, Israeli flags now wave comfortably alongside Nazi and white supremacist banners. When 60,000 ultranationalists marched on Poland’s capital last month, Israeli flags were there. When neo-Nazis marched on Charlottesville, Virginia, Israeli flags were neatly nestled among flags of the Confederacy. Paradoxically, however, anti-Semitism remains at the heart of the platforms of all white supremacy groups that turned up to either.
Zionism and white supremacy are not strange bedfellows, but natural allies, according to Nada Elia, adding that, “Both represent a desire to establish and maintain a homogeneous society that posits itself as superior, more advanced, more civilised than the “others” who are, unfortunately, within its midst, a “demographic threat” to be contained through border walls and stricter immigration law. American fascism, then, is holding up a mirror to Zionism.”
The intersectionality between anti-Semitism and pro-Israel fervour is no accident. It was a strategy hatched and formulated by far-right, white nationalists in Britain in the late 90s before gaining traction in the aftermath of the 9/11 attacks.
When Nick Griffin, a Holocaust denier, took the helm of the far right, ultranationalist British National Party (BNP) in 1999, he shelved his public anti-Semitism, replacing it with open hostility towards Muslim.
Having never made a public statement about Islam or Muslims previously, Griffin suddenly attacked Islam as a “vicious, wicked faith,” while also claiming the “Islamification” of his country had taken place via “rape.”
From this moment forth, anti-Semitic political entrepreneurs on the far right began adopting pro-Israel talking points to mask their naked anti-Muslim bigotry, and Griffin admitted as much when he penned a 2007 essay that stated the motives behind substituting the far-right party’s anti-Semitism with Islamophobia: “It stands to reason that adopting an ‘Islamophobic’ position that appeals to large numbers of ordinary people?—?including un-nudged journalists?—?is going to produce on average much better media coverage than siding with Iran and banging on about ‘Jewish power’, which is guaranteed to raise hackles of virtually every single journalist in the western world.”
Zionists of all stripes teamed up and leveraged the political mobilizing power of anti-Semitic far right groups. NYU adjunct professor Arun Kundnani noted that by 2008, “a group of well-funded Islamophobic activists had coalesced” in order to demonize Muslims and Islam for the purpose of gaining support for Israeli policies of occupation, segregation and discrimination from far-right voters, white supremacists, and anti-Semites.
These pro-Israel individuals began positioning themselves as “counter-jihadists,” and, in turn, became darlings of the far right media landscape, with some even making their way into Trump’s foreign policy circle. Frank Gafney, for instance, who warned the US government had been taken over by the Muslim Brotherhood, continues to have Trump’s ear.
So, no?—?Trump’s move on Jerusalem does nothing to assuage his prior expressed sympathies with anti-Semitic white supremacists. It only reminds us how deeply Zionism and white supremacy are woven into the DNA of two respective white settler colonial states: Israel and the United States.
Monday, 5 November 2018
Friday, 2 November 2018
Thursday, 1 November 2018
Earlier this year, one of the victims of the Rotherham grooming gang anonymously wrote a very informed and intelligent piece on this issue for the Independent.
In it, she said that grooming gangs are upheld by religious extremism and even went so far as to compare them to terrorist networks. But even she - having very good reason to allow herself to be tempted to take the racist approach - condemned the work of people like Stephen Yaxley-Lennon aka Tommy Robinson , saying he doesn't speak for her, and said that she and other survivors are 'uncomfortable' with the EDL's protests.
In her own words, she 'experienced horrific, religiously sanctioned sexual violence and torture' and described how her main abuser beat her as he quoted scriptures from the Quran to her. And in Oxford, it was said that sexual assaults were particularly sadistic.
But, despite what some right-wing media and extremists want you to think, the fact is this isn't actually the case with every Asian grooming gang in the news.
It's a point that the prosecutor of the Rochdale grooming gang, Nazir Afzal, has already made.
Speaking about the case in an interview with The Guardian in 2014, he said:
There is no religious basis for this. These men were not religious.
"Islam says that alcohol, drugs, rape and abuse are all forbidden, yet these men were surrounded by all of these things. So how can anyone say that these men were driven by their religion to do this kind of thing?
"They were doing this horrible, terrible stuff, because of the fact that they are men. That’s sadly what the driver is here. This is about male power. These young girls have been manipulated and abused because they were easy prey for evil men."
In an interview with the New Statesman earlier this year, he described the ethnicity of street groomers as 'an issue', but gave more weight to the night-time economy that they often work in, the availability and vulnerability of the young girls who are often around it and the community's silence and lack of action to tackle the problem.
And I believe, based on the evidence heard in court, that what he said is also true of the Huddersfield grooming gang.
One of the victims in Huddersfield was Asian - something that also happened cases such as Rochdale and Newcastle, but is not often reported by the media.
The ringleader, Amere Singh Dhaliwal, converted to Sikhism after the abuse. He wears a turban, carried a kirpan in it and swore on the Guru Granth Sahib before taking to the witness stand. Raj Singh Barsran, who hosted many of the 'parties' in his house, is also a Sikh.
We shouldn't focus on race and religion and the discourse should be about something much more important - for a start, the causes of hebephilia and ephebophilia. | <urn:uuid:a8d35b0f-a16f-453b-a759-2949744ddc19> | CC-MAIN-2019-47 | https://blog.islamawareness.net/2018/11/ | s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496668534.60/warc/CC-MAIN-20191114182304-20191114210304-00098.warc.gz | en | 0.969169 | 4,680 | 2.53125 | 3 |
New Zealand is a sovereign island country in the southwestern Pacific Ocean. The country geographically comprises two main landmasses, the North Island and the South Island, and around 600 smaller islands. New Zealand is situated some 2,000 kilometres east of Australia across the Tasman Sea and 1,000 kilometres south of the Pacific island areas of New Caledonia and Tonga; because of its remoteness, it was one of the last lands to be settled by humans. During its long period of isolation, New Zealand developed a distinct biodiversity of animal and plant life. The country's varied topography and its sharp mountain peaks, such as the Southern Alps, owe much to tectonic uplift and volcanic eruptions. New Zealand's capital city is Wellington. Sometime between 1250 and 1300, Polynesians settled in the islands that were later named New Zealand and developed a distinctive Māori culture. In 1642, Dutch explorer Abel Tasman became the first European to sight New Zealand. In 1840, representatives of the United Kingdom and Māori chiefs signed the Treaty of Waitangi, which declared British sovereignty over the islands.
In 1841, New Zealand became a colony within the British Empire, and in 1907 it became a dominion. Today, the majority of New Zealand's population of 4.9 million is of European descent. Reflecting this, New Zealand's culture is derived from Māori and early British settlers, with recent broadening arising from increased immigration. The official languages are English, Māori and NZ Sign Language, with English being dominant. A developed country, New Zealand ranks highly in international comparisons of national performance, such as quality of life, education, protection of civil liberties and economic freedom. New Zealand underwent major economic changes during the 1980s, which transformed it from a protectionist to a liberalised free-trade economy. The service sector dominates the national economy, followed by the industrial sector and agriculture. Nationally, legislative authority is vested in an elected, unicameral Parliament, while executive political power is exercised by the Cabinet, led by the prime minister, Jacinda Ardern.
Queen Elizabeth II is the country's monarch and is represented by a governor-general, currently Dame Patsy Reddy. In addition, New Zealand is organised into 11 regional councils and 67 territorial authorities for local government purposes. The Realm of New Zealand also includes Tokelau. New Zealand is a member of the United Nations, the Commonwealth of Nations, ANZUS, the Organisation for Economic Co-operation and Development, ASEAN Plus Six, Asia-Pacific Economic Cooperation, the Pacific Community and the Pacific Islands Forum. Dutch explorer Abel Tasman sighted New Zealand in 1642 and named it Staten Land "in honour of the States General". He wrote that "it is possible that this land joins to the Staten Land but it is uncertain", referring to a landmass of the same name at the southern tip of South America, discovered by Jacob Le Maire in 1616. In 1645, Dutch cartographers renamed the land Nova Zeelandia after the Dutch province of Zeeland. British explorer James Cook subsequently anglicised the name to New Zealand. Aotearoa is the current Māori name for New Zealand.
It is unknown whether Māori had a name for the whole country before the arrival of Europeans, with Aotearoa originally referring to just the North Island. Māori had several traditional names for the two main islands, including Te Ika-a-Māui for the North Island and Te Waipounamu or Te Waka o Aoraki for the South Island. Early European maps labelled the islands North, Middle and South. In 1830, maps began to use North and South to distinguish the two largest islands, and by 1907 this was the accepted norm. The New Zealand Geographic Board discovered in 2009 that the names of the North Island and South Island had never been formalised, and names and alternative names were formalised in 2013. This set the names as North Island or Te Ika-a-Māui, and South Island or Te Waipounamu. For each island, either its English or Māori name can be used. New Zealand was one of the last major landmasses settled by humans. Radiocarbon dating, evidence of deforestation and mitochondrial DNA variability within Māori populations suggest New Zealand was first settled by Eastern Polynesians between 1250 and 1300, concluding a long series of voyages through the southern Pacific islands.
Over the centuries that followed, these settlers developed a distinct culture now known as Māori. The population was divided into iwi and hapū, which would sometimes cooperate, sometimes compete and sometimes fight against each other. At some point a group of Māori migrated to Rēkohu, now known as the Chatham Islands, where they developed their distinct Moriori culture. The Moriori population was all but wiped out between 1835 and 1862, largely because of the Taranaki Māori invasion and enslavement of the 1830s, although European diseases also contributed. In 1862 only 101 survived, and the last known full-blooded Moriori died in 1933. The first Europeans known to have reached New Zealand were the crew of Dutch explorer Abel Tasman, who arrived in 1642.
RNZ International, or Radio New Zealand International, sometimes abbreviated to RNZI, is a division of Radio New Zealand and the official international broadcasting station of New Zealand. It broadcasts a variety of news, current affairs and sports programmes in English, and news in seven Pacific languages. The station's mission statement requires it to promote and reflect New Zealand in the Pacific, and better relations between New Zealand and Pacific countries. As the only shortwave radio station in New Zealand, RNZ International broadcasts to several island nations. It has studios in Radio New Zealand House, Wellington, and a transmitter at Rangitaiki in the middle of the North Island. Its broadcasts cover from East Timor in the west across to French Polynesia in the east, covering all South Pacific countries in between. The station targets Micronesia, Papua New Guinea, Samoa, the Cook Islands, Solomon Islands and Tonga during a 24-hour rotation. The signal can be heard in Europe and North America.
RNZ International was launched in 1948 as Radio New Zealand, a subsidiary of what was then the New Zealand Broadcasting Corporation. It utilised two 7.5 kW transmitters at Titahi Bay, left behind by the US military during World War II. It later closed, before reopening in 1976 under the foreign policy of the third Labour government. From 1987, the Government faced growing pressure to have a more active foreign policy towards the Pacific region. It upgraded the station, installed a new 100 kW transmitter and re-launched it as Radio New Zealand International on the first day of the Auckland 1990 Commonwealth Games. The station adopted new digital technology and launched a website in 2000. In 1992, Johnson Honimae was fired as the Solomon Islands Broadcasting Corporation's head of current affairs over his work as a freelance reporter in Bougainville for RNZI and other international media outlets. SIBC general manager Patterson Mae was accused of undermining the principles of press freedom, and resigned as president of the regional journalism body, the Pacific Islands News Association.
In 1998, and again in 2000, RNZI won the Commonwealth Broadcasting Association's Rolls-Royce Award for Excellence. At a function of the Association for International Broadcasting in London in November 2007, RNZI received the International Radio Station of the Year award ahead of the BBC World Service. The association praised the station for what it described as its ability and clarity of vision, and for delivering something it said was valued by audiences throughout the region. RNZI also won the award for Most Innovative Partnership. As of 2015, RNZI has 13 staff. These include manager Linden Clark, technical manager Adrian Sainsbury, news editor Walter Zweifel and deputy news editor Don Wiseman. Myra Oh, Colette Jansen, Damon Taylor, Dominic Godfrey and Jeremy Veal serve as technical producers and continuity announcers.
According to RNZ, "For now, the RNZI brand will continue to be maintained on-air through our international service, but domestically it is now known as RNZ Pacific." Aside from Radio Australia, RNZI is the only international state-owned public broadcaster covering the Pacific region. Its news service focuses on South Pacific countries, includes news bulletins in eight languages; the station's reporters include Johnny Blades, Sally Round, former Pacific Media Centre editor Alex Perrottet, Moera Tuilaepa-Taylor, Indira Moala, Koroi Hawkins, Koro Vaka'uta, Leilani Momoisea, Amelia Langford, Bridget Tunnicliffe, Mary Baines, Jenny Meyer and The Wireless contributor Jamie Tahana. Vinnie Wylie heads the station's sports coverage, freelancers are used for on-the-ground reporting; the station's news service focuses on news that relates to New Zealand, ongoing stories like natural disasters and political crises. It predominantly cites Government and opposition leaders and the spokespeople of non-government organisations and government departments.
Marshall Islands journalist Giff Johnson is an RNZI correspondent, and figures such as World Bank regional director Franz Dreez-Gross and Victoria University academic John Fraenkel are interviewed for stories. RNZI covers the Papua conflict and interviews exiled Koteka tribal leader Benny Wenda on his visits to New Zealand. It has reported on Vanuatu's parliamentary debate on the conflict, Indonesian estimates of the death toll and West Papua National Liberation Army claims of militant arrests. The station has interviewed members of the Melanesian Spearhead Group over the army's bid to join the group; however, the station does not have any reporters on the ground. The station provides ongoing coverage of several regional issues, including climate change, rapid emigration, LGBT rights in Oceania, the development of Pacific tax havens and the growing influence of China. It allowed other media to redistribute its ongoing coverage of Fijian politics after the 2000 Fijian coup d'état, and it has covered the transition to independence in East Timor and political stability in the Solomon Islands.
RNZI gives greater air time to national news stories from South Pacific countries than New Zealand's other mainstream and Pacific media outlets do. For instance, during March 2013 it covered the constitutional crisis in Nauru, video of alleged torture of prisoners by Fijian government officials, and a World Bank grant to the Samoan government. RNZI produces most of its own programming, including regional current affairs, Pacific business and news bulletins in various languages; some local Pacific Island radio stations rebroadcast selected items.
Auckland is a city in the North Island of New Zealand. It is the largest urban area in the country, with an urban population of around 1,628,900. It is located in the Auckland Region, the area governed by Auckland Council, which includes outlying rural areas and the islands of the Hauraki Gulf, resulting in a total population of 1,695,900. A diverse and multicultural city, Auckland is home to the largest Polynesian population in the world. The Māori-language name for Auckland is Tāmaki or Tāmaki-makau-rau, meaning "Tāmaki with a hundred lovers", in reference to the desirability of its fertile land at the hub of waterways in all directions. The Auckland urban area extends to Waiwera in the north, Kumeu in the north-west and Runciman in the south. Auckland lies between the Hauraki Gulf of the Pacific Ocean to the east, the low Hunua Ranges to the south-east, the Manukau Harbour to the south-west, and the Waitakere Ranges and smaller ranges to the west and north-west. The surrounding hills are covered in rainforest and the landscape is dotted with dozens of dormant volcanic cones.
The central part of the urban area occupies a narrow isthmus between the Manukau Harbour on the Tasman Sea and the Waitematā Harbour on the Pacific Ocean. Auckland is one of the few cities in the world to have a harbour on each of two separate major bodies of water. The isthmus on which Auckland resides was first settled around 1350 and was valued for its rich and fertile land. The Māori population in the area is estimated to have peaked at 20,000 before the arrival of Europeans. After a British colony was established in 1840, William Hobson, then Lieutenant-Governor of New Zealand, chose the area as his new capital. He named the area for George Eden, Earl of Auckland, British First Lord of the Admiralty. It was replaced as the capital in 1865 by Wellington, but immigration to Auckland stayed strong, and it has remained the country's most populous city.
The University of Auckland, established in 1883, is the largest university in New Zealand. Landmarks such as the Auckland Art Gallery Toi o Tāmaki, the Harbour Bridge, the Sky Tower, many museums, parks and theatres are among the city's significant tourist attractions. Auckland Airport handles around one million international passengers a month. Despite being one of the most expensive cities in the world, Auckland is ranked third on the 2016 Mercer Quality of Living Survey, making it one of the most liveable cities; the isthmus was settled by Māori circa 1350, was valued for its rich and fertile land. Many pā were created on the volcanic peaks; the Māori population in the area is estimated to have been about 20,000 before the arrival of Europeans. The introduction of firearms at the end of the eighteenth century, which began in Northland, upset the balance of power and led to devastating intertribal warfare beginning in 1807, causing iwi who lacked the new weapons to seek refuge in areas less exposed to coastal raids.
As a result, the region had low numbers of Māori when European settlement of New Zealand began. On 27 January 1832, Joseph Brooks Weller, eldest of the Weller brothers of Otago and Sydney, bought land including the site of the modern city of Auckland, the North Shore and part of Rodney District for "one large cask of powder" from "Cohi Rangatira". After the signing of the Treaty of Waitangi in February 1840, the new Governor of New Zealand, William Hobson, chose the area as his new capital and named it for George Eden, Earl of Auckland, then Viceroy of India. The land that Auckland was established on was given to the Governor by a local iwi, Ngāti Whātua, as a sign of goodwill and in the hope that the building of a city would attract commercial and political opportunities for the iwi. Auckland was declared New Zealand's capital in 1841, and the transfer of the administration from Russell in the Bay of Islands was completed in 1842. However, Port Nicholson (Wellington) was seen as a better choice for an administrative capital because of its proximity to the South Island, and Wellington became the capital in 1865.
After losing its status as capital, Auckland remained the principal city of the Auckland Province until the provincial system was abolished in 1876. In response to the ongoing rebellion by Hone Heke in the mid-1840s, the government encouraged retired but fit British soldiers and their families to migrate to Auckland to form a defence line around the port settlement as garrison soldiers. By the time the first Fencibles arrived in 1848, the rebels in the north had been defeated. Outlying defensive towns were then constructed to the south, stretching in a line from the port village of Onehunga in the west to Howick in the east. Each of the four settlements had about 800 settlers. In the early 1860s, Auckland became a base against the Māori King Movement, and the 12,000 Imperial soldiers stationed there led to a strong boost to local commerce. This, and continued road building towards the south into the Waikato, enabled Pākehā influence to spread from Auckland. The city's population grew rapidly, from 1,500 in 1841 to 3,635 in 1845, and to 12,423 by 1864.
The growth occurred to other mercantile-dominated cities around the port and with problems of overcrowding and pollution. Auckland's population of ex-soldiers was far greater than that of other settlements: about 50 percent of the popula
Virtual International Authority File
The Virtual International Authority File is an international authority file. It is a joint project of several national libraries and operated by the Online Computer Library Center. Discussion about having a common international authority started in the late 1990s. After a series of failed attempts to come up with a unique common authority file, the new idea was to link existing national authorities; this would present all the benefits of a common file without requiring a large investment of time and expense in the process. The project was initiated by the US Library of Congress, the German National Library and the OCLC on August 6, 2003; the Bibliothèque nationale de France joined the project on October 5, 2007. The project transitioned to being a service of the OCLC on April 4, 2012; the aim is to link the national authority files to a single virtual authority file. In this file, identical records from the different data sets are linked together. A VIAF record receives a standard data number, contains the primary "see" and "see also" records from the original records, refers to the original authority records.
The data are available for research and data exchange and sharing. Reciprocal updating uses the Open Archives Initiative Protocol for Metadata Harvesting protocol; the file numbers are being added to Wikipedia biographical articles and are incorporated into Wikidata. VIAF's clustering algorithm is run every month; as more data are added from participating libraries, clusters of authority records may coalesce or split, leading to some fluctuation in the VIAF identifier of certain authority records. Authority control Faceted Application of Subject Terminology Integrated Authority File International Standard Authority Data Number International Standard Name Identifier Wikipedia's authority control template for articles Official website VIAF at OCLC
University of Auckland
The University of Auckland is the largest university in New Zealand, located in the country's largest city, Auckland. It is the highest-ranked university in the country, being ranked 85th worldwide in the 2018/19 QS World University Rankings. Established in 1883 as a constituent college of the University of New Zealand, the university is made up of eight faculties, it has more than 40,000 students, more than 30,000 "equivalent full-time" students. The University of Auckland began as a constituent college of the University of New Zealand, founded on 23 May 1883 as Auckland University College. Stewardship of the University during its establishment period was the responsibility of John Chapman Andrew. Housed in a disused courthouse and jail, it started out with 95 students and 4 teaching staff: Frederick Douglas Brown, professor of chemistry. By 1901, student numbers had risen to 156. From 1905 onwards, an increasing number of students enrolled in commerce studies; the University conducted little research until the 1930s, when there was a spike in interest in academic research during the Depression.
At this point, the college's executive council issued several resolutions in favour of academic freedom after the controversial dismissal of John Beaglehole, which helped encourage the college's growth. In 1934, four new professors joined the college: Arthur Sewell, H. G. Forder, C. G. Cooper and James Rutherford; the combination of new talent, academic freedom saw Auckland University College flourish through to the 1950s. In 1950, the Elam School of Fine Arts was brought into the University of Auckland. Archie Fisher, appointed principal of the Elam School of Fine Arts was instrumental in having it brought in the University of Auckland; the University of New Zealand was dissolved in 1961 and the University of Auckland was empowered by the University of Auckland Act 1961. In 1966, lecturers Keith Sinclair and Bob Chapman established The University of Auckland Art Collection, beginning with the purchase of several paintings and drawings by Colin McCahon; the Collection is now managed by the Centre based at the Gus Fisher Gallery.
The Stage A of the Science building was opened by Her Majesty Queen Elizabeth The Queen Mother on 3 May. In 1975-81 Marie Clay and Patricia Bergquist, the first two female professors, were appointed. Queen Elizabeth II opened the new School of Medicine Building at Grafton on 24 March 1970; the Queen opened the Liggins Institute in 2002. The North Shore Campus, established in 2001, was located in the suburb of Takapuna, it offered the Bachelor of Information Management degree. At the end of 2006, the campus was closed, the degree relocated to the City campus. On 1 September 2004, the Auckland College of Education merged with the University's School of Education to form the Faculty of Education and Social Work; the faculty is based at the Epsom Campus of the former college, with an additional campus in Whangarei. Professor Stuart McCutcheon became Vice-Chancellor on 1 January 2005, he was the Vice-Chancellor of Victoria University of Wellington. He succeeded Dr John Hood, appointed Vice-Chancellor of the University of Oxford.
The University opened a new business school building in 2007, following the completion of the Information Commons. It has gained international accreditations for all its programmes and now completes the "Triple Crown". In May 2013 the University purchased a site for new 5.2-hectare campus on a former Lion Breweries site adjacent to the major business area in Newmarket. It will provide the University with a site for expansion over the next 50 years, with Engineering occupying the first of the new faculties in 2015. In April 2016, Vice-Chancellor Stuart McCutcheon announced that University of Auckland would be selling off its Epsom and Tamaki campuses in order to consolidate education and services at the City and Newmarket campuses; the Epsom Campus is the site of the University of Auckland's education faculty while the Tamaki campus hosts elements of the medical and science faculties as well as the School of Population Health. In mid–June 2018, McCutcheon announced that the University would be closing down and merging its specialist fine arts and music and dance libraries into the City Campus' General Library.
In addition, the University would cut 100 support jobs. The Vice-Chancellor claimed that these cutbacks would save between NZ$3 million and $4 million dollars a year; this announcement triggered criticism and several protests from students. Students objected to the closure of the Elam Fine Arts Library on the grounds that it would make it harder to access study materials; some dissenters circulated a petition protesting the Vice-Chancellor's restructuring policies. Protests were held in April and June 2018. Unlike other New Zealand universities such as the University of Otago and Victoria University of Wellington, the University of Auckland has not yet divested from fossil fuels. In April 2017, more than 100 students from the Auckland University Medical Students Association marched demanding the removal of coal, o
Samoa the Independent State of Samoa and, until 4 July 1997, known as Western Samoa, is a country consisting of two main islands, Savai'i and Upolu, four smaller islands. The capital city is Apia; the Lapita people settled the Samoan Islands around 3,500 years ago. They developed Samoan cultural identity. Samoa is a unitary parliamentary democracy with eleven administrative divisions; the country is a member of the Commonwealth of Nations. Western Samoa was admitted to the United Nations on 15 December 1976; the entire island group, which includes American Samoa, was called "Navigator Islands" by European explorers before the 20th century because of the Samoans' seafaring skills. New Zealand scientists have dated remains in Samoa to about 2900 years ago; these were found at a Lapita site at Mulifanua and the findings were published in 1974. The origins of the Samoans are studied in modern research about Polynesia in various scientific disciplines such as genetics and anthropology. Scientific research is ongoing.
Intimate sociocultural and genetic ties were maintained between Samoa and Tonga, the archaeological record supports oral tradition and native genealogies that indicate inter-island voyaging and intermarriage between pre-colonial Samoans and Tongans. Notable figures in Samoan history included Queen Salamasina. Nafanua was a famous woman warrior, deified in ancient Samoan religion. Contact with Europeans began in the early 18th century. Jacob Roggeveen, a Dutchman, was the first known European to sight the Samoan islands in 1722; this visit was followed by French explorer Louis-Antoine de Bougainville, who named them the Navigator Islands in 1768. Contact was limited before the 1830s, when English missionaries and traders began arriving. Visits by American trading and whaling vessels were important in the early economic development of Samoa; the Salem brig Roscoe, in October 1821, was the first American trading vessel known to have called, the Maro of Nantucket, in 1824, was the first recorded United States whaler at Samoa.
The whalers came for fresh drinking water and provisions, they recruited local men to serve as crewmen on their ships. Christian missionary work in Samoa began in 1830 when John Williams of the London Missionary Society arrived in Sapapali'i from the Cook Islands and Tahiti. According to Barbara A. West, "The Samoans were known to engage in ‘headhunting', a ritual of war in which a warrior took the head of his slain opponent to give to his leader, thus proving his bravery." However, Robert Louis Stevenson, who lived in Samoa from 1889 until his death in 1894, wrote in A Footnote to History: Eight Years of Trouble in Samoa, "… the Samoans are gentle people." The Germans, in particular, began to show great commercial interest in the Samoan Islands on the island of Upolu, where German firms monopolised copra and cocoa bean processing. The United States laid its own claim, based on commercial shipping interests in Pearl River in Hawaii and Pago Pago Bay in Eastern Samoa, forced alliances, most conspicuously on the islands of Tutuila and Manu'a which became American Samoa.
Britain sent troops to protect British business enterprise, harbour rights, consulate office. This was followed by an eight-year civil war, during which each of the three powers supplied arms, training and in some cases combat troops to the warring Samoan parties; the Samoan crisis came to a critical juncture in March 1889 when all three colonial contenders sent warships into Apia harbour, a larger-scale war seemed imminent. A massive storm on 15 March 1889 destroyed the warships, ending the military conflict; the Second Samoan Civil War reached a head in 1898 when Germany, the United Kingdom, the United States were locked in dispute over who should control the Samoa Islands. The Siege of Apia occurred in March 1899. Samoan forces loyal to Prince Tanu were besieged by a larger force of Samoan rebels loyal to Mata'afa Iosefo. Supporting Prince Tanu were landing parties from four American warships. After several days of fighting, the Samoan rebels were defeated. American and British warships shelled Apia on 15 March 1899, including the USS Philadelphia.
Germany, the United Kingdom and the United States resolved to end the hostilities and divided the island chain at the Tripartite Convention of 1899, signed at Washington on 2 December 1899 with ratifications exchanged on 16 February 1900. The eastern island-group was known as American Samoa; the western islands, by far the greater landmass, became German Samoa. The United Kingdom had vacated all claims in Samoa and in return received termination of German rights in Tonga, all of the Solomon Islands south of Bougainville, territorial alignments in West Africa; the German Empire governed the western Samoan islands from 1900 to 1914. Wilhelm Solf was appointed the colony's first governor. In 1908, when the non-violent Mau a Pule resistance movement arose, Solf did not hesitate to banish the Mau leader Lauaki Namulau'ulu Mamoe to Saipan in the German Northern Mariana Islands; the German colonial administration governed on the principle that "there was only one government in the islands." Thus, there was no Samoan Tupu | <urn:uuid:e11da2bd-4d75-4386-82b3-a8350a88a42a> | CC-MAIN-2019-47 | https://wikivisually.com/wiki/Misa_Telefoni_Retzlaff | s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496664752.70/warc/CC-MAIN-20191112051214-20191112075214-00538.warc.gz | en | 0.96129 | 5,890 | 2.984375 | 3 |
Please share this presentation with others who would like to become self-sufficient and save money on their energy bills.
Do NOT recklessly spend all the extra money saved from reduced energy bills. Try to invest it wisely to look after family members and prepare for possible emergencies in the future.
This presentation is ONLY being made available for a short time frame and will be REMOVED in the near future if Mark comes under too much pressure from big energy companies, if you can not follow the rules above, please CLOSE THIS WINDOW IMMEDIATELY to free up your slot for the next person in line.
If you agree to all the above, click the "I Agree" button below to proceed to the following private presentation.
56 year old geography teacher Mark Edwards created his home-power system after his family was left cold and powerless following an unexpected flood..
He set out to create a power method that was cheap, easy to obtain (or build), easy to move and could use a constant source of power to create energy - meaning no dependency on sun, gas or wind.
He now uses this power source to drastically cut electricity bills and keep his family safe in times of crisis.
We have the potential to produce all of our electricity from clean energy sources
Today, we have the technology and the know-how to move beyond our dependence on polluting power plants by using clean, safe, and free renewable energy. By harnessing the potential of zero point magnetic energy, we can transform how we produce electricity.
A clean energy future will rely not just on renewable energy, but also
on better use of the energy we currently produce. By making the energy we produce last longer, or by increasing "energy efficiency," we can avoid the need for new polluting power plants. We can increase energy efficiency by using available technologies that do the same amount of work but use less energy, like a computer that goes to sleep when it's not in use.
I'm talking about a simple device that can be used by any family around the world and can even change the course of the entire energy industry!
Over the past year over 17000 people have also already successfully used the very same technique to get over tragic milestones such as hurricanes, snow storms or floods.
Based on technology we use on a daily basis, not only can it generate enough power to last through long harsh winters when temperatures outside go below zero, but it can ameliorate your power bill all year long.
I'm sure you're already intrigued and you want to know all about how this system works. Take the quiz to find out!
8 Tips to #Minimize Your Electric Energy Expense in a Flat Infographic
Substitute your air filter.
Change your hot water heater temp.
Wash and dry out your garments successfully.
Usage electricity dependable lighting.
Usage electrical power strips.
Switch off your roof supporters and also illuminations when not in use.
Readjust your thermostat.
how to lower electric bill
how to lower your electric bill
how to lower electric bill in apartment
how to lower my electric bill
how to lower electric bill in winter
how to lower electric bill in summer
how to lower electric bill with heat pump
how to lower electric bill in mobile home
how to lower electricity bills in the summer
how to lower electric bill in old house
how to lower electric bill with aluminum foil
how to lower your electric bill in an apartment
how to make my electricity bill lower
how to make your electric bill lower
how to lower electric bill in florida
how to lower electric bill philippines
how to lower my electric bill in an apartment
how to get a lower electric bill
how to get your electric bill lower
Much more items ...
May 18, 2018
8 Ways to Lower Your Electricity Costs in Your 1st Apartment or condo ...
firstchoicepower ... 8 means to lower your electrical energy costs in your first ...
Regarding this result
People also inquire
Just how perform I maintain my electric bill down?
Listed below are actually 10 means to Lower Your Electric Expense
how to lower electric bill air conditioner
how to lower electric bill in arizona
how to lower electric bill in winter in apartment
how to lower electric bill reddit
how to lower electric bill with ac
how to lower electric bill with electric heat
how to lower electric bill with solar
how to lower electric heat bill
how to lower electricity bill in india
how to lower gas and electric bill
how to lower my electric bill in the summer
how to lower my electric bill in the winter
how to lower my electric bill in winter
how to lower my electric bill tips
how to lower the electric bill in an apartment
how to lower your electric bill during the winter
how to lower your electric bill in the summer
how to lower your electric bill in the winter
how to lower your electric bill using magnets
how to lower your electric bill with magnets
Utilize a programmable thermostat. ...
Bonus insulate your residence. ...
Put on pleasant garments. ...
Change your sky filter. ...
Reduced the temperature level on the water heating system. ...
Balance Energy use by using home appliances strategically. ...
Spare Power by Washing clothes in chilly water.
Even more things ...
Oct 19, 2018
How to Lower Your Electric Bill|Payless Electrical power
paylesspower blog just how to decrease your power costs
Hunt for: How perform I keep my electricity costs down?
Why my electrical costs is therefore higher?
Lots of individuals possess high electrical energy bills because of the appliances that are linked into their electrical outlets, regardless of whether they may not be utilizing all of them regularly. ... While home appliances on stand by don't use as much electrical energy as when they remain in use, it may still build up, as well as it provides to a general much higher electric energy bill.Jan 15, 2019
how to lower your electric bill youtube
how to lower your gas and electricity bills
how to make electric bill lower
how to make my electric bill lower
how to cut lower your electricity bill
how to get help lowering electricity bill
how to get lower electric bills
how to get the electric company to lower your bill
how to keep electricity bills lower in the summer
how to lower apartment electricity bills
how to lower electric and gas bills by changing supplies
how to lower electric and heating bill
how to lower electric and water bill
how to lower electric and water bills
how to lower electric bill 2018
how to lower electric bill apartment temperature
how to lower electric bill by temperature
how to lower electric bill during summer
how to lower electric bill during winter
how to lower electric bill electric heat
Why Is My Electric Costs So High? 5 Trick Factors|EnergySage
news.energysage is your power expense expensive heres exactly how to address
Explore for: Why my electrical costs is therefore higher?
Exactly how can I decrease my electric costs in the summer season?
8 Ways to Lower Your Power Bill This Summer
When you don't require it, switch off your sky hair conditioner. ...
Utilize a programmable thermostat. ...
Switch to ELECTRICITY CELEBRITY home appliances. ...
Look for comprehensive property protection. ...
Clean your sky conditioning vents and units. ...
Switch on your roof fan. ...
Capitalize on natural venting. ...
Block the sunlight along with drapes as well as blinds.
how to lower electric bill florida
how to lower electric bill illegally
how to lower electric bill in an rv
how to lower electric bill in apartment air conditioner
how to lower electric bill in apartment air conditioning
how to lower electric bill in apartment air dry
how to lower electric bill in apartment automatically turn
how to lower electric bill in apartment ceiling fan
how to lower electric bill in apartment cold water
how to lower electric bill in apartment energy bills
how to lower electric bill in apartment energy costs
how to lower electric bill in apartment hardware store
how to lower electric bill in apartment heat water
how to lower electric bill in apartment heated dry
how to lower electric bill in apartment heating bill
how to lower electric bill in apartment home energy
how to lower electric bill in apartment hot water
how to lower electric bill in apartment in summer
how to lower electric bill in apartment incandescent bulb
how to lower electric bill in apartment light bulbs
8 Ways to Lower Your Electricity Costs This Summer Months Smart Power
smartenergy 8 techniques to decrease your electrical energy expense this summer months
Explore for: How can I lower my power costs in the summer?
How can I reduce my electric bill in the winter season?
Luckily, there are actually many techniques to lower your electricity expense this winter
Exactly how to reduce your electricity costs in wintertime.
Improve your regulator. ...
Reduced your thermostat. ...
Inspect your filters. ...
Don't obstruct your air vents. ...
Stay clear of heating uninsulated rooms. ...
Obtain a tune up. ...
Inspect your protection. ...
Make use of clever lighting fixtures practices.
Even more things ...
Jan 11, 2018
Just how to Lower Your Electric Costs this Winter|Direct Energy Weblog
directenergy blogging site just how to lower your power expense this wintertime
Look for: Just how can I reduce my electric expense in the winter?
Just how can I decrease my electric expense in my old house?
Here are actually five methods to make your old property the greenest one in your area.
how to lower electric bill in apartment peak hours
how to lower electric bill in apartment percent less energy
how to lower electric bill in apartment programmable thermostat
how to lower electric bill in apartment reddit
how to lower electric bill in apartment registered trademarks
how to lower electric bill in apartment save money
how to lower electric bill in apartment saving energy
how to lower electric bill in apartment smart power strips
how to lower electric bill in apartment surge protectors
how to lower electric bill in apartment utility companies
how to lower electric bill in apartment wash clothes
how to lower electric bill in apartment water heater
how to lower electric bill in apartment work harder
how to lower electric bill in ct
how to lower electric bill in ny state
how to lower electric bill in phoenix
how to lower electric bill in south carolina
how to lower electric bill in summer in apartment
how to lower electric bill in summer in florida
how to lower electric bill in texas
how to lower electric bill in winter and summer
Substitute lightbulbs and lightweight buttons.
Swap out taps, showerheads as well as bathrooms.
Update doors and windows.
Add insulation and also seal off the attic.
Reassess your electricity source.
Mar 22, 2017
5 ways to create your old home as energy dependable as a new one ...
clark residences genuine property 5 ways to create your aged home as electricity
Hunt for: Exactly how can I lower my electricity expense in my outdated house?
Just how can I minimize my power bill in the house?
Utilize these tips to reduce your electric energy costs:
How do I keep my electric bill down?
Why my electric bill is so high?
How can I lower my electric bill in the summer?
How can I lower my electric bill in the winter?
How can I lower my electric bill in my old house?
How can I reduce my electric bill at home?
What uses the most electricity in the home?
How can I lower my electric bill in an apartment?
How can we use less electricity?
What is the best way to save electricity?
How can I control my electric bill?
How can we save electricity in our daily life?
Replace your sky filter.
Adjust your heater temperature.
Laundry as well as dry out your garments effectively.
Use electricity dependable illumination.
Make use of power bits.
how to lower electric bill in winter reddit
how to lower electric bill majorly
how to lower electric bill on long vacation
how to lower electric bill tucson
how to lower electric bill while on vacation
how to lower electric bill with central air
how to lower electric bill with hot tub
how to lower electric bill with pool
how to lower electric bills in winter
how to lower electric cooling bill
how to lower electric cooling bill while using ac
how to lower electric heating bill
how to lower electrical bill
how to lower electrical bill devices
how to lower electricity bill at home
how to lower electricity bill electric heat
how to lower electricity bill uk
how to lower electricity bill with friedrich
how to lower electricity bills easily
how to lower electricity bills in winter
When certainly not in usage, switch off your ceiling fans and illuminations.
Readjust your temperature.
Practice efficiency with your kitchen area appliances.
May 18, 2018
8 Tips to Reduce Your Electrical Energy Bill in an Apart ... Initial Choice Energy
how to lower gas and electric bills
how to lower monthly electric bill
how to lower my electric and gas bill
how to lower my electric bill in florida
how to lower my electric bills
how to lower our electric bill
how to lower the cost of the electric bill
how to lower the electric bill in an old house
how to lower water and electric bill
how to lower you electric bill
how to lower your apartment electricity bill
how to lower your aps electric bill
how to lower your electric bill clermont fl
how to lower your electric bill in a mobile home
how to lower your electric bill in a trailer
how to lower your electric bill in arizona
how to lower your electric bill in bakery living orange
how to lower your electric bill in florida
firstchoicepower ... 8 techniques to decrease your electrical energy costs in your initial ...
Look for: Exactly how can I lower my electricity bill at house?
What makes use of one of the most energy in the house?
how to lower your electric bill legally
how to lower your electric bill oncor
how to lower your electric bill this summer
how to lower your electric bill utah
how to lower your electric bill with aluminum foil
how to lower your electric bill with electric heat
how to lower your electric heating bill
how to lower your monthly electric bill
how to negotiate a lower electric bill
how to significantly lower electric bill
how to use capacitors to lower electric bill
lower electricity bill how to cut electric bill in half
lower electricity bill how to keep your power bill low
lower electricity bill how to lower electric bill in winter
lower electricity bill how to reduce electricity bill device
lower electricity bill how to reduce electricity bill illegally
lower electricity bill how to reduce electricity bill tricks
Listed below's what utilizes the very most energy at home:
Air conditioning as well as home heating: 47% of energy use.
Heater: 14% of power make use of.
Washer and clothing dryer: thirteen% of power make use of.
Illumination: 12% of electricity usage.
Refrigerator: 4% of electricity usage.
Electric oven: 3 4% of electricity usage.
TELEVISION, VIDEO, wire carton: 3% of power usage.
Dish washer: 2% of energy use.
Even more items ...
Nov 14, 2016
Infographic: What Utilizes the A Lot Of Energy at home?
visualcapitalist what makes use of one of the most electricity residence
Hunt for: What makes use of one of the most electrical power in the property?
How can I lower my electrical expense in an apartment?
Below is actually an examine some distinct factors that you may carry out to spare loan on your energy costs.
Use Make Use Of Electricity Reliable BulbsLight Bulbs ...
Turn Switch Appliances Off. ...
Turn Transform the Lights When You LeaveLeave behind
More products ...
Apr 1, 2019
10 Ways to Save on Your Home's Electrical power Expense Lease Weblog
rent out blog 10 means save apartments month-to-month electric costs
Hunt for: How can I lower my electrical bill in an apartment or condo?
Just how can our team utilize much less electric power?
Exactly How to Utilize Much Less Electrical Energy
Unplug when not being used. ...
Utilize an energy strip. ...
Energy analysis. ...
Change your bulbs. ...
Tape your leakages. ...
Wash garments smartly. ...
Dry your garments properly. ...
Bunch your dishwashing machine.
A lot more things ...
Exactly how to Use Much Less Electrical Power|POPSUGAR Smart Living
popsugar clever living Just how Make use of Much less Electric Power 29003772
Hunt for: How can we make use of less energy?
What is the most ideal method to save energy?
21 recommendations: no cost methods to spare electric power
Shut off needless illuminations. ...
Make use of natural light. ...
Use activity lights. ...
Take much shorter showers. ...
Turn water off when trimming, washing hands, combing teeth. ...
Fix that dripping tap. ...
Unplug extra electronics. ...
Trench the home computer.
Extra items ...
21 pointers: no cost means to conserve electric energy BC Hydro
Look for: What is the most effective way to spare energy?
Exactly how can I manage my power costs?
How can our company save electrical energy in our every day life?
Perform followers utilize a considerable amount of electrical power?
Why is electrical power so costly?
Why is my electrical expense so high in the winter months?
Can Bad wiring trigger a higher electrical bill?
Can a negative hot water heater raising electric expense?
What is an average power bill?
How considerably performs a swimming pool elevate your electrical bill?
Why is electrical costs so higher in summertime?
Just how much is your power expense in the winter season?
What should you prepare your temperature at in the wintertime?
How can I decrease my home heating bill in the winter months?
Carry out glowing heaters utilize a whole lot of power?
How can I create my outdated property even more power effective?
How do you warm a big aged house?
How can I lower my energy?
Why my electrical bill is so higher?
Exactly how perform I keep my electric costs reduced in the summer season?
Carries out disconnecting traits spare money?
Carry out evening lights make use of a considerable amount of electricity?
What is the least expensive method to heat a property?
Just how much electrical power carries out a house usage each day?
Just how considerably carries out an electric bill cost for an apartment or condo?
How can I lower my electric bill?
Just how much is energy each month in a condo?
What is the greatest method to spare electricity?
How can I manage my electricity expense?
Exactly how can we save electrical power in our life?
What utilizes the absolute most energy in your house?
Exactly how can I reduce my electricity costs in the winter?
Exactly how can I reduce my electric costs in a condo?
Exactly how can I decrease my electrical costs in my flat?
Exactly how can I reduce my electric costs in my old home?
What is actually a traditional electricity expense?
Web end results
Exactly how to Lower Your Electric Expense|Payless Energy
paylesspower blogging site just how to lower your electricity costs
Score: 4.6 5,538 testimonials
Oct 19, 2018 Below are actually 10 means to Lower Your Electric Bill. Reduced the temperature level on the water heating system. Balance Electricity make use of through using home appliances smartly.
HOW TO LOWER
BILLS As Well As CONSERVE
Frying Pan TheOrganizer
YouTube Nov 25, 2016
5 Straightforward Methods to
Lower Your Power
Costs fifty% or EVEN MORE
YouTube Dec 20, 2016
How to Decrease
Electric Bill through
Greater than 60% in 1
YouTube Sep 3, 2013
Just how to Lower Your
Gas and Electric
YouTube Oct 27, 2017
Just how to Lower Your
YouTube May 4, 2009
Exactly how to Lower
Electric Costs in 3
Electric Saver 1200
YouTube Dec 27, 2013
Just how to Lower
Electric Costs Save
on Your Electric energy
Bill Approximately 40 ...
YouTube Apr 25, 2018
Best Method To Reduced
Electric Costs ...
Electric Saver 1200
YouTube Dec 18, 2012
Slashes: 5 Ways To
Electric energy Expense |
YouTube Jul 20, 2018
A Cheap Device
That Will Decrease Your
YouTube Jun 9, 2017
15 Ways to Lower Your Electricity Bill NerdWallet
nerdwallet blog financial exactly how to conserve amount of money on your electric costs
Maintaining the lightings on isn't low-priced-- don't bother the central air conditioning, furnace as well as warm water heating unit. ... Always keep reading for means to minimize your electric bill. ... Home home heating and also air conditioning are 10 of the biggest perpetrators behind large power costs-- as well as the greatest places to seek expense ...
Tips for Reducing Your Electric Bill The Spruce
thespruce '... 'Environment-friendly Residing 'Green Staying Tips
Jul 8, 2019 Listed below are actually quick and easy points that you can easily do to reduce your electrical bills all year without losing your family's comfort.
How to Reduce Your Power Costs without Expense or even Lose Lifehacker
lifehacker exactly how to decrease your power bill without cost or even sacrific 59530 ...
Oct 22, 2012 When my first power costs came, it escalated to heights I failed to also anticipate. When I examined popular options, every little thing cost cash.
41 Super Easy Ways to Lower Your Electric Expense
save power future 41 very simple techniques to lower your electricit ...
Everyone's seeking techniques to go green these times. Listed here are 40 easy as well as simple recommendations to decrease your energy costs through creating some small modifications in your house.
Just How to Keep Power Costs Reduced|Investing|United States News
money.usnews amount of money personal money management ... just how to keep electricity expenses reduced
Lesser your electricity costs as well as increase your savings with these affordable tactics.
Electric costs: Free, simple adjustments you can create to spare as well as cut expenses
usatoday tale funds ... spending plan ... electricity bill ... 552876001
Nov 13, 2017 Electric expenses are kind of an enigma, but there's a lot of big as well as small ... examine exactly how to decrease your power costs and minimize your power bill.
50 Tips to Hairstyle Your Electric Bill asunder HomeSelfe
homeselfe 50 tips to cut your electric bill in half
In 2016, the typical power bill in the United States was $119 monthly-- over ... By always keeping the sun out, you can easily reduce air conditioning system power usage.
8 Ways to Lower Your Electric Energy Costs in Your Very First Apartment or condo ...
firstchoicepower ... 8 means to decrease your electrical power expenses in your fi ...
May 18, 2018 Along with the excitement of your initial condo arrives the duty of paying bills. Keep your electric energy costs reduced as well as on budget along with these ...
Electric utility associations
Fight it out Energy
Battle each other Power
Georgia Electrical power
National Framework plc
. National Grid plc
Entergy. Fla Energy & Illumination.
Florida Energy & Illumination.
Ways of energy preservation.
Viewpoint 3+ even more.
Heat energy rehabilitation venting.
Heat healing venting.
Searches associated with exactly how to lower electricity costs.
just how to reduce electricity costs in apartment.
exactly how to lower power bill in summer season.
just how to reduce power bill in winter months.
exactly how to reduce electricity costs in fla.
how to decrease electric expense reddit.
exactly how to lower electric expense in old house.
decrease power bill by 75 percent.
how to cut electric bill in fifty percent.
TXU Energy ® Official Website|7 Highest Make Use Of Days Free.
Right now whichever 7 times you make use of one of the most electrical power each month are complimentary. Automatically. In some cases, it is actually nice to have a Free Successfully pass-- especially when you require it most.
Every month, instantly. Get your freebie today.
TXU Electricity Pure Solar.
Powered by one hundred% Texas Solar Farms. Go sun as well as assistance tidy energy.
TXU Electricity Refer a Friend.
A friend request that pays off. $50 for you as well as $fifty for them.
TXU Power Mobile Application.
Manage your energy make use of. Scenery, income or even approximate your expense.
Seek a Business Quote.
Competitively priced programs. Superb client service.
44 Ways to Lower Your Electric Expense thespruce.
thespruce lesser your electrical costs 1388743 thespruce lesser your power expense 1388743.
Sediment build-up in your warm water heater can reduce the effectiveness of the burner. Make use of the valve behind your warm water heating unit to drain pipes the sediment twice annually. Remain to 41 of 44 under.
41 Super Easy Ways to Lower Your Electric Bill Conserve ...
preserve electricity future 41 super effortless means to reduce your power bill.php conserve power future 41 extremely quick and easy techniques to reduce your electric energy bill.php.
41 Easy Ways to Lower Your Electric Costs. Usage floor heating systems as well as quilts. Open home windows: In the summer months, opening up home windows in the early morning will cool down the house without cranking the sky conditioning.
10 Easy Ways to Lower Your Electric Bill forbes.
forbes websites moneybuilder 2011 08 23 10 effortless methods to reduce your electrical costs forbes web sites moneybuilder 2011 08 23 10 quick and easy methods to reduce your electricity costs.
Aug 23, 2011 · 10 Easy Ways to Lower Your Electric Costs. Therefore, the cost to cool our property is actually receiving profane. Our experts can call the temperature up to 80 degrees, placed a kiddie swimming pool in the living-room, as well as purchase some Misty Mates from HSN, however I am actually not eager to go there. I work coming from home, and I will not be unhappy to conserve a couple of bucks.
15 Ways to Lower Your Electricity Costs NerdWallet.
nerdwallet blogging site money management how to save money on your electrical bill nerdwallet blog financing just how to spare loan on your electric expense.
Carrying out therefore for 8 hrs may reduce your annual heating & cooling expenses through around 10%. A programmable temperature will perform the benefit you. Readjust your fridge and also fridge temperature: Specify your refrigerator to 38 levels as well as your fridge freezer to 5 degrees.
How to Lower Your Electric Expense|DaveRamsey.
daveramsey weblog exactly how to reduce electric costs daveramsey blog post exactly how towards reduce electric bill.
You presumed it ... we are actually discussing the electrical costs. Look at these summer months sparing suggestions on how to decrease your electrical costs as well as still hammer the heat energy this summer months season. Carry Out an Electric power Review. Words analysis does not seem like a lot fun, but if you believe saving money is actually enjoyable, you'll possess a blast. Advantageous (and also most thorough ...
Exactly how to Lessen Your Electricity Bill without any Expense or Sacrifice.
lifehacker how to reduce your electricity costs without cost or even sacrific 5953039 lifehacker exactly how to lessen your electricity costs without price or sacrific 5953039.
The warmth was unyielding, therefore was the central air conditioning. When my very first power expense happened, it escalated to heights I didn't even anticipate. If you yearn for to invest a little bit of amount of money to lower your expense ...
7 Summer Summertime Electricity Sparing to Lower Your Electric Bill, ...Expense
blog.nationwide electricity saving suggestions to reduce energy bill blog.nationwide power sparing recommendations to minimize power bill.
Power business typically raise gasoline as well as electrical energy prices during the course of the best opportunity of the time, according to Energy Upgrade California. You can still fill the dishwashing machine after dishes, but hanging around a bit to operate it may assist reduced summertime power bills.
How to Always Keep Energy Prices Reduced|Investing|US News.
money.usnews loan individual financing investing short articles just how to maintain power costs reduced money.usnews amount of money private money costs short articles just how to maintain electricity costs reduced.
Lower your electrical expense and enhance your savings along with these price effective techniques. Conserve amount of money on your electricity bill with these pro supported recommendations. ...
10 Cool Ways To Lower Your Electrical Costs Bankrate.
bankrate financing clever spending 10 techniques to spare amount of money on your power expense 1. aspx bankrate finance clever investing 10 ways to save money on your utility expense 1. aspx.
The Bankrate Daily. Make sure you're simply getting billed for the electric power you in fact utilized through comparing the meter reading on your energy costs to what you actually view on your meter. That's a lifeless free gift that you are actually being actually surcharged if the volume on your meter is lower than the one on your costs.
3 Ways to Lower Power Bills in the Summer months wikiHow.
m.wikihow Lower Electric energy Expenses in the Summer months m.wikihow Lower Electric energy Expenses in the Summer months.
In the course of the summer, electricity bills can easily take off. There are actually some straightforward energy conserving techniques that may assist you lesser energy bills in the summertime. For greatest outcomes, apply greater than one strategy. If you are trying to reduce costs in the home or even job, speak with your member of the family, roommates, or even coworkers so they recognize just how to spare power, as well.
Texas Electric Energy Rates|Finest Affordable Texas Energy.
Texas Electrical Power Prices. Greatest & Cheap Texas Electric Power Prices. Switch Now!
7 Ways To Minimize Your Electric Energy Expense|CleanTechnica.
cleantechnica 2013 11 03 7 ways minimize electrical energy expense cleantechnica 2013 11 03 7 means decrease electric power costs.
Exactly How To Lower Electric Expense, Action # 1-- Go Solar! I am actually mosting likely to go forward and also start with the very most noticeable-- the most successful means to reduce your power costs is very most likely by going sun.
HOW TO LOWER YOUR ELECTRICAL BILLS AND ALSO CONSERVE FUNDS!!! YouTube.
m.youtube watch?v vcsINpvD27k m.youtube watch?v vcsINpvD27k.
Exactly how to reduce your electrical bills and save funds! Tips and tricks to decrease your month-to-month electric bill in effortless and also easy DO-IT-YOURSELF actions! As a property owner, I am actually always seeking straightforward means to lower my ...
8 Ways to Lower Your Electric Energy Expense This Summer Months.
smartenergy 8 means to reduce your electrical power expense this summer season smartenergy 8 ways to reduce your electric energy expense this summer.
This removes the problem of consistently adjusting your thermostat as well as the apprehension of coming property to a scorching property. A terrific design in the programmable temperature field is actually the Home. While the retail rate for the Nest is $250, the company states that it may reduce electrical power bills by 20%. 3. Switch to POWER SUPERSTAR devices.
Exactly how to Lower Your Electric Bill|Payless Energy.
paylesspower weblog just how to decrease your electrical expense paylesspower blog site how to decrease your electrical expense.
If you have an interest in knowing just how to lower your electricity bill by utilizing pre-paid electricity in Texas, check out Payless Power. Using economical power plans to match both private as well as business demands, Payless Power is actually a provider committed to delivering folks not simply with several of the most ideal electrical energy fees in Texas, yet likewise along with useful ...
Amount of money Sparing Tips for Electric Expense womansday.
womansday lifestyle work funds g3084 means to reduce your electrical bill summer womansday lifestyle work cash g3084 means to decrease your electrical costs summertime.
Save loan on your electricity costs with Loan Saving Tips from WomansDay. We provide a selection of helpful cash conserving techniques as well as ideas to help your household feel really good.
12 means to conserve power as well as money United States Conserves.
americasaves blog 1368 12 means to conserve power and also funds americasaves blog 1368 12 ways to conserve electricity and amount of money.
12 easy means to conserve energy as well as conserve amount of money from Individual Alliance of The United States. 12 techniques to conserve power and also loan United States Conserves Power takes a substantial snack out of home finances, along with the normal family spending regarding $2200 each year on energy expenses.
Just How to Lessen Electric Costs through Much More Than 60% in 1 month ...
m.youtube watch?v AzUESjDN4Ok m.youtube watch?v AzUESjDN4Ok.
Exactly How to Decrease Electric Costs by Greater Than 60% in 1 month! ... someday while browsing online I discovered an advanced new environment-friendly electricity modern technology which assured to reduce your electrical costs through ...
10 Ways to Decrease Your Electric Power Expense MapleMoney.
maplemoney 10 ways to lower your electrical energy costs maplemoney 10 techniques to decrease your electric energy expense.
If you combine your attempts to conserve electrical power with your attempts to decrease your heating system costs and water expenses, you could save a fair bit each month, amounting to significant financial savings over a lifetime. If you are actually tired of paying for excessive for electrical energy, below are actually 10 tips to minimize your electricity bill: Transform off lightings when not in use.
How to Produce Electric Costs Go Down Every Month|Pocketsense.
pocketsense make electric bills down month 12003905 pocketsense make electricity bills down month 12003905.
High electrical costs can easily produce you have a hard time your spending plan. A bill that fluctuates coming from month to month may be uncertain. Having said that, you can handle your energy consumption and create electrical expenses go down every month by means of steady improvement and producing improvements to your property.
7 Tips for How to Lower Electric Expense in the Summertime Goal ...
missiontosave 7 suggestions for exactly how to decrease electric expense in summer months missiontosave 7 pointers for how to lower electricity expense in summer.
Listed here are actually 7 successful recommendations and also methods for how to decrease electric expense in the summer season while appreciating the enjoyable that the season needs to give. The summer months season always possesses a significant rise in the electric costs. Feel it or certainly not, a common family in the USA spends greater than $400 on electrical power throughout the summer months.
Twenty Affordable Ways to Lower Your Energy Bills.
doughroller wise spending twenty cost-effective means to decrease your utility expenses doughroller brilliant costs twenty inexpensive ways to decrease your power costs.
Just how to decrease your power expenses 20 low-cost suggestions. As a youthful grown-up, I never knew that there was actually such a trait as a power use consultation where companies analyze achievable places you have ...
11 Ways to Save and reduce Funds on Electrical Costs.
moneycrashers 10 ways to lessen your utility costs moneycrashers 10 methods to minimize your power bill.
This details offers our company 3 crucial locations to pay attention to, and I've developed 11 techniques you can promptly reduce your utility bill economically as well as effectively: Ways to Save on Electricals in your house. 1. Include Attic Protection.
10 Easy Ways to Lower Your Electric Expense Obtain Abundant Slowly.
getrichslowly 10 easy methods to reduce your power bill getrichslowly 10 easy techniques to reduce your electric expense.
To reduce my electrical bills I've done the following: I use a stress stove to reduce the volume of your time spent cooking food using my cooktop. I'll utilize my crockery flowerpot on the deck to always keep your home also cooler.
How to Reduce Energy Bill|10 Easy Tips to Lower.
Why Is My Electric Bill So High? ... House home heating and also air conditioning are 10 of the greatest root causes responsible for sizable electrical expenses-- and also the greatest places to look for price ...
Tips for Lowering Your Electric Bill The Spruce
Inspect out these summertime conserving pointers on just how to decrease your electricity bill as well as still beat the heat energy this summer season. When my very first electricity expense happened, it escalated to elevations I really did not even assume. ... If you desire to spend a little amount of money to reduce your costs ...
7 Summer Summer Months Saving Sparing to Lower Your Electric BillCosts# | <urn:uuid:d87d7a29-7623-4038-a0a8-bdc88fcb453d> | CC-MAIN-2019-47 | http://wisefugepower.com/lower-electric-bill-pleasanton-ca/ | s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496670948.64/warc/CC-MAIN-20191121180800-20191121204800-00058.warc.gz | en | 0.908459 | 7,813 | 2.53125 | 3 |
Survival of the Male Breadwinner.
At the heart of the breadwinner model was the expectation that masculinity was a key aspect of obtaining paid work. (7) This concept formed the basis for the creation of a base wage for labouring work; it was formalised in the early twentieth century and remained in place as a feature of wage determination until the late 1960s. (8) This paper will examine the influence of the male breadwinner model on women in the workplace throughout the twentieth and twenty-first centuries.
Writing that considers the participation of women in the paid workforce vacillates between blatant expressions of male domination and more recent attempts by regulators to bring equity into the employment relationship involving female employees. A more refined lens for viewing past events exposes the true extent of the disadvantage suffered. This perspective provides knowledge that informs the pursuit of social justice and equity for women caught in the legacy of the male breadwinner model.
In the late-nineteenth century, 'Women's labour [was] most visible in relation to the "family farm", where female family members assisted in the running of the farm and in providing a steady source of subsistence income'. (9) To work outside the bounds of the family business was considered undesirable, and those whose family circumstances required them to find paid employment faced a working environment which devalued their rights and worth as employees. The struggles women faced in the early 1900s were often as basic as survival (10) or as noble as the quest for equality with men (11)--both, at times, equally difficult to achieve.
In the early twentieth century, female breadwinners were in a position of powerlessness; their continued employment often depended on the goodwill of other women to assist with their caring responsibilities, enabling them to work. 'It was predominantly women's unpaid labour that provided the social services now considered essential responsibilities of government such as health care and child care'. (13) A woman without support had little to assist her in her social isolation. (13) The feminist movement was gaining momentum, and journals such as Women's Voice were published to advance women's rights to equal participation within the paid workforce. (14) The attitudes faced by these women could be summed up by the South Australian Attorney-General W J Denny, when he 'observed fatuously that it could be fairly stated that women were not the same as men'. (15)
In 1919, the Australian Conciliation and Arbitration Court determined that women who performed women's work should be paid a basic female wage set at 54% of the male wage regardless of marital status or responsibility for dependants. (16) This decision denied these women and their families the same frugal comfort (17) that was allowed for men under the family model that consisted of a male breadwinner, usually a husband-father, a housewife-mother and children. (18)
The establishment of women's organisations, aimed at collectively supporting professional women against the backdrop of the male breadwinner, played a key role in sustaining the effort to encourage women in non-traditional roles. More than two thirds of female graduates from university in the early twentieth century became teachers, (19) who tended to be employed in private secondary girls' schools and women's colleges. Women studying in non-traditional fields experienced greater resistance than those studying to be teachers. Throughout the late-nineteenth and early-twentieth centuries female doctors were more likely to find work in hospitals specialising in women's and children's health--work considered more suitable for women (20)--a reflection of a changing attitude towards working women that allowed them into suitable positions while still preserving the male breadwinner model. The struggle of women for equality in the workplace was furthered by the Feminist Club of New South Wales, originally formed in 1914. The goals of this club included working towards equity in the areas of employment opportunity and pay. Jessie Street became its President for a brief period in the 1920s, but resigned to do work with the UAW more congenial to her increasingly socialist politics. (21) Street was part of the movements for the rights of married women to work and for improved wages for women. (22) Collective action by women was emerging as a key component of advancing their rights in the workplace.
Through the mid-twentieth century, gains for women were being made, but sometimes it was one step forward and two steps back. The prevailing view was that women should be kept in their place, enjoy their position under male protection and preserve the male domain of the workplace. (23) This was underscored by an industrial system that happily acknowledged women as the lower class in the workplace. (24) By the conclusion of World War Two women had been promoted, temporarily, to a wage equivalent to 75% of the male wage. Most female wages were returned to 54% at the end of the war, although Western Australia retained the female wage at the 75% level. (25) This failure to recognise the value of female workers is an example of the detriment women suffered at the hands of the dominant male breadwinner model.
In the post-war period, white-collar occupations, especially teaching, were most active in challenging the male breadwinner principle. The Teachers' Federation of New South Wales demanded equal pay for equal work. As a result of the Teachers' Federation campaign, the New South Wales Industrial Arbitration Act was amended with the addition of Section 88D, Equal Pay under Certain Circumstances. The amendment required a woman seeking equal pay to relate her wage value to that of a man in the same occupation. (26)
Issues specific to women in the workplace began to receive acknowledgement in the public sphere; for example, 'sexual harassment of women was acknowledged as being both undesirable and prevalent, especially in the workplace'. (27) The year 1966 saw the abolition of the bar excluding married women from the Federal Public Service. (28) Gains for women were being made but there was little acknowledgement of the vast policy changes needed for women to be fully integrated in the workforce. (29) Unions and feminist organisations continued the push for increased equality in the workplace, (30) but the breadwinner model persisted. 'The wage fixing principles of the arbitral systems were based on, and confirmed, the assumption that the typical worker ... was understood ... he was a he. He would be paid a fair and reasonable wage on the assumption that he was, or would become, the breadwinner for a family'. (31) The male breadwinner concept remained the most significant barrier to women gaining equality in the workforce. (32) Collective action proved important in the quest for improved pay and conditions for teachers, specifically in New South Wales.
The onus was always on women to prove themselves, but this was difficult to do from a position of disadvantage with little support available in the wider community. (33) Changes in societal attitudes heralded the beginning of a wider understanding by women of their rights as individuals in the workplace. There was also an increasing recognition of the need for women to cooperate to achieve outcomes in policy and legislation.
The 1960s have been shown by history to be an era of change, a time of flux in which women were moving to secure their rights in the workplace. They 'were unionising at a faster pace than men and were emboldened by their increased numbers and solidarity within the labour movement, these women began to demand changes'. (34) Women drove the growth of the feminist movement, and female labour force participation increased throughout the decade. The value of collective action soon became apparent, since 'The great gains for women have been made, and are made, through collective action'. (35)
At the federal level, the campaign for legislative change to enshrine the principle of equal pay made headway in the 1960s. A number of women's organisations were active in this campaign, including the Australian Federation of Business and Professional Women's Clubs, the Australian National Council of Women, the Australian Federation of Women Voters and the Union of Australian Women. This campaign resulted in the Arbitration Commission's 1969 equal pay decision. (36)
The emphasis on collective support by women for women was important in the success of the continuing campaign for equity in the workplace.
In 1959, the NSW Industrial Arbitration Act was amended to allow equal pay under certain circumstances. (37) The responsibility was on women: those seeking equal pay had to argue that their wage was of equivalent value to that of a man in the same occupation. (38) The states followed eight years later. In 1967, the total wage replaced the basic wage and margins; following this, the Australian Industrial Relations Commission (AIRC) instituted a system whereby wages would be reviewed on an annual basis on 'general economic grounds'. (39) These decisions flagged an era in which issues of the family began, very gradually, to be separated from wage fixation.
The Arbitration Commission's 1969 equal pay decision brought Australian women another step closer to equal pay for work of equal value. The ACTU claim for equal pay in the 1972 National Wage Case 'overcame some of the limitations of the 1969 principles by broadening the scope from "equal work" to "equal value", thus opening the door to claims from female dominated areas of work'. (40) However, an application for a minimum wage for women was rejected despite the fact that equal pay was being phased into the awards, due to a lingering intent to use the family unit as the basis for the minimum wage. This was followed in 1974 by the ratification of International Labour Organisation (ILO) Convention No. 100 on Equal Remuneration (1951), affirming the 'equal pay for work of equal value' principle, as well as by the extension of the minimum wage to women. The AIRC effectively made the point that its role was in industrial arbitration, maintaining that the care of families was the responsibility of government. (41) The final formal barrier to gender wage equality was the continued imposition of a higher minimum wage for men to allow for their family responsibilities; this was finally removed in 1974. (42) The institutionalisation of the breadwinner model was in decline. (43)
Women's issues were now firmly on the agenda, but women remained second-class citizens in the workplace. (44) At this point women walked an awkward tightrope, balancing the need to be seen as individuals contributing meaningfully to the workplace with conforming to the social expectations that still embraced the male breadwinner model. (45) The subsequent formal illegitimacy of the model, combined with social change (which included the increasing number of married women in the workforce, a greater number of two-income families and single-parent households and a growing demand for equal rights and pay), resulted in a growing tension between the expectations of employment and the reality of living and working. (46) The undervaluation of female-dominated work began to develop as a pressing issue, and any equal pay cases presented to the AIRC were subject to the Government's overarching policies of wage restraint. (47) Events such as this, not necessarily related to the promotion of the male breadwinner model, perpetuated its institutional influence on wage equity. The role the Government undertook in allocating assistance to families finally relieved the wage system of its previous responsibility for the family.
The equity trends of the 1980s were continued through legislative change. Equal employment opportunity introduced a new round of concern, in that women were forced to work harder than men to prove their equality in professions traditionally the domain of men. This became evident in the '1986 test case seeking revaluation of nurses' work on the basis of comparable worth, through a variation to the Private Hospitals and Doctors' Nurses (ACT) Award'. (48) Rather than supporting the male breadwinner model in an overt way, the decisions of the AIRC contributed to its continued survival, if not in policy, then certainly in practice. Throughout the 1980s the concept of structural efficiency pushed aside concerns for comparative wage justice. (49) In an era when equality of pay and conditions was beginning to be taken for granted by society, inequality maintained its status by stealth.
The Accords protected minimum pay rates through the safeguard of wage indexation, and protected vulnerable groups of workers mainly through awards. (50) These protections continued until the Workplace Relations Act (1996), which introduced individual contracts, reduced the scope of awards and modified the influence of the AIRC. (51) Since the Workplace Relations Act (and subsequent amendments) reduced the trade unions' exclusive right to represent employees in industrial issues and progressively promoted individual agreement making, women have been denied the power to effectively defend their workplace pay and conditions. (52) The themes of female collectivity and the male breadwinner model consistently emerge in the history of female employees, so it is unsurprising that an individualist industrial relations framework such as WorkChoices disadvantaged female employees.
The 1996 Workplace Relations Act allowed parents to access entitlements from the ACTU-led Maternity Leave (1979) and Parental Leave (1990) test cases. Additional parental provisions were gained through ACTU test cases in the 1990s. (53) While these gains made the role of working carers more manageable, community attitudes continued to allocate caring responsibilities to women. The role of women blurred between breadwinner and principal carer; (54) women were working less family-friendly hours (55) for less pay (56) while little allowance was made for caring responsibilities. (57) For women in a workplace where bargaining was becoming more individualised, power was limited. (58) The participation of unions in the workplace was increasingly restricted, and those previously protected through test cases and through the awards were vulnerable. (59) Women have resorted to developing strategies to manage multiple expectations at work. (60)
'By 2001, 28 per cent of Australian employees (and 45 per cent of female employees) worked part-time, and much of this employment was casual'. (61) The precariousness of work for part-time and casual workers was compounded by their lack of access to benefits often made available to full-time workers. Part-time or casual work was (and still is) often undertaken to facilitate the type of flexibility required by women caring for others. (62) Part-time work typically consists of lower-paid, lower-level positions with few benefits to assist with work-family balance, caring responsibilities and other roles that women might undertake. (63)
The inexorable march towards individualisation had begun with the introduction of the Workplace Relations Act (1996); workers' rights were whittled away and employers were gaining the upper hand in power relations in the workplace. (64) 'The general pattern from all data is that the progress towards equal pay that was being achieved through the 1970s and 1980s has been halted, and partly reversed, under the enterprise bargaining regimes of the Accord and the Workplace Relations Act'. (65) The support women had previously received from unions in their quest for flexible work was denied them. The male breadwinner was beginning to re-emerge through the disadvantage experienced by women brought about by a neo-liberal approach to industrial relations. (66) Evidence presented to the Standing Committee on Social Issues, NSW Legislative Council, reflected the disadvantage felt by women. This was highlighted by a childcare worker who gave her perspective on the introduction of the WorkChoices legislation.
I am disgusted to think that we are not rewarded for working hard. I used to be paid $280 a week to look after and take on responsibility for other people's children. Though the pay equity case is through, some small businesses will not have to pass on the benefits of that decision, and I am concerned that from now on it will operate on a case-by-case basis ... Regardless of the political rhetoric, in the real world it is not a level or genuine playing field for women to thrash out their rights and conditions with their bosses. Couple this with the removal of the independent umpire and the no disadvantage test, and tell me honestly how we can fight the person who has the purse strings. (67)
The advent of WorkChoices denied women a position of power at the bargaining table. (68) They were required to 'bargain' for their pay and conditions but found that they often had little to bargain with and a lot to bargain for. WorkChoices legislation undermined hard-fought provisions, allowing a reduction in the pay and conditions of employees; these provisions had evolved through consultative processes over more than 100 years of conciliation and arbitration. (69) 'The gender pay gap was worse on AWAs: whereas women on registered collective agreements received 90 per cent of the hourly pay of men on such agreements, women on AWAs received only 80 per cent of the hourly pay of men on AWAs'. (70)
The modernised version of the male breadwinner model re-emerged with a vengeance under WorkChoices. Social inequities previously perpetuated found a new home with this legislation. The Fair Work Act carries with it hope for a workplace where 'equal remuneration for work of equal or comparable value' (71) is once again on the agenda of the Federal Government.
In May 2011 Fair Work Australia (FWA) handed down a decision on a landmark Equal Remuneration Case brought by several unions in the social, disability and community services sector (SACS). FWA found that there is a partially gender-driven difference between the wages of SACS employees and those of other public sector employees who perform comparable work. (72) FWA did not make an immediate order, instead deferring a decision to give the parties time to attempt to reach some agreement (73) (the outcome of which was not known at the time of writing). This was the first use of the Fair Work Act's new broader definition of gender equity in remuneration encompassing work of comparable, not just equal value, (74) and has the potential to further advance gender pay equity.
The benefits of the Fair Work Act extend to the modernisation of awards; the encouragement of collective bargaining; the facilitation of flexible working arrangements (particularly for employees with family responsibilities); and the provision of bargaining support for the low paid. Each of these objectives supports women in the workplace, explicitly or implicitly.
Balancing work and family has involved the development of governmental policies which support the needs of employees who have caring responsibilities. In practice, these policies tend to address the juggling act carried out by women who have to balance employment and caring roles. (75) The Fair Work Act aims to achieve a number of outcomes in support of workers with caring responsibilities; one of these is to improve the pay of women. The low-paid bargaining stream provides a means of improving pay for the low-paid, and women stand to benefit significantly from this support as they make up a large proportion of casual employees. (76) The safety net provided by the National Employment Standards (NES) includes the right to request an extension to the existing parental leave entitlement and the right to request changes to working arrangements to allow for the care of children. (77) Women are the main carers of children and are therefore the most likely to benefit from these 'right to request' provisions. (78)
Maternity leave, like so many other workplace provisions which assist with work and caring responsibilities, was achieved over decades of persistent lobbying. In 1979, the ACTU test case resulted in the Commission handing down a decision to allow women who were employed under awards to be eligible for 52 weeks' maternity leave. It was not until 1990 that men were provided with the same allowance if they were the primary carer, (79) a reflection of the view of male roles as breadwinners, not carers. Once again it was the ACTU which, in 2005, pursued an improvement in working conditions affecting women through the Family Provisions test case, which resulted in the right of parents to two years' unpaid parental leave and to part-time work for those with children under school age. (80) Paid parental leave finally became available to working parents in January 2011 and provides 18 weeks' leave paid at approximately $570 per week. (81) Until then, Australia had been one of only two developed countries without a paid parental leave scheme.
The Fair Work legislation holds the promise of a better working life and the expectation of long-term societal change--and it is only with this change that there is hope of eliminating the male breadwinner concept. In recent times the concept has been preserved in hegemonic attitudes. Work has traditionally been seen as the domain of men, and only with the wisdom of hindsight and the benefit of collective achievement can existing perceptions of social justice for women continue to be challenged and advanced.
The male breadwinner model still exists in Australia as a result of imbalances between male and female earnings, the concentration of women in part-time and casual work, and the caring responsibilities that tend to fall to women. The male breadwinner model became institutionally entrenched when Justice Higgins used the concept to formulate the base wage for male workers, and it remained part of wage fixing until the late 1960s. (82) The cause of the male breadwinner has been advanced in recent times by legislation that inadvertently supports inequality in the workplace. The hope for future change lies in the Fair Work legislation and its expectations of rebuilding justice in the workplace.
The author would like to thank David Peetz and Kaye Broadbent for their assistance in the development of this paper.
(1) J. Monie, Victorian History and Politics: European Settlement to 1939. Bundoora: The Borchardt Library, La Trobe University, 1982; J. Murphy, 'Breadwinning: Accounts of Work and Family Life in the 1950s.' Labour and Industry 12.3 (2002): 59-75; T. Warren, 'Conceptualising Breadwinning Work.' Work, Employment and Society, 21.2 (2007): 317-36.
(2) D. Palmer, M. Shanahan and R. Shanahan, 'Introduction,' in Australian Labour History Reconsidered, ed. D. Palmer, R. Shanahan and M. Shanahan, Adelaide: Australian Humanities Press, 1999.
(3) J. Monie, A. E. Morris and S. M. Nott, Working Women and the Law: Equality and Discrimination in Theory and Practice, London: Routledge 1991.
(4) D. Palmer, M. Shanahan and R. Shanahan, 'Introduction'.
(5) H. Jones, In Her Own Name, Netley: Wakefield Press, 1986; Palmer et al.
(8) C. Fahey, J. Lack and L. Dale-Hallett, 'Resurrecting the Sunshine Harvester Works: Re-presenting and Reinterpreting the Experience of Industrial Work in Twentieth-century Australia,' Labour History, 85 (2003): 9-28.
(9) M. Nugent, Women's Employment and Professionalism in Australia: Histories, Themes and Places. Canberra: Australian Heritage Commission, 2002, 14.
(12) Nugent, 14.
(13) S. Gannon, '(Re)presenting the Collective Girl: A Poetic Approach to a Methodological Dilemma.' Qualitative Inquiry 7.6 (2001): 787-800.
(15) Jones, 265.
(16) Dept of Consumer and Employment Protection, 'History of Pay Equity in Western Australia,' Pay Equity Unit: Labor Relations Division, 2007.
(17) Fahey et al.
(19) Jones, 265.
(20) Jones, 265.
(21) Gail Griffith, 'The Feminist Club of NSW, 1914-1970: A History of Feminist Politics in Decline.' Hecate 14.1 (1988): 60-62.
(25) Dept of Consumer and Employment Protection.
(27) Jones, 309.
(28) Merle Thornton, 'Scenes from a Life in Feminism,' in Hibiscus and W- Tree: Women in Queensland, ed. Carole Ferrier and Deborah Jordan, St Lucia: Hecate Press, 2009, 233-4.
(31) B. Ellem, 'Beyond Industrial Relations: WorkChoices and the Reshaping of Labour, Class and the Commonwealth.' Labour History 90 (2006): 217.
(33) B. Pocock, The Work/Life Collision, Sydney: Federation Press, 2003.
(34) R. Frances, L. Kealey and J. Sangster, 'Women and Wage Labour in Australia and Canada, 1880-1980'. Labour History 71 (1996): 88.
(35) D. Peetz, Collateral Damage: Women and the WorkChoices Battlefield, Association of Industrial Relations Academics of Australia and New Zealand Conference Auckland, 2007, 3.
(36) Nugent, 41-42.
(39) S. J. Deery and D. Plowman, Australian Industrial Relations (3rd ed.), Sydney: McGraw-Hill, 1991, 359.
(40) G. Whitehouse, 'Justice and Equity: Women and Indigenous Workers,' in The New Province for Law and Order, ed. J. Isaac and S. MacIntyre, Cambridge: Cambridge University Press, 2004, 231.
(41) Deery and Plowman.
(45) S. Timberlake, 'Social Capital and Gender in the Workplace,' Journal of Management Development, 24.1 (2005): 34-44.
(46) D. Peetz, 'Retrenchment and Labour Market Disadvantage: Role of Age, Job Tenure and Casual Employment,' Journal of Industrial Relations, 47.3 (2005): 294-309.
(48) Whitehouse, 236.
(49) C. Briggs, J. Buchanan and I. Watson, 'Wages Policy in an Era of Deepening Wage Inequality,' Canberra: Academy of Social Sciences in Australia, 2006, 1-44.
(50) K. Cole, Workplace Relations in Australia, Frenchs Forest: Pearson, 2007.
(52) C. Sutherland, Agreement-making under WorkChoices: The Impact of the Legal Framework on Bargaining Practices and Outcomes, Office of Workplace Rights Advocate Victoria, 2007.
(54) T. Warren, 'Conceptualising Breadwinning Work,' Work, Employment and Society, 21.2 (2007): 317-36.
(55) B. Pocock, The Work/Life Collision, Sydney: The Federation Press, 2003.
(57) G. Strachan, J. Burgess, and L. Henderson, 'Work and Family Policies and Practices,' in G. Strachan, E. French and J. Burgess, eds., Managing Diversity. Sydney: McGraw-Hill, 2009.
(58) D. Peetz, Collateral Damage: Women and the WorkChoices Battlefield, Association of Industrial Relations Academics of Australia and New Zealand Conference, Auckland, 2007.
(59) Peetz, Collateral Damage.
(60) C. P. Lindsay and J. M. Pasqual, 'The Wounded Feminine: From Organisational Abuse to Personal Healing,' Business Horizons, March-April (1993): 35-41.
(61) Whitehouse, 239-40.
(62) Strachan et al.
(63) D. Peetz, A. Fox, K. Townsend and C. Allan, 'The Big Squeeze: Domestic Dimensions of Excessive Work Time and Pressure,' New Economies: New Industrial Relations? 18th Conference of the Australian Industrial Relations Academics in Australia and New Zealand, Noosa, 2004.
(64) Peetz, Collateral Damage.
(65) D. Peetz, 'Nearly the Year of Living Dangerously: In the Emerging Worlds of Australian Industrial Relations,' Asia Pacific Journal of Human Resources, 37.2 (1999), 17.
(66) Peetz, Collateral Damage.
(67) Standing Committee on Social Issues, Inquiry into the Impact of Commonwealth WorkChoices Legislation, Sydney: New South Wales Parliament Legislative Council, 2006, 73-74.
(68) Peetz, Collateral Damage.
(69) Standing Committee on Social Issues.
(70) D. Peetz, Brave New Workplace. Crows Nest: Allen and Unwin, 2006, 100.
(71) Fair Work Bill, House of Representatives, 2008, 266.
(72) Workplace Express, 'FWA says SACS workers paid less due to gender factors, but yet to make orders', 16 May 2011.
(73) Fair Work Australia Decision, Equal Remuneration Case, May 2011.
(74) Workplace Express.
(75) Strachan et al.
(76) M. Baird and S. Williamson, 'Women, Work and Industrial Relations in 2008,' Journal of Industrial Relations, 51.3 (2009): 331-46; J. Gillard, Foreword, Journal of Industrial Relations, 51.3 (2009), 283-84; Strachan et al.
(77) Baird and Williamson.
(78) Strachan et al.
(79) M. Baird, 'Women and Work in Australia: a theoretical and historical overview.' In Women at Work, ed. P. Murray, R. Kramar and P. McGraw, Melbourne: Tilde University Press, 2011.
(82) Fahey et al.
The Treaty of Versailles
Part VII - Penalties
The Allied and Associated Powers publicly arraign William II of Hohenzollern, formerly German Emperor, for a supreme offence against international morality and the sanctity of treaties. [...]
The German Government recognises the right of the Allied and Associated Powers to bring before military tribunals persons accused of having committed acts in violation of the laws and customs of war. Such persons shall, if found guilty, be sentenced to punishments laid down by law. This provision will apply notwithstanding any proceedings or prosecution before a tribunal in Germany or in the territory of her allies.
The German Government shall hand over to the Allied and Associated Powers, or to such one of them as shall so request, all persons accused of having committed an act in violation of the laws and customs of war, who are specified either by name or by the rank, office or employment which they held under the German authorities.
Persons guilty of criminal acts against the nationals of one of the Allied and Associated Powers will be brought before the military tribunals of that Power.
Persons guilty of criminal acts against the nationals of more than one of the Allied and Associated Powers will be brought before military tribunals composed of members of the military tribunals of the Powers concerned.
In every case the accused will be entitled to name his own counsel.
The German Government undertakes to furnish all documents and information of every kind, the production of which may be considered necessary to ensure the full knowledge of the incriminating acts, the discovery of offenders and the just appreciation of responsibility.
... Although the 853 names on the extradition list of February 1920 included leading figures of the old regime, the much reduced list presented by the Allies to the Reichsgericht in early 1921 focused on cases which typified their perceptions of German war crimes. The events of 1914 maintained a central place. Of the total of 45 cases, France submitted 11, Belgium 15, and Britain seven. Italy, Poland, Romania and Yugoslavia had 12 cases between them. British accusations concerned U-boat warfare and the alleged maltreatment of prisoners of war. The 15 Belgian cases covered three incidents, one each concerning the occupation and the maltreatment of prisoners, and one, the massacre at Andenne-Seilles with 262 civilian dead, which represented the invasion. The 11 French cases covered an incident of willful neglect in a German prisoner-of-war camp and three salient atrocities of 1914. These were the execution of captured French soldiers in Lorraine in August 1914 on the order of Major-General Stenger, the shooting of civilians at Jarny, and the destruction of Nomeny with the loss of 55 inhabitants.
Despite their limitations, the Leipzig proceedings presented the first opportunity to confront from both sides the events that lay behind some of the bitterest Allied accusations. The German state prosecutor, using Allied evidence and witnesses, would charge named individuals for particular acts which were deemed by the Allies to be war crimes. The accused would have the full backing of state-supplied evidence and German defence witnesses to justify their actions and reject the Allied legal definition of war crimes. Yet if a slim chance existed that wartime Allied atrocity allegations might be converted into a sustainable judicial process, the gulf separating the two sides was too wide and the legitimacy accorded the proceedings in Germany too tenuous for it to succeed.
Even bringing the cases to trial encountered deep German reservations. The German government came under pressure from German innocentist campaigners to oppose the entire process and publish a list of German counter-accusations of Allied war crimes. The government delayed initiating proceedings for as long as it could, despite growing Allied pressure. Matters came to a head in early May 1921, when the Allies threatened to occupy the Ruhr over German obstruction of reparations, disarmament, and war crimes trials. The centre-right Fehrenbach government resigned and its centre-left successor, under Centre Party leader Josef Wirth, began the trials speedily. The new policy was one of damage limitation, doing the minimum necessary to fulfil the peace terms and avoid sanctions (hence its refusal to publish the countercharges of Allied war crimes) while minimizing nationalist hostility.
This is not to say that the proceedings which opened on 23 May 1921 before the Criminal Senate of the Reichsgericht in Leipzig were a charade. The German judicial system enjoyed real independence from government, as centre-left politicians in Weimar learned. The court president, Dr Karl Schmidt, conducted the trials with punctilious fairness and courtesy towards both Allied witnesses and the top-level delegations from Britain, France, and Belgium which attended the prosecution of 'their' cases. More obvious bias came in the declared reluctance of the Reich Prosecutor, Dr Ludwig Ebermayer, who brought the Allied cases, to proceed against career officers and servicemen whom he considered to be the embodiment of patriotic duty. The Germans also enjoyed an important political discretion in deciding the legal admissibility of the cases and the order in which they would be brought. Finally, the atmosphere of the trials was unavoidably partisan. Apart from the Allied delegations and representatives of the world press, the courtroom was packed with a largely hostile German public. Opportunities were rife for demagogic speeches by the defence and expressions of hatred towards Allied witnesses and observers.
The four British cases (three of the seven having been abandoned owing to the absence of the accused) were the first to be heard. This primacy was almost certainly political. Ebermayer acted in a dilatory fashion in relation to the Belgian and French material (ignoring the cases relating to 1914), while he was quite keen to proceed in the British cases, especially those to do with naval warfare. This probably stemmed from the German view that the British set the greatest store by the Leipzig proceedings, that they would be the easiest to satisfy, and that once satisfied, the impetus for the trials would abate. The British cases, unlike the massacres, incendiarism, and deportations of 1914, were also relatively simple to settle without major discredit to the army.
Light sentences were given to three junior officers in a wartime prison camp for brutality towards British prisoners. However, a U-boat commander who had torpedoed a hospital ship, the Dover Castle, was found innocent on the grounds that he had merely obeyed German naval orders to sink hospital ships in designated zones in reprisal for the supposed Allied use of such vessels to transport war materials. The court did not query the orders as such. As if to compensate the British, the Reichsgericht subsequently initiated a case on its own account concerning the sinking of another hospital ship, the Llandovery Castle, by a different U-boat. Although the U-boat commander was safely in exile, the court found two subordinates guilty not for attacking the ship, though it was far outside the zones designated for this type of action, but for failing to act against their commander when he fired on the survivors in their lifeboats. None of the judgments in the British cases called the principles of German war conduct into question. They merely found individuals guilty of gratuitously offending against basic humanity.
Although German opinion was divided, with liberals welcoming the independence of the court and nationalists condemning the ten-year sentences given to the naval officers, British opinion was by and large satisfied. The English Solicitor-General, Sir Ernest Pollock, reported to the cabinet and to Parliament that the guilty verdicts offset the light sentences: 'For the first time in the history of the world, we have made a vanquished country try some of its own criminals and [...] the courts of the vanquished country have themselves, in a certain number of cases, already found some of their own nationals guilty of atrocities and sentenced them to terms which, if we think them inadequate, at any rate carry a severe stigma in their own country.'
A motion for a parliamentary debate on the trials was defeated and the government considered henceforth that the war crimes issue was closed, with any attempt to reopen it a danger to the normalization of relations with Germany.
By contrast, the Belgian and French governments reversed their reluctant acceptance of the Leipzig compromise in the light of the court's judgments. The first Belgian case concerned the alleged torture by a German officer in 1917 of Belgian boys aged nine to 12 years in order to extract confessions of sabotage. The court discredited the witnesses' evidence on the grounds of youth and dismissed the case. The Belgian delegation departed in outrage, declaring the proceedings 'a travesty of justice'. The remaining Belgian cases, including the Andenne massacre, thus fell by default.
The French hearings, which lasted from 29 June to 9 July 1921, opened with the trial of General Stenger on the charge of ordering the 58th Infantry Brigade (Sixth Army) to kill all French captive soldiers, including the wounded, during the fighting in French Lorraine in August 1914. Stenger was alleged to have given his order on 21 August and to have renewed it in writing on 26 August. In one important respect, the case was untypical of events in 1914. Stenger's order, if such it was, indisputably contravened the Geneva Convention of 1906 protecting the wounded, and the 1907 Hague Convention on Land Warfare regarding prisoners of war. As we have seen, it is probable that some wounded and captured Allied soldiers were finished off by Germans under intense pressure from the Schlieffen Plan timetable. Later, on the western front, the summary dispatch of helpless enemy captives was a crime not infrequently committed by soldiers of all armies. However, there is no evidence of a formal German policy to shoot prisoners in the fashion suggested by the French on the basis of the Stenger and other cases.
Nonetheless, Stenger was taken by French opinion to embody German military ruthlessness during the invasion, whereas in the eyes of many Germans, he was a decorated war hero who had lost a leg in a French artillery attack and appeared in court at Leipzig on crutches. Not surprisingly, the trial turned into a rallying point for outraged German nationalists and right-wing officers' associations. It was envenomed by the testimony of Stenger's subordinate, Captain (now Major) Crusius, who turned state's evidence and admitted to shooting French prisoners on Stenger's instruction. Furthermore, a number of German military witnesses were deemed to be traitors to the Fatherland. Indeed, when one Alsatian soldier (from IR 112) testified to having heard Stenger give the order to shoot prisoners, the general interrupted, shouting: 'It is a swindle! The witness is a lying Alsatian!'
In fact, the evidence against Stenger was hard to refute. In its victorious battle at Saarburg on 20 August the 58th Infantry Brigade had suffered severe losses. At twilight an unexpected counter-attack by enemy infantry had to be repulsed. The brigade, which had been fighting since the outbreak of war, resumed the pursuit of the retreating enemy early in the morning of 21 August. Before setting off, Stenger had a conversation with officers of the 1st Battalion of IR 112, among them the commander, Major Muller, who was killed on 26 August, and Captain Crusius, commander of the 3rd company.
Crusius testified that this exchange consisted of an order by Stenger to shoot all the wounded Frenchmen on the battlefield, where they had lain during the night. Crusius relayed this to his company as a brigade order. Stenger denied having issued such an order, admitting only that, when he heard that French wounded soldiers fired on the Germans from behind, he told his staff, 'such enemies should be shot dead on the spot.' This was not an order, he claimed, merely an expression of opinion. Since Crusius was on trial for shooting French prisoners and sought to exculpate himself by pleading superior orders, his evidence has to be treated with caution. Other officers supported Stenger's denial.
Many of the men in the ranks, however, especially in the 3rd Battalion of IR 112, distinctly recalled receiving a 'brigade order' to kill captive and wounded French soldiers, or hearing Stenger express this opinion. Witness Kaupp confirmed that Crusius had relayed the order. Kaupp's men were, he said, at first 'indignant' and he told them he understood that only those wounded soldiers should be shot who themselves fired. Witness Ernst said that the order 'not to take prisoners' was questioned by his sergeant but the response was 'Brigadebefehl' (brigade order). The sergeant banned his men from carrying it out, but while advancing across the parade ground, Ernst heard Major Muller give the order to shoot Frenchmen lying in a hollow. Even more importantly, one-year volunteer Schmerber heard Major Muller say to four officers and Crusius 'Brigade order: all wounded soldiers and other individual [French] soldiers are not to be taken captive, but shot.' The officers were disquieted by this, but Muller told them: 'it was a necessary measure for Major-General Stenger had found that French troops fired treacherously at the Germans. Also no manpower was available for the transport of the prisoners.' Egged on by Muller and Crusius with the words 'Don't you know the brigade order yet?', soldiers then killed about 20 wounded Frenchmen.
Together with evidence from captured German war diaries, which the witnesses were unable to disown, there was sufficient evidence to show that some form of verbal order was issued by Stenger on 21 August. The French, moreover, could prove that a written order was issued on 26 August at the battle of Thiaville, notably from an inquiry conducted in 1915 among 16 prisoners of IR 112 and 142, as well as from the deposition already referred to by an Alsatian medical officer in IR 112. The recently published diary of the Alsatian recruit, Dominik Richert, confirms the second Stenger order. In fact, Richert intervened in one instance to prevent the killing of captured Frenchmen.
Nonetheless, the court declared there was no case against General Stenger. Instead, it found his subordinate, Crusius, guilty of manslaughter and sentenced him to two and a half years' imprisonment. Stenger emerged from the court carrying flowers from his admirers. He received so many letters 'congratulating him on his order to give no quarter' that he placed an advertisement in the press to express his gratitude. After his own trial, he was active in the campaign to support the accused in further war crimes trials, sending out circulars to collect money. As late as April 1922 he anticipated a renewed call by the Entente for extradition, which he fully expected the German government would refuse. In this, he wrote, it needed the support of the entire people: 'The question of the alleged war criminals is not a party question, for the 890 persons on the extradition list come from all classes of our people. It also offends the sense of justice of everyone that only Germany has to conduct such trials, while the notorious war criminals of the Entente, about whom we have abundant prosecution evidence, escape scot-free. Our people in arms in the years 1914-18 is owed a self-evident debt of gratitude, by ensuring that its glorious deeds are kept clear of the dirt of slander.'
Stenger's acquittal outraged the French. On 8 July their delegation was recalled from Leipzig, never to return, leaving the cases of Jarny and Nomeny unheard. The Avocat General, Matter, who led the official team, noted that inadequate French preparation (especially in relation to foolproof witness statements) had combined with German bias and obstruction to defeat the case against Stenger. In his report to the government, he underlined the hostility displayed towards the Alsatian witnesses, who were constantly interrupted and questioned on their family origins and wartime conduct, and whose evidence counted for nothing against a Prussian general. Overall, Matter came to the opposite conclusion to his English counterpart, Sir Ernest Pollock. He told the government that the Germans had not fulfilled the undertaking made to the Allies in 1921 that when holding war crimes trials on German soil, they would 'rise above [their] own feelings [and] dominate [their] national prejudices', and he advised against further participation. The French reproached the British for accepting the Leipzig verdicts, and the Prime Minister, Briand, denounced the 'parody' of the trials to parliament.
The Leipzig proceedings failed both as a war crimes tribunal and as a means of resolving opposed national views of the events of 1914. If Leipzig was victors' justice, it was on the terms of the vanquished, satisfying no one except the British government, which had decided to move beyond the whole business. Briand, meanwhile, had begun to seek a more conciliatory approach towards Germany, and he recommended to an inter-Allied meeting in August 1921 that the issue of the trials should be referred to a body of 'high legal authorities' so that 'time would [...] be gained and public opinion would have a chance to die down'. This struck a chord with the British government, which readily agreed, and with the Wirth government in Germany. But French public and press opinion did not die down. In response to the Leipzig trials, Briand was lobbied in the summer and autumn of 1921 by outraged French veterans' and ex-prisoner-of-war associations demanding renewed action against German war criminals; in December he received a similar petition from the Ligue Souvenez-Vous.
He summoned a meeting of the legal experts of the Inter-Allied Commission on Leipzig on 6-7 January 1922.
The outcome of this meeting destroyed any idea that the war crimes issue would simply fade away. The Inter-Allied Commission condemned the Leipzig trials as 'highly unsatisfactory' and recommended that the Allies should resume the demand for extradition. This idea was bluntly rejected by the Wirth government, supported by general press hostility and nationalist demonstrations. In March 1922, President Ebert visited the centenary memorial to the 1813 Battle of Leipzig and publicly defended the work of the Supreme Court against 'official foreign criticism'. At the same time, a new French government took office under the wartime president of the Republic, Raymond Poincare, who was intent on pursuing a tougher policy towards Germany on the fulfilment of the peace treaty. Poincare's immediate goal was tightening sanctions in order to ensure the delivery of reparations. German refusal to comply with Articles 228-30 provided him with additional leverage on the former Allies for action against Germany. Like Briand, Poincare was also subject to strong public resentment at German behaviour. He, too, received a petition from the Ligue Souvenez-Vous. Moreover, as a Lorrainer who had personally and officially shared the outrage at 'German atrocities' during the invasion, he felt strongly on the issue. For these reasons, he urged the British to act on the recommendations of the Inter-Allied Commission on the Leipzig Trials and renew the demand for extradition.
The British rejected the proposal outright when the French tabled it at a conference of ambassadors on 26 July 1922. In a compromise solution which barely disguised the fact that co-operation between the former Allies on the war crimes issue had broken down, a note was issued to the German government stating that the Allied governments condemned the Leipzig trials and reserved the right to pursue the full implementation of Articles 228-30. An additional note reserved the right of prosecution in absentia. In reality, the lack of political will to pursue extradition, especially on the part of the British, meant that the long German campaign to frustrate the prosecution of war criminals had succeeded.
Yet the issue did not quite end there. Following Leipzig, the French proceeded with trials of German war criminals by court-martial in absentia. The aborted hearing of the case of Nomeny, for example, was followed by the court-martial trial of the accused in Nancy, a decision announced by the Minister of Justice at Nomeny as he awarded the Croix de guerre to the town in September 1921. Poincare expanded the process in April 1922 to include all 2,000 Germans on the original French list. This now became the principal judicial means of dealing with wartime German atrocities, and by December 1924 more than 1,200 Germans had been found guilty. The centre-left government which came to office in June 1924, under Edouard Herriot, did not stop the trials, but Herriot ordered the Ministries of Justice and War to exercise the 'greatest discretion [...] vis-à-vis the press', suggesting that the public mood had begun to change. The Belgians likewise conducted a substantial number of prosecutions by courts-martial not only of cases on the extradition list but of other German soldiers.
The Reichsgericht, for its part, worked its way through all 855 accused whose names had been made public in February 1920. Its purpose was to exonerate the officers and men who were being condemned in their absence by the French and Belgians. Neither the French nor the Belgian authorities informed the German government of their courts-martial judgments, so the German embassies in Paris and Brussels relied on the daily newspapers in order to report the names of those found guilty to the Foreign and Defence Ministries. The Reich prosecutor then instituted proceedings against them, and if, as was usually the case, these ended with a non-prosecution or an acquittal, the decision was publicized in the German, and if possible, international press. Those condemned by a foreign court who wished to make a statement in the newspapers were given official help to do so. The issue remained one of military honour, with all that this implied for the political rehabilitation of the German army.
The conflicting judgments of the Allied courts-martial and the Reichsgericht sprang from continuing differences in interpretation of the laws of war combined with the German belief in a People's War conducted by francs-tireurs. Three examples illustrate the distinction. In October 1924, a court-martial of the French 20th Army Corps in Nancy tried a number of senior Bavarian officers in absentia for the mass execution at Gerbeviller, imposing death sentences on them. The official German story, published in the Reichsarchiv history in 1925, was that the 51st Infantry Division had been involved in bitter fighting in which it faced a well prepared franc-tireur attack. The Reichsgericht found that the German soldiers, encountering 'treacherous' franc-tireur resistance, behaved lawfully.
The swathe of destruction cut by the Third Army through Namur province and north-eastern France was likewise the subject of opposed judgments. The Belgian and French governments accused seven generals of responsibility for the destruction of Dinant, Rethel, and elsewhere. Furthermore, on 9 May 1925 a Belgian court-martial at Dinant sentenced a number of German officers in their absence for the killings in August 1914. The Leipzig court dismissed all these charges in November-December 1925. One of the seven generals was Johannes Meister, who (as colonel) commanded Grenadier Regiment 101 in August 1914, and who had been charged by the Belgians on 19 June 1922 with the 'systematically inhumane conduct of his troops from 19 to 27 August', and in particular with ordering the execution of a number of civilians at Les Rivages (Dinant) on 25 August. The Reichsgericht noted that this event was the subject of one German internal investigation in 1915 (the inquiry for the White Book) and another in 1920. The court repeated that the troops were fired on by civilians, even women and children. Although some witnesses confirmed that civilians, including women and children, were killed as hostages, the court found 'no evidence to show the execution was unlawful. Nor could it be proved that the defendant issued an order to shoot the civilians.' The court likewise found in relation to the six other generals that no acts 'punishable in German law' had been committed.
In January 1925 the court-martial of Liege and Luxembourg provinces condemned Colonel von Hedernann and General Thessmar to death in absentia for the collective execution at Arlon, on 26 August 1914. Even Berthold Widmann, the Foreign Ministry's consultant lawyer, admitted that in this instance the Reichsgericht investigation had not been able to find any justification of these executions, or evidence of franc-tireur acts in Rossignol. Yet the court decreed that it was 'not improbable' that the executed Belgians might have been 'partly' guilty of treacherous or illegal acts against German troops, and it placed the accused 'outside prosecution'.
The trial by French and Belgian courts-martial of alleged war criminals and the parallel hearings of the Reichsgericht show just how irreconcilable the two sides remained over the events of 1914. The issue poisoned relations between the former belligerent powers into the mid-1920s, not least because Germans condemned by the French and Belgian tribunals were liable to arrest if they set foot in those countries, and thus found their ability to travel humiliatingly curtailed. Only in the second half of the 1920s were these sanctions dropped by the French and Belgian authorities and the courts-martial abandoned. Out of 907 cases heard on the basis of the Allied extradition list, nine ended with judgments - five acquittals and four convictions of subaltern soldiers. For most of the remainder, the Reichsgericht had decided by 1925 - on the basis of preliminary proceedings at which the accused did not have to appear - that there was no case to answer. The court even reversed the guilty verdicts in the Llandovery Castle case. The activities of the Reichsgericht relating to war crimes continued until the Nazis, on coming to power, ended them.
by John Horne and Alan Kramer
Benthic ecosystems and environmental change
Marine ecosystems are highly valuable to human societies, through the provision of ecological goods and services. They are, however, changing rapidly in the face of multiple concurrent stressors, such as seawater warming, ocean acidification, nutrient and pollution input, over-fishing and the spread of non-native species. Understanding how life in the sea is responding to global change is critical if we are to effectively manage and plan for further change and, ultimately, conserve precious living marine resources for future generations.
We are a young but steadily growing research group based at the Marine Biological Association of the United Kingdom. We use a combination of traditional ecological techniques and innovative experimental approaches (lab and field) to better understand how life in the sea is changing, and to predict how it is likely to change in the future.
We try to do two things. First, we collaborate. We are ‘bucket and spade’ ecologists but we collaborate with microbiologists, virologists, physiologists, oceanographers and climate scientists. We also collaborate with like-minded researchers across Europe, Australia and elsewhere. We collaborate because complex systems and problems require complex approaches to better understand them. Second, we observe. Our team and our wider network of collaborators have spent countless days working in shallow-water marine ecosystems in recent years. We do this because appreciating spatial and temporal variability in ecological pattern and process is the first step to understanding human impacts on natural systems.
Kelp forest ecology
Kelp forests dominate shallow rocky habitats across much of the world’s temperate coastline. As foundation species, kelps support high levels of primary productivity, magnified secondary productivity, and provide habitat for highly diverse associated assemblages.
Kelp forests also serve as habitat and nursery grounds for socioeconomically important species of fish, crustaceans and molluscs. However, the structure and extent of kelp forests are affected by environmental change factors, including ocean warming, extreme climatic events and the spread of non-native species.
A key research focus is to address pressing knowledge gaps in our fundamental understanding of the ecological structure and functioning of these critical marine habitats, in order to better predict how kelp populations and communities will respond to current and future environmental changes.
Ecosystem responses to ocean warming and marine heatwaves
The global ocean has warmed significantly in recent decades. As a result, the geographical ranges of many marine species have shifted, generally polewards, with unexpected consequences for the structure and functioning of marine ecosystems.
Understanding how gradual warming trends and extreme climatic events affect marine biodiversity is a major focus of the research group. To achieve this, we are examining the physical drivers of ocean warming and quantifying regional and global patterns in warming trends and the occurrence of marine heatwaves.
We are also assessing the impacts of warming trends and events on marine populations, communities and ecosystems. This research will improve the wider understanding and appreciation of the effects of ocean warming on marine biodiversity.
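To give a concrete sense of what quantifying marine heatwave occurrence can involve, the short Python sketch below follows the widely used definition of Hobday and colleagues (cited in the publication list below): a marine heatwave is a spell of at least five consecutive days on which temperature exceeds a seasonally varying 90th-percentile threshold. This is an illustrative sketch only, not the group's own analysis code; the function name, the synthetic input data and the fixed offset used as a stand-in for the percentile threshold are assumptions made purely for demonstration.

```python
import numpy as np


def detect_marine_heatwaves(sst, climatology, threshold, min_duration=5):
    """Return marine heatwave events in a daily temperature series.

    Following the Hobday et al. definition, an event is a run of at least
    `min_duration` consecutive days on which `sst` exceeds the seasonally
    varying 90th-percentile `threshold`. Each event is reported as
    (start_index, end_index, max_intensity), where intensity is the anomaly
    relative to the climatological mean.
    """
    exceed = sst > threshold                              # days above the threshold
    events = []
    start = None
    for i, hot in enumerate(np.append(exceed, False)):    # sentinel closes a trailing run
        if hot and start is None:
            start = i                                     # a candidate event begins
        elif not hot and start is not None:
            if i - start >= min_duration:                 # keep runs of >= min_duration days
                anomaly = sst[start:i] - climatology[start:i]
                events.append((start, i - 1, float(anomaly.max())))
            start = None
    return events


if __name__ == "__main__":
    # Synthetic one-year example: a seasonal cycle plus noise, with a 10-day warm spike.
    rng = np.random.default_rng(42)
    days = np.arange(365)
    climatology = 12.0 + 4.0 * np.sin(2 * np.pi * (days - 120) / 365)
    threshold = climatology + 1.5                         # stand-in for a daily 90th-percentile threshold
    sst = climatology + rng.normal(0.0, 0.5, days.size)
    sst[200:210] += 3.0                                   # impose a heatwave-like anomaly
    for start, end, intensity in detect_marine_heatwaves(sst, climatology, threshold):
        print(f"Marine heatwave: days {start}-{end}, max intensity {intensity:.1f} C")
```

In a full analysis the climatology and threshold would themselves be computed from a multi-decadal baseline (for example, using a window around each calendar day) and short gaps between exceedances would be bridged, but the core detection step is the simple run-length test shown here.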
Impacts of non-native species
The spread of non-native species in marine environments poses a major threat to biodiversity and ecosystem structure and function, and costs the global economy billions of pounds each year. Some marine organisms, including some seaweeds and sea squirts, which can be translocated on ships' hulls or in ballast water, are potent global invaders that affect aquaculture, contribute to biofouling and threaten local biodiversity.
Understanding how the ecological performance of native versus non-native species will be influenced by environmental change is critically important. We are using novel experimental techniques to examine how warming influences both native and non-native species (e.g. ascidians, bryozoans, seaweeds) attached to submerged hard surfaces (e.g. rocky reefs, pylons, jetties).
Furthermore, we are examining how non-native species, such as the Japanese kelp Undaria pinnatifida ('Wakame'), may spread from artificial habitats (e.g. ports and marinas) into natural habitats (e.g. rocky reefs) and invade native assemblages. The impacts of Wakame invasions on native plants and animals are poorly understood, especially in Europe, and we aim to better understand its impacts in southern England through field and laboratory experiments.
Extreme Climatic Events in Marine Ecosystems
This multi-faceted project focusses on how short-term extreme climatic events, such as marine heatwaves and anomalous storm and freshening events, impact coastal marine ecosystems. The research will examine (1) trends in the frequency and magnitude of these events; (2) the impacts of...
How will ocean warming and changes in storminess and turbidity affect the structure, productivity and resilience of UK kelp forests?
This large-scale field-based project will examine the structure and functioning of UK kelp forests along existing natural gradients in temperature, wave exposure and turbidity. The project employs scientific diving as a tool to access highly productive and diverse kelp forests on shallow...
The influence of multiple global change stressors on marine communities: a novel field approach
Understanding the global impacts and implications of range-shifting species in marine systems
This large, multi-national project aims to better understand the wider implications of species' range shifts in marine ecosystems. Many marine species have shifted their geographical distributions, predominantly poleward, in response to recent oceanic warming. The wider impacts of the...
Peer-reviewed journal articles
79. Epstein, G., Foggo, A., Smale, D. A. (2019) Inconspicuous impacts: widespread marine invader causes subtle but significant changes in native macroalgal assemblages. Ecosphere (Accepted 11/06/2019)
78. Epstein, G., Hawkins, S. J., Smale, D. A. (2019) Identifying niche and fitness dissimilarities in invaded marine macroalgal canopies within the context of contemporary coexistence theory. Scientific Reports 9: 8816
77. Holbrook, N. J., Scannell, H. A., Sen Gupta, A., Benthuysen, J. A., Feng, M., Oliver, E. C. J., Alexander, L., Burrows, M. T., Donat, M. G., Hobday, A., Moore, P. J., Perkins-Kirkpatrick, S. E., Smale, D. A., Straub, S. C., Wernberg, T. (2019) A global assessment of marine heatwaves and their drivers. Nature Communications 10: 2624
76. Smale, D. A., Epstein, G., Parry, M., Attrill, M. A. (2019) Spatiotemporal variability in the structure of seagrass meadows and associated macrofaunal assemblages in southwest England (UK): using citizen science to benchmark ecological pattern. Ecology & Evolution 9: 3958-3972
75. Smale, D. A., Wernberg, T., Oliver, E. C. J., Thomsen, M., Harvey, B. P., Straub, S. C., Burrows, M. T., Alexander, L. V., Benthuysen, J. A., Donat, M. G., Feng, M., Hobday, A. J., Holbrook, N. J., Perkins-Kirkpatrick, S. E., Scannell, H. A., Sen Gupta, A., Payne, B., Moore, P. J. (2019) Marine heatwaves threaten global biodiversity and the provision of ecosystem services. Nature Climate Change 9: 306-312
74. King, N., Wilcockson, D., Hoelters, L., Moore, P., McKeown, N., Groves, E., Smale, D. A., Stamp, T. (2019) Evidence for different thermal ecotypes in range centre and trailing edge kelp populations. Journal of Experimental Marine Biology and Ecology 515:10-17
73. Pessarrodona, A., Foggo, A., Smale, D. A. (2019) Can ecosystem functioning be maintained despite climate-driven shifts in species composition? Insights from novel marine forests. Journal of Ecology 107: 91-104
72. Bates, A., Helmuth, B., Burrows, M. T., Duncan, M. I., Garrabou, J., Guy-Haim, T., Lima, F., Queiros, A. M., Seabra, R., Marsh, R., Belmaker, J., Bensoussan, N., Dong, Y., Mazaris, A. D., Smale, D. A., Wahl, M., Rilov, G. (2018) Biologists ignore ocean weather at their peril. Nature 560: 299-301
71. Pessarrodona, A., Foggo, A., Smale, D. A (2018) Can ecosystem functioning be maintained despite climate-driven shifts in species composition? Insights from novel marine forests. Journal of Ecology 107: 91-104
70. Hereward, H., Foggo, A., Hinckley, S., Greenwood, J. Smale, D. A. (2018) Seasonal variability in the population structure of a habitat-forming kelp and a conspicuous gastropod grazer: Do blue-rayed limpets (Patella pellucida) exert top-down pressure on Laminaria digitata populations? Journal of Experimental Marine Biology and Ecology 506: 171-181
69. Epstein, G. Hawkins, S. J., Smale, D. A. (2018) Removal treatments alter the recruitment dynamics of a global marine invader - Implications for management feasibility. Marine Environmental Research 140: 322-331
68. Teagle, H. Moore, P. J., Jenkins, H., Smale, D. A. (2018) Spatial variability in the diversity and structure of faunal assemblages associated with kelp holdfasts (Laminaria hyperborea) in the northeast Atlantic. PLoS ONE 13 (7) e0200411
67. Brewin, R. J. W., Smale, D. A., Moore, P. J., Dall’Olmo, G., Miller, P. I., Taylor, B. H., Smyth, T. J., Fishwick, J. R., Yang, M. (2018) Evaluating operational AVHRR Sea Surface Temperature data at the coastline using benthic temperature loggers. Remote Sensing 10 (6): 1-23
66. Pessarrodona, A., Moore, P. J., Sayer, M., Smale, D. A. (2018) Carbon assimilation and transfer through kelp forests in the NE Atlantic is diminished under a warmer ocean climate. Global Change Biology 24: 4386-4398
65. Hobday, A. J., Oliver, E. C. J., Sen Gupta, A., Benthuysen, J. A., Burrows, M. T. Donat, M. G., Holbrook, N., Moore, P. J., Thomsen, M. S., Wernberg, T., Smale, D. A. (2018) Categorizing and naming marine heatwaves. Oceanography 31: 162-173
64. Teagle, H. & Smale, D. A. (2018) Climate-driven substitution of habitat-forming species leads to reduced biodiversity within a temperate marine community. Diversity and Distributions 24: 1367-1380
63. Oliver E., Donat M.G., Burrows M.T., Moore P.J., Smale D.A., Alexander L.V., Benthuysen J.A., Feng M., Sen Gupta A., Hobday A.J., et al. (2018) Ocean warming brings longer and more frequent marine heatwaves. Nature Communications 9:1324.
62. Smale, D. A., Moore, P. J., Queirós, A. M., Higgs, N. D., Burrows, M. T. (2018) Appreciating interconnectivity between habitats is key to Blue Carbon management. Frontiers in Ecology and the Environment 16 (2): 71-73
61. Epstein, G. & Smale, D. A. (2017) Environmental and ecological factors influencing the spillover of the non-native kelp, Undaria pinnatifida, from marinas into natural rocky reef communities. Biological Invasions 20 (4): 1049-1072
60. King, N., McKeown, N. J., Smale, D. A., Moore, P. J. (2017) The importance of phenotypic plasticity and local adaptation in driving intraspecific variability in thermal niches of marine macrophytes. Ecography (online early)
59. King, N., Wilcockson, D., Smale, D. A., Webster, R., Hoelters, L., Moore, P. J. (2017) Cumulative stress restricts niche filling potential of habitat-forming kelps in a future climate. Functional Ecology 32 (2): 288-299
58. Epstein, G. & Smale, D. A. (2017) Undaria pinnatifida: a case study to highlight challenges in marine invasion ecology and management. Ecology and Evolution 7: 8624–8642
57. Smale, D. A., Taylor, J. D., Coombs, S. H., Moore, G., Cunliffe, M. (2017) Community responses to seawater warming are conserved across diverse biological groupings and taxonomic resolutions. Proceedings of the Royal Society B: Biological Sciences 284: 20170534.
56. De Leij, R., Epstein, G., Brown, M., Smale, D. A. (2017) The influence of native macroalgal canopies on the distribution and abundance of the non-native kelp Undaria pinnatifida in natural reef habitats. Marine Biology 164: 156-163
55. Simpson, T. J. S., Smale, D. A., Justin I. McDonald, J. I., Wernberg, T. (2017) Large scale variability in the structure of sessile invertebrate assemblages in artificial habitats reveals the importance of local-scale processes. Journal of Experimental Marine Biology and Ecology 494:10-19
54. Smale, D. A., Wernberg, T., Vanderklift, M. (2017) Regional-scale variability in the response of benthic macroinvertebrate assemblages to a marine heatwave. Marine Ecology Progress Series 568: 17-30
53. Joint, I. & Smale, D. A. (2017) Marine heatwaves and optimal temperatures for microbial assemblage activity. FEMS Microbiology Ecology 93 (2): fiw243
52. Hargrave, M., Foggo, A., Pessarrodona, A., Smale, D. A. (2017) The effects of warming on the ecophysiology of two co-existing kelp species with contrasting distributions. Oecologia 183: 531-543
51. Smale, D. A. & Moore, P. J. (2017) Variability in kelp forest structure along a latitudinal gradient in ocean temperature. Journal of Experimental Marine Biology and Ecology 486: 255–264 Abstract
50. Wernberg, T.,Bennett, S., Babcock, R.C.,de Bettignies, T., Cure, K., Depczynski, M., Dufois, F., Fromont, J., Fulton, C.J., Hovey, R.K., Harvey, E.S., Holmes, T.H., Kendrick, G.A., Radford, B., Santana-Garcon, J., Saunders, B.J., Smale, D. A., Thomsen, M.S., Tuckett, C.A., Tuya, F., Vanderklift, M.A., Wilson, S.K. (2016) Climate driven regime shift of a temperate marine ecosystem. Science 353 (6295) 169-172 Abstract
49. Teagle, H., Hawkins, S. J., Moore, P. J., Smale, D. A. (2016) The role of kelp species as biogenic habitat formers in coastal marine ecosystems. Journal of Experimental Marine Biology and Ecologyaccepted
48. Hobday, A. J., Alexander, L. V., Perkins, S. E., Smale, D. A., Straub, S. C., Oliver, E. C., Benthuysen, J., Burrows, M. T., Donat, M. G., Feng, M., Holbrook, N. J., Moore, P. J., Scannell, H. A., Sen Gupta, A., Wernberg, T. (2016) A hierarchical approach to defining marine heatwaves. Progress in Oceanography 141: 227-238 Abstract
47. Smale, D. A., Michael T. Burrows, Ally J. Evans, Nathan King, Martin D. J. Sayer, Anna L. E. Yunnie, Pippa J. Moore (2016) Linking environmental variables with regional-scale variability in ecological structure and carbon storage function of kelp forests in the United Kingdom. Marine Ecology Progress Series 542:79-95 Abstract
46. Arnold, M., Teagle, H., Brown, M., Smale, D. A. (2016) The structure of biogenic habitat and epibiotic assemblages associated with the global invasive kelp Undaria pinnatifida in comparison to native macroalgae. Biological Invasions 18: 661-676 Abstract
45. Smale, D. A. and Vance, T. (2015) Climate-driven shifts in species’ distributions may exacerbate the impacts of storm disturbances on northeast Atlantic kelp forests. Marine and Freshwater Research 67:65-74 Abstract
44. Sunday J.M., Pecl G.T., Frusher S., Hobday A.J., Hill N.A., Holbrook N.J., Edgar G.J., Stuart-Smith R.D., Barrett N.S., Wernberg T., Watson R.A., Smale, D. A., Fulton E.A., Slawinski D., Feng M., Radford B.T. and Bates A.E. (2015) Species traits and climate velocity explain geographic range shifts in an ocean warming hotspot. Ecology Letters 18: 944-953 Abstract
43. Smale, D. A., Vance, T., Yunnie, A. L. E., Widdicombe, S. (2015). Disentangling the impacts of heat wave magnitude, duration and timing on the structure and diversity of sessile marine assemblages. PeerJ, 3 e863 Article
42. Marzinelli E.M., Williams S.B., Babcock R.C., Barrett N.S., Johnson C.R., Jordan A., Kendrick G.A., Pizarro O.R., Smale, D. A. and Steinberg P.D. (2015) Large-scale geographic variation in distribution and abundance of Australian deep-water kelp forests. PLoS ONE, 10, e0118390 Article
41. Bates, A. E., Bird, T. J., Stuart-Smith, R. D., Wernberg, T., Sunday, J. M., Barrett, N. S., Edgar, G. J., Frusher, S., Hobday, A. J., Pecl, G. T., Smale, D. A., McCarthy, M. (2015) Distinguishing geographical range shifts from artefacts of detectability and sampling effort. Diversity and Distributions. Early view Abstract
40. Smale, D. A., Wernberg, T., Yunnie, A. & Vance, T., (2014) The rise of Laminaria ochroleuca in the Western English Channel (UK) and comparisons with its competitor and assemblage dominant Laminaria hyperborea. Marine Ecology. Early view Abstract
39. Verges, A., Steinberg, P. D., Hay, M. E., Poore, A. G. B., Campbell, A. H., Ballesteros, E., Heck Jr., K. L., Booth, D. J., Coleman, M. A., Feary, D. A., Figueira, W., Langlois, T., Marzinelli, E. M., Mizerek, T., Mumby, P. J., Nakamura, Y., Roughan, M., van Sebille, E., Sen Gupta, A., Smale, D. A., Tomas, F., Wernberg, T., & Wilson, S. (2014) The tropicalisation of temperate marine ecosystems: climate-mediated changes in herbivory and community phase shifts. Proceedings of the Royal Society B. 281: 20140846 Abstract
38. Brodie J., Williamson C.J., Smale D.A., Kamenos N.A., Mieszkowska N., Santos R., Cunliffe M., Steinke M., Yesson C., Anderson K.M., Asnaghi V., Brownlee C., Burdett H.L., Burrows M.T., Collins S., Donohue P.J.C., Harvey B., Foggo A., Noisette F., Nunes J., Ragazzola F., Raven J.A., Schmidt D.N., Suggett D., Teichberg M. & Hall-Spencer J.M. (2014). The future of the northeast Atlantic benthic flora in a high CO2 world. Ecology and Evolution, 4: 2787-2798 Full text
37. Bates, A. E., Pecl, G. T., Frusher S., Hobday, A. J., Wernberg, T. Smale, D. A., Sunday, J. M., Hill, N., Dulvy, N. K., Colwell, R. K., Holbrook, N., Fulton, E. A., Dirk Slawinski, D., Feng, M., Edgar, G. J., Radford, B. T., Thompson, P. A., Watson, R. A. (2014) Defining and observing climate-mediated range shifts in marine systems. Global Environmental Change, 26: 27-38 Abstract
36. Foster, S., Smale, D. A., How, J. de Lestang, S., Brearley, A., Kendrick, G. A. (2014) Regional-scale patterns of mobile invertebrate assemblage structure on artificial habitats off Western Australia. Journal of Experimental Marine Biology and Ecology 453: 43-53 Abstract
35. Smale D. A. and Wernberg T. (2014) Population structure of the purple sea urchin Heliocidaris erythrogramma along a latitudinal gradient in southwest Australia. Journal of the Marine Biological Association of the United Kingdom, 94: 1033-1040 Abstract
34. Azzarello, J., Smale, D. A., Langlois, T., Hansson, E. (2014) Linking habitat characteristics to abundance patterns of canopy-forming macroalgae and sea urchins in southwest Australia. Marine Biology Research, 10: 682-693 Abstract
33. Smale, D. A., Burrows, M., Moore, P., O’Connor, N. and Hawkins, S. (2013) Threats and knowledge gaps for ecosystem services provided by kelp forests: a northeast Atlantic perspective. Ecology and Evolution, 3: 4016–4038 Abstract Text
32. Smale, D. A. (2013) Multi-scale patterns of spatial variability in sessile assemblage structure do not alter predictably with development time. Marine Ecology Progress Series, 482: 29-41 Abstract
31. Smale, D. A. & Wernberg, T. (2013) Extreme climatic event drives range contraction of a habitat-forming species. Proceedings of the Royal Society B, 280: 20122829 Abstract
30. Wernberg, T., Smale, D. A., Tuya, F. et al (2013) An extreme climatic event alters marine ecosystem structure in a global biodiversity hotspot. Nature Climate Change, 3: 78-82 Abstract
29. Smale, D. A. (2012) Spatial variability in sessile assemblage development in subtidal habitats off southwest Australia (southeast Indian Ocean). Journal of Experimental Marine Biology and Ecology, 438: 76-83 Abstract
28. Smale, D.A., Kendrick, G.A., Harvey, E.S. et al (2012) Regional-scale benthic monitoring for Ecosystem-Based Fisheries Management (EBFM) using an Autonomous Underwater Vehicle (AUV). ICES Journal of Marine Science, 69: 1108-1118 Abstract
26. Smale, D.A. and Wernberg, T (2012) Short-term in situ warming influences early development of sessile assemblages. Marine Ecology Progress Series 453: 129-136 Abstract
25. Wernberg, T., Smale, D. A. and Thomsen, M. (2012) A decade of climate change experiments on marine organisms: procedures, patterns and problems. Global Change Biology 18: 1491–1498 Abstract
24. Smale, D. A and Childs, S. (2012) The occurrence of a widespread marine invader, Didemnum perlucidum (Tunicata, Ascidiacea) in Western Australia. Biological Invasions 14: 1325-1330 Abstract
23. Smale, D.A. and Wernberg, T (2012) Ecological observations associated with an anomalous warming event at the Houtman Abrolhos Islands, Western Australia. Coral Reefs 31: 441 Abstract
22. Jamieson, R.E., Rogers, A.D., Billett, D.S.M., Smale, D.A. and Pearce, D.A. (2012) Patterns of marine bacterioplankton biodiversity in the surface waters of the Scotia Arc, Southern Ocean. FEMS Microbiology Ecology 80:452-468 Abstract
21. Smale D. A., Barnes, D. K. A., Barnes R. S. K., Smith D. J., Suggett, D. J. (2012) Spatial variability in the structure of intertidal crab and gastropod assemblages within the Seychelles Archipelago (Indian Ocean). Journal of Sea Research 69: 8-15 Abstract
20. Smale, D. A., Wernberg, T., Vance, T. (2011) Community development on temperate subtidal reefs: the influences of wave energy and the stochastic recruitment of a dominant kelp. Marine Biology. 158:1757–1766 Abstract
19. Smale, D. A., Wernberg, T., Peck, L. S. and Barnes, D. K. A. (2011) Turning on the heat: ecological response to simulated warming in the sea. PLoS One 6: e16050 1-4 Full text
18. Smale, D. A., Kendrick, G. A., and Wernberg, T. (2011) Subtidal macroalgal richness, diversity and turnover, at multiple spatial scales, along the southwestern Australian coastline. Estuarine, Coastal and Shelf Science 91: 224-231 Abstract
17. Wernberg, T., Russell, B. D., Moore, P. J., Ling, S. D., Smale, D. A., Campbell, A., Coleman, M., Steinberg, P. D., Kendrick, G. A., Connell, S. D. (2011) Impacts of climate change in a global hotspot for temperate marine biodiversity and ocean warming. Journal of Experimental Marine Biology and Ecology400: 7-16 Abstract
16. Smale, D. A., Langlois, T., Kendrick, G. A., Meeuwig, J. Harvey, E. S. (2011) From fronds to fish: the use of indicators for ecological monitoring in marine benthic ecosystems, with case studies from temperate Western Australia. Reviews in Fish Biology and Fisheries 21: 311-337 Abstract
15. Smale, D. A. (2010) Monitoring marine macroalgae: the influence of spatial scale on the usefulness of biodiversity surrogates. Diversity and Distributions 16: 985-995 Abstract
14. Smale, D. A., Kendrick, G. A., and Wernberg, T. (2010) Assemblage turnover and taxonomic sufficiency of subtidal macroalgae at multiple spatial scales. Journal of Experimental Marine Biology and Ecology 384: 76-86 Abstract
13. Smale, D. A., Kendrick, G. A., Waddington, K. I. Van Niel, K. P., Meeuwig, J. J. and Harvey, E. S. (2010) Benthic assemblage composition on subtidal reefs along a latitudinal gradient in Western Australia.Estuarine, Coastal and Shelf Science 86: 83-92 Abstract
12. Smale D. A. and Wernberg T. (2009) Satellite-derived SST data as a proxy for water temperature in nearshore benthic ecology. Marine Ecology- Progress Series 387: 27-37 Abstract
11. Smale, D. A., Brown, K. M., Barnes, D. K. A., Fraser, K. P. P., Clarke, A. (2008) Ice scour disturbance in Antarctic shallow waters. Science 321: 371 Abstract
10. Smale, D. A. (2008) Ecological traits of benthic assemblages in shallow Antarctic waters: does ice scour disturbance select for small, mobile, scavengers with high dispersal potential? Polar Biology 31: 1225-123 Abstract
9. Smale, D. A. and Barnes, D. K. A. (2008) Likely responses of the Antarctic benthos to climate related changes in physical disturbance during the 21st Century, based primarily on evidence from the West Antarctic Peninsula region. Ecography 31: 289-305 Abstract
8. Smale, D.A., Barnes, D.K.A., Fraser, K.P.P. and Peck, L.S. (2008) Benthic community response to iceberg scouring at an intensely disturbed shallow water site at Adelaide Island, Antarctica. Marine Ecology- Progress Series 355: 85-94 Abstract
7. Smale, D. A. (2008) Spatial variability in the distribution of dominant shallow-water benthos at Adelaide Island, Antarctica. Journal of Experimental Marine Biology and Ecology 347: 140-148 Abstract
6. Smale, D.A. (2008) Continuous benthic community change along a depth gradient in Antarctic shallows: evidence of patchiness but not zonation. Polar Biology 31: 189-198 Abstract
5. Barnes, D.K.A., Linse, K. Enderlein P. Smale, D. A. Fraser, K.P.P. and Brown, M. P. (2008) Marine richness and gradients at Deception Island, Antarctica. Antarctic Science 20: 271-280 Abstract
3. Smale, D.A., Barnes, D.K.A. and Fraser, K.P.P. (2007). The influence of depth, site exposure and season on the intensity of iceberg scouring in nearshore Antarctic waters. Polar Biology 30: 769-779 Abstract
2. Smale, D.A, Barnes, D.K.A., Fraser, K.P.P, Mann, P.J. and Brown M.P. (2007) Scavenging in Antarctica: intense variation between sites and seasons in shallow benthic necrophagy. Journal of Experimental Marine Biology and Ecology 349: 405-417 Abstract
1. Smale, D.A., Barnes, D.K.A. and Fraser, K.P.P. (2007) The influence of ice scour on benthic communities at three contrasting sites at Adelaide Island, Antarctica. Austral Ecology 32: 878-888 Abstract
Contributions to compiled volumes
- Wernberg, T., Campbell, A., Coleman, M.A., Connell, S.D., Kendrick, G.A., Moore, P.J., Russell, B.D.,Smale, D.A. & Steinberg, P.D. (2009). Macroalgae and Temperate Rocky Reefs. In: A Marine Climate Change Impacts and Adaptation Report Card for Australia 2009. Eds. Poloczanska, E.S., Hobday, A.J. & Richardson, A.J. NCCARF publication 05/09, ISBN 978-1-921609-03-9.
- Wernberg, T. Smale, D.A., Verges, A., Campbell, A. H. Russell, B. D., Coleman, M. A., Ling, S. D., Steinberg, P. D., Johnson, C. R., Kendrick, G. A. & Connell, S. D. (2012) Macroalgae and Temperate Rocky Reefs. In: A Marine Climate Change Impacts and Adaptation Report Card for Australia 2012. Eds. Poloczanska, E.S., Hobday, A.J. & Richardson. ISBN: 978-0-643-10928-5
- Barnes, D., Bergstrom, D. Bindschadler, R. and 36 others including Smale, D. A. (2009) The Next 100 Years. In Turner, J., Bindschadler, R., Convey, P., Di Prisco, G., Fahrbach, E., Gutt, J., Hodgson, D., Mayewski, P. and Summerhayes, C. (Eds.) Antarctic climate change and the environment (pp. 299-389). Scientific Committee on Antarctic Research, Cambridge UK. | <urn:uuid:f6c8a568-d82b-4e09-aa5a-a865158b83db> | CC-MAIN-2019-47 | https://www.mba.ac.uk/fellows/smale-group | s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496668699.77/warc/CC-MAIN-20191115171915-20191115195915-00057.warc.gz | en | 0.698546 | 7,253 | 3.09375 | 3 |
1835 Hughlings Jackson is born.
1837 Henry Charlton Bastian is born.
1843 David Ferrier is born.
1845 Thomas Barlow is born.
1845 William Gowers is born.
1845 Fletcher Beach is born.
1847 Byrom Bramwell is born.
1852 Judson Sykes Bury is born.
1854 Charles Edward Beevor is born.
1856 Charles Alfred Ballance is born.
1857 Victor Horsley is born.
1858 Leonard Guthrie is born.
1860 The National Hospital for the Paralyzed and Epileptic is founded in a small house in Queen Square, Bloomsbury (discussion here).
1861 Henry Head is born.
1862 Hughlings Jackson joins the staff at the National Hospital.
1864 William Aldren Turner is born.
1867 Joseph Shaw Bolton is born.
1867 Julius Alhaus, a German physician, opens the London Infirmary for Epilepsy and Paralysis
1868 Henry Charlton Bastian is elected Assistant Physician to the National Hospital for the Paralyzed and Epileptic.
1869 Henry Charlton Bastian publishes a paper on aphasia titled, On the Various Forms of Loss of Speech in Cerebral Disease.
1869 Hugh Kerr Anderson is born.
1870 Bertram Louis Abrahams is born.
1871 Edward Farquhar Buzzard is born.
1871 James Crichton-Browne founds the West Riding Lunatic Asylum Medical Reports.
1872 London Infirmary for Epilepsy and Paralysis changes its name to Maida Vale Hospital for Diseases of the Nervous System
1873 Edwin Bramwell is born.
1873 Walter Morley Fletcher is born.
1875 American Neurological Association is founded.
1875 Arthur Stanley Barnes is born (biography here).
1875 Henry Charlton Bastian publishes Paralysis from Brain Disease.
1876 David Ferrier publishes The Functions of the Brain.
1876 The West Riding Lunatic Asylum Medical Reports cease publication.
1876 Maida Vale Hospital changes its name to Hospital for Epilepsy and Paralysis and other Diseases of the Nervous System
1876 Mind – a Journal of Philosophy begins.
1877 Samuel Alexander Kinnier Wilson is born.
1877 Francis Carmichael Purser is born.
1877 Frederick Lucien Golla is born.
1878 Brain – a Journal of Neurology begins.
1878 Thomas Grainger Stewart is born.
1878 Dr Henry Tibbits founds The West End Hospital for Diseases of the Nervous System, Paralysis, and Epilepsy.
1879 George Hall is born.
1880 The Ophthalmological Society of the United Kingdom is founded.
1880 Henry Charlton Bastian publishes The Brain as an Organ of the Mind, which is translated into French and German.
1881 Byrom Bramwell publishes Diseases of the Spinal Cord, which is subsequently translated into French, German, and Russian.
1882 Thomas Graham Brown is born.
1882 Donald Elms Core is born.
1884 Godwin Greenfield is born (biography here).
1884 Godwin Greenfield is born (biography here).
1885 Anthony Feiling is born.
1885 Francis Walshe is born.
1886 The Idiots Act is passed into law.
1886 William John Adie is born.
1886 Henry Charlton Bastian publishes Paralyses, Cerebral, Bulbar, and Spinal.
1886 The Neurological Society of London is founded.
1887 Henry Charlton Bastian occupies the Chair of Medicine at University College Hospital.
1888 George Riddoch is born.
1889 David Ferrier is appointed to the Chair of Neuropathology – a position specifically created for him.
1889 William Gifford Wyllie is born.
1889 Ronald Grey Gordon is born.
1890 The Lunacy Act is passed into law.
1890 Charles Symonds is born (biography here).
1890 Philip Cloake is born.
1891 Frederick Nattrass is born.
1892 William Osler publishes The Principles and Practice of Medicine, which for the first time includes the changes that have been introduced into medicine by bacteriology.
1892 William Aldren Turner is appointed Assistant to David Ferrier in the King’s College Neuropathological Laboratory.
1893 Henry Charlton Bastian publishes Various Forms of Hysterical or Functional Paralysis.
1893 James Ross and Judson Sykes Bury publish Peripheral Neuritis.
1895 Dorothy Russell is born.
1895 Thomas Barlow occupies the Holme chair of Clinical Medicine at University College London.
1895 Fletcher Beach publishes Treatment and Education of Mentally Feeble Children.
1896 The Belgian Neurological Society is founded.
1898 Charles Beevor publishes a Handbook on Diseases of the Nervous System.
1899 The Neurological Society of Paris is founded.
1899 William Aldren Turner is appointed Assistant Physician at King’s College Hospital.
1899 Eric Alfred Blake Pritchard is born.
1899 Fergus Ferguson is born (biography here).
1900 William Esmond Rees is born.
1900 The Danish Neurological Society is founded.
1900 Macdonald Critchley is born (biography here).
1900 Macdonald Critchley is born (biography here).
1903 Edward Graeme Robertson is born.
1903 Hugh Gregory Garland is born.
1903 William Ritchie Russell is born.
1903 Bertram Louis Abrahams is appointed assistant physician to the Westminster Hospital, where he lectures in physiology and medicine.
1904 William Osler becomes Regius Professor of Medicine at Oxford.
1904 John St. Clair Elkington is born.
1905 Samuel Nevin is born.
1905 Thomas Grainger Stewart and Gordon Holmes publish their landmark paper, ‘Symptomatology of Cerebellar Tumours’.
1906 Charles Sherrington publishes The Integrative Action of the Nervous System.
1907 St Mary’s Teaching Hospital founds a Department of Neurology.
1907 Leonard Guthrie publishes Functional Nervous Disorders of Children.
1907 The Swedish Neurological Society is founded.
1907 The Royal Society of Medicine is formed and the Section of Neurology is created out of the Neurological Society of the United Kingdom, which subsequently disbands.
1907 Charles Beevor publishes “his most important research” on the arterial supply to all parts of the brain, filling a gap in “contemporary knowledge”. This comes out in Philosophical Transaction of the Royal Society.
1908 Paul Harmer Sandifer is born.
1908 Charles Beevor dies.
1908 Bertram Louis Abrahams dies.
1908 William Aldren Turner is promoted to Physician in charge of Neurological Cases and becomes Lecturer in Neurology in King’s Medical School.
1909 The Amsterdam Neurological Society is founded, and the Swiss Neurological Society also is founded.
1909 The Neurological Institute of New York is founded.
1910 Robert Porter is born. Evidence suggests that he never becomes a member of the Association of British Neurologists, despite building a new department of neurology at the Central Middlesex Hospital from 1947 until 1962. He is not listed as a member of the Neurological Section of the Royal Society of Medicine.
1910 Joseph Shaw Bolton becomes Director of the West Riding Mental Hospital.
1910 William Aldren Turner and Thomas Grainger Stewart publish Textbook of Nervous Diseases.
1910 Thomas Barlow is elected President of the Royal College of Physicians.
1911 Hughlings Jackson dies.
1911 Joseph Shaw Bolton is appointed to the Chair of Mental Diseases at the University of Leeds.
1912 Samuel Alexander Kinnear Wilson describes progressive lenticular degeneration which becomes eponymously known as ‘Wilson’s disease.’
1912 Hugh Kerr Anderson becomes Master of Caius College, Cambridge.
1912 A special department for Diseases of the Nervous System is established at the Middlesex Hospital Medical School with H Campbell Thomson in charge.
1912 Judson Bury publishes Diseases of the Nervous System.
1913 Thomas Barlow is elected President of the International Medical Congress.
1913 Henry Miller is born.
1913 The Mental Deficiency Act is passed into law.
1914 The Medical Research Committee is formed.
1914 The First World War begins.
1914 Joseph Shaw Bolton publishes an important but largely ignored work The Brain in Health and Disease.
1914 R. MacNab Marshall is appointed to the Victorian Infirmary, Glasgow, as a Physician for Diseases of the Nervous System.
1915 Henry Charlton Bastian dies.
1915 The West End Hospital for Diseases of the Nervous System, Paralysis, and Epilepsy becomes The West End Hospital for Nervous Diseases.
1915 Francis Walshe is appointed Major, RAMC. He is wrongly remembered as a consultant neurologist in the Army; however, the British military never officially recognizes neurology as a specialty – this from his letters.
1915 William Gowers dies.
1917 John Stanton is born.
1918 The First World War ends.
1918 Leonard Guthrie dies.
1919 Samuel Alexander Kinnier Wilson is appointed Neurologist at King’s College Hospital.
1919 William Osler dies and is remembered as the greatest personality in the medical world at the time of his death and on both sides of the ocean.
1919 Charles Ballance publishes a monograph titled, Essays on the Surgery of the Brain.
1920 The Norwegian Neurological Society is founded.
1920 Thomas Graham Brown becomes Professor of Physiology at the Welsh National School of Medicine.
1920 Charles Symonds is appointed physician in nervous diseases to Guy’s Hospital in the clinic established by Sir Arthur Hurst.
1920 Samuel Alexander Kinnier Wilson becomes editor of resurrected Edinburgh Review of
Neurology and Psychiatry, and renames it the Journal of Neurology and
1921 Edward Farquhar Buzzard and J G Greenfield published Pathology of the Nervous System.
1921 Edwin Bramwell reports making £5000 from his practice alone.
1922 Edwin Bramwell becomes Professor of Medicine at Edinburgh University after receiving the Moncrieff-Arnot Chair of Medicine there.
1922 Price publishes a handbook of medicine, and asks James Collier and William John Adie to the author the chapter on diseases of the nervous system. Critchley recalled that students (himself included) would buy the book, tear out the pages devoted to neurology, tenderly bind them, and discard the rest of the volume.
1923 William Gifford Wyllie becomes Medical Registrar and Pathologist at Great Ormond Street Hospital, which places him in position to become one of the countries first paediatric neurologists.
1923 Frederick Golla becomes Director of the Central Pathological Laboratory at Maudsley Hospital.
1923 Anthony Feiling becomes Assistant Physician at St George’s. From this period onwards, he co-presides over the open neurological demonstration clinic there with James Collier.
1924 Edward Farquhar Buzzard becomes Physician Extraordinary to the King.
1924 Thomas Grainger Stewart becomes a full physician at the National Hospital.
1925 The Association of Physicians meets in Edinburgh, where Byrom Bramwell is presented with a portrait.
1926 National Hospital, Queen Square is renamed the National Hospital for the Relief and Cure of Diseases of the Nervous System including Paralysis and Epilepsy.
1926 Francis Carmichael Purser is given the “complementary post” of honorary professor in neurology, Dublin University.
1926 Anthony Feiling becomes Dean of St George’s Medical School.
1926 Joseph Shaw Bolton publishes a polemic against the Freudian school of psychiatry entitled Myth of the Unconscious Mind.
1926 Ronald Grey Gordon publishes Personality.
1926 Hugh Kerr Anderson enters into negotiations with the Rockefeller Foundation, which lead to a gift of £700,000 towards the construction of new University Library and facilities for biological research at Cambridge.
1927 Edwin Bramwell becomes President of the Neurological Section of the Royal Society of Medicine.
1927 Ronald Grey Gordon publishes Neurotic Personality.
1927 George Hall is appointed Physician, Royal Victoria Infirmary, Newcastle where his “interests are mainly neurological”.
1928 Hugh Kerr Anderson dies.
1928 David Ferrier dies.
1928 Samuel Alexander Kinnear Wilson publishes Modern Problems in Neurology.
1928 Edward Farquhar Buzzard becomes Regius Professor of Medicine at
1928 James Birley succeeds Farquhar Buzzard as Director of the neurological department at St Thomas’ Hospital.
1928 Dorothy Russell spends a year in Boston and works with Frank Mallory.
1928 William Aldren Turner retires from King’s College Hospital, and is appointed Consulting Physician to the Hospital and Emeritus Lecturer on Neurology in the Medical School.
1928 Macdonald Critchley is appointed to King’s College Hospital staff in Neurology.
1929 The Ferrier Prize in Neurology is established at King’s in 1929 by his friends and colleagues to commemorate his life and work. The prize was worth £20 and included a bronze medal.
1929 The Local Government Act is passed into law.
1929 Donald Armour is elected President of the Neurological Section of the Royal Society of Medicine.
1929 Dorothy Russell spends a year in Montreal at the Neurological Institute, and works with Wilder Penfield.
1929 Fletcher Beach dies.
1929 The Ferrier Prize in Neurology is established at King’s College, and awarded £20 and bronze medal to its winner.
1930 Douglas McAlpine receives patronage through his father and creates an inpatient neurological clinic at Middlesex Hospital.
1930 The Poor Law and Mental Treatment Acts are passed into law.
c. 1930 Eric Alfred Blake Pritchard becomes a Physician at University College Hospital and the National Hospital for Nervous Diseases.
1931 Byrom Bramwell dies.
1931 Frederick Nattrass publishes a textbook titled: The Commoner Nervous Diseases.
1931 Arthur Stanley Barnes becomes Dean of the Faculty of Medicine at the University of Birmingham.
1932 First meeting of the Association of British Neurologists.
1932 Edgar Adrian and Charles Sherrington share the Nobel Prize in Physiology.
1932 John St. Clair Elkington is appointed neurologist to St Thomas’s Hospital.
1933 Association of British Neurologists hold their inaugural general meeting at the Medical Society of London. Wilfred Harris is the first President.
1933 Edward Arnold Carmichael is appointed Director of the MRC Clinical Neurological Research Unit. His appointment is on a five year basis.
1933 William Rees appointed Assistant physician, Cardiff Royal Infirmary.
1933 Dorothy Russell is appointed to the scientific staff of the Medical Research Council at London Hospital.
1933 Philip Cloake becomes a professor of medicine at Birmingham.
1933 Edwin Bramwell is elected President of the Royal College of Physicians of Edinburgh.
c. 1933 Conrad Meredyth Hind Howell starts a neurological consultative clinic at St Bartholowmew’s Hospital. This experience convinces him of the desirability of having a neurologist on the staff when he retired in 1937, and he welcomes the appointment of Dr Denny Brown.
1934 Donald Elms Core dies.
1934 The Nottingham General Hospital establishes an Out-Patient Nerve Clinic.
1934 Francis Carmichael Purser becomes Kings Professor of the Practice of Medicine at Trinity College, Dublin.
1934 Hughlings Jackson Centenary Dinner is celebrated in London.
1934 The Polish Neurological Society is founded.
1934 Francis Carmichael Purser dies.
1935 Edward Graeme Robertson returns to Australia, where he becomes an important leader in Australian neurology.
1935 William John Adie dies.
1936 Professor Edwin Bramwell becomes President of the Association of British Neurologists.
1936 The Greek Neurological Society is founded.
1936 Anthony Feiling resigns his deanship over St George’s Medical School, which is described as unremarkable, although he does hire their first psychiatrist, Desmond Curran.
1936 Charles Alfred Ballance dies.
1936 Edward Farquhar Buzzard delivers his presidential speech to the British Medical Association in which he outlines his vision of the perfect medical school. Lord Nuffield is in the audience and subsequently helps Buzzard realize his dream with a grant of more than one million pounds to Oxford University.
1937 Derek Denny Brown is appointed as Neurologist to the Hospital at St Bartholomew’s, although there is no special department of neurology.
1937 Francis Walshe becomes Editor of Brain.
1937 The Association of British Neurologists invites the members of the Neurological Society of Amsterdam to meet with them in London.
1937 Frederick Golla is appointed to the Chair of Mental Pathology, University of London
1937 The Maida Vale Hospital changes its name to Maida Vale Hospital for Nervous Diseases (including Epilepsy and Paralysis).
1937 Samuel Alexander Kinnier Wilson dies.
1937 Edward Arnold Carmichael becomes the editor of the Journal of Neurology and Psychopathology upon the death of Kinnier Wilson, and the journal is renamed the Journal of Neurology and Psychiatry. The committee includes: G Jefferson, Aubrey Lewis, A Meyer, R A McCance, Denis Williams, E D Adrian, R G Gordon, J G Greenfield, F C Bartlett, and W Russell Brain.
1937 Samuel Nevin is appointed to King’s College as an Assistant neurologist, taking over the spot vacated by Kinnier Wilson.
1938 The Institute for the Teaching and Study of Neurology opens at the National Hospital, Queen Square.
1938 Edward Farquhar Buzzard becomes President of the Association of British Neurologists.
1938 Derek Denny Brown is appointed Neurologist at St. Bartholomew’s Hospital. His is the first official neurological appointment, although other physicians of nervous diseases such as J A Ormerod, H H Tooth, and C. M Hinds Howell have held positions there. No department of neurology is founded at the same time.
1938 William Ritchie Russell is appointed lecturer in neurology at Edinburgh University.
1939 Frederick Golla becomes Director of the Burden Neurological Institute in Bristol, where the first trials of electroconvulsive therapy are pioneered in Britain.
1939 Whyllie McKissock is appointed “Associate Neurological Surgeon” in March 1939 at Great Ormond Street Hospital for Sick Children.
1939 Fergus Ferguson is appointed Consultant neurologist to the Western Command and the Emergency Medical Services.
1939 The Second World War begins.
1939 George Riddoch is appointed heat of the E.M.S. Neurological Unit at Chase Farm Hospital. 1939 -- also advises the E.M.S. on the organization of the Peripheral Nerve Injuries Centres
1940 Henry Head dies.
1940 Samuel Alexander Kinnear Wilson’s textbook, Neurology, is published posthumously by A. N. Bruce.
1940 Francis Walshe publishes Diseases of the Nervous System.
1941 Fredrick Nattrass is appointed to the first Whole-time Chair of Medicine in Newcastle.
1941 The first leucotomy is performed in Britain at the Burden Neurological Institute.
1941 Arthur Stanley Barnes retires from his deanship of the Faculty of Medicine, University of Birmingham.
1941 John Gaylor is appointed neurologist to the Western Infirmary, Glasgow.
c.1941 Samuel Nevin becomes Director of the Research Laboratory at the Institute of Psychiatry, Maudsley Hospital.
1943 Edward Farquhar Buzzard resigns as Regius Professor of Medicine at Oxford.
1944 The Journal of Neurology and Psychiatry changes its name to the Journal of Neurology, Neurosurgery, and Psychiatry.
1944 Judson Sykes Bury dies.
1945 The Second World War ends.
1945 Stanley Barnes is elected President of the Association of British Neurologists.
1945 Edward Farquhar Buzzard dies.
1945 Helen Dimsdale and Dorothy Russell are elected to the Association of British Neurologists. Helen Dimsdale eventually becomes treasurer.
1946 St Bartholomew’s appoints Dr J W Aldren Turner Neurologist to the Hospital and creates a department with beds for him.
1946 Hugh Garland founds and becomes Physician in Charge of Neurological Department at Leeds General Infirmary.
1946 Dr J W Aldren Turner is appointed Neurologist to St. Bartholomew’s Hospital in a Special Department of Neurology and he is given a small number of beds.
1946 Francis Walshe is elected to the Royal Society.
1946 Philip Cloak resigns his chair of medicine, and takes a part-time Personal Chair in neurology at Birmingham. He tries to create a tripartite academic division of neurology, neurosurgery, and psychiatry but fails.
1946 Dorothy Russell becomes Professor of Morbid Anatomy at London Hospital, and becomes Director of the Bernhard Baron Institute of Pathology. She is the first woman in the Western World to head a department of pathology.
1946 Gordon Holmes is elected President of the Association of British Neurologists and resigns his post as Secretary.
1946 Macdonald Critchley becomes Secretary of the Association of British Neurologists.
1947 Robert Porter is appointed Physician with a special interest in Neurology at the Central Middlesex Hospital. When he answers a questionnaire from the Neurology Committee of the Royal College of Physicians in the early 1960s, he identifies himself as a general physician working in neurology there.
1947 Alan Barham Carter becomes a Consultant physician in Ashford, where he works as a general physician with an interest in neurology for the next thirty-one years.
1947 Thomas Graham Brown retires from the Chair of Physiology at the Welsh National School of Medicine.
1947 George Riddoch dies.
1947 Maida Vale and the National Hospital merge but with the result that really Queen Square becomes the dominant London hospital for neurology.
1948 Hugh Garland becomes consultant neurologist to the Leeds Regional Hospital Board.
1948 Francis Walshe publishes Critical Studies in Neurology.
1948 Edgar Adrian is elected President of the Association of British Neurologists.
1948 Neurosurgical unit is created in Aberdeen.
1948 William Ritchie Russell becomes the editor of the Journal of Neurology, Neurosurgery, and Psychiatry.
1948 Martin Nichols becomes the first neurosurgeon appointed at the Aberdeen Royal Infirmary.
1948 The Canadian Neurological Society is founded.
1948 The American Academy of Neurology is founded.
1948 William Rees appointment as Consultant physician is changed to Consultant neurologist at Swansea General Hospital, Morriston Hospital, and Neath Hospital – all positions he holds until 1967.
1949 Hugh Garland becomes Editor of the Leeds University Medical Journal.
1950 The Institute of Neurology – an amalgamation of the National and Maida Vale Hospitals for Nervous Diseases – affiliates with the University of London.
1950 The National Institute for Neurological Diseases and Blindness is founded in the United States because of Public Law 692.
1950 Francis Walshe is elected President of the Association of British Neurologists.
1950 Edward Graeme Roberts (and several Australian neurologists) form the Australian Association of Neurologists. It seems clear that this Association was modeled on the Association of British Neurologists.
1950 Ronald Grey Gordon dies.
1951 John Stanton is appointed Senior Registrar in Psychiatry at Newcastle upon Tyne working under Alexander Kennedy. Kennedy hires the neurologically minded Stanton because of a desire to make psychiatry more organic in its focus. Stanton’s subsequent years are marked by a neuropsychiatric outlook typical to Newcastle, even though he moves to Edinburgh in 1953.
1951 Frederick Nattrass writes a chapter titled “Diseases of the Nervous System” which appears in Chamberlain’s Textbook of Medicine.
c.1951 Samuel Nevin introduces the electron microscope to British neurology.
1952 Edwin Bramwell dies.
1952 Francis Walshe becomes President of the Royal Society of Medicine.
1952 Arthur Stanley Barnes publishes a History of the Birmingham Medical Centre.
1952 J G Greenfield is elected President of the Association of British Neurologists.
1952 Macdonald Critchley resigns as Secretary of the Association of British Neurologists.
1952 Edward Arnold Carmichael becomes Secretary of the Association of British Neurologists.
1953 Great Ormond Street Hospital for Sick Children establishes a Department of Neurology headed up by Paul Sandifer. Paul Sandifer thus becomes the first institutionally recognized pediatric neurologist in Britain.
1953 Francis Walshe resigns as Editor of Brain.
1953 Western General Hospital creates a Neurology Unit at the Northern General Hospital alongside Respiratory Medicine and Rheumatology. John Marshall heads the unit.
1954 Anthony Feiling is elected President of the Association of British Neurologists.
1955 Charles Symonds retires from the neurology department of Guy’s Hospital.
1955 Hugh Garland retires as Editor of the Leeds University Medical Journal.
1955 Denis Williams becomes Secretary of the Association of British Neurologists.
1955 Dr Giuseepe Pampiglione is appointed as the first Neurophysiologist to Great Ormond Street Hospital for Sick Children.
1955 George Hall dies.
1956 Charles Symonds is elected President of the Association of British Neurologists.
1957 Thomas Grainger Stewart dies.
1958 Professor F J Nattrass is elected President of the Association of British Neurologists.
1958 Henry Miller establishes a new department of neurology at Newcastle.
1959 Dorothy Russell publishes Pathology of Tumors of the Nervous System.
1960 W Russell Brain is elected President of the Association of British Neurologists.
1960 William Gooddy becomes Secretary of the Association of British Neurologists.
1960 Dorothy Russell become Emeritus Professor.
1961 Henry Miller becomes Reader in Neurology, Royal Victoria Infirmary and University of Newcastle.
1961 Helen Dimsdale becomes treasurer of the Association of British Neurologists.
1962 Macdonald Critchley becomes President of the Association of British Neurologists.
1962 Eric Alfred Blake Pritchard dies.
1963 John St. Clair Elkington dies.
1964 Henry Miller becomes Professor of Neurology, Royal Victoria Infirmary and University of Newcastle.
1964 R A Henson becomes Secretary of the Association of British Neurologists.
1964 Fergus Ferguson is elected President of the Association of British Neurologists.
1965 Francis Walshe publishes Further Critical Studies in Neurology.
1965 The first specialist neurological post in Aberdeen is created, although a neurosurgical unit has existed there since 1948.
1965 John A Simpson becomes the First Professor of Neurology at the University of Glasgow.
1965 Alan Downie becomes the first Consultant neurologist in the Aberdeen Royal Infirmary.
1965 Thomas Graham Brown dies.
1966 R S Allison is elected President of the Association of British Neurologists.
1966 Henry Miller becomes Vice-Chancellor, University of Newcastle.
1966 William Ritchie Russell is appointed to the first chair of neurology at Oxford.
1966 Helen Dimsdale resigns as Treasurer of the Association of British Neurologists.
1967 Hugh Garland dies at the age of 64.
1968 Graham Wakefield becomes the first consultant neurologist appointed at the Royal United Hospital in Bath. He initially engages in general medical work, but eventually concentrates on adult neurological service.
1968 Frederick Golla dies.
1968 W Ritchie Russell is elected President of the Association of British Neurologists.
1969 William Gifford Wyllie dies.
1969 Philip Cloake dies.
1969 Robert Porter dies.
1970 The collected papers of Charles Symonds are published as Studies in Neurology, which was subsequently reviewed by the Times Literary Supplement.
1970 Samuel Nevin is elected President of the Association of British Neurologists.
1970 John Bernard Stanton dies at the young age of 51.
1973 Francis M. R. Walshe dies.
1974 Fergus Ferguson dies.
1975 Edward Graeme Robertson dies in Australia.
1975 Anthony Feiling dies.
1976 Henry Miller dies.
1978 Charles Symonds dies.
1979 Frederick Nattrass dies.
1979 Samuel Nevin dies.
1980 William Ritchie Russell dies.
1983 Dorothy Russell dies.
1987 Dr Graham Wakefield retires from Royal United Hospital, Bath. | <urn:uuid:2fbe74a7-b0eb-41ed-9bdb-2725c308dabf> | CC-MAIN-2019-47 | http://www.dictionaryofneurology.com/2013/06/timeline-of-british-neurology-1835-1987.html | s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496668787.19/warc/CC-MAIN-20191117041351-20191117065351-00138.warc.gz | en | 0.921701 | 6,096 | 2.703125 | 3 |
Last week I posted about how Socialism was the result of runaway romanticism, and looked at the history of the romantics who created that philosophy.
One advantage of the internet is easy access to material that was once kept in dusty old tomes in university libraries, where average people couldn't actually read it. Like this, from Herbert Spencer's The Man Versus the State:
All socialism involves slavery.
What is essential to the idea of a slave? We primarily think of him as one who is owned by another. To be more than nominal, however, the ownership must be shown by control of the slave’s actions—a control which is habitually for the benefit of the controller. That which fundamentally distinguishes the slave is that he labours under coercion to satisfy another’s desires. The relation admits of sundry gradations. Remembering that originally the slave is a prisoner whose life is at the mercy of his captor, it suffices here to note that there is a harsh form of slavery in which, treated as an animal, he has to expend his entire effort for his owner’s advantage. Under a system less harsh, though occupied chiefly in working for his owner, he is allowed a short time in which to work for himself, and some ground on which to grow extra food. A further amelioration gives him power to sell the produce of his plot and keep the proceeds. Then we come to the still more moderated form which commonly arises where, having been a free man working on his own land, conquest turns him into what we distinguish as a serf; and he has to give to his owner each year a fixed amount of labour or produce, or both: retaining the rest himself. Finally, in some cases, as in Russia before serfdom was abolished, he is allowed to leave his owner’s estate and work or trade for himself elsewhere, under the condition that he shall pay an annual sum. What is it which, in these cases, leads us to qualify our conception of the slavery as more or less severe? Evidently the greater or smaller extent to which effort is compulsorily expended for the benefit of another instead of for self-benefit. If all the slave’s labour is for his owner the slavery is heavy, and if but little it is light. Take now a further step. Suppose an owner dies, and his estate with its slaves comes into the hands of trustees; or suppose the estate and everything on it to be bought by a company; is the condition of the slave any the better if the amount of his compulsory labour remains the same? Suppose that for a company we substitute the community; does it make any difference to the slave if the time he has to work for others is as great, and the time left for himself is as small, as before? The essential question is—How much is he compelled to labour for other benefit than his own, and how much can he labour for his own benefit? The degree of his slavery varies according to the ratio between that which he is forced to yield up and that which he is allowed to retain; and it matters not whether his master is a single person or a society. If, without option, he has to labour for the society, and receives from the general stock such portion as the society awards him, he becomes a slave to the society. Socialistic arrangements necessitate an enslavement of this kind; and towards such an enslavement many recent measures, and still more the measures advocated, are carrying us. Let us observe, first, their proximate effects, and then their ultimate effects.
And this from Alexis de Tocqueville, addressing the French Constituent Assembly in 1848.
Yes, gentlemen, sooner or later, the question of socialism, which everyone seems to fear and which no one, up to now, has dared treat of, must be brought into the open, and this Assembly must decide it. We are duty-bound to clear up this issue, which lies heavy upon the breast of France. I confess that it is principally because of this that I mount the podium today, that the question of socialism might finally be settled. I must know, the National Assembly must know, all of France must know—is the February Revolution a socialist revolution or is it not? [“Excellent!”]
It is not my intention to examine here the different systems which can all be categorized as socialist. I want only to attempt to uncover those characteristics which are common to all of them and to see if the February Revolution can be said to have exhibited those traits.
Now, the first characteristic of all socialist ideologies is, I believe, an incessant, vigorous and extreme appeal to the material passions of man. [Signs of approval.]
Thus, some have said: “Let us rehabilitate the body”; others, that “work, even of the hardest kind, must be not only useful, but agreeable”; still others, that “man must be paid, not according to his merit, but according to his need”; while, finally, they have told us here that the object of the February Revolution, of socialism, is to procure unlimited wealth for all.
A second trait, always present, is an attack, either direct or indirect, on the principle of private property. From the first socialist who said, fifty years ago, that “property is the origin of all the ills of the world,” to the socialist who spoke from this podium and who, less charitable than the first, passing from property to the property-holder, exclaimed that “property is theft,” all socialists, all, I insist, attack, either in a direct or indirect manner, private property. [“True, true.”] I do not pretend to hold that all who do so, assault it in the frank and brutal manner which one of our colleagues has adopted. But I say that all socialists, by more or less roundabout means, if they do not destroy the principle upon which it is based, transform it, diminish it, obstruct it, limit it, and mold it into something completely foreign to what we know and have been familiar with since the beginning of time as private property. [Excited signs of assent.]
Now, a third and final trait, one which, in my eyes, best describes socialists of all schools and shades, is a profound opposition to personal liberty and scorn for individual reason, a complete contempt for the individual. They unceasingly attempt to mutilate, to curtail, to obstruct personal freedom in any and all ways. They hold that the State must not only act as the director of society, but must further be master of each man, and not only master, but keeper and trainer. [“Excellent.”] For fear of allowing him to err, the State must place itself forever by his side, above him, around him, better to guide him, to maintain him, in a word, to confine him. They call, in fact, for the forfeiture, to a greater or less degree, of human liberty, [Further signs of assent.] to the point where, were I to attempt to sum up what socialism is, I would say that it was simply a new system of serfdom. [Lively assent.]
I have not entered into a discussion of the details of these systems. I have indicated what socialism is by pointing out its universal characteristics. They suffice to allow an understanding of it. Everywhere you might find them, you will be sure to find socialism, and wherever socialism is, these characteristics are met.
IS SOCIALISM, gentlemen, as so many have told us, the continuation, the legitimate completion, the perfecting of the French Revolution? Is it, as it has been pretended to be, the natural development of democracy? No, neither one or the other. Remember the Revolution! Re-examine the awesome and glorious origin of our modern history. Was it by appealing to the material needs of man, as a speaker of yesterday insisted, that the French Revolution accomplished those great deeds that the whole world marvelled at? Do you believe that it spoke of wages, of well-being, of unlimited wealth, of the satisfaction of physical needs?
Citizen Mathieu: I said nothing of the kind.
Citizen de Tocqueville: Do you believe that by speaking of such things it could have aroused a whole generation of men to fight for it at its borders, to risk the hazards of war, to face death? No, gentlemen, it was by speaking of greater things, of love of country, of the honor of France, of virtue, generosity, selflessness, glory, that it accomplished what it did. Be certain, gentlemen, that it is only by appealing to man’s noblest sentiments that one can move them to attain such heights. [“Excellent, excellent.”]
And as for property, gentlemen: it is true that the French Revolution resulted in a hard and cruel war against certain property-holders. But, concerning the very principle of private property, the Revolution always respected it. It placed it in its constitutions at the top of the list. No people treated this principle with greater respect. It was engraved on the very frontispiece of its laws.
The French Revolution did more. Not only did it consecrate private property, it universalized it. It saw that still a greater number of citizens participated in it. [Varied exclamations. “Exactly what we want!”]
It is thanks to this, gentlemen, that today we need not fear the deadly consequences of socialist ideas which are spread throughout the land. It is because the French Revolution peopled the land of France with ten million property-owners that we can, without danger, allow these doctrines to appear before us. They can, without doubt, destroy society, but thanks to the French Revolution, they will not prevail against it and will not harm us. [“Excellent.”]
And finally, gentlemen, liberty. There is one thing which strikes me above all. It is that the Old Regime, which doubtless differed in many respects from that system of government which the socialists call for (and we must realize this) was, in its political philosophy, far less distant from socialism than we had believed. It is far closer to that system than we. The Old Regime, in fact, held that wisdom lay only in the State and that the citizens were weak and feeble beings who must forever be guided by the hand, for fear they harm themselves. It held that it was necessary to obstruct, thwart, restrain individual freedom, that to secure an abundance of material goods it was imperative to regiment industry and impede free competition. The Old Regime believed, on this point, exactly as the socialists of today do. It was the French Revolution which denied this.
Gentlemen, what is it that has broken the fetters which, from all sides, had arrested the free movement of men, goods and ideas? What has restored to man his individuality, which is his real greatness? The French Revolution! [Approval and clamor.] It was the French Revolution which abolished all those impediments, which broke the chains which you would refashion under a different name. And it is not only the members of that immortal assembly—the Constituent Assembly, that assembly which founded liberty not only in France but throughout the world—which rejected the ideas of the Old Regime. It is the eminent men of all the assemblies which followed it!
AND AFTER this great Revolution, is the result to be that society which the socialists offer us, a formal, regimented and closed society where the State has charge of all, where the individual counts for nothing, where the community masses to itself all power, all life, where the end assigned to man is solely his material welfare—this society where the very air is stifling and where light barely penetrates? Is it to be for this society of bees and beavers, for this society, more for skilled animals than for free and civilized men, that the French Revolution took place? Is it for this that so many great men died on the field of battle and on the gallows, that so much noble blood watered the earth? Is it for this that so many passions were inflamed, that so much genius, so much virtue walked the earth?
No! I swear it by those men who died for this great cause! It is not for this that they died. It is for something far greater, far more sacred, far more deserving of them and of humanity. [“Excellent.”] If it had been but to create such a system, the Revolution was a horrible waste. A perfected Old Regime would have served adequately. [Prolonged clamor.]
I mentioned a while ago that socialism pretended to be the legitimate continuation of democracy. I myself will not search, as some of my colleagues have done, for the real etymology of this word, democracy. I will not, as was done yesterday, rummage around in the garden of Greek roots to find from whence comes this word. [Laughter.] I look for democracy where I have seen it, alive, active, triumphant, in the only country on earth where it exists, where it could possibly have been established as something durable in the modern world—in America. [Whispers.]
There you will find a society where social conditions are even more equal than among us; where the social order, the customs, the laws are all democratic; where all varieties of people have entered, and where each individual still has complete independence, more freedom than has been known in any other time or place; a country essentially democratic, the only completely democratic republics the world has ever known. And in these republics you will search in vain for socialism. Not only have socialist theories not captured public opinion there, but they play such an insignificant role in the intellectual and political life of this great nation that they cannot even rightfully boast that people fear them.
America today is the one country in the world where democracy is totally sovereign. It is, besides, a country where socialist ideas, which you presume to be in accord with democracy, have held least sway, the country where those who support the socialist cause are certainly in the worst position to advance them. I personally would not find it inconvenient if they were to go there and propagate their philosophy, but in their own interests, I would advise them not to. [Laughter.]
A Member: Their goods are being sold right now.
Citizen de Tocqueville: No, gentlemen. Democracy and socialism are not interdependent concepts. They are not only different, but opposing philosophies. Is it consistent with democracy to institute the most meddlesome, all-encompassing and restrictive government, provided that it be publicly chosen and that it act in the name of the people? Would the result not be tyranny, under the guise of legitimate government and, by appropriating this legitimacy assuring to itself the power and omnipotence which it would otherwise assuredly lack? Democracy extends the sphere of personal independence; socialism confines it. Democracy values each man at his highest; socialism makes of each man an agent, an instrument, a number. Democracy and socialism have but one thing in common—equality. But note well the difference. Democracy aims at equality in liberty. Socialism desires equality in constraint and in servitude. [“Excellent, excellent.”]
THE FEBRUARY REVOLUTION, accordingly, must not be a “social” one, and if it must not be then we must have the courage to say so. If it must not be then we must have the energy to loudly proclaim that it should not be, as I am doing here. When one is opposed to the ends, he must be opposed to the means by which one arrives at those ends. When one has no desire for the goal he must not enter onto the path which necessarily leads him there. It has been proposed today that we enter down that very path.
We must not follow that political philosophy which Baboeuf so ardently embraced [cries of approval]—Baboeuf, the grand-father of all modern socialists. We must not fall into the trap he himself indicated, or, better, suggested by his friend, pupil and biographer, Buonarotti. Listen to Buonarotti’s words. They merit attention, even after fifty years.
A Member: There are no Babovists here.
Citizen de Tocqueville: “The abolition of individual property and the establishment of the Great National Economy was the final goal of his (Baboeuf’s) labors. But he well realized that such an order could not be established immediately following victory. He thought it essential that [the State] conduct itself in such manner that the whole people would do away with private property through a realization of their own needs and interests.” Here are the principal methods by which he thought to realize his dream. (Mind you, it is his own panegyrist I am quoting.) “To establish, by laws, a public order in which property-holders, provisionally allowed to keep their goods, would find that they possessed neither wealth, pleasure, or consideration, where, forced to spend the greater part of their income on investment or taxes, crushed under the weight of a progressive tax, removed from public affairs, deprived of all influence, forming, within the State, nothing but a class of suspect foreigners, they would be forced to leave the country, abandoning their goods, or reduced to accepting the establishment of the Universal Economy.”
A Representative: We’re there already!
Citizen de Tocqueville: There, gentlemen, is Baboeuf’s program. I sincerely hope that it is not that of the February republic. No, the February republic must be democratic, but it must not be socialist—
A Voice from the Left: Yes! [“No! No!” (interruption)]
Citizen de Tocqueville: And if it is not to be socialist, what then will it be?
A Member from the Left: Royalist!
Citizen de Tocqueville (turning toward the left): It might, perhaps become so, if you allow it to happen, [much approval] but it will not.
If the February Revolution is not socialist, what, then, is it? Is it, as many people say and believe, a mere accident? Does it not necessarily entail a complete change of government and laws? I don’t think so.
When, last January, I spoke in the Chamber of Deputies, in the presence of most of the delegates, who murmured at their desks, albeit for different reasons, but in the same manner in which you murmured at yours a while ago—[“Excellent, excellent.”]
(The speaker turns towards the left)
—I told them: Take care. Revolution is in the air. Can’t you feel it? Revolution is approaching. Don’t you see it? We are sitting on a volcano. The record will bear out that I said this. And why?—[Interruption from the left.]
Did I have the weakness of mind to suppose that revolution was coming because this or that man was in power, or because this or that incident excited the political anger of the nation? No, gentlemen. What made me believe that revolution was approaching, what actually produced the revolution, was this: I saw a basic denial of the most sacred principles which the French Revolution had spread throughout the world. Power, influence, honors, one might say, life itself, were being confined to the narrow limits of one class, such that no country in the world presented a like example.
That is what made me believe that revolution was at our door. I saw what would happen to this privileged class, that which always happens when there exists small, exclusive aristocracies. The role of the statesman no longer existed. Corruption increased every day. Intrigue took the place of public virtue, and all deteriorated.
Thus, the upper class.
And among the lower classes, what was happening? Increasingly detaching themselves both intellectually and emotionally from those whose function it was to lead them, the people at large found themselves naturally inclining towards those who were well-disposed towards them, among whom were dangerous demagogues and ineffectual utopians of the type we ourselves have been occupied with here.
Because I saw these two classes, one small, the other numerous, separating themselves little by little from each other, the one reckless, insensible and selfish, the other filled with jealousy, defiance and anger, because I saw these two classes isolated and proceeding in opposite directions, I said—and was justified in saying—that revolution was rearing its head and would soon be upon us. [“Excellent.”]
Was it to establish something similar to this that the February Revolution took place? No, gentlemen, I refuse to believe it. As much as any of you, I believe the opposite. I want the opposite, not only in the interests of liberty but also for the sake of public security.
I ADMIT that I did not work for the February Revolution, but, given it, I want it to be a dedicated and earnest revolution because I want it to be the last. I know that only dedicated revolutions endure. A revolution which stands for nothing, which is stricken with sterility from its birth, which destroys without building, does nothing but give birth to subsequent revolutions. [Approval.]
I wish, then, that the February revolution have a meaning, clear, precise and great enough for all to see.
And what is this meaning? In brief, the February Revolution must be the real continuation, the honest and sincere execution of that which the French Revolution stood for, it must be the actualization of that which our fathers dared but dream of. [Much assent.]
Citizen Ledru-Rollin: I demand the floor.
Citizen de Tocqueville: That is what the February Revolution must be, neither more nor less. The French Revolution stood for the idea that, in the social order, there might be no classes. It never sanctioned the categorizing of citizens into property-holders and proletarians. You will find these words, charged with hate and war, in none of the great documents of the French Revolution. On the contrary, it was grounded in the philosophy that, politically, no classes must exist; the Restoration, the July Monarchy, stood for the opposite. We must stand with our fathers.
The French Revolution, as I have already said, did not have the absurd pretension of creating a social order which placed into the hands of the State control over the fortunes, the well-being, the affluence of each citizen, which substituted the highly questionable “wisdom” of the State for the practical and interested wisdom of the governed. It believed that its task was big enough, to grant to each citizen enlightenment and liberty. [“Excellent.”]
The Revolution had this firm, this noble, this proud belief which you seem to lack, that it sufficed for a courageous and honest man to have these two things, enlightenment and liberty, and to ask nothing more from those who govern him.
The Revolution was founded in this belief. It had neither the time nor the means to bring it about. It is our duty to stand with it and, this time, to see that it is accomplished.
Finally, the French Revolution wished—and it is this which made it not only beatified but sainted in the eyes of the people—to introduce charity into politics. It conceived the notion of duty towards the poor, towards the suffering, something more extended, more universal than had ever preceded it. It is this idea that must be recaptured, not, I repeat, by substituting the prudence of the State for individual wisdom, but by effectively coming to the aid of those in need, to those who, after having exhausted their resources, would be reduced to misery if not offered help, through those means which the State already has at its disposal.
The fact is that Socialism involves taking. Has anybody ever heard any Socialist talking about what they have to give? All the rhetoric you hear from Socialists is about how others should give up what they have for the more deserving, which somehow always seems to involve the Socialists themselves.
Not just taking people’s property, but their souls as well. There is a word for that. That word is slavery. For something that has existed since the beginning of time and still exists in parts of the world, it’s amazing that almost nothing good can ever be said about it. When a socialist talks about people, it’s always about how the Socialist is the best equipped to make decisions for them, even when those decisions turn out so consistently badly.
We Americans have spent a generation’s treasure on the poor, an amount so huge that it boggles the mind, and they remain in the same place. In fact, the people who spend the money want it to remain so. But that money has been stolen from the productive and, more importantly, from the future, and there is nothing to show for it. This is what Socialist thinking and romanticism creates.
Unlike even hardcore leftists like Barack Obama and Hillary Clinton, Sanders comes out openly for the redistribution of wealth. “It’s not your money. It’s society’s.” That’s it. We hear a lot about how the redistribution of wealth is bad for the economy, and of course it is. When you take wealth out of the hands of people who competently created it, and put it into the hands of those who could never have made an honest $10 (see Sanders’ personal history), then you’re asking for trouble, economically. It would be like turning your bank account or IRA over to a ten-year-old, telling him, “Spend it as you like.”
We hear less about how redistribution of wealth is morally wrong. Yet it is. Ayn Rand, in her novel Atlas Shrugged and elsewhere, had the gumption and honesty to point out what should have been obvious. When you steal from a person, you violate his individual rights. It doesn’t become “not theft” simply because the person makes over $250,000 a year, or over a billion a year, just because you choose to arbitrarily violate his individual and property rights at that point.
What about the effect of wealth redistribution on those to whom it’s redistributed? We almost never hear anything about that. It’s taken for granted that the person receiving the wealth redistribution is better off. But how can that be?
For one thing, government is inefficient, and often corrupt. Government is not a private charity. An honest charity has a rational interest in seeing to it that the charity’s beneficiary will get the intended benefit. Sometimes charities are corrupt, but usually they are not. When they are, they are exposed and shamed, even prosecuted. When a government charity is proven corrupt, it usually ends up with more tax money, and anyone who criticizes this will be labeled a racist. Bottom line? Private charities can go out of business; government programs almost never do.
If you’re really in need of charity, then you’re far better off with a private charity, than with a red-tape laden, paperwork-drowning government-run one. Look at the fiasco that’s Obamacare. This is what happens when you try to turn charity into a government-run program.
Redistribution of wealth changes the nature of charity. Instead of the recipient thinking, “Somebody, out of the kindness of his or her heart, wanted me to be better off,” the recipient knows full well that the donations were taken by force. This changes the psychology. It changes the whole dynamic. The psychology shifts from benevolence and appreciation to entitlement, even nastiness or defensiveness. “Well of course I’m getting this help. I need it, and I deserve it.” People obtaining assistance and benefits from the government often complain about the poor treatment they receive from government welfare officers. But what else can they expect? There’s no mutual respect or benevolence in a context where force rules.
Redistribution of wealth takes the rational judgment of the donor out of charity. This means the beneficiary gets the benefit as an entitlement, as a right, not as a favor. The moment this happens, it’s no longer charity. Government, while much less efficient than private charity, is better at ensuring that those who do not deserve charity nevertheless get it. It’s so easy to lie, cheat and manipulate your way through a government system, since “judgment” consists of looking right on paper more than a donor making any kind of intelligent judgment. Who deserves charity, and who does not? Rationally speaking, a person deserves help if (1) the help is temporary, i.e. a hand up and not a handout; and (2) the person suffers through no fault of his or her own. The help, of course, must always be voluntary. While a person has every legal right to give charity to someone not deserving by this definition, it’s appalling tyranny to watch the government impose it as an entitlement. (This applies to corporations no less than individuals, by the way.)
Government does not give hands up, as most people—Donald Trump included, in his defense of the so-called social safety net—mistakenly assume. Government provides toxic incentives to stay on the benefit. I cannot tell you the number of people I have known, through my work, who reluctantly get dependent on a government handout (e.g. Social Security disability), a meager income to be sure, and then feel an incentive not to work lest they lose the benefit. It shatters their spirit, their self-confidence, and any rational incentive to make themselves into capable, self-sustaining individuals, even on a modest scale. And this misery happens in the context of a stagnant, minimal income.
Entitlement and redistribution of wealth shatter lives. I am so sick of people against these things being on the defensive. We give the moral high ground to the likes of Hillary Clinton and Bernie Sanders, whose beliefs and policies are responsible not only for progressive destruction of the economy, but also for sacrificing the lives and souls of millions of people who become dependent on this permanent form of help.
One thing to remember is that, like our current Democrat candidates for President, Socialists don’t see you as anything other than a thing to be used. How that is different from any of the societies that have existed since the dawn of time is beyond me.
Venezuela, for example, is marching into the socialist future by marching into the socialist past. The latest news is that the entire population of the country is now subject to being drafted as agricultural laborers.
Venezuela said private and public companies will be obliged to let their workers be reassigned to grow crops, in a dramatic move in the middle of the country’s crippling economic crisis. The Labor Ministry announced the measure as part of the economic emergency already in effect; it will require all employers in Venezuela to let the state have their workers ‘to strengthen production’ of food.
This was announced in Venezuela more than a week ago, but the first reports showed up in English in the American press just a few days ago, and it is still being ignored by many mainstream outlets. It would have been a shame, after all, to upset all those dead-end Bernie supporters at the Democratic convention with disquieting news from utopia.
Anyone who knows much about the history of the twentieth century (which is to say, appallingly few of us) will experience a little shock of recognition from that report. This is precisely what the Soviets used to do, dragooning white-collar professionals—engineers, lawyers, playwrights, college professors—to trudge out to the fields at harvest time every year in a flailing attempt to squeeze production out of a disastrous system of “collectivized” agriculture.
I doubt it ever made much difference, and I’ve always suspected its real purpose was not to aid in the harvest but to remind the rank-and-file of the Soviet intelligentsia how easily the state could ship them off to do forced manual labor.
This is what used to be known as “universal labor conscription,” which was imposed by the Soviets in 1918, in which “all those capable of working, regardless of their regular jobs, were subject to being called upon to carry out various labor tasks”—a system pretty much identical to the Medieval institution of serfdom. The measure under which this system was imposed was called the “Declaration of the Rights of the Toiling Masses and Exploited People.” George Orwell never had to make anything up.
Now we’re seeing this again in Venezuela. As in the Soviet Union, Venezuela has specifically targeted agriculture and food production and distribution to reshape according to socialist ideals. For the Soviets, the big targets were the kulaks, prosperous free-holding farmers, who were viewed as dangerously independent and had to be replaced by collective farms. For Venezuela, it was the supermarkets and shop-owners who were targeted as exploiters and enemies of the regime. The result is the same: a chronic shortage of food that has people scavenging in dumpsters and raiding zoos to slaughter animals for their meat.
This is a revealing story about the actual meaning of socialism and what it really does for “the worker.” It ends in an “economic emergency” being used as an excuse for the state “giving itself authority to order individuals from one job to another.” So the advanced economic system of the future ends up being, in practice, a throwback to the primitive economic system of the barbaric past.
The end is near for Venezuela. It takes real genius to create endemic poverty in a state where you can pump resources out of the ground, and famine where the growing season is year-round. Yet that is what Venezuela has managed. So they can no longer even afford the ink to print the fake money.
This is probably not going to end well for Venezuela, which is what happens when romantic nostalgia meets harsh realities. The fact is that once things become involuntary, the blood will inevitably start to flow. If what happened in the last century is any indication, there will be lots of blood, but very little food and none of the prosperity that the Progressives always promise.
China has perhaps the bloodiest history. Here is slavery and Socialism run amok. Far from being a paradise, China was turned into hell on earth. All common sense and internal knowledge was sacrificed to the romantic vision, and in the end there was nothing to show for it. The irony is that if Mao had gone to Hong Kong and talked to Mr. Cowperthwaite, all his goals would have been achieved. Instead he wanted to let the country starve. The problem is that if you starve the countryside one year or so, the cities aren’t going to have anything to eat the next year.
The video makes the sickness almost seem like a good thing. The results speak for themselves. By enslaving everyone to the gods of production, the Chinese Communists drained away the souls and possibilities of all those people. They also laid a trap in the network of lies that was inevitable as the people in charge desired to prove that the fantasy was real. But the fantasy was impossible, and Mao was not a messenger of god.
At about the same time as the Great Leap Backward, the US invented the supermarket. At the same time that the Chinese were literally eating themselves, food here was so abundant that produce was piled up for people to buy. The advocates of Socialism here in the States have always had supermarkets. They cannot seem to wrap their heads around the fact that in a Socialist society there are no extras like supermarkets stuffed with food.
I’ve had the honor of knowing many who have escaped from the Socialist East of the Soviet Empire. I’ve heard the stories of privation and inequality from people with first hand experience. Socialism is not an abstract to them, but something they endured and overcame. I do know one thing, that almost all of them would do just about anything to prevent the same things from happening here. We should learn from that.
The idea that Socialism results in a more equitable and efficient society has never been proven as a fact. Rather the opposite, in fact. Every attempt to create a Socialist utopia has resulted in a bloody, corrupt, massively totalitarian society, with the people on top soaking up the wealth and the rest of the populace barely surviving. Every single one. It’s time to ask if this is who we want to be.
As I’ve pointed out before, the reality is that Socialism and a modern technological society cannot coexist. A society that has the benefit of technology has to be antifragile so that it can absorb the disruptions and changes the new technologies bring. A Socialist society is inherently fragile, as fragile as the feudal societies that the Socialists draw inspiration from. Socialism cannot survive in a society that requires change or progress. So that is the choice we make: move forward with Liberty or back with Socialism. Which do you choose?
For more on the dysfunctional economy click Here or on the tag below.
Attention Deficit Disorder/Attention Deficit and Hyperactivity Disorder: A common diagnosis for children who demonstrate marked degrees of inattentiveness, who exhibit impulsivity and, in some cases, hyperactivity. A medical diagnosis is given to children who exhibit symptoms before the age of seven, and medication or behavior modification programs are frequently prescribed. Typical behaviors include a short attention span, being highly distracted, acting before thinking about the results, constantly interrupting, and engaging in risky or dangerous behavior. Children with the hyperactive component are squirmy and fidgety, talk excessively and have difficulty participating in quiet activities.
- Adoptee
A person who has been adopted.
Adoption is a way of meeting the developmental needs of a child by legally transferring ongoing parental responsibilities for that child from the parents to Adoptive Parents and in the process, creating a new kinship network that forever links the birth family and the Adoptive Family through the child who is shared by both. This new kinship network may also include significant foster families, both formal and informal, that have been a part of the child’s experience. The role of Resource Parents is to consider the adoption option when children and youth cannot return to their parents. Through adoption, Adoptive Parents keep the child or youth connected to their past.
- Adoption subsidy (AAP)
Adoption Assistance Program (AAP) – AAP is financial and/or medical assistance given on an on-going basis to an adoptive parent to help with the child’s special needs. This subsidy may be provided through federal, state, county and/or local resources. (Also see Title IV-E.)
- Best Interest of the Child
“Best Interest” includes concepts of a child’s sense of time, as well as a child’s need for safety, well-being and permanence (a family intended to last a lifetime). Resource Parents serve as advocates for the best interest of the child.
- Blind
Used to describe a person with total loss of vision. Persons with partial vision may be described as partially sighted, visually impaired, or persons with partial vision.
- Case or Family Conferencing
The caseworker is responsible for periodically bringing together key stakeholders involved with a family and child, to review progress, to assess strengths and needs and to plan with them. Resource Parents attend case conferences and should participate actively to assist in reviewing progress, assessing strengths and needs and planning for the future.
- Case Planning
This is a process whereby the Children’s Social Worker helps families make effective plans for the safety, well-being and timely permanence for their children. Resource Parents are active stakeholders in the case planning process, especially as it relates to the child in foster care.
- Case Review
Law requires that every child in foster care have a review of his or her case, to confirm that policy and law are being assured. Judicial review happens every six months for every child in out-of-home care. Resource Parents are expected to participate actively in case review.
- Cerebral Palsy
A group of conditions resulting from brain damage before, during or shortly after birth. The most obvious symptom is an inability to coordinate or control muscles in one or more parts of the body. There is a wide range in the level of disability. In more serious cases, mental retardation, convulsive disorders and problems with thinking, vision and hearing may occur.
- Child Protective Services
The legal intervention of child welfare agencies, through the judicial (court) system, to protect children and families. Resource Parents are often called upon to provide information to the child protective services worker, as well as to testify in court.
- Children’s Social Worker (CSW)
The representative who works with the family that is receiving services from the Department of Children and Family Services (DCFS). This person may work with the birth family, the court, outside agencies providing ancillary services to the child, the child’s out-of-home caregiver (if the child was not able to remain safely at home) and others to ensure that the birth family and/or child are receiving all services needed to facilitate reunification and ensure timely permanence.
- Closed adoption
An adoption in which identifying information about the birth parents and adoptive parents is not made available. California is a “closed adoption” state where all identifying information is considered confidential, so records containing this confidential information are usually sealed as a result of state law and/or court order.
- Concurrent Planning
The development of two permanency goals at the same time: reunification and an alternative permanent plan should reunification efforts prove unsuccessful. Concurrent planning allows for the contingency of finding a resource family which will support efforts to reunify the child and family yet also, if necessary, adopt a child who cannot return home. Concurrent planning allows child welfare agency staff to identify, recruit, process and approve a qualified family for adoption while filing the petition to terminate the parental rights of a child’s parents.
- Confidentiality
The policy or law limiting information that may be discussed about children and their families. Resource Parents must maintain confidentiality.
- Congenital disability
A disability that has existed since birth. Birth defect is no longer considered an appropriate term.
- Consolidated Home Study
Also referred to as home study or family assessment. The process of dually preparing a family to obtain a foster care license from the state and assessing the family for the adoption of a child – the two processes have been consolidated into one. It is the practice of educating prospective caregivers for children about adoption, ensuring that their home would be a safe and appropriate place for a child, and determining what kind of child would best fit into that family. Family assessments are usually done by social workers affiliated with a public or private adoption agency. Independent social workers, adoption attorneys and other adoption facilitators may also do family assessments. An approved assessment is required before a child can be placed for adoption.
- Creating Alternative Plans
The child welfare agency must begin creating alternative plans for permanency for the child(ren) with the family at the opening of a child welfare case. Alternative plans include relative placement, foster care, guardianship and adoption. Resource Parents work collaboratively with the caseworker.
- Custody
The legal responsibility for the care and supervision of a child.
- Deaf
Used to describe a person with total loss of hearing. Persons with partial hearing may be described as hearing impaired, having a hearing impairment or having a partial hearing loss.
- Developmental Disability
A chronic mental/cognitive and/or physical impairment incurred before the age of 22 that is likely to continue indefinitely. The disability may substantially impact independent functioning and may require life-long support. The term includes people with mental retardation, cerebral palsy, epilepsy, autism, and sensory impairments. These impairments may have been present from birth or may have resulted from a traumatic accident. Although this is the federal definition, some states or other organizations serving people with developmental disabilities may use a broader or narrower definition to include those they are able to serve.
- Disability
A temporary or permanent condition that interferes with a person’s ability to function independently – walk, talk, see, hear, learn. It may refer to a physical, mental or sensory condition. Terms no longer considered acceptable when talking about people with disabilities include: disabled, handicapped, crippled or deformed. Acceptable descriptions include: a person with a disability, a child with special needs, a boy who is visually impaired, a girl who has a hearing loss, a child who uses a wheelchair.
- Disruption
When a child placed for adoption is removed from the prospective adoptive home and returned to foster care before the adoption is finalized. Reasons for disruptions vary but are generally the result of some incompatibility between the child and the family. In many cases, the child is eventually placed with another adoptive family. The family who could not keep that child may consider other children.
- Down Syndrome
A person with Down Syndrome is born with an extra chromosome. This causes mild to moderate mental retardation, slanted eyes, short stature and poor muscle tone. Respiratory infections and congenital heart disease are common and generally treatable.
- Emotional Maltreatment
Emotional maltreatment is defined by state law and is usually indicated by a combination of behavioral indicators including speech disorder; lags in physical development; failure to thrive; hyperactive/disruptive behavior; sallow, empty facial appearance; habit disorder ( sucking, biting, rocking); conduct/learning disorders; neurotic traits (sleep disorder, inhibition of play, unusual fearfulness); behavioral extremes; overly adaptive behavior ( inappropriately adult or infantile); developmental lags; attempted suicide. Resource Parents help children heal from emotional maltreatment. They also may serve as models, teachers or mentors to parents to prevent future emotional maltreatment.
- Emotionally Disturbed
Term used to describe a person with behaviors that are outside the norm of acceptability. A child may be emotionally disturbed as a result of a traumatic or stressful event in his/her life. The emotional disturbance may be temporary or chronic; it may be organic or purely functional. A high percentage of children available for adoption are considered to be emotionally disturbed to some degree as a result of abuse, neglect, and removal from their family.
- Finalization
The action taken by the court to legally make an adopted child a member of his/her adoptive family.
- Fost-Adopt Placement (legal risk or fost-to-adopt)
Adoptive/foster placements involve foster children who are not legally free for adoption but who may become available for adoption pending a legal termination of birth parents’ rights. If it is unlikely that efforts to reunite the family will be successful, the child will be placed in a resource family that is licensed for foster care and approved to adopt.
- Foster Care
Foster care is a protective service for families. Foster care usually means families helping families. Children who have been physically abused, sexually abused, neglected or emotionally maltreated are given a family life experience in an agency-approved, certified or licensed home for a planned, temporary period of time. The primary goal of foster care is to reunite children with their families. Resource Parents are often in a position to help children and their families reunify. Resource Parents are also often in a position to emotionally support parents who cannot do the job of parenting and must make a plan for adoption or another permanent plan for their children. The role of the Resource Parent today is to help protect both children and their families as they work toward reunification or an alternative permanent plan if reunification is not possible.
- Foster Parents
Also known as Resource Parents in LA County. People licensed by the state, or certified through a Foster Family Agency, to provide a temporary home for children who cannot safely live with their birth parents.
- Front Loading Services
This is a term indicating that the agency puts in place as many services as possible early after a case is opened in order to prevent removal or to achieve timely reunification. Working with the caseworker, Resource Parents must be aware of and participate in (where appropriate) the services available to the parents and the child in their care.
- Full Disclosure
Parents of children in foster care must be fully informed of their child(ren)’s status, DCFS and Court processes and the possible outcomes based on their participation in their reunification efforts. They need to know everything the agency staff knows. They need full information about all the alternatives they face, as well as the legal timeframes. Likewise, Resource Parents must receive all information about a child available at the time of placement. Working with the CSW, Resource Parents must share as much information as is available with the parents of the child in their care. Resource Parents should develop a list of questions to ask the CSW before and at the time of placement.
- Group Home
A large foster home licensed to provide care for several children (perhaps up to 10). Some group homes function as family homes with parents who are always available; others have staff members who work at different times along with the group home parents.
- Guardian Ad-Litem (GAL)
A person appointed by the court to represent a child in all court hearings that concern him/her.
- Independent Adoption
In an independent adoption, birth parents choose the child’s adoptive family and place the child directly in the home. It is usually done with the assistance of an attorney. An independent adoption may also take place when a child who is not a dependent of the court is adopted by his or her relative.
- Individualized Education Plan (IEP)
IEPs are the result of an educational assessment that determines that a child has significant learning challenges. Such a plan is made for children who are having difficulty learning in school, whether due to learning disabilities, developmental disabilities or emotional and behavior problems. Learning and behavioral goals and objectives with specific measurable outcomes are identified.
- Life Book
A child-driven collection of pictures, stories and drawings that tell about the life of a child. This book is particularly important for children in foster care who have moved from place to place and have lost significant people in their everyday lives. A child’s life book is an excellent therapeutic tool and may be a treasured keepsake.
Term used when an adoptive family has been selected for a waiting child. In most cases, the family is getting to know more about the child, but the child has not yet moved into the adoptive home.
- Mental retardation
A level of intellectual functioning that is significantly below average. A person with mental retardation generally has an IQ below 70. Also referred to as cognitive impairment.
- Mentally ill/mental disorder
Term used to describe a person whose thought processes and/or behaviors do not fit the norm. Many mental illnesses are attributed to a chemical imbalance in the brain and can be effectively treated with medication or psychological counseling. Some mental illnesses seem to run in families. A mental illness is not the same as mental retardation, though intellectual functioning may be negatively affected by the behaviors associated with the mental illness.
Neglect is defined by state law and some of the common indicators include: being underweight, poor growth patterns, consistent hunger, poor hygiene, inappropriate clothing, inadequate supervision, unattended physical problems or medical needs, abandonment, begging and/or stealing food, extended stays or rare attendance at school, fatigue, delayed speech development, seek inappropriate affection, expressionless, assume adult responsibilities and/or concerns, abdominal distention, bald patches on the scalp, substance abuse, vocalize in whispers or whines.
- Open Adoption
An adoption where there is some interaction between the birth family, adoptive family and the adopted child. Generally the adoptive family and the birth family agree to a level and style of communication that is comfortable for both parties and in the best interests of the child. Communication may be by phone, correspondence or personal contacts. In a semi-open adoption, contact may be maintained through an intermediary, usually the adoption agency.
- Orientation Meeting
An initial group meeting for prospective foster and adoptive (resource) parents where information about the agency’s procedures and policies is explained and questions about foster care licensure and the family assessment process may be answered.
Permanence is the assurance of a family for a child intended to last a lifetime. Permanence assures a child a family where he or she will be safe and nurtured. Resource Parents work in coordination with the Children’s Social Worker and others to assure that a child returns to his or her birth family home or, should that turn out not to be a safe option, that the child has a timely plan for alternative permanence such as adoption or placement with extended family.
- Permanency Hearings
Originally called a “dispositional hearing,” the “permanency planning hearing” is held 12 months after a child enters foster care. A child is considered to have entered foster care on either the date of the first judicial finding of deprivation (i.e., adjudication) or the date 60 days after the child is removed from home. The court explores the option of ordering activation of the identified alternative permanent plan. Resource Parents attend permanency hearings and participate as the judge requests.
- Permanency Planning
Permanency planning is the formulation of methods to provide services to children and their families to help keep children with their parents if at all possible. If children cannot live with their parents, permanency planning provides for placing children with relatives. If a relative placement is not possible, permanency planning provides for temporary, short-term, foster care placement with a plan to return to the parents. Finally, if return to the parents is not possible, permanency planning provides for alternative permanence via adoption, guardianship or independent living, depending upon the age, strengths and needs of the child and family. Resource Parents are active members on the permanency planning team, helping the family and CSW formulate plans.
- Physical Abuse
Physical abuse is defined by state law and is usually indicated by unexplained bruises, welts, burns, fractures/dislocations and lacerations or abrasions. Other behavioral indicators include a child who feels deserving of punishment, is wary of adult contact, is apprehensive when other children cry, is aggressive, withdraws, is frightened of his or her parent(s), is afraid to go home, reports injury by parent(s), often has vacant or frozen stares, lies very still while surveying surroundings (infant), responds to questions in monosyllables, demonstrates inappropriate or precocious maturity or indiscriminately seeks affection. Resource Parents must report any physical abuse experienced by a child, as well as help a child to heal from physical abuse. They also may serve as models, teachers or mentors to parents to prevent future abuse.
- Placement
A child may have had numerous out-of-home placements after a social services agency has determined that a child is not safe in his/her current home. The agency may place a child with relatives, in an emergency shelter, foster home, group home, residential treatment center or psychiatric hospital. This term is also used to refer to the day when a child moves into an adoptive home.
- Post Adoption Services
Services provided by an adoption agency and/or other community resource to the adopted person, the adoptive parents and/or birth parents after an adoption has been legally finalized. These services may include assisting with reunions, providing non-identifying information, referrals for counseling, support groups, and respite care.
- Post-Placement
The period between the time when a child moves into the adoptive family home and the finalization of adoption. A variety of post-placement activities may be offered by an adoption agency to an adoptive family, such as counseling, referrals, support and visits by a social worker.
- Post-Traumatic Stress Disorder (PTSD)
A set of behaviors resulting from experiencing or witnessing an event or series of events which were most likely of a violent or abusive nature and traumatic for the child. Children who have been removed from their homes, have lost significant people in their lives and lived in multiple foster homes also may have this disorder. Some of the characteristics include flashbacks, persistent thoughts and dreams related to the event/s, and dissociation. Therapy has proven to be effective tool in helping children recover from traumatic experiences.
- Reactive Attachment Disorder
An emotional and behavioral disorder marked by a child’s inability to establish a healthy parent-child relationship of trust and reciprocal exchange of affection. This is most often a result of repeated separations from a primary caretaker and disruptions in the cycle of the child’s feelings of need and having those needs satisfied before the age of five. Children with reactive attachment disorder may fail to initiate or respond appropriately to most social interactions, or they may be indiscriminate in their interactions, overly friendly with people they don’t know. A great deal of material on this subject, as well as parent support groups, is available for adoptive families of children with this disorder.
- Reasonable Efforts
Although defined by state law, this term simply means that the child welfare agency has done everything reasonably possible to prevent removal and/or to achieve reunification. Resource Parents must participate actively in assuring that all possible steps are taken to help a family achieve reunification.
- Relinquishment
The voluntary act of transferring legal rights for the care, custody and control of a child and to any benefits which, by law, would flow to or from the child, such as inheritance, to the adoption agency or another family through the adoption agency. An adoption agency or lawyer must work with the state court system to make a relinquishment legal. (See Termination of Parental Rights.)
- Residential Treatment Center (RTC)
A placement that provides care for more than 10 children. May also be referred to as a residential child care facility where housing, meals, schooling, medical care and recreation are provided. Therapists, counselors and teachers are trained to meet the needs of children with emotional and behavior problems.
- Respite Care
The assumption of daily caregiving responsibilities on a temporary basis. Usually designed as a 24-hour-a-day option to provide parents or other caregivers temporary relief from the responsibilities of caring for a child.
Risk is the likelihood of any degree of long-term harm or maltreatment. It does not predict when the future harm might occur but rather the likelihood of the harm happening at all. Resource Parents can help caseworkers assess risk and likelihood of future harm.
Safety refers to a set of conditions that positively or negatively describes the physical and emotional well-being of children. A child may be considered safe when there are no threats of harm present or when the protective capacities can manage any foreseeable threats of harm. Resource Parents must keep children safe and free of threats of harm.
- Searching For Relatives
Law requires that the child welfare agency search for any relatives with whom the child can be placed for alternative permanence, either for foster care or for adoption/guardianship. Resource Parents may discover information about relatives and must share that information with the agency CSW. When relatives become resources for the child and family, Resource Parents share information and help transition children to the home of the relatives.
- Seizure
An involuntary muscle contraction which is a symptom of epilepsy or a brain disorder. A convulsion refers to seizures that involve contractions throughout the entire body. Many seizure disorders can be controlled with medication. The term “epileptic” is no longer considered acceptable.
- Sexual Abuse
Sexual abuse is defined by state law and is usually indicated by a child’s disclosure and a combination of physical indicators including difficulty in walking or sitting, torn, stained, or bloody underclothing, pain, swelling, or itching in genital area, pain during urination, bruises, bleeding or laceration in external genitalia, vaginal, or anal areas; vaginal/penile discharge, venereal disease, especially in pre-teens, poor sphincter tone, pregnancy, bizarre, sophisticated or unusual sexual behavior or knowledge, poor peer relationships, delinquency, running away, change in school performance, withdrawal, fantasy or infantile behavior.
- Shelter Home
A licensed foster home that is prepared to take children immediately after they have been removed from their birth home. Shelter homes keep children for a short period of time, generally no more than 90 days. If a child cannot return home, he/she will be moved to a regular or specialized Resource Family that is prepared to meet the child’s needs.
- Spastic
Describes a muscle with sudden, abnormal involuntary spasms. People with cerebral palsy often have spastic muscles. It should be used to describe a muscle rather than a person.
- Special Needs
Term used to identify the needs of a child waiting for adoption. Nearly all children in foster care are considered to have special needs due to their age, ethnic heritage, need to be placed with siblings, and physical, mental/cognitive, and emotional problems that may be genetic, the result of abuse and neglect, or the result of multiple moves in foster care.
- Speech Impairment
Difficulty producing readily understandable speech. A person with speech impairment may have limited speech or irregular speech patterns.
- Termination of Parental Rights
Legal action taken by a judge to terminate the parent-child relationship. This action ends the rights of a parent to the care, custody and control of a child and to any benefits which, by law, would flow to or from the child, such as inheritance. When the parental rights of both birth parents have been legally relinquished or terminated, the child is considered legally free for adoption. The 366.26 hearing is frequently where this occurs.
Because a child experiences time differently than adults, it is important to make decisions based upon a child’s sense of time. Legally, because of the passage of the Adoption and Safe Families Act (ASFA), the permanency planning hearing must be held 12 months after a child enters foster care. The child welfare agency must initiate or join in termination proceedings for all children who have been in foster care for 15 out of the most recent 22 months. (The law also provides for circumstances in which it is not necessary to file such proceedings.) Resource Parents must be aware of timeframes and help the agency worker progress in a timely manner.
- Title IV-E
The Title IV-E AAP is a federal program that provides assistance to families adopting qualifying children from foster care. Money through this program is distributed to adoptive families by each state.
- Waiting Child
Term used to identify a child, usually in the foster care system, who is waiting for adoption. These children generally are of school age, members of a sibling group, children of color, and have physical, mental/cognitive, and emotional problems that may be genetic or the result of experiences of abuse and neglect.
Well-being includes the physical, emotional, social, mental and moral/spiritual healthy development of a child. Resource Parents must assure well-being of children in foster care.
The Middle Ages, c1080
Bingham, then spelled ‘Bingheham’, is listed in Domesday as part of the lordship of Roger de Busli, but there is no mention of a church. Roger de Busli owned lands throughout Nottinghamshire and founded Blyth Priory in the north of the county.
Bingham was clearly a significant community for much of the Middle Ages. At the time of Domesday it was the meeting place of the ‘Bingameshou Wapentake’, alternatively known as the ‘Bingham Hundred’, there being six such divisions of Nottinghamshire at that time. By 1291 it was clearly important ecclesiastically too, being made the centre of the ‘Bingham Deanery’, which was later divided into three (now two) deaneries. This suggests that even if no church existed in 1086, it is likely that one would have been founded very shortly after that date. Indeed, the font in the church may possibly have come from an earlier chapel.
In around 1225 the first known Rector of Bingham began the building of the current church, known as All Saints’ Church, starting with the lowest part of the tower.
Who was this first Rector, though? It is known that he was the son of an Earl, and that he later became a bishop; but what was his name, which Earl was his father, and of where did he become bishop? It used to be thought that his name was Roger, that he was alternatively known as Robert, that he was the son of the ‘Earl of Saunty’, and that he became Bishop of Salisbury. However, work by the Rev F Bingham shows that this has almost certainly come from a rare misreading of an original text by Thoroton, which misunderstanding has been handed down to others. It now appears most likely that the first Rector of Bingham was called William, that he was the son of the Earl (or Count) of Savoy, that he held Bingham in plurality with the parish of Wer in Lancashire, and that in 1226 he left both parishes to become Bishop of Valence.
Roger de Busli had died without an heir in the time of William Rufus, and his possessions had reverted to the Crown. King Henry III granted the manor of Bingham to William, Earl of Derby in 1235 and he enfeoffed it to a Ralph Bugge in 1266. (This was a feudal arrangement which meant that Ralph Bugge became lord of the manor of Bingham, owing allegiance to the Earl of Derby as overlord.) Ralph Bugge was a wealthy Nottingham wool merchant who had also made investments in land. After his death his son, Richard, changed his name (perhaps understandably!). Thus when he was knighted by King Edward I he became Sir Richard de Bingham. He served in two Parliaments and was Sheriff of Nottinghamshire and Derbyshire in 1302.
Not all the rectors of Bingham have been as distinguished as the first one. In 1280 there was a law suit in the Church courts between the rector Robert Bugge and Blyth Priory. The Priory, being Roger de Busli’s foundation, was entitled to tithes from lands around Bingham, but the rector was accused of appropriating these. A settlement was arrived at whereby the rector would pay 4 marks per annum to the Priory.
The building work on the church continued through the 13th Century and into the 14th. The rest of the tower, together with the main building, were added over a period of about 100 years. This includes the nave, chancel, side aisles and transepts. (The north transept may be a little later.) The porch and the room that is now the choir vestry appear to have been added around the same time, or very shortly after. Sir Richard de Bingham, now lord of the manor, contributed significantly to this work, and without his encouragement and financial aid it would never have been brought to its present form. Sir Richard died sometime between 1308 and 1314 and his tomb is probably marked by the effigy of a cross-legged knight inside the church near the main altar. Richard was succeeded by his son, Sir William de Bingham, who died around the time of the Black Death in 1349, and the partial alabaster effigy in the church is probably the remains of his tomb.
One surprising thing is that, at the same time as the parish church was being completed with his help, Sir Richard was also building a private manorial chapel for his own family use at the opposite end of Bingham (at the corner of what is now Kirkhill and School Lane). In 1301 he obtained permission to found this chapel, dedicated to St Helen, which was licensed by the Archbishop of York in 1308. It had its own priest to say prayers for the souls of his family past and present.
Rather less than one hundred years after the original church had been completed, a further wave of building activity took place, this time associated with the new lords of the manor, the Rempstones, who had replaced the now impoverished de Bingham family. Sir Thomas Rempstone, senior, had a spectacularly successful career in the service of Henry, Duke of Lancaster, the nobleman who forced the abdication of Richard II in 1399 and ascended the throne as King Henry IV. Sir Thomas, who was rewarded by the king with property in Bingham, was accidentally drowned in the River Thames in 1406 and buried in the chancel of Bingham church. Around this date the north window of the north transept must have been replaced, as was the east window of the chancel, this latter being designed with tracery in the new ‘Perpendicular’ fashion. The original parts of the wooden chancel screen date from around this time too.
Sir Thomas was almost certainly responsible for founding a Guild of St Mary in the church in 1400, which had the express purpose of employing a priest to pray for the new king and his family. The south transept was used as a chapel for the Guild, making use of the piscina and aumbry that are still there. By the time it was abolished at the Reformation in about 1550, the Guild owned property not only in Bingham but also in the nearby villages of Aslockton, Tythby and Radcliffe-on-Trent, as well as a Guildhall standing in Bingham Market Place. This was presumably a timber-framed building used for feasts on ‘holy-days’ and other social gatherings.
Sir Thomas’ son (of the same name) accompanied King Henry V to France with a force of 32 retainers, and fought at Agincourt. In 1426 he was captured by the forces of Joan of Arc and ransomed. He died in 1438 and a carved alabaster effigy of both him and his wife survived in Bingham church until the late 1600s. The two windows in the east side of the North Transept were replaced in ‘Perpendicular’ style approximately at the time of his death, and contained stained glass bearing the family coat of arms.
The Reformation in the 16th Century brought about the most dramatic change in the whole history of Bingham’s church, as it did for all other English churches. From the point when in 1534 King Henry VIII signed the Act of Supremacy, declaring that the Church of England was no longer subject to the control of the Pope in Rome, Christian practice in England began to change. Masses for the dead were banned, and the chantries that these had supported were dissolved, their endowments being taken by the Crown. The monasteries likewise were dissolved, and again the Crown took their assets. To steer through his ecclesiastical reforms, Henry appointed Thomas Cranmer as Archbishop of Canterbury in 1533. Cranmer was a gentleman’s son from Aslockton, just two miles away from Bingham, and he knew Bingham Church. A letter survives from 1533 written by him to his sister in Radcliffe-on-Trent recommending to her the school run by the Bingham rector, the Rev John Stapleton. This school is believed to have been held in what is now the choir vestry. It is possible that Cranmer himself received his early education here.
In reality there were few significant changes to the services in English churches while Henry VIII remained on the throne. But when Henry died in 1547 and was succeeded by his son Edward VI who was still a child, Cranmer was able to bring in the reforms that he felt were necessary. An English prayer book was produced in 1549, taking the place of all the services in Latin. (Cranmer was himself the chief author of this.) A second prayer book in 1552 moved the Church in a yet more Protestant direction. Along with the change of language, services were simplified and a lot of the more colourful rituals removed. Candles and coloured vestments were no longer to be used; many decorative features in churches such as statues and stained-glass windows were discouraged, and in many cases removed.
There were those who felt that Cranmer’s reforms did not go far enough, and especially from the early 17th Century the more extreme form of Protestant religion known as Puritanism became more widespread. The different approaches to religion were, of course, one of the factors leading to the Civil War in the 1640s, the removal and execution of King Charles I and the rise to power of Oliver Cromwell. It is well known that the Puritans under Cromwell sought to remove everything that they felt had Catholic leanings, and this led them to destroy statues, stained-glass windows, paintings and many other decorative features in churches. Celebrations such as Christmas were banned, and church weddings discontinued. (In Bingham civil weddings were conducted in the Market Place.) Eventually, following the Restoration of the royal family with the coronation of King Charles II in 1660, a more moderate approach to faith prevailed.
These wider events had their effect on Bingham. One particular result of the reforms was the dissolution of the Guild of St Mary. In 1553 its Guildhall and assets were granted by the Crown to Thomas Reeve and George Cotton, who must have been supporters of the royal cause. During the Commonwealth the Norman font was thrown out of the church, and it was replaced by a new one in 1663 after the Restoration.
During these years the Rectors of Bingham included some fairly illustrious clergy. Robert Abbot became rector in 1598. He was the elder brother of George Abbot who was Archbishop of Canterbury from 1611 to 1633. Robert Abbot was appointed Master of Balliol College, Oxford in 1609. He retained his rectorship of Bingham, so may not have been resident in the parish for much of his time! In 1615 he became Bishop of Salisbury.
A third successive future bishop succeeded John Hanmer. This was Matthew Wren, uncle of the famous Sir Christopher Wren, architect of St Paul’s Cathedral. Matthew Wren was appointed to the living of Bingham in 1624. He must have spent even less time in Bingham than his predecessors. He served as Master of Peterhouse, Cambridge, from 1625 to 1634. He was also during this time Prebendary of Winchester, Vice-Chancellor of Cambridge University and Dean of Windsor. He became Bishop of Hereford in 1634, Bishop of Norwich in 1635 and Bishop of Ely in 1638.
Of these three, John Hanmer was a man with puritan sympathies, as can be told from the books he encouraged. Matthew Wren, on the other hand, was a supporter of Archbishop Laud and the royalist cause and, as a result, from 1641 to 1659 he was imprisoned in the Tower of London.
It was not untypical for supporters of the two sides to rub cheek by jowl, and the religious issues could divide families. This was true of the Porter family in Bingham. One of that family, Samuel Porter, was a puritan and was Rector of Bingham from 1643. He, along with a number of other members of the family, was buried inside the church. (They were also local landowners.) Another member of the family, Henry Porter, disagreed with other members of his family on religious matters, and chose to be buried outside the church, so that the wall of the church itself would keep him apart from them.
In 1662, after the Restoration of the monarchy, Dr Samuel Brunsell was formally instituted as rector, a post he had in effect been fulfilling since 1648. Recent researches by C Davies have shown that this was probably a reward for services rendered. During the Civil War Samuel Brunsell had spent time in the Netherlands, and was closely involved with members of the exiled royal family. He served Katherine Stanhope, Countess of Chesterfield, who after her husband’s death took care of Princess Mary (daughter of King Charles I and child bride of Prince William of Orange). He came back to England in 1648 and somehow managed to convince the Parliamentary Commissioners that he was a suitable man to take up ministry in Bingham. During the following years, though, he acted as an undercover agent for the royalists – perhaps the 17th Century equivalent of a spy – as part of Lady Stanhope’s attempt to raise money to overthrow Cromwell and the Commonwealth. After the Restoration, Brunsell received several rewards of which the living of Bingham was one. (It was a rich living, then in the patronage of the Stanhopes.) He also held the livings of Screveton and of Upton at the same time, as well as canonries at both Southwell Minster and Lincoln Cathedral. There was a connection too with a former rector. Brunsell’s elder brother, Henry Brunsell was another royalist supporter who was well rewarded after the Restoration. Henry Brunsell married the sister of Sir Christopher Wren, and Matthew Wren was their uncle.
Brunsell is also known as one of the last men officially to ‘lay’ a ghost. Apparently this particular ghost walked because its body had not received a proper Christian burial. Dr Brunsell caused a coffin to be prepared and a grave dug. Cornelius Brown then relates:
This has been told to me by an old man whose grandmother heard it from her grandmother (all Bingham folk), that Dr. Brunsell, majestic in wig and gown, with the populace in procession, escorted that coffin, borne on bier shoulder high, from Chapel Lane to the churchyard, where the solemn burial service was read and the coffin lowered into the grave. Thenceforth that restless spirit troubled no more the good people of Bingham.
Samuel Brunsell was buried in the north transept of the church, and a finely engraved slate slab placed over his tomb. When the new platform was put into the church a trapdoor opening was left so that the stone could still be viewed. After his death in 1687 he was succeeded as Rector by his son, Henry Brunsell, who is also buried in the north transept and has a viewable tomb.
Apart from the removal of decorative items and those seen as ‘popish’, there were few changes to the building during this period. The church was given a lower flat roof in 1584 (though it was later raised again). This made the interior of the nave dark and oppressive. The oldest of the church bells also dates from this era, traditionally said to have been cast to commemorate the English defeat of the Spanish Armada in 1588. This bell was made by Francis Wattes who was a native of Bingham and is said to have been buried in the churchyard. Another bell is surprisingly dated 1647, soon after Bingham life was disrupted by both the Civil War and an outbreak of the Plague, and a third was added after the Restoration in 1662.
Around the beginning of this period, or perhaps a little before, England underwent a revolution in both its economic life and physical appearance, when the large medieval open fields were ‘enclosed’ into the pattern of hedged fields which still survives in essence today. In the Bingham area this happened largely at the initiative of Philip Stanhope, Earl of Chesterfield, who was by far the largest local landowner, as well as being patron of Bingham Parish Church. The process modernized farming life, and incidentally produced a greater income for the parson in the form of ‘tithes’. Bingham was recognised as the richest parish in Nottinghamshire, based on the value of the tithes, ‘glebe land’ (which the rector farmed himself) and other income.
The 1700s were generally a period of religious apathy in the Church of England, and many parsons were only interested in the income they could glean from their parishes. Numerous churches fell into disrepair, and many people turned to the newly-emerging nonconformist groups such as the Methodists for spiritual fulfilment. The first Methodists were recorded in Bingham in the 1790s.
Bingham Church had only two rectors in nearly a century – from 1711 to 1810 – and both caused difficulties for different reasons. There was an initial burst of activity when the Rev Henry Stanhope (an illegitimate son of the Earl of Chesterfield) arrived in 1711. Above the chancel arch on the nave wall he erected a royal coat-of-arms in ornamental plasterwork which was later described in 1792 as ‘exceedingly elegant and esteemed a great amenity’. The arms (of Queen Anne) bore the date 1711 and were flanked by plaster cherubs. At the east end of the chancel the marble communion table was given a similar ornamental plaster reredos, incorporating the names of both the rector and his curate. (None of this work survives to the present day). Unfortunately soon after his arrival Henry Stanhope went mad and had to be confined for the rest of his long life, being declared ‘incapable of duty by means of a phrensy’; the services were provided by a succession of curates who were often priests serving at other churches as well as at Bingham. The results were not always felicitous. Throsby, while refusing to name the particular clergyman concerned, describes one occasion thus:
Suffice it to say that the jolly god, Bacchus, had so bountifully distributed his delicious draughts to the priest, that with much difficulty he read to the end of the Te Deum, “O Lord in thee have I trusted, let me never be confounded.” He then dropped on his breech, and slept his congregation out of doors. The boys of the town, sometime afterwards, on his leaving the church, saluted his ears with hoots and hisses into the fields.
From 1764 to 1810 the rector was the Rev John Walter, whose main interests appear to have been hunting and high living. He frequently resorted to the law over tithes and other matters of dispute with his parishioners. (He tried to revert to the practice of tithes being provided in kind, and would sit in the church porch for them to be delivered to him. The local farming community was less than delighted by this. Maybe some of the rector’s claims were valid that they had, for instance, watered the milk!) In 1770 he built a large new Rectory with a tithe barn to store the produce of his tithes. (The former parsonage house was a much more modest affair which stood in the (then) north-west corner of the churchyard. It was let out to tenants for a while, and was still standing in 1800, but was later destroyed. The Georgian Rectory was eventually pulled down in the 1960s, and the site used for the new school. Some cottages just south of the church were knocked together to make a new Rectory. This was never very satisfactory, and a new purpose-built Rectory was erected in 1990.)
In 1773, during the Rev John Walter’s incumbency, the chancel was ‘beautified’, probably by the erection of a ceiling to hide the roof, and new altar rails were added; these changes were however regarded by the 1840s as being ‘in barberous taste’! A new marble altar slab was brought from Bakewell in the Peak District in 1769. Writing in 1797 Throsby, making a sketch of the church, described it as ‘a neat, but not an elegant structure’.
John Walter does not seem to have got on well with his parishioners. Some of his actions appear to have been designed deliberately to annoy them, such as having four of them arrested for playing quoits in the Market Place. At other times the parishioners got their own back. An old custom was discovered of ringing the church bell in the very early morning to awaken the workers for work in the fields. The Rectory being so close to the church, the rector was the one who heard this bell most clearly!
Perhaps despite the rector, during this period church life remained vigorous and many parishioners were active performers of church music. A choir, or ‘society of singers’ was established by 1778 (which used to fine its members two pence for swearing in church), and there were also several fiddle players, and an enthusiastic group of bell-ringers, known as the ‘society of ringers’, who turned out to ring peals to celebrate national events such as the Battles of Trafalgar (1805) and Waterloo (1815). A number of people were members of both of these two societies. One such was Matthew Stewardson who died in 1780 and on whose gravestone in the churchyard it is inscribed:
This stone erected by the joint contributions of the Society of Singers and Ringers in this town to perpetuate his memory & also a manifest token of their concern for the loss of so valuable a member
Some events are known from this time which affected the church. On the morning of 21st September 1775 the spire was struck by a lightning bolt. The only deaths resulting from this were of 13 pigeons and 3 jackdaws, but eleven boys who were in the church were ‘thrown down with great violence’ and three of them were said to be ‘much scorched’. The bells were unharmed, but the clock was damaged, and some stones dislodged from the tower.
In December 1776 the church was broken into (perhaps a less common occurrence then than it is now). The thieves were not able to find much to steal, and made off with no more than some sacramental linen and gold lace from the pulpit.
It hardly ever happens that anyone answers the call in the banns of marriage that if anyone “know any cause or just impediment why these persons may not lawfully be joined together in holy matrimony, ye are to declare it.” It happened in Bingham, though, in 1791 when a young man seeking to be married was underage, and the marriage was forbidden by his father.
In 1810 the Rev Robert Lowe was appointed Rector of Bingham. Robert Lowe had an influence on national government policy with his ideas for dealing with the problem of relief for the poor. His ideas formed a basis for the 1834 Poor Law Amendment Act (also known as the New Poor Law). A workhouse was established in Bingham in 1818, and then a new one in 1837. He is probably best known, though, as being the father of Robert Lowe, later Viscount Sherbrooke, an important figure in 19th Century British politics and who served as Chancellor of the Exchequer from 1868 to 1873 as part of William Gladstone’s government. Viscount Sherbrooke was born in Bingham Rectory.
A memorial to Robert Lowe (Snr.) is on the south wall of the chancel.
Bingham church entered a new era in its history with the appointment as rector in 1845 of the Rev Robert Miles, a wealthy and forthright evangelical, son of a Member of Parliament, with a highly artistic family. In particular his youngest son, George Francis (Frank) Miles, who was born in 1852 became a fashionable London artist. Frank Miles also became a friend (and perhaps more) of Oscar Wilde who visited Bingham Rectory, and a friend also of Lillie Langtry (a celebrated beauty known as ‘the Jersey Lily’ and a close intimate of the Prince of Wales, later Edward VII). Lillie Langtry also visited the Rectory, and it is thought that she may have been a model for some of the figures painted in the church during Robert Miles’ time. (The figures in the angel painting and the Miles memorial window are the only ones surviving today that could be candidates.) Robert Miles’ wife, Mary, was also an artist, as was at least one of their daughters, and there was a cousin who was a musician and friend of Ralph Vaughan Williams.
Almost as soon as he arrived in Bingham, Robert Miles enlisted the services of Sir George Gilbert Scott, one of the foremost ‘gothic’ architects of his day, to restore the church and make other improvements. These were carried out in 1846. The changes involved removing almost of the alterations carried out in the previous century. The decorated plasterwork was all removed, and the chancel was altered, removing the ceiling, and probably adding the decorations that can still be seen on the beams. Much of the badly-eroded medieval window tracery was replaced. The current pews date from this era, as does the marble-tiled floor in the chancel which Robert Miles put in at his own expense.
Robert Miles also built the Bingham Church School at the same time. (The building is now called Church House and used as a church hall.) The first headmaster appointed for the school, in 1846, was Alfred Mowbray, a strong adherent of the Oxford Movement (ie the ‘High Church’ group in the Church of England). Mowbray, along with the Rev Nathaniel Keymer, the curate, had a significant influence on Miles, who became much more of an enthusiast for Anglo-Catholicism. In 1858 Mowbray left to found the well-known and influential religious bookshop in Oxford that still carries his name, though it has been a part of Hatchards since 2006. He is responsible for the design of one of the windows in Bingham Church.
Further restoration of the church was carried out under Miles’ direction in 1873, the architect this time being his artistic son Frank Miles, even though he was barely 21 years old. This included raising the nave roof. (The roof had been lowered in 1584 from its original height. The new roof is about 4 feet higher than the pre-1584 roof. The original roof line is visible on the interior west wall.) Three small clerestory windows were inserted on each side in the new upper sections of the walls.
The chancel screen was also redesigned at this time, and decorated by Frank Miles and by Robert Miles’ wife, Mary. Some of the carving in the church was undertaken by Eleanor, Robert and Mary Miles’ eldest daughter. This included some of the nave corbels, and carving on the oak reredos (which was later replaced by the current one).
Mary Miles was a not inconsiderable artist herself. In 1884 she executed a mural painting of a procession of angels playing musical instruments (depicting the 150th Psalm) as a frieze around the plastered walls of the church. (The frieze was removed in the early 1900s and most of it has been lost. A fragment of it was framed, and this was restored to the church some years ago and now hangs on the chancel wall.)
During Robert Miles’ time much stained glass was inserted in the church windows, and parts of the interior were painted with religious pictures, many executed by his artistic wife and children. These were frequently in the ‘Pre-Raphaelite’ style. The current clergy vestry was added to the church in 1863 to accommodate the organ; a new clock was installed in the tower in 1871, and the lychgate put in in 1881.
A plaque on the wall of the choir vestry acknowledges the work that was done on the church building in the time of Robert Miles.
Miles’ successor in 1883 was the Rev Percy Droosten, formerly Vicar of Prittlewell (Southend on Sea). He was something of an English and Greek scholar. He translated The Confessions of St Augustine into English from the original Greek, and wrote a number of articles on church law and history. He introduced full High Church services into the church amidst some opposition from his parishioners.
Droosten was followed by Canon Henry Hutt, rector from 1910 to 1933, whose views were more centrally Anglican. He did much to change the interior of the church, removing many of the Miles’ paintings, and encouraging his parishioners to make numerous gifts of elaborate furnishings to beautify the building. He employed the architect W D Caröe to restore the church in 1912-13 and to design many of the new fittings. These included particularly the choir stalls in the chancel.
After the First World War the upper section of the chancel screen was removed and replaced by a new screen as a War Memorial (amid some opposition from non-conformists who objected to what they felt was presumption on the part of the Church of England rector).
A recognised character of Bingham at around this time, and a committed member of the church, was Miss Ann Harrison. After the First World War, when she was already a very old woman, she felt it was necessary to raise some money to purchase a Roll of Honour book, to contain the names of those who gave their lives in the war. This she did by collecting scraps from many local households, which she then sold as pig food. She gained a great deal of respect, and after her death in 1928 at the age of 98 years, the community had a wooden statuette of her carved and placed in the church. The Roll of Honour which she purchased is still in use, and the names are read from it on every Remembrance Sunday.
In 1922 the bells were all taken down and they were rehung with two extra bells added. Further remedial work was carried out on the tower and spire at the same time, including on the weathercock. (While the weathercock was on the ground children from the school were encouraged to jump over it, so that they could claim in later life that they had once jumped over the weathercock that was on top of the church spire!)
In 1925 the church celebrated its 700th anniversary, with special services on All Saints’ Day, 1st November. (The church was still normally known as All Saints’ Church at this date.) To mark the anniversary a new altar was introduced into the church, with carved wooden cross and candlesticks. There also were two large candlestands and, most significantly, a large carved and gilded reredos was installed, incidentally hiding the lower half of the east window which had been designed by Mary Miles.
In 1926 the old Norman font was brought back into the church and restored. It was dedicated exactly one year (less a day) after the 700th anniversary service. Most of the money for this particular restoration was raised by the children of Bingham. After Canon Hutt’s death the baptistry was enclosed as a memorial to him, something he had wished to be done. The work was again by Caröe. The children once again contributed, this time by providing a figure of St Christopher placed on the west wall.
For a time in the mid 20th Century Bingham received the ministry of two Bishops. The Rt Rev Morris Gelsthorpe was instituted as Rector of Bingham in 1953. As a young man Morris Gelsthorpe had served as curate in Sunderland under the vicar, the Rev Bertram Lasbrey. The two men became firm lifelong friends. Lasbrey was appointed Bishop of Niger in 1922, and soon afterwards he invited Gelsthorpe to join him in ministry in Africa. Gelsthorpe became assistant bishop in Niger in 1932, later moving to serve in Sudan and becoming first Bishop of Sudan when it was constituted as a separate diocese in 1945.
When Morris Gelsthorpe returned to England as Rector of Bingham, Lasbrey (now retired) came to join him and lived at Bingham Rectory. It was said that Bingham had not only a bishop as its rector, but a bishop as its curate too! In reality, of course, they both served as assistant bishops in Southwell Diocese. A monument to them both is on the north wall of the nave.
Bishop Gelsthorpe established a side chapel in the south transept, and in 1957 it was dedicated by the Dean of Lincoln. This restored an old use of the transept. It had been the chapel of the Guild of St Mary in the 15th and early 16th Centuries. (The oak altar and reredos are still in place. The altar rails from that chapel are now used in the nave with the same altar brought to the middle.) The oak west screen, including the door to the tower, was erected at the same time. The woodwork for this was all provided by Robert Thompson, the ‘mouseman’ of Kilburn.
Another important anniversary, the 750th, came up in 1975, during the time of the Rev David Keene. As had happened 50 years before, there were celebratory services. As a mark of the celebrations the clock was electrified so that it no longer had to be wound laboriously by hand. A cut-away drawing of the tower showing the operation of the bells was made at the same time.
The arch above the fairly new west screen was filled in with glass in 1972. New lighting was installed in 1979.
In 1988, during the time of the Rev David Swain, the church entered a Covenant for Unity with Bingham Methodist Church and the local Roman Catholic parish (St Anne’s, based at Radcliffe-on-Trent.) Commitment to working together has been a powerful feature of church life in Bingham since then.
In 1992 a platform was built across the front of the nave and incorporating both transepts. This has had a major effect on the building, allowing services to be conducted with an altar much nearer to the people than before. A sound support system was installed at around the same time.
Also in 1992 the Hanging Cross was put in place. This was given as a War Memorial to those who gave their lives in the Second World War. The idea was initiated by the Bingham Branch of the Royal British Legion. A few years later, in 1996, the laid-up banners of the British Legion and RAF Association were mounted on a bracket on the west wall.
After several years of gradual deterioration, the pipe organ finally gave up the ghost in 1997. It was replaced by a modern electronic organ. A spin-off from this was that it released the space the pipe organ had formerly occupied, allowing for a considerable enlargement of the clergy vestry.
In 1998 the floor of the bell-ringing room started to show signs of weakness. It was noticed that the beams supporting this floor had been cut short to accommodate the clock weights, presumably when the clock was installed, but they had not been reinstated when it had been electrified. The floor had to be strengthened for safety of the ringers.
The lychgate that Robert Miles had installed was starting to lean to one side, and showing other signs of deterioration. It was restored in 2000 as a Millennium Project by the Friends of Bingham Parish Church.
In 2007 toilet facilities were put in the church in the base of the tower. A couple of memorials that would otherwise have been obscured were resited in the body of the church. At the same time a tea-point was built at the west end of the north aisle. These were designed by Allan Joyce, architect, and the design of the tea-point aims to reflect, in a more modern idiom, the Caröe woodwork around the baptistry on the south side.
In the early morning of 27th February 2008 a minor earthquake shook the East Midlands. There was little damage in Bingham, but the finial cross was shaken off the east end of the chancel of St Mary and All Saints’ Church and shattered. It has now been replaced.
One remaining question is when the Church, which was known as “All Saints’ Church” since its foundation in the 13th Century, became known as “St Mary and All Saints’”. The words “St Mary” were added to the original name of the church in acknowledgement of the Guild of St Mary, which had formed a part of the life of the church and community in the Middle Ages. It is not clear, though, who had the idea of adding it, and it seems to have taken a while to catch on. The plaque on the vestry wall which recognises Robert Miles’ work probably dates from soon after that work was completed in 1873. This names the church as “St Mary and All Saints’”. However the church continued to be referred to as “All Saints’ Church” through the rest of the 19th Century, and well into the 20th, just occasionally being referred to by the longer name. The new name began to take hold around 1950, and has now become the name by which the church is most commonly known, frequently now being shortened to merely “St Mary’s”.
In recent times the church has been working closely with the surrounding community. Perhaps the most notable sign of this has been the Christmas Tree Festivals, begun in 2006. | <urn:uuid:4a283eca-2103-4937-a89c-059d065bd147> | CC-MAIN-2019-47 | http://southwellchurches.nottingham.ac.uk/bingham/hhistory.php | s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496670948.64/warc/CC-MAIN-20191121180800-20191121204800-00058.warc.gz | en | 0.990458 | 8,032 | 2.53125 | 3 |
Editor’s note: Usually our blog posts are only a 2,000 or so words. This month’s guest post is much lengthier – but we feel it’s worth it! Grab a cuppa and get comfy, as our guest writer, Marnie Ginsberg, gives a longer, but very easy-to-follow, overview of the science behind how children learn to read.
Imagine the confusion of neurologists in China in the 30’s who were caring for a businessman who suffered a severe stroke. This man was bilingual in Chinese and English, yet after the stroke, he was no longer able to read…Chinese.
However, he could still read English!
What could explain the loss of reading in one language, but not another?!
Reading researcher Maryann Wolf details this story in her book Proust and the Squid: The Story of Science of the Reading Brain, as she reveals that this seeming mystery is quite understandable–given what we now know from brain imaging.
First, a little bit of background: The Chinese written language is a logographic language, meaning that Chinese characters represent words or smaller units of meaning.
English, on the other hand, is an alphabetic language, meaning that English characters, or letters, represent sounds in words. Letters themselves do not represent meaningful words.
How does this relate to the stroke victim who could read English but no longer Chinese?
This is where it gets interesting in light of the vast numbers of studies of the last 100+ years on how children learn to read!
Alphabetic languages, such as English, Spanish, and Greek, rely mostly on the left hemisphere of the brain to read words. Chinese, in contrast, also utilises much of the right hemisphere of the brain as well in order to translate the picture-like logographs into meaningful words.
It is likely the businessman suffered a stroke on his right posterior hemisphere, but not his left. Thus, Chinese reading was lost, but English reading was retained.
What does this story of a Chinese businessman, brain hemispheres, and modern imaging have to do with how children learn to read English?
Common sense tells us that we read by sight. That learning to read is mostly a visual process, right?
Indeed, good adult readers recognise words by sight so rapidly that they recognise words faster than even letter names. As a result of this accomplishment of our adult reading abilities, a lot of confusion has settled in the land about how we read and how we should teach reading. However, when we only look at what we can observe on the outside (of the brain) and at what we do as mature readers, we miss the real story of how our reading brain develops–at least for alphabetic languages such as English.
Thousands of brain imaging, psychological, and other types of reading research studies have built a mountain of evidence to demonstrate that young readers of alphabetic languages learn to read via sound. Yes, of course, our eyes and visual system are involved. But the heavy lifting in cracking our written code–learning how to convert letters on the page into sounds in words that make meaningful words–is done by our sound processing system, or phonological system.
This blog post will explain the most compelling, and well-researched theory of how learners acquire the ability to read English words. It’s not just what we “see” on the surface!
There are many brains which ‘get by’ in learning to read in English, but these readers struggle as they rely on parts of their brains that are poorly adapted (the right hemisphere, like our Chinese businessman who also read a logographic script) to meet the needs of an alphabetic writing system. Instead, I am going to look at brains that are efficient and accurate at learning to read in an alphabetic system. In the English reading world, there are different types of reading brains, but only one type of brain is highly successful at reading in an alphabetic system.
We’ll examine how to build that type of brain.
[But there’s good news for readers who struggle! When readers like them of all ages receive training on how to read like successful readers, they do learn to read well and their brain changes! (See this research example.)]
Let’s travel along with a hypothetical young five year-old, Jonah, as he goes about becoming a reader.
Jonah has already spent the first five years of his life learning much of the English language and he’s good at both listening and speaking (for his age). This has been accomplished through many one-on-one conversational turns between his primary caregivers and him. He also learned more advanced words through listening along to storybooks.
This pivotal oral language experience and skill, however, has given him marginal benefit at knowing how to translate specifically the 26 letters of our alphabet into meaningful words that make up sentences and meaningful lines of thought. In other words, if his teacher reads to aloud him Dr. Seuss’ famous The Cat and the Hat , Jonah has no trouble understanding, or comprehending, just what havoc the cat caused for poor Sally and her brother that one rainy day.
On the other hand, if Jonah’s teacher hands him The Cat and the Hat without having read it to him previously, he is stumped.
He has learned several letter-sounds so he recognises some of the squiggles on the page as having to do with letters and sounds. But he can’t yet translate, or decode, any of them into words that make sense to him.
How can he take a word like ‘sun’ or ‘shine’ and decode those letter strings into meaningful words and concepts?
Given age appropriate oral language abilities, Jonah requires three new key ingredients in order to begin to translate the letters in ‘sun’ into the meaningful word itself. He needs to develop:
The concept of the alphabetic principle means that our written language is a code for sounds. For example the ‘s’ represents the /s/ in ‘sun’. Similarly, the ‘i_e’ represents the /ie/ sound in ‘shine’.
This insight of the alphabetic principle isn’t self-evident to non-readers. Some beginning readers think long words are for words like ‘snake’ because snakes are long. Others create other curious explanations for how to attack, or decode, unfamiliar words. It’s the adult readers job to reveal to the child how our code works.
If a child has never seen a door get unlocked, how will giving her the key to the lock help unless you reveal to her how the lock works? Similarly, advanced readers in the child’s life need to reveal how our written code works. How the words are lifted from the page. Very few children deduce how our written language works on their own.
Helping the young child discover the alphabetic principle can be quite simple. Here’s one way: just drag a pencil or finger along the letter-sounds in a word as you drag out the sounds in the word, like below.
At this point, Jonah, indeed, may not be “reading.” He may just be repeating the word he heard the adult speak aloud. However, he’s beginning to understand how our code works….The squiggles on the page are a code for sounds in words; that’s the alphabetic principle.
It’s an “aha!” that every beginning reader should be taught. The two skills that follow help the alphabetic principle move beyond just a concept to rapid facility with decoding–the alphabetic principle in action.
Watch this short example of a young student in one of her first encounters with the alphabetic principle. As she goes through this word building activity, she’ll also be gaining knowledge in the 2 upcoming ingredients: letter-sound knowledge (phonics) and phonemic awareness.
The second much-needed skill is letter-sound knowledge. Jonah needs to know that when he sees the letter ‘s’, he’s likely going to say “/s/”. Not “/m/”. Not “/oy/”. Not “sit”.
This information is also called phonics knowledge. For instance, in The Cat and the Hat’s first sentence, Jonah will need to learn various phonics information, such as:
In order to know that ‘shine’ consists of the sounds /sh/ /ie/ /n/, Jonah will need to deconstruct ‘shine’. He’ll need to convert ‘sh’ into /sh/ and the other letter combinations (or graphemes) into their relevant sounds (or phonemes).
To move from beginning reader to advanced reader, Jonah will need to learn which phonemes are tied to over 240 graphemes, such ‘t,’ ‘m,’ ‘er’, ‘oa’, ‘ck’, or ‘gn’.
Much more than just the 26 alphabet sounds, right?
It IS a lot to learn in the early years of learning to read, and it’s made more complicated by the fact that some graphemes can represent more than one phoneme (i.e., ‘ow’ = /oa/ or /ow/ in ‘snow’ or ‘cow’).
But that’s not all!
Many phonemes represent more than one grapheme, as in the /oa/ sound: ‘go’, ‘home’, ‘show’, ‘boat’, ‘toe’, etc. Learning these grapheme-phoneme correspondences takes practice reading and doing word work activities, which is why computerised games that teach all of these phonics spellings, such as Phonics Hero, can be so beneficial (grab yourself a free Phonics Hero Teacher Account here or trial as a parent here).
Phonics Hero offers Jonah the possibility of fun while his brain makes links (paired-associate learning) between the 240+ graphemes he will need to learn in order to become a strong adult reader and recognise words like ‘infantile’, ‘prevaricate’, and ‘gregariousness’.
If you’re new to thinking about how children learn to read, you may not be aware that the reading education field has been susceptible to philosophical debates that have become so heated as to be termed the “reading wars.” These debates mostly center on the place of phonics instruction in a beginning readers’ classroom.
Some (the “phonics” side) claim that phonics information needs to be explicitly and systematically taught. Others (the “whole word” or “whole language” side) purport that this information is better deduced through reading of authentic texts and whole word learning. A more recent version of the whole-word line of thought emphasises that reading instruction that centers on levelled texts, contextual guessing, understanding of the whole text, and occasional phonics lessons, will result in better outcomes.
These arguments have fuelled little good and mostly chaos for classroom teachers and, even worse, low performance for those students most in need of systematic phonics instruction. [See, for instance, over 60% of U.S. fourth graders are not proficient according to federal assessments.]
The winds are turning, however.
The evidence-base on the essential need for systematic phonics instruction has reached such a mountain’s height that more and more leaders, teachers, and parents are aware of the research. Tellingly, the U.S. based International Literacy Association (ILA) has just released a nuanced position paper validating the importance of explicit and systematic phonics despite previously endorsing some leaders and texts that have historically thumbed their noses at the contemporary research base on how children learn to read words.
Finally, the concept of the alphabetic principle and knowledge of phonics information are insufficient alone to learn to read words. The developing reader also needs to perceive the individual sounds in words; we call this oral language ability, phonemic awareness.
For instance, Jonah needs to be able to recognise that the word ‘play’ can be segmented into the sounds /p/ /l/ and /ay/. Or, he might be able to hear three segmented sounds, such as /w/ /e/ /t/ and blend those sounds together to hear the word, “wet.” These are just two types of phonemic awareness. See this little girl demonstrate:
Phonemic awareness allows the child to utilise our written code. It’s the engine that allows the phonics knowledge to be put into play.
When Jonah encounters the word, ‘shine’, he has to:
Many young beginning readers can not do this without support! They lack phonemic blending ability.
In addition, Jonah also needs to be able to quickly separate, or phonemically segment, the sounds in words. If he can perceive that ‘sun’ is three sounds: /s/ /u/ and /n/, he can connect each of those sounds to their respective spellings. He may already have memorised how to read and spell the word; he’s cracked the code for the word ‘sun’.
Phonemic blending and segmenting were emphasised in recent decades in research syntheses, such as the U.S. National Reading Panel Report and the so-nicknamed Rose Report from the U.K., as the main phonemic awareness vehicles to learning to decode. Indeed, they are essential.
But further research has revealed that one more phonemic awareness skill is still needed – phonemic manipulation.
Phonemic manipulation is the ability to add, delete, or change phonemes in words. For example, ask a beginning reader orally “what is ‘slip’ without the /l/ ” and you’ll likely receive little more than a blank stare. However, ask any good 4th grade reader and she’ll be able to do the task easily.
This higher level phonemic awareness skill is harder than blending and segmenting and is a better predictor of reading achievement than just blending segmenting. David Kilpatrick makes the case that phonemic proficiency – rather than phonemic awareness – better describes the skill that sets apart the strong from the weaker reader. “Proficiency” draws out the concept of speed of processing, which appears to be essential for both decoding and automatic word identification.
How does playing around with sounds in words relate to decoding and word learning?
All the mechanisms for why phonemic proficiency makes such a difference are not yet fleshed out by research, but here are two partial explanations. First, when Jonah encounters a word for the first time and he tries to decode it, he’s likely to make an error in his first guess. However, if he deletes an attempted vowel sound and adds another option to test out another possible word, he’s relying on advanced phonemic awareness skills (i.e., phonemic proficiency).
For example, take the word ‘shine’ again. Jonah may or may not have been explicitly taught the spelling pattern of ‘i_e’ for /ie/. Even if he had been taught it, he may not notice the ‘e’ at the end of the word when he first observes the ‘i’ spelling in ‘shine’.
As a result, his first attempt to read ‘shine’ sounds like “shin”, which is a reasonable choice if the ‘e’ ending is ignored. Have no fear, though.
Jonah is searching for meaning as he reads. He realises that, “The sun did not shin,” makes no sense. He attempts to correct his mis-reading by dropping the /i/ sound in “shin” and replaces it with another common “i” sound -/ie/.
Now the word sounds like “shine” and that fits the sentence. He is pleased with his second attempt and moves on. Thus phonemic proficiency has served him well in his decoding process.
Jonah’s reading of “shine” can be used to describe the theory of how word learning occurs – the self-teaching hypothesis (popularised by Share, 1995). Jonah’s teacher or parent likely provided Jonah with the ingredients to be able to decode some words himself: the alphabetic principle, phonics knowledge, and phonemic awareness.
But Jonah deduced that ‘i_e’ must be /ie/ in ‘shine’. Through attacking most of the word correctly but getting stuck with the vowel sound, he figured out the information he was lacking that must be needed to make a meaningful word in the sentence, “The sun did not shine.”
In that “positive learning trial” (Jorm and Share, 1983), he may have learned consciously or subconsciously that ‘i_e’ is likely the sound /ie/. He may have also learned that ‘shine’ is the correct spelling of the word. He may have even learned a spelling pattern of the two sounds in ‘ine’ to more quickly read ‘twine’, ‘mine’,’fine’, or ‘spine’ in the future.
This spelling and reading-word-parts information may have become part of his orthographic learning. Orthographic learning, or orthographic mapping, is the correct knowledge of our spelling system: that the spelling ‘i_e’ is the /ie/ in ‘shine’ and ‘like’ and that ‘whose’ is spelled with a ‘wh’ ‘o_e’ and ‘s’. The reading brain “orthographically maps” the concept of ‘wh’ to ‘whose’ for future reference.
A typically developing reader can decode a word for the first time, such as ‘shine’ and recall the way it looks for automatic future reading with as few as one to four exposures of the word. Readers develop an abstract concept of the word and its interior parts so that they might recognise the same word in their very next exposure even if the word is written:
This process of decoding → orthographic mapping → automatic word recognition is not just a one-way street, however.
As the developing reader orthographically maps more and more words in our language, she develops deeper phonemic awareness. The very print itself fixes the abstract phoneme representations into a concrete visual form – letters! This helps explain how reading itself has been demonstrated to reciprocally interact with phonemic development. This bi-directional relationship between decoding and phonemic awareness is yet another explanation for why strong readers have phonemic proficiency. The act of reading accurately and then orthographically mapping so many words and letter combinations has developed their processing of sounds in words. The phonemes in words have become inextricably linked with print.
Thus we’ve come full circle with phonemic awareness. A rudimentary amount of it is necessary to really understand the alphabetic principle and to begin to decode unfamiliar words. But correct reading itself also trains the reader in stronger and stronger phonemic awareness. This is the other route in which we understand that phonemic proficiency is developed as students move from beginning to advance reading.
Good readers develop it through the process of accurately decoding words by sound. Poor readers who rely on mostly visual-only information to recall a word (in other words ignoring some or all of the phonetic cues in the spelling of the word) are not able to orthographically map the words in their brain. They don’t have tight sound – symbol (or phoneme – grapheme) connections on which to hang the information they may have just read in a word.
Jonah is not the only one who is still learning to read words. Even you, mature reader, may be learning to recognise rapidly new words, such new medical terms, foreign names, or high falutin vocabulary. 😉
Let’s experiment with this, shall we? Try to read this text with some made up words:
Did you deduce how to pronounce ‘murks’ and ‘yez’? Did it take much time?
For most good readers, these words were likely phonologically decoded in very little time. Your brain quickly cracked the code on the spelling of “murks” and translated them into phonemes, to then be pronounced accurately.
You likely broke it down into…
….and blended them together and arrived at /merks/ as a reasonable pronunciation for ‘murks’.
That’s the process that a beginning reader goes through every time she reads an unknown word–the early words, such as ‘pen’ and the later words such as ‘stupendous’.
Let’s test your ability to do the same as Jonah, shall we?
Choose the correct word for the blank:
Which did you choose?
They are all viable spelling options for the sound /er/!
And, if you’ve just read this text one time through, you’ve only seen the proposed pronunciation for the ‘ants’ substitute 4 times in your entire life.
But, if you’re a good reader, you probably can recall which spelling is the “correct” one:
You didn’t deduce it alone from sound-based cues. All the choices are reasonable options for the /er/ sound. The ‘i’ spelling is often the /er/ sound as in ‘bird’, ‘shirt’, ‘skirt’, ‘girl’, or ‘quirky’, for instance.
And, although less common, ‘ear’ is the /er/ sound in many common words, such as ‘earth’, ‘learn’ and ‘earn’. (There are even several other possibilities for the /er/ sound.)
You likely chose ‘murk’ because your amazing brain had already orthographically mapped the /er/ sound to ‘ur’ and ‘murk’ as yet another letter string to tie to a specific group of sounds and a specific semantic concept (i.e., an ant).
[Researchers have even tested how long this memory trace could last. At least 30 days! Possibly much much more] (Share, 2004).
But this would have been much harder for you to orthographically map if you didn’t have good code cracking ability and speed (phonemic proficiency). OR, it would be harder if the word had no code, no logic, to cue you.
Consider: what if I had written this:
Would you remember after 5 minutes whether ‘ants’ was:
Without a meaningful code, and without a system for cracking it, you’re left to process this new word with other systems of the brain. And that visual-only system has not proven able to allow readers to learn the 40 to 70 thousand words mature readers recognise at a glance.
Phonological decoding, or sound-based decoding, draws the reader’s attention to the orthographic elements of each new word. I call it noticing the “inside parts” of words. That ‘ur’ is the /er/ sound we use in ‘murks’.
You got it now, right? 😉
Orthographic learning is item based, not stage based. Even though you are likely capable of reading thousands of words, and might be considered at the most “advanced” stage of a reader, you still had to go through the same process to mentally pronounce ‘murks’ that any 5 year-old would have to do to pronounce ‘bird’.
Are you not at an advanced stage of reader?
You didn’t recognize ‘murks’ by sight because it was your first exposure. It was a new item in your mental orthographic system.
Whether we are a beginning or advanced reader, we’re all pioneers on the frontier of learning new words, or new orthographic patterns – for our own reading brain.
Today may be the day that you learn, not only to read ‘murks’ but also:
Each novel word is a new item to be decoded. Yet another code to be cracked. And, within just a handful of exposures, that word will likely be orthographically encoded, able to be recognised and pronounced in a split second. But perhaps you may have to check with a dictionary if you don’t know how to pronounce the above words.
Now we’ve journeyed from China to analysing Jonah’s process of reading simple words such as ‘sun’ and ‘shine’. Hopefully, you understand a bit better the route that Jonah takes to move from beginning reader, who slowly decodes one word at a time, to that of an advanced reader who can recognise in milliseconds, literally, thousands upon thousands of words.
The brain’s amazing system for learning to read words takes Jonah from matching sound (phoneme) to symbol (grapheme) over and over again till the words become memorised (orthographically mapped). Words that are easily recognized. Now they are read by sight in the blink of an eye.
The Self-Teaching Theory helps explain a number of confusing issues in the teaching of reading and reading research. For example, how can it be that research so convincingly demonstrates that students benefit from being taught phonics information explicitly AND children rapidly learn way more phonics information than they could possibly be explicitly taught?
In Kindergarten, or Year One, the average child might know 25-100 words. By first grade perhaps they can read about 500-600. By fourth grade, the number has jumped to 4,000. By 12th grade, a good reader may know as many as 30,000 to 70,000. That’s exponential growth.
Could teachers have explicitly taught (and students learned) that many spellings and words? Over 18 words a day if one assumes 40,000 words by the end of high school. It is hard to imagine that many words could be covered, and learned, through explicit teaching in school.
No worries. A better understanding of how children learn to read is that they benefit from a sufficient amount of:
How can this happen with all the unexpected spellings that we have in English?
The good reader’s brain adjusts to the occasional quirk because she’s drawing from her sound decoding brain and her meaning making brain. She hits upon a word and within a few exposures she has bound those sounds and symbols together. And that work helps not only with the word at hand, but also with other spelling patterns and words in words she’ll uncover in the future.
Share explains it this way:
‘Because neither contextual guessing nor direct instruction, in and of themselves, are likely to contribute substantially to printed word learning, the ability to translate printed words independently into their spoken equivalents assumes a central role in reading acquisition. According to the self-teaching hypothesis, each successful decoding encounter with an unfamiliar word provides an opportunity to acquire the word-specific orthographic information that is the foundation of skilled word recognition. A relatively small number of (successful) exposures appear to be sufficient for acquiring orthographic representations, both for adult skilled readers (Brooks, 1977) and young children (Manis, 1985; Reitsma, 1983a, 1983b). In this way, phonological recoding [“decoding”] acts as a self-teaching mechanism or built-in teacher enabling a child to independently develop both (word)-specific and general orthographic knowledge. Although it may not be crucial in skilled word recognition, phonological recoding may be the principal means by which the reader attains word reading proficiency (1995, p. 155)’.
Most of those who struggle with this process often don’t have
In describing the process that the good reader undergoes in order to become a fluent reader, I didn’t discuss many specifics for instruction itself. The above is more a description of the processes that a developing reader’s brain goes through. There are actually many instructional routes that would provide these key ingredients for good word reading acquisition.
But research has given us a nearly slam-dunk case on the critical features that we should look for when we consider reading instruction:
Ideally, these key features of instruction are present for the child from the beginning of her reading journey so that she gets an early, strong start to reading. Then she’ll be much more likely to enjoy reading, read more, and continue to develop her reading comprehension through a lifetime of wide reading (Stanovich, 1986).
If one or several of these important elements are missing, most children will struggle to become fluent. Consider these four elements for your child or your class of students and ensure that all ingredients are in the mix.
Finally, I leave you with a test – do you remember how to spell our made up word for ‘ants’? | <urn:uuid:ff937afb-5908-4127-9014-0cfd932ff8a9> | CC-MAIN-2019-47 | https://www.phonicshero.com/how-children-learn-to-read/ | s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496670743.44/warc/CC-MAIN-20191121074016-20191121102016-00298.warc.gz | en | 0.954641 | 6,222 | 3.421875 | 3 |
The figure of this noble bird is well known throughout the civilized world, emblazoned as it is on our national standard, which waves in the breeze of every clime, bearing to distant lands the remembrance of a great people living in a state of peaceful freedom. May that peaceful freedom last for ever!
The great strength, daring, and cool courage of the White-headed Eagle, joined to his unequalled power of flight, render him highly conspicuous among his brethren. To these qualities did he add a generous disposition towards others, he might be looked up to as a model of nobility. The ferocious, overbearing, and tyrannical temper which is ever and anon displaying itself in his actions, is, nevertheless, best adapted to his state, and was wisely given him by the Creator to enable him to perform the office assigned to him.
The flight of the White-headed Eagle is strong, generally uniform, and protracted to any distance, at pleasure. Whilst travelling, it is entirely supported by equal easy flappings, without any intermission, in as far as I have observed it, by following it with the eye or the assistance of a glass. When looking for prey, it sails with extended wings, at right angles to its body, now and then allowing its legs to hang at their full length. Whilst sailing, it has the power of ascending in circular sweeps, without a single flap of the wings, or any apparent motion either of them or of the tail; and in this manner it often rises until it disappears from the view, the white tail remaining longer visible than the rest of the body. At other times, it rises only a few hundred feet in the air, and sails off in a direct line, and with rapidity. Again, when thus elevated, it partially closes its wings, and glides downwards for a considerable space, when, as if disappointed, it suddenly checks its career, and resumes its former steady flight. When at an immense height, and as if observing an object on the ground, it closes its wings, and glides through the air with such rapidity as to cause a loud rustling sound, not unlike that produced by a violent gust of wind passing amongst the branches of trees. Its fall towards the earth can scarcely be followed by the eye on such occasions, the more particularly that these falls or glidings through the air usually take place when they are least expected.
At times, when these Eagles, sailing in search of prey, discover a Goose, a Duck, or a Swan, that has alighted on the water, they accomplish its destruction in a manner that is worthy of your attention. The Eagles, well aware that water-fowl have it in their power to dive at their approach, and thereby elude their attempts upon them, ascend in the air in opposite directions over the lake or river, on which they have observed the object which they are desirous of possessing. Both Eagles reach a certain height, immediately after which one of them glides with great swiftness towards the prey; the latter, meantime, aware of the Eagle's intention, dives the moment before he reaches the spot. The pursuer then rises in the air, and is met by its mate, which glides toward the water-bird, that has just emerged to breathe, and forces it to plunge again beneath the surface, to escape the talons of this second assailant. The first Eagle is now poising itself in the place where its mate formerly was, and rushes anew to force the quarry to make another plunge. By thus alternately gliding, in rapid and often repeated rushes, over the ill-fated bird, they soon fatigue it, when it stretches out its neck, swims deeply, and makes for the shore, in the hope of concealing itself among the rank weeds. But this is of no avail, for the Eagles follow it in all its motions, and the moment it approaches the margin, one of them darts upon it, and kills it in an instant, after which they divide the spoil.
During spring and summer, the White-headed Eagle, to procure sustenance, follows a different course, and one much less suited to a bird apparently so well able to supply itself without interfering with other plunderers. No sooner does the Fish-Hawk make its appearance along our Atlantic shores, or ascend our numerous and large rivers, than the Eagle follows it, and, like a selfish oppressor, robs it of the hard-earned fruits of its labour. Perched on some tall summit, in view of the ocean, or of some water-course, he watches every motion of the Osprey while on wing. When the latter rises from the water, with a fish in its grasp, forth rushes the Eagle in pursuit. He mounts above the Fish-Hawk, and threatens it by actions well understood, when the latter, fearing perhaps that its life is in danger, drops its prey. In an instant, the Eagle, accurately estimating the rapid descent of the fish, closes his wings, follows it with the swiftness of thought, and the next moment grasps it. The prize is carried off in silence to the woods, and assists in feeding the ever-hungry brood of the marauder.
This bird now and then procures fish himself, by pursuing them in the shallows of small creeks. I have witnessed several instances of this in the Perkiomen Creek in Pennsylvania, where in this manner, I saw one of them secure a number of Red-fins, by wading briskly through the water, and striking at them with his bill. I have also observed a pair scrambling over the ice of a frozen pond, to get at some fish below, but without success.
It does not confine itself to these kinds of food, but greedily devours young pigs, lambs, fawns, poultry, and the putrid flesh of carcasses of every description, driving off the Vultures and Carrion Crows, or the dogs, and keeping a whole party at defiance until it is satiated. It frequently gives chase to the Vultures, and forces them to disgorge the contents of their stomachs, when it alights and devours the filthy mass. A ludicrous instance of this took place near the city of Natchez, on the Mississippi. Many Vultures were engaged in devouring the body and entrails of a dead horse, when a White-headed Eagle accidentally passing by, the Vultures all took to wing, one among the rest with a portion of the entrails partly swallowed, and the remaining part, about a yard in length, dangling in the air. The Eagle instantly marked him, and gave chase. The poor Vulture tried in vain to disgorge, when the Eagle, coming up, seized the loose end of the gut, and dragged the bird along for twenty or thirty yards, much against its will, until both fell to the ground, when the Eagle struck the Vulture, and in a few moments killed it, after which he swallowed the delicious morsel.
The Bald Eagle has the power of raising from the surface of the water any floating object not heavier than itself. In this manner it often robs the sportsman of ducks which have been killed by him. Its audacity is quite remarkable. While descending the Upper Mississippi, I observed one of these Eagles in pursuit of a Green-winged Teal. It came so near our boat, although several persons were looking on, that I could perceive the glancings of its eye. The Teal, on the point of being caught, when not more than fifteen or twenty yards from us, was saved from the grasp of its enemy, one of our party having brought the latter down by a shot, which broke one of its wings. When taken on board, it was fastened to the deck of our boat by means of a string, and was fed by pieces of catfish, some of which it began to eat on the third day of its confinement. But, as it became a very disagreeable and dangerous associate, trying on all occasions to strike at some one with its talons, it was killed and thrown overboard.
When these birds are suddenly and unexpectedly approached or surprised, they exhibit a great degree of cowardice. They rise at once and fly off very low, in zig-zag lines, to some distance, uttering a hissing noise, not at all like their usual disagreeable imitation of a laugh. When not carrying a gun, one may easily approach them; but the use of that instrument being to appearance well known to them, they are very cautious in allowing a person having one to get near them. Notwithstanding all their caution, however, many are shot by approaching them under cover of a tree, on horseback, or in a boat. They do not possess the power of smelling gunpowder, as the Crow and the Raven are absurdly supposed to do; nor are they aware of the effects of spring-traps, as I have seen some of them caught by these instruments. Their sight, although probably as perfect as that of any bird, is much affected during a fall of snow, at which time they may be approached without difficulty.
The White-headed Eagle seldom appears in very mountainous districts, but prefers the low lands of the sea-shores, those of our large lakes, and the borders of rivers. It is a constant resident in the United States, in every part of which it is to be seen. The roosts and breeding places of pigeons are resorted to by it, for the purpose of picking up the young birds that happen to fall, or the old ones when wounded. It seldom, however, follows the flocks of these birds when on their migrations.
When shot at and wounded, it tries to escape by long and quickly repeated leaps, and, if not closely pursued, soon conceals itself. Should it happen to fall on the water, it strikes powerfully with expanded wings, and in this manner often reaches the shore, when it is not more than twenty or thirty yards distant. It is capable of supporting life without food for a long period. I have heard of some, which, in a state of confinement, had lived without much apparent distress for twenty days, although I cannot vouch for the truth of such statements, which, however, may be quite correct. They defend themselves in the manner usually followed by other Eagles and Hawks, throwing themselves backwards, and furiously striking with their talons at any object within reach, keeping their bill open, and turning their head with quickness to watch the movements of the enemy, their eyes being apparently more protruded than when unmolested.
It is supposed that Eagles live to a very great age,--some persons have ventured to say even a hundred years. On this subject, I can only observe, that I once found one of these birds, which, on being killed, proved to be a female, and which, judging by its appearance, must have been very old. Its tail and wing-feathers were so worn out, and of such a rusty colour, that I imagined the bird had lost the power of moulting. The legs and feet were covered with large warts, the claws and bill were much blunted; it could scarcely fly more than a hundred yards at a time, and this it did with a heaviness and unsteadiness of motion such as I never witnessed in any other bird of the species. The body was poor and very tough. The eye was the only part which appeared to have sustained no injury. It remained sparkling and full of animation, and even after death seemed to have lost little of its lustre. No wounds were perceivable on its body.
The White-headed Eagle is seldom seen alone, the mutual attachment which two individuals form when they first pair seeming to continue until one of them dies or is destroyed. They hunt for the support of each other, and seldom feed apart, but usually drive off other birds of the same species. They commence their amatory intercourse at an earlier period than any other land bird with which I am acquainted, generally in the month of December. At this time, along the Mississippi, or by the margin of some lake not far in the interior of the forest, the male and female birds are observed making a great bustle, flying about and circling in various ways, uttering a loud cackling noise, alighting on the dead branches of the tree on which their nest is already preparing, or in the act of being repaired, and caressing each other. In the beginning of January incubation commences. I shot a female, on the 17th of that month, as she sat on her eggs, in which the chicks had made considerable progress.
The nest, which in some instances is of great size, is usually placed on a very tall tree, destitute of branches to a considerable height, but by no means always a dead one. It is never seen on rocks. It is composed of sticks, from three to five feet in length, large pieces of turf, rank weeds, and Spanish moss in abundance, whenever that substance happens to be near. When finished, it measures from five to six feet in diameter, and so great is the accumulation of materials, that it sometimes measures the same in depth, it being occupied for a great number of years in succession, and receiving some augmentation each season. When placed in a naked tree, between the forks of the branches, it is conspicuously seen at a great distance. The eggs, which are from two to four, more commonly two or three, are of a dull white colour, and equally rounded at both ends, some of them being occasionally granulated. Incubation lasts for more than three weeks, but I have not been able to ascertain its precise duration, as I have observed the female on different occasions sit for a few days in the nest, before laying the first egg. Of this I assured myself by climbing to the nest every day in succession, during her temporary absence,--a rather perilous undertaking when the bird is sitting.
I have seen the young birds when not larger than middle-sized pullets. At this time they are covered with a soft cottony kind of down, their bill and legs appearing disproportionately large. Their first plumage is of a greyish colour, mixed with brown of different depths of tint, and before the parents drive them off from the nest they are fully fledged. As a figure of the Young White-headed Eagle will appear in the course of the publication of my Illustrations, I shall not here trouble you with a description of its appearance. I once caught three young Eagles of this species, when fully fledged, by having the tree, on which their nest was, cut down. It caused great trouble to secure them, as they could fly and scramble much faster than any of our party could run. They, however, gradually became fatigued, and at length were so exhausted as to offer no resistance, when we were securing them with cords. This happened on the border of Lake Ponchartrain, in the month of April. The parents did not think fit to come within gun-shot of the tree while the axe was at work.
The attachment of the parents to the young is very great, when the latter are yet of a small size; and to ascend to the nest at this time would be dangerous. But as the young advance, and, after being able to take wing and provide for themselves, are not disposed to fly off, the old birds turn them out, and beat them away from them. They return to the nest, however, to roost, or sleep on the branches immediately near it, for several weeks after. They are fed most abundantly while under the care of the parents, which procure for them ample supplies of fish, either accidentally cast ashore, or taken from the Fish Hawk, together with rabbits, squirrels, young lambs, pigs, opossums, or racoons. Every thing that comes in the way is relished by the young family, as by the old birds.
The young birds begin to breed the following spring, not always in pairs of the same age, as I have several times observed one of these birds in brown plumage mated with a full-coloured bird, which had the head and tail pure white. I once shot a pair of this kind, when the brown bird (the young one) proved to be the female.
This species requires at least four years before it attains the full beauty of its plumage when kept in confinement. I have known two instances in which the white of the head did not make its appearance until the sixth spring. It is impossible for me to say how much sooner this state of perfection is attained, when the bird is at full liberty, although I should suppose it to be at least one year, as the bird is capable of breeding the first spring after birth.
The weight of Eagles of this species varies considerably. In the males, it is from six to eight pounds, and in the females from eight to twelve. These birds are so attached to particular districts, where they have first made their nest, that they seldom spend a night at any distance from the latter, and often resort to its immediate neighbourhood. Whilst asleep, they emit a loud hissing sort of snore, which is heard at the distance of a hundred yards, when the weather is perfectly calm. Yet, so light is their sleep, that the cracking of a stick under the foot of a person immediately wakens them. When it is attempted to smoke them while thus roosted and asleep, they start up and sail off without uttering any sound, but return next evening to the same spot.
Before steam navigation commenced on our western rivers, these Eagles were extremely abundant there, particularly in the lower parts of the Ohio, the Mississippi, and the adjoining streams. I have seen hundreds while going down from the mouth of the Ohio to New Orleans, when it was not at all difficult to shoot them. Now, however, their number is considerably diminished, the game on which they were in the habit of feeding, having been forced to seek refuge from the persecution of man farther in the wilderness. Many, however, are still observed on these rivers, particularly along the shores of the Mississippi.
In concluding this account of the White-headed Eagle, suffer me, kind reader, to say how much I grieve that it should have been selected as the Emblem of my Country. The opinion of our great Franklin on this subject, as it perfectly coincides with my own, I shall here present to you. "For my part," says he, in one of his letters, "I wish the Bald Eagle had not been chosen as the representative of our country. He is a bird of bad moral character; he does not get his living honestly; you may have seen him perched on some dead tree, where, too lazy to fish for himself, he watches the labour of the Fishing-Hawk; and when that diligent bird has at length taken a fish, and is bearing it to his nest for the support of his mate and young ones, the Bald Eagle pursues him, and takes it from him. With all this injustice, he is never in good case, but, like those among men who live by sharping and robbing, he is generally poor, and often very lousy. Besides, he is a rank coward: the little King Bird, not bigger than a Sparrow, attacks him boldly, and drives him out of the district. He is, therefore, by no means a proper emblem for the brave and honest Cincinnati of America, who have driven all the King Birds from our country; though exactly fit for that order of knights which the French call Chevaliers d'Industrie."
BALD EAGLE, Falco Haliaetus, Wils. Amer. Orn., vol. iv. p. 89. Adult.
SEA EAGLE, Falco ossifragus, Wils. Amer. Orn., vol. vii. p. 16. Young.
FALCO LEUCOCEPHALUS, Bonap. Synops., p. 26.
AQUILA LEUCOCEPHALA, WHITE-HEADED EAGLE, Swains. & Rich. F. Bor. Amer., vol. ii. p. 15.
WHITE-HEADED or BALD EAGLE, Falco leucocephalus, Nutt. Man., vol. i. p. 72.
WHITE-HEADED EAGLE, Falco leucocephalus, Aud. Orn. Biog., vol. i. p. 160; vol. ii. p. 160; vol. v. p. 354.
Bill bluish-black, cere light blue, feet pale greyish-blue, tinged anteriorly with yellow. General colour of upper parts deep umber-brown, the tail barred with whitish on the inner webs; the upper part of the head and neck white, the middle part of the crown dark brown; a broad band of the latter colour from the bill down the side of the neck; lower parts white, the neck streaked with light brown; anterior tibial feather tinged with brown. Young with the feathers of the upper parts broadly tipped with brownish-white, the lower pure white.
Wings long, second quill longest, first considerably shorter. Tail of ordinary length, much rounded, extending considerably beyond the tips of the wings; of twelve, broad, rounded feathers.
Bill, cere, edge of eyebrow, iris, and feet yellow; claws bluish-black. The general colour of the plumage is deep chocolate, the head, neck, tail, abdomen, and upper and under tail-coverts white.
Length 34 inches; extent of wings 7 feet; bill along the back 2 3/4 inches, along the under mandible 2 3/4, in depth 1 5/12; tarsus 3, middle toe 3 1/2. | <urn:uuid:6f21a854-ce42-430d-ae1d-03844836c729> | CC-MAIN-2019-47 | https://www.audubon.org/birds-of-america/white-headed-eagle | s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496668699.77/warc/CC-MAIN-20191115171915-20191115195915-00058.warc.gz | en | 0.977549 | 4,537 | 2.703125 | 3 |
Proper motion is the astronomical measure of the observed changes in the apparent places of stars or other celestial objects in the sky, as seen from the center of mass of the Solar System, compared to the abstract background of the more distant stars. The components for proper motion in the equatorial coordinate system are given in the direction of right ascension and of declination, their combined value is computed as the total proper motion. It has dimensions of angle per time arcseconds per year or milliarcseconds per year. Knowledge of the proper motion and radial velocity allows calculations of true stellar motion or velocity in space in respect to the Sun, by coordinate transformation, the motion in respect to the Milky Way. Proper motion is not "proper", because it includes a component due to the motion of the Solar System itself. Over the course of centuries, stars appear to maintain nearly fixed positions with respect to each other, so that they form the same constellations over historical time.
Ursa Major or Crux, for example, looks nearly the same now. However, precise long-term observations show that the constellations change shape, albeit slowly, that each star has an independent motion; this motion is caused by the movement of the stars relative to the Solar System. The Sun travels in a nearly circular orbit about the center of the Milky Way at a speed of about 220 km/s at a radius of 8 kPc from the center, which can be taken as the rate of rotation of the Milky Way itself at this radius; the proper motion is a two-dimensional vector and is thus defined by two quantities: its position angle and its magnitude. The first quantity indicates the direction of the proper motion on the celestial sphere, the second quantity is the motion's magnitude expressed in arcseconds per year or milliarcsecond per year. Proper motion may alternatively be defined by the angular changes per year in the star's right ascension and declination, using a constant epoch in defining these; the components of proper motion by convention are arrived at.
Suppose an object moves from coordinates to coordinates in a time Δt. The proper motions are given by: μ α = α 2 − α 1 Δ t, μ δ = δ 2 − δ 1 Δ t; the magnitude of the proper motion μ is given by the Pythagorean theorem: μ 2 = μ δ 2 + μ α 2 ⋅ cos 2 δ, μ 2 = μ δ 2 + μ α ∗ 2, where δ is the declination. The factor in cos2δ accounts for the fact that the radius from the axis of the sphere to its surface varies as cosδ, for example, zero at the pole. Thus, the component of velocity parallel to the equator corresponding to a given angular change in α is smaller the further north the object's location; the change μα, which must be multiplied by cosδ to become a component of the proper motion, is sometimes called the "proper motion in right ascension", μδ the "proper motion in declination". If the proper motion in right ascension has been converted by cosδ, the result is designated μα*. For example, the proper motion results in right ascension in the Hipparcos Catalogue have been converted. Hence, the individual proper motions in right ascension and declination are made equivalent for straightforward calculations of various other stellar motions.
The position angle θ is related to these components by: μ sin θ = μ α cos δ = μ α ∗, μ cos θ = μ δ. Motions in equatorial coordinates can be converted to motions in galactic coordinates. For the majority of stars seen in the sky, the observed proper motions are small and unremarkable; such stars are either faint or are distant, have changes of below 10 milliarcseconds per year, do not appear to move appreciably over many millennia. A few do have significant motions, are called high-proper motion stars. Motions can be in seemingly random directions. Two or more stars, double stars or open star clusters, which are moving in similar directions, exhibit so-called shared or common proper motion, suggesting they may be gravitationally attached or share similar motion in space. Barnard's Star has the largest proper motion of all stars, moving at 10.3 seconds of arc per year. L
Parallax is a displacement or difference in the apparent position of an object viewed along two different lines of sight, is measured by the angle or semi-angle of inclination between those two lines. Due to foreshortening, nearby objects show a larger parallax than farther objects when observed from different positions, so parallax can be used to determine distances. To measure large distances, such as the distance of a planet or a star from Earth, astronomers use the principle of parallax. Here, the term parallax is the semi-angle of inclination between two sight-lines to the star, as observed when Earth is on opposite sides of the Sun in its orbit; these distances form the lowest rung of what is called "the cosmic distance ladder", the first in a succession of methods by which astronomers determine the distances to celestial objects, serving as a basis for other distance measurements in astronomy forming the higher rungs of the ladder. Parallax affects optical instruments such as rifle scopes, binoculars and twin-lens reflex cameras that view objects from different angles.
Many animals, including humans, have two eyes with overlapping visual fields that use parallax to gain depth perception. In computer vision the effect is used for computer stereo vision, there is a device called a parallax rangefinder that uses it to find range, in some variations altitude to a target. A simple everyday example of parallax can be seen in the dashboard of motor vehicles that use a needle-style speedometer gauge; when viewed from directly in front, the speed may show 60. As the eyes of humans and other animals are in different positions on the head, they present different views simultaneously; this is the basis of stereopsis, the process by which the brain exploits the parallax due to the different views from the eye to gain depth perception and estimate distances to objects. Animals use motion parallax, in which the animals move to gain different viewpoints. For example, pigeons down to see depth; the motion parallax is exploited in wiggle stereoscopy, computer graphics which provide depth cues through viewpoint-shifting animation rather than through binocular vision.
Parallax arises due to change in viewpoint occurring due to motion of the observer, of the observed, or of both. What is essential is relative motion. By observing parallax, measuring angles, using geometry, one can determine distance. Astronomers use the word "parallax" as a synonym for "distance measurement" by other methods: see parallax #Astronomy. Stellar parallax created by the relative motion between the Earth and a star can be seen, in the Copernican model, as arising from the orbit of the Earth around the Sun: the star only appears to move relative to more distant objects in the sky. In a geostatic model, the movement of the star would have to be taken as real with the star oscillating across the sky with respect to the background stars. Stellar parallax is most measured using annual parallax, defined as the difference in position of a star as seen from the Earth and Sun, i. e. the angle subtended at a star by the mean radius of the Earth's orbit around the Sun. The parsec is defined as the distance.
Annual parallax is measured by observing the position of a star at different times of the year as the Earth moves through its orbit. Measurement of annual parallax was the first reliable way to determine the distances to the closest stars; the first successful measurements of stellar parallax were made by Friedrich Bessel in 1838 for the star 61 Cygni using a heliometer. Stellar parallax remains the standard for calibrating other measurement methods. Accurate calculations of distance based on stellar parallax require a measurement of the distance from the Earth to the Sun, now based on radar reflection off the surfaces of planets; the angles involved in these calculations are small and thus difficult to measure. The nearest star to the Sun, Proxima Centauri, has a parallax of 0.7687 ± 0.0003 arcsec. This angle is that subtended by an object 2 centimeters in diameter located 5.3 kilometers away. The fact that stellar parallax was so small that it was unobservable at the time was used as the main scientific argument against heliocentrism during the early modern age.
It is clear from Euclid's geometry that the effect would be undetectable if the stars were far enough away, but for various reasons such gigantic distances involved seemed implausible: it was one of Tycho's principal objections to Copernican heliocentrism that in order for it to be compatible with the lack of observable stellar parallax, there would have to be an enormous and unlikely void between the orbit of Saturn and the eighth sphere. In 1989, the satellite Hipparcos was launched for obtaining improved parallaxes and proper motions for over 100,000 nearby stars, increasing the reach of the method tenfold. So, Hipparcos is only able to measure parallax angles for stars up to about 1,600 light-years away, a little more than one percent of the diameter of the Milky Way Galaxy; the European Space Agency's Gaia mission, launched in December 2013, will be able to measure parallax angles to an accuracy of 10 microarcseconds, thus mapping nearby stars up to a distance of tens of thousands of ligh
A giant star is a star with larger radius and luminosity than a main-sequence star of the same surface temperature. They lie above the main sequence on the Hertzsprung–Russell diagram and correspond to luminosity classes II and III; the terms giant and dwarf were coined for stars of quite different luminosity despite similar temperature or spectral type by Ejnar Hertzsprung about 1905. Giant stars have radii up to a few hundred times the Sun and luminosities between 10 and a few thousand times that of the Sun. Stars still more luminous than giants are referred to as hypergiants. A hot, luminous main-sequence star may be referred to as a giant, but any main-sequence star is properly called a dwarf no matter how large and luminous it is. A star becomes a giant after all the hydrogen available for fusion at its core has been depleted and, as a result, leaves the main sequence; the behaviour of a post-main-sequence star depends on its mass. For a star with a mass above about 0.25 solar masses, once the core is depleted of hydrogen it contracts and heats up so that hydrogen starts to fuse in a shell around the core.
The portion of the star outside the shell expands and cools, but with only a small increase in luminosity, the star becomes a subgiant. The inert helium core continues to grow and increase temperature as it accretes helium from the shell, but in stars up to about 10-12 M☉ it does not become hot enough to start helium burning. Instead, after just a few million years the core reaches the Schönberg–Chandrasekhar limit collapses, may become degenerate; this causes the outer layers to expand further and generates a strong convective zone that brings heavy elements to the surface in a process called the first dredge-up. This strong convection increases the transport of energy to the surface, the luminosity increases and the star moves onto the red-giant branch where it will stably burn hydrogen in a shell for a substantial fraction of its entire life; the core continues to gain mass and increase in temperature, whereas there is some mass loss in the outer layers. § 5.9. If the star's mass, when on the main sequence, was below 0.4 M☉, it will never reach the central temperatures necessary to fuse helium.
P. 169. It will therefore remain a hydrogen-fusing red giant until it runs out of hydrogen, at which point it will become a helium white dwarf. § 4.1, 6.1. According to stellar evolution theory, no star of such low mass can have evolved to that stage within the age of the Universe. In stars above about 0.4 M☉ the core temperature reaches 108 K and helium will begin to fuse to carbon and oxygen in the core by the triple-alpha process.§ 5.9, chapter 6. When the core is degenerate helium fusion begins explosively, but most of the energy goes into lifting the degeneracy and the core becomes convective; the energy generated by helium fusion reduces the pressure in the surrounding hydrogen-burning shell, which reduces its energy-generation rate. The overall luminosity of the star decreases, its outer envelope contracts again, the star moves from the red-giant branch to the horizontal branch. Chapter 6; when the core helium is exhausted, a star with up to about 8 M☉ has a carbon–oxygen core that becomes degenerate and starts helium burning in a shell.
As with the earlier collapse of the helium core, this starts convection in the outer layers, triggers a second dredge-up, causes a dramatic increase in size and luminosity. This is the asymptotic giant branch analogous to the red-giant branch but more luminous, with a hydrogen-burning shell contributing most of the energy. Stars only remain on the AGB for around a million years, becoming unstable until they exhaust their fuel, go through a planetary nebula phase, become a carbon–oxygen white dwarf. § 7.1–7.4. Main-sequence stars with masses above about 12 M☉ are very luminous and they move horizontally across the HR diagram when they leave the main sequence becoming blue giants before they expand further into blue supergiants, they start core-helium burning before the core becomes degenerate and develop smoothly into red supergiants without a strong increase in luminosity. At this stage they have comparable luminosities to bright AGB stars although they have much higher masses, but will further increase in luminosity as they burn heavier elements and become a supernova.
Stars in the 8-12 M☉ range have somewhat intermediate properties and have been called super-AGB stars. They follow the tracks of lighter stars through RGB, HB, AGB phases, but are massive enough to initiate core carbon burning and some neon burning, they form oxygen–magnesium–neon cores, which may collapse in an electron-capture supernova, or they may leave behind an oxygen–neon white dwarf. O class main sequence stars are highly luminous; the giant phase for such stars is a brief phase of increased size and luminosity before developing a supergiant spectral luminosity class. Type O giants may be more than a hundred thousand times as luminous as the sun, brighter than many supergiants. Classification is complex and difficult with small differences between luminosity classes and a continuous range of intermediate forms; the most massive stars develop giant or supergiant spectral features while still burning hydrogen in their cores, due to mixing of heavy elements to the surface and high luminosity which produces a powerful stellar wind and causes the star's atmosphere to expand.
A star whose initial mass is less than 0.25 M☉ will not become a giant star at all. For most of th
Ara is a southern constellation situated between Scorpius and Triangulum Australe. Ara was one of the 48 Greek constellations described by the 2nd century astronomer Ptolemy, it remains one of the 88 modern constellations defined by the International Astronomical Union; the orange supergiant Beta Arae is the brightest star in the constellation, with an apparent magnitude of 2.85—marginally brighter than the blue-white Alpha Arae. Seven star systems are known to host planets; the sunlike star Mu Arae hosts four known planets, while Gliese 676 is a binary red dwarf system with four known planets. The Milky Way crosses the northwestern part of Ara. In ancient Greek mythology, Ara was identified as the altar where the gods first made offerings and formed an alliance before defeating the Titans. One of the southernmost constellations depicted by Ptolemy, it had been recorded by Aratus in 270 BC as lying close to the horizon, the Almagest portrays stars as far south as Gamma Arae. Professor of astronomy Bradley Schaefer has proposed that ancient observers must have been able to see as far south as Zeta Arae to define a pattern that looked like an altar.
In illustrations, Ara is depicted as an altar with its smoke'rising' southward. However, depictions of Ara vary in their details. In the early days of printing, a 1482 woodcut of Gaius Julius Hyginus's classic Poeticon Astronomicon depicts the altar as surrounded by demons. Johann Bayer in 1603 depicted Ara as an altar with burning incense. Hyginus depicted Ara as an altar with burning incense, though his Ara featured devils on either side of the flames. However, Willem Blaeu, a Dutch uranographer active in the 16th and 17th centuries, drew Ara as an altar designed for sacrifice, with a burning animal offering. Unlike most depictions, the smoke from Blaeu's Ara rises northward. In Chinese astronomy, the stars of the constellation Ara lie within The Azure Dragon of the East. Five stars of Ara formed a tortoise, while another three formed Chǔ, a pestle; the Wardaman people of the Northern Territory in Australia saw the stars of Ara and the neighbouring constellation Pavo as flying foxes. Covering 237.1 square degrees and hence 0.575% of the sky, Ara ranks 63rd of the 88 modern constellations by area.
Its position in the Southern Celestial Hemisphere means that the whole constellation is visible to observers south of 22°N. Scorpius runs along the length of its northern border, while Norma and Triangulum Australe border it to the west, Apus to the south, Pavo and Telescopium to the east respectively; the three-letter abbreviation for the constellation, as adopted by the International Astronomical Union, is Ara. The official constellation boundaries, as set by Belgian astronomer Eugène Delporte in 1930, are defined by a polygon of twelve segments. In the equatorial coordinate system, the right ascension coordinates of these borders lie between 16h 36.1m and 18h 10.4m, while the declination coordinates are between −45.49° and −67.69°. Bayer gave eight stars Bayer designations, labelling them Alpha through to Theta, though he had never seen the constellation directly as it never rises above the horizon in Germany. After charting the southern constellations, Lacaille recharted the stars of Ara from Alpha though to Sigma, including three pairs of stars next to each other as Epsilon, Kappa and Nu.
Ara thus has rich star fields. Within the constellation's borders, there are 71 stars brighter than or equal to apparent magnitude 6.5. Just shading Alpha Arae, Beta Arae is the brightest star in the constellation, it is an orange-hued star of spectral type K3Ib-IIa, classified as a supergiant or bright giant, around 650 light-years from Earth. It is around 8.21 times 5,636 times as luminous as the Sun. At apparent magnitude 2.85, this difference in brightness between the two is undetectable by the unaided eye. Close to Beta Arae is a blue-hued supergiant of spectral type B1Ib. Of apparent magnitude 3.3, it is 1110 ± 60 light-years from Earth. It has been estimated to be between 12.5 and 25 times as massive as the Sun, have around 120,000 times its luminosity. Alpha Arae is a blue-white main sequence star of magnitude 2.95, 270 ± 20 light-years from Earth. This star is around 9.6 times as massive as the Sun, has an average of 4.5 times its radius. It is 5,800 times as luminous as the Sun, its energy emitted from its outer envelope at an effective temperature of 18,044 K.
A Be star, Alpha Arae is surrounded by a dense equatorial disk of material in Keplerian rotation. The star is losing mass by a polar stellar wind with a terminal velocity of 1,000 km/s. At magnitude 3.13 is Zeta Arae, an orange giant of spectral type K3III, located 490 ± 10 light-years from Earth. Around 7–8 times as massive as the Sun, it has swollen to a diameter around 114 times that of the Sun and is 3800 times as luminous. Delta Arae is a blue-white main sequence star of spectral type B8Vn and magnitude 3.6, 198 ± 4 light-years from Earth. It is around 3.56 times as massive as the Sun. Eta Arae is an orange giant of apparent magnitude 3.76. Exoplanets have been discovered in seven star systems in the constellation. Mu Arae is a sunlike star. HD 152079 is a sunlike star with a planet. HD 154672 is an ageing sunlike star with a Hot Jupiter. HD 154857 is a sunlike star with one suspected planet. HD 156411 is larger than the sun with a gas giant planet in orbit. Gliese 674 is a nearby red dwarf star with a planet.
Gliese 676 is a binary star system composed of two red dwarves with
Right ascension is the angular distance of a particular point measured eastward along the celestial equator from the Sun at the March equinox to the point above the earth in question. When paired with declination, these astronomical coordinates specify the direction of a point on the celestial sphere in the equatorial coordinate system. An old term, right ascension refers to the ascension, or the point on the celestial equator that rises with any celestial object as seen from Earth's equator, where the celestial equator intersects the horizon at a right angle, it contrasts with oblique ascension, the point on the celestial equator that rises with any celestial object as seen from most latitudes on Earth, where the celestial equator intersects the horizon at an oblique angle. Right ascension is the celestial equivalent of terrestrial longitude. Both right ascension and longitude measure an angle from a primary direction on an equator. Right ascension is measured from the Sun at the March equinox i.e. the First Point of Aries, the place on the celestial sphere where the Sun crosses the celestial equator from south to north at the March equinox and is located in the constellation Pisces.
Right ascension is measured continuously in a full circle from that alignment of Earth and Sun in space, that equinox, the measurement increasing towards the east. As seen from Earth, objects noted to have 12h RA are longest visible at the March equinox. On those dates at midnight, such objects will reach their highest point. How high depends on their declination. Any units of angular measure could have been chosen for right ascension, but it is customarily measured in hours and seconds, with 24h being equivalent to a full circle. Astronomers have chosen this unit to measure right ascension because they measure a star's location by timing its passage through the highest point in the sky as the Earth rotates; the line which passes through the highest point in the sky, called the meridian, is the projection of a longitude line onto the celestial sphere. Since a complete circle contains 24h of right ascension or 360°, 1/24 of a circle is measured as 1h of right ascension, or 15°. A full circle, measured in right-ascension units, contains 24 × 60 × 60 = 86400s, or 24 × 60 = 1440m, or 24h.
Because right ascensions are measured in hours, they can be used to time the positions of objects in the sky. For example, if a star with RA = 1h 30m 00s is at its meridian a star with RA = 20h 00m 00s will be on the/at its meridian 18.5 sidereal hours later. Sidereal hour angle, used in celestial navigation, is similar to right ascension, but increases westward rather than eastward. Measured in degrees, it is the complement of right ascension with respect to 24h, it is important not to confuse sidereal hour angle with the astronomical concept of hour angle, which measures angular distance of an object westward from the local meridian. The Earth's axis rotates westward about the poles of the ecliptic, completing one cycle in about 26,000 years; this movement, known as precession, causes the coordinates of stationary celestial objects to change continuously, if rather slowly. Therefore, equatorial coordinates are inherently relative to the year of their observation, astronomers specify them with reference to a particular year, known as an epoch.
Coordinates from different epochs must be mathematically rotated to match each other, or to match a standard epoch. Right ascension for "fixed stars" near the ecliptic and equator increases by about 3.05 seconds per year on average, or 5.1 minutes per century, but for fixed stars further from the ecliptic the rate of change can be anything from negative infinity to positive infinity. The right ascension of Polaris is increasing quickly; the North Ecliptic Pole in Draco and the South Ecliptic Pole in Dorado are always at right ascension 18h and 6h respectively. The used standard epoch is J2000.0, January 1, 2000 at 12:00 TT. The prefix "J" indicates. Prior to J2000.0, astronomers used the successive Besselian epochs B1875.0, B1900.0, B1950.0. The concept of right ascension has been known at least as far back as Hipparchus who measured stars in equatorial coordinates in the 2nd century BC, but Hipparchus and his successors made their star catalogs in ecliptic coordinates, the use of RA was limited to special cases.
With the invention of the telescope, it became possible for astronomers to observe celestial objects in greater detail, provided that the telescope could be kept pointed at the object for a period of time. The easiest way to do, to use an equatorial mount, which allows the telescope to be aligned with one of its two pivots parallel to the Earth's axis. A motorized clock drive is used with an equatorial mount to cancel out the Earth's rotation; as the equatorial mount became adopted for observation, the equatorial coordinate system, which includes right ascension, was adopted at the same time for simplicity. Equatorial mounts could be pointed at objects with known right ascension and declination by the use of setting circles; the first star catalog to use right ascen
The Kelvin scale is an absolute thermodynamic temperature scale using as its null point absolute zero, the temperature at which all thermal motion ceases in the classical description of thermodynamics. The kelvin is the base unit of temperature in the International System of Units; until 2018, the kelvin was defined as the fraction 1/273.16 of the thermodynamic temperature of the triple point of water. In other words, it was defined such that the triple point of water is 273.16 K. On 16 November 2018, a new definition was adopted, in terms of a fixed value of the Boltzmann constant. For legal metrology purposes, the new definition will come into force on 20 May 2019; the Kelvin scale is named after the Belfast-born, Glasgow University engineer and physicist William Thomson, 1st Baron Kelvin, who wrote of the need for an "absolute thermometric scale". Unlike the degree Fahrenheit and degree Celsius, the kelvin is not referred to or written as a degree; the kelvin is the primary unit of temperature measurement in the physical sciences, but is used in conjunction with the degree Celsius, which has the same magnitude.
The definition implies that absolute zero is equivalent to −273.15 °C. In 1848, William Thomson, made Lord Kelvin, wrote in his paper, On an Absolute Thermometric Scale, of the need for a scale whereby "infinite cold" was the scale's null point, which used the degree Celsius for its unit increment. Kelvin calculated; this absolute scale is known today as the Kelvin thermodynamic temperature scale. Kelvin's value of "−273" was the negative reciprocal of 0.00366—the accepted expansion coefficient of gas per degree Celsius relative to the ice point, giving a remarkable consistency to the accepted value. In 1954, Resolution 3 of the 10th General Conference on Weights and Measures gave the Kelvin scale its modern definition by designating the triple point of water as its second defining point and assigned its temperature to 273.16 kelvins. In 1967/1968, Resolution 3 of the 13th CGPM renamed the unit increment of thermodynamic temperature "kelvin", symbol K, replacing "degree Kelvin", symbol °K. Furthermore, feeling it useful to more explicitly define the magnitude of the unit increment, the 13th CGPM held in Resolution 4 that "The kelvin, unit of thermodynamic temperature, is equal to the fraction 1/273.16 of the thermodynamic temperature of the triple point of water."In 2005, the Comité International des Poids et Mesures, a committee of the CGPM, affirmed that for the purposes of delineating the temperature of the triple point of water, the definition of the Kelvin thermodynamic temperature scale would refer to water having an isotopic composition specified as Vienna Standard Mean Ocean Water.
In 2018, Resolution A of the 26th CGPM adopted a significant redefinition of SI base units which included redefining the Kelvin in terms of a fixed value for the Boltzmann constant of 1.380649×10−23 J/K. When spelled out or spoken, the unit is pluralised using the same grammatical rules as for other SI units such as the volt or ohm; when reference is made to the "Kelvin scale", the word "kelvin"—which is a noun—functions adjectivally to modify the noun "scale" and is capitalized. As with most other SI unit symbols there is a space between the kelvin symbol. Before the 13th CGPM in 1967–1968, the unit kelvin was called a "degree", the same as with the other temperature scales at the time, it was distinguished from the other scales with either the adjective suffix "Kelvin" or with "absolute" and its symbol was °K. The latter term, the unit's official name from 1948 until 1954, was ambiguous since it could be interpreted as referring to the Rankine scale. Before the 13th CGPM, the plural form was "degrees absolute".
The 13th CGPM changed the unit name to "kelvin". The omission of "degree" indicates that it is not relative to an arbitrary reference point like the Celsius and Fahrenheit scales, but rather an absolute unit of measure which can be manipulated algebraically. In science and engineering, degrees Celsius and kelvins are used in the same article, where absolute temperatures are given in degrees Celsius, but temperature intervals are given in kelvins. E.g. "its measured value was 0.01028 °C with an uncertainty of 60 µK." This practice is permissible because the degree Celsius is a special name for the kelvin for use in expressing relative temperatures, the magnitude of the degree Celsius is equal to that of the kelvin. Notwithstanding that the official endorsement provided by Resolution 3 of the 13th CGPM states "a temperature interval may be expressed in degrees Celsius", the practice of using both °C and K is widespread throughout the scientific world; the use of SI prefixed forms of the degree Celsius to express a temperature interval has not been adopted.
In 2005 the CIPM embarked on a programme to redefine the kelvin using a more experimentally rigorous methodology. In particular, the committee proposed redefining the kelvin such that Boltzmann's constant takes the exact value 1.3806505×10−23 J/K. The committee had hoped tha
In astronomy, stellar classification is the classification of stars based on their spectral characteristics. Electromagnetic radiation from the star is analyzed by splitting it with a prism or diffraction grating into a spectrum exhibiting the rainbow of colors interspersed with spectral lines; each line indicates a particular chemical element or molecule, with the line strength indicating the abundance of that element. The strengths of the different spectral lines vary due to the temperature of the photosphere, although in some cases there are true abundance differences; the spectral class of a star is a short code summarizing the ionization state, giving an objective measure of the photosphere's temperature. Most stars are classified under the Morgan-Keenan system using the letters O, B, A, F, G, K, M, a sequence from the hottest to the coolest; each letter class is subdivided using a numeric digit with 0 being hottest and 9 being coolest. The sequence has been expanded with classes for other stars and star-like objects that do not fit in the classical system, such as class D for white dwarfs and classes S and C for carbon stars.
In the MK system, a luminosity class is added to the spectral class using Roman numerals. This is based on the width of certain absorption lines in the star's spectrum, which vary with the density of the atmosphere and so distinguish giant stars from dwarfs. Luminosity class 0 or Ia+ is used for hypergiants, class I for supergiants, class II for bright giants, class III for regular giants, class IV for sub-giants, class V for main-sequence stars, class sd for sub-dwarfs, class D for white dwarfs; the full spectral class for the Sun is G2V, indicating a main-sequence star with a temperature around 5,800 K. The conventional color description takes into account only the peak of the stellar spectrum. In actuality, stars radiate in all parts of the spectrum; because all spectral colors combined appear white, the actual apparent colors the human eye would observe are far lighter than the conventional color descriptions would suggest. This characteristic of'lightness' indicates that the simplified assignment of colors within the spectrum can be misleading.
Excluding color-contrast illusions in dim light, there are indigo, or violet stars. Red dwarfs are a deep shade of orange, brown dwarfs do not appear brown, but hypothetically would appear dim grey to a nearby observer; the modern classification system is known as the Morgan–Keenan classification. Each star is assigned a spectral class from the older Harvard spectral classification and a luminosity class using Roman numerals as explained below, forming the star's spectral type. Other modern stellar classification systems, such as the UBV system, are based on color indexes—the measured differences in three or more color magnitudes; those numbers are given labels such as "U-V" or "B-V", which represent the colors passed by two standard filters. The Harvard system is a one-dimensional classification scheme by astronomer Annie Jump Cannon, who re-ordered and simplified a prior alphabetical system. Stars are grouped according to their spectral characteristics by single letters of the alphabet, optionally with numeric subdivisions.
Main-sequence stars vary in surface temperature from 2,000 to 50,000 K, whereas more-evolved stars can have temperatures above 100,000 K. Physically, the classes indicate the temperature of the star's atmosphere and are listed from hottest to coldest; the spectral classes O through M, as well as other more specialized classes discussed are subdivided by Arabic numerals, where 0 denotes the hottest stars of a given class. For example, A0 denotes A9 denotes the coolest ones. Fractional numbers are allowed; the Sun is classified as G2. Conventional color descriptions are traditional in astronomy, represent colors relative to the mean color of an A class star, considered to be white; the apparent color descriptions are what the observer would see if trying to describe the stars under a dark sky without aid to the eye, or with binoculars. However, most stars in the sky, except the brightest ones, appear white or bluish white to the unaided eye because they are too dim for color vision to work. Red supergiants are cooler and redder than dwarfs of the same spectral type, stars with particular spectral features such as carbon stars may be far redder than any black body.
The fact that the Harvard classification of a star indicated its surface or photospheric temperature was not understood until after its development, though by the time the first Hertzsprung–Russell diagram was formulated, this was suspected to be true. In the 1920s, the Indian physicist Meghnad Saha derived a theory of ionization by extending well-known ideas in physical chemistry pertaining to the dissociation of molecules to the ionization of atoms. First he applied it to the solar chromosphere to stellar spectra. Harvard astronomer Cecilia Payne demonstrated that the O-B-A-F-G-K-M spectral sequence is a sequence in temperature; because the classification sequence predates our understanding that it is a temperature sequence, the placement of a spectrum into a given subtype, such as B3 or A7, depends upon estimates of the strengths of absorption features in stellar spectra. As a result, these subtypes are not evenly divided into any sort of mathematically representable intervals; the Yerkes spectral classification called the MKK system from the authors' initial | <urn:uuid:42c92104-51b2-4123-b154-518057acae17> | CC-MAIN-2019-47 | https://wikivisually.com/wiki/Kappa_Arae | s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496671106.83/warc/CC-MAIN-20191122014756-20191122042756-00377.warc.gz | en | 0.934601 | 7,909 | 3.953125 | 4 |
«Landscape oriented» urban strategies
This paper is framed in the contemporary discourse of landscape, which shifts, during the twentieth century, from being considered just a scene to a dynamic system that works through processes. It evolves from the pictorial to the instrumental, operational and strategic. This dynamic condition gives it the ability to create itself and can be introduced into the basis of the design. This shift emphasizes the interactions between natural, cultural, economic and social processes, and we can characterize it both spatially and temporally.
The transformation of these processes is an inspiration and a model for the new urban condition. Projects that reflect the emerging trend of orienting urban form through the landscape will be reviewed. We will also look at urban expansion and renewal projects that incorporate this approach and become instigators of a set of interrelated dynamics between the social, the economic, ecological and cultural. This specificity allows the landscape to articulate with the urban, and through its dynamics, understand how cities are formed, are revitalised and evolve over time.
LANDSCAPE ORIENTED” URBAN STRATEGIES
The projects reviewed have been selected because they include ecological processes and landscape strategies at the first stages of new urban form and demonstrate their ability to create urban development. All the examples represent the current practice of landscape architecture in different parts of the world and meet the following requirements:
• They include a variety of different types and forms of urban landscapes: open spaces, urban regeneration, urban expansion areas, and new residential developments.
• They include different scales of urban landscapes: regional scale, city scale and neighborhood scale.
• They cover all types of land uses, including residential, commercial, industrial and recreational.
• They have different locations: urbanized consolidated areas, periurban fringe areas, or areas outside the urban edge.
The proyects selected are included in the following table:
Lower Don Lands, Toronto, Canada
Major world cities such as Toronto are in transition and many need to integrate post-industrial landscapes while also radically reframing their interactions with the natural environment. The Lower Don Lands project is unique among these efforts by virtue of its size, scope, and complexity.
In 2007, Waterfront Toronto, with the support of the City of Toronto, launched an international juried design competition to determine a master vision to tackle the challenge of redeveloping the Lower Don Lands. The goal of the competition was to produce a unifying and inspiring concept for merging the natural and urban fabric into a green, integrated and sustainable community. The design teams were asked to produce a compelling concept for the Lower Don Lands with the river as the central feature, while at the same time providing for new development and new linkages to the rest of the city, using the following key principles to guide their designs:
• Naturalize the mouth of the Don River
• Create a continuous riverfront park system
• Provide for harmonious new development
• Connect waterfront neighborhoods
• Prioritize public transit
• Humanize the existing infrastructure
• Expand opportunities for interaction with the water
• Promote sustainable development
The office of Michael Van Valkenburgh Associates (MVVA Inc.) won the competition. In the MVVA team’s design, the engine of transformative urbanism is a dramatic repositioning of natural systems, landscape systems, transportation systems, and architectural environments. A renewed recognition of the functional and experiential benefits of river ecology allows a sustainable approach to flood control and river hydrology to become the symbolic and literal center around which a new neighborhood can be constructed.
This master plan brings together transformative landscape methodologies with innovative scientific approaches to natural reclamation and makes them operational at the scale of the city and the regional ecology. Within its plan to recycle 115 hectares of Toronto’s waterfront, the Port Lands Estuary project unites the client’s major programmatic initiatives into a single framework for the study area that will simultaneously make the site more natural (with the potential for new site ecologies based on the size and complexity of the river mouth landscape) and more urban (with the development of a green residential district and its integration into an ever-expanding network of infrastructure and use).
Both the urban and the natural elements of the landscape are seen as having the potential to introduce complex new systems to the site that will evolve over the course of many years, and give form and character to the development of the neighborhood.
Water City, Qianhai, Shenzen, China
The Qianhai Water City site includes a thousand and nine hundred hectares (1.900 Has) of reclaimed land surrounding the Qianhai Harbor, on the western coast of Shenzhen, at a key point of the Pearl River Delta. The area has exceptionally poor water quality. Upon implementation, Qianhai is envisioned to be the financial, logistics and service hub of Shenzhen, and a major new urban center in the Pearl River Delta mega-region, linking Hong Kong to Shenzhen and Guangzhou.
Landscape Architect James Corner and his office Field Operations envision a new “Water City” for 1.5 million people. A new and vibrant 21st century city: dense, compact, mixed, sustainable and centered around the area´s most important resource – water.
Figure 2: Qianhai Water City Masterplan. Courtesy of: © Field Operations
The great opportunity is the occasion to embrace the water as the defining feature of the landscape´s identity.
This watery identity is an approach to processing, remediating and enhancing the water on the site and in the harbor that is environmentally innovative, while simultaneously generating a wide range of watery urban environments throughout the territory´s 18 square kilometers. (18km2). Shenzen should aspire to create a waterfront city that rivals Hong Kong, Sydney and Vancouver in its quality, character and globally recognizable physical, economic and cultural identity.
Given this aspiration, the successful planning proposal for Qianhai cannot have a conventional planning that privilege buildings over landscape, or infrastructure over ecology. Rather, the successful urban plan must outline a strategy that synthesizes these systems in order to create a robust and resilient urban matrix capable of continuous adaptation, transformation and revision.
This design proposal achieves this synthesis in a creative and realizable way. The proposal breaks down the massive territory of the site into five manageable development sub-districts through the introduction of five Water Fingers that extend along the line of the existing rivers and channels. These fingers hybridize an innovative hydrological infrastructure and an iconic public realm, serving to process and remediate on site water. It also expands the amount of development frontage and creates a series of public open spaces that structure and organizes the development of the overall Qianhai area.
The urban fabric within each development sub-district takes the scale of the typical Shenzen block, but breaks it down further through the introduction of a tertiary network of roadways and open space corridors in order to promote pedestrian movement, avoid the isolation of the super-block format, and generate diverse range of urban neighborhoods within each sub-district.
The result is a hyper-dense, ecologically sensitive urban landscape that offers an iconic waterfront, diverse building stock, cultural and recreational destinations, as well as a series of public open spaces that are all easily accessible from any point within Qianhai.
Figure 3: Water fingers. Courtesy of: © Field Operations
The “water-finger” landscapes remediate on-site water. A network of large-scale filtration landscapes will purify water. There is a strong relationship between the wet landscape and the green open spaces.
Confluence District, Lyon, France
Today one of the biggest urban development projects in Europe is being carried out in Lyon, in the Confluence of the rivers Rhône and Saône. The city center of Lyon will be doubled using 150 hectares of industrial area, with high quality in terms of urban planning, architecture, and landscape architecture. This area is the southern tip of Lyon’s central peninsula, long devoted to manufacturing and transport. Reclaimed from the waters in past centuries, this riverside site is re-embracing its banks and natural environment. The redevelopment is gradually highlighting an outstanding location and unique landscapes. Only a few years ago it was little more than a neglected wasteland. Instead, a neighborhood for living in and sharing is being built. This new urban development consists of two phases:
Phase One (In French: ZAC1) is four hundred thousand square meters (400.000m2) of new buildings in 41 hectares, distributed as follows:
CONFLUENCE DISTRICT, LYON
PHASE 1 PHASE 2
Total area 400.000m2 420.000m2
Housing 145.000m2 140.000 m2
Retail 130.000 m2 230.000m2
Hotels and shopping 95.000m2 15.000m2
Recreation 30.000m2 35.000m2
It stands around centerpieces such as the Place Nautique, the Saône Park, the Place des Archives and the Retail and Leisure Cluster. This Phase One will also continue with the conversion of the old Rambaud Port buildings – La Sucrière, Les Salins and the Espace Group complex – into recreational, cultural and business buildings.
Phase Two of the Lyon Confluence urban project (In French: ZAC 2) was master-planned by the Herzog & de Meuron firm together with landscape architect Michel Desvigne. It is four hundred and twenty thousand square meters (420.000 m2) of new buildings in 35 hectares, distributed as shown above in Figure 1. Around 30% of the existing market buildings will be conserved. Phase Two features three new bridges: Pont Raymond Barre for the extended tramway; Pont des Girondins to connect Lyon Confluence and Gerland (on the Rhône’s east bank) and La Transversale, a straight route for pedestrian travel, including two footbridges over the Rhône and Saône.
As opposed to rigid and inflexible redevelopment plans, Francois Grether (architect and planner) and Michel Desvinge (landscape architect) have devised a “strategy of infiltration” for the Confluence District in Lyon. It is a flexible occupation, as parcels become available for new programs, structured by a “dispersed and mobile” system of parks.
During the 30-year transformation process, all exterior land will be a park at one time or another, either provisionally or for the more long term. As Michel Desvigne says:
“We are not envisaging a hypothetical, definitive state but a succession of states that correspond to the different stages of the metamorphosis. Exterior areas will be born, disappear, shift, according to the evolution of the building and the rhythm of the liberation of land, to make up a sort of moving gap, like that of crop rotation”.
All of the buildings of the Confluence District are directly related to the park system and every inhabitant will have a relationship with a garden or walk. A network of walks and gardens weaves between new blocks throughout the southern end of the peninsula. The phasing of the project depends on the different industrial parcels being available for new development at different periods, led to the natural evolution of a “two speed” landscape. Temporary and perennial elements could be staged on the territory. Temporary features instantly enhance the site´s public perception: meadows of flowers, tree nurseries, and a 2.5 km park as the spinal cord of the park system along the Saone. The perennial elements, such as lines and clusters of trees, infrastructure and buildings progressively define the projected spatial configuration.
Water also plays an important role in the project; its organization corresponds with the pedestrian walkways. The port along the Saone is redefined and several large basins prefigured by temporary gardens will be built towards the district interior. New waterways are established parallel to the rivers, providing protection against the strong tidal variations of the rivers. The new waterways are filled by recuperating water with a system of channels, drains and pools within the park network. New flora is establishing itself in the protected ecosystem. The rainwater recuperation is also phased, allowing certain lots to serve as temporary retention basins. The hydraulic mechanisms determine, to a certain degree, the design of the park.
Vathorst, Amersfoort, The Netherlands
In 1995, WEST 8 developed the Master Plan for Amersfort with a program that consisted of 10,000 homes for 30,000 to 40,000 inhabitants. Adrian Geuze is the principal of the office WEST8 of landscape architecture and urban design, in Rotterdam. He is one of the creators of large urban transformation projects (Among them, the Madrid RIO Project).
This project comes under the Vinex Plan, which has proved to be a smart strategy that has accumulated some interesting new urban developments, with high quality architecture and careful treatment of the landscape. Amersfoort is a city located on the banks of the river Eem, in the central region of the Netherlands. With 135,000 inhabitants, it is the second city of the region in size, after Utrecht.
The new developments in Vathorst and the Water City are an example of the efforts made by the designing team since the initial proposals to avoid tabula rasa. The intention is to build a new urban growth in a periphery without previous references, avoiding the homogenization and monofunctionality of the suburban landscape. In this case the landscape of the site becomes the main concept idea for the project. The shape and character of the project is derived from the landscape structures and inherited attributes of the site and its surrounding territory. It is a high density housing area (65 h/ha) designed in the tradition of the Old Dutch canal cities, with a water connection to the Ijsselmeer Sea.
The master plan is for 11000 dwellings, 90 hectares of commercial, industrial and office programs and required public facilities. It is divided into four zones:
• A concentration of industry, commercial and office program at the junction of national infrastructure (railways and motorways).
• A low-density urbanization respecting the existing rural landscape with tree lines
• A high-density cluster around a clean water basin
• Urban morphology is recreated by the traditional Dutch landscape and the water channelled towns.
In the Water City masterplan, a new network of channels is designed, connecting with the Ijsselmeer and inspired with traditional bridges: high, so that ships can pass underneath. Looks for an individual housing typology reminiscent of traditional single Dutch houses, narrow and high, of different heights and color of the stone or brick. The low houses can also be considered as a free interpretation of the traditional Dutch house with canal frontage, reformulated here as house-yard.
Figure 6: Aerial View of The Water City, Vathorst. Courtesy of: © West 8
CONCLUSIONS AND DISCUSSION
In the projects reviewed, we see a trend where urban growth does not simply expand on the surrounding territory, but rather transforms it so that it can reintegrate into the cycles of nature and cultural background of the place.
Landscape architecture projects that interpret the landscape as a complex dynamic system can enhance a set of interrelated dynamics: social, economic, ecological, cultural and infrastructural. We also note that the landscape is a medium that can:
• Read and understand the complexity of the territory
• Act at different scales and transcend administrative boundaries;
• Recognize historical and cultural values and retrofit them with a contemporary logic;
• Accommodate the different needs of land uses at different scales;
• Act at different cross-sectorial issues
• Be the bearer of the processes that move between society and space.
We have seen emerging projects where regional and urban development goals are expressed by landscape strategies based on the specific features and characteristics of places and where the dialogue Ecology – Landscape – Urbanity gives identity to the territory.
Cristina del Pozo, PhD
SUNLIGHT Landscape Studio
Program Director. Master´s Degree in Landscape Arcitecture. CEU San Pablo University, Madrid.
Published in: Strategies for the Post-speculative City. Edited by Juan Arana and Teresa Fanchini. EUSS 2013. ISOCARP.
• ALLEN, S. (2001). Mat urbanism: The thick 2-D.» In case: Le Corbusier’s Venice Hospital and the Mat Building Revival. Prestel Verlag, Munich. 118-126.
• ANTROP, M. (2004). Landscape change and the urbanization process in Europe. Landscape and Urban Planning, 67(1-4), 9-26.
• ASSARGARD, H. (2011). Landscape Urbanism from a methodological perspective and a conceptual framework. Master´s Thesis of Landscape Planning. Department of Urban and Rural Development. Swedish University of Agricultural Sciences.
• BAVA, H. (2002). Landscape as a foundation. Topos. Magazine, 40, 70-77.
• BUND DEUTSCHER LANDSCHAFTSARCHITEKTEN (BDLA) (2009). Landscape as system. Contemporary German Landscape Architecture. Birkhäuser Verlag. Berlin.
• CORNER, J. (1999). Recovering landscape: Essays in contemporary landscape architecture. Princeton Architectural Press, New York.
• DIEDRICH, L. (2009). Territories. From landscape to city. Agence Ter. Birkhäuser Verlag. Berlin.
• FONT, A. (2006). L´ explosió de la ciutat/The explosion of the city. Urban, (11), 128.
• FORMAN, R. T. and M. Godron (1986). Landscape Ecology. Wiley & Sons, New York.
• FRAMPTON, K. (1995). Toward an urban landscape. Columbia Documents of Architecture and Theory, Vol 4. 83-94.
• MCHARG, I. & AMERICAN MUSEUM OF NATURAL HISTORY (1995). Design with nature. Wiley New York.
• MOSSOP, E. (2006). Landscapes of infrastructure. Waldheim C.: The Landscape Urbanism Reader. Princeton Architectural Press. NY.
• POLLACK, L. (2002). Sublime matters: Fresh Kills. Praxis: Journal of Writing and Building.Volume 4: Landscapes, 58–63.
• Rowe, C., RIAMBAU SAURÍ, E., & KOETTER, F. (1981). Ciudad Collage. Editorial Gustavo Gili. Barcelona.
• SMELIK, F., & ONWUKA, C. (2008). West 8, Mosaics. Birkhäuser Verlag. Berlin.
• SIEVERTS, T. (2003). Cities without cities : An interpretation of the Zwischenstadt (English language ed). London;NY: Spon Press.
• DE SOLÁ-MORALES, M. (1996). Terrain vague. Quaderns d’Arquitectura i Urbanisme, (212), 34-43.
• DE SOLÁ-MORALES, M. (2008). De cosas urbanas. Editorial Gustavo Gili. Barcelona.
• SABATÉ, J. (2011) Algunos retos metodológicos para una renovación del planeamiento. En: Alicia Novick, A; Núñez, T; Sabaté Bel, J. (eds). Miradas desde la Quebrada de Humahuaca. Territorios, proyectos y patrimonio. Buenos Aires.
• VIGANÓ, P. (2001). Piano territoriale di Coordinamento. Provincia di Lecce. Territori de la nuova modernitá. Ed. Electa Napoli.
• WALDHEIM, C. (2002). Landscape urbanism: A genealogy. Praxis, 4, 10-17.
• WALDHEIM, C. (2006). Introduction, A reference manifesto. The Landscape Urbanism Reader. Princeton Architectural Press NY, 11.
• WALL, A. (1999). Programming the urban surface. In Corner, J. (ed). Recovering landscape: Essays in contemporary landscape architecture. Princeton Architectural Press, 233. | <urn:uuid:2e9a2a91-81f2-4172-aac3-e7589a744bc0> | CC-MAIN-2019-47 | http://www.sunlight.es/landscape-oriented-urban-strategies/ | s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496665767.51/warc/CC-MAIN-20191112202920-20191112230920-00177.warc.gz | en | 0.882306 | 4,291 | 2.5625 | 3 |
This piece is going to describe the future of the Internet and the Internet of Things. This isn’t just a potential future—it’s a virtual inevitability. Not many have heard it. You’ll be one of the first.
The concept is called Universal Daemonization, and I’ve been writing and presenting on the topic for about a year now.
Universal Daemonization, at its core, is a new way in which humans and other types of objects will interact with the world. Here are the primary concepts and components:
- Both humans and objects will broadcast daemons around themselves, complete with hundreds of pieces of information about them. Imagine an aggregate of every profile ever completed, only being dynamically updated by interaction with the world
- These daemons will sit on top of a modified version of a standard tech stack, i.e., TCP/IP, HTTP, REST-based web services
- Both humans and objects will have Intention Brokers (IBs) that parse the daemons around them and take actions on their behalf
- For humans this will come in the form of Personal Assistants like Siri, Google Now, and Cortana, which will parse the daemons around them and do things like update their preferences, submit food orders, send social pokes to people nearby with similar interests, and requesting more information about products of interest
- For objects and machines the Intention Brokers will interact with the world in prescribed ways that pertain to its function. Parking meters will photograph cars parked in its spot, submit license plates, report tampering, etc., and bar surfaces will monitor how many drinks are on it, how many people are sitting in front of it, and ask lonely patrons how they’re doing if they’re alone (and their daemon says they’re willing to chat)
- People will be tied into the world through the connection between their daemons and a universal authentication framework. This will allow your Personal Assistant, or you directly, to make requests of the environment using the appropriate level of authorization that you have to do so
- So, a regular citizen could be inside a club and say, “Take a picture of the dance floor from overhead.”, and his personal assistant would do that by finding the API for the camera listed as above the dancefloor and submitting a POST request to it
- Similarly, a police officer could approach a crime scene and tell her personal assistant, “Retrieve all video of this location for the last 2 hours.”, and that video would be sent to her viewer and the police department from the surrounding 27 city cameras on light poles, parking meters, trees, and even authorized citizen cameras
- This will mean continuous customization of your environment based on where you are. When you enter a restaurant your PA will read the restaurant’s daemon, tell you all the specials, tell you who your waiter is (if you still have one), and then order for you if you want your go-to meal. It’ll arrive with extra ketchup, because that’s how you liked it
- All this was possible because the restaurant had a REST API that your PA submitted to on your behalf. It crawled the API, found the food you want, and customized it according to your preferences. This was on the drive over, or as you walked up to the building, and when you are done you just walk away because you paid beforehand without doing anything
- Machines will interact with each other in this way as well, GETTING and POSTING to APIs on a continuous basis, learning about the world around them, and sending updates, providing value, and doing what it was they were built for
- This will enable a whole new type of live dashboard for any level of a household or business. Analytics engines will pull information and make requests to required services at various intervals in order to provide real-time views of every aspect of life
- The living room wall in a family will be transformable into a real-time display of the entire family’s fitness, diet, blood work, grades in school, heartrate, daily purchase history, summary of voice and text messages used, social interaction tree, college fund savings goals, current home value based on who moved in on the block today, and current retirement fund performance—all updated to the minute
- And the same will be possible for businesses. Employee health stats, attendance, safety incidents, delays in shipments, air quality in the main worker areas, current company trading price, employee morale based on social media analysis, money lost in health insurance based on the physical health of employees, etc.—all updated to the second and displayed for any executive who asks
Everything will be broadcasting data and providing services to certain people, and the data pulled will be displayed in powerful ways to better enable decision makers (which will increasingly be machines/daemons themselves).
People and objects will be in a constant state of interaction with the world. Personal Assistants / Identity Brokers will be continuously sending GET and POST requests to surrounding human/object APIs, using their identity’s token as authorization. And basic, nearly imperceptible actions by a human, such as a shiver, will be responded to by our PAs by a POST request to the nearest
climate API for a temperature increase.
Desires—even those you didn’t know you had or don’t remember conveying—will become silent commands to the environment to conform to your preferences. And everyone and everything will be doing this…all the time.
That’s Universal Daemonization.
Technologies closer than they may appear
What’s fascinating about this is how tangible it is given existing technology. We have the protocols and tech stacks. All we need is someone to realize how close we are and how much money can be made from it.
And while the technology is remarkably within reach, it’s application in this way will be highly emergent in nature. The social implications will be particularly significant, as who you are—and the privileges you enjoy—will exponentially magnify what you have access to.
Doors will literally open in front of some people as they walk, while for others they will remain forever closed. And your PA will whisper ratings of peoples’ quality/usefulness as they approach you from afar.
Of course, big changes require big money, but to find sponsors we need look no further than governments and advertisers.
Governments will invent budgets once they realize the monitoring and tracking power of centralized and continuous identity broadcast, and it’ll all happen quickly under the Jedi-hand-gesture of “security”.
To accelerate things even further, the advertising industry will dump untold billions the moment they realize the staggering potential to hyperfocus their spend on those most likely to purchase.
It’s simply too logical, too obvious, and has too much potential to be stopped.
- Humans and objects will broadcast daemons around them, advertising their attributes and interaction capabilities
- The daemons will sit on a TCP/IP, HTTP, and REST Web Services stack
- Intention Brokers will interact with surrounding daemons on the behalf of their human/object owners
- All interactions, whether automated or manual, will leverage a federated identity infrastructure that determines who can do what to various objects
- This interaction will enable ubiquitous and continuous customization of environments, perfectly targeted advertising, and hyper-magnification of socio-economic capabilities between individuals and groups
Universal Daemonization will change how humans interact with the world, and how the world interacts with itself. It’s impossible to foresee all the various forms it will take.
The only thing we know for sure is that it’s coming, and that we should get ready.
People are colossally underestimating the Internet of Things.
The IoT is not about alarm clocks that start your coffee maker, or about making more regular “things” accessible over the internet. The IoT will fundamentally alter how humans interact with the physical world, and will ultimately register as more significant than the internet itself.
Here are the major pieces that will make up the real IoT:
- Universal Daemonization will give every object (humans, businesses, cars, furniture) a bi-directional digital interface that serves as a representation of itself. These interfaces will broadcast information about the object, as well as provide interaction points for others. Human objects will display their favorite books, where they grew up, etc. for read-only information, and they’ll have /connect interfaces for people to link up professionally, to request a date, or to digitally flirt if within 50 meters, etc. Businesses will have APIs for displaying menus, allergy information if it’s a restaurant, an /entertainment interface so TV channels will change when people walk into a sports bar, and a /climate interface for people to request a temperature increase if they’re cold.
- Personal Assistants will consume these services for you, letting you know what you should know about your surroundings based on your preferences, which you’ve either given it explicitly or it’s learned over time. They’ll also interact with the environment on your behalf, based on your preferences, to make the world more to your liking. So they’ll order a water when you sit down to eat at a restaurant, send a coffee request (and payment) to the barista as you walk into your favorite coffee shop, and raise the temperature in any build you walk into because it knows you have a cold.
- Digital Reputation will be conveyed for humans through their daemons and federated ID. Through a particular identity tied to our real self, our professional skills, our job history, our buying power, our credit worthiness—will all be continuously updated and validated through a tech layer that works off of karma exchanges with other entities. If you think someone is trustworthy, or you like the work they do, or you found them hilarious during a dinner party, you’ll be able to say this about them in a way that sticks to them (and their daemon) for others to see. It’ll be possible to hide these comments, but most will be discouraged from doing so by social pressure.
- Augmented Reality will enable us to see the world with various filters for quality. So if I want to see only funny people around me, I can tell Siri, “Show me the funniest people in the room.”, and 4 people will light up with a green outline. You can do the same for the richest, or the tallest, or the people who grew up in the same city as you. You’ll be able to do the same when looking for the best restaurants or coffee shops as you walk down an unfamiliar street.
What these advances mean for humanity
- The combination of daemons and digital reputation will completely disrupt how work is done on Earth. Instead of antiquated (and ineffective) interviews, a technical layer powered by matching algorithms will take information about jobs that need to be done and match them to people who are available (and qualified) to do them. Transportation, household jobs, creative work, mainstream corporate requisitions—these will all be staffed based on the best possible fit, and it’ll happen in seconds rather than weeks or months.
- Because so many of the objects we interact with will be daemonized, we’ll be receiving an extraordinary amount of information from the world around us. This information will be used to create full-scope life dashboards that will illuminate and guide our behavior with regard to finances, health, social interaction, education, etc. Personal dashboards will be displayed on our living room walls, showing how the family did that day in food intake, calories burned, steps walked, and Karma gained and lost. Heads of household will see how college saving is going, how the family’s investments are doing, and what if any tweaks should be made to existing strategies. The same will exist for businesses, with unified dashboards showing employee morale, cyber risk, public sentiment, logistical efficiency, employee health, and any anomalies worth noting, along with a list of recommendations for improvement.
- Your daemon will be a representation of you, so you’ll be able to pay for things, open doors, get into clubs, gain access to your car, enter your hotel room, open your home’s front door, send people money or Karma for doing things you approve of, etc., all with a word or motion or gesture. You’ll also be able to praise or dislike people or things with these gestures, which will stick to their daemons and profiles as part of their identity. The key is that your presence and gestures will represent something to the world, as your singular identity.
- The world will adjust to you as you move through it. Car seats will adjust even though you’ve never been in it before. Lights will dim or change color based on your mood, and entertainment will adjust based on your preferences when you walk into buildings. Your personal assistant will be making these things happen on your behalf, using your identity to get access to perks and specials and privileged locations based on your reputation and Karma. The world (public infrastructure, your home and office, and businesses you frequent will be constantly customizing themselves based on your preferences.
Universal Daemonization will open entirely new categories of possibility, but here are a few examples. Consider that these aren’t necessarily universally desired, and some of them are downright frightening in their power and scope.
But what I’m describing here is what I believe is going to happen, not what I necessarily believe should happen. These are two separate things, but if they are useful to the right subset of people or markets they will happen regardless of who approves.
- Your environment will customize to you as you move through it, not only in your own home and office, but in public and in other businesses as well. The environment will read your daemon and adjust accordingly
- Your Personal Assistant will be continually sending and receiving information on your behalf as you come in proximity with other people and objects. Submitting job qualifications, relationship interest, requests for more information about a product or service, requests to gain access to restricted areas or events, etc.
- For those who empower their Personal Assistant to help manage their lives, there will be little need to manually check the location of someone you’ll be meeting soon, or when their flight lands, or what they might want to eat. This will all be as easily available
- Logistics becomes infinitely easier when every object has its own daemon that can report contents, location, speed, previous checkpoints, and any other metadata regarding the route, payload, etc. Many of these things are available now in various forms, but Universal Daemonization allows this data to live within the object itself and be updated far more dynamically
- Work become a matter of reputation and discrete tasks that will be distributed to the most qualified person within seconds. The decision will be based on physical location, work reputation, skills, qualifications, credentials, reviews, and up-to-the-second availability. Imagine the Uber driver’s app, but everyone has one at all times, and for every skill that they’re good at. Siri will simply ask you quietly in your ear, “Web security assessment job from a 94 rated individual–do we accept?” And the same for dog grooming, massages, house-sitting, and personal finance–but with billions of people participating and competing for the same work.
- Human sensory experience of the world will be augmented by the various layers of information provided by the objects around us, including other humans. We’ll see, hear, and maybe even smell when people or things are of benefit, or are dangerous, to us in various contexts. Examples could include highlighting notable people in groups, warning you against rough neighborhoods or individuals while you drive or walk, etc.
- When you look at a crowd you might see clusters of color (blue, red, green, purple) where various cliques are assembling, or see where the most dangerous people are, or the most wealthy, or the most beautiful
- The same may be true when you’re looking at neighborhoods, maps, buildings, or even individuals. You will see layers of information rendered as altered color, colored halos, or numbers on or around their person. Examples could include their Universal Reputation score, a subset of it such as credit worthiness, reliability, agreeableness, net worth, percentage of similar interests to yours. Your display of others will be completely configurable, showing people what matters to them about humans and objects that they’re perceiving
- Bouncers will be able to allow people into exclusive clubs based on a visual cue on their person, such as a green halo, or a green checkbox floating on their chest. The validation that produces the visual effect will come from them having received a valid invite, or from them having a popularity or beauty score that reaches a given standard.
- You will be able to request photos and video from multiple angles in many locations. You’ll tell your personal assistant, “Get a picture of me from above, or from the side, or from across the river, or from the other side of the street. This will issue requests to surrounding cameras along with your ID that authorizes you (or not) to access that camera, at which point the photo will be taken or not.
- Exclusive offers can be sent to people of all manner of characteristics. People with the highest incomes, people who own houses over a certain size, people with certain bloodlines, people who have gone to certain schools, people who are over a certain height and also drive BMWs, etc. This concept of exclusivity will be one that is highlighted significantly by the combination of these technologies, and there will be many cases where they are used to increase the distance between those that have and those that do not.
This interactive capability between objects will not come without downsides. Universal Daemonization, and the services that emerge from it, will introduce an extraordinary new surface area for attack.
Here are some example abuse cases:
- Daemon spoofing that allows one user to become another. Essentially the identity theft of the future
- Users overshare information in their daemons that is sucked up by Passive Parsing Modules (PPMs) that sit in crowded locations
- Input validation failures, due to insufficient Security Broker protection, leads to dangerous/harmful manipulation of the object
- Insufficient authentication or authorization on daemons allows for harvesting of personal information not meant for public
- Attacks against the validation services allow people to post false validators in their own daemons, granting them illegitimate access and perks, e.g., showing themselves as making more than 100K/year, having a credit score above 800, or having VIP access to a given club, etc.
- Replay attacks against resources, whereby an attack captures a successful, authorized interaction with a service and then replays that request to gain the same access
This is just a small subset of the security issues that we’ll need to address. But don’t convince yourself that these are so serious that it’ll stop Universal Daemonization from happening. They’re not. The functionality offered by this model will be so compelling that it will be rolled out regardless. It’ll be our responsibility to secure it as it happens, just like many times in the past.
IoT isn’t about smart gadgets or connecting more things to the Internet. It’s about continuous two-way interaction between everything in the world. It’s about changing how humans and other objects interact with the world around them.
It will turn people and objects from static to dynamic, and make them machine-readable and fully interactive entities. Algorithms will continuously optimize the interactions between everyone and everything in the world, and make it so that the environment around humans constantly adjusts based on presence, preference, and desire.
The Internet of Things is not an Internet use case. Quite the opposite, the IoT represents the ultimate platform for human interaction with the physical world, and it will turn the Internet into a mere medium.
Let’s get ready.
- There will be strong controls in place for dealing with malicious and accidental reputation tampering, as one’s reputation will become an increasingly important part of peoples’ lives and livelihood.
- A “daemon” is a service that listens for requests and responds to them in various ways when they arrive.
- As someone working in information security, the potential for abuse here is just staggering. Not just by attackers, but by governments. But we cannot afford to ignore what’s coming because we don’t like what it’ll bring.
- Think about dating, seamless payments, customized experiences, humans adjusting their behavior based on being communicated your preferences by their PAs, etc. It touches everything.
- This is just a summary, and doesn’t cover things like the implications to the concept of “private conversation” when everything is listening and recording.
- If the tone of this piece seems overconfident or presumptuous, I both agree and apologize. I am attempting something new by presenting some of my ideas in a way that will encourage one to read them, and that unfortunately seems to require posturing like an ass. Apologies.
- For a glimpse of the types of analytics and dashboarding that will soon be commonplace, have a look at http://dashboard.sidlee.com.
- Here is a more thorough discussion of the topic here on the site.
- Here is the deck I used to present UD at HouSecCon in 2014.
- I lead a project called The OWASP Internet of Things Top 10 that highlights the primary areas of security concern for IoT.
- The icons in the images are samples from Paul Sahner at iconizeme.com. | <urn:uuid:80d372dd-154b-43d5-8790-f23d9b132c7c> | CC-MAIN-2019-47 | https://danielmiessler.com/blog/universal-daemonization-future-internet-iot/ | s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496668910.63/warc/CC-MAIN-20191117091944-20191117115944-00018.warc.gz | en | 0.943118 | 4,583 | 2.84375 | 3 |
Mathematical induction is a mathematical proof technique. It is essentially used to prove that a property P(n) holds for every natural number n, i.e. for n = 0, 1, 2, 3, and so on. Metaphors can be informally used to understand the concept of mathematical induction, such as the metaphor of falling dominoes or climbing a ladder:
Mathematical induction proves that we can climb as high as we like on a ladder, by proving that we can climb onto the bottom rung (the basis) and that from each rung we can climb up to the next one (the step).— Concrete Mathematics, page 3 margins.
The method of induction requires two cases to be proved. The first case, called the base case (or, sometimes, the basis), proves that the property holds for the number 0. The second case, called the induction step, proves that if the property holds for one natural number n, then it holds for the next natural number n + 1. These two steps establish the property for every natural number n. The base case does not necessarily begin with n = 0. In fact, it often begins with the number one, and it can begin with any natural number, establishing the truth of the property for all natural numbers greater than or equal to the starting number.
The method can be extended to prove statements about more general well-founded structures, such as trees; this generalization, known as structural induction, is used in mathematical logic and computer science. Mathematical induction in this extended sense is closely related to recursion. Mathematical induction, in some form, is the foundation of all correctness proofs for computer programs.
Although its name may suggest otherwise, mathematical induction should not be misconstrued as a form of inductive reasoning as used in philosophy (also see Problem of induction). Mathematical induction is an inference rule used in formal proofs. Proofs by mathematical induction are, in fact, examples of deductive reasoning.
In 370 BC, Plato's Parmenides may have contained an early example of an implicit inductive proof. The earliest implicit traces of mathematical induction may be found in Euclid's proof that the number of primes is infinite and in Bhaskara's "cyclic method". An opposite iterated technique, counting down rather than up, is found in the Sorites paradox, where it was argued that if 1,000,000 grains of sand formed a heap, and removing one grain from a heap left it a heap, then a single grain of sand (or even no grains) forms a heap.
An implicit proof by mathematical induction for arithmetic sequences was introduced in the al-Fakhri written by al-Karaji around 1000 AD, who used it to prove the binomial theorem and properties of Pascal's triangle.
None of these ancient mathematicians, however, explicitly stated the induction hypothesis. Another similar case (contrary to what Vacca has written, as Freudenthal carefully showed) was that of Francesco Maurolico in his Arithmeticorum libri duo (1575), who used the technique to prove that the sum of the first n odd integers is n2. The first explicit formulation of the principle of induction was given by Pascal in his Traité du triangle arithmétique (1665). Another Frenchman, Fermat, made ample use of a related principle: indirect proof by infinite descent. The induction hypothesis was also employed by the Swiss Jakob Bernoulli, and from then on it became more or less well known. The modern rigorous and systematic treatment of the principle came only in the 19th century, with George Boole, Augustus de Morgan, Charles Sanders Peirce, Giuseppe Peano, and Richard Dedekind.
The simplest and most common form of mathematical induction infers that a statement involving a natural number n (that is, an integer n ≥ 0 or n ≥ 1) holds for all values of n. The proof consists of two steps:
- The initial or base case: prove that the statement holds for 0, or 1.
- The induction step, inductive step, or step case: prove that for every n, if the statement holds for n, then it holds for n + 1. In other words, assume that the statement holds for some arbitrary natural number n, and prove that the statement holds for n + 1.
The hypothesis in the inductive step, that the statement holds for a particular n, is called the induction hypothesis or inductive hypothesis. To prove the inductive step, one assumes the induction hypothesis for n and then uses this assumption to prove that the statement holds for n + 1.
Authors who prefer to define natural numbers to begin at 0 use that value in the base case. Authors who prefer to define natural numbers to begin at 1 use that value in the base case.
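As noted in the introduction, induction is closely related to recursion. As an informal illustration (not part of the classical treatment), the following Python sketch mirrors the two proof obligations in a recursive definition of the sum treated in the first example below; the function name and the check are ours, chosen purely for illustration.

```python
def sum_up_to(n: int) -> int:
    """Sum of the natural numbers 0, 1, ..., n, defined recursively."""
    if n == 0:
        # Base case of the recursion: plays the role of proving P(0).
        return 0
    # Recursive case: the call sum_up_to(n - 1) plays the role of
    # the induction hypothesis in the inductive step.
    return sum_up_to(n - 1) + n

print(sum_up_to(10))  # 55
```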
Sum of consecutive natural numbers
Mathematical induction can be used to prove that the following statement, P(n), holds for all natural numbers n:

0 + 1 + 2 + ⋯ + n = n(n + 1)/2.

P(n) gives a formula for the sum of the natural numbers less than or equal to n. The proof that P(n) is true for each natural number n proceeds as follows:
Proposition. For any n ∈ ℕ, 0 + 1 + 2 + ⋯ + n = n(n + 1)/2.
Proof. Let P(n) be the statement 0 + 1 + 2 + ⋯ + n = n(n + 1)/2. We give a proof by induction on n.
Base case: Show that the statement holds for n = 0 (taking 0 as a natural number).
P(0) is easily seen to be true:

0 = 0(0 + 1)/2.
Inductive step: Show that for any k ≥ 0, if P(k) holds, then P(k + 1) also holds. This can be done as follows.
Assume the induction hypothesis that P(k) is true (for some arbitrary value of k). It must then be shown that P(k + 1) is true, that is:

0 + 1 + 2 + ⋯ + k + (k + 1) = (k + 1)((k + 1) + 1)/2.
Using the induction hypothesis, the left-hand side can be equated to:

(0 + 1 + 2 + ⋯ + k) + (k + 1) = k(k + 1)/2 + (k + 1).
Algebraically, we have that:

k(k + 1)/2 + (k + 1) = (k(k + 1) + 2(k + 1))/2 = (k + 1)(k + 2)/2 = (k + 1)((k + 1) + 1)/2,
which shows that P(k + 1) indeed holds.
Since both the base case and the inductive step have been performed, by mathematical induction the statement P(n) holds for all natural numbers n. ∎
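The proved identity can also be spot-checked mechanically for small values of n; such a finite check illustrates the statement but, of course, does not replace the induction proof. The helper name below is ours and purely illustrative.

```python
def p(n: int) -> bool:
    """P(n): 0 + 1 + 2 + ... + n == n(n + 1)/2."""
    return sum(range(n + 1)) == n * (n + 1) // 2

print(all(p(n) for n in range(1000)))  # True
```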
A trigonometric inequality
Induction is often used to prove inequalities. As an example, we prove that |sin nx| ≤ n|sin x| for any real number x and natural number n.
At first glance, it may appear that a more general version, |sin nx| ≤ n|sin x| for any real numbers n and x, could be proven without induction, solely using trigonometry formulas holding for all real values of n and x. However, n ∈ ℕ is essential: since |sin nx| ≥ 0 while n|sin x| < 0 for negative values of n (whenever sin x ≠ 0), the statement is clearly false in general for negative n. Moreover, taking n = 1/2 and x = π shows that it is also false in general for non-integral values of n. This suggests we confine ourselves to proving the statement specifically for natural values of n, and we turn to induction in order to pass from one value of n to the next in a relatively straightforward manner, starting from a trivially verifiable base case.
Proposition. For any x ∈ ℝ and n ∈ ℕ, |sin nx| ≤ n|sin x|.
Proof. Let x be a fixed, arbitrary real number, and let P(n) be the statement |sin nx| ≤ n|sin x|. We induct on n.
Base case: The calculation |sin 0x| = 0 ≤ 0 = 0|sin x| verifies the truth of the base case P(0).
Inductive step: We show that P(k) implies P(k + 1) for any natural number k. We use the angle addition formula sin((k + 1)x) = sin(kx)cos x + sin x cos(kx) and the triangle inequality |a + b| ≤ |a| + |b|, both of which hold for all real numbers. For every k ∈ ℕ, assuming the truth of the induction hypothesis P(k) gives us the following chain of equalities and inequalities:

|sin((k + 1)x)| = |sin(kx)cos x + sin x cos(kx)|     (angle addition)
               ≤ |sin(kx)cos x| + |sin x cos(kx)|    (triangle inequality)
               = |sin kx||cos x| + |sin x||cos kx|
               ≤ |sin kx| + |sin x|                  (since |cos t| ≤ 1 for all real t)
               ≤ k|sin x| + |sin x|                  (induction hypothesis)
               = (k + 1)|sin x|.
The inequality implied by the first and last lines shows that P(k) implies P(k + 1) for every k ∈ ℕ, which completes the inductive step. Thus, the proposition holds by induction. ∎
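A numerical sampling of the inequality, again only as an illustration (a floating-point check over a finite sample proves nothing, and the helper name and tolerance are ours):

```python
import math

def holds(n: int, x: float, tol: float = 1e-12) -> bool:
    """Check |sin(n x)| <= n |sin(x)| up to a small floating-point tolerance."""
    return abs(math.sin(n * x)) <= n * abs(math.sin(x)) + tol

samples = [0.1, 0.5, 1.0, 2.0, math.pi / 3, 5.7]
print(all(holds(n, x) for n in range(50) for x in samples))  # True
```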
In practice, proofs by induction are often structured differently, depending on the exact nature of the property to be proven. All variants of induction are special cases of transfinite induction; see below.
Induction basis other than 0 or 1
If one wishes to prove a statement not for all natural numbers, but only for all numbers n greater than or equal to a certain number b, then the proof by induction consists of:
- Showing that the statement holds when n = b.
- Showing that if the statement holds for an arbitrary number n ≥ b, then the same statement also holds for n + 1.
This can be used, for example, to show that 2ⁿ ≥ n + 5 for n ≥ 3.
In this way, one can prove that some statement P(n) holds for all n ≥ 1, or even for all n ≥ −5. This form of mathematical induction is actually a special case of the previous form, because if the statement to be proved is P(n) then proving it with these two rules is equivalent with proving P(n + b) for all natural numbers n with an induction base case 0.
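For instance, the statement 2ⁿ ≥ n + 5 mentioned above can be checked over a finite range, and the check also shows why the base case must start at n = 3 (the code and names are illustrative only; induction is what extends the claim to all n ≥ 3):

```python
def statement(n: int) -> bool:
    """The property 2**n >= n + 5, claimed for all n >= 3."""
    return 2 ** n >= n + 5

print(all(statement(n) for n in range(3, 64)))  # True for the sampled range
print(statement(2))                             # False: 4 < 7, so the base case n = 3 matters
```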
Example: forming dollar amounts by coins
Assume an infinite supply of 4- and 5-dollar coins. Induction can be used to prove that any whole amount of dollars greater than or equal to 12 can be formed by a combination of such coins. Let S(k) denote the statement "the amount of k dollars can be formed by a combination of 4- and 5-dollar coins". The proof that S(k) is true for all k ≥ 12 can then be achieved by induction on k as follows:
Base case: Showing that S(k) holds for k = 12 is easy: take three 4-dollar coins.
Induction step: Given that S(k) holds for some value of k ≥ 12 (induction hypothesis), prove that S(k + 1) holds, too:
- Assume S(k) is true for some arbitrary k ≥ 12. If there is a solution for k dollars that includes at least one 4-dollar coin, replace it by a 5-dollar coin to make k + 1 dollars. Otherwise, if only 5-dollar coins are used, k must be a multiple of 5 and so at least 15; but then we can replace three 5-dollar coins by four 4-dollar coins to make k + 1 dollars. In each case, S(k + 1) is true.
Therefore, by the principle of induction, S(k) holds for all k ≥ 12, and the proof is complete.
In this example, although S(k) also holds for k ∈ {4, 5, 8, 9, 10}, the above proof cannot be modified to replace the minimum amount of 12 dollars to any lower value m. For m = 11, the base case is actually false; for m = 10, the second case in the induction step (replacing three 5- by four 4-dollar coins) will not work; let alone for even lower m.
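A brute-force search (illustrative code, not part of the original argument) confirms both the claim for k ≥ 12 and the exceptional smaller amounts discussed above:

```python
def formable(k: int) -> bool:
    """Can k dollars be formed from 4- and 5-dollar coins?"""
    return any((k - 5 * fives) % 4 == 0 for fives in range(k // 5 + 1))

print(all(formable(k) for k in range(12, 500)))      # True
print([k for k in range(1, 12) if not formable(k)])  # [1, 2, 3, 6, 7, 11]
```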
Induction on more than one counter
It is sometimes desirable to prove a statement involving two natural numbers, n and m, by iterating the induction process. That is, one proves a base case and an inductive step for n, and in each of those proves a base case and an inductive step for m. See, for example, the proof of commutativity accompanying addition of natural numbers. More complicated arguments involving three or more counters are also possible.
Infinite descent

The method of infinite descent is a variation of mathematical induction which was used by Pierre de Fermat. It is used to show that some statement Q(n) is false for all natural numbers n. Its traditional form consists of showing that if Q(n) is true for some natural number n, it also holds for some strictly smaller natural number m. Because there are no infinite decreasing sequences of natural numbers, this situation would be impossible, thereby showing (by contradiction) that Q(n) cannot be true for any n.
The validity of this method can be verified from the usual principle of mathematical induction. Using mathematical induction on the statement P(n) defined as "Q(m) is false for all natural numbers m less than or equal to n", it follows that P(n) holds for all n, which means that Q(n) is false for every natural number n.
Prefix induction

The most common form of proof by mathematical induction requires proving in the inductive step that

∀k (P(k) → P(k + 1)),
whereupon the induction principle "automates" n applications of this step in getting from P(0) to P(n). This could be called "predecessor induction" because each step proves something about a number from something about that number's predecessor.
A variant of interest in computational complexity is "prefix induction", in which one proves the following statement in the inductive step:

∀k (P(k) → P(2k) ∧ P(2k + 1)).
The induction principle then "automates" log n applications of this inference in getting from P(0) to P(n). In fact, it is called "prefix induction" because each step proves something about a number from something about the "prefix" of that number — as formed by truncating the low bit of its binary representation. It can also be viewed as an application of traditional induction on the length of that binary representation.
If traditional predecessor induction is interpreted computationally as an n-step loop, then prefix induction would correspond to a log-n-step loop. Because of that, proofs using prefix induction are "more feasibly constructive" than proofs using predecessor induction.
Predecessor induction can trivially simulate prefix induction on the same statement. Prefix induction can simulate predecessor induction, but only at the cost of making the statement more syntactically complex (adding a bounded universal quantifier), so the interesting results relating prefix induction to polynomial-time computation depend on excluding unbounded quantifiers entirely, and limiting the alternation of bounded universal and existential quantifiers allowed in the statement.
One can take the idea a step further: one must prove

∀k (P(⌊√k⌋) → P(k)),
whereupon the induction principle "automates" log log n applications of this inference in getting from P(0) to P(n). This form of induction has been used, analogously, to study log-time parallel computation.
Complete (strong) induction
Another variant, called complete induction, course of values induction or strong induction (in contrast to which the basic form of induction is sometimes known as weak induction), makes the inductive step easier to prove by using a stronger hypothesis: one proves the statement P(m + 1) under the assumption that P(n) holds for all natural n less than m + 1; by contrast, the basic form only assumes P(m). The name "strong induction" does not mean that this method can prove more than "weak induction", but merely refers to the stronger hypothesis used in the inductive step.
In fact, it can be shown that the two methods are actually equivalent, as explained below. In this form of complete induction one still has to prove the base case, P(0), and it may even be necessary to prove extra base cases such as P(1) before the general argument applies, as in the example below of the Fibonacci number Fn.
Although the form just described requires one to prove the base case, this is unnecessary if one can prove P(m) (assuming P(n) for all lower n) for all m ≥ 0. This is a special case of transfinite induction as described below. In this form the base case is subsumed by the case m = 0, where P(0) is proved with no other P(n) assumed; this case may need to be handled separately, but sometimes the same argument applies for m = 0 and m > 0, making the proof simpler and more elegant. In this method, however, it is vital to ensure that the proof of P(m) does not implicitly assume that m > 0, e.g. by saying "choose an arbitrary n < m", or by assuming that a set of m elements has an element.
Complete induction is equivalent to ordinary mathematical induction as described above, in the sense that a proof by one method can be transformed into a proof by the other. Suppose there is a proof of P(n) by complete induction. Let Q(n) mean "P(m) holds for all m such that 0 ≤ m ≤ n". Then Q(n) holds for all n if and only if P(n) holds for all n, and our proof of P(n) is easily transformed into a proof of Q(n) by (ordinary) induction. If, on the other hand, P(n) had been proven by ordinary induction, the proof would already effectively be one by complete induction: P(0) is proved in the base case, using no assumptions, and P(n + 1) is proved in the inductive step, in which one may assume all earlier cases but need only use the case P(n).
Example: Fibonacci numbersEdit
Complete induction is most useful when several instances of the inductive hypothesis are required for each inductive step. For example, complete induction can be used to show that
where is the nth Fibonacci number, (the golden ratio) and are the roots of the polynomial . By using the fact that for each , the identity above can be verified by direct calculation for if one assumes that it already holds for both and . To complete the proof, the identity must be verified in the two base cases: and .
Example: prime factorizationEdit
Another proof by complete induction uses the hypothesis that the statement holds for all smaller more thoroughly. Consider the statement that "every natural number greater than 1 is a product of (one or more) prime numbers", which is the "existence" part of the fundamental theorem of arithmetic. For proving the inductive step, the induction hypothesis is that for a given the statement holds for all smaller . If is prime then it is certainly a product of primes, and if not, then by definition it is a product: , where neither of the factors is equal to 1; hence neither is equal to , and so both are greater than 1 and smaller than . The induction hypothesis now applies to and , so each one is a product of primes. Thus is a product of products of primes, and hence by extension a product of primes itself.
Example: dollar amounts revisitedEdit
We shall look to prove the same example as above, this time with strong induction. The statement remains the same:
However, there will be slight differences in the structure and the assumptions of the proof, starting with the extended base case:
Base case: Show that holds for .
The base case holds.
Induction hypothesis: Given some , assume holds for all with .
Inductive step: Prove that holds.
Choosing , and observing that shows that holds, by inductive hypothesis. That is, the sum can be formed by some combination of and dollar coins. Then, simply adding a dollar coin to that combination yields the sum . That is, holds. Q.E.D.
Sometimes, it is more convenient to deduct backwards, proving the statement for , given its validity for . However, proving the validity of the statement for no single number suffices to establish the base case; instead, one needs to prove the statement for an infinite subset of the natural numbers. For example, Augustin Louis Cauchy first used forward (regular) induction to prove the inequality of arithmetic and geometric means for all powers of 2, and then used backward induction to show it for all natural numbers.
Example of error in the inductive stepEdit
The inductive step must be proved for all values of n. To illustrate this, Joel E. Cohen proposed the following argument, which purports to prove by mathematical induction that all horses are of the same color:
- Base case: In a set of only one horse, there is only one color.
- Inductive step: Assume as induction hypothesis that within any set of horses, there is only one color. Now look at any set of horses. Number them: . Consider the sets and . Each is a set of only horses, therefore within each there is only one color. But the two sets overlap, so there must be only one color among all horses.
The base case is trivial (as any horse is the same color as itself), and the inductive step is correct in all cases . However, the logic of the inductive step is incorrect for , because the statement that "the two sets overlap" is false (there are only horses prior to either removal, and after removal the sets of one horse each do not overlap).
where P(.) is a variable for predicates involving one natural number and k and n are variables for natural numbers.
In words, the base case P(0) and the inductive step (namely, that the induction hypothesis P(k) implies P(k + 1)) together imply that P(n) for any natural number n. The axiom of induction asserts the validity of inferring that P(n) holds for any natural number n from the base case and the inductive step.
The first quantifier in the axiom ranges over predicates rather than over individual numbers. This is a second-order quantifier, which means that this axiom is stated in second-order logic. Axiomatizing arithmetic induction in first-order logic requires an axiom schema containing a separate axiom for each possible predicate. The article Peano axioms contains further discussion of this issue.
The axiom of structural induction for the natural numbers was first formulated by Peano, who used it to specify the natural numbers together with the following four other axioms:
- 0 is a natural number.
- The successor function s of every natural number yields a natural number (s(x)=x+1).
- The successor function is injective.
- 0 is not in the range of s.
may be read as a set representing a proposition, and containing natural numbers, for which the proposition holds. This is not an axiom, but a theorem, given that natural numbers are defined in the language of ZFC set theory by axioms, analogous to Peano's.
The principle of complete induction is not only valid for statements about natural numbers, but for statements about elements of any well-founded set, that is, a set with an irreflexive relation < that contains no infinite descending chains. Any set of cardinal numbers is well-founded, which includes the set of natural numbers.
Applied to a well-founded set, it can be formulated as a single step:
- Show that if some statement holds for all m < n, then the same statement also holds for n.
This form of induction, when applied to a set of ordinals (which form a well-ordered and hence well-founded class), is called transfinite induction. It is an important proof technique in set theory, topology and other fields.
Proofs by transfinite induction typically distinguish three cases:
- when n is a minimal element, i.e. there is no element smaller than n;
- when n has a direct predecessor, i.e. the set of elements which are smaller than n has a largest element;
- when n has no direct predecessor, i.e. n is a so-called limit ordinal.
Strictly speaking, it is not necessary in transfinite induction to prove a base case, because it is a vacuous special case of the proposition that if P is true of all n < m, then P is true of m. It is vacuously true precisely because there are no values of n < m that could serve as counterexamples. So the special cases are special cases of the general case.
Relationship to the well-ordering principleEdit
The principle of mathematical induction is usually stated as an axiom of the natural numbers; see Peano axioms. It is strictly stronger than the well-ordering principle in the context of the other Peano axioms. Indeed, suppose the following:
- Every natural number is either 0, or n + 1 for some natural number n.
- For any natural number n, n + 1 is greater than n.
It can then be proved that induction, given the above listed axioms, implies the well-ordering principle.
Proof. Suppose there exists a non-empty set, S, of naturals that has no least element. Let P(n) be the assertion that n is not in S. Then P(0) is true, for if it were false then 0 is the least element of S. Furthermore, suppose P(1), P(2),..., P(n) are all true. Then if P(n+1) is false n+1 is in S, thus being a minimal element in S, a contradiction. Thus P(n+1) is true. Therefore, by the induction axiom, P(n) holds for all n, so S is empty, a contradiction.
However, the set of ordinals up to ω+ω is well-ordered and satisfies the other Peano axioms, but the induction axiom fails for this set. For example, let P(n) be the predicate "n is a [clarify]". Then P(0) is true, and P(n) implies P(n+1), but one cannot conclude P(ω), as this is false.
Peanos axioms with the induction principle uniquely model the natural numbers. Replacing the induction principle with the well-ordering principle allows for more exotic models that fulfill all the axioms.
It is mistakenly printed in several books and sources that the well-ordering principle is equivalent to the induction axiom. In the context of the other Peano axioms, this is not the case, but in the context of other axioms, they can be equivalent.
The common mistake in many erroneous proofs is to assume that n-1 is a unique and well-defined natural number, a property which is not implied by the other Peano axioms.
- Matt DeVos, Mathematical Induction, Simon Fraser University
- Gerardo con Diaz, Mathematical Induction, Harvard University
- "The Definitive Glossary of Higher Mathematical Jargon — Proof by Induction". Math Vault. 1 August 2019. Retrieved 23 October 2019.
- Anderson, Robert B. (1979). Proving Programs Correct. New York: John Wiley & Sons. p. 1. ISBN 978-0471033950.
- Suber, Peter. "Mathematical Induction". Earlham College. Retrieved 26 March 2011.
- Acerbi, Fabio (2000). "Plato: Parmenides 149a7-c3. A Proof by Complete Induction?". Archive for History of Exact Sciences. 55: 57–76. doi:10.1007/s004070000020.
- Chris K. Caldwell. "Euclid's Proof of the Infinitude of Primes (c. 300 BC)". utm.edu. Retrieved 28 February 2016.
- Cajori (1918), p. 197: 'The process of reasoning called "Mathematical Induction" has had several independent origins. It has been traced back to the Swiss Jakob (James) Bernoulli, the Frenchman B. Pascal and P. Fermat, and the Italian F. Maurolycus. [...] By reading a little between the lines one can find traces of mathematical induction still earlier, in the writings of the Hindus and the Greeks, as, for instance, in the "cyclic method" of Bhaskara, and in Euclid's proof that the number of primes is infinite.'
- Hyde, Dominic; Raffman, Diana (2018), Zalta, Edward N. (ed.), "Sorites Paradox", The Stanford Encyclopedia of Philosophy (Summer 2018 ed.), Metaphysics Research Lab, Stanford University, retrieved 23 October 2019
- Rashed, R. (1994), "Mathematical induction: al-Karajī and al-Samawʾal", The Development of Arabic Mathematics: Between Arithmetic and Algebra, Boston Studies in the Philosophy of Science, 156, Kluwer Academic Publishers, pp. 62–84, ISBN 9780792325659
- Mathematical Knowledge and the Interplay of Practices "The earliest implicit proof by mathematical induction was given around 1000 in a work by the Persian mathematician Al-Karaji"
- Rashed, R. (18 April 2013). The Development of Arabic Mathematics: Between Arithmetic and Algebra. Springer Science & Business Media. p. 62. ISBN 9789401732741.
- "It is sometimes required to prove a theorem which shall be true whenever a certain quantity n which it involves shall be an integer or whole number and the method of proof is usually of the following kind. 1st. The theorem is proved to be true when n = 1. 2ndly. It is proved that if the theorem is true when n is a given whole number, it will be true if n is the next greater integer. Hence the theorem is true universally. . .. This species of argument may be termed a continued sorites" (Boole circa 1849 Elementary Treatise on Logic not mathematical pages 40–41 reprinted in Grattan-Guinness, Ivor and Bornet, Gérard (1997), George Boole: Selected Manuscripts on Logic and its Philosophy, Birkhäuser Verlag, Berlin, ISBN 3-7643-5456-9)
- Peirce, C. S. (1881). "On the Logic of Number". American Journal of Mathematics. 4 (1–4). pp. 85–95. doi:10.2307/2369151. JSTOR 2369151. MR 1507856. Reprinted (CP 3.252-88), (W 4:299-309).
- Shields (1997)
- Ted Sundstrom, Mathematical Reasoning, p. 190, Pearson, 2006, ISBN 978-0131877184
- Buss, Samuel (1986). Bounded Arithmetic. Naples: Bibliopolis.
- "Forward-Backward Induction | Brilliant Math & Science Wiki". brilliant.org. Retrieved 23 October 2019.
- Cauchy, Augustin-Louis (1821). Cours d'analyse de l'École Royale Polytechnique, première partie, Analyse algébrique, Archived 14 October 2017 at the Wayback Machine Paris. The proof of the inequality of arithmetic and geometric means can be found on pages 457ff.
- Cohen, Joel E. (1961), "On the nature of mathematical proof", Opus. Reprinted in A Random Walk in Science (R. L. Weber, ed.), Crane, Russak & Co., 1973.
- Öhman, Lars–Daniel (6 May 2019). "Are Induction and Well-Ordering Equivalent?". The Mathematical Intelligencer. 41 (3): 33–40. doi:10.1007/s00283-019-09898-4.
- Franklin, J.; A. Daoud (2011). Proof in Mathematics: An Introduction. Sydney: Kew Books. ISBN 978-0-646-54509-7. (Ch. 8.)
- Hazewinkel, Michiel, ed. (2001) , "Mathematical induction", Encyclopedia of Mathematics, Springer Science+Business Media B.V. / Kluwer Academic Publishers, ISBN 978-1-55608-010-4
- Hermes, Hans (1973). Introduction to Mathematical Logic. Hochschultext. London: Springer. ISBN 978-3540058199. ISSN 1431-4657.
- Knuth, Donald E. (1997). The Art of Computer Programming, Volume 1: Fundamental Algorithms (3rd ed.). Addison-Wesley. ISBN 978-0-201-89683-1. (Section 1.2.1: Mathematical Induction, pp. 11–21.)
- Kolmogorov, Andrey N.; Sergei V. Fomin (1975). Introductory Real Analysis. Silverman, R. A. (trans., ed.). New York: Dover. ISBN 978-0-486-61226-3. (Section 3.8: Transfinite induction, pp. 28–29.)
- Acerbi, F. (2000). "Plato: Parmenides 149a7-c3. A Proof by Complete Induction?". Archive for History of Exact Sciences. 55: 57–76. doi:10.1007/s004070000020.
- Bussey, W. H. (1917). "The Origin of Mathematical Induction". The American Mathematical Monthly. 24 (5): 199–207. doi:10.2307/2974308. JSTOR 2974308.
- Cajori, Florian (1918). "Origin of the Name "Mathematical Induction"". The American Mathematical Monthly. 25 (5): 197–201. doi:10.2307/2972638. JSTOR 2972638.
- Fowler D. (1994). "Could the Greeks Have Used Mathematical Induction? Did They Use It?". Physis. XXXI: 253–265.
- Freudenthal, Hans (1953). "Zur Geschichte der vollständigen Induction". Archives Internationales d'Histoire des Sciences. 6: 17–37.
- Katz, Victor J. (1998). History of Mathematics: An Introduction. Addison-Wesley. ISBN 0-321-01618-1.
- Peirce, C. S. (1881). "On the Logic of Number". American Journal of Mathematics. 4 (1–4). pp. 85–95. doi:10.2307/2369151. JSTOR 2369151. MR 1507856. Reprinted (CP 3.252-88), (W 4:299-309).
- Rabinovitch, Nachum L. (1970). "Rabbi Levi Ben Gershon and the origins of mathematical induction". Archive for History of Exact Sciences. 6 (3): 237–248. doi:10.1007/BF00327237.
- Rashed, Roshdi (1972). "L'induction mathématique: al-Karajī, as-Samaw'al". Archive for History of Exact Sciences (in French). 9 (1): 1–21. doi:10.1007/BF00348537.
- Shields, Paul (1997). "Peirce's Axiomatization of Arithmetic". In Houser; et al. (eds.). Studies in the Logic of Charles S. Peirce.
- Unguru, S. (1991). "Greek Mathematics and Mathematical Induction". Physis. XXVIII: 273–289.
- Unguru, S. (1994). "Fowling after Induction". Physis. XXXI: 267–272.
- Vacca, G. (1909). "Maurolycus, the First Discoverer of the Principle of Mathematical Induction". Bulletin of the American Mathematical Society. 16 (2): 70–73. doi:10.1090/S0002-9904-1909-01860-9.
- Yadegari, Mohammad (1978). "The Use of Mathematical Induction by Abū Kāmil Shujā' Ibn Aslam (850-930)". Isis. 69 (2): 259–262. doi:10.1086/352009. JSTOR 230435. | <urn:uuid:5fd444eb-b73f-4558-be64-efa86fb575c9> | CC-MAIN-2019-47 | https://en.m.wikipedia.org/wiki/Mathematical_induction | s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496670156.86/warc/CC-MAIN-20191119144618-20191119172618-00257.warc.gz | en | 0.879909 | 7,397 | 4.4375 | 4 |
Structural Biochemistry/Control of Gene Expression in Prokaryotes
- 1 DNA-Binding Proteins Distinguish Specific Sequences of DNA
- 2 In Prokaryotes, DNA-Binding Proteins Bind Explicitly to Regulatory Sites in Operons
- 3 Regulatory Circuits Can Result in Switching Between Patterns of Gene Expression
- 4 Gene Expression Can Be Controlled at Posttranscriptional Levels
- 5 References
DNA-Binding Proteins Distinguish Specific Sequences of DNA
The method prokaryotes use most often when responding to environmental changes is altering their gene expression. Expression is when a gene is transcribed into RNA and then translated into proteins. The two types of expression are constitutive—where genes are constantly being expressed—and regulated—where specific conditions need to be met inside the cell for a gene to be expressed. This sub-section focuses on how prokaryotes go about regulating the expression of their genes.
Transcription, when DNA is converted into RNA, is the first place for controlling gene activity. Proteins interact with DNA sequences to either promote or prevent the transcription of a gene.
Keep in mind that DNA sequences are not discernible from one another in terms of having features that a regulatory system would be able to register. Therefore, when regulating gene expression, prokaryotes rely on other sequences within their genome, called regulatory sites. These regulatory sites are most often also DNA-binding protein binding sites and are close to the DNA destined for transcription in prokaryotes.
An example of one of these regulatory sites is in E. coli: when sugar lactose is introduced into the environment of the bacterium a gene for encoding the production of an enzyme, β-galactosidase, begins to be expressed. This enzyme’s function is to process lactose so that the cell can extract energy and carbon from it.
The sequence of nucleotides of this regulatory site (pictured) displays an almost completely inverted repeat. This shows that the DNA has a nearly twofold axis of symmetry, which in most regulatory sites usually correlates to symmetry in the protein that binds to the site. When studying protein-DNA interactions, symmetry is generally present.
Furthering the investigation into the expression of the lac regulatory site and the protein-DNA interactions that take place there, scientists looked at the structure of the complex formed between the DNA-binding unit that recognizes the lac site and the site itself, which is part of a larger oligonucleotide. They found that the DNA-binding unit specific to the lac regulatory site comes from the protein lac repressor. The function of lac repressor, as the name suggests, is the repression of the lactose-processing gene’s expression. This DNA-binding unit’s twofold axis of symmetry matches the symmetry of the DNA, and the unit binds as a dimer. From each monomer of the protein an α helix is inserted into the DNA’s major groove. Here amino acid side chains interact (via very site-specific Hydrogen bonding) with the exposed base pairs in such a fashion that the lac repressor can only very tightly bind to this specific site in the genome of E. coli.
The helix-turn-helix motif is common to many prokaryotic DNA-binding proteins
After discerning the structures of many prokaryotic DNA-binding proteins, a structural pattern that was observed in many proteins was a pair of α helices separated by a tight turn. These are called helix-turn-helix motifs and are made of two distinct helices: the second α helix (called the recognition helix) lies in the major groove and interacts with base pairs while the first α helix is primarily in contact with the DNA backbone.
In Prokaryotes, DNA-Binding Proteins Bind Explicitly to Regulatory Sites in Operons
Looking back at the previous example with E. coli and β-galactosidase, we can garner the common principles of how DNA-binding proteins carry out regulation. When E. coli’s environment lacks glucose—their primary source of carbon and energy—the bacteria can switch to lactose as a carbon source via the enzyme β-galactosidase. β-galactosidase hydrolyzes lactose into glucose and galactose which are then metabolized by the cell. The permease facilitates the transport of lactose across the cell membrane of the bacterium and is essential. The transacetylase is on the other hand not required for lactose metabolism but plays a role in detoxifying compounds that the permease may also be transporting. Here we can say that the expression levels of a group of enzymes that together contribute to adapting to a change in a cell’s environment change together.
An E.coli bacterium growing in an environment with a carbon source such as glucose or glycerol will have around 10 or fewer molecules of the enzyme β-galactosidase in it. This number shoots up to the thousands, however, when the bacterium is grown on lactose. The presence of lactose alone will increase the amount of β-galactosidase by a large amount by promoting the synthesis of new enzymes rather than activating a precursor.
When figuring out the mechanism of gene regulation in this particular instance, it was observed that the two other proteins galactoside permease and thiogalactoside transacetylase were synthesized alongside β-galactosidase.
An operon is made up of regulatory components and genes that encode proteins
The fact that β-galactosidase, the transacetylase, and the permease were regulated in concert indicated that a common mechanism controlled the expression of the genes encoding all three. A model called the operon model was proposed by Francois Jacob and Jacques Monod to explain this parallel regulation and other observations. The three genetic parts of the operon model are (1) a set of structural genes, (2) an operator site (a regulatory DNA sequence), and (3) a regulator gene that encodes the regulator protein.
In order to inhibit the transcription of structural genes, the regulator gene encodes a repressor protein meant for binding to the operator site. In the case of the lac operon, the repressor protein is encoded by the i gene, which binds to the o operator site in order to prevent the transcription of the z, y, and a genes (the structural genes for β-galactosidase). There is also a promoter site, p, on the operon whose function is to direct the RNA polymerase to the proper transcription initiation site. All three structural genes, when transcribed, give a single mRNA that encodes β-galactosidase, the permease, and the transacetylase. Because this mRNA encodes more than one protein, it is called a polycistronic (or polygenic) transcript.
The lac repressor protein in the absence of lactose binds to the operator and blocks transcription
The lac repressor (pictured bound to DNA) is a tetramer with amino- and carboxyl-terminal domains. The amino-terminal domain is the one that binds to the DNA while the carboxyl-terminal forms a separate structure. The two sub-units pictured consolidate to form the DNA-binding unit. When lactose is absent from the environment of the bacterium, the lac repressor binds to the operator DNA snugly and swiftly (4x10^6 times as powerfully to the operator as opposed to random sites on the genome). The binding of the repressor precludes RNA polymerase from transcribing the z, y, and a genes which are downstream from the promoter site and code for the three enzymes. The dissociation constant for the complex formed by the lac repressor and the operator is around 0.1 pM and the association rate constant is a whopping 10^10 M-1s-1. This suggests that the repressor diffuses along a DNA molecule to find the operator rather than via an encounter from an aqueous medium.
When it comes to the DNA-binding preference of the lac repressor, the level of specificity is so high that it can be called a nearly unique site within the genome of E. coli. When the dimers of the amino-terminal domain bind to the operator site, the dimers of the carboxyl-terminal site are able to attach to one of two sites within 500 bp of the primary operator site that approximate the operator’s sequence. Each monomer interacts with the bound DNA’s major groove via a helix-turn-helix unit.
Ligand binding can induce structural changes in regulatory proteins
Now let’s look at how the presence of lactose changes the behavior of the repressor as well as the expression of the operon. All operons have inducers—triggers that facilitate the expression of the genes within the operon—and the inducer of the lac operon is allolactose, a molecule of galactose and glucose with an α-1,6 linkage.
In the β-galactosidase reaction, allolactose is a side product and is produced at low levels when the levels of β-galactosidase are low in the bacterium. Additionally, though not a substrate of the enzyme, isopropylthiogalactoside (IPTG) is a powerful inducer of β-galactosidase expression.
In the lac operator, the way the inducer prompts gene expression is by inhibiting the lac repressor from binding to the operator. Its method of inhibition is by binding to the lac repressor itself thus immensely reducing the affinity of the repressor to bind to the operator DNA. The inducer binds to each monomer at the center of the large domain, causing conformational changes in the DNA-binding domain of the repressor. These changes drastically reduce the DNA-binding affinity of the repressor.
The operon is a common regulatory unit in prokaryotes
Numerous other gene regulation complexes within prokaryotes function analogously to the lac operon. An example of another network like this one is that which takes part in the synthesis of purine (and pyrimidine to a certain extent). These genes are repressed by the pur repressor, which is 31% identical to the lac repressor in sequence with a similar 3D structure. In this case, however, the pur repressor behaves opposite from the lac repressor: it blocks transcription by binding to a specific DNA site only when it is also bound to a small molecule called a corepressor (either guanine or hypoxanthine).
Transcription can be stimulated by proteins that contact RNA polymerase
While the previous examples of DNA-binding proteins all function by preventing the transcription of a DNA sequence until some condition in the environment is met, there are also examples of DNA-binding proteins that actually encourage transcription.
A good instance of this is the catabolite activator protein in E. coli. When the bacterium is grown in glucose it has very low amounts of catabolic enzymes whose function it is to metabolize other sugars. The genes that encode these enzymes are in fact inhibited by glucose, an effect known as catabolite repression. Glucose lowers the concentration of cAMP (cyclic AMP). When the concentration of cAMP is high, it stimulates the transcription of these catabolic enzymes made for breaking down other sugars. This is where the catabolite activator protein (CAP or CRP, cAMP receptor protein) comes into play. CAP, when bound to cAMP, will stimulate the transcription of arabinose and lactose-catabolizing genes. CAP, which binds only to a specific sequence of DNA, binds as a dimer to an inverted repeat at the position -61 relative to the start site for transcription, adjacent to where RNA polymerase binds (pictured).
This CAP-cAMP complex enhances transcription by about a factor of 50 by making the contact between RNA polymerase and CAP energetically favorable. There are multiple CAP binding sites within the E. coli genome, therefore increasing the concentration of cAMP in the bacterium’s environment will result in the formation of these CAP-cAMP complexes, thus resulting in the transcription of many genes coding for various catabolic enzymes.
Regulatory Circuits Can Result in Switching Between Patterns of Gene Expression
In investigating gene-regulatory networks and how they function, studies of bacterial viruses—especially bacteriophage λ—have been invaluvable. Bacteriophage λ is able to develop via either a lytic or lysogenic pathway. In the lytic pathway, transcription takes place for most of the genes in the viral genome which leads to the production of numerous virus particles (~100) and the eventual lysis of the bacterium. In the lysogenic pathway, the bacterial DNA incorporates the viral genome where most of the viral genes stay unexpressed; this allows for the viral genetic material to be carried in the replicate of the bacteria. There are two essential proteins plus a set of regulatory sequences within the viral genome that are the cause for the switch between the choice of pathways.
Lambda repressor regulates its own expression
λ repressor is one of these key regulatory proteins which promotes the transcription of the gene that encodes the repressor when levels of the repressor are low. When levels of the repressor are high, it blocks transcription of the gene. It is also a self-regulating protein. While the λ repressor binds to many sites in the λ phage genome, the one relevant here is the right operator, which includes 3 binding sites for the dimer of the λ repressor in addition to 2 promoters within an approximately 80 base pair region. The role the first promoter plays is driving the expression of the λ repressor gene, while the other promoter is responsible for driving the expression of a variety of other viral genes. The λ repressor binds to the first operating site with the most affinity; and when it is bound to this first operating site, the chances of a protein binding to the adjacent operating site increase 25 times. When the first and second operating sites have these complexes bound to them, the dimer of the λ repressor inhibits the transcription of the adjacent gene whose purpose is to encode the protein Cro (controller of repressor and others). The repressor dimer at the second operating site can interact with RNA polymerase so as to stimulate the transcription of the promoter which controls the transcription of the gene encoding the λ repressor. This is how the λ repressor facilitates its own production. λ repressor fusions can be used to study protein-protein interactions in E. coli. There are two different domains in λ repressor: the N-terminal (DNA binding activity) and the C-terminal domain (dimerization). In order to have an active repressor fusion, the C-terminal domain should be replaced with a Heterodimers domain and form a dimer or higher order oligomer. However, inactive repressor fusions cannot attach to the DNA sequences and affect the expression of phage or reporter.
A circuit based on lambda repressor and Cro form a genetic switch
We can see in the above picture how the λ repressor blocks production of Cro by binding to the first operating site with the most affinity. Cro meanwhile blocks the production of the λ repressor by binding to the third operating site with the most affinity. This entire circuit is the deciding factor as to whether the lytic or lysogenic pathway will be followed: if λ repressor is high and Cro is low, the lysogenic path will be chosen; if Cro is high and the λ repressor is low, the lytic path will be chosen.
Many prokaryotic cells release chemical signals that regulate gene expression in other cells
Some prokaryotes are also known to undergo a process where they release chemicals called autoinducers into their medium (quorum sensing). These autoinducers, which are most of the time acyl homoserine lactones, are taken up by the surrounding cells. When the levels of these autoinducers reach a certain point, receptor proteins bind to them and activate the expression of several genes, including those that promote the synthesis of more autoinducers. This is a way for prokaryotes to interact with one another chemically to change their gene-expression patterns depending on how many other surrounding cells there are in their medium. Communities of prokaryotes that carry these mechanisms of quorum sensing out are collectively called a biofilm.
Gene Expression Can Be Controlled at Posttranscriptional Levels
Though most of gene expression regulation happens at the initiation of transcription, other steps of transcription are also possible targets for regulation.
Exploring the genes of the tryptophan operon (abbreviated as trp operon) in order to study the regulation of tryptophan synthesis shows two types of mutants. One type of mutant involves structural gene mutations and the other a regulatory mutant. The mutants that involve structural gene mutations are auxotrophic for tryptophan and need tryptophan to growth. To convert the precursor molecule chorismate to tryptophan, the trpE, trpD, trpC, trpB, and trpA genes codes for a polycistronic message and the mRNA will be translated to the enzyme that carries out the conversion.
The second type of mutants is able to constitutively synthesize the enzymes necessary for the synthesis of tryptophan. The trpR gene codes for the tryptophan repressor. The gene mapped in another quadrant of the E. coll chromosome compared to the trp operon. The trpR gene cannot regulate the synthesis of tryptophan efficiently. Studies on the dimeric trp repressor protein show that it does not function alone. The repressor must bind the last product of that metabolic pathway in order to regulate the synthesis of tryptophan. Thus, tryptophan is a corepressor for its own biosynthesis. This process is called feedback repression at the transcriptional level.
When the concentration of tryptophan is high enough, then the repressor binds to tryptophan to make a repressor-tryptophan complex. This complex will attach to the operator region of the trp operon and prevents RNA polymerase to bind and initiate transcription of the structural genes. Also, when the concentration of tryptophan is low in the cell, due to lack of tryptophan-complex RNA polymerase is able to bind to the gene and transcribe the structural genes. Therefore, tryptophan will be biosynthesized.
Attenuation is a prokaryotic mechanism for regulating transcription through the modulation of nascent RNA secondary structure
While studying the tryptophan operon, Charles Yanofsky discovered another means of transcription regulation. The trp operon encodes 5 enzymes that convert chorismate into tryptophan, and upon examining the 5’ end of the trp mRNA he found there was a leader sequence consisting of 162 nucleotides that came before the initiation codon of the first enzyme. His next observation was that only the first 130 nucleotides were produced as a transcript when the levels of tryptophan were high, but when levels were low a 7000-nucleotide trp mRNA which included the entire leader sequence was produced. This mode of regulation is called attenuation, where transcription is cut off before any mRNA coding for the enzymes is produced.
Attenuation depends on the mRNA’s 5’ end features. The first part of the sequence codes for a leader peptide of 14 amino acids. The attenuator comes after the open reading frame for this peptide—it is an RNA region capable of forming a few alternate structures. Because transcription and translation are very closely coupled in bacteria, the translation of say the trp mRNA begins very soon after the synthesizing of the ribosome-binding site.
The structure of mRNA is altered by a ribosome, which is stalled by the absence of an animoacyl-tRNA necessary for the translation of the leader mRNA. This allows RNA polymerase to transcribe the operon past the attenuator site
1. Screening Peptide/Protein Libraries Fused to the λ Repressor DNA-Binding Domain in E. coli Cells. Leonardo Mariño-Ramírez, Lisa Campbell, and James C. Hu. Methods Mol Biol.Published in final edited form as: Methods Mol Biol. 2003; 205: 235–250
2. Arkady B. Khodursky, Brian J. Peter, Nicholas R. Cozzarelli, David Botstein. DNA Microarray Analysis of Gene Expression in Response to Physiological and Genetic Changes That Affect Tryptophan Metabolism in Escherichia coli. 2000 October 24; 97(22): 12170–12175. Published online 2000 October 10.
- Leonardo Mariño-Ramírez, Lisa Campbell, and James C. Hu. Screening Peptide/Protein Libraries Fused to the λ Repressor DNA-Binding Domain in E. coli Cells. 2003; 205: 235–250.
- Arkady B. Khodursky, Brian J. Peter, Nicholas R. Cozzarelli, David Botstein. DNA microarray analysis of gene expression in response to physiological and genetic changes that affect tryptophan metabolism in Escherichia coli. 2000 October 24; 97(22): 12170–12175. Published online 2000 October 10. | <urn:uuid:bd5ee034-a2c6-4b97-b3fc-a99a0bbde0f2> | CC-MAIN-2019-47 | https://en.wikibooks.org/wiki/Structural_Biochemistry/Control_of_Gene_Expression_in_Prokaryotes | s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496670448.67/warc/CC-MAIN-20191120033221-20191120061221-00458.warc.gz | en | 0.915674 | 4,531 | 3.953125 | 4 |
In computing, a printer is a peripheral device which makes a persistent representation of graphics or text on paper. While most output is human-readable, bar code printers are an example of an expanded use for printers. The first computer printer designed was a mechanically driven apparatus built by Charles Babbage for his difference engine in the 19th century, and the first electronic printer was the EP-101, invented by the Japanese company Epson and released in 1968. The first commercial printers used mechanisms from electric typewriters and Teletype machines; the demand for higher speed led to the development of new systems specifically for computer use. Among these, in the 1980s, were daisy wheel systems similar to typewriters, line printers that produced similar output but at much higher speed, and dot matrix systems that could mix text and graphics but produced relatively low-quality output. The plotter was used for those requiring high-quality line art such as blueprints. The introduction of the low-cost laser printer in 1984 with the first HP LaserJet, and the addition of PostScript in next year's Apple LaserWriter, set off a revolution in printing known as desktop publishing.
Laser printers using PostScript mixed text and graphics, like dot-matrix printers, but at quality levels previously available only from commercial typesetting systems. By 1990, most simple printing tasks like fliers and brochures were created on personal computers and then laser printed. The HP Deskjet of 1988 offered the same advantages as the laser printer in terms of flexibility, but produced somewhat lower quality output from much less expensive mechanisms; inkjet systems went on to displace dot matrix and daisy wheel printers from the market. By the 2000s, high-quality printers of this sort had fallen under the $100 price point and become commonplace. The rapid uptake of internet email through the 1990s and into the 2000s has displaced the need for printing as a means of moving documents, and a wide variety of reliable storage systems means that a "physical backup" is of little benefit today. The desire for printed output for "offline reading" while on mass transit or aircraft has been displaced by e-book readers and tablet computers.
Today, traditional printers are being used more for special purposes, like printing photographs or artwork, and are no longer a must-have peripheral. Starting around 2010, 3D printing became an area of intense interest, allowing the creation of physical objects with the same sort of effort as an early laser printer required to produce a brochure; these devices have not yet become commonplace. Personal printers are designed to support individual users and may be connected to only a single computer. These printers are designed for low-volume, short-turnaround print jobs, requiring minimal setup time to produce a hard copy of a given document. They are slow devices, however, ranging from 6 to around 25 pages per minute (ppm), and the cost per page is relatively high; this is offset by the on-demand convenience. Some printers can print documents directly from digital cameras and scanners. Networked or shared printers are "designed for high-volume, high-speed printing"; they are shared by many users on a network and can print at speeds of 45 to around 100 ppm.
The Xerox 9700 could achieve 120 ppm. A virtual printer is a piece of computer software whose user interface and API resemble those of a printer driver, but which is not connected to a physical printer. A virtual printer can be used to create a file containing an image of the data which would be printed, for archival purposes or as input to another program, for example to create a PDF or to transmit the output to another system or user. A barcode printer is a computer peripheral for printing barcode labels or tags that can be attached to, or printed directly on, physical objects. Barcode printers are used to label cartons before shipment, or to label retail items with UPCs or EANs. A 3D printer is a device for making a three-dimensional object from a 3D model or other electronic data source through additive processes in which successive layers of material are laid down under computer control. It is called a printer by analogy with an inkjet printer, which produces a two-dimensional document by a similar process of depositing a layer of ink on paper.
The choice of print technology has a great effect on the cost of the printer and its cost of operation, the speed and permanence of documents, and noise. Some printer technologies do not work with certain types of physical media, such as carbon paper or transparencies. A second aspect of printer technology that is often forgotten is resistance to alteration: liquid ink, such as from an inkjet head or fabric ribbon, becomes absorbed by the paper fibers, so documents printed with liquid ink are more difficult to alter than documents printed with toner or solid inks, which do not penetrate below the paper surface. Cheques can be printed with liquid ink or on special cheque paper with toner anchorage so that alterations may be detected; the machine-readable lower portion of a cheque must be printed using MICR ink. Banks and other clearing houses employ automation equipment that relies on the magnetic flux from these specially printed characters to function properly. The following printing technologies are found in modern printers. A laser printer produces high quality text and graphics.
As with digital photocopiers and multifunction printers, laser printers employ a xerographic printing process, but differ from analog photocopiers in that the image is produced by the direct scanning of a laser beam across the printer's photoreceptor.
An industrial robot is a robot system used for manufacturing. Industrial robots are automated, programmable and capable of movement on three or more axes. Typical applications of robots include welding, assembly, pick and place for printed circuit boards, packaging and labeling, product inspection, and testing; they can also assist in material handling. In the year 2015, an estimated 1.64 million industrial robots were in operation worldwide, according to the International Federation of Robotics. The most commonly used robot configurations are articulated robots, SCARA robots, delta robots and Cartesian coordinate robots. In the context of general robotics, most types of industrial robots would fall into the category of robotic arms. Robots exhibit varying degrees of autonomy. Some robots are programmed to faithfully carry out specific actions over and over again without variation and with a high degree of accuracy; these actions are determined by programmed routines that specify the direction, velocity and distance of a series of coordinated motions. Other robots are much more flexible as to the orientation of the object on which they are operating, or even the task that has to be performed on the object itself, which the robot may need to identify.
For example, for more precise guidance, robots often contain machine vision sub-systems acting as their visual sensors, linked to powerful computers or controllers. Artificial intelligence, or what passes for it, is becoming an increasingly important factor in the modern industrial robot. The earliest known industrial robot, conforming to the ISO definition, was completed by "Bill" Griffith P. Taylor in 1937 and published in Meccano Magazine, March 1938. The crane-like device was built entirely using Meccano parts and powered by a single electric motor. Five axes of movement were possible, including grab rotation. Automation was achieved using punched paper tape to energise solenoids, which would facilitate the movement of the crane's control levers; the robot could stack wooden blocks in pre-programmed patterns. The number of motor revolutions required for each desired movement was first plotted on graph paper; this information was then transferred to the paper tape, which was also driven by the robot's single motor. Chris Shute built a complete replica of the robot in 1997.
George Devol applied for the first robotics patents in 1954. The first company to produce a robot was Unimation, founded by Devol and Joseph F. Engelberger in 1956. Unimation robots were called programmable transfer machines since their main use at first was to transfer objects from one point to another, less than a dozen feet or so apart, they used hydraulic actuators and were programmed in joint coordinates, i.e. the angles of the various joints were stored during a teaching phase and replayed in operation. They were accurate to within 1/10,000 of an inch. Unimation licensed their technology to Kawasaki Heavy Industries and GKN, manufacturing Unimates in Japan and England respectively. For some time Unimation's only competitor was Cincinnati Milacron Inc. of Ohio. This changed radically in the late 1970s when several big Japanese conglomerates began producing similar industrial robots. In 1969 Victor Scheinman at Stanford University invented the Stanford arm, an all-electric, 6-axis articulated robot designed to permit an arm solution.
This allowed it to follow arbitrary paths in space and widened the potential use of the robot to more sophisticated applications such as assembly and welding. Scheinman designed a second arm for the MIT AI Lab, called the "MIT arm." After receiving a fellowship from Unimation to develop his designs, Scheinman sold those designs to Unimation, which further developed them with support from General Motors and marketed them as the Programmable Universal Machine for Assembly (PUMA). Industrial robotics took off quite quickly in Europe, with both ABB Robotics and KUKA Robotics bringing robots to the market in 1973. ABB Robotics introduced the IRB 6, among the world's first commercially available all-electric, microprocessor-controlled robots. The first two IRB 6 robots were sold to Magnusson in Sweden for grinding and polishing pipe bends and were installed in production in January 1974. In 1973 KUKA Robotics built its first robot, known as FAMULUS, one of the first articulated robots to have six electromechanically driven axes.
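The teach-and-replay scheme described above for the early Unimates, in which joint angles are stored during a teaching phase and then played back in operation, can be illustrated with a minimal sketch. The Python below is purely illustrative: the four-joint arm, the pose values and the move_to callback are hypothetical stand-ins, not any real controller's interface.

```python
import time

# Hypothetical 4-axis arm: each pose is a tuple of joint angles in degrees.
taught_poses = []

def record_pose(joint_angles):
    """Store a joint-coordinate waypoint captured while an operator jogs the arm."""
    taught_poses.append(tuple(joint_angles))

def replay(move_to, dwell_s=0.5):
    """Drive the arm through every stored pose, in the order they were taught.

    move_to is a callback standing in for the (hypothetical) robot controller;
    it moves each joint to the commanded angle.
    """
    for pose in taught_poses:
        move_to(pose)        # point-to-point move expressed in joint coordinates
        time.sleep(dwell_s)  # pause so a gripper could act, parts settle, etc.

# Teach three waypoints, then replay them against a stub controller.
record_pose([0, 45, -30, 90])   # above the pick location
record_pose([0, 60, -45, 90])   # at the pick location
record_pose([90, 45, -30, 0])   # above the place location
replay(lambda pose: print("moving to", pose))
```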
Interest in robotics increased in the late 1970s and many US companies entered the field, including large firms like General Electric and General Motors. U.S. startup companies included Adept Technology, Inc. At the height of the robot boom in 1984, Unimation was acquired by Westinghouse Electric Corporation for 107 million U.S. dollars. Westinghouse sold Unimation to Stäubli Faverges SCA of France in 1988, which still makes articulated robots for general industrial and cleanroom applications and which bought the robotic division of Bosch in late 2004. Only a few non-Japanese companies managed to survive in this market, the major ones being Adept Technology, Stäubli, the Swedish-Swiss company ABB Asea Brown Boveri, the German company KUKA Robotics and the Italian company Comau. Number of axes – two axes are required to reach any point in a plane; to control the orientation of the end of the arm, three more axes (yaw, pitch and roll) are required.
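As a concrete illustration of the statement above that two axes are enough to reach any point in a plane (within the arm's reach), the following sketch computes the two joint angles of a hypothetical two-link planar arm for a given target point, using the standard law-of-cosines inverse-kinematics formula. The link lengths are arbitrary illustrative values, not the dimensions of any particular robot.

```python
import math

def two_link_ik(x, y, l1=0.4, l2=0.3):
    """Joint angles (radians) that place a two-link planar arm's tip at (x, y).

    l1 and l2 are the link lengths in metres (illustrative values only).
    Returns (shoulder, elbow) for one of the two mirror-image solutions,
    or None if the target lies outside the annular workspace of the two axes.
    """
    r2 = x * x + y * y
    # Law of cosines gives the elbow angle.
    c2 = (r2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    if not -1.0 <= c2 <= 1.0:
        return None                      # target is unreachable
    elbow = math.acos(c2)
    shoulder = math.atan2(y, x) - math.atan2(l2 * math.sin(elbow),
                                             l1 + l2 * math.cos(elbow))
    return shoulder, elbow

print(two_link_ik(0.5, 0.2))   # reachable: prints a pair of joint angles
print(two_link_ik(1.0, 1.0))   # outside the workspace: prints None
```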
Seiko Epson Corporation, or Epson, is a Japanese electronics company and one of the world's largest manufacturers of computer printers, information and imaging related equipment. Headquartered in Suwa, Japan, the company has numerous subsidiaries worldwide and manufactures inkjet, dot matrix and laser printers, desktop computers, business and home theatre projectors, large home theatre televisions and industrial automation equipment, point of sale docket printers and cash registers, integrated circuits, LCD components and other associated electronic components, it is one of three core companies of the Seiko Group, a name traditionally known for manufacturing Seiko timepieces since its founding. The roots of Seiko Epson Corporation go back to a company called Daiwa Kogyo, Ltd., founded in May 1942 by Hisao Yamazaki, a local clock shop owner and former employee of K. Hattori, in Suwa, Japan. Daiwa Kogyo was supported by an investment from the Hattori family and began as a manufacturer of watch parts for Daini Seikosha.
The company started operation in a 230-square-metre renovated miso storehouse with 22 employees. In 1943, Daini Seikosha established a factory in Suwa for manufacturing Seiko watches with Daiwa Kogyo. In 1959, the Suwa Factory of Daini Seikosha was split up and merged into Daiwa Kogyo to form Suwa Seikosha Co. Ltd: the forerunner of the Seiko Epson Corporation; the company has developed many timepiece technologies. In particular, it developed the world's first portable quartz timer in 1963, the world's first quartz watch in 1969, the first automatic power generating quartz watch in 1988 and the Spring Drive watch movement in 1999; the watch business is the root of the company’s micromechatronics technologies and still one of the major businesses for Seiko Epson today although it accounts for less than one-tenth of total revenues. The watches made by the company are sold through the Seiko Watch Corporation, a subsidiary of Seiko Holdings Corporation. In 1961, Suwa Seikosha established a company called Shinshu Seiki Co. as a subsidiary to supply precision parts for Seiko watches.
When the Seiko Group was selected to be the official timekeeper for the 1964 Summer Olympics in Tokyo, a printing timer was required to time events, and Shinshu Seiki started developing an electronic printer. In September 1968, Shinshu Seiki launched the world's first mini-printer, the EP-101, which was soon incorporated into many calculators. In June 1975, the name Epson was coined for the next generation of printers based on the EP-101, which were released to the public. In April of the same year, Epson America Inc. was established to sell printers for Shinshu Seiki Co. In June 1978, the TX-80, an eighty-column dot-matrix printer, was released to the market and was used as a system printer for the Commodore PET computer. After two years of further development, an improved model, the MX-80, was launched in October 1980; it was soon described in the company's advertising as the best-selling printer in the United States.
In November 1985, Suwa Seikosha Co. Ltd. and the Epson Corporation merged to form Seiko Epson Corporation. The company developed the Micro Piezo inkjet technology, which used a piezoelectric crystal in each nozzle and did not heat the ink at the print head while spraying the ink onto the page, released Epson MJ-500 inkjet printer in March 1993. Shortly after in 1994, Epson released the first high resolution color inkjet printer, the Epson Stylus Color utilizing the Micro Piezo head technology. Newer models of the Stylus series employed Epson’s special DURABrite ink, they had two hard drives. The HD 850 and the HD 860 MFM interface; the specifications are reference The WINN L. ROSCH Hardware bible 3rd addition SAMS publishing. In 1994 Epson started outsourcing sales reps to help sell their products in retail stores in the United States; the same year, they started the Epson Weekend Warrior sales program. The purpose of the program was to help improve sales, improve retail sales reps' knowledge of Epson products and to address Epson customer service in a retail environment.
Reps were assigned on weekend shift around 12–20 hours a week. Epson started the Weekend Warrior program with TMG Marketing with Keystone Marketing Inc to Mosaic, now with Campaigners INC; the Mosaic contract expired with Epson on June 24, 2007 and Epson is now represented by Campaigners, Inc. The sales reps of Campaigners, Inc. are not outsourced as Epson hired "rack jobbers" to ensure their retail customers displayed products properly. This frees up their regular sales force to concentrate on profitable sales solutions to VAR's and system integrators, leaving "retail" to reps who did not require sales skills. Starting in 1983, Epson entered the personal computer market with the QX-10, a CP/M-compatible Z80 machine. By 1986, the company had shifted to the growing PC compatible market with the Equity line. Epson withdrew from the PC market in 1996. In June 2003, the company became public following their listing on the 1st section of the Tokyo Stock Exchange; as of 2009, the Hattori family and its related individuals and companies are s
A cleanroom or clean room is a facility ordinarily utilized as a part of specialized industrial production or scientific research, including the manufacture of pharmaceutical items and microprocessors. Cleanrooms are designed to maintain low levels of particulates, such as dust, airborne organisms, or vaporized particles. Cleanrooms have a cleanliness level quantified by the number of particles per cubic meter at a predetermined particle size. The ambient outdoor air in a typical urban area contains 35,000,000 particles per cubic meter in the size range 0.5 μm and larger in diameter, equivalent to an ISO 9 cleanroom, while by comparison an ISO 1 cleanroom permits no particles in that size range and only 12 particles per cubic meter of 0.3 μm and smaller. The modern cleanroom was invented by the American physicist Willis Whitfield. As an employee of the Sandia National Laboratories, Whitfield created the initial plans for the cleanroom in 1960. Prior to Whitfield's invention, earlier cleanrooms had problems with particles and unpredictable airflows.
Whitfield designed his cleanroom with a constant filtered air flow to flush out impurities. Within a few years of its invention in the 1960s, Whitfield's modern cleanroom had generated more than US$50 billion in sales worldwide. The majority of the integrated circuit manufacturing facilities in Silicon Valley were made by three companies: MicroAire, PureAire, and Key Plastics. These competitors made laminar flow units, glove boxes, clean rooms and air showers, along with the chemical tanks and benches used in the 'Wet Process' building of integrated circuits. These three companies were the pioneers of the use of Teflon for airguns, chemical pumps, water guns and other devices needed for the production of integrated circuits. William C. McElroy Jr. worked as engineering manager, drafting room supervisor, QA/QC and designer for all three companies, and his designs added 45 original patents to the technology of the time. McElroy also wrote a four-page article for MicroContamination Journal, wet processing training manuals, and equipment manuals for wet processing and clean rooms.
Cleanrooms can be large. Entire manufacturing facilities can be contained within a cleanroom, with factory floors covering thousands of square meters. They are used extensively in semiconductor manufacturing, the life sciences, and other fields that are sensitive to environmental contamination. There are also modular cleanrooms. The air entering a cleanroom from outside is filtered to exclude dust, and the air inside is recirculated through high-efficiency particulate air (HEPA) and/or ultra-low particulate air (ULPA) filters to remove internally generated contaminants. Staff enter and leave through airlocks and wear protective clothing such as hoods, face masks, gloves and coveralls. Equipment inside the cleanroom is designed to generate minimal air contamination. Only special mops and buckets are used, and cleanroom furniture is easy to clean. Common materials such as paper and fabrics made from natural fibers are excluded, and alternatives are used. Cleanrooms are not sterile. Particle levels are tested using a particle counter, and microorganisms are detected and counted through environmental monitoring methods.
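The particle limits that such counts are checked against come from the ISO 14644-1 classification, which can be summarised by the approximate formula C_N = 10^N × (0.1/D)^2.08 particles per cubic metre, where N is the ISO class number and D is the particle diameter in micrometres. The short Python sketch below is illustrative only (it is not taken from the standard itself) and reproduces the roughly 35,000,000 particles per cubic metre quoted earlier for ISO 9 at 0.5 μm.

```python
def iso_class_limit(iso_class: float, particle_size_um: float) -> float:
    """Approximate ISO 14644-1 limit, in particles per cubic metre,
    for particles >= particle_size_um in a cleanroom of the given class."""
    return 10 ** iso_class * (0.1 / particle_size_um) ** 2.08

# ISO 9 at >= 0.5 um: about 35,000,000 particles/m^3, matching typical urban outdoor air.
print(f"{iso_class_limit(9, 0.5):,.0f}")
# ISO 5 at >= 0.5 um: about 3,520 particles/m^3, a common pharmaceutical/semiconductor spec.
print(f"{iso_class_limit(5, 0.5):,.0f}")
```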
Polymer tools used in cleanrooms must be determined to be chemically compatible with cleanroom processing fluids as well as ensured to generate a low level of particle generation. Some cleanrooms are kept at a positive pressure so if any leaks occur, air leaks out of the chamber instead of unfiltered air coming in; some cleanroom HVAC systems control the humidity to such low levels that extra equipment like air ionizers are required to prevent electrostatic discharge problems. Low-level cleanrooms may only require special shoes, with smooth soles that do not track in dust or dirt. However, for safety reasons, shoe soles must not create slipping hazards. Access to a cleanroom is restricted to those wearing a cleanroom suit. In cleanrooms in which the standards of air contamination are less rigorous, the entrance to the cleanroom may not have an air shower. An anteroom is used to put on clean-room clothing; some manufacturing facilities do not use realized cleanrooms, but use some practices or technologies typical of cleanrooms to meet their contamination requirements.
In hospitals, theatres are similar to cleanrooms for surgical patients' operations with incisions to prevent any infections for the patient. Cleanrooms maintain particulate-free air through the use of either HEPA or ULPA filters employing laminar or turbulent air flow principles. Laminar, or unidirectional, air flow systems direct filtered air downward or in horizontal direction in a constant stream towards filters located on walls near the cleanroom floor or through raised perforated floor panels to be recirculated. Laminar air flow systems are employed across 80% of a cleanroom ceiling to maintain constant air processing. Stainless steel or other non shedding materials are used to construct laminar air flow filters and hoods to prevent excess particles entering the air. Turbulent, or non unidirectional, air flow uses both laminar air flow hoods and nonspecific velocity filters to keep air in a cleanroom in constant motion, although not all in the same direction; the rough air seeks to trap particles that may be in the air and drive them towards the floor, where they enter filters and leave the cleanroom environment.
US FDA and EU have laid down guidelines and limit for microbial contamination, s
A watch is a timepiece intended to be carried or worn by a person. It is designed to keep working despite the motions caused by the person's activities. A wristwatch is designed to be worn around the wrist, attached by a watch strap or other type of bracelet. A pocket watch is designed for a person to carry in a pocket; the study of timekeeping is known as horology. Watches progressed in the 17th century from spring-powered clocks, which appeared as early as the 14th century. During most of its history the watch was a mechanical device, driven by clockwork, powered by winding a mainspring, keeping time with an oscillating balance wheel; these are called mechanical watches. In the 1960s the electronic quartz watch was invented, powered by a battery and kept time with a vibrating quartz crystal. By the 1980s the quartz watch had taken over most of the market from the mechanical watch; this is called the quartz revolution. Developments in the 2010s include smartwatches, which are elaborate computer-like electronic devices designed to be worn on a wrist.
They incorporate timekeeping functions, but these are only a small subset of the smartwatch's facilities. In general, modern watches display the day, date and year. For mechanical watches, various extra features called "complications", such as moon-phase displays and the different types of tourbillon, are sometimes included. Most electronic quartz watches, on the other hand, include time-related features such as timers and alarm functions. Furthermore, some modern smartwatches incorporate calculators, GPS and Bluetooth technology or have heart-rate monitoring capabilities, some of them use radio clock technology to correct the time. Today, most watches in the market that are inexpensive and medium-priced, used for timekeeping, have quartz movements. However, expensive collectible watches, valued more for their elaborate craftsmanship, aesthetic appeal and glamorous design than for simple timekeeping have traditional mechanical movements though they are less accurate and more expensive than electronic ones.
As of 2018, the most expensive watch sold at auction is the Patek Philippe Henry Graves Supercomplication, the world's most complicated mechanical watch until 1989, fetching 24 million US dollars in Geneva on November 11, 2014. Watches evolved from portable spring-driven clocks. Watches were not worn in pockets until the 17th century. One account says that the word "watch" came from the Old English word woecce which meant "watchman", because it was used by town watchmen to keep track of their shifts at work. Another says that the term came from 17th century sailors, who used the new mechanisms to time the length of their shipboard watches. A great leap forward in accuracy occurred in 1657 with the addition of the balance spring to the balance wheel, an invention disputed both at the time and since between Robert Hooke and Christiaan Huygens; this innovation increased watches' accuracy enormously, reducing error from several hours per day to 10 minutes per day, resulting in the addition of the minute hand to the face from around 1680 in Britain and 1700 in France.
The increased accuracy of the balance wheel focused attention on errors caused by other parts of the movement, igniting a two-century wave of watchmaking innovation. The first thing to be improved was the escapement; the verge escapement was replaced in quality watches by the cylinder escapement, invented by Thomas Tompion in 1695 and further developed by George Graham in the 1720s. Improvements in manufacturing such as the tooth-cutting machine devised by Robert Hooke allowed some increase in the volume of watch production, although finishing and assembling was still done by hand until well into the 19th century. A major cause of error in balance wheel timepieces, caused by changes in elasticity of the balance spring from temperature changes, was solved by the bimetallic temperature compensated balance wheel invented in 1765 by Pierre Le Roy and improved by Thomas Earnshaw; the lever escapement was the single most important technological breakthrough, was invented by Thomas Mudge in 1759 and improved by Josiah Emery in 1785, although it only came into use from about 1800 onwards, chiefly in Britain.
The British had predominated in watch manufacture for much of the 17th and 18th centuries, but maintained a system of production, geared towards high-quality products for the elite. Although there was an attempt to modernise clock manufacture with mass production techniques and the application of duplicating tools and machinery by the British Watch Company in 1843, it was in the United States that this system took off. Aaron Lufkin Dennison started a factory in 1851 in Massachusetts that used interchangeable parts, by 1861 it was running a successful enterprise incorporated as the Waltham Watch Company; the concept of the wristwatch goes back to the production of the earliest watches in the 16th century. Elizabeth I of England received a wristwatch from Robert Dudley in 1571, described as an armed watch; the oldest surviving wristwatch is one given to Joséphine de Beauharnais. From the beginning, wristwatches were exclusively worn by women, while men used pocket watches up until the early 20th century.
Wristwatches were first worn by military men towards the end of the 19th century, when the importance of synchronizing maneuvers during war, without revealing the plan to the enemy through signaling, was recognized. The Garstin Company of London patented a "Watch Wristlet" design in 1893, but they were producing similar designs from the 1880s
The SCARA acronym stands for Selective Compliance Assembly Robot Arm or Selective Compliance Articulated Robot Arm. In 1981, Sankyo Seiki, Pentel and NEC presented a new concept for assembly robots; the robot was developed under the guidance of Hiroshi Makino, a professor at the University of Yamanashi. The robot was called Selective Compliance Assembly Robot Arm, SCARA, its arm was rigid in the Z-axis and pliable in the XY-axes, which allowed it to adapt to holes in the XY-axes. By virtue of the SCARA's parallel-axis joint layout, the arm is compliant in the X-Y direction but rigid in the'Z' direction, hence the term: Selective Compliant; this is advantageous for many types of assembly operations, i.e. inserting a round pin in a round hole without binding. The second attribute of the SCARA is the jointed two-link arm layout similar to our human arms, hence the often-used term, Articulated; this feature allows the arm to extend into confined areas and retract or "fold up" out of the way. This is advantageous for transferring parts from one cell to another or for loading/ unloading process stations that are enclosed.
SCARAs are faster than comparable Cartesian robot systems. Their single pedestal mount requires a small footprint and provides an easy, unhindered form of mounting. On the other hand, SCARAs can be more expensive than comparable Cartesian systems and the controlling software requires inverse kinematics for linear interpolated moves; this software comes with the SCARA though and is transparent to the end-user. Most SCARA robots are based on serial architectures, which means that the first motor should carry all other motors. There exists a so-called double-arm SCARA robot architecture, in which two of the motors are fixed at the base; the first such robot was commercialized by Mitsubishi Electric. Another example of a dual-arm SCARA robot is Mecademic's DexTAR educational robot. Articulated robot Gantry robot Schoenflies displacement Why SCARA? A Case Study – A Comparison between 3-axis r-theta robot vs. 4-axis SCARA robot by Innovative Robotics, a division of Ocean Bay and Lake Company
Automation is the technology by which a process or procedure is performed with minimal human assistance. Automation or automatic control is the use of various control systems for operating equipment such as machinery, processes in factories and heat treating ovens, switching on telephone networks and stabilization of ships and other applications and vehicles with minimal or reduced human intervention; some processes have been automated, while others are semi-automated. Automation covers applications ranging from a household thermostat controlling a boiler, to a large industrial control system with tens of thousands of input measurements and output control signals. In control complexity it can range from simple on-off control to multi-variable high level algorithms. In the simplest type of an automatic control loop, a controller compares a measured value of a process with a desired set value, processes the resulting error signal to change some input to the process, in such a way that the process stays at its set point despite disturbances.
This closed-loop control is an application of negative feedback to a system. The mathematical basis of control theory was begun in the 18th century, advanced in the 20th. Automation has been achieved by various means including mechanical, pneumatic, electronic devices and computers in combination. Complicated systems, such as modern factories and ships use all these combined techniques; the benefit of automation include labor savings, savings in electricity costs, savings in material costs, improvements to quality and precision. The World Bank's World Development Report 2019 shows evidence that the new industries and jobs in the technological sector outweigh the economic effects of workers being displaced by automation; the term automation, inspired by the earlier word automatic, was not used before 1947, when Ford established an automation department. It was during this time that industry was adopting feedback controllers, which were introduced in the 1930s. Fundamentally, there are two types of control loop.
In open loop control the control action from the controller is independent of the "process output". A good example of this is a central heating boiler controlled only by a timer, so that heat is applied for a constant time, regardless of the temperature of the building.. In closed loop control, the control action from the controller is dependent on the process output. In the case of the boiler analogy this would include a thermostat to monitor the building temperature, thereby feed back a signal to ensure the controller maintains the building at the temperature set on the thermostat. A closed loop controller therefore has a feedback loop which ensures the controller exerts a control action to give a process output the same as the "Reference input" or "set point". For this reason, closed loop controllers are called feedback controllers; the definition of a closed loop control system according to the British Standard Institution is'a control system possessing monitoring feedback, the deviation signal formed as a result of this feedback being used to control the action of a final control element in such a way as to tend to reduce the deviation to zero.'
A Feedback Control System is a system which tends to maintain a prescribed relationship of one system variable to another by comparing functions of these variables and using the difference as a means of control. The advanced type of automation that revolutionized manufacturing, aircraft and other industries, is feedback control, continuous and involves taking measurements using a sensor and making calculated adjustments to keep the measured variable within a set range; the theoretical basis of closed loop automation is control theory. One of the simplest types of control is on-off control. An example is the thermostat used on household appliances which either opens or closes an electrical contact. Sequence control, in which a programmed sequence of discrete operations is performed based on system logic that involves system states. An elevator control system is an example of sequence control. A proportional–integral–derivative controller is a control loop feedback mechanism used in industrial control systems.
In a PID loop, the controller continuously calculates an error value e as the difference between a desired setpoint and a measured process variable and applies a correction based on proportional and derivative terms which give their name to the controller type. The theoretical understanding and application dates from the 1920s, they are implemented in nearly all analogue control systems. Sequential control may be either to a fixed sequence or to a logical one that will perform different actions depending on various system states. An example of an adjustable but otherwise fixed sequence is a timer on a lawn sprinkler. States refer to the various conditions that can occur in a sequence scenario of the system. An example is an elevator, which uses logic based on the system state to perform certain actions in response to its state and operator input. For example, if th | <urn:uuid:59917cd3-1fbd-4178-bc4e-f1b3a1d32b71> | CC-MAIN-2019-47 | https://wikivisually.com/wiki/Epson_Robots | s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496668534.60/warc/CC-MAIN-20191114182304-20191114210304-00097.warc.gz | en | 0.953186 | 6,917 | 3.71875 | 4 |
The past decade has seen a huge scientific effort unfold in order to measure the gut microbiome. Findings of the gut’s composition, function and influence are showing we are genetically more microbe than human. And excitingly, by simply understanding our microbial inhabitants we can change our gut health for the better.
And naturally too.
One of the easiest approaches to naturally supporting your gut health lies not in what you do; but what you don’t do.
Because majority rules.
Choosing to NOT eat foods that encourage unfavourable microbes or NOT follow harmful lifestyle practices are an important part of natural gut healing.
Your microbial millions can then do the hard work for you, weeding out the trouble-makers and restoring balance and harmony.
Leaving you healthier, happier and glowing.
From the inside out.
Why is Gut Health so Important?
There is a huge (and growing) body of evidence to suggest your trillions of resident microbes are not only calling the shots and driving your physical, mental and emotional wellbeing, they also influence your habits, your behaviour and your happiness.
Isn’t this what we all strive for?
The trillions of bacteria, fungi and viruses living in your gut, flourish within their own ecosystems. Creating vitamins, metabolites and energy not only for each other but symbiotically for you too.
All without your conscious input.
The balance and diversity of your gut microbiome drive processes we traditionally believed were due to organ systems and cellular functions; such as inflammation, immunity and cellular integrity.
Diseases Associated with Gut Health
States of dis-ease are quickly being recognised to be largely rooted in microbial pathways and managed by microbiome diversity, composition and influence. Making our daily choices (or avoidances) far more important than we previously understood.
Obesity: The microbiomes of those that are obese or overweight are known to be less diverse than the population within a ‘healthy’ weight range. It appears not only food and lifestyle choices play a crucial role in our fat metabolism but also our gut bacteria. Animal studies support this. One study showing mice who received antibiotics in their first month of life developed 60% more fat than those not exposed to antimicrobial drugs.
Tip: Choose to avoid unnecessary antibiotic interventions to maintain microbial balance.
Autism: Autistic spectrum disorder (ASD) children have vastly different microbiomes to neurotypical children. In a 2013 study, Italian researchers showed that ASD children’s gut microbiota composition (types of bacteria present) is very different to ‘healthy children’. Meaning the ecosystem of affected children is higher in ‘unfavourable’ bacterial groups and produces vastly different metabolites. Additionally, it is estimated that 50% of ASD children suffer from GI issues such as indigestion, malabsorption and allergies.
Tip: A microbiome test offers unparalleled insight into your microbiome providing invaluable information to empower you to make appropriate choices for your family.
Mental Wellbeing: 90% of your feel-good hormone, serotonin, is manufactured in your gut. The interaction of microbes with specialised EC (enterochromaffin) intestinal cells help modulate the production of this critical hormone. EC cells are stimulated by bacteria to produce this vital neurotransmitter. Similarly, some bacteria themselves produce brain chemicals.
This all begs the question, which came first?
Does an unhealthy microbiome develop as a result of a disease or do your dietary and lifestyle choices combine with your genetics to determine your microbiome?
Research is pointing to all of the above. A lot like the chicken and the egg.
Good news is the choices we make today can positively change or even correct imbalances in our microbiome to influence many conditions associated with an unhealthy gut. Such as:
- Autoimmune conditions
- Alzheimer's disease
- Dental cavities
- Gastric ulcers
- Irritable Bowel Syndrome
Making it so important to be mindful of what you are feeding your ecosystem, but also what you avoid in order to allow this natural and fascinating process to function optimally.
In order to come to this awareness, we need to understand it a little more.
What is Your Gut Microbiome?
Your gut, specifically your colon, is home to the largest concentration of microbes in your body. Current estimates suggest around 38 trillion (3.8 x 10^13 ) microbes thrive in the 400g of material waiting in your colon.
And within this 400g of future faeces, your microbes make up approximately 200g.
When we talk about microbes we describe all microbial life, not only bacterial life. Like a rainforest is far more diverse than ‘just trees’, our trillions of microbes are far more diverse than ‘just bacteria’.
The scores of bacteria in your gut coexist with:
- Archaea. Bacteria-like single-celled organisms.
- Fungi. Mostly yeasts and collectively referred to as the mycobiome.
- Microbial eukaryotes. Parasites, they’re not all bad.
- And viruses. There are many, often called the virome.
This eclectic collection of microbes is called the microbiota. And the genetic material they represent (their genes) is commonly referred to as the microbiome.
So in short, the gut microbiome is a 200g predominantly bacterial (archaea, fungi, parasites and viruses make up less than 0.1%) vast community living in your large intestine, eating what you eat and contributing a significant volume of DNA.
From here, we can look closer to understand the contribution of the different types (phyla) of bacteria within your gut microbiome. The most predominant gut bacteria fall into five phyla although many others may be present, these represent the majority for most people.
And as these large groups are examined further, the abundance of various bacterial genera and species are revealed. For example, the presence (and abundance) of Akkermansia muciniphila (a member of the Verrucomicrobia phylum) can be a positive finding as it has a key role in maintaining gut integrity and mucus production.
Your gut microbiome reads like a menu of what you eat every day.
Skilled practitioners and scientists can interpret microbiome test results to provide invaluable insight into not only what you eat but importantly what you don’t eat.
Providing you with informed dietary recommendations and support on your healing journey.
All by using a simple microbiome test.
Why Take a Gut Microbiome Test?
Knowledge is power. And we know that different bacteria consume different foods and produce different by-products.
So it follows that their relative abundance and balance can be controlled by you, through diet.
Naturally. You can choose who to feed.
Knowing precisely who is thriving (or not) in your gut allows you to really target the source of your issue, and correct it. Quite the opposite of symptom spotting.
If you’ve been living with chronic health issues you’ll know how much energy can be spent on finding someone who understands you and your individual health concerns.
Microbiome testing removes the guesswork and gives you a snapshot of exactly what’s going on in your gut.
Combined with appropriate and individual dietary/lifestyle advice, microbiome testing can allow you to take back control of your health.
It can change your life.
And your resident microbes’ too.
After all, your wellbeing supports their lives. Your body is their home.
Gut Health and Your Body
One of the most empowering outcomes of microbiome awareness is rooted in the intrinsic ability of the body to heal.
- Digestive disorders. Irritable bowel syndrome (IBS), Crohn’s Disease, chronic constipation, leaky gut, SIBO, bloating, diarrhoea ...
- Skin conditions. Eczema, psoriasis, rosacea and acne ...
- Autoimmune diseases. Fibromyalgia, Hashimoto’s, multiple sclerosis (MS), coeliac, lupus, rheumatoid arthritis ...
As you begin to understand how your digestion and microbes work you’ll begin to make choices based on their wellbeing. Which will, in turn, influence your own.
We are a superorganism.
A collection of many, all with a common goal. Our bodies do not set out to work against us (although it may feel like it at times). Our bodies want to be well and disease is simply a symptom of an imbalance and a call for help.
Understanding the source of our symptoms is the key to wellbeing.
And when we remove foods and practices that harm us, our body (and our microbes) restore balance. That innate intelligence within all life works towards health.
The microbiome adjusts. The body heals.
And the mind comes along for the ride.
Gut Health and Your Mind
We are so fortunate to bear witness to the plethora of amazing research linking gut health to mental health.
It is now common knowledge that the gut is home to millions on millions of neurons and produces neurotransmitters, messengers for sending and receiving messages to AND from the brain.
It’s little wonder we used phrases like gut feeling, gut instinct and food for thought, describing this connection long before science ‘discovered’ it.
Consequently, people all over the world are healing from mental illness through diet with the right support and lifestyle changes.
New fields are emerging like nutritional psychiatry, exploring the gut-brain connection (also called the gut-brain axis) and finally acknowledging food can treat psychological conditions.
It was never ‘all in your head’.
The Gut-Brain Connection
It is all also in your gut.
The gut is home to your Enteric Nervous System (or ENS) an intricate network of neurons deeply connected with your gastrointestinal (GI) tract. This includes the vagus nerve, the largest pathway connecting your brain to your gut. And it’s a two-way street. Signals have been shown to go BOTH ways.
Not surprisingly, stress has been found to inhibit the vagal nerve and has negative effects on the GI tract and microbiota and is involved in the disease processes of many illnesses including IBS and IBD (inflammatory bowel disease - a similar condition presenting with intestinal dysbiosis).
The role of stress in disease is widespread and stress reduction is an important consideration for everyone, especially those of us looking to heal gut issues.
Stress is hugely significant in the development and chronic nature of many gut-related conditions but unfortunately, it is one of many potential causes of an unhealthy gut.
What Causes an Unhealthy Gut
Building a better microbiome isn’t just about what you feed your body, but what you keep out. Modern life has introduced many microbiome polluters, substances that critically change the composition of your microbiome.
Sadly they are often hidden in your daily routine.
If you are looking for an easy place to start with gut-healing eliminating microbiome polluters is a great place to start.
- Antibiotics. Antibiotics have been designed to be anti-biotic, biotic meaning relating to or resulting from living organisms. And rather than being targeted they are wide acting chemicals whose effects have been likened to a ‘bomb going off’ in your gut. Antibiotics undeniably save lives but should be used only when absolutely necessary. Every time you take antibiotics, not only are you wiping out the ‘bad’ bacteria, you’re also destroying most of your beneficial gut bacteria. It can take years for your body to rebalance your gut bacteria without additional help.
Tip: Supplementing with a high-quality probiotic and eating fermented foods after taking antibiotics will help re-seed and support your microbiome balance.
- Processed Foods. Ultra-processed foods common in the Western diet are a relatively recent addition to the human menu and are often high in refined carbohydrates, harmful sugars and devoid of any real nutritional benefit. It follows that diets rich in these food-like-substances are not supportive of the bacterial friends that the human pre-fast food diet nurtured. Dr Mercola suggests these foods select for and feed ‘bad’ bacteria. Additionally, many of the artificial additives found in processed foods, like Polysorbate 80, also have a detrimental effect on your microbiome.
Tip: If it’s in a packet and you can’t pronounce it, don’t eat it.
- Tap Water. The water you receive when you turn on the tap is far from pure. Although it may be free of bacterial and protozoan infectious organisms (due to heavy chlorination) it contains a multitude of other harmful substances. Municipal water is treated with numerous chemicals that are toxic in high doses. The Australian Drinking Water Guidelines describes a number of buffering agents that are used in addition to the well-known chlorine and fluoride. They are chemicals who mimic the action of important thyroid hormones, substances collectively termed endocrine disruptors.
Tip: Use water filters in your kitchen and install a shower filter to reduce your family’s exposure. Heated water increases both vaporisation of the chemicals and your skin’s permeability.
- Low-fibre, high-fat diet. Low-carb is all the rage right now with bacon and butter being promoted as health foods. In moderation, these foods can be a balanced part of any diet, but problems can arise when we prioritize fat and reduce health-supportive fibres. Helpful bacteria eat fibre and harmful bacteria love fat, so a fibre-deficient high-fat diet can be the instigator for an unhealthy microbiome.
Tip: Take Michael Pollan’s sage advice. “Eat food. Not too much. Mostly plants.” And you’ll be well on your way to getting enough fibre, think soluble, insoluble and resistant starch.
Gut Microbiome Case Study
An intriguing case study into the effects of a pre-modern-era diet was conducted by Jeff Leach, an anthropologist studying the microbiomes of the Hadza. A central-Tanzanian hunter-gatherer tribe whose existence has remained largely unchanged for 10,000 years.
No antibiotics, no processed anything (they don’t even cultivate crops), seasonal rainwater and a very primal high-fibre diet.
He mapped his own microbiome on both the Standard American Diet (with the apt acronym - SAD) and after following the Hadza fibre-rich diet.
The results show a striking shift in the predominance of Firmicutes (a fat-loving phylum known to contribute to obesity) and Bacteroidetes (a phylum associated with leanness and overall health).
On the left, the SAD diet results are dominated by the red coloured Firmicutes. The graph on the right shows a dramatic shift to the blue coloured Bacteroidetes flourishing in response to his increased consumption of fibre, only 2-3 weeks later.
This undoubtedly illustrates the importance of both microbiome testing but also the rapid response of the microbiome. Bacteria replicate in response to food availability and they decrease in response to food scarcity. Exponentially.
It’s the balance of health-promoting microbes and the less favourable species which determines how you feel and how your body responds to their metabolites.
Signs and Symptoms of an Unhealthy Gut
We often don’t realise how unwell we were until we feel better. The dis-ease becomes a constant companion and our frame of reference. We learn to tolerate and live with all the niggly symptoms of disturbed health.
We miss the signs.
It’s only when we look back that we see how far we’ve come.
If you are struggling with one or more of the following symptoms, gut microbiome testing might be a helpful tool for you.
- Digestive Signs
Typical upset stomach symptoms like chronic diarrhoea, constipation, bloating and flatulence can all point to an unbalanced microbiome or dysbiosis. These can be caused by bacterial, fungal or parasitic disturbances resulting in inflammation, excessive gas production and irritation or leaky gut. Other symptoms include bad breath or halitosis and heartburn.
- Skin Signs
Our skin is said to be a mirror of our gut. Dark circles under the eyes, acne, blotches, rashes and rosacea can all be associated with gut troubles. Of particular note is eczema, a chronic inflammatory skin condition that is as unsightly as it is debilitating. Elimination diets can be very effective in relieving skin conditions as they are easily visually monitored.
- Mood Signs
Significant mood changes can indicate poor gut health. Seemingly harmless symptoms like moodiness, headaches and brain fog can mean the beginning of more concerning poor gut health symptoms like anxiety, depression and memory loss. ADD, ADHD and other behavioural and psychological syndromes have been shown to respond favourably to improvements in gut health.
- Body Signs
Your body will try to alert you to your hidden gut issues with symptoms such as sleep disturbances, rapid weight change and fatigue. More serious symptoms like autoimmune conditions (lupus, rheumatoid arthritis, Crohn’s disease, coeliac) are also heavily involved in unhealthy gut processes. Suffering from low immunity, joint pain or arthritis can also point to the gut.
- Food Signs
If you battle with food cravings or food intolerances and sensitivities chances are your gut is the culprit. Eating a diet high in sugar or suffering with nutritional deficiencies are also indicators of a gut out of balance.
Sound all too familiar? Take our online Gut Health Quiz to discover if you have any of the hidden warning signs of an unhealthy gut.
And remember, if you suffer any of the symptoms mentioned above it is important to get assessed by your healthcare provider or a qualified practitioner. While these symptoms are typical of gut health issues and related conditions there may be other underlying causes.
How Foods Help or Hurt Your Gut
Diet is the most natural (and empowering) way to address your gut health. Choosing to eat foods that positively influence your microbiome and avoid those that do not is a crucial step in gut healing.
Consequently, there are a growing number of diets that have had wonderful results for people all over the world.
Paleo, Keto, Plant-Based Whole Food, Low-Carb High-Fat, Gluten-Free Casein-Free and everything in between. And while all this information is wonderful, the conflicting advice can quickly become overwhelming.
Which is why choosing to work with a qualified practitioner who has tested your microbiome can fast-track your healing and takes the guesswork out of choosing the diet that’s right for you.
In the meantime, understanding how food affects your gut is an important step forward as often simply by eliminating troublesome foods you will notice improvements.
How Wheat Affects Your Gut
Wheat has received a lot of bad press recently, and with good reason. In addition to heavy spraying and ever increasing glyphosate concentrations, modern day wheat is very high in gluten.
Wheat gluten is comprised of different protein types called gliadins and glutenins.
And these soluble gliadin proteins act as messengers triggering your gut lining cells to produce and release another protein (called zonulin) which opens the spaces between gut cells (called tight junctions).
This increased permeability allows large molecules, including gluten, to pass through the gut lining causing an inflammation cascade of immune cells, cytokines and symptoms of what is commonly referred to as leaky gut.
On a cellular level, eating wheat causes your gut lining to leak.
What Dairy Does in Your Gut
Similarly, the dairy protein casein can negatively affect gut health. The casein proteins present in bovine dairy products are classified as A1 or A2-type bovine beta-caseins.
A1 caseins have been shown to stimulate gastro-intestinal inflammation by the release of the opioid beta-casomorphin. A1 is also thought to interfere with the production of the enzyme lactase causing malabsorption and mimicking lactose intolerance and associated symptoms.
Other troublesome and common foods include soy, refined sugar, caffeine and alcohol.
The Resistant Starch Pathways in Your Gut
On the other hand, consuming foods rich in resistant starch helps nourish and heal the gut. When resistant starch reaches the large intestine intact, it feeds bacteria that produce short-chain fatty acids (SCFA).
Butyrate is one of these SCFAs and has been associated with many health benefits including reduced intestinal inflammation, increased cellular fluid transport, increased motility and reinforces gut cell defences.
All very good reasons to eat more foods rich in resistant starch.
Steps to Improve Gut Health Through Diet
While these are broad guidelines for improving gut health through diet it’s important to remember that like your microbiome is unique. It’s a complex fingerprint of all the foods, experiences and life that has gone before you.
So it’s imperative to acknowledge what works for someone else may not work for you. You need to tune into your body, understand what’s in your microbiome and go from there. Gently.
Some widely regarded great starting steps are:
- Food. Following a whole food diet, low in processed foods and high in plants will immediately increase your fibre quota (to feed your helpful microbes) and lower your fat and sugar intake. Keeping stalks on fibrous vegetables like broccoli, cauliflower and asparagus are instant resistant starch boosters. Leeks are also wonderfully fibre-full.
Further reading: 26 Best Foods For Gut Health
- Fats. Increasing your intake of quality fats from fish, coconut oil, avocados and flax whilst reducing in from more unhealthy sources such as vegetable oils and saturated fats will help heal and repair your gut lining.
Find out more: Gut Bacteria and Weight Loss, The Surprising Connection
- Ferments. Incorporating good quality, homemade fermented foods into your daily routine such as sauerkraut, kimchi or fermented vegetables are an easy (and cost effective) way to boost probiotics naturally. A tablespoon a day can be enough to keep unfavourable microbes at bay. And if your kids are not keen on eating kraut, consider adding the juice to their cooled dinner. They won’t even know.
Read more: Fermenting Is The Food Trend You Need - For Your Gut’s Sake
- Friends. The importance of good friends cannot be emphasised enough. Both in your gut with the addition of a reputable multi-strain, high CFU probiotic and finding a community who understands your food goals and supports you. Gaining a community around your lifestyle choices can help when no one else seems to understand.
Looking for support? Book your Free Discovery Call with Amanda.
If you’re looking for some guidance or an easy to follow plan our Gut Heal and Nourish Program is a great place to start.
Learning to listen to your body and monitor how it responds to diet and lifestyle changes you make is a powerful gut-healing practice and important life skill.
How Your Lifestyle Can Heal Your Gut
In addition to feeding your microbiome for health and creating a community of like-minded health-conscious people, there are a number of gut-supportive lifestyle choices or practices you can incorporate into your week.
- Get outside. Numerous studies have linked dirt microbes to better gut health. Mycobacterium
vaccaefound in soil has been shown to fight depression and boost immune responses. And while M.vaccae can be bought as a probiotic, it lives free-of-charge in your garden. So forget the hand sanitiser and get dirty.
- See the sun. While sunlight is undeniably good for mental wellbeing, vitamin D deficiency has been shown to reduce the antimicrobial molecules (defensins) that are important for maintaining healthy gut flora.
- Toxin reduction. Reducing your exposure to xenobiotic (including antimicrobial) chemicals and pesticides will help you maintain your microbiome diversity. Commonly used chemicals such as chlorine, sanitisers and household insecticides are designed to kill microorganisms including yours.
- Reduce Stress. Stress comes in many forms, there’s; psychological, sleep disturbance, environmental stressors (extreme weather etc), diet, physical activity, noise and even pollutants act as stressors. Research is showing that stress impacts the function, composition and metabolic activity of the gut microbiome. Different origins of stress have different e
ffectsand can be both helpful or harmful.
- Intermittent Fasting. Fasting is quickly becoming a recognised beneficial practice for improving and maintaining gut health. It appears that traditional wisdom knew a thing or two about microbiome maintenance long ago. Intermittent fasting involves eating and fasting in cyclical patterns, often with up to 18 hours between meals. The health benefits are proving to be highly popular, all from skipping breakfast.
- Stay Hydrated. Keeping you and your gut microbes well watered is an easy way to maintain a healthy gut naturally. Did you know, around 8 litres of water per day is absorbed in digestive processes! As well as the myriad of general health benefits of consuming water, it is important to make sure your water is as pure as possible.
- Sleep. Sleep has long been touted a
cure all, and conversely,the sleep deprivation is known to have serious consequences. And your microbiome plays a crucial role, via the gut-brain axis and also directly. Serotonin and GABA are important neurotransmitters involved in relaxation and sleep. And both are produced by gut bacteria. Streptococcus and Enterococcus produce serotonin which helps make the sleep molecule melatonin. Lactobacillus and Bifidobacteria produce GABA (gamma-aminobutyric acid) decreases beta-brainwaves and increases the calming alpha-brainwaves.
- Learn. And finally, one of the most empowering things you can do for your family’s health is to
educatethem on the importance of gut health. Incorporating simple diet and lifestyle changes into your everyday routine (or eliminating harmful ones) can reap massivephysical, emotional and environmental benefits.
Natural gut health is a lifestyle philosophy.
Becoming aware of your daily practices, your diet and who’s in your microbiome are powerful tools to improve every aspect of your health.
Including foods that heal, eliminating those that do not and truly understanding the dynamic and supportive role that microorganisms play in our biology can bring about massive change.
Your health is absolutely in your control.
That said however, there really are no quick-fix solutions.
When it comes to healing the gut naturally, persistence, patience and the right advice are cornerstones for gut health. Our Gut Heal and Nourish Program offers personalised support, an easy-to-follow 8-week meal plan, recipes, shopping lists, supplement advice and more.
And if you apply what you’ve learned, you’re well on your way to ultimate gut health, naturally. | <urn:uuid:1793fbca-9032-4970-b5a8-7f1067b861e9> | CC-MAIN-2019-47 | https://pranathrive.com/how-to-improve-gut-health/ | s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496665521.72/warc/CC-MAIN-20191112101343-20191112125343-00418.warc.gz | en | 0.921529 | 5,653 | 3.09375 | 3 |
Age-related macular degeneration (AMD) is a chronic eye disease that affects more than 10 million Americans. It is the leading cause of vision loss in people over 60 in the United States, and the number of people affected rises steadily with age.
Macular degeneration affects tissue in the part of your retina that is responsible for central vision, called the macula. It causes blurred vision or a blind spot in the center of your vision, and can interfere with reading, driving, and other daily activities. You may first notice symptoms when you need more light to see up close.
There are two forms of AMD. Dry AMD affects about 85% of those with the disease and causes gradual loss of central vision, sometimes starting in one eye. Wet AMD, which accounts for 90% of all severe vision loss from the disease, often involves a sudden loss of central vision. Most people with the wet form of AMD previously had the dry form.
Signs and Symptoms
Symptoms of dry AMD include:
- Needing more light when doing close-up work
- Blurring of print when trying to read
- Colors appear less bright
- Haziness of vision
- Blurred spot in the central field of vision, which may get larger and darker
Symptoms of wet AMD include:
- Straight lines that appear wavy
- Objects appearing further away or smaller than usual
- Loss of central vision
- Sudden blind spot
What Causes It?
The macula, a part of your eye's retina, is made of light-sensitive cells called rods and cones, which are needed for central vision. Underneath the macula is a layer of blood vessels called the choroid, which supplies blood to the macula. A layer of tissue on the retina called the retinal pigment epithelium (RPE) keeps the macula healthy by transporting nutrients from the blood vessels to the macula and moving waste products from the macula to the blood vessels.
As you get older, the RPE can thin and become less efficient at moving nutrients and waste back and forth. Waste builds up in the macula, and cells in the macula become damaged from lack of blood, affecting your vision.
With dry AMD, RPE cells lose their color and do not get rid of waste products from the rods and cones. As waste builds up, the rods and cones deteriorate.
With wet AMD, blood vessels grow underneath the macula and leak fluid or blood. Researchers do not know exactly what causes the new blood vessels to grow, although they think that it may be the breakdown in waste removal. That could explain why people with the wet form almost always start out with the dry form. The new blood vessels interfere with getting nutrients to the macula, and the rods and cones start to break down.
Who is Most At Risk?
People with the following conditions or characteristics are at risk for developing AMD:
- Age. Macular degeneration is the leading cause of severe vision loss in people over 60.
- Gender. Women are more likely to develop it than men.
- Cigarette smoking
- Family history of macular degeneration
- Heart disease
- High cholesterol
- Light eye color
- Long-term exposure to sunlight
- Low levels of antioxidants in your blood
- Carrying weight around your waist (belly fat)
- Use of antacids. Regularly using antacids has been linked with developing AMD.
What to Expect at Your Provider's Office
Your eye doctor can screen you for AMD as you get older. However, if you have any changes in your central vision, or in your ability to see colors, you should see your doctor right away. Your doctor may use several methods to test you for AMD including:
- A visual field test.
- Testing with an Amsler grid, which involves covering one eye and staring at a black dot in the center of a checkerboard-like grid. If the straight lines in the pattern look wavy, or some of the lines seem to be missing, these may be signs of macular degeneration.
- Fluorescein angiography, where a special dye is injected into a vein in your arm and pictures are taken as the dye passes through the blood vessels in the retina.
- Optical coherence tomography (OCT), an imaging test that can look for areas where the retina may be thin or where there may be fluid under the retina.
There is no known cure for AMD; however, there are treatments that can help slow vision loss. Certain procedures and medications may stop the wet form of the disease from getting worse. Adding antioxidants to your diet may help prevent the wet and dry forms of AMD and slow their progression.
The dry form of AMD can progress to the wet form. If you have dry AMD, you will test your eyes daily at home using an Amsler grid. Let your doctor know immediately if there is any change in your vision.
Wear sunglasses, hats, and visors when exposed to the sun.
For wet AMD, a type of medication called an anti-vascular endothelial growth factor (anti-VEGF) agent can be injected into your eye to stop new blood vessels from growing. Two such drugs are approved to treat AMD:
- Pegaptanib (Macugen)
- Ranibizumab (Lucentis)
Surgical and Other Procedures
Surgical and other procedures may help some cases of wet macular degeneration.
Photocoagulation (laser surgery). In photocoagulation, doctors use a laser to seal off blood vessels that have grown under the macula. Whether this procedure is used depends on:
- Where the blood vessels are located
- How much fluid or blood has leaked out
- How healthy the macula is
Photodynamic therapy is often used to seal off blood vessels that are under the center of the macula. Using photocoagulation in that location would result in permanent central vision loss. With photodynamic therapy, the doctor gives you a light-activated drug that collects in the blood vessels under the macula. When a light is shone into your eye, the drug closes off those vessels without damaging the rest of the macula. Photodynamic therapy slows vision loss but does not stop it.
Complementary and Alternative Therapies
Supplements are a valuable treatment for dry AMD. They may also help prevent both wet and dry types. However, you should not try to self-treat vision problems. See your doctor first for a diagnosis and treatment plan.
Nutrition
To treat AMD
- AREDS formula (vitamin C, vitamin E, beta-carotene, and zinc, plus copper). The Age-Related Eye Disease Study (AREDS) found that a combination of antioxidant vitamins plus zinc helped slow the progression of intermediate macular degeneration to an advanced stage. Because the advanced stage is when most vision loss happens, the supplement can help stave off vision loss.
The National Eye Institute recommends that people with intermediate AMD in one or both eyes or with advanced AMD (wet or dry) in one eye but not the other take this formulation each day. However, this combination of nutrients did not help prevent AMD, nor did it slow progression of the disease in those with early AMD. The doses of nutrients are:
- Vitamin C (500 mg per day)
- Vitamin E (400 IU per day)
- Beta-carotene (15 mg per day, or 25,000 IU of vitamin A)
- Zinc (80 mg per day)
- Copper (2 mg per day, to prevent copper deficiency that can occur when taking extra zinc)
Ocuvite PreserVision is formulated to contain the proper amounts of these nutrients. People who already take a multivitamin should let their doctor know before taking this formulation. Zinc can be harmful at a high dose, like the 80 mg used in this formulation, so be sure to take this combination only under your doctor's supervision. Zinc can cause copper deficiency, so a small amount of copper is added to the nutrients.
In the study, 7.5% of people who took zinc had problems including:
- Urinary tract infections
- Enlarged prostate
- Kidney stones
By comparison, 5% of the people in the study who did not receive zinc had these problems.
- Lutein and zeaxanthin. High levels of these two carotenoids, which give plants their orange, red, or yellow color, may help protect against AMD, either by acting as antioxidants or by protecting the macula from light damage. One study found that people with AMD who took lutein alone, or in combination with other antioxidants, had less vision loss, while those who took a placebo had no change. However, another study failed to find any benefit from lutein. Egg yolks, spinach, and corn have high concentrations of lutein and zeaxanthin.
To help reduce the risk of AMD
- Leafy greens. People who eat dark, leafy greens, such as spinach, kale, collard greens, and watercress, tend to have a lower risk of AMD.
One study found that taking vitamins B6, B12, and folic acid reduced the risk of AMD in women over 40 with a history of, or at risk for, heart disease. The doses used were:
- Vitamin B6 (50 mg daily)
- Vitamin B12 (1,000 mcg per day)
- Folic acid (2,500 mcg per day)
Folic acid can mask a vitamin B12 deficiency. Talk to your doctor before taking these vitamins at these doses.
- Omega-3 fatty acids (fish oil). In a study of more than 3,000 people over the age of 49, those who ate more fish were less likely to have AMD than those who ate fewer fish. Other studies show that eating fatty fish at least once a week cuts the risk of AMD in half. Another larger study found that consuming docosahexaenoic acid (DHA) and eicosapentaenoic acid (EPA), two types of omega-3 fatty acids found in fish, 4 or more times per week may reduce the risk of developing AMD. However, this same study suggests that alpha-linolenic acid (another type of omega-3 fatty acid) may actually increase the risk of AMD. It is safe to eat more fish, although you may want to eat fish with lower levels of mercury.
Women who are pregnant or breastfeeding are advised to eat no more than 12 ounces a week of a variety of fish and shellfish that are lower in mercury. Talk to your doctor before taking fish oil supplements if you are at risk for AMD. Fish oil may increase your risk of bleeding, especially if you already take blood thinners, such as warfarin (Coumadin) or aspirin.
Herbs
The use of herbs is a time-honored approach to strengthening the body and treating disease. Herbs, however, can trigger side effects and can interact with other herbs, supplements, or medications. For these reasons, you should take herbs with care, under the supervision of a health care practitioner.
- Ginkgo (Ginkgo biloba). 160 to 240 mg per day. Ginkgo contains flavonoids, which researchers think may also help in AMD. Two studies showed that people with AMD who took ginkgo were able to slow their vision loss. Ginkgo can increase the risk of bleeding, so people who take blood thinners, such as warfarin (Coumadin), clopidogrel (Plavix), aspirin, or any other medication that decreases clotting, should not take ginkgo without talking to their doctor.
- Bilberry (Vaccinium myrtillus), 120 to 240 mg, 2 times per day, and grape seed (Vitis vinifera), 50 to 150 mg per day. Both are high in flavonoids, so researchers think they may help prevent and treat AMD. However, so far no studies have looked at using bilberry or grape seed to treat AMD. Bilberry and grape seed may increase the risk of bleeding, so people who take blood thinners, such as warfarin (Coumadin), clopidogrel (Plavix), aspirin, or any other medication that decreases clotting, should not take either bilberry or grape seed without talking to their doctor. People with low blood pressure, heart disease, diabetes, or blood clots should not take bilberry without first talking to their doctor. DO NOT take bilberry if you are pregnant or breastfeeding.
- Milk thistle. 150 mg, 2 to 3 times per day. Silymarin, from milk thistle, is a major supporter of liver function. The liver is a key organ for maintaining eye health because the fat-soluble vitamins and the B vitamins are stored there. There is some concern that milk thistle compounds have estrogen-like effects in the body. If you have hormone-sensitive issues, you should discuss the risks and benefits with your physician. The same holds true for people taking any prescription medication, since milk thistle exerts its influence via the liver, where the majority of medications are metabolized. If you have an allergy to ragweed, you may also react to milk thistle. Speak to your doctor.
Severe AMD can cause legal blindness. Low vision aids may help if you have partial blindness. Sometimes blood vessels build up underneath the retina, causing the retina to become detached or scarred. If this happens, the chances of preserving your central vision are poor. This condition, called subretinal neovascularization, happens in about 20% of cases of AMD. It often comes back even after laser treatment.
Your eye doctor will see you regularly to monitor your vision and eye health.
Ahmadi MA, Lim JI. Pharmacotherapy of age-related macular degeneration. Expert Opin Pharmacother. 2008;9(17):3045-52.
Age-Related Eye Disease Study Research Group. A randomized, placebo-controlled, clinical trial of high-dose supplementation with vitamins C and E, beta carotene, and zinc for age-related macular degeneration and vision loss: AREDS report no. 8. Arch Ophthalmol. 2001;119(10):1417-36.
Age-Related Eye Disease Study Research Group. A randomized, placebo-controlled, clinical trial of high-dose supplementation with vitamins C and E, beta carotene, and zinc for age-related macular degeneration and vision loss: AREDS report no. 9. Arch Ophthalmol. 2001;119(10):1439-52.
Augood C, et al. Oily fish consumption, dietary docosahexaenoic acid and eicosapentaenoic acid intakes, and associations with neovascular age-related macular degeneration. Am J Clin Nutr. 2008;88(2):398-406.
Bartlett HE, Eperjesi F. Effect of lutein and antioxidant dietary supplementation on contrast sensitivity in age-related macular disease: a randomized controlled trial. Eur J Clin Nutr. 2007 Sep;61(9):1121-7.
Bone RA, Landrum JT, Guerra LH, Ruiz CA. Lutein and zeaxanthin dietary supplements raise macular pigment density and serum concentrations of these carotenoids in humans. J Nutr. 2003;133(4):992-8.
Cai J, Nelson KC, Wu M, Sternberg P Jr, Jones DP. Oxidative damage and protection of the RPE. Prog Retin Eye Res. 2000;19(2):205-21.
Carpentier S, Knaus M, Suh M. Associations between lutein, zeaxanthin, and age-related macular degeneration: an overview. Crit Rev Food Sci Nutr. 2009;49(4):313-26.
Chang CW, Chu G, Hinz BJ, Greve MD. Current use of dietary supplementation in patients with age-related macular degeneration. Can J Ophthalmol. 2003;38(1):27-32.
Cho E, Hung S, Willet WC, et al. Prospective study of dietary fat and the risk of age-related macular degeneration. Am J Clin Nutr. 2001;73(2):209-18.
Christen WG, Glynn RJ, Chew EY, Albert CM, Manson JE. Folic acid, pyridoxine, and cyanocobalamin combination treatment and age-related macular degeneration in women: The Women's Antioxidant and Folic Acid Cardiovascular Study. Arch Intern Med. 2009;169(4):335-41.
Coleman H, Chew E. Nutritional supplementation in age-related macular degeneration. Curr Opin Ophthalmol. 2007 May;18(3):220-3. Review.
Cong R, Zhou B, Sun Q, Gu H, Tang N, Wang B. Smoking and the risk of age-related macular degeneration: a meta-analysis. Ann Epidemol. 2008;18(8):647-56.
Diamond BJ, Shiflett SC, Feiwell N, Matheis RJ, Noskin O, Richards JA, et al. Ginkgo biloba extract: mechanisms and clinical indications. Arch Phys Med Rehabil. 2000;81(5):668-78.
Eat fish and protect against MD. Health News. 2006 Sep;12(9):8.
Evans JR. Antioxidant vitamin and mineral supplements for age-related macular degeneration. Cochrane Database Syst Rev. 2002;20:CD000254.
Falsini B, Piccardi M, Iarossi G, Fadda A, Merendino E, Valentini P. Influence of short-term antioxidant supplementation on macular function in age-related maculopathy: a pilot study including electrophysiologic assessment. Ophthalmology. 2003;110(1):51-60;discussion 61.
Ferri. Ferri's Clinical Advisor 2016. 1st ed. Philadelphia, PA: Elsevier Mosby. 2016.
Fies P, Dienel A. [Ginkgo extract in impaired vision – treatment with special extract Egb 761 of impaired vision due to dry senile macular degeneration]. Wiedn Med Wochenschr. 2002;152(15-16):423-6.
Flood V, Smith W, Wang JJ, Manzi F, Webb K, Mitchell P. Dietary antioxidant intake and incidence of early age-related maculopathy: the Blue Mountains Eye Study. Ophthalmology. 2002;109(12):2272-8.
Gohel P, Mandava N, Olson J, Durairaj V, Age-related Macular Degeneration: An Update on Treatment. Amer J of Med. 2008;121(4).
Hambridge M. Human zinc deficiency. J Nutr. 2000;130(5S suppl):1344S-9S.
Heber D, Bowerman S. Applying science to changing dietary patterns. J Nutr. 2001;131(11 Suppl):3078-81S.
Hodge WG, Barnes D, Schachter HM, Pan YI, Lowcock EC, Zhang L, et al. Evidence for the effect of omega-3 fatty acids on progression of age-related macular degeneration: a systematic review. Retina. 2007 Feb;27(2):216-21. Review.
Hyman L, Neborsky R. Risk factors for age-related macular degeneration: an update. Burr Opin Ophthalmol. 2002;13(3):171-5.
Jones AA. Age related macular degeneration--should your patients be taking additional supplements? Aust Fam Physician. 2007 Dec;36(12):1026-8.
Kuzniarz M, Mitchell P, Flood VM, Wang JJ. Use of vitamin and zinc supplements and age-related maculopathy: the Blue Mountains Eye Study. Ophthalmic Epidemiol. 2002;9(4):283-95.
Landrum JT, Bone RA. Lutein, zeaxanthin, and the macular pigment. Arch Biochem Biophys. 2001;385(1):28-40.
Lim LS, Mitchell P, Seddon JM, Holz FG, Wong TY. Age-related macular degeneration. Lancet. 2012; 379(9827):1728-38.
Ma L, Dou HL, Wu YQ, Huang YM, Huang YB, Xu XR, Zou ZY, Lin XM. Lutein and zeaxanthin intake and the risk of age-related macular degeneration: a systematic review and meta-analysis. Br J Nutr. 2011 Sep 8:1-10. [Epub ahead of print]
Ma L, Yan SF, Huang YM, et al. Effect of lutein and zeaxanthin on macular pigment and visual function in patients with early age-related macular degeneration. Ophthalmology. 2012;119(11):2290-7.
Mataix J, Desco MC, Palacios E, Garcia-Pous M, Navea A. Photodynamic therapy for age-related macular degeneration. Ophthalmic Surg Lasers Imaging. 2009;40(3):277-84.
McBee WL, Lindblad AS, Ferris III FL. Who should receive oral supplement treatment for age-related macular degeneration? Curr Opin Ophthalmol. 2003;14(3):159-62.
Merle B, Delyfer MN, Korobelnik JF, Rougier MB, Colin J, Malet F, Féart C, Le Goff M, Dartigues JF, Barberger-Gateau P, Delcourt C. Dietary omega-3 fatty acids and the risk for age-related maculopathy: the Alienor Study. Invest Ophthalmol Vis Sci. 2011 Jul 29;52(8):6004-11.
Michels S, Kurz-Levin M. Age-related macular degeneration (AMD). Ther Umsch. 2009;66(3):189-95.
Morris MS, Jacques PF, Chylack LT, Hankinson SE, Willett WC, Hubbard LD, Taylor A. Intake of zinc and antioxidant micronutrients and early age-related maculopathy lesions. Ophthalmic Epidemiol. 2007 Sep-Oct;14(5):288-98.
Peeters A, Magliano DJ, Stevens J, Duncan BB, Klein R, Wong TY. Changes in abdominal obesity and age-related macular degeneration: the Atherosclerosis Risk in Communities Study. Arch Ophthalmol. 2008;126(11):1554-60.
Rakel. Integrative Medicine. 3rd ed. Philadelphia, PA: Elsevier Saunders. 2012.
Rakel. Textbook of Family Medicine. 8th ed. Philadelphia, PA: Elsevier Saunders. 2011.
Robman L, Vu H, Hodge A, Tikellis G, Dimitrov P, McCarty C, Guymer R. Dietary lutein, zeaxanthin, and fats and the progression of age-related macular degeneration. Can J Ophthalmol. 2007 Oct;42(5):720-6.
Seddon JM. Multivitamin-multimineral supplements and eye disease: age-related macular degeneration and cataract. Am J Clin Nutr. 2007 Jan;85(1):304S-307S. Review.
Seddon JM, Rosner B, Sperduto RD, Yannuzzi L, Haller JA, Blair NP, Willett W. Dietary fat and risk for advanced age-related macular degeneration. Arch Opthalmol. 2001;119(8):1191-9.
Supplements may slow age-related macular degeneration. Mayo Clin Health Lett. 2002;20(3):4.
Supplements slow the course of macular degeneration. Harv Womens Health Watch. 2001;9(5):1-2.
Trieschmann M, Beatty S, Nolan JM, Hense HW, Heimes B, Austermann U, et al. Changes in macular pigment optical density and serum concentrations of its constituent carotenoids following supplemental lutein and zeaxanthin: the LUNA study. Exp Eye Res. 2007 Apr;84(4):718-28.
Review Date: 11/6/2015
Reviewed By: Steven D. Ehrlich, NMD, Solutions Acupuncture, a private practice specializing in complementary and alternative medicine, Phoenix, AZ. Review provided by VeriMed Healthcare Network. Also reviewed by the A.D.A.M. Editorial team. | <urn:uuid:cabec09e-1c8d-47d0-9d28-4f37c82ab79d> | CC-MAIN-2019-47 | https://ssl.adam.com/content.aspx?productid=107&isarticlelink=false&pid=33&gid=000104&site=ummchealth.adam.com&login=UMMCHealth | s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496669755.17/warc/CC-MAIN-20191118104047-20191118132047-00418.warc.gz | en | 0.872849 | 5,247 | 3.609375 | 4 |
This article needs additional citations for verification. (January 2009) (Learn how and when to remove this template message)
Tours (// TOOR, French: [tuʁ] (listen)) is a city in the west of France. It is the administrative centre of the Indre-et-Loire department and the largest city in the Centre-Val de Loire region of France (although it is not the capital, which is the region's second-largest city, Orléans). In 2012, the city of Tours had 134,978 inhabitants, and the population of the whole metropolitan area was 483,744.
Town hall and Place Jean Jaurès
|Region||Centre-Val de Loire|
|Intercommunality||Tours Métropole Val de Loire|
|• Mayor (2017-2020)||Christophe Bouchet|
|34.36 km2 (13.27 sq mi)|
|• Density||4,100/km2 (11,000/sq mi)|
|Time zone||UTC+01:00 (CET)|
|• Summer (DST)||UTC+02:00 (CEST)|
37261 /37000, 37100, 37200
|Elevation||44–119 m (144–390 ft)|
|1 French Land Register data, which excludes lakes, ponds, glaciers > 1 km2 (0.386 sq mi or 247 acres) and river estuaries.|
Tours stands on the lower reaches of the Loire river, between Orléans and the Atlantic coast. The surrounding district, the traditional province of Touraine, is known for its wines, for the alleged perfection (as perceived by some speakers and for historical reasons) of its local spoken French, and for the Battle of Tours (732). The historical center of Tours (Also called "Le vieux Tours") is a UNESCO World Heritage Site. The city is also the end-point of the annual Paris–Tours cycle race.
- 1 History
- 2 Climate
- 3 Sights
- 4 Language
- 5 City
- 6 Population
- 7 Transportation
- 8 Sport
- 9 Catholics from Tours
- 10 Notable natives and residents
- 11 Twin towns - sister cities
- 12 Gallery
- 13 See also
- 14 References
- 15 Further reading
- 16 External links
In Gallic times the city was important as a crossing point of the Loire. Becoming part of the Roman Empire during the 1st century AD, the city was named "Caesarodunum" ("hill of Caesar"). The name evolved in the 4th century when the original Gallic name, Turones, became first "Civitas Turonum" then "Tours". It was at this time that the amphitheatre of Tours, one of the five largest amphitheatres of the Empire, was built. Tours became the metropolis of the Roman province of Lugdunum towards 380–388, dominating the Loire Valley, Maine and Brittany. One of the outstanding figures of the history of the city was Saint Martin, second bishop who shared his coat with a naked beggar in Amiens. This incident and the importance of Martin in the medieval Christian West made Tours, and its position on the route of pilgrimage to Santiago de Compostela, a major centre during the Middle Ages.
In the 6th century Gregory of Tours, author of the Ten Books of History, made his mark on the town by restoring the cathedral destroyed by a fire in 561. Saint Martin's monastery benefited from its inception, at the very start of the 6th century from patronage and support from the Frankish king, Clovis, which increased considerably the influence of the saint, the abbey and the city in Gaul. In the 9th century, Tours was at the heart of the Carolingian Rebirth, in particular because of Alcuin abbot of Marmoutier.
In 732 AD, Abdul Rahman Al Ghafiqi and a large army of Muslim horsemen from Al-Andalus advanced 500 kilometres (311 miles) deep into France, and were stopped at Tours by Charles Martel and his infantry igniting the Battle of Tours. The outcome was defeat for the Muslims, preventing France from Islamic conquest. In 845, Tours repulsed the first attack of the Viking chief Hasting (Haesten). In 850, the Vikings settled at the mouths of the Seine and the Loire. Still led by Hasting, they went up the Loire again in 852 and sacked Angers, Tours and the abbey of Marmoutier.
During the Middle Ages, Tours consisted of two juxtaposed and competing centres. The "City" in the east, successor of the late Roman 'castrum', was composed of the archiepiscopal establishment (the cathedral and palace of the archbishops) and of the castle of Tours, seat of the authority of the Counts of Tours (later Counts of Anjou) and of the King of France. In the west, the "new city" structured around the Abbey of Saint Martin was freed from the control of the City during the 10th century (an enclosure was built towards 918) and became "Châteauneuf". This space, organized between Saint Martin and the Loire, became the economic centre of Tours. Between these two centres remained Varennes, vineyards and fields, little occupied except for the Abbaye Saint-Julien established on the banks of the Loire. The two centres were linked during the 14th century.
Tours became the capital of the county of Tours or Touraine, territory bitterly disputed between the counts of Blois and Anjou – the latter were victorious in the 11th century. It was the capital of France at the time of Louis XI, who had settled in the castle of Montils (today the castle of Plessis in La Riche, western suburbs of Tours), Tours and Touraine remained until the 16th century a permanent residence of the kings and court. The rebirth gave Tours and Touraine many private mansions and castles, joined together to some extent under the generic name of the Châteaux of the Loire. It is also at the time of Louis XI that the silk industry was introduced – despite difficulties, the industry still survives to this day.
Charles IX passed through the city at the time of his royal tour of France between 1564 and 1566, accompanied by the Court and various noblemen: his brother the Duke of Anjou, Henri de Navarre, the cardinals of Bourbon and Lorraine. At this time, the Catholics returned to power in Angers: the intendant assumed the right to nominate the aldermen. The Massacre of Saint-Barthelemy was not repeated at Tours. The Protestants were imprisoned by the aldermen – a measure which prevented their extermination. The permanent return of the Court to Paris and then Versailles marked the beginning of a slow but permanent decline. Guillaume the Metayer (1763–1798), known as Rochambeau, the well known counter-revolutionary chief of Mayenne, was shot there on Thermidor 8, year VI.
However, it was the arrival of the railway in the 19th century which saved the city by making it an important nodal point. The main railway station is known as Tours-Saint-Pierre-des-Corps. At that time, Tours was expanding towards the south into a district known as the Prébendes. The importance of the city as a centre of communications contributed to its revival and, as the 20th century progressed, Tours became a dynamic conurbation, economically oriented towards the service sector.
First World WarEdit
The city was greatly affected by the First World War. A force of 25,000 American soldiers arrived in 1917, setting up textile factories for the manufacture of uniforms, repair shops for military equipment, munitions dumps, an army post office and an American military hospital at Augustins. Thus Tours became a garrison town with a resident general staff. The American presence is remembered today by the Woodrow Wilson bridge over the Loire, which was officially opened in July 1918 and bears the name of the man who was President of the USA from 1913 to 1921. Three American air force squadrons, including the 492nd, were based at the Parçay-Meslay airfield, their personnel playing an active part in the life of the city. Americans paraded at funerals and award ceremonies for the Croix de Guerre; they also took part in festivals and their YMCA organised shows for the troops. Some men married women from Tours.
In 1920, the city was host to the Congress of Tours, which saw the creation of the French Communist Party.
Second World WarEdit
Tours was also marked by the Second World War. In 1940 the city suffered massive destruction, and for four years it was a city of military camps and fortifications. From 10 to 13 June 1940, Tours was the temporary seat of the French government before its move to Bordeaux. German incendiary bombs caused a huge fire which blazed out of control from 20 to 22 June and destroyed part of the city centre. Some architectural masterpieces of the 16th and 17th centuries were lost, as was the monumental entry to the city. The Wilson Bridge (known locally as the 'stone bridge') carried a water main which supplied the city; the bridge was dynamited to slow the progress of the German advance. With the water main severed and unable to extinguish the inferno, the inhabitants had no option but to flee to safety. More heavy air raids by Allied forces devastated the area around the railway station in 1944, causing several hundred deaths.
A plan for the rebuilding of the downtown area drawn up by the local architect Camille Lefèvre was adopted even before the end of the war. The plan was for 20 small quadrangular blocks of housing to be arranged around the main road (la rue Nationale), which was widened. This regular layout attempted to echo, yet simplify, the 18th-century architecture. Pierre Patout succeeded Lefèvre as the architect in charge of rebuilding in 1945. At one time there was talk of demolishing the southern side of the rue Nationale in order to make it in keeping with the new development.
The recent history of Tours is marked by the personality of Jean Royer, who was Mayor for 36 years and helped to save the old town from demolition by establishing one of the first Conservation Areas. This example of conservation policy would later inspire the Malraux Law for the safeguarding of historic city centres. In the 1970s, Jean Royer also extended the city to the south by diverting the course of the River Cher to create the districts of Rives du Cher and des Fontaines; at the time, this was one of the largest urban developments in Europe. In 1970, the François Rabelais University was founded; this is centred on the bank of the Loire in the downtown area, and not – as it was then the current practice – in a campus in the suburbs. The latter solution was also chosen by the twin university of Orleans. Royer's long term as Mayor was, however, not without controversy, as exemplified by the construction of the practical – but aesthetically unattractive – motorway which runs along the bed of a former canal just 1,500 metres (4,900 feet) from the cathedral. Another bone of contention was the original Vinci Congress Centre by Jean Nouvel. This project incurred debts although it did, at least, make Tours one of France's principal conference centres.
Jean Germain, a member of the Socialist Party, became Mayor in 1995 and made debt reduction his priority. Ten years later, his economic management is regarded as much wiser than that of his predecessor, the financial standing of the city having returned to a stability. However, the achievements of Jean Germain are criticised by the municipal opposition for a lack of ambition: no large building projects comparable with those of Jean Royer have been instituted under his double mandate. This position is disputed by those in power, who affirm their policy of concentrating on the quality of life, as evidenced by urban restoration, the development of public transport and cultural activities.
Tours has an oceanic climate that is very mild for such a northerly latitude. Summers are influenced by its inland position, resulting in frequent days of 25 °C (77 °F) or warmer, whereas winters are kept mild by Atlantic air masses.
|Climate data for Tours (1981–2010 averages)|
|Record high °C (°F)||16.9
|Average high °C (°F)||7.3
|Average low °C (°F)||2.0
|Record low °C (°F)||−17.4
|Average precipitation mm (inches)||66.2
|Average precipitation days||11.9||9.5||9.9||9.6||9.8||7.0||6.9||6.2||7.8||10.5||11.2||11.4||111.6|
|Average snowy days||2.4||2.9||1.8||0.7||0.1||0.0||0.0||0.0||0.0||0.0||1.0||1.7||10.6|
|Average relative humidity (%)||87||84||79||74||77||75||72||73||77||84||87||89||79.8|
|Mean monthly sunshine hours||69.9||90.3||144.2||178.5||205.6||228.0||239.4||236.4||184.7||120.6||76.7||59.2||1,833.3|
|Source #1: Météo France|
|Source #2: Infoclimat.fr (humidity and snowy days, 1961–1990)|
The cathedral of Tours, dedicated to Saint Gatien, its canonized first bishop, was begun about 1170 to replace the cathedral that was burnt out in 1166, during the dispute between Louis VII of France and Henry II of England. The lowermost stages of the western towers (illustration, above left) belong to the 12th century, but the rest of the west end is in the profusely detailed Flamboyant Gothic of the 15th century, completed just as the Renaissance was affecting the patrons who planned the châteaux of Touraine. These towers were being constructed at the same time as, for example, the Château de Chenonceau.
When the 15th-century illuminator Jean Fouquet was set the task of illuminating Josephus's Jewish Antiquities, his depiction of Solomon's Temple was modeled after the nearly-complete cathedral of Tours. The atmosphere of the Gothic cathedral close permeates Honoré de Balzac's dark short novel of jealousy and provincial intrigues, Le Curé de Tours (The Curate of Tours) and his medieval story Maitre Cornélius opens within the cathedral itself.
Other points of interestEdit
Before the French Revolution, the inhabitants of Tours (Les Tourangeaux) were renowned for speaking the "purest" form of French in the entire country. As their accent was that of the court, the pronunciation of Touraine was traditionally regarded as the most standard pronunciation of the French language, until the 19th century when the standard pronunciation of French shifted to that of Parisian bourgeoisie. This is explained by the fact that the court of France was living in Touraine between 1430 and 1530 and concomitantly French, the language of the court, has become the official language of the entire kingdom.
A Council of Tours in 813 decided that priests should preach sermons in vulgar languages because the common people could no longer understand classical Latin. This was the first official recognition of an early French language distinct from Latin, and can be considered as the birth date of French.
Finally the ordinance of Villers-Cotterêts, signed into law by Francis I in 1539, called for the use of French in all legal acts, notarised contracts and official legislation to avoid any linguistic confusion.
The city of Tours has a population of 140,000 and is called "Le Jardin de la France" ("The Garden of France"). There are several parks located within the city. Tours is located between two rivers, the Loire to the north and the Cher to the south. The buildings of Tours are white with blue slate (called Ardoise) roofs; this style is common in the north of France, while most buildings in the south of France have terracotta roofs .
Tours is famous for its original medieval district, called le Vieux Tours. Unique to the Old City are its preserved half-timbered buildings and la Place Plumereau, a square with busy pubs and restaurants, whose open-air tables fill the centre of the square. The Boulevard Beranger crosses the Rue Nationale at the Place Jean-Jaures and is the location of weekly markets and fairs.
Tours is famous for its many bridges crossing the river Loire. One of them, the Pont Wilson, collapsed in 1978, but was rebuilt just like it was before.
Near the cathedral, in the garden of the ancient Palais des Archevêques (now Musée des Beaux-Arts), is a huge cedar tree said to have been planted by Napoleon. The garden also has in an alcove a stuffed elephant, Fritz. He escaped from the Barnum and Bailey circus during their stay in Tours in 1902. He went mad and had to be shot down, but the city paid to honor him, and he was stuffed as a result.
Tours is home to François Rabelais University, the site of one of the most important choral competitions, called Florilège Vocal de Tours International Choir Competition, and is a member city of the European Grand Prix for Choral Singing.
Tours is on one of the main lines of the TGV. It is possible to travel to the west coast at Bordeaux in two and a half hours, to the Mediterranean coast via Avignon and from there to Spain and Barcelona, or to Lyon, Strasbourg and Lille. It takes less than one hour by train from Tours to Paris by TGV and one hour and a half to Charles de Gaulle airport. Tours has two main stations: the central station Gare de Tours, and Gare de Saint-Pierre-des-Corps, just outside the centre, the station used by trains that do not terminate in Tours.
Tours has a tram system, which started service at the end of August 2013. 21 Citadis trams were ordered from Alstom designed by RCP Design Global. There is also a bus service, the main central stop being Jean Jaures, which is next to the Hôtel de Ville, and rue Nationale, the high street of Tours. The tram and bus networks are operated by Fil Bleu and they share a ticketing system. A second tram line is scheduled for 2025.
Tours does not have a metro rail system.
The volleyball club called "Tours Volleyball" is one of the best Europeans.
Catholics from ToursEdit
Tours is a special place for Catholics who follow the devotion to the Holy Face of Jesus and the adoration of the Blessed Sacrament. It was in Tours in 1843 that a Carmelite nun, Sister Marie of St Peter reported a vision which started the devotion to the Holy Face of Jesus, in reparation for the many insults Christ suffered in His Passion. The Golden Arrow Prayer was first made public by her in Tours.
The Venerable Leo Dupont also known as The Holy Man of Tours lived in Tours at about the same time. In 1849 he started the nightly adoration of the Blessed Sacrament in Tours, from where it spread within France. Upon hearing of Sister Marie of St Peter’s reported visions, he started to burn a vigil lamp continuously before a picture of the Holy Face of Jesus and helped spread the devotion within France. The devotion was eventually approved by Pope Pius XII in 1958 and he formally declared the Feast of the Holy Face of Jesus as Shrove Tuesday (the Tuesday before Ash Wednesday) for all Roman Catholics. The Oratory of the Holy Face on Rue St. Etienne in Tours receives many pilgrims every year.
Tours was the site of the episcopal activity of St. Martin of Tours and has further Christian connotations in that the pivotal Battle of Tours in 732 is often considered the very first decisive victory over the invading Islamic forces, turning the tide against them. The battle also helped lay the foundations of the Carolingian Empire
Notable natives and residentsEdit
- Berengarius of Tours (999–1088), theologian
- Bernard of Tours (fl. 1147, d. before 1178), philosopher and poet
- Jean Fouquet (1420–1481), painter
- Abraham Bosse (1604–1676), artist
- Louise de la Vallière (1644–1710), courtesan
- Philippe Néricault Destouches (1680–1754), dramatist
- Jean Baudrais (1749–1832), 18th-century French playwright
- Nicolas Heurteloup (1750–1812), surgeon
- Philippe Musard (1792-1859), conductor and composer
- Gabriel Lamé (1795–1870), mathematician
- Honoré de Balzac (1799–1850), novelist
- André-Michel Guerry (1802-1866), lawyer and statistician
- Théophile Archambault (1806-1863), psychiatrist
- Ernest Goüin (1815–1885), French engineer
- Marie of St Peter (1816–1848), mystic carmelite nun
- Philippe de Trobriand (1816–1897), author, American military officer
- Émile Delahaye (1843–1905), automobile pioneer
- Georges Courteline (1858–1929), dramatist and novelist
- Emile B. De Sauzé (1878-1964), language educator
- Daniel Mendaille (1885–1963), stage and film actor
- Paul Nizan (1905–1940), novelist and philosopher
- Yves Bonnefoy (1923–2016), poet
- Paul Guers (1927) (Paul Jacques Dutron), actor
- Philippe Lacoue-Labarthe (1940–2007), philosopher, literary critic and translator
- Jean-Louis Bruguière (born 1943), top French investigating judge
- Jean Chalopin (born 1950), television and movie producer, director and writer
- Jacques Villeret (1951–2005), actor
- Dominique Bussereau (born 1952), politician
- Yves Ker Ambrun (born 1954), known as YKA, cartoonist
- Laurent Petitguillaume (born 1960), radio and television host
- Luc Delahaye (born 1962), photographer
- Stéphane Audeguy (born 1964), writer, literary critic and teacher
- Pascal Hervé (born 1964), cyclist
- Laurent Mauvignier (born 1967), writer
- Xavier Gravelaine (born 1968), football player
- Nâdiya (born 1973), singer
- Harry Roselmack (born 1973), television presenter
- Delphine Bardin (born 1974), classical pianist
- Ludovic Roy (born 1977), footballer
- Zaz (born 1980), singer
- Luc Ducalcon (born 1984), rugby union player
- Biga Ranx (born 1988), reggae singer, producer and writer
Twin towns - sister citiesEdit
Tours is twinned with:
Looking towards central Tours from the north bank of the Loire, adjacent to the Pont Mirabeau.
- Bishop of Tours
- Tours FC – a soccer club based in the town
- The Turonian Age in the Cretaceous Period of geological time is named for the city of Tours
- Listing of the work of Jean Antoine Injalbert-French sculptor Sculptor of Tours railway station statues also those on Tours Hotel de Ville.
- Marcel Gaumont. Sculptor of war memorial
- "Populations légales 2016". INSEE. Retrieved 25 April 2019.
- "Données climatiques de la station de Tours" (in French). Meteo France. Retrieved 31 December 2015.
- "Climat Centre-Val de Loire" (in French). Meteo France. Retrieved 31 December 2015.
- "Normes et records 1961-1990: Tours - St Symphorien (37) - altitude 112m" (in French). Infoclimat. Retrieved 31 December 2015.
- "Tours, France". Archived from the original on 22 July 2012. Retrieved 3 August 2012.
- Montvalon, Jean-Baptiste de. "Pourquoi les accents régionaux résistent en France". Le Monde.fr (in French). ISSN 1950-6244. Retrieved 19 July 2015.
- "Tours selects Citadis and APS". Railway Gazette International. London. 14 September 2010. Retrieved 15 September 2010.
- Dorothy Scallan. "The Holy Man of Tours." (1990) ISBN 0-89555-390-2
- Davis, Paul K. (1999) "100 Decisive Battles From Ancient Times to the Present" ISBN 0-19-514366-3
- "Jumelages et partenariats". tours.fr (in French). Tours. Retrieved 16 November 2019.
- Practical Tours, the comprehensive guide to living in Tours, Tours: Stéphanie Ouvrard, 2013, archived from the original on 19 February 2015, retrieved 24 May 2013 | <urn:uuid:74ca0122-f99e-4f25-872e-4bc1e28776a0> | CC-MAIN-2019-47 | https://en.m.wikipedia.org/wiki/Tours,_France | s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496670729.90/warc/CC-MAIN-20191121023525-20191121051525-00418.warc.gz | en | 0.937741 | 5,488 | 2.84375 | 3 |
Hamilton and Webster (2009) state that “globalization generates opportunities for business to enter new markets, take advantage of differences in the costs and quality of labor and other resources, gain economies of scale and get access to raw materials”. Business globalization, or in other words international business, refers to the increased of mobility of services, goods, capital and technology throughout the world. The goods and services created in one location can now be found throughout the world.
According to Harrison (2010) globalization is characterized by several aspects: he development of international trades and the creation of the global marketplace lead to the increase over time of international trade and service. Then, production and investment flows are globally organized. Thus, costs are lower and specialist advantages are in different geographical location. Thanks to the technological development, people around the world are more connected and migration is a major feature of the globalization. This later offers them the possibility of moving across national borders.
With this migration and the speed with which communication has improved, communication and cultural flows are important. Although globalization can be a real opportunities for companies, it could be also a threat for their development if managers in charge of this expansion have not gathered and analyses all the information they need about the marketplace they are considering. Indeed, they have to be sure they target areas where there is a market opportunity. They also need to be aware of several factors like laws, culture, society habits and other ones that might conflict the business.
Over the past 40 years, the Careful Group has grown to become the second world’s largest and the Rupee’s largest retailer of groceries and consumer odds. During these years, Careful expanded internationally. A pioneer in countries such as Brazil (1975) and China (1 995), Careful operates more than 9,500 stores, which are present in 32 countries across 3 geographic zones: Latin America, Europe and Asia, through 4 main formats: hypermarket, supermarket, cash and carry and convenience stores. (www. Careful. Com). However, Carouser’s situation is not as good as we can think.
Since few years, Careful has been hit by falling profit and drooping sales caused by the uncertain economic environment and strategic missteps. Net profits fell by 4% last year and net sales rose only by 1% to 81. Ban euros. In the first nine months of the year, Carouser’s total like-for-like sales were down 0. 2% (Latin America is the only region to report an increase in like-for-like sales of 8. 6%). (Carouser’s press release, 2012). In recent years, Careful has abandoned several countries including Japan, Mexico, Russia and Thailand.
These decisions are “in line with Carouser’s new strategy of focusing on geographies and countries in which it holds or aims to develop a leading position”. (Carouser’s press release, 201 2) Mr. Plats took over in May with a rife “to reverse years Of underperformed in Carouser’s European markets” (Vidalia, D. 2012). “He wants it to pull out of encore markets, cut costs at home in France and give store managers more autonomy. ” (Misleader, N. 2012). Thus, he has started to refocus on core markets, notably in Europe, Brazil and China.
According to Black (201 2), Shore Capital analyst, “Careful has been run like a dysfunctional football team, where the owners and the management haven’t seen eye-to-eye, and that impacts the players on the ground and that’s exactly what’s happened at Careful. It’s important that he represents a period of sustained stability and control of the business,” To sum up, Carouser’s strategic missteps took its toll but like the retailing market, Careful has also been hit by the uncertain economic environment.
This paper is going to constitute a fictive internal management report for Careful, analyzing the effects of the external global environment on Carouser’s business process and strategies. TO do so, a first chapter will provide an outlook of the retailing market allowing understanding what the current situation of this market is, especially in a global context. Then the second chapter will focus on Carouser’s analyses, to understand to what extent Careful can use its strengths and improve its weaknesses but also take advantage of opportunities and resist threats.
Finally, the last chapter will investigate the key challenges and implications of Careful by carrying out a PEST analysis. I. Globalization and Retailing market According to Marcel Correctness and Rajah Ala (2012) retailers want expend globally for several reasons. “Common ones include a quest for greater economies of scale and scope, a need to diversify risks, a desire to attract rest talent and create new opportunities for existing leaders, and a need to make up for constraints imposed by regulatory agencies when a retailer becomes too big for its home market.
Market overview Between 2007 and 2011, the global retailing industry experienced a moderate compound growth of 4. 3%. (Marketing, 2012). Food and grocery is the largest segment, accounting for 61% of the industry total value, with total revenues of $6,678. 9 in 2011. For comparison, the apparel, luxury goods and accessories segment accounts only for 15. 6% of the industry’ generated revenues of $1708. 7 billion in 201 1 . (Marketing, 2012). Economists forecast an acceleration of this performance with an increase of 26. 2% since 2011.
This expected growth would drive the industry to a value of 313,815. 8 billion by the end of 2016. Surprisingly, despite the economic crisis, 2012 was the fastest growing year for retailing since 1999. And this is due to emerging markets that are performing well. For example, China has seen a 10% growth in its retailing market in 2012. The other factor that contributed to this growth is the growth of Internet retailing. This fastest growing channel grew 20% in 201 2 and now account for 4% of all retail sales globally. This trend is expected to grow and rise 6% by 2017.
However, this global growth has to be moderate since there is some inflation to be taken into account. (Late, D. 2012). The other trend that is important to notice is the Middle East and Africans growths. Indeed, these markets surpassed Latin America in 2012, which was one of the markets driving the global growth. Increasing by 14%, they became the fastest growing region. (Remuneration, 2012). According to Daniel Late (2012), unlike in Europe where non-grocery sales suffered of the economic crisis, in the Middle East and Africa, non-grocery sales outperformed grocery sales.
Forces driving competition According to Porter (1980), there are “5 forces” driving the competition on a market: buyer power, Supplier power, new entrants, threats of substitutes, and degree of rivalry. This model provides a useful framework for analyzing the competitive environment of an industry and thus identify and analyses the main threats that can face Careful. ; Buyer power In a market, there are consumers who have wants and needs, and suppliers who supply their demands. (Heathery and Otter, 2011). Since there are a large amount of customers within the global retailing industry, the consumer power is low.
However, because of the economic downturn, consumers are cost conscious and seek value for money, which puts pressure on retailers to deliver products, especially brand product at low cost. In France, the French retailer Lecher has understood this trend and has developed a strategy to make low price its competitive advantage. Thus, thanks to low prices and an aggressive communication campaign, sales in 2012 were up 5%. According to a Nielsen Panel, Lecher prices are between 4 and 5% below Carouser’s prices. (Nicolas, 2012). ; Supplier power In the retailing industry, there are a lot of suppliers.
Thanks to this high imputation, retailers can easily use it to have lower prices. Indeed, leading incumbents find their product from various suppliers so they can choose the more interesting according to the price and quality. Furthermore, switching costs are usually low. The leading position and global presence of retailers like Wall-mart or Careful are also strength to negotiate with suppliers. The difference of inputs and the importance of quality/cost are the two main supplier power drivers. ; New entrants Since there are several leading multinational retailers in the market, it may be difficult and may need significant costs to entry.
Such companies have developed strong brand awareness through advertising and marketing campaigns. To succeed their entry, new retailers will have to study consumers so that they can provide efficient products but also communicate efficiently. For that, new competitors need sufficient funding to launch their activity and survive in this high competitive context. Careful is a perfect illustration. Indeed, Careful failed against an aggressive development of it competitors in Singapore because this later did not succeed to localize and Offer products expected by the local population (Sharing Ely, 2012).
Threats of substitutes There are no real substitutes to the retailing industry. ; Degree of rivalry The retailing market is heavily fragmented, with a large number of players present and is lead by several main industry groups like Wall-Mart, Careful, Metro and Tests. Moreover, with the development of Internet retailing, it is easier for new companies to enter into the market. The similarity of players due to a lack of differentiated products reduces their competitive advantage and thus, drives the degree of rivalry. That is why Lecher chose to capitalize on regional product.
This decision was a key of success and participated to he 5% growth in sales occurred in 201 1 (Mortar, 2012). Careful should try to follow the same scheme, by accentuating its implication in the local economies and thus, collaborating more with the local suppliers. This strategy can also enable Careful to meet more effectively the demand of local consumers. . Careful SOOT According to Heathery and Otter (2011 “the capacity of a business to take advantage of opportunities and resist threats will depend on its internal strengths and weaknesses.
Thus it is essential to undertake an internal analyses to understand to what extent Careful can take advantage of opportunities and resist threats, and in the next chapter, understand by using a PEST analysis how Careful is influenced by the external environment. Strengths ; Leading retailer in Europe and second largest in the world present in mature and emerging markets Thanks to its notoriety and its size, Careful has a high bargaining power. Indeed, bigger a company is, more economies of scale and business scalability will be high.
Moreover, the high growth rates offered by emerging countries enables Careful to boost revenues and profits and thus to make up for the sluggish situation in its home market. This illustrates perfectly the other advantage to expand worldwide: diversify risks. Presence in several economies enables to diversify the revenues and reduced vulnerability to a single economy. ; Multi-format and convergence of brands strategies Careful operates through 4 different formats. While the hypermarket format offers a wide selection of merchandises, the supermarket and convenience formats target the daily or monthly purchases and are more accessible.
F-Rutherford, small retailers and other businesses purchase in cash and carry stores. In addition, Careful is also present on-line, selling its reduce on e-commerce websites. Thus, the different formats enable Careful to adapt its stores to the market, to the consumers’ needs. According to the location and the consumers’ habits in this location, the strategic format varies. Moreover, it also enables to suit the merchandise selection and price preferences of varying consumers. In other words, Careful can target different customer segments. In order to beneficial of its brand reputation, the group has initiated a single brand strategy.
Furthermore, this strategy increases the group’s market visibility and facilitates quick penetration in new markets. 30 years of experience in retailing private label products Because of the economic slowdown, consumers prefer generic and private label products instead of expensive brands. With its first unbranded product launched in 1976 and Careful brand-name products in 1 985, Careful has an important experience in this field meeting the consumers demand, especially in Europe, which is the key growing markets for private label products.
Denominator, 2011 Weaknesses ; Continuous decline in profit margins and slow revenue growth in France More than 70% of the firm’s sales are in Europe but this is its slowest growth area. According to Correctness and Ala ‘The stronger the retailer’s market position at home, the better its chances of sustaining overseas investments. ” (Quoted in Retail Doesn’t Cross Borders, 2012). Careful lost market share and its profits fell by 40% in the first half of 201 1. ; Careful has been slow in moving into internet retailing Retailers have developed a new way for consumers to purchase.
Nowadays, it is possible De order groceries online and collect them at drives stores which at pick up point. Careful opened Its first drive in 2011 and had only 30 drives at the end of the years. In imprison, Lecher had 144 drives in 2011 and expected 250 units by the end of the year in 2012. (Nicolas, 2012). Opportunities ; Emerging market Careful being present in emerging markets like Brazil, Argentina or India, takes advantage of these growing market to ameliorate its revenues and margins. According too Careful s press release, in 2012, sales in Latin America up 5,2% and 12,3% in Asia. Emergence of internet retailing in Latin America In Latin America, Internet retailing is almost three times smaller than direct sales.
However, this market has almost tripled in size over the last five years ND it is expected to keep growing. Brazil, the biggest Internet retail market, accounts for 70% of all regional sales. (Late, D. 201 2) ; Retail reform in India Mufti-national retailers are now able to buy up to a 51 % stake in Indian’s multi- brand retailers. This decision allows these chains to sell directly to Indian consumers. (BBC, 2012). However, some conditions are imposed on groups wanting to invest in India.
For example, “companies will have to invest at least $mm (Meme), open outlets only in towns with a population of more than one million and source at least 30% of produce from India В» (BBC, 2012). The retreats ; High competition The competition in Carousers home market as well as in the other regions is high. Not only Careful competes with multinational retailers but also with local retailers dominating the local market, more adapted to the consumers needs. In France, Careful has to face the significant growth of Lecher which sales grew 5% in 2011.
In China, Wall-Mart plans a vast expansion of its stores. The Times reports the US retail group В« would open 100 new stores in China over the next three years, adding to the 370 stores it already owns -? the highest number of any of the foreign supermarkets. The new stores will add 18,000 jobs to its 100,000-strong workforce in the world’s biggest food and grocery market. R, ; High exposure to low growth European markets Careful derives a majority of its revenues from France and Western European countries. Due to the debt crisis, the economies of these nations are estimated to record low growth rates.
According to a Carouser’s press release, sales down 2,2% in 201 2, “impacted by continued pressure on consumption in Southern Europe”. ; Fiscal cliff The other threat that Careful and the economy sector can worry about is the “fiscal Cliff’. At the end of the year, the terms of the Budget Control Act of 201 1 in the USA are scheduled to go into effect. This means a 2% tax increase for most workers, and to several tax breaks for business and charges in alternative minimum tax, tax increase for higher income and spending cuts in more than 1 000 government programs.
The positive aspect is that this will enable to reduce the US budget deficit by estimated $ 560 billions but on the other hand, it will cut the gross domestic product by 4% points and cause an economic slowdown. Unemployed would rise 1% point and a loss of 2 lions jobs. (C. Larger, 2012). Many analysts say that would send U. S. Economy into a recession, if not a depression. (CNN, 2012). Even if Careful is not present in the USA, the Fiscal Cliff threatens the global economy. But a disaster can be avoided by the Congress action. II.
External business environment: Key challenges and implications Political and legal Nowadays, government and business are interdependent (Harrison, 2008). That means that they are linked through several relationships. For example, government determines the legal framework, manages the macroeconomic environment etc. Furthermore, government also need business. Indeed, businesses are a source of tax revenues and since the private sector is the dominant element within capitalist economies, these private business investments drive the economic growth and prosperity.
However, with the globalization, governments have now to deal with multinational companies and multinational companies have to deal with different governments, so with different laws and rules. Government decisions are key factor in the international expansion of companies and can be opportunities to take advantage of. For example, the Indian government decided recently to open its retail sector to global supermarket chains. Thus, these chains are allowed to sell directly to Indian consumers by buying up to a 51 % stake in Indian’s multi-brand retailers. BBC, 2012). According to the 201 2 global retail Development Index (Takeaway, 2012), India is ranked 5 and thus is an interesting market to consider.
Indeed, “India market with accelerated retail growth of 15 to 20% expected over the next five years. Growth is supported by strong macroeconomic conditions, including a 6 to 7% rise in GAP, higher exposable incomes, and rapid arbitration. Yet, while the overall retail market contributes to 14% of Indian’s GAP, organized retail penetration remains low, at 5 to 6%, indicating room for growth. So, it would be an interesting market for Carouser’s supermarkets, all the more Careful is already established in the country through cash and carry stores. However, groups wanting to invest in India should be also aware of the conditions imposed by India. For example, В« companies will have to invest at least $ mm (Meme), open outlets only in towns with a population of more than one million ND source at least 30% of produce from India, according to reports. В» (BBC, 2012). Moreover, since countries are linked through international trade, some political decision can influence these trades.
As Harrison States “political and economic instability is a major cause of risk in the external environment, whether it affects the international environments as a whole or the environment in a particular country”. (2010, quoted IPPP). The case of Italy is a good illustration. Reacting to the news that Prime Minister Mario Mont plans to resign, Italian stockers have fallen of Local banks as well as European Banks had been hardly hit by this news: Germany’s Commemorate fell 2. 2% and France’s BAN Paris dropped 1. 4%. (BBC, 2012). So this shows that political and economical environments are strongly linked.
Economic and Financial The economic and financial environment comprises different factors like the rate of economic growth, exchanges rates, inflation, and so on, which influence business. The rate of economic growth for example enables to indicate the speed at which the total level of demand for goods and services is changing. (Hamilton and Webster, 2009). According to specialists, the world economy will grow at an annual rate of 3% up to 2030 and 4,3% in developing countries. (World Bank, 2012). Goldman Sacks (2003) adds that China and India are predicted to be the world’s two biggest economies in 2050.
As We could understand thanks to the previous numbers, developing economies, in particular BRICK economies grow quickly and can therefore be attractive for business. But according to PWS, other countries have to be considered. PWS estimate that the E (BRICK economies, Mexico, Indonesia, Turkey) will be 50% bigger than the GO (USA, Japan, Germany, Uk, Italy, France, Canada) by 2020. PWS, http://www. PWS. Com) However, before thinking to enter a new market, Careful should be aware of national barriers to trade like tariffs, imposed quotas and non-tariffs barrier. (Morrison, 2009).
Tariffs protect local industry and provide government revenue. Imposed quotas limit imports to a specific quantity of value. Thus, in this case, it is better to consider sourcing locally using local suppliers. Moreover, institutions or agreements like the General Agreement on tariffs and Trade, The World Bank and The International Monetary fund can have a dramatic influence on retailers. They help manage, regulate, and police the global marketplace and thus provide the stability in the trading environment. Nowadays, economic power is hold by these supranational organizations.
Moreover, through globalization, barriers between nations are reduced. Thereby, nations are more and more interdependent. Thus, globalization carriers dangers because depending on foreign suppliers, business are vulnerable to events in foreign economies and market outside their control. (Hamilton and Webster, 2009). This explains the current recession in Japan. This later depends strongly on its exports and yet, robbers in other markets like the slowdown in the US and Rezone and the anti-Japan protest in China has hurt demand for exports.
Social, cultural, and Environmental Although globalization can have some beneficial aspects, it also brings a number of problems of a social nature. Indeed, globalization increases the gap between rich and poor. On one hand the income of the rich increase as a faster rate than the income of the poor. On the other hand, all economies do not participates in the international trades. (Harrison 2010). However, business is not only about making profits. “The purpose of business activity is o provide goods and services to satisfy people’s want and needs.
There is therefore a social purpose behind business activity. ” (Harrison, 2010, Pl 80). Held (1 999, quoted in Hamilton and Webster, 2009, pep) added that globalization can be seen as “the widening, deepening and speeding up of worldwide interconnectedness in all expects of contemporary social life, from the cultural to the criminal, the financial to the spiritual Thus globalization is not only an economic phenomenon. It involved also many other cultural and social dimensions. Nowadays, one of the key challenges facing by businesses s the rise of CARS concerns.
Consumers are conscious of the social and environmental impacts that are caused by businesses. Thereby, associations to protect consumers, employees and environment multiply and businesses have to prove their commitment. Thus Careful has implemented environmental and social policies in all countries. Careful offers more environment-friendly product and sourcing, and tries to reduce the impact of its stores and its logistic impact on the environment. Careful also carries out its social and societal responsibilities through different actions. (www. Refocus. Com).
Amongst them, the integration in the local economy is also interesting to face cultural challenges. Through a local integration, Careful benefits from local products needed by local consumers and provided by local suppliers. Indeed, Careful has to deal with customers but also employees from different cultures. This difference can be an obstacle if not well considered. For example, Careful did not succeeded in providing relevant product to its Japanese consumers. Thus, its main local concurrent knew better how to meet the local consumes demand and could attract ore consumers, forcing Careful to leave the market. | <urn:uuid:39bf4598-8408-4957-984a-0acbeecd5c61> | CC-MAIN-2019-47 | https://paperap.com/paper-on-global-business-environment-case-study-carrefour/ | s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496667177.24/warc/CC-MAIN-20191113090217-20191113114217-00378.warc.gz | en | 0.94985 | 4,962 | 3.09375 | 3 |
ROGERS, ROBERT (early in his career he may have signed Rodgers), army officer and author; b. 8 Nov. 1731 (n.s.) at Methuen, Massachusetts, son of James and Mary Rogers; m. 30 June 1761 Elizabeth Browne at Portsmouth, New Hampshire; d. 18 May 1795 in London, England.
While Robert Rogers was quite young his family moved to the Great Meadow district of New Hampshire, near present Concord, and he grew up on a frontier of settlement where there was constant contact with Indians and which was exposed to raids in time of war. He got his education in village schools; somewhere he learned to write English which was direct and effective, if ill spelled. When still a boy he saw service, but no action, in the New Hampshire militia during the War of the Austrian Succession. He says in his Journals that from 1743 to 1755 his pursuits (which he does not specify) made him acquainted with both the British and the French colonies. It is interesting that he could speak French. In 1754 he became involved with a gang of counterfeiters; he was indicted but the case never came to trial.
In 1755 his military career proper began. He recruited men for the New England force being raised to serve under John Winslow, but when a New Hampshire regiment was authorized he took them into it, and was appointed captain and given command of a company. The regiment was sent to the upper Hudson and came under Major-General William Johnson. Rogers was recommended to Johnson as a good man for scouting duty, and he carried out a series of reconnaissances with small parties against the French in the area of forts Saint-Frédéric (near Crown Point, N.Y.) and Carillon (Ticonderoga). When his regiment was disbanded in the autumn he remained on duty, and through the bitter winter of 1755–56 he continued to lead scouting operations. In March 1756 William Shirley, acting commander-in-chief, instructed him to raise a company of rangers for scouting and intelligence duties in the Lake Champlain region. Rogers did not invent this type of unit (a ranger company under John Gorham* was serving in Nova Scotia as early as 1744) but he became particularly identified with the rangers of the army. Three other ranger companies were formed in 1756, one of them commanded by Rogers’ brother Richard (who died the following year).
Robert Rogers won an increasing reputation for daring leadership, though it can be argued that his expeditions sometimes produced misleading information. In January 1757 he set out through the snow to reconnoitre the French forts on Lake Champlain with some 80 men. There was fierce fighting in which both sides lost heavily, Rogers himself being wounded. He was now given authority over all the ranger companies, and in this year he wrote for the army what may be called a manual of forest fighting, which is to be found in his published Journals. In March 1758 another expedition towards Fort Saint-Frédéric, ordered by Colonel William Haviland against Rogers’ advice, resulted in a serious reverse to the rangers. Rogers’ reputation with the British command remained high, however, and as of 6 April 1758 Major-General James Abercromby, now commander-in-chief, gave him a formal commission both as captain of a ranger company and as “Major of the Rangers in his Majesty’s Service.” That summer Rogers with four ranger companies and two companies of Indians took part in the campaign on Lac Saint-Sacrement (Lake George) and Lake Champlain which ended with Abercromby’s disastrous defeat before Fort Carillon. A month later, on 8 August, Rogers with a mixed force some 700 strong fought a fierce little battle near Fort Ann, New York, with a smaller party of Frenchmen and Indians under Joseph Marin de La Malgue and forced it to withdraw.
British doubts of the rangers’ efficiency, and their frequent indiscipline, led in this year to the formation of the 80th Foot (Gage’s Light Infantry), a regular unit intended for bush-fighting. The rangers were nevertheless still considered essential at least for the moment, and Major-General Jeffery Amherst, who became commander-in-chief late in 1758, was as convinced as his predecessors of Rogers’ excellence as a leader of irregulars. Six ranger companies went to Quebec with James Wolfe* in 1759, and six more under Rogers himself formed part of Amherst’s own army advancing by the Lake Champlain route. In September Amherst ordered Rogers to undertake an expedition deep into Canada, to destroy the Abenaki village of Saint-François-de-Sales (Odanak). Even though the inhabitants had been warned of his approach, Rogers surprised and burned the village; he claims to have killed “at least two hundred” Indians, but French accounts make the number much smaller. His force retreated by the Connecticut River, closely pursued and suffering from hunger. Rogers himself with great energy and resolution rafted his way down to the first British settlement to send provisions back to his starving followers. The expedition cost the lives of about 50 of his officers and men. In 1760 Rogers with 600 rangers formed the advance guard of Haviland’s force invading Canada by the Lake Champlain line, and he was present at the capitulation of Montreal.
Immediately after the French surrender, Amherst ordered Rogers to move with two companies of rangers to take over the French posts in the west. He left Montreal on 13 September with his force in whaleboats. Travelling by way of the ruined posts at the sites of Kingston and Toronto (the latter “a proper place for a factory” he reported to Amherst), and visiting Fort Pitt (Pittsburgh, Pa) to obtain the instructions of Brigadier Robert Monckton, who was in command in the west, he reached Detroit, the only fort with a large French garrison, at the end of November. After taking it over from François-Marie Picoté de Belestre he attempted to reach Michilimackinac (Mackinaw City, Mich.) and Fort Saint-Joseph (Niles), where there were small French parties, but was prevented by ice on Lake Huron. He states in his later A concise account of North America (but not in his report written at the time) that during the march west he met Pontiac*, who received him in a friendly manner and “attended” him to Detroit.
With the end of hostilities in North America the ranger companies were disbanded. Rogers was appointed captain of one of the independent companies of regulars that had long been stationed in South Carolina. Subsequently he exchanged this appointment for a similar one in an independent company at New York; but the New York companies were disbanded in 1763 and Rogers went on half pay. When Pontiac’s uprising broke out he joined the force under Captain James Dalyell (Dalzell), Amherst’s aide-de-camp, which was sent to reinforce the beleaguered garrison of Detroit [see Henry Gladwin]. Rogers fought his last Indian fight, with courage and skill worthy of his reputation, in the sortie from Detroit on 31 July 1763.
By 1764 Rogers was in serious financial trouble. He had encountered at least temporary difficulty in obtaining reimbursement for the funds he had spent on his rangers, and the collapse of a trading venture with John Askin* at the time of Pontiac’s uprising worsened his situation. According to Thomas Gage he also lost money gambling. In 1764 he was arrested for debt in New York but soon escaped.
Rogers went to England in 1765 in hope of obtaining support for plans of western exploration and expansion. He petitioned for authority to mount a search for an inland northwest passage, an idea which may possibly have been implanted in his mind by Governor Arthur Dobbs of North Carolina. To enable him to pursue this project he asked for the appointment of commandant at Michilimackinac, and in October 1765 instructions were sent to Gage, now commanding in America, that he was to be given this post. He was also to be given a captain’s commission in the Royal Americans; this it appears he never got.
While in London Rogers published at least two books. One was his Journals, an account of his campaigns which reproduces a good many of his reports and the orders he received, and is a valuable contribution to the history of the Seven Years’ War in America. The other, A concise account of North America, is a sort of historical geography of the continent, brief and lively and profiting by Rogers’ remarkably wide firsthand knowledge. Both are lucid and forceful, rather extraordinary productions from an author with his education. He doubtless got much editorial help from his secretary, Nathaniel Potter, a graduate of the College of New Jersey (Princeton University) whom he had met shortly before leaving America for England; but Sir William Johnson’s description of Rogers in 1767 as “a very illiterate man” was probably malicious exaggeration at best. Both books were very well received by the London critics. A less friendly reception awaited Ponteach; or, the savages of America: a tragedy, a play in blank verse published a few months later. It was anonymous but seems to have been generally attributed to Rogers. John R. Cuneo has plausibly suggested that the opening scenes, depicting white traders and hunters preying on Indians, may well reflect the influence of Rogers, but that it is hard to connect him with the highflown artificial tragedy that follows. Doubtless, in Francis Parkman’s phrase, he “had a share” in composing the play. The Monthly Review: or, Literary Journal rudely called Ponteach “one of the most absurd productions of the kind that we have seen,” and said of the “reputed author”, “in turning bard, and writing a tragedy, he makes just as good a figure as would a Grubstreet rhymester at the head of our Author’s corps of North-American Rangers.” No attempt seems to have been made to produce the play on the stage.
His mission to London having had, on the whole, remarkable success, Rogers returned to North America at the beginning of 1766. He and his wife arrived at Michilimackinac in August, and he lost no time in sending off two exploring parties under Jonathan Carver and James Tute, the latter being specifically instructed to search for the northwest passage. Nothing important came of these efforts.
Both Johnson, who was now superintendent of northern Indians, and Gage evidently disliked and distrusted Rogers; Gage no doubt resented his having gone to the authorities in London over his head. On hearing of Rogers’ appointment Gage wrote to Johnson: “He is wild, vain, of little understanding, and of as little Principle; but withal has a share of Cunning, no Modesty or veracity and sticks at Nothing . . . He deserved Some Notice for his Bravery and readiness on Service and if they had put him on whole Pay. to give him an Income to live upon, they would have done well. But, this employment he is most unfit for, and withal speaks no Indian Language. He made a great deal of money during the War, which was squandered in Vanity and Gaming. and is some Thousands in Debt here [in New York].” Almost immediately Gage received an intercepted letter which could be read as indicating that Rogers might be intriguing with the French. Rogers was certainly ambitious and clearly desired to carve out for himself some sort of semi-independent fiefdom in the west. In 1767 he drafted a plan under which Michilimackinac and its dependencies should be erected into a “Civil Government,” with a governor, lieutenant governor, and a council of 12 members chosen from the principal merchants trading in the region. The governor and council would report in all civil and Indian matters direct to the king and the Privy Council in England. This plan was sent to London and Rogers petitioned the Board of Trade for appointment as governor. Such a project was bound to excite still further the hostility of Gage and Johnson, and it got nowhere. Rogers quarrelled with his secretary Potter and the latter reported that his former chief was considering going over to the French if his plan for a separate government was not approved. On the strength of an affidavit by Potter to this effect Gage ordered Rogers arrested and charged with high treason. This was done in December 1767 and in the spring Rogers was taken east in irons. In October 1768 he was tried by court martial at Montreal on charges of “designs . . . of Deserting to the French . . . and stirring up the Indians against His Majesty and His Government”; “holding a correspondence with His Majesty’s Enemies”; and disobedience of orders by spending money on “expensive schemes and projects” and among the Indians. Although these charges were supported by Benjamin Roberts, the former Indian department commissary at Michilimackinac, Rogers was acquitted. It seems likely that he had been guilty of no crime more serious than loose talk. The verdict was approved by the king the following year, though with the note that there had been “great reason to suspect . . . an improper and dangerous Correspondence.” Rogers was not reinstated at Michilimackinac. In the summer of 1769 he went to England seeking redress and payment of various sums which he claimed as due him. He received little satisfaction and spent several periods in debtors’ prison, the longest being in 1772–74. He sued Gage for false imprisonment and other injuries; the suit was later withdrawn and Rogers was granted a major’s half pay. He returned to America in 1775.
The American Revolutionary War was now raging. Rogers, no politician, might have fought on either side, but for him neutrality was unlikely. His British commission made him an object of suspicion to the rebels. He was arrested in Philadelphia but released on giving his parole not to serve against the colonies. In 1776 he sought a Continental commission, but General George Washington distrusted and imprisoned him. He escaped and offered his services to the British headquarters at New York. In August he was appointed to raise and command with the rank of lieutenant-colonel commandant a battalion which seems to have been known at this stage as the Queen’s American Rangers. On 21 October this raw unit was attacked by the Americans near Mamaroneck, New York. A ranger outpost was overrun but Rogers’ main force stood firm and the attackers withdrew. Early in 1777 an inspector general appointed to report on the loyalist units found Rogers’ in poor condition, and he was retired on half pay. The Queen’s Rangers, as they came to be known, later achieved distinction under regular commanders, notably John Graves Simcoe*.
Rogers’ military career was not quite over. Returning in 1779 from a visit to England, he was commissioned by General Sir Henry Clinton – who may have been encouraged from London – to raise a unit of two battalions, to be recruited in the American colonies but organized in Canada, and known as the King’s Rangers. The regiment was never completed and never fought. The burden of recruiting it fell largely on Rogers’ brother James, also a ranger officer of the Seven Years’ War. Robert by now was drunken and inefficient, and not above lying about the number of men raised. Governor Frederick Haldimand wrote of him, “he at once disgraces the Service, & renders himself incapable of being Depended upon.” He was in Quebec in 1779–80. At the end of 1780, while on his way to New York by sea, he was captured by an American privateer and spent a long period in prison. By 1782 he was back behind the British lines. At the end of the war he went to England, perhaps leaving New York with the British force at the final evacuation in 1783.
Rogers’ last years were spent in England in debt, poverty, and drunkenness. Part of the time he was again in debtors’ prison. He lived on his half pay, which was often partly assigned to creditors. He died in London “at his apartments in the Borough [Southwark],” evidently intestate; letters of administration of his estate, estimated at only £100, were granted to John Walker, said to be his landlord. His wife had divorced him by act of the New Hampshire legislature in 1778, asserting that when she last saw him a couple of years before “he was in a situation which, as her peace and safety forced her then to shun & fly from him so Decency now forbids her to say more upon so indelicate a subject.” Their only child, a son named Arthur, stayed with his mother.
The extraordinary career that thus ended in sordid obscurity had reached its climax in the Seven Years’ War, before Rogers was 30. American legend has somewhat exaggerated his exploits; for he often met reverses as well as successes in his combats with the French and their Indian allies in the Lake Champlain country. But he was a man of great energy and courage (and, it must be said, of considerable ruthlessness), who had something of a genius for irregular war. No other American frontiersman succeeded so well in coping with the formidable bush-fighters of New France. That the frontiersman was also the author of successful books suggests a highly unusual combination of qualities. His personality remains enigmatic. Much of the evidence against him comes from those who disliked him; but it is pretty clear that his moral character was far from being on the same level as his abilities. Had it been so, he would have been one of the most remarkable Americans of a remarkable generation.
[Robert Rogers’ published works have all been reissued: Journals of Major Robert Rogers . . . (London, 1765) in an edition by F. B. Hough (Albany, N.Y., 1883) with an appendix of documents concerning Rogers’ later career, in a reprint with an introduction by H. H. Peckham (New York, ), and in a facsimile reprint (Ann Arbor, Mich., ); A concise account of North America . . . (London, 1765) in a reprint (East Ardsley, Eng., and New York, 1966); and the play attributed to Rogers, Ponteach; or, the savages of America: a tragedy (London, 1766), with an introduction and biography of Rogers by Allan Nevins (Chicago, 1914) and in Representative plays by American dramatists, ed. M. J. Moses (3v., New York, 1918-), I, 115–208. Part of the play is printed in Francis Parkman, The conspiracy of Pontiac and the Indian war after the conquest of Canada (2v., Boston, 1910), app.B.
Unpublished mss or transcripts of mss concerning Rogers are located in Clements Library, Thomas Gage papers, American series; Rogers papers; in PAC, MG 18, L4, 2, pkt.7; MG 23, K3; and in PRO, Prob. 6/171, f.160; TS 11/387, 11/1069/4957.
Printed material by or relating to Rogers can be found in The documentary history of the state of New-York . . . , ed. E. B. O’Callaghan (4v., Albany, 1849–51), IV; Gentleman’s Magazine, 1765, 584–85; Johnson papers (Sullivan et al.); “Journal of Robert Rogers the ranger on his expedition for receiving the capitulation of western French posts,” ed. V. H. Paltsits, New York Public Library, Bull., 37 (1933), 261–76; London Magazine, or Gentleman’s Monthly Intelligencer, XXXIV (1765), 630–32, 676–78; XXXV (1766), 22–24; Military affairs in North America, 1748–65 (Pargellis); Monthly Review: or, Literary Journal (London), XXXIV (1766), pt.1, 9–22, 79–80, 242; NYCD (O’Callaghan and Fernow), VII, VIII, X; “Rogers’s Michillimackinac journal,” ed. W. L. Clements, American Antiquarian Soc., Proc. (Worcester, Mass.), new ser., 28 (1918), 224–73; Times (London), 22 May 1795; Treason? at Michilimackinac: the proceedings of a general court martial held at Montreal in October 1768 for the trial of Major Robert Rogers, ed. D. A. Armour (Mackinac Island, Mich., 1967).
The considerable Rogers cult that has been in evidence in the United States during the last generation probably owes a good deal to K. L. Roberts’ popular historical novel, Northwest passage (Garden City, N.Y., 1937; new ed., 2v., 1937). Entries for Rogers are to be found in the DAB and DNB. J. R. Cuneo, Robert Rogers of the rangers (New York, 1959) is an excellent biography based on a wide range of sources but marred by lack of specific documentation. See also: Luca Codignola, Guerra a guerriglia nell’America coloniale: Robert Rogers a la guerra dei sette anni, 1754–1760 (Venice, 1977), which contains a translation into Italian of Rogers’ Journals; H. M. Jackson, Rogers’ rangers, a history ([Ottawa], 1953); S. McC. Pargellis, Lord Loudoun, and “The four independent companies of New York,” Essays in colonial history presented to Charles McLean Andrews by his students (New Haven, Conn., and London, 1931; repr. Freeport, N.Y., 1966), 96–123; Francis Parkman, Montcalm and Wolfe (2v., Boston, 1884; repr. New York, 1962); J. R. Cuneo, “The early days of the Queen’s Rangers, August 1776–February 1777,” Military Affairs (Washington), XXII (1958), 65–74; Walter Rogers, “Rogers, ranger and loyalist,” RSC Trans., 2nd ser., VI (1900), sect.ii, 49–59. c.p.s.] | <urn:uuid:9d32d0f4-48e2-4457-911b-a3dfeccc8f8f> | CC-MAIN-2019-47 | http://www.biographi.ca/en/bio.php?BioId=36271 | s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496670559.66/warc/CC-MAIN-20191120134617-20191120162617-00218.warc.gz | en | 0.979362 | 4,782 | 2.703125 | 3 |
A Comprehensive Guide to Overcoming Your Low Self-Esteem
Table of Contents
- Defining Self-Esteem
- Video: Understanding and Fixing
- Why Do People Feel Low Self-Esteem?
- How Self-Esteem Impacts Mental Health
- Low Self-Esteem and Addiction
- Impacts on Quality of Life
- Video: A User’s Guide
- How to Improve Self-Esteem
- Helpful Resources
- Self-Esteem Impacts Your Well-Being
Self-esteem is how a person feels about themselves. It is their belief in their own ability to do things and to add value to others' lives.
Positive or healthy self-esteem usually has good effects. Doctors think people with good self-esteem do better in school, have happier relationships, and raise well-adjusted kids.1 Low or poor self-esteem can have the opposite effect: it can harm a person's mental health and potentially even lead to drug and alcohol abuse.
Self-esteem plays an important role not only in how a person feels about themselves, but in how they go through life. The American sociologist Neil Smelser wrote this about self-esteem: "many, if not most, of the major problems plaguing society have roots in the low self-esteem of many people who make up society."2
Because low self-esteem often starts early in life, it can be a hard habit to break. A person has to completely re-learn the way they think about themselves and how they act around others. This is challenging but not impossible. And the reward of enhanced self-esteem can be a better quality of life overall.
The term "self-esteem" hasn't been around forever. Doctors first started using it in the late 1800s.3 The psychologist William James is the first known writer to use the words "self-esteem" specifically.4 In his writing, he described a person's self-esteem as a ratio: how successful they are compared with what they hope to achieve. He wrote that the more successful a person was, the more satisfied with their life they would be.
Self-Esteem was Studied Extensively in the 1940s
While James and other experts discussed self-esteem as early as the 1890s, research on the idea didn't really get going until the 1940s. In the 1940s and 1950s, researchers began studying how self-esteem affected things like happiness and success. The results of these studies helped doctors link good self-esteem with better outcomes in therapy, and they began looking for ways to help their patients build healthier self-esteem.
Self-Esteem Linked to Other Conditions
Two doctors who did a lot of important research on self-esteem were Maslow and Raimy.5 They studied topics such as the relationship between self-esteem and schizophrenia, and between self-esteem and happiness in married people. The results of these studies helped doctors establish that people with low self-esteem had more problems in life and were less likely to succeed in therapy.6
How Demographics Affect Self-Esteem
In 1965, a researcher named Morris Rosenberg released a major study on self-esteem and some of the demographic factors that play a role in it.7
He also found that anxiety and social isolation played a role in low self-esteem.8
Increase in Research During the 1970s
By the 1970s, researchers had hundreds of studies about self-esteem to review. While researchers were talking a lot about self-esteem, the term hadn't really reached public awareness yet. But it would, thanks to the growing role of "self-help" literature. Self-help literature is a category of books or articles meant to help people live better lives. People may read self-help books to improve their love life, lose weight, or become a better parent. These books helped make "self-esteem" a familiar term.
Video: Understanding & Fixing Low Self-Esteem
Symptoms of Low Self-Esteem
Decades of research have helped doctors understand more about how to recognize low self-esteem.9 Some behaviors a person may have when they have low self-esteem include:
A person won’t start a job or task because they know they’ll fail at it.
A person makes excuses for everything that didn’t go their way.
A person always makes comments like “I never do anything right” or “It’s my fault.”
A person seems to worry too much about what other people think of them.
A person always seems to have physical symptoms that keep them from doing things, such as headaches, body aches, or problems sleeping.
A person quits activities or jobs almost immediately after they start them because they get frustrated.
A person cheats or lies to win because they don’t think they can do it on their own.
A person is controlling or bossy to cover up the fact they don’t feel good about themselves.
A person withdraws socially.
A person can’t take criticism or praise.
A person is either overly helpful or refuses to help.
Everybody has an episode of low self-esteem every once in a while. It’s when this feeling persists and symptoms occur most of the time that low self-esteem can be a problem.
The Most Common Symptom is Feeling Worthless
The most common symptom in people with low self-esteem is a feeling of worthlessness. A person with low self-esteem doesn't believe in themselves or their abilities.
Why Do People Feel Low Self-Esteem?
Low self-esteem is often the result of multiple factors. Different things can happen in a person's life that may wear down their self-esteem. Examples include:10
A history of stressful life events. These may include a divorce, the death of a partner, or problems with money.
History of struggles in school that bring down their confidence. Examples could include having problems on standardized tests, having a learning disability, or failing to pass an important test.
An unhappy childhood, where a person’s parents weren’t supportive or were very critical of the person when they were growing up.
Being in an abusive relationship, such as with a parent, caregiver, friend, or partner.
Having chronic health problems, such as pain, heart problems, or physical disability.
Having a history of mental illness, such as depression or anxiety.
Low Self-Esteem Starts Early in Life
Doctors know that for a lot of people, low self-esteem starts in childhood. People may get messages from their family, teachers, siblings, or parents that they can't live up to expectations. Influences from the media, such as images of picture-perfect models, can also make a person feel inferior. For some people, the idea that they aren't good enough or that only perfection will do sticks with them. They feel they can't live up to the expectations they place on themselves or that others place on them.11
Sometimes, low self-esteem also has to do with a person’s personality. Some people find it easier to think and act negatively than others do.
Social Media Affects Self-Esteem
A study published in the journal Psychology of Popular Media Culture found that social networking sites — such as Facebook and Instagram — can have beneficial and negative effects on a person’s self-esteem.12 On the negative side, social media can result in social comparison, where a person compares themselves with other people and feels inferior to them. The positive side is that sometimes social media can help a person to develop goals to improve themselves.
Over time, low self-esteem can take a toll on a person. It makes them afraid to live their life and try new things. According to Chris Williams, a professor of psychosocial psychiatry at the University of Glasgow, avoiding things is not a healthy way to cope with low self-esteem. As he explained in an interview with Britain's National Health Service:
“In the short term, avoiding challenging and difficult situations makes you feel a lot safer. In the longer term, this can backfire because it reinforces your underlying doubts and fears. It teaches you the unhelpful rule that the only way to cope is by avoiding things.”13
Effect of Disability on Self-Esteem
Disabilities, including physical disabilities, may also impair a person's self-esteem. According to an article published in a medical journal from the publisher De Gruyter, in a study of 186 people with physical disabilities, those who were sedentary had lower self-esteem than those who stayed active or engaged in regular physical activity.14 Examples of the physical disabilities studied include limb amputation and cerebral palsy.
Addiction Can Lead to Low Self-Esteem
Sometimes the causes of low self-esteem are a chicken-or-the-egg question. For example, one person may use drugs because they have low self-esteem, while another may develop low self-esteem as a result of their drug use.
These are just some of the examples of why a person may have low self-esteem. There are lots of people that don’t have any of these concerns in their medical history and still have low self-esteem.
Misconceptions About Self-Esteem
Wealth Doesn’t Protect Against Low Self-Esteem
One of the most common misconceptions about self-esteem is that if a person has wealth, a good education, parents in important positions, or a big job title, they must have good self-esteem.15 According to researchers writing in the journal Addiction & Health, these factors don't cause a person to have good self-esteem.
This is important to realize because there are a lot of people out there who seem to "have it all." But inside, they may have feelings of worthlessness and worry. People may not take them as seriously when they say they have low self-esteem because their life seems so good on the outside. The truth is that people who have a lot can struggle too.
You Can’t Just Snap Out of It
Another misconception about low self-esteem is that a person simply can or should “snap out of it”.16 Low self-esteem is often the result of lots of different factors that have added up over time and kept a person feeling low or sad about themselves. Once low self-esteem starts, it gets maintained by certain factors and ways of thinking.
For example, when a person has low self-esteem, they always assume things will turn out badly for them. An example could be trying for a promotion at work. A person with low self-esteem tells themselves if they apply, they’ll only get passed over because that’s what always happens. As a result, a person starts showing unhelpful behaviors. These are behaviors like avoiding the problem or taking excessive precautions, so they aren’t even in consideration for a job change.
Unhelpful behaviors feed unhelpful emotions, such as anxiety or depression. A person may start to think critically of themselves, telling themselves they don't deserve the promotion and can never hope for anything more. This only deepens their low self-esteem and harms their mental health.
How Low Self-Esteem Impacts Your Mental Health
A lot of research studies have linked low self-esteem and mental health problems. And it seems like many of these problems start early in life. People with low self-esteem as children or adolescents develop a lot of anxiety around school, their relationships, their friendships, and more. They start feeling like a failure before they've even had the opportunity to try.17
Three Year Study of Teenagers Links Self-Esteem with Mental Health
An article published in the journal Child and Adolescent Psychiatry and Mental Health studied 201 young people ages 13 to 18 and then checked in with them three years later to find out about their mental health.18 When the researchers first met the young people, they asked them questions about their self-esteem, mental health, and physical health. After three years, the researchers sent them a questionnaire to fill out. Some of the questions asked if the young people agreed with certain sentences, like "I take a positive attitude toward myself" or "I feel I am a valuable person, at least on par with others."19
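Questionnaires like this are usually scored by giving each answer a number on an agreement scale and adding the items together, with negatively worded statements scored in reverse. The short sketch below is only a hypothetical illustration of that kind of scoring: the item list, the 0-to-3 scale, and the example answers are assumptions for demonstration, not the actual instrument or scoring rules the study used.

```python
# Hypothetical illustration of Likert-style self-esteem scoring.
# The item list, the 0-3 scale, and the reverse-scored set are assumptions
# for demonstration only, not the instrument the study actually used.

POSITIVE_ITEMS = [
    "I take a positive attitude toward myself",
    "I feel I am a valuable person, at least on par with others",
]
NEGATIVE_ITEMS = [
    "At times I feel I am no good at all",  # assumed example of a reverse-scored item
]


def score_item(response: int, reverse: bool = False) -> int:
    """Map a 0-3 agreement rating (0 = strongly disagree, 3 = strongly agree)."""
    if not 0 <= response <= 3:
        raise ValueError("responses must be rated 0-3")
    return 3 - response if reverse else response


def total_score(responses: dict) -> int:
    """Sum item scores; a higher total means higher self-reported self-esteem."""
    total = 0
    for item in POSITIVE_ITEMS:
        total += score_item(responses[item])
    for item in NEGATIVE_ITEMS:
        total += score_item(responses[item], reverse=True)
    return total


if __name__ == "__main__":
    answers = {
        "I take a positive attitude toward myself": 1,
        "I feel I am a valuable person, at least on par with others": 2,
        "At times I feel I am no good at all": 3,  # scored as 0 after reversal
    }
    print(total_score(answers))  # prints 3 out of a possible 9 on this toy scale
```

In studies like the ones described here, totals of this kind are what researchers then compare against later measures of anxiety and depression.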
When the researchers analyzed all the answers, they found a strong relationship between self-esteem and both anxiety and depression. The people in the study who had low self-esteem were more likely to have problems with anxiety, depression, and attention.20 The people who had higher self-esteem earlier were less likely to have anxiety and depression.
Doctors know low self-esteem can lead to the development of mental illnesses like depression and anxiety. Depression is when a person doesn’t have a lot of hope. They don’t have a lot of energy and some people say they feel worthless. Anxiety is when a person is afraid or worried about things to an extreme level.
Study of Tweens Links Low Self-Esteem with Depression
Another study of adolescents ages 10 to 12 years old published in the journal Developmental Psychology found low self-esteem was a risk factor for depression.21 The study asked 674 people between the ages of 10 and 12 years old to answer questions about self-esteem, depression, and other topics. They also interviewed the young people’s families about issues like depression in the mom or dad.
When the researchers were done with the survey, they found that at age 10, both boys and girls had similar levels of self-esteem.22 At age 12, girls had better scores in areas such as school competence, honesty, trustworthiness, and relationships with peers. The researchers found that if a child had low self-esteem at age 10, they were more likely to have depression-like symptoms when they were age 12. They also found people who rated themselves as not honest or trustworthy were most likely to have depression.
Theories Why Self-Esteem and Mental Health are Linked
According to the article, doctors have two theories about why a person who has low self-esteem is more likely to have depression.23 The first is called the vulnerability model. This model suggests that low self-esteem is a risk factor that can potentially cause depression. The other is the "scar" model, which states that depression damages self-esteem by leaving a permanent "scar" on a person's overall sense of well-being that's hard to bounce back from. Most researchers tend to think the vulnerability effect is more likely. According to an article in the journal Developmental Psychology, the effects of low self-esteem on depression are twice those of depression on self-esteem.24
The researchers concluded that low self-esteem was likely a trigger for depression.25 They supported the vulnerability model because they found that if a person didn't have a strong sense of self-worth, they were more likely to become depressed. The research also supports the idea that low self-esteem as a young person can have significant effects throughout a person's life, including on their mental health.
Studies on Self-Esteem and Social Media are Just Getting Started
In surveys about attitudes toward social media, some people who regularly used social media reported feelings of inadequacy and poorer self-esteem compared with those who did not.26 These findings were published in the journal Psychology of Popular Media Culture.27
One of the hardest parts about having a mental illness is that it can keep a person's self-esteem down. A person with low self-esteem may feel like other people judge them or won't accept them because they have a problem.28 That makes the person feel even worse. They may be nervous that other people won't accept them or will make them feel like they can't do anything right.
Bad or sad things are likely bound to happen in someone’s life. A person with high self-esteem may be able to handle them better because they believe they can overcome them.29 A person with low self-esteem may have worsening symptoms of depression, anxiety, and other mental illnesses because they don’t see themselves as a person who can move past their problems.
Low Self-Esteem’s Effects on Addiction
According to NAMI, a history of low self-esteem in childhood and early life can make a person more likely to be addicted to drugs later in life.30 There’s a lot of research to back this up too.
Low Self-Esteem Early in Life Can Lead to Addiction
One study from Florida State University found that boys who had low self-esteem at age 11 were more likely to be addicted to drugs by the time they were 20.31 The researchers published their study in the Journal of Child and Adolescent Substance Abuse. One of the researchers, John Taylor, said this about the findings: "Low self-esteem is kind of the spark plug for self-destructive behavior, and drug use is one of these. It's a fundamental need to have a good sense of self. Without it, people may become pathologically unhappy about themselves, and that can lead to some very serious problems."32
In Boys, Low Self-Esteem Leads to Addiction
The researchers found that boys with low self-esteem were 1.6 times more likely to be drug-dependent later in life than other children who didn’t have low self-esteem.33 They also found that people who had tried drugs by age 13 were more likely to be dependent years later than people who hadn’t. For example, 37 percent of the people who had tried drugs by age 13 were addicted to them later in life while only 3 percent of those who hadn’t tried drugs by age 13 were addicted.34
In Girls, Low Self-Esteem Leads to Eating Disorders
The researchers found low self-esteem in girls was more likely to lead to problems like depression and eating disorders rather than drug addiction.35 They thought this helped explain why low self-esteem was more likely to show up as drug problems in boys.
Prison Study Supports the Connection Between Self-Esteem and Addiction
A 2011 study published in the journal Addiction & Health also looked at how self-esteem and addiction are connected.36 The study authors interviewed 200 people in prison for addiction, theft, or prostitution and compared their answers with a sample of 100 people who were not in prison. At the end of the study, the authors found that the higher a person rated their self-esteem, the more likely they were to avoid illegal drugs and narcotics. The people in the survey who had a history of drug abuse, theft, or prostitution had lower self-esteem than the people who did not.
Nearly 100% of People Treated for Mental Illness Have Low Self-Esteem
Another study published in the journal Annals of General Psychiatry studied 957 psychiatric patients, 182 patients who didn't have a mental illness but were physically ill, and 51 control subjects.37 At the study's conclusion, the authors found that virtually every person with a mental health disorder experienced some degree of lowered self-esteem. The study also identified the conditions associated with the lowest self-esteem.
Patients who suffered from drug addiction and had major depression had some of the lowest self-esteem scores among those tested.38 The researchers also found that people who had both a psychiatric disorder and a chronic medical condition (like diabetes or high blood pressure) were more likely to have low self-esteem.39
Self-Reporting Makes it Hard to Get Definite Numbers
While there are some questionnaires that can help doctors figure out if a person probably has low self-esteem, there aren't definite numbers about self-esteem. There's no current estimate like "this many people addicted to drugs have low self-esteem." But doctors do know that self-esteem plays a role in drug addiction. Sometimes, using drugs can cause a person to feel negatively about themselves. They may feel weak or like an outcast because they can't quit using drugs. Others may turn to drugs to escape their feelings of worthlessness. The two conditions can feed into each other in big ways.
How Low Self-esteem Impacts Quality of Life
Low self-esteem can greatly impact a person's quality of life. They don't believe in themselves or in their ability to achieve success in their relationships, work, and life overall. Even when they do achieve something, they think it's due to luck or other forces, not because they deserved or earned it.
Some of the ways that low self-esteem impact a person’s quality of life include the following:
In relationships, a person with low self-esteem often ends up as either the bully or the person being bullied. They may act angry constantly and try to make their partner feel bad. Other people with low self-esteem may feel like they don't deserve to be loved. As a result, they may tolerate bad treatment and misbehavior from their partner.
A person who doesn't believe in themselves often doesn't take care of themselves. They may eat to excess, drink too much alcohol, or use drugs.
A person with low self-esteem isn’t usually resilient. This means they don’t handle hard times or challenges well. They immediately think the situation is hopeless and they won’t get over it.
Negative feelings can lead to mental health concerns. As mentioned above, poor self-esteem can lead to poor mental health. People with poor self-esteem are so critical of themselves they may have persistent feelings of anxiety, depression, guilt, and sadness.
Harmful Behaviors Caused by Low Self-Esteem
In addition to these impacts on quality of life, people with low self-esteem often engage in behaviors that harm their health and well-being. These include greater instances of drug abuse, suicide attempts, or eating disorders. These behaviors damage a person’s health and keep a person’s self-esteem low. Even if a person is using drugs or alcohol as a means to escape their thoughts, the inevitable crash that comes after a person uses drugs or alcohol can bring all the thoughts and feelings of low self-esteem back.
Video: A User’s Guide to Building Self-Esteem
How Can You Improve Your Self-Esteem?
It's hard for a person to change how they feel about themselves overnight. It takes time for self-esteem to build up and improve. Often, a person may need to see a medical professional and keep up positive activities and thinking over a long period. They also often need encouragement from family and friends to keep their self-esteem up.
Ways a person can build their self-esteem include the following:
Internal Ways to Build Self-Esteem
Stop comparing themselves to other people. A person who is always trying to be like others can’t be themselves. And they’ll only get disappointed when they try to be like someone else. If a person senses that they’re comparing themselves, they need to stop and say something positive about themselves. An example could be “I am enough. I accept myself, and I don’t need to be like anyone else.”
Write a list of good qualities about themselves. A person should keep this list close at hand, such as a picture on their cell phone. When they doubt themselves, they should look at it.
Recognize when they’re engaging in negative self-talk. If they find themselves thinking they are bad, ugly, fat, stupid, or any other negative word, they need to stop. Then, they should tell themselves something positive and say it like they mean it. This can be as simple as saying “NO! I AM a good person.”
Move past the past. To move forward, a person can’t keep looking backward. Once they’ve dealt with their past, they need to stop dwelling on things. They can tell themselves, “This is over. I’m moving on, and I don’t need to worry about this anymore.”
External Ways to Build Self-Esteem
Exercise
Exercise is physical activity that increases feel-good chemicals in the brain, which help a person feel better. It's also an activity a person does for themselves and no one else.
Learn to be Direct
A person with low self-esteem needs to share their thoughts and feelings. They don’t need to be afraid of sharing their opinions. Often, the more they share them, the better they will feel.
Find a Positive Activity to Enjoy
Taking up a new hobby can be a way for a person to develop a positive social atmosphere. Some people even volunteer as a way to do something good for their community and meet new people.
Keep a Journal
A person can write in a journal every day to recognize ways they succeeded in building their self-esteem and ways they may have struggled. Over time, they will likely find their successes are greater than their setbacks.
Taking Care of Yourself Helps Improve Self-Esteem
To start to build self-esteem, it’s important a person take care of themselves. This means doing things like eating a healthy diet, exercising, and maybe even meditating to relax.40 When a person makes themselves a priority and tries to improve their health, they start to feel better about themselves. They start to see themselves as a person of value and worth.
There are several theories and ways doctors suggest people boost their self-esteem. One method is called “acceptance and commitment therapy”.41 There are three steps to this approach.
First, a person should think about times when they’ve felt down or struggled with their self-esteem. This could be when a person is in school, sees a certain person, or goes out in public.
Second, a person should use an approach that helps them see their thoughts in a different way. Examples could include saying the words several times over and over, trying to write them with a hand a person doesn't normally write with, or even singing a song with the words. While this approach may seem silly, it helps a person put distance between themselves and their thoughts. It allows a person to see words for what they are: letters that are put together. These words only have meaning if a person listens to them and believes them.
Third, a person should accept their thoughts and notice what those thoughts mean to them. A person doesn't have to act on them or believe them. They just need to recognize them as thoughts that don't have to hold power over them.
Improving Self-Esteem at Work and at School
A person with low self-esteem may commonly struggle at school and in the workplace. This can impair their abilities to achieve a promotion or raise as well as to perform well in school. As a result, a person may feel increasingly frustrated with their life. This can only perpetuate a person’s low self-esteem.
Some of the ways a person can work to improve their self-esteem at school and in the workplace include the following:
Celebrate everyday victories. Every day, a person should reflect on one to three things they did well during the school or workday.33 This can help a person better identify the ways they are succeeding in their efforts.
Tape a large red stop sign on a notebook or office wall. When a person has a negative thought, they should look at the stop sign and think about how they need to stop negative thinking and move on to positive thoughts.
Create a “praise” board that features positive aspects of a person’s professional and personal careers. This can be even small things, such as a sentence of positive feedback on a project or paper.
With time and regular practice, a person can start to enhance their self-esteem. To really accomplish it, they have to stay aware of their thoughts and feelings.
Resources to Improve Your Low Self-Esteem
If a person needs help for low self-esteem, drug abuse, or mental health concerns (or a combination of all), the first place to start can be their doctor’s office. If they are having thoughts of self-harm or suicide, they should seek emergency medical attention. They can also call the National Suicide Prevention Lifeline, which is open 24 hours a day. The phone number is 1-800-273-8255.
Psychologists and Psychiatrists
A person’s primary care physician can often refer them to a mental health expert (such as a psychiatrist) or a rehabilitation facility where they can find out more information about a pathway to sobriety while also receiving psychiatric support. Often, rehabilitation centers can help a person detox from a substance of abuse while also offering counseling and support groups. Helping to correct a person’s “addictive thinking” can also help them break their addiction to feeling bad about themselves and putting themselves down.
Examples of some of the therapy approaches a doctor can use include the following:
Cognitive Behavioral Therapy
This approach involves having a person recognize and reflect on their thoughts and behaviors, and then learn how to adjust them in a more positive direction.
This therapy approach involves rewarding a person for positive behaviors. Examples could be writing in a journal to reflect on positive behaviors a person has displayed over the course of a day.
Motivational Enhancement Therapy
Motivational enhancement therapy involves a therapist talking with a person to help them find their personal motivations to improve their self-esteem. This approach is one of self-discovery.
If a person already has an established relationship with a support group, community health center, or therapist, they should reach out to one or more of these individuals. They may recommend more frequent counseling sessions or other approaches to helping a person enhance their self-esteem. A person may be able to participate in online chat groups if they don’t have an in-person support group in their community.
Phone and Computer Apps
In addition to these efforts, there are lots of apps and other support tools that can help a person work to build their self-esteem on a daily basis. Examples of these apps (all are free, but some may have extras that can be purchased in-app) include:
Grateful: A Gratitude Journal
This app has small prompts on a daily basis that ask a person to focus on what they are thankful for and to think positively about their life.
This app offers free daily guided meditation that takes 10 minutes a day or less. People can use this app to focus their thinking, enhance positivity, and enhance their belief in themselves.
Stop, Breathe, and Think
This app offers daily mindfulness sessions as well as specific “sessions” or guided content for concerns such as stress relief, reduction of anxiety, and enhancement of focus.
This app sends text messages every weekday that are motivational and uplifting. It also features a gratitude journal where a person can record one thing they are thankful for each day.
Another app focused on enhancing happiness and overall well-being, Happier encourages periodic, 10-second pauses in a person’s daily life to enhance self-esteem and positivity.
The Centre for Clinical Interventions also offers a self-esteem workbook and educational learning modules that are free of charge. A person can access these by going to the following link: cci.health.wa.gov.au/Resources/Looking-After-Yourself/Self-Esteem.
There aren’t any specific medicines that can help enhance a person’s self-esteem. However, a doctor may be able to prescribe medicines to treat conditions that may affect a person’s self-esteem. Examples can include anti-depressants for depression or medicines to reduce anxiety in those who have panic disorders and other concerns.
These resources aren’t a substitute for professional medical help and rehabilitation, if needed. However, they can help a person who is struggling with their self-esteem.
Low Self-Esteem Impacts Your Well-Being and is Often Underestimated
Low self-esteem is an important, yet often underestimated, factor in a person’s overall sense of well-being, mental health, and addiction recovery. When a person’s sense of self-worth and purpose is lower, they may be more likely to engage in drug use and other negative behaviors. By addressing aspects that can raise a person’s self-esteem, ideally, a person can live a healthier, happier life.
In many ways, low self-esteem is like an addiction. A person is hooked on unhelpful thoughts and feelings about themselves. Even when they work on their self-esteem, one thing going wrong can trigger them and make them feel negative about themselves. This is why it is so important that a person with low self-esteem learn coping skills so they can deal with the times ahead when they feel bad about themselves. It's also important that a person with a history of low self-esteem keep engaging in healthy and helpful behaviors, such as taking care of themselves, finding activities they truly enjoy, and spending time with people they enjoy being around. This ongoing awareness of their self-esteem, and of the thoughts that worsen it, can help a person foster healthy self-esteem.
This article was brought to you by 449 Recovery.
A DR Classique, written four years ago today, in honor of the departed.
“Like a wet, furry ball they plucked me up…”
– Rupert Brooke
In August 1914, millions of young men began putting on uniforms. These wet, furry balls were plucked from towns all over Europe…put on trains and sent towards the fighting. Back home, mothers, fathers and bar owners unrolled maps so they could follow the progress of the men and boys they loved…and trace, with their fingers, the glory and gravity of war.
I found one of those maps…with the front lines as they were in 1916 still indicated…rolled up in the attic of our house in France. I looked at it and wondered what people must have thought…and how horrified they must have been at what happened.
It was a war unlike any other the world had seen. Aging generals…looked to the lessons of the American war between the states…or the Franco-Prussian war of 1870…for clues as to how the war might proceed. But there were no precedents for what was to happen. It was a new era in warfare.
People were already familiar with the promise of the machine age. They had seen it coming, developing, building for a long time. They had even changed the language they used to reflect this new understanding of how things worked. In his book, “Devil Take the Hindmost,” Edward Chancellor recalls how the railway investment mania had caused people to talk about “getting up steam” or “heading down the track” or “being on the right track”. All of these new metaphors would have been mysteriously nonsensical prior to the Industrial Age. The new technology had changed the way people thought…and the way they spoke.
Armistice Day: Power Beyond Expectations
World War I showed the world that the new paradigm had a deadly power beyond what anyone expected.
At the outbreak of the war, German forces followed von Schlieffen’s plan. They wheeled from the north and drove the French army before them. Soon the French were retreating down the Marne Valley near Paris. And it looked as though the Germans would soon be victorious.
The German generals believed the French were broken. Encouraged, General von Kluck departed from the plan; instead of taking Paris, he decided to chase the French army, retreating adjacent to the city, in hopes of destroying it completely.
But there was something odd…there were relatively few prisoners. An army that is breaking up usually throws off lots of prisoners.
As it turned out, the French army had not been beaten. It was retreating in good order. And when the old French general, Gallieni, saw what was happening…the German troops moving down the Marne only a few miles from Paris…he uttered the famous remark, “Gentlemen, they offer us their flank.”
Gallieni attacked. The Germans were beaten back and the war became a trench-war nightmare of machine guns, mustard gas, barbed wire and artillery. Every day, “The Times” (of London) printed a list of casualties. When the generals in London issued their orders for an advance…the list grew. During the battle of the Somme, for example, there were pages and pages of names.
Armistice Day: 21 Days
By the time the United States entered the war, the poet Rupert Brooke was already dead, and the life expectancy for a soldier on the front lines was just 21 days.
One by one, the people back at home got the news…the telegrams…the letters. The church bells rang. The black cloth came out. And, one by one, the maps were rolled up. Fingers forgot the maps and clutched nervously at crosses and cigarettes. There was no glory left…just tears.
In the small villages of France hardly a family was spared. The names on the monument in the center of town…to “Nos Heros…Mort Pour La France” record almost every family name we know – Bremeau, Brule, Lardeau, Moreau, Moliere, Demazeau, Thollet…the list goes on and on. There was a bull market in death that did not end until November 11, 1918…at 11 A.M. For years after…at 11 A.M., the bells tolled, and even in America, people stood silently…recalling the terrible toll of four years of war. Now it is almost forgotten.
We have a new paradigm now. And a new war. The new technology has already changed the language we use… and is changing, like the railroads, the world we live in. We think differently…using the metaphor of free- wheeling, fast-moving, networked technology to understand how the world works.
We are fascinated by the new technology…We believe it will help us win wars with few casualties, as well as create vast new wealth…and a quality of life never before possible.
And yet, we are still wet, furry balls, too.
I will observe a moment of silence at 11 A.M.
November 11, 2003 — Paris, France
P.S. The effects of WWI lasted a long, long time. In the 1980s, my father got a small inheritance from his Uncle Albert. “Uncle Albert?” I remember my father saying. “Who’s Uncle Albert?” The man in question was indeed an uncle…but he had been forgotten for many years. A soldier in WWI, Albert had suffered a brain injury from an exploding bomb…and never recovered. He spent his entire adult life in a military hospital.
Bill Bonner is the founder and editor of The Daily Reckoning. He is also the author, with Addison Wiggin, of the Wall Street Journal best-seller: “Financial Reckoning Day: Surviving The Soft Depression of The 21st Century” (John Wiley & Sons).
Oh, dear reader…it’s not easy.
The press…the headlines…your neighbors…economists… brokers…analysts…maybe even your spouse – almost everyone in the entire financial system tells you not to worry. There it is on the front page of USA Today: “New data point to growth for jobs…Fed chief, public spot reasons for optimism…”
There has been a “sea change,” they say. Now, everyone thinks that clear skies and favorable winds will take them where they want to go – to effortless prosperity, gain with no pain, a free lunch and dinner every day of the year. Debt? Don’t worry about it, they say. This economy is so dynamic, so prosperous, so innovative – we’ll work our way out of debt, no sweat.
How could it be, dear reader? We reported yesterday that debt was increasing 6 times faster than income. In the month of September, consumer debt increased by $15.1 billion – bringing the total to $1.97 trillion. How do you work your way out of debt while adding $6 of new debt for every new dollar of income? The nation already owes nearly $3 trillion more to foreigners than foreigners owe to it. And that amount grows by half a trillion each year, thanks to a trade gap that is 10 times as large this year as it was 10 years ago. As a percentage of GDP, American debt levels already beat anything the world has ever seen – and get larger every day. For nearly 100 years, the ratio of debt to GDP was between 120% and 160%. Only in the 1929 bubble did it ever become really grotesque…peaking out at 260%. Guess what it is today? Over 300% and growing.
Yes, GDP growth was recently clocked at more than 7% per year. Yes, productivity numbers came in at more than 8%. And yes, the latest numbers appear to show employment increasing.
But these bits of information are nothing more than random noise…or worse, intentionally misleading drivel. Employment numbers may be up – but compared to every past recovery, they are pathetic. And next month might show unemployment falling again. A close inspection would show that the productivity numbers are as phony and meaningless as an election campaign. And the GDP? A humbug…a charlatan…a flim-flam…a false shuffle. “You give me a trillion dollars and I’ll show you a good time too,” said Buffett of the current boom. Pump enough new credit and federal spending into the system, in other words…and something is bound to happen.
What has happened is that the flood of new money and Lazy-Boy credit enabled Americans to make even bigger fools of themselves – borrowing and spending money when they desperately need to save it.
Buffett, writing in Fortune magazine, describes the buildup in debt as though it were equivalent to selling the nation’s assets to foreigners. “My reason for finally putting my money where my mouth is [by buying foreign currencies] is that our trade deficit has greatly worsened, to the point that our country’s ‘net worth,’ so to speak, is now being transferred abroad at an alarming rate.”
According to the Sage of the Plains, we live in “Squanderville”…where we’ve been wasting our national wealth year after year. Buffett estimates that the total wealth of the nation is about $50 trillion. But year after year, we spend more than we make and mortgage a little more of our capital stock to foreigners. Already, 5% has moved into foreign hands…
“More important,” he writes, “is that foreign ownership of our assets will grow at about $500 billion per year at the present trade-deficit level, which means that the deficit will be adding about one percentage point annually to foreigners’ net ownership of our national wealth.”
For the moment, the Feds seem to have succeeded. They have lured the citizens of Squanderville deeper into debt, encouraging them to spend more money they don’t have on things they don’t need.
But how long can this go on? Doesn’t each dollar of new credit (debt) make the situation worse…adding to the bills that someday, somehow – either by creditor or debtor – must be paid?
What kind of ‘recovery’ is it when you are merely squandering your wealth at a faster rate?
We are killjoys and spoilsports for even asking the question.
Here’s Eric with more news….
– A terrorist bombing in Saudi Arabia over the weekend killed 17 people in Riyadh, while blasting a hole through investor complacency on Wall Street. The Dow Jones Industrial Average fell about 55 points yesterday to 9,755, while the Nasdaq Composite dropped 1.5% to 1,942.
– For months, investors have been wallowing in the warm fuzzies of a recovering economy, while ignoring the cold realities of escalating violence in the Middle East. A steady stream of bullish economic headlines here at home has made it much easier to ignore the steady stream of bearish headlines from ‘over there.’
– But the bad-news headlines are becoming much more difficult to ignore. Certainly, one or two random sniper attacks in Iraq or Saudi Arabia won’t reduce the S&P 500’s earnings. On the other hand, escalating terrorist activity is hardly bullish…except for commodities.
– Yesterday, crude oil rose to a fresh three-week high of $30.88 a barrel, while gold jumped $3.30 to $386.70 an ounce. “The commodity bull market is just getting started, especially for oil,” says John Myers, editor of Outstanding Investments. “Saudi Arabia is in big, big trouble. Within the country’s own borders, undeniable tensions are rising. Not only do they threaten the future of the Royal House of Saud, Saudi Arabia’s ruling family…but they also jeopardize the future of every nation that depends on Saudi oil. Especially, of course, America.
– “Our future and the Saudis’ is so intertwined, we will HAVE to get involved. But in the midst of the crisis, there’s also opportunity. If I’m right, petroleum prices will once again lock into a steep rising trend. Only this time, it will be permanent…”
– We suspect that the stock market’s “disillusionment phase” is about to begin. The problems in the Middle East are probably worse than they appear, while the strength of the U.S. economy is probably less than it appears.
– Here in the States, the economy is registering some impressive growth. “[But] the obvious and important tactical question is whether this newfound vigor is sustainable,” says the ever-bearish Stephen Roach. “The latest U.S. employment numbers are hardly in keeping with the vigorous hiring-led upturns of the past. In the recoveries of the mid-1970s and early 1980s, for example, the great American job machine was generating new employment of around 300,000 per month within six months after cyclical upturns commenced…In this broader context, job gains of 125,000 over the past two months remain woefully deficient. Normally, at this stage of a cyclical upturn – fully 23 months after the trough of a recession – private-sector hiring is up about 5.5% (based on the average of the six preceding business cycles). As of October 2003, the private job count was still down nearly 1% from the level prevailing at the official cyclical turning point in November 2001…
– “The detail behind the hiring improvement of the past three months bears special attention,” Roach continues. “Fully 78% of the employment growth over the past three months has been concentrated in three of the most sheltered segments of the workforce – education and health services, temporary staffing, and government. That hardly qualifies as a full-fledged upturn in business hiring that lays the groundwork for a classic cyclical revival…For a saving-short, overly-indebted, post-bubble US economy, I continue to think it’s entirely premature to issue the all-clear sign, there’s a limit to the potential vigor of a U.S.-centric global growth dynamic.” …Or at least, there ought to be.
– Again, most investors don’t seem to care. Job growth may be anemic, but speculation is all the rage. The volatile Nasdaq Composite has soared more than 75% during the past year, while numerous penny stocks have become dollar stocks.
– For more than a year, reckless speculation has excelled over plain-vanilla forms of speculation…This too shall change. During the disillusionment phase of stock market cycles, caution excels over reckless speculation.
– “While past performance suggests that the current bull market, which celebrated its first anniversary Oct. 10, is still young, it also alerts investors to the sobering fact that a market peak may be looming,” USA Today reports. “If this bull market turns out to be a short-term rise in a longer-term bear market – often called a “cyclical” bull market – history says time might be running out on the current uptrend. Ned Davis found that the 17 cyclical bull markets since 1900 have lasted an average of 371 days. The current bull market is already 394 days old.” Let’s keep the hearse nearby.
Bill Bonner, back in Paris…
*** For some reason, many of the fastest-growing, fastest talking communications tech companies in the nation have installed themselves in the techland paradise between Washington DC and Dulles Airport. Making our way to the airport last night, we passed Computer Associates, Oracle, Nextel, Juniper and dozens of others – many of the companies that blew up so spectacularly in 2000-2001…and now have ballooned again.
We don’t know one from another…and have done no research…but our guess is that you could sell short any company between Dulles and the Beltway…except for maybe the Days Inn…and turn a nice profit.
*** Today is Remembrance Day in Canada…Veterans Day in the U.S…and Armistice Day in France. At the 11th hour of the 11th day of the 11th month the bells will toll throughout France – remembering the end of the nation’s most costly war, which ended on this day in 1918.
It was on this day, too, that Wilfred Owen’s mother received a telegram informing her that her son had been killed. Coming as it did on the day the war ended, the news must have brought more than just grief. “What was the point?” she may have wondered.
Wilfred Owen had wondered, too. His poetry mocked the glory of war. He described soldiers who had been gassed as ‘gargling’ their way to death from “froth-corrupted lungs.”
“You would not tell with such high zest
To children ardent for some desperate glory
The old lie: Dulce et Decorum est
Pro patria mori.”
Owen saw many men die; it was neither sweet nor glorious, he observed, but ghastly.
Still, men seem to want to kill one another from time to time. In the Great War, millions died. You can ask people today why they died, and no one can give you a good reason. No nation had anything to gain…and none gained a thing. But the dead men were no less dead for want of a good reason.
“Don’t forget to spend a moment on Remembrance Day,” the Canadian Broadcasting Company reminded us on Sunday, “to recall those many Canadians who died protecting our liberty and our country.”
Here at the Daily Reckoning, we believe in honoring the dead, too. But not with humbug. Canada had even less stake in the war than the major combatants. No matter which way the war went, it would have made little difference in the far north. But we appreciate bravery for its own sake – even if it is in an absurd cause.
One of the last Canadian WWI veterans died last week at 106 years old. There are only 10 left. (In France, there were 36 still with a pulse as of last week.) But the old soldiers are dying fast. Soon there will be none left.
Canadian soldiers were among the best colonial troops…and the most likely to be killed. If dying in war is sweet, the Newfoundlanders were particularly blessed. One out of 4 of the 6,000 men of the Newfoundland Regiment never returned home. But “nothing matched the toll of the massacre at Beaumont-Hamel on the western front on July 1, 1916,” reports the Toronto Globe and Mail. “About 800 Newfoundlanders charged out of their trenches into the teeth of German machine-gunfire. They had been told that the Germans would be weakened by intense bombardment, that the lethal strings of thick barbed wire strewn across no man’s land would be gone and that another regiment would join them. None of it was true. The next morning, only 68 members of the regiment answered the roll call.
“One eyewitness said the Newfoundlanders advanced into the hail of bullets with their chins tucked into their necks, as they might weather an ocean storm.”
Then, the old lie swallowed them up, like a tempest. More below…
*** Maria is a hit in Germany. Two months in a row, she has appeared as the cover girl of a German magazine called Madame. But she is giving up her modeling career to become an actress. Yesterday, she went for an important audition – to determine whether she will be admitted to London’s Royal Academy of Dramatic Arts. We’ll find out later today how it went…
*** Her father, it seems, is also turning into somewhat of an international hit. We have just received an offer from a German publisher to translate our book, Financial Reckoning Day, into German…we are ‘tracking’ on the bestseller list at the National Post in Canada…and we’re told by our publishers in New York that their offices in Australia and Singapore are out of stock.
We’ve been obliged to postpone the initial launch of the book in the UK and across English-speaking Europe because pre-orders have depleted stock in London. Still, it appears some copies have been smuggled in by gnomes…a reader from Scotland writes:
“A large box of Kruger rands are on their way to me as I write. I have read Messrs Bonner and Wiggin’s new book…[‘Financial Reckoning Day’] I am still dazed from the experience. The only rational thought I have managed to produce in recent days is to sell my share portfolio and clear my debts.
“I strongly recommend that a copy of the book should be sent to Mr. Alan Greenspan, Gordon Brown et al – preferably wrapped in used bills. I suggest that the cover be changed to that of a pretty girl in a state of undress so as to at least get them to open the cover and peruse the contents. A complementary container of Valium should accompany the book so as to provide a means to moderate the shock once the realization of the legacy they will be leaving the world has sunk in. If Daily Reckoning can’t afford it, I would be happy to forward one of my Krugers when it arrives. Keep it up.”
Each month, the wonderful State of Nature blog asks leading critical thinkers a question. This month the question is: Is fascism making a comeback?
In fact, fascism has never gone away. If by fascism we mean the historical regime that created the name and embraced the ideology explicitly, then we have to conclude that the concept is only applicable to the political regime that reigned in Italy between 1922 and 1943. This, however, amounts to little more than a tautology: ‘the Italian fascist regime’ = ‘the Italian fascist regime’. History clearly never repeats itself, so any attempt to apply the category of fascism outside of that context would be doomed to fail. That may be a necessary cautionary remark for historians, but how about social and political theorists? Can fascism be a heuristic tool to think about and compare different forms of power?
If by fascism we mean a political model that was only epitomized and made visible by the Italian kingdom during 1922-43, then we arrive at a very different conclusion. Consider for a moment the features that characterize that form of power: hyper-nationalism, racism, machismo, the cult of the leader, the political myth of decline-rebirth in the new political regime, the more or less explicit endorsement of violence against political enemies, and the cult of the state. We can then certainly see how that form of power, after its formal fall in 1943, continued to exist in different forms and shapes not simply in Europe, but also elsewhere. We can see how fascist parties continued to survive, how fascist discourses proliferated and how different post-war regimes emerging world-wide exhibited fascist traits without formally embracing fascism.
Coming close to our times, we can see how Trumpism, as an ideology, embodies a neoliberal form of fascism that presents its own peculiar features, such as the respect of the formal features of representative democracy, the combination of free-market ideology and populist rhetoric, and the paradox of a critique of the state accompanied by the massive recourse to its institutions. But it also exhibits features, such as the extreme form of nationalism, the systematic racism, the macho-populism, and an implicit legitimation of violence, which are typical of fascism. In sum, we should consider fascism as a tendency of modern power and its logic of state sovereignty, a tendency that, like a Karstic river, flows underneath formal institutions but may always erupt in its most destructive form whenever there is an opening for it.
Chiara Bottici is an Associate Professor in Philosophy at New School for Social Research and Eugene Lang College (New York). Her recent books include Imaginal Politics: Images beyond Imagination and The Imaginary (Columbia University Press, 2014), Imagining Europe: Myth, Memory, and Identity (Cambridge University Press, 2013), co-authored with Benoit Challand, and the co-edited collections, The Anarchist Turn (Pluto 2013, with Simon Critchley and Jacob Blumenfeld), and Feminism, Capitalism and Critique (Palgrave 2017, with Banu Bargu).
Is fascism making a comeback? Perhaps. But history is not predetermined. It presents us with a succession of choices.
What does seem true is that the film of the 1930s is re-running in slow motion. We face a world capitalist crisis that is probably more intractable than that of the 1930s, with economic stagnation, growing social decay, a breakdown of the international order, increasing arms expenditure and war, and imminent climate catastrophe.
The political and business elite has no solutions to any of the major problems confronting humanity and the planet. Parliamentary democracies have been hollowed out by corporate power. Authoritarian nationalist regimes are in control elsewhere. Fascist organisations are gaining in electoral support.
Labour movements – the unions and the mass socialist parties – have been weakened by 35 years of neoliberalism. Most working people, battered by the crisis, lack effective mechanisms for fighting back collectively. Social life is characterised by atomisation, alienation, and anomie. This is the seedbed for nationalism, racism, fascism, and war.
The Right has no solutions and nothing to offer. The essence of its politics, therefore, is to turn working people against each other, making scapegoats of women, the poor, the disabled, ethnic-minority people, Muslims, LGBT people, migrants, refugees, and so on. It takes different forms in different places. Trump in the US. Brexit in Britain. Le Pen in France. The AfD in Germany. But the essential message is the same. And this has the potential to harden into all-out fascism – the violence and repression of armed thugs out to smash the unions, the Left, and the minorities.
But fascism could have been stopped in the 1920s and 1930s, and it could be stopped today. It all depends on what we do. The challenge is extreme: we need nothing less than a radical programme of economic and social change to reverse a generation of financialisation, privatisation, austerity, and the grinding down of working people.
To stop the fascists, we have to show the great mass of ordinary working people that an alternative is possible: that if we unite and organise and fight back, we can challenge the grotesque greed of the super-rich rentier class that is currently leaching the wealth of society to the top, and remodel society on the basis of equality, democracy, peace, and sustainability.
Neil Faulkner is an historian, archaeologist, and political thinker. Author of numerous books, including A Radical History of the World (Pluto, forthcoming), Creeping Fascism: Brexit, Trump, and the Rise of the Far Right (Public Reading Rooms, 2017), and A People’s History of the Russian Revolution (Pluto, 2017).
Fascism is not making a comeback because it never left. Contrary to the thinking of some nominalist historians, it didn’t begin in the 1920s and end in 1945. It is not an artefact, but alive, well and continuing to thrive.
The United States’ Directive JCS 1779 of 1947 facilitated the reinstatement of over 90 per cent of those German officials previously purged under de-Nazification measures – including Hauptsturmführer Klaus Barbie (before he fled to Argentina in 1951). In Italy, concerns over the strength of the indigenous communist party led the CIA to allocate more than $10 million to a Christian Democrat Party riddled with unreconstructed fascists. Having worked with former Nazi operatives to defeat a communist insurgency in Greece in 1949, the United States extended Marshall Plan aid to Salazar’s Portugal and normalised relations with the Franco regime, which reportedly viewed the resultant 1953 Pact of Madrid as proof that it had been right all along.
Germany’s Nationaldemokratische Partei Deutschlands, which received between 500,000 and 750,000 votes in the general elections of 2005-13, is a direct successor to the Deutsche Reichspartei (founded by General der Flieger Alexander Andrae in 1946). In Spain, the Democracia Nacional emerged from the Círculo Español de Amigos de Europa, which included the former commander of the Walloon Schutzstaffel, Standartenführer Léon Degrelle, for whom Franco provided asylum and obstructed Belgian extradition attempts thenceforth. Degrelle’s close associate, Jean-Marie Le Pen, who attracted five million votes in the 2002 French elections, formed the Front National in 1972.
In the United States, the second Klan had achieved an estimated membership of around 4 million people during the 1920s (making it one of the largest civil society organisations in world history). This did not disappear after the military defeat of European fascism. It forged links with groups such as the American Nazi Party (founded in 1959) and the United Kingdom’s National Socialist Movement, led by a former member of the British Union of Fascists, Colin Jordan, and the future leader of the British National Party, John Tyndall (who appointed Nick Griffin to the party in 1995). The key figure behind this trans-Atlantic collaboration was Harold Covington – whose influence Dylann Roof cited as a motive for the 2015 Charleston shooting.
Tim Jacoby is a Professor at the Global Development Institute at the University of Manchester. Author of Understanding Conflict and Violence: Theoretical and Interdisciplinary Approaches (Routledge, 2008), and Social Power and the Turkish State (Routledge, 2004), and co-author of four other books.
Rose Sydney Parfitt
There is, I think, no question that fascism is making a comeback. Clearly, the language, symbols and logic of fascism are being deployed today more overtly than at any time since the early 1940s. That is not to say, however, that fascism ever went away, or – in the context of our once-European, now-global legal order – that the kernel of fascism has not been with us from the beginning.
This suggestion, that fascism may be lodged somewhere in the DNA of the normative system we now take for granted, might seem odd coming from a legal scholar. After all, law, with its emphasis on equal rights and non-aggression, was violated systematically by Nazi Germany, Fascist Italy and their allies, and is usually understood as the most important weapon we have against far-right resurgence. We should remember, however, that inter-war fascism did not spring out of nowhere. On the contrary, fascism took almost 500 years of European colonialism, with its brutal expansionism and Social Darwinist logic (at the time, entirely ‘legal’), and turned it in on itself.
In the process of its transformation from a European to a near-universal system via decolonisation, ‘development’ and the collapse of Communism, the law by which most states are regulated today is supposed to have abandoned these discriminatory and expansionist tendencies. Yet its core premises remain the same. Law’s primary subjects (states and individuals) may now be more numerous, but they are still recognised as free only in a negative sense (‘free from’ not ‘free to’), and equal only in a formal (legal not material) sense. Likewise, law’s non-state, non-human objects continue to be regarded as ‘natural resources’, secured in unlimited supply by technology’s capacity to usher capital into ever-more obscure corners of the ‘market’.
Yet the supply of ‘resources’ is not, in fact, unlimited – as nineteenth-century imperialists and twentieth-century fascists insisted. As they also recognised – and celebrated – this means that the state cannot function as an egalitarian framework within which prosperous individual futures can be pursued in mutual harmony – or, at least, not unless some external ‘living space’ can be found wherein to harvest meat, fish, oil, gas, water, dysprosium, avocados and other ‘essential’ commodities. The state, in other words, is not a ‘level playing-field’ but a collective vehicle – a battering-ram – available for appropriation by those who are already winning the endless war of accumulation in which only the fittest (wealthiest, most powerful) have a right to survive. In short, fascism, seemingly the antithesis of the rule of law, may in practice be its apotheosis.
Let me, then, respond to the question posed with another question. In the context of a global legal order which views famine, poverty, exploitation and planetary destruction as consistent with universal ‘freedom’ and ‘equality’, will fascism ever go away?
Rose Sydney Parfitt is a Lecturer in Law at Kent Law School and an Australian Research Council (DECRA) Research Fellow at Melbourne Law School, where she leads a research project entitled ‘International Law and the Legacies of Fascist Internationalism’. Her book on modular history and international legal subjectivity is coming out in 2018 with Cambridge University Press.
My answer is ambiguous. On the one hand, the social and political conditions for the re-emergence of fascism as a movement are ripening across the advanced capitalist world. The global slump that began with the 2008 recession has decimated the living standards of the working and middle classes – both self-employed and professionals and managers. The near collapse of the political and economic organizations of the labor movement, and the active collaboration of social-democratic parties in implementing neo-liberalism and austerity, have crippled the emergence of a progressive, solidaristic and militant response ‘from below’ to the crisis. Angry at both the large transnational corporations and seeing no alternative from labor, broad segments of the middle classes are drawn to racist and xenophobic politics that target both the ‘globalists’ and ‘undeserving’ immigrants and other racialized minorities. These politics fuel the electoral success of right-wing populist parties, which encourage fascist street fighters to target organized workers, immigrants and others.
On the other hand, the social and political conditions for a fascist seizure of power are not on the agenda in any advanced capitalist country. Capitalists have handed power over to the enraged middle classes organized in fascist parties only when the labor movement threatened radical change, but failed to follow through. For better or worse, it has been over forty years since the labor movement anywhere in the global North has posed a threat to the rule of capital. Today capitalists have little desire to hand power over to right-populist electoral formations, and have no need for fascist gangs.
While the prospect of a fascist seizure of power is not on the agenda, the labor movement and the Left need to mobilize whenever fascist groups emerge – to crush them while they are still weak.
Charlie Post is a long time socialist and activist in the City University of New York faculty union. Author of The American Road to Capitalism(Haymarket, 2012) and numerous articles on labour, politics and social struggles in the US.
Fascism is arguably making a comeback. There are places in the world where fascist or neo-nazi forces have managed to enter parliaments and install themselves as a more or less legitimate political option. Notice the case of Golden Dawn in crisis-ridden Greece! However, this does not necessarily mean that liberal democracy is currently facing a terminal danger due to this comeback, as in the 1930s. In particular, we should be aware of three crucial issues:
- The issue of conceptual clarity is paramount. Today, almost everything we dislike is summarily denounced as ‘fascism’ – hence the conceptual confusion between fascism, populism, authoritarianism, etc. Notice the way the Donald Trump phenomenon is treated.
- Even more troubling than fascism seems to be a particular way of dealing with it by more moderate political forces, by adopting its main messages and tropes, the so-called ‘Mainstreaming’ or ‘Normalisation’ of fascism. These ideas can become quite appealing to many of us, thus posing once more the issue of ‘Authoritarian personality’, of an ‘Inner fascism’ potentially present in all of us – hence the importance of a psycho-social approach to study this phenomenon.
- Finally, it should not escape our attention that the main reason for the comeback of fascism and its increasing contemporary psycho-social appeal may lie elsewhere: in the reign of neoliberalism and the miserable failure of social democracy to offer any real hope to segments of the population facing increasing inequality and a downward spiral of social and economic mobility.
Yannis Stavrakakis is Professor of Political Discourse Analysis at the Aristotle University of Thessaloniki. Author of Lacan and the Political(Routledge, 1999) and The Lacanian Left (SUNY Press, 2007), and co-editor of Discourse Theory and Political Analysis (Manchester University Press, 2000). Since 2014 he has been director of the POPULISMUS Observatory: www.populismus.gr
William I. Robinson
Fascism, whether in its classical twentieth century form or possible variants of twenty-first century neo-fascism, is a particular response to capitalist crisis. Global capitalism entered into a deep structural crisis with the Great Recession of 2008, the worst since the 1930s. Trumpism in the United States, Brexit in the United Kingdom, the increasing influence of neo-fascist and authoritarian parties and movements throughout Europe and around the world (such as in Israel, Turkey, the Philippines, India, and elsewhere) represent a far-right response to the crisis of global capitalism.
Twenty-first century fascist projects seek to organize a mass base among historically privileged sectors of the global working class, such as white workers in the Global North and middle layers in the Global South, that are experiencing heightened insecurity and the specter of downward mobility in the face of capitalist globalization. Fascism hinges on the psychosocial mechanism of displacing mass fear and anxiety at a time of acute capitalist crisis towards scapegoated communities, such as immigrant workers, Muslims and refugees in the United States and Europe. Far-right forces do this through a discursive repertoire of xenophobia, mystifying ideologies that involve race/culture supremacy, an idealized and mythical past, millennialism, and a militaristic and masculinist culture that normalizes, even glamorizes war, social violence and domination.
In the United States, emboldened by Trump’s imperial bravado, his populist and nationalist rhetoric, and his openly racist discourse, predicated in part on whipping up anti-immigrant, anti-Muslim, and xenophobic sentiment, fascist groups in civil society have begun to cross-pollinate to a degree not seen in decades, as they have gained a toehold in the Trump White House, in state and local governments around the country, and of course in the Republican Party.
But fascism is not inevitable. We stand at a crossroads and whether or not we slide into fascism depends on how the mass struggles and political battles unfold in the coming months and years.
William I. Robinson is Professor of Sociology, Global Studies, and Latin American Studies at the University of California at Santa Barbara. His most recent book is Global Capitalism and the Crisis of Humanity(Cambridge University Press, 2014). Next year Haymarket books will publish his new manuscript: Into the Tempest: Essays on the New Global Capitalism.
We find ultra-nationalist and fascist parties in the representative assemblies of countries such as Greece (Golden Dawn), Cyprus (National Popular Front), Hungary (Jobbik) and India (Bharatiya Janata Party). In the US, the Alt-Right Movement and its white supremacist ideology has found expression in the views of President Donald Trump. On 12 November 2017, around 60,000 ultra-nationalists marched in Poland on the country’s independence day, chanting ‘white Europe of brotherly nations’. We may conclude therefore that fascism or a contemporary version of fascism is gaining traction globally.
Still, whilst fascist parties and movements are on the rise we are yet to witness a widespread emergence of neo-fascist political regimes. We have not in other words seen the suspension of every democratic framework and the abolition of individual rights. Umberto Eco identified thirteen characteristics in authoritarian fascist regimes, amongst them the loss of individual rights, nationalism, the banning of critique, gaining traction through the exploitation of difference, and a call for traditional values. Of course, some of these characteristics have taken root in neo-fascist groups and ultra-nationalist parties, and even more disturbingly we notice that a growing number of people are becoming attracted to these types of thinking. Nevertheless, I would like to suggest that as long as people and political systems can counter such groups/parties, either by bringing them before the law or through debates that expose the irrationalism of their positions, then I think this interest in fascism may be just a passing trend.
We can counter the rise of fascism or totalising and undemocratic ideas within our societies. How? I agree here with Foucault[i] that to do so we need to be vigilant of the fascist within us that makes us desire power and its promises. We need therefore to be constantly questioning our very desires, either for political formations or figures that are lovers of power. In our contemporary western democratic societies, I would add, the only way of sustaining a critical attitude requires us to find time and space to think alone and together. The biggest detractor of that is capitalism. If we are to stop the rise of fascism we need to retrieve the time that is eaten up by capitalism and its various hands, managerialism, efficiency, profit and so on.
Elena Loizidou is Reader in Law and Political Theory at the University of London, Birkbeck College, School of Law. Author of Judith Butler: Ethics, Law, Politics (Routledge-Glasshouse, 2007), and editor of Disobedience: Concept and Practice (Routledge, 2013), along with numerous articles and chapters on feminism, anarchism and the law.
[i] ‘The strategic adversary is fascism … the fascism in us all, in our heads and in our everyday behavior, the fascism that causes us to love power, to desire the very thing that dominates and exploits us.’ (Michel Foucault, ‘Preface’ to Anti-Oedipus, 1977, p. xiii)
Shame and Humiliation
Part II of V: Veritas, Prometheus, Mendacius and Humiliation
The understanding of shame and humiliation is writ deep in our culture. I will illustrate this with a story from Aesop’s Fables. But before I do, I have first to remind everyone who Prometheus was because the fable begins with him. Prometheus was a Titan, one of the gods who first overthrew the primordial gods only to be overthrown in turn by the Olympian gods. Prometheus was a second-generation lesser god, the god of forethought and crafty counsel, a traitor to his side in providing aid to Zeus to help the Olympians overthrow his fellow Titans led by Cronos. He was also a potter.
In Genesis in the Torah, there are two different stories of how man came into being, first by naming. God said and there was. The second was by God moulding the man’s material form from clay. In ancient Greek mythology, that task was assigned to a lesser god, Prometheus. In Genesis, Cain and Abel are rivals over who should be recognized for making the best sacrifice – the best of a farmer’s versus a hunter’s (or cowboy’s) labours. In Greece, the story takes a different direction. For instead of taking the best that you have laboured to bring forth and sacrificing that to God, Prometheus tricks the gods, more particularly Zeus, in the switcheroo at Mecone and ensures that the best and most nourishing part of a sacrificed bull will be reserved for the celebration of human men; he left only the inedible parts, the bones and organs, to be sacrificed to the Olympians. Second, when Zeus, to prevent Prometheus from taking the last necessary step in that sacrifice, withheld fire from him, Prometheus stole a fire bolt, hid it in a fennel stalk and gave it to man. Prometheus was the first humanist.
In revenge for both these acts of rebellion, Zeus created Pandora, the first woman. In Genesis, Eve has a twofold history, created by God in the same way Adam was and, secondly, by taking one side of Adam and forming a woman. In the Greek mythological tradition, Pandora was created specifically to bring mischief to men. Instead of the snake being the trickster, as in the bible, in Aesop’s fable that was directly Pandora’s function. In the Hebrew tradition, that element of Greek mythology was imported into the interpretation of the Biblical tale of human creation. Pandora has been projected onto Eve’s character ever since the two traditions came into contact.
For his double-crossing the gods, humans are not thrown out of the Garden of Innocence called Eden. Rather Prometheus was punished, not by having, like Sisyphus, to roll a great rock up a mountain, only to have it roll down just before it reached the top so that the next day he had to roll it up again. Instead, an eagle was assigned to pick out the eye and/or the heart of Prometheus (after all, he had shown too much compassion for human men), and/or the liver, thought to be the source of bile. As soon as that part was eaten, it grew back so the process was repeated day after day. Prometheus was punished for his foresight – it was taken away. Prometheus was punished for his compassion – it was converted into self-pity at his own suffering. Prometheus was punished for his rebellious spirit against the Olympian gods.
Aesop tells a subsidiary tale. One day, Prometheus decided to sculpt a mirror image of Veritas, the daughter of Zeus and the embodiment of honesty. When Zeus summoned Prometheus to appear before him, perhaps to explain what tricks he was up to, Dolos (Dolus), his assistant, was left behind. Dolos’ chief characteristic was to be a trickster, a master of deception and craftiness, treachery and guile that was even superior to that of his master, Prometheus. Dolos had been fired up by his master’s ambition and decided to fashion a replica of Veritas on his own. The replica would be of the same size and weight and share the same features of Veritas so it would be impossible to tell the facsimile from the real thing. However, Dolos, though an exceptional copier, was not as experienced as Prometheus in his preparations. He ran out of clay before he could complete his copy. Prometheus, when he returned, was delighted at the result, praised Dolos and did not notice that the copy of the original lacked the feet of clay of the original. He infused the copy with a love of honesty to ensure it was a precise copy in spirit as well as in the flesh and placed the sculpture into his kiln.
When the fixing of the clay was completed, while Veritas as the model could walk away on her own two feet, the imitation was frozen on the spot. That forgery, which lacked feet of clay, was named Mendacius, from which we have inherited the word, mendacity, the characteristic of being, not only a liar, but being a born liar. A liar has no feet and cannot travel. A liar becomes fixed in and by his or her own lies. However, as a faithful copy of the original, Mendacius always claims to uphold honesty as the highest principle.
Philosophers serve truth; they understand they are not and never will be masters as they try to perform their duties as the cleaning staff of the intellectual life. Many journalists, perhaps most, pursue honesty and claim it is the truth. They are heirs to Hermes (Mercury in the Roman pantheon), a son of Zeus, and see themselves as belonging to the Olympian gods. But, mistakenly, many and perhaps most follow the facsimile of truth, Mendacius, the immobilized image of Veritas. They fail to recognize that truth is established by subjecting one’s own presuppositions to self-criticism and not recognizing that the truth is never ensured by honesty. For them, truth is ensured by following rules, by recognizing the sacredness of boundaries and not by questioning those rules and boundaries.
On the other hand, they are populist democrats of the human spirit, for those boundaries are universal and apply to rich men as well as thieves, to the rulers as well as the ruled. Thus, they do not recognize that, in the name of honesty as a facsimile of the truth, in the name of universality, their underlying and unconscious goal in life is to prove that all are bound by these rules. Secretly, their greatest achievement will be to prove that, like themselves, those who are great achievers also have “feet of clay”, which means they lack feet and are stuck and mired in the hidden drives in their own lives. The exposure may be accurate. At other times, it is simply a revelation of the journalist’s own inadequacies projected onto the target and done without the effort or even the ability to see and grasp that the result is more a product of their own projections than the alleged failings of the object of their mischief.
Humiliation, instead of attending to a specific wrong, deflects attention from that fault to attend to an allegedly greater one, an offence against an abstract and universal principle. By abstracting and deflecting, the public and, more importantly, Rachel herself, is distracted from the need to experience guilt. The process, instead, drives shame into even deeper recesses in the soul. And the shaming allows the multitude to coalesce and feel good about themselves at Rachel’s expense. Most importantly, shaming prevents us from expunging our sense of shame within and inhibits us from striving and standing on the stage to express our own self-worth. Who would want to take the risk and be subjected to so much scorn and humiliation? As Brené Brown so richly characterizes the difference between shame and guilt: “Guilt: I’m sorry. I made a mistake. Shame: I’m sorry. I am a mistake.” Blaming someone tells the other that he or she did something wrong; shaming someone tells you that the other is bad.
When I transferred in my last year of high school from Harbord Collegiate to Bathurst Heights, portraying that you loved learning in that new school was grounds for shaming. The standard was that it was alright to get top marks, but you also had to show that you did so without cracking a sweat. Shame is a straight jacket that prevents the highest achievement. If you shame another, you cannot feel empathy. And compassion for another is the best antidote to the toxicity of shame.
In Russia, gays and lesbians are persecuted both by the law and by vigilante action to out gays, humiliate them and get them fired. Last night on Pride Weekend around the world on CBC’s The Passionate Eye, I watched the documentary “Hunted in Russia.” In excellent journalism, the film portrayed how vigilantes systematically outed gays, beat and persecuted them, exposed their faces and got them fired on the Catch-22 that the pictures of their victims had appeared in the press as gays and, thus, they were guilty of promoting homosexuality which was against the law. When the persecutors were ever prosecuted for assault, they were mostly able to get off through lawyers’ tricks, widespread support in society and the complicity of the law. On the other hand, if a gay or lesbian protested the infringement of his or her rights, they never could get a permit, and, if they protested in the name of being gay, they were prosecuted under the law for promoting homosexuality. Even without mentioning homosexuality, if two gathered in one place, one to hand out leaflets and the other to carry a sign, even if they insisted they were not together, they were harassed, arrested and prosecuted for launching an illegal protest, for there were two of them together protesting and they lacked a permit.
Shame is a virus and easily detectable because in any age, but particularly in our electronic age, it triumphs when it now goes viral. Vigilantes in Russia use the internet to persecute gays. When going viral is held in such high esteem, a culture of shame expands and grows like a cancer in the body politic.
Tony Judt made cancers on the body politic his intellectual obsession. (I have already published a long essay on Tony Judt focused on his anti-Zionism of which these few comments formed a small part.) But he was driven by a fear of humiliation and that was how the last two years of his life ended. As his wife, Jennifer Homans, described it, “The more he retreated the more public he became. His private life at home and with friends was his greatest comfort but it was also deeply sad: he couldn’t be the things he wanted to be and he was haunted and humiliated by his ‘old’ self—what he called ‘the old Tony,’ who was lost to him forever.” He declared, “whenever anyone asks me whether or not I am Jewish, I unhesitatingly respond in the affirmative and would be ashamed to do otherwise.” He insisted he would and did not feel shame. But he feared, and suffered, humiliation. Why?
One theme, repeatedly mentioned, but not highlighted in his last book Thinking the Twentieth Century, written when he was dying of ALS (Lou Gehrig’s Disease), is perhaps the most revealing. Judt described his father “as a frustrated man: trapped in an unhappy marriage and doing work which bored and perhaps even humiliated him.” Humiliation is that theme. His mother too suffered from shame. “Mother was discreet to the point of embarrassment about her Jewishness versus the overtly foreign and Yiddish quality of most of the rest of his extended family.” When his father drove their Citroën to visit relatives in a poor area of London, Tony Judt “wanted to disappear down the nearest manhole” because of “the envious attention his new car was attracting.” When he lived on the kibbutz in Israel, he recognized that its functioning was based on the “successful deployment of physical intimidation and moral humiliation.” Not our usual association with kibbutzim, but an incisive comment true of most tribal and collectivist societies, whether a small town or an imperial Soviet Union or Russia.
When Judt became a fellow at King’s College, Cambridge, and had some authority, the student cohort who now attended these elite colleges came, not from the aristocracy and private schools, but from excellent state schools. Once, one of the “bedders” (women from town who served as surrogate mothers to the young boys and girls, for King’s had become co-ed by that time) discovered a group of them cavorting nude on the college grounds.
The “bedder” was humiliated and felt ashamed. Three factors explained her reaction: the presence of girls; when she came upon them, they made no effort to dissimulate or even cover up; worst of all, they laughed at her discomfort. In short, they had broken the rules of engagement between herself as a working class woman in the midst of a society of privilege in the name of populist egalitarianism. She felt humiliated.
As Judt explained the situation, previous cohorts of students, though often repugnant snobs and sods brought up in privilege, recognized her station and respected her class and its values. They knew better than to treat a servant as an equal sharing their values. Those gentlemen “would have apologized, expressed their regret in the form of a gift and offered an affectionate, remorseful embrace.” Treating the “bedder” as an equal had “as much as anything hurt her feelings.” She had lost a claim on their forbearance and respect: her role had been reduced to mere employment rather than being a surrogate mother. The new rich bourgeois class shared none of the sensibilities of those who practiced the better side of noblesse oblige, but shared the same ignorant principle amongst themselves: “all human relations are best reduced to rational calculations of self-interest.”
The bourgeoisie, the Olympian gods of modernity who had overthrown the aristocratic Titans, were true believers in the reduced and impoverished capitalist vision: “the ideal of monadic productive units maximizing private advantage and indifferent to community or convention.” They have no “understanding of social intercourse, the unwritten rules that sustain it, and the a priori interpersonal ethics on which it rests.” They spouted and said that they revered Adam Smith’s Wealth of Nations, probably not having read it, but certainly not having read his volume, A Theory of Moral Sentiments. They acted as if all humans were driven, at bottom, by self-interest. If they became journalists and messengers of the greater gods, their perpetual mission was to demonstrate the validity of that Truth.
Why was humiliation and shame the most evident by-product of this indifference to and failure to recognize class differences? Respect and recognition are the proper antidotes to class differences and economic conditions. And it works both ways. One must show the greatest respect to those who do not share our privileges, no matter what your level or lifestyle. But one must also show the greatest respect to those who have earned it, whether in their intellectual or social productivity. Bringing them low when they slip up is not showing respect. Humiliating them in total disproportion to any error they have committed is merely an effort to displace lack of respect for ourselves in the terrible guise of righteousness and honesty.
All this merely explicates that shame and humiliation were crucial themes for Judt. These extracts do not explain why humiliation was so important so that these two themes became a window through which he experienced the world.
Tony Judt only hints at all the humiliations he suffered on growing up. When he was an established academic in London and went in to launch a complaint about mistreatment of a Czech acquaintance by the authorities, he learned that he was totally ignorant of the circumstances and problematics of the case. He was offended and embarrassed “to be thought both unimportant and uninformed.” And, of course, his humiliation at needing help all the time to do almost everything during his last two years of an immobilized life must have been the pinnacle of humiliation for him. But then why was humiliation so central to Judt’s historical experience?
Because, in the end, Tony Judt was himself a journalist and not a philosopher, a messenger of the gods, but one sent out and about to ensure the gods came to recognize they were only mortal. In 2010, Maggie Smith published a book, Asylum, Migration and Community, which probes the experience refugees feel when they exit a country and then the double humiliation they experience in their country of asylum. Their loss of status is more embarrassing than anything else they experience, especially if they come from middle class roots. Humiliation is almost always about failure of recognition. And journalists are the group most sensitive to this failure to recognize their role as Hermes to Zeus, as messengers of the Olympian gods.
Tony Judt was a famous scholar, but before that government bureaucrat, he appeared to be an ignorant dolt. Tony’s father was an informed and articulate reader, thinker and believer, but he worked in a hairdressing parlour. Tony’s mother was a dyed-in-the-wool English woman ashamed of her Jewishness and the European accents of her social circle. After all, her friends were “greenies.” Tony was embarrassed and humiliated at the kibbutz because they saw him as just a grunt when he really was a very successful student who had achieved entry into one of the most prestigious academic institutions in Britain. The kibbutzniks had no appreciation of that accomplishment. Judt just generalized on that ignorance and branded them provincial for not recognizing his achievements. And the “bedder” at Cambridge was embarrassed and humiliated, not simply because the students did not recognize the class to which she belonged and the rules of discourse long established in dealing with class relations, for all rules had to be universal and not conventional. These children of the nouveau riche did not see her as an independent Other with sensibilities and responsibilities. The previous privileged classes at least had the decency to give her the semblance of respect and recognition.
The humiliator generally is indifferent or has contempt for the position or the person of the Other, whether a thief or a rich man, whether an ordinary citizen or a great ruler. For journalists are those most attuned to humiliation. They suffer its pains and pangs every day of their lives. One who is humiliated is not only embarrassed, but can develop a repressed anger and urge to retaliate for that non-recognition, an attitude exemplified by Cain when God recognized Abel and not him. The humiliatee wants the injustice corrected and can become a demon in the pursuit of his or her version of social justice. At the extreme, humiliation, revenge and the desire for social justice can be found to be a pervasive theme in the actions of mass killers at schools and at places of work. (Cf. Charles B. Strozier, David M. Terman and James Jones (eds.) with Katherine A. Boyd, The Fundamentalist Mindset: Psychological Perspectives on Religion, Violence, and History.) Journalists are the mass murderers of reputations.
Since Ruth Benedict, in the year of Tony Judt’s birth, characterized Japan as a shame culture and America as a guilt culture, others have characterized Jewish culture as a guilt culture par excellence, and still others have built on, revised and improved that distinction so that one broad consensus emerged. Why then was shame so preeminent in Judt’s psyche?
No culture relies solely on shame or guilt. Cultures use an admixture of both. A high degree of one versus the other allows one to characterize a culture as predominantly a shame or, alternatively, a guilt culture. But a culture can have high value placed on both individualism and community. This was true of the Jewish culture of the biblical period and contributes to its “schizophrenic” frenzy until today. It was both a shame and a guilt culture. Tony Judt was driven by a search for community in Zionism, in the kibbutz, in Cambridge University college life and in his intellectual devotion to social justice. In his behaviour and in his intellectual pursuits and writings, he was the consummate individual with an original voice. But in the value given to social order, a shared community was a prerequisite to enjoyment of public life. Guilt is expressed greatest if an individual like Tony Judt fails to grant adequate credit, recognition and acknowledgement to an Other. But shame becomes the main descriptor when social norms rather than individual achievements fail to be recognized. Tony Judt had very little sense of guilt, but was enormously sensitive to humiliation.
So Judt became the scourge of Zionism as the greatest expression of a guilt culture in today’s world. (I will deal with this theme separately in a discussion of the UN Human Rights Report on the Gaza War and the Israeli response.) He became, not an English, but an American Jew determined to turn the tables and humiliate both America and Israel as he also expunged any personal shame and became the widely admired brilliant writer, historian and critic. As an equal opportunity provider, he even had time to distribute the product of his poison pen on the English, the French and others. Only the Czechs get off, and that is because they were the vehicle for his rebirth and rejuvenation. The despiser of identity politics becomes its exemplar when applied to nations.
So why is shame such a vice and shaming others and humiliating them even worse? Because shamers undermine self-respect and respect for another. Shame can overwhelm you and shaming can drown you in a tsunami totally out of control. Enhancing anyone’s susceptibility to shame is not a good deed. Overwhelming someone with a cascade of shaming in an uncontrollable storm of public humiliation is definitely a bad thing. It is a virtue to stand before oneself and before others and be without shame. It is a vice and betrayal of oneself to allow oneself to be drowned in humiliation.
When someone’s actions bring disgrace and ignominy on themselves, they must face their guilt and be subjected to the condemnation of the law and or the moral code of a society applied to the specific offence, not a general abstract principle. Offending a specific law or lying in a specific situation, does not require shaming. Quite the reverse. Shaming inhibits anyone from coming face to face with one’s guilt, for facing one’s guilt requires enhancing one’s self-respect. A person should not be forced or induced to do something because he or she feels ashamed. If you feel afraid and cowardly, the answer is not feeling deep shame or having shame heaped upon you. The answer is getting in touch with the source of your courage. This is not achieved through a torrent of reproach.
It takes a great deal of effort to enhance and build up a culture of guilt. However, firestorms and tsunamis of humiliation can wreak havoc in a very short time, not just on the victim of shaming, but on the whole culture. We are all brought low by an expression of such self-indulgence. It is one thing to win. It is quite another to shame, mortify and humiliate those who do not win, and especially those who do succeed but then reveal a fatal flaw. Putting others to shame is not the object of a contest. Encouragement of the highest achievements of all the players is.
If someone esteems shame, embarrassment, mortification and humiliation of another, if one takes secret pleasure in inducing a feeling of self-hatred and the pain that goes along with it, then one is causing harm and injury to the spirit of what it is to be human. The infliction of such pain on another goes much deeper than a stab or a bullet wound, because it plants a seed of self-contempt, a drop of poison that can expand and consume another's soul.
It is easy to confuse an effort to make another face his or her guilt with subjecting another to humiliation. But the best clue that I know of for discriminating between the two is proportion. When the condemnation is totally disproportionate to any offence that might have been committed, then what we have is an exercise in witch hunting and not a moral or legal trial. If one is caught making a sexist comment or what appears to be a poor joke, the proportionate response is to check whether sexism lay behind the comment. If it did, then the person should be told your response directly and in person. If, however, one's instigation sends out a tidal wave of condemnation, stripping another of all honours and respect without checking whether there was even any behaviour to back up the charge that the man was a sexist, then it is clear that it is society that is disgracing itself and not the individual.
If a person is ostensibly caught telling a lie or deliberately misleading another, it is incumbent upon us first to check whether it is really a lie deliberately intended to deceive, or whether the inability to be totally open stems from another source. And we do well to ensure that we ourselves are not dissembling by dressing up our pursuit of humiliation in the name of a righteous cause like honesty, transparency and respect for those who raised you.
Tomorrow: Part III of V – The Spectrum of Humiliation
In the beginning God created the heavens and the earth... Then God commanded, "Let the earth produce all kinds of plants, those that yield grain and those that yield fruit," and it was done. So the earth produced all kinds of plants and God was pleased.
Gen. 1:1, 11-12 (paraphrase)
The earth lies polluted
under its inhabitants;
for they have transgressed the laws,
violated the statutes,
broken the everlasting covenant.
Therefore, a curse devours the earth,
and its inhabitants suffer for their guilt;
therefore, the inhabitants of the earth are scorched,
and few are left.
Isaiah 24:5-6 (RSV)
Why should Christians care about the environment? Simply because we learn in Genesis that God has promised to fulfill all of creation, not just humanity, and has made humans the stewards of it. More importantly, God sent Christ into the very midst of creation to be "God with us" and to fulfill the promise to save humankind and nature. God's redemption makes the creation whole, the place where God's will is being done on earth as it is in heaven.
God's promises are not mere pledges. They are covenants. And covenants are agreements between people and between people and God. The covenants with Noah and Abraham and the New Covenant mean that people of faith are responsible for their part in renewing and sustaining the creation.
This statement helps us to see the degradation of the earth as sin, our sin. We, the people who have accepted the redeeming love of God, have broken the covenant to care for creation. The challenge in the paper is to confess our sin, to take seriously our role as stewards of the earth, and to work for the renewal of creation.
The needs of the world are apparent. The call is clear. The most motivating aspect of this statement is the claim that stewardship of the creation is a matter of faith.
Planet earth is in danger. The ecological crisis that threatens the survival of life on earth is now evident not only to professional biologists, botanists, and environmental scientists, but to all. Awareness grows that humanity is facing a global crisis.
The crisis is evident in the quality of the air we breathe, in the food we eat, in the rivers where we can no longer fish or swim, in the waste dumps leaking their toxins into our water supplies, in news reports about oil spills and acid rain and holes in our protective ozone layer. The tragic disasters of Bhopal, Chernobyl, the Rhine, Love Canal, Three Mile Island, and Times Beach are part and parcel of the contamination that is progressing at a steady, daily rate.
We read staggering statistics: Agricultural practices in North America today destroy topsoil at the rate of six billion tons per year. In the United States alone, we dump 80 billion pounds of toxic wastes into our waters annually. Twenty-two acres of tropical rain forest are demolished each minute, an area the size of a football field every second of every day. A million species of plants and animals will be extinct by the turn of the century. Dr. Mostafa Tolba, director general of the United Nations Environment Program, says that the destruction of genetic material and environment has reached such a pitch that "we face, by the turn of the century, an environmental catastrophe as complete, as irreversible as any nuclear holocaust." These figures, combined with what we experience daily, are both mind-boggling and numbing.
Moreover, humanity possesses the power to destroy creation. Jonathan Schell in The Fate of the Earth correctly identifies this as an ecological peril: "The nuclear peril is usually seen in isolation from the threats to other forms of life and their ecosystems, but in fact it should be seen as the very center of the ecological crisis." It is also a spiritual peril. Disarmament and the fate of the planet are interlinked.
Humanity now possesses the power to create and manufacture new forms of life. Humanity's ability to alter the basic design of living things and bring into being totally new forms of life marks a watershed in our relationship to God's creation. Society's understanding of nature and reality is being transformed by the ability to create and market life itself. In our contemporary technological ability to destroy and create life, humanity strives, in belief and in practice, to replace God as Creator and Sustainer of all.
Beyond humanity's power of life and death over creation, the global environment continues to deteriorate in large part because the lifestyle of an affluent minority puts tremendous drains on its resources. The prevailing model of economic development assumes that the resources of the earth are valuable only insofar as they may be exploited, that humanity is free to conquer the earth, and that the resultant riches prosper the conquerors. Scarcity of global resources and threats to the earth's life-supporting capacity stem from this distortion in humanity's relationship to creation.
Tragically, the churches have been slow to bring forward life-affirming understandings of the earth and its ecology. There is no comprehensive treatment of what spiritual resources might be brought to bear in response to the environmental problems caused by industry, urbanization, nuclear power, and the application of technology on a huge scale. The spiritual resources of any nation are basic for a healthy life in the present and a future with integrity. Does the Christian faith have resources to shape and to redeem humanity's relationship with creation? What theological questions need to be looked at? What biblical texts lie untapped and unexplored?
A. The Genesis Creation Story: The most obvious biblical texts are the first 11 chapters of Genesis. The doctrine of creation as recorded in Genesis includes three affirmations about the universe and the human race: 1. The universe did not initially bring itself into being but God brought it into being and God continues to sustain it. 2. Humankind through sin and disobedience has violated and devastated the world in which God has created human, animal, and inanimate life on earth. 3. Human beings were created for mutually sustaining relationships with one another, with the creation, and with God.
In Genesis, the account of God's relationship with creation, and humanity's role, begins with the creation and continues on through the ninth chapter, with the story of Noah and the flood. The Genesis stories are a rich source of what we might call spiritual ecology. Humankind is made in God's image (Gen. 1:27). Genesis 1:26-28 speaks of humanity having dominion over creation. Genesis 2 stresses tilling the earth and replenishing it. The story of the fall in Genesis 3 describes the disastrous effects of human sin. Following Adam's sin, the ground was cursed (Gen. 3:17), and after Cain's murder of Abel, Cain is "cursed from the ground," which is no longer fruitful. Cain is consigned to wander restlessly in the land of Nod. But the story does not stop there. In Genesis 5:29, when Noah is born, God promises relief from the hard labor resulting from God's curse upon the ground. And this promise is fulfilled after the flood. The Lord declares, "Never again will I curse the ground because of humankind... While the earth remains, seedtime and harvest, cold and heat, summer and winter, day and night, shall not cease" (Gen. 8:21-22).
The central point of Noah's story and the ark, however, is the covenant established by God with "living things of every kind." Here, for the first time, the word covenant is explicitly used and addressed to humankind. However, God's covenant is established not just with people; it is a covenant with all creation. Five times in Genesis 8 and 9 the scope of God's covenant is repeated: a covenant between God and every living creature, with "all living things on earth of every kind." God's faithful love extends to and includes all that has been made. The rainbow is the sign of this promise.
The Genesis story of creation is completed as it began: with the assurance of God's faithful and saving relationship to the world. The rainbow reminds us that creation is not merely the stage for the drama between God and humankind, that the promises given by God are directed not only to humanity, but to the creation that upholds all life as well.
B. The "Wisdom Literature" and Creation: The wisdom literature is particularly rich in the theology of creation. In general, wisdom literature focuses on creation, including human experience, in an open search for God's truth. It then seeks to order life according to the truth that is discovered. In this way, the guiding, nurturing presence of God is revealed.
Proverbs 3:19-20 states: "The Lord by wisdom founded the earth; by understanding God established the heavens; by God's knowledge the deeps broke forth, and the clouds drop down the dew."
Such passages see God's wisdom as both the source of creation and as reflected in power and beauty throughout creation.
The most powerful portrayal of God's relationship to creation within the wisdom literature, and perhaps in all the Bible, is found at the end of the book of Job, in chapters 38 through 42. God's answer to Job comes as a series of questions poetically stressing God's presence in all creation. "Where were you when I laid the foundation of the earth? Tell me, if you have understanding... Have you commanded the morning since your days began and caused the dawn to know its place... Have you comprehended the expanse of the earth? Declare, if you know all this... From whose womb did the ice come forth, and who has given birth to the hoarfrost of heaven" (Job 38:4, 12, 18, 29). Job responds, "Behold, I am of small account; what shall I answer thee? I lay my hand on my mouth" (40:4). Contrary to attitudes of humanity's dominance over the earth, this vision disarms human arrogance and self-sufficiency, and calls for a stance of humble awe and wonder towards the divinely ordered ecology of the created world.
C. The Brethren Understanding of Creation has been less doctrinal than confessional, affirming our total dependence upon God the Creator. The Scriptures speak quite plainly: "In the beginning God created the heavens and the earth" (Gen. 1:1). God's power is not limited. God creates solely by the might of God's Word: original, dynamic, gracious, all-powerful. The same word of God active in creation is active also in redemption (John 1:1-3). The very God who created all things is also the maker of a "new heaven and a new earth" (Rev. 21:1a). So the fitting response of all creatures, Brethren believe, is obedient gratitude for the gift of life, yes, of new life in Christ through the Spirit (Brethren Encyclopedia, Vol. 1, p. 351).
Early Anabaptist theology did not separate God's purpose for humanity from God's purpose for the rest of creation. Brethren accepted the Genesis creation story that the human creature was placed in the garden after all the necessary elements of human survival had been produced: the ecosphere of air, water, warmth, nurture, sustenance, etc. Thus God's purpose to bring about reconciliation between God and humanity must include creation. Because of the absolute dependence of humankind on the environment and vice versa (humankind is part of the environment), a plan of shalom for humanity excluding nature would be unthinkable. The fall included an alienation of humanity and nature from God, which could only be reconciled with the redemption of both. When creation is redeemed, it will happen simultaneously with the total redemption of humanity and nature (Rom. 8).
The biblical text that most strongly molds our understanding of creation is the prologue to the Gospel of John. This text declares that God's act of creation and the incarnation in Christ are inseparable. The Word (logos) is the means of the world's creation. And the Word, present with God, goes forth from God in the incarnation and returns to God. "No single thing was created without him. All that came to be was alive with his life" (John 1:3-4).
This passage, and others as well, emphasizes the intimacy of relationship between the Creator, creation, and God's redemptive love for all creation in the incarnation of Jesus. The purpose, destiny, and fulfillment of humanity and creation are to be found in its relationship to the Creator, who came upon this earth not as a domineering master, but as a servant and friend.
The Hebrew Scriptures are a record of the relationship between Israel, creation, and Yahweh. Relationship to creation focuses around the land. Hebrew Scripture scholar and theologian Walter Brueggemann goes so far as to say, "The Bible itself is primarily concerned with the issue of being displaced and yearning for a place...land is a central, if not the central theme of biblical faith" (The Land).
God as creator is considered in the biblical tradition to be the sole owner of the earth. At the heart of creation faith is the understanding that "the land is mine; for you are strangers and sojourners with me" (Lev. 25:23). "The earth is the Lord's" (Psa. 24:1). Yet while no individual Israelite was to imagine that they possessed any land in their own right, God gave the land to Israel as a whole (Deuteronomy 1:8). Certain families within Israel used the land allotted to them (Joshua 13 ff.), but only on condition that all members of the tribe or family might share in the income derived from the land. Any monopolizing of land was, therefore, a serious failure in worship.
Continually, the prophets warned that the land has seductive power. The temptation is to cling to it, possess it, manage it, rule over it, and own it, treating it as though it were one's own domain rather than cherishing it and, as stewards, holding it in trust as Yahweh's gift. The gift of land to the people of Israel was conditional upon living within that land as if it were Yahweh's and they were Yahweh's people. But because they forgot this, choosing instead to possess the land as if it were their own, they lost it. That is the judgement announced by Jeremiah.
Israel's relationship to the land can symbolize humanity's relationship to creation. Saving that creation and our place within it can come only by treating it as God's gift rather than our possession. We need to confess that Western Christianity has been extremely weak in proclaiming a gospel of a humble and nurturing love for creation. Part of the reason may be that we have strayed far from this conviction of divine ownership of the land, of equal sharing of all families in the use of it.
Biblical creation ethics is essentially sabbath ethics, for the sabbath is the law of creation. According to Exodus 23:10-11, in the seventh year Israel is to leave the land untouched, "that the poor of your people may eat." In Leviticus 25:1-7, the law of the sabbatical year is repeated, so that "the land may celebrate its great sabbath to the Lord." The sabbath rest for the land every seven years contains God's blessing for the land. Moreover, the sabbath rest is a piece of deep ecological wisdom and sharply contrasts with the destructive practices of much of modern industrialized agriculture.
Biblical passages frequently suggest that humanity's rebellion against God results in the land itself suffering, mourning, and becoming unfaithful. Our modern culture has all but lost this vision of the land. Jeremiah 2:7 refers to the unfaithfulness and sins of humanity expressed in the destruction of the environment. It says, "I brought you into a plentiful land to enjoy its fruits and its good things. But when you came in you defiled my land..." That's exactly what we have done.
Contrasted with the wondrous pictures of creation's intended harmony and wholeness given in the Scriptures, environmental ruin is a direct offense against God the Creator. Indeed, biblical insight names human sin as the cause of our deteriorating environment. Selfish lives alienated from God's purposes and love quite literally cause the land to mourn and the whole creation to be in travail. "How long will the land mourn, and the grass of every field wither?" asks Jeremiah (12:4). The biblical answer carries promise for the renewal of the created order, continually springing fresh from the resources of God's grace. Just as God responds to human sin and rebellion with the invitation to new life, the response to the degradation of the earth is the concrete hope for restoring "shalom" and, in the words of the Psalm, the renewing of the face of the earth (Psa. 104:30).
Though Brethren theological understandings have not referred explicitly to the preservation of the earth, Brethren practice has tended in that direction. A community of believers who would live in harmony must seek a redemptive relationship with their environment. By nurturing the earth, the Brethren achieved prosperity that set a trend for Brethren for generations. Doing the Creator's will in a faithful community requires a recognition that the created world in which humans move and have their being is not irrelevant but is the very context in which faithfulness to God is expressed.
E. The Renewal of Creation: Given that God has established a covenant with all creation; that God, humanity, and creation are bound together in an interdependent relationship; and that creation is an expression of God and its destiny lies in relationship to God, it follows that God's work of redemption through Christ extends to the creation.
Sin breaks the intended fellowship and harmonious relationship between God, humanity, and creation. The reign of sin and death alienates God from humanity and creation and propels the earth toward self-destruction. But in Christ the power of sin and death is confronted and overcome, and creation is reconciled to God. In and through the incarnation of the divine word, humanity and the whole creation are enabled to taste "new life." For through the crucifixion and resurrection of Christ, God has inaugurated the renewal of this broken world. In his own person, Jesus Christ exemplifies the glorious destiny of a transfigured creation.
The parables and teachings of Jesus are filled with examples drawn from the realm of nature. Vineyards, soil, fruit, seeds, and grain are the frequent examples used by Jesus to explain God's truth. And the Sermon on the Mount includes a direct, but often overlooked, teaching regarding our relationship to creation. "Blessed are the meek," Jesus said, "for they shall inherit the earth" (Matt. 5:5).
God's love for the world, for the whole cosmos, is the resounding biblical theme and the reason for God's embrace of the world in Jesus Christ. Paul's writing in Romans underscores these truths. Paul's letter explains to us the relationship between God's work of redemption in our own lives and in all creation. The final victory has been won by Christ. We belong to God. And the whole world belongs to God. We have become new, claimed by the power of God's spirit. Likewise, creation has entered into this renewal. The power of sin in its midst, which has wrought destruction, is not the final word. Rather, the goodness of creation and God's stewardship of the creation are its final destiny.
F. The Worth of Creation in and of Itself: To say that creation, the natural order, has worth in and of itself means that nature's value exists independently of humankind. It has a right to exist unconnected to human interest.
Before the European Renaissance and the European conquest of America, Africa and Asia, land, water, forest, and air were regarded as God's property, left to human beings for common use. It was the Renaissance that deprived nature of its rights and declared it to be "property without an owner," property that belonged to the one who took possession of it by occupation. Today, only the air is available for common use. If we would live with integrity in the community of creation, then before all else the rights of the earth as a system and the rights of all species of animals and plants must be recognized by human beings. We need to codify the "rights of the earth and of all life" parallel to the 1948 "Universal Declaration of Human Rights."
Creation is good in and of itself as God's intention and work. This applies to all the beings, animate and inanimate, made by God. The world of sea and forest, desert and field with myriad creatures became, after all, the very ground of the incarnation of the Word. The created order is dependent on God in its own way and finds its meaning and purpose in God. Human action that tends to disrupt or destroy a part of the created order is, therefore, interfering with God's plan. This understanding does not amount to a worship of nature but a recognition of the transcendent power and sovereignty of God. The protection of the created order is, therefore, required of all human beings, for they alone have the power to impose their will on other created orders.
G. Justice: The vision of redeemed creation is that of a harmonious, abundant, and secure life together. Biblically speaking, justice is that which makes for wholeness in nature, in persons, and in society. This concept of justice does not originate with the great prophets of the sixth and eighth centuries B.C.; it stems from an ancient understanding of creation as harmonious world order. When the Hebrew confession says again and again that Yahweh is "just," it means that God fashions order from chaos, holds back the chaos, and balances things anew when chaos intrudes. Justice is the achievement of harmony.
Hebrew understanding of justice is all-inclusive. It does not refer only to human relationships and human events. It applies to all of nature, human and non-human. Its many-sidedness may refer one time to human events and another time to events we assign to nature (e.g., the flood). Such an understanding of justice is more all-embracing than the one in the Anglo-American moral tradition, which defines justice narrowly to include liberty and equality and is human-centered. Moreover, the many dimensions of the Hebrew view forbid using a single word to bear the full notion of justice. Therefore, words like righteousness, loving-kindness, faithfulness, completeness, integrity, order, instruction, peace, wholeness, equity, and "justice" are used to represent what is meant by justice. Nonetheless, for persons of faith justice is the right and harmonious ordering of life in all its dimensions under the sovereignty of God.
The biblical vision of God's intention for humankind living in harmonious relationship with creation (e.g., Gen. 1-3; Psa. 104; Rom. 8) is available to the church, though we have often neglected such relationship with creation.
The creation story in Genesis says that humanity is created to live in harmony with creation. The Bible knows nothing of a right relationship with God the Creator that does not include a right relationship with the creation: with land and mountains, oceans and skies, sun and moon, plants and animals, wind and rain. Our vocation is to walk with God in gently tending God's wonderful, strong, fragile, and enduring creation. The meaning of our existence is found in this vocation.
We are called to be stewards and partners in God's continuing creation. Christian ecology or Christian stewardship is rooted in the Scriptures and flows from caring for all creation. Christian stewardship is doing the Creator's will in caring for the earth and striving to preserve and restore the integrity, stability, and beauty of the created order: a response in God's image and service to creation's eager expectation of redemption. Christian stewardship is living with respect for the earth so that creation is preserved, brokenness is repaired, and harmony is restored. Christian stewardship seeks the Creator's reign: a reign redeemed of human arrogance, ignorance, and greed.
Creation gives God glory and honor. The gift of environment came forth from God's creative word and is a testimony to God's wonder and love. Christians have no less a calling than to participate in the preservation and renewal of this precious gift. With the words of Revelation, we can then proclaim in word and deed,
Worthy art thou, our Lord and God,
to receive glory and honor and power,
for thou didst create all things,
and by thy will they existed and were created. (Rev. 4:11)
At the heart of the crisis lies the world view of Western culture. With the Enlightenment and the scientific revolution, Western culture came to assume that humanity had both the right and the duty to dominate nature. The view of life became secularized; we came to understand the world apart from any relational reference to God. The purpose of objective, scientific knowledge was to exercise power over creation, which became "nature": raw material existing only for exploitation.
Science and technology placed an immense range of power in human hands. Modern means of production are the basis for today's economy and provide possibilities that have never existed before. Abuse of technology is largely responsible for the increasing exploitation and destruction of the environment. Technology has brought many blessings but has also developed into a threat to the human future (for example, Three Mile Island). It has created complex systems in which even small human errors can be disastrous.
The roots of the crisis, however, are to be sought in the very hearts of humankind. We harbor the illusion that we human beings are capable of shaping the world. Such pride leads to an overestimation of our human role with respect to the whole of life, to the support of constant economic growth without reference to ethical values, to the conviction that the created world has been put into our hands for exploitation rather than for care and cultivation, to a blind faith that new discoveries will solve problems as they arise, and to the subsequent neglect of the risks of our own making.
We need the resources of science and technology as we face the future. But if we are to serve the cause of justice, peace, and the preservation of the environment, we must radically re-evaluate the expectations that science and technology have generated. As Christians we cannot uncritically advocate any view of human progress which does not promote human wholeness. Therefore, we must not share unqualified confidence in human achievement. We must also resist the growing tendency toward feelings of powerlessness, resignation, and despair. Christian hope is a movement of resistance against fatalism. It is through conversion to Christ, who came that we might live abundantly, that the full meaning of human life is revealed.
Faced with a threatened future of humanity, we confess the truth of the gospel. Listening to the word of God, we believe that the future will become open to us as we turn to Jesus Christ and accept our responsibility to live in Christ and in God's image. Believing that the crisis in which we find ourselves ultimately has its roots in the fact that we have abandoned God's ways, we proclaim that God opens the future to those who turn to God.
We confess that we do not possess God's final truth. We have failed in many ways; we have often not lived up to God's calling, and have failed to proclaim the truth of Jesus Christ. Our witness has often been unclear, for we have disregarded the prophetic voices who warned us against impending dangers and have been blind to the gospel's claim upon us in respect to justice, peace, and the integrity of creation. We need a new beginning.
We confess our failure both as a church and as individual members of Christ. We have failed to witness to the dignity and sanctity of all life and to God's care for all its creation.
We have failed to develop a lifestyle that expresses our self-understanding as participants, stewards, and servants of God's creation. We have failed to consistently challenge political and economic systems that misuse power and wealth, that exploit resources for their self-interest, and that perpetuate poverty.
We pray for God's forgiveness and commit ourselves to seek ways:
We believe that:
1. God, Creator of heaven and earth and all earth's creatures, looks lovingly upon all the works of creation and pronounces them good.
2. God, our Deliverer, acts to protect, restore and redeem the earth and all its creatures from sinful human pride and greed that seeks unwarranted mastery over the natural and social orders.
3. God in Jesus Christ reunites all things and calls humans back from sinful human sloth and carelessness to the role of the steward, the responsible servant, who as God's representative cares for creation, for all life, both animate and inanimate.
4. God our Creator-Deliverer acts in the ecological-social crisis of our time, demonstrating today the same divine love shown on the cross of Christ. As a covenant people, we are called to increase our stewardship, in relation both to nature and to the political economy, to a level in keeping with the peril and promise with which God confronts us in this crisis.
5. All creation belongs to God (Psa. 24). God, not humanity, is the source, the center, the depth and the height of all creation. The whole creation is ordered to the glory of God (Rev. 1:8). Human beings, both individually and collectively, have no right to systematically abuse or dispose of nature for their own ends.
6. Even amid human violation and devastation, God is at work renewing creation. One important way is through humans who join God in reconciling and restoring the earth to its new creation.
7. Human dominion in God's image is not mastery, control, and possession, but a stewardship of love for and service of this world in God's name. Such stewardship respects the integrity of natural systems and lives within the limits that nature places on economic growth and material consumption.
As we face the threats to survival, we realize that we are entering a new period of history. Humanity has itself created the capacity to destroy all life. The end of creation and of human life is now a possibility. How can the churches proclaim the gospel in this situation? How are we to speak of God's grace and forgiveness? Can we point to possible new departures? What is Christian hope in the face of the temptation simply to survive?
The church is that part of creation that has received and covenanted itself to embody God's redemption in Christ. As Paul writes in Romans, the whole created universe yearns with eager expectation for the children of God to be revealed. As the body of Christ we are to live out a new and restored relationship to the creation which itself has been won back to God by Christ's redemptive death and resurrection. The church therefore is to live as a visible sign of a restored relationship among humanity, the creation, and God.
Our situation raises new issues that need urgent and open discussion, for example:
As we enter the post-Cold War era and struggle to address this new environmental agenda, the church needs to be aware of conflicts which are perhaps deeper and harder to resolve than the Cold War itself. Europe, the United States, and the USSR, which promoted industrial development in the Third World, are now seeking to curb further growth in order to protect their own self-interests. All too often the First World has not given adequate consideration to the impact its policies have on the quality of life of Third World people.
There is little likelihood of resolving this new conflict, where the "haves" remain the haves and the "have-nots" become the "never-shall-haves," unless the northern industrialized countries, particularly the consumer societies of the West, change their lifestyle. The average family in the United States affects the environment 40 times more than a family in India and a hundred times more than a Kenyan family. On a per capita basis, the United States uses 45 times more energy than India.
At the end of World War II there were about 2.4 billion people in the world. Now there are 5.3 billion. If the current rates of growth continue, nearly a billion more people will be sharing the planet by the year 2000. There are already many places where human concentrations have overwhelmed the present ability of the environment to support them at a quality of life that is humane and acceptable. The breakdown is evident in many developing countries but by environmental standards, wealthier countries are at least as guilty of overburdening the environment because they consume more resources per capita and rely on more disruptive technologies. Certainly attention to population growth is necessary to maintain life on planet Earth.
At this more profound level of conflict the enemy is not external; it is us. In the crime of ecological destruction we are both criminal and victim. More precisely, since industrialism's ravenous appetite daily diminishes the health and life of the ecosystem, the conflict is between us and our children: our lifestyle versus their future.
The environment does not depend upon us. It is clearly the other way around. The question is whether we humans have the will to respect and maintain the environment so that our kind may continue to inhabit the earth. The question is still open. It could be that we humans do not have a future.
Today, our Western culture is being undermined by an emphasis on exploitation, comfort, and convenience. It seems difficult for persons to consider that their small actions affect the environment and the ultimate success or demise of humanity. Our attitude seems to be, if it's comfortable, if it's convenient, if it's profitable, do it. Can a culture repent and take steps to halt its deterioration? There are some signs of hope but there are also signs that the lesson is not yet learned; that comfort and convenience are more important than care of the environment. The environment will no doubt survive. The question is "will our kind remain?"
As Christians, we can reform our theology and contribute to society a new appreciation for the sacredness of all creation. Individually and collectively, we can change the way we live so that instead of destroying the earth, we help it to thrive, today and for future generations to come. As a church, are we ready to commit ourselves to this challenge?
The Creator-Redeemer seeks the renewal of the creation and calls the people of God to participate in saving acts of renewal. We are called to cooperate with God in the transformation of a world that has not fulfilled its divinely given potential for beauty, peace, health, harmony, justice, and joy (Isa. 11:6-9; Mic. 4:3-4; Eph. 2:10; Rev. 21:1-5). Our task is nothing less than to join God in preserving, renewing and fulfilling the creation. It is to relate to nature in ways that sustain life on the planet, provide for the essential material and physical needs of all humankind, and increase justice and well-being for all life in a peaceful world.
Therefore, the Church of the Brethren Annual Conference
Further, the General Board calls upon local and state governments to enhance and expand constructive action for the care of the environment that would lead to a higher quality of life for all citizens. It also calls upon the Federal government to:
S. Joan Hershey, Chair
Donald E. Miller, General Secretary
Action of the 1991 Annual Conference: The resolution from the General Board on CREATION: CALLED TO CARE was presented by S. Joan Hershey, chair, and Shantilal Bhagat, staff. The report was adopted with two (2) amendments by the delegate body, both of which have been incorporated in the wording of the preceding text.
Note: Comments on this article may be posted either here on HNN or at the website of Inside Higher Ed, where this article was first published. -- Editors
In the buildup to the vote by a House of Representatives committee officially calling for U.S. foreign policy to recognize that a genocide of Armenians took place during World War I, at the behest of the “Young Turk” government of the Ottoman Empire, a flurry of advertising in American newspapers appeared from Turkey.
The ads discouraged the vote by House members, and called instead for historians to figure out what happened in 1915. The ads quoted such figures as Condoleezza Rice, the secretary of state, as saying: “These historical circumstances require a very detailed and sober look from historians.” And State Department officials made similar statements, saying as the vote was about to take place: “We think that the determination of whether the events that happened to ethnic Armenians at the end of the Ottoman Empire should be a matter for historical inquiry.”
Turkey’s government also has been quick to identify American scholars (there are only a handful, but Turkey knows them all) who back its view that the right approach to 1915 is not to call it genocide, but to figure out what to call it, and what actually took place.
Normally, you might expect historians to welcome the interest of governments in convening scholars to explore questions of scholarship. But in this case, scholars who study the period say that the leaders of Turkey and the United States — along with that handful of scholars — are engaged in a profoundly anti-historical mission: trying to pretend that the Armenian genocide remains a matter of debate instead of being a long settled question. Much of the public discussion of the Congressional resolution has focused on geopolitics: If the full House passes the resolution, will Turkey end its help for U.S. military activities in Iraq?
But there are also some questions about the role of history and historians in the debate. To those scholars of the period who accept the widely held view that a genocide did take place, it’s a matter of some frustration that top government officials suggest that these matters are open for debate and that this effort is wrapped around a value espoused by most historians: free and open debate.
“Ultimately this is politics, not scholarship,” said Simon Payaslian, who holds an endowed chair in Armenian history and literature at Boston University. Turkey’s strategy, which for the first 60-70 years after the mass slaughter was to pretend that it didn’t take place, “has become far more sophisticated than before” and is explicitly appealing to academic values, he said.
“They have focused on the idea of objectivity, the idea of ‘on the one hand and the other hand,’ ” he said. “That’s very attractive on campuses to say that you should hear both sides of the story.” While Payaslian is quick to add that he doesn’t favor censoring anyone or firing anyone for their views, he believes that it is irresponsible to pretend that the history of the period is uncertain. And he thinks it is important to expose “the collaboration between the Turkish Embassy and scholars cooperating to promote this denialist argument.”
To many scholars, an added irony is that all of these calls for debating whether a genocide took place are coming at a time when emerging new scholarship on the period — based on unprecedented access to Ottoman archives — provides even more solid evidence of the intent of the Turkish authorities to slaughter the Armenians. This new scholarship is seen as the ultimate smoking gun as it is based on the records of those who committed the genocide — which counters the arguments of Turkey over the years that the genocide view relies too much on the views of Armenian survivors.
Even further, some of the most significant new scholarship is being done by scholars who are Turkish, not Armenian, directly refuting the claim by some denial scholars that only Armenian professors believe a genocide took place. In some cases, these scholars have faced death threats as well as indictments by prosecutors in Turkey.
Those who question the genocide, however, say that what is taking place in American history departments is a form of political correctness. “There is no debate and that’s the real problem. We’re stuck and the reality is that we need a debate,” said David C. Cuthell, executive director of the Institute for Turkish Studies, a center created by Turkey’s government to award grants and fellowships to scholars in the United States. (The center is housed at Georgetown University, but run independently.)
The action in Congress is designed “to stifle debate,” Cuthell said, and so is anti-history. “There are reasonable doubts in terms of whether this is a genocide,” he said.
The term “genocide” was coined in 1944 by Raphael Lemkin, a Jewish-Polish lawyer who was seeking to distinguish what Hitler was doing to the Jews from the sadly routine displacement and killing of civilians in wartime. He spoke of “a coordinated plan of different actions aiming at the destruction of essential foundations of the life of national groups, with the aim of annihilating the groups themselves.” Others have defined the term in different ways, but common elements are generally an intentional attack on a specific group.
While the term was created well after 1915 and with the Holocaust in mind, scholars of genocide (many of them focused on the Holocaust) have broadly endorsed applying the term to what happened to Armenians in 1915, and many refer to that tragedy as the first genocide of the 20th century. When in 2005 Turkey started talking about the idea of convening historians to study whether a genocide took place, the International Association of Genocide Scholars issued a letter in which it said that the “overwhelming opinion” of hundreds of experts on genocide from countries around the world was that a genocide had taken place.
Specifically it referred to a consensus around this view: “On April 24, 1915, under cover of World War I, the Young Turk government of the Ottoman Empire began a systematic genocide of its Armenian citizens — an unarmed Christian minority population. More than a million Armenians were exterminated through direct killing, starvation, torture, and forced death marches. The rest of the Armenian population fled into permanent exile. Thus an ancient civilization was expunged from its homeland of 2,500 years.”
Turkey has put forward a number of arguments in recent years, since admitting that something terrible did happen to many Armenians. Among the explanations offered by the government and its supporters are that many people died, but not as many as the scholars say; that Armenians share responsibility for a civil war in which civilians were killed on both sides; and that the chaos of World War I and not any specific action by government authorities led to the mass deaths and exiles.
Beyond those arguments, many raise political arguments that don't attempt to deny that a genocide took place, but say that given Turkey's sensitivities it isn't wise to talk about it as such. This was essentially the argument given by some House members last week who voted against the resolution, saying that they didn't want to risk anything that could affect U.S. troops. Similarly, while Holocaust experts, many of them Jewish, have overwhelmingly backed the view that Armenians suffered a genocide, some supporters of Israel have not wanted to offend Turkey, a rare Middle Eastern nation to maintain decent relations with Israel and a country that still has a significant Jewish population.
Dissenters or Deniers?
Probably the most prominent scholar in the United States to question that genocide took place is Bernard Lewis, an emeritus professor at Princeton University, whose work on the Middle East has made him a favorite of the Bush administration and neoconservative thinkers. In one of his early works, Lewis referred to the “terrible holocaust” that the Armenians faced in 1915, but he stopped using that language and was quoted questioning the use of the term “genocide.” Lewis did not respond to messages seeking comment for this article. The Armenian National Committee of America has called him “a known genocide denier” and an “academic mercenary.”
The two scholars who are most active on promoting the view that no genocide took place are Justin McCarthy, distinguished university scholar at the University of Louisville, and Guenter Lewy, a professor emeritus of political science at the University of Massachusetts at Amherst. Both of them are cited favorably by the Turkish embassy and McCarthy serves on the board of the Institute of Turkish Studies.
McCarthy said in an interview that he is a historical demographer and that he came to his views through “the dull study of numbers.” He said that he was studying population trends in the Ottoman Empire during World War I and that while he believes that about 600,000 Armenians lost their lives, far more Muslims died. “There’s simply no question,” he said, that Armenians killed many of them.
The term genocide may mean something when talking about Hitler, McCarthy said, “where you have something unique in human history.” But he said it was “pretty meaningless” to use about the Armenians. He said that he believes that between the Russians, the Turks and the Armenians, everyone was killing everyone, just as is the case in many wars. He said that to call what happened to the Armenians genocide would be the equivalent of calling what happened to the South during the U.S. Civil War genocide.
So why do so many historians see what happened differently? McCarthy said the scholarship that has been produced to show genocide has been biased. “If you look at who these historians are, they are Armenians and they are advancing a national agenda,” he said. Cuthell of the Institute for Turkish Studies said that it goes beyond that: Because the Armenians who were killed or exiled were Christians (as are many of their descendants now in the United States), and those accused of the genocide were Muslims, the United States is more sympathetic to the Armenians.
Lewy said that before he started to study the issue, he too believed that a genocide had taken place. He said that intellectuals and journalists "simply echo the Armenian position," which he said is wrong. He said that there is the "obvious fact" that large numbers of Armenians were killed and he blamed some of the skepticism of Turkey's view (and his) on the fact that Turkey for so long denied that anything had taken place, and so lost credibility.
In 2005, the University of Utah Press published a book by Lewy that sums up his position, Armenian Massacres in Ottoman Turkey: A Disputed Genocide. Lewy’s argument, he said in an interview, “is that the key issue is intent” and that there is “no evidence” that the Young Turks sought the attacks on the Armenians. “In my view, there were mass killings, but no intent.” Lewy’s argument can also be found in this article in The Middle East Forum, as can letters to the editor taking issue with his scholarship.
The Evidence for Genocide
Many scholars who believe that there was a genocide say that Lewy ignored or dismissed massive amounts of evidence, not only in accounts from Armenians, but from foreign diplomats who observed what was going on — evidence about the marshaling of resources and organizing of groups to attack the Armenians and kick them out of their homes, and the very fact of who was in control of the government at the time.
Rouben Adalian, director of the Armenian National Institute, called the Lewy book part of an “insidious way to influence Western scholarship and to create confusion.” He said it was “pretty outrageous” that the Utah press published the book, which he called “one of the more poisonous products” to come from “those trying to dispute the genocide.”
John Herbert, director of the University of Utah Press, is new in his job there and said he wasn’t familiar with the discussions that took place when Lewy submitted his book. But he said that “we want to encourage the debate and we’ve done that.”
Notably, other presses passed on the book. Lewy said he was turned down 11 times, at least 4 of them from university presses, before he found Utah. While critics say that shows the flaws in the book, Lewy said it was evidence of bias. “The issue was clearly the substance of my position,” he said.
Of course the problem with the “encouraging the debate” argument is that so many experts in the field say that the debate over genocide is settled, and that credible arguments against the idea of a genocide just don’t much exist. The problem, many say, is that the evidence the Turks say doesn’t exist does exist, so people have moved on.
Andras Riedlmayer, a librarian of Ottoman history at Harvard University and co-editor of the H-TURK e-mail list about Turkish history, said that in the ’80s, he could remember scholarly meetings “at which panels on this issue turned into shouting matches. One doesn’t see that any more.” At this point, he said, the Turkish government’s view “is very much the minority view” among scholars worldwide.
What’s happening now, he said, outside of those trying to deny what took place, “isn’t that the discussion has diminished, but that the discussion is more mature.” He said that there is more research going on about how and why the killings took place, and the historical context of the time. He also said that he thought there would be more research in the works on one of “the great undiscussed issues of why successive Turkish governments over recent decades have found it worthwhile to invest so much political capital and energy into promoting that historical narrative,” in which it had been “fudging” what really happened.
Among the scholars attracting the most attention for work on the genocide is Taner Akçam, a historian from Turkey who has been a professor at the University of Minnesota since 2001, when officials in Turkey stepped up criticism of his work. Akçam has faced death threats and has had legal charges brought against him in Turkey (since dropped) for his work, which directly focuses on the question of the culpability of Young Turk leaders in planning and executing the genocide. (Akçam’s Web site has details about his research and the Turkish campaigns against him.) Opposition to his work from Turkey has been particularly intense since the publication last year of A Shameful Act: The Armenian Genocide and the Question of Turkish Responsibility.
In an interview, Akçam said that his next book — planned for 2008 — may be “a turning point” in research on the genocide. He is finishing a book on what took place in 1915 based only on documents he has reviewed in Ottoman archives — no testimony from survivors, no documents from third parties. The documents, only some of which he has written about already, are so conclusive on the questions Turkey pretends are in dispute, he said, that the genocide should be impossible to deny.
To those like Lewy who have written books saying that there is no evidence, “I laugh at them,” Akçam said, because the documents he has already released rebut them, and the new book will do so even more. “There is no scholarly debate on this topic,” he said.
The difficulty, he said, is doing the scholarship. In the archives in Turkey, he said, the staff are extremely professional and helpful, even knowing his views and his work. But he said that he has received numerous death threats and does not feel safe in Turkey for more than a few days, and even then must keep a low profile. As to legal risks, he said that laws on the books that make it illegal to question the Turkish state on certain matters, are inconsistently enforced, so while he has faced legal harassment, he generally felt that everything would work out in the end. But Akçam is well known, has dual German-Turkish citizenship, and a job at an American university, and he said those are advantages others do not have.
He plans to publish his next book first in Turkey, in Turkish, and then to translate it for an American audience.
Another scholar from Turkey working on the Armenian genocide is Fatma Müge Göçek, an associate professor of sociology at the University of Michigan. Until she came to Princeton to earn her Ph.D., Göçek said that she didn’t know about the Armenian genocide. For that matter, she said she didn’t know that Armenians lived in Turkey — “and I had the best education Turkey has to offer.”
Learning the full history was painful, she said, and started for her when Armenians she met at Princeton talked to her about it and she was shocked and angry. Upon reading the sorts of materials she never saw in Turkey, the evidence was clear, she said.
Göçek’s books to date have been about the Westernization of the Ottoman Empire, but she said she came to the view that she needed to deal with the genocide in her next book. “I have worked on how the Ottoman Empire negotiated modernity,” she said, and the killings of 1915 are part of “the dark side of modernity.”
So the book she is writing now is a sociological analysis of how Turkish officials at the time justified to themselves what they were doing. She is basing her book on the writings these officials made themselves in which they frame the issue as one of “the survival of the Turks or of the Armenians” to justify their actions. While Göçek will be focusing on the self-justification, she said that the diaries and memoirs she is citing also show that the Turkish leaders knew exactly what they were doing, and that this wasn’t just a case of chaos and civil war getting out of hand.
Göçek said she was aware of the harassment faced by Akçam and others from Turkey who have stated in public that a genocide took place. But she said scholars must go where their research leads them. “That is why one decides to become an academic — you want to search certain questions. If you do not want to, and you are not willing to, you should go do something else.”
This article was first published at the website of Inside Higher Ed and is reprinted with permission of the author.
ingrid ericsen - 10/12/2009
Turks being seen as subhuman back then...
I think it is a well-earned label, because that is how the Turks treated the Armenians: as if they were subhuman.
And it continues today in Turkey's cowardly denial of the globally well-documented Armenian genocide.
ingrid ericsen - 10/12/2009
As the granddaughter of an Armenian survivor (who, as a teenager, witnessed the slaughter of half his siblings by the Turks), I'm appalled that this genocide is still being discussed and negotiated as if it were a figment of the survivors' imaginations! The Turks stole their food, their churches, their homes and their families. Before butchering them, the Turkish soldiers raped Armenian women and young girls. These atrocities are all well documented by many countries, and it disgusts me that this slaughter garners a mere fraction of the attention of another genocide, one that happened approximately 20 years later to the groups hated by the Nazis. If better technology (via video footage) had been available back then, this would not be something to debate today. The funny thing is, my grandfather told his stories to us, but he never spoke ill of the Turks, even after surviving what they did to him and his family.
A true Christian.
Fahrettin Tahir - 11/4/2007
In 1914 Armenians were seen as first world and Turks as subhumans.
mike morris - 10/27/2007
"The Armenian disaster was the only event in history where a third world nation hit white skinned christans."
Mr. Tahir, I have Armenian friends and Turkish friends, and their complexions are remarkably similar. And Armenia is less 'third world' than Turkey? I don't even know what Third World means, truthfully. But I understand your larger point: murderous Europeans slaughter and enslave innocent non-Europeans. Yes, and Europeans have slaughtered Europeans; non-Europeans have and continue to slaughter and enslave non-Europeans (Sudan, the Congo, Burma, etc.); and, yes, non-Europeans have slaughtered and enslaved Europeans. Would it be impolite to remind you that Islam attacked Christendom previous to any Christian attacks on Islam? I haven't heard any apologies for the Islamic slave trade, Constantinople circa 1453, or anything like that, either.
But on one very important point you and I seem to be in agreement: right now the West, in the unfortunate person of George W. Bush, is inflicting unjustifiable devastation (and threatening more) upon more than one non-European nation. He is a war criminal, and I make no excuses for him.
Fahrettin Tahir - 10/23/2007
Between 1820, when the entire Moslem population of what became independent as Greece was slaughtered, and 1912, when what had been Moslem-majority areas for half a millennium were turned into Christian-majority areas by mass murder and deportations, 5 million Europeans of Turkish culture were murdered to make Islam disappear from Europe. One of the Allied targets of the First World War was to complete this process by dividing up what was left of Turkey (today's Turkey) among Armenia, Greece and Russia. The Turks were to disappear from history. In 1914 the democratically elected representatives of the Armenian minority in the Ottoman parliament ran off to Russia for this purpose. They started a guerilla war behind the front by massacres of the Moslem population, to which the Ottoman government reacted the way governments at that point in history did. Remember: the USA finished off the Indians, the British murdered tens of millions of their Indians, Marx called Russia the state which had left no crime undone, the Germans solved the Hottentot problem, the French were last seen in Rwanda in the 1990s. The only two differences from what Turkey did were: 1- the existence of these European nations was not being endangered by the people they were killing, and 2- they were white-skinned Christians killing third world nations. The Armenian disaster was the only event in history where a third world nation hit white-skinned Christians. These are now working for revenge.
An act of parliament, which a US congressional resolution is, is a political act with political intentions. This resolution has two targets. One is what can be seen in Europe, where members of the Turkish minority, in order to have influence in the countries where they live, are asked to parrot Christian claims, and if they refuse are denied influence, thus effectively preventing the Turks in Europe from exercising their democratic rights. The second is preparing the ground for Armenian demands that Turkey give a part of her territory to Armenia, which had already occupied a fifth of the territory of the neighboring Azerbaijani republic. In a context where the West would also be discussing the return of the formerly Turkish territories in Europe this might even be acceptable, but there is of course no thinking of this. The resolution is effectively a declaration of war on Turkey. The Armenian fight for revenge is poisoning Turkey’s relationship with the West. Again, this would not be happening if anybody in the West thought about those 5 million, but they think that was quite OK, and have erased this event from their history books.
Turkey of course is not the only country being treated badly by the West. In each and every individual case the West convinces her inhabitants that they are dealing with nasty regimes, but the bottom line is that the situation has deteriorated to a level where Mr Bush has seen himself forced to warn about WWIII. He wasn’t saying that that would be because of the way the West is treating the rest of the world. But that is exactly the issue.
Vernon Clayson - 10/23/2007
I grant that history is largely analyzing past events, but had you read the original article you would have observed that it was the House of Representatives attempting to define history. They, not even pretending to be scholarly, brought up events of long ago that have faint bearing on their duties and responsibilities, and those events took place in a foreign country that may or may not be defined by the boundaries of that long-ago time, nor governed by the same people or government of that ancient era. History should study the events; it is certainly not something feuding politicians should use to sway politics. Rep. Pelosi pursued this only to bring trouble to the current president and in the process approached being treasonous by endangering the efforts of our military. You didn't read the original article, did you?
S. McLemee - 10/22/2007
"Great article, I don't think so - unless beating a dead horse is within your parameters for judging articles."
Perhaps you may have noticed that this site is called the History News Network. Evidently you are unaware that the study of history routinely involves discussion of the past.
Perhaps you should keep that in mind. It may make things around here less confusing for you. To judge from other comments, you seem to think this is like talk radio, but with typing.
Vernon Clayson - 10/22/2007
Great article, I don't think so - unless beating a dead horse is within your parameters for judging articles. With all the problems we ourselves have as a nation, the House of Representatives bothering with this matter was as stupid as Harry Reid taking time out of his leadership responsibilities to condemn Rush Limbaugh. Turkey is a sovereign nation; how about if their parliament decides to vote on whether we were engaged in genocide when we bombed civilians in WWII? Our media is nearly hysterical when we kill civilians in Iraq; what would the tender hearts have done in the 1940s?
People who live in glass houses, etc., etc.
David Whitman - 10/21/2007
Dear Scott Jaschik,
This is a fabulous article; it helps expose how crafty the denial of the Armenian Genocide has become. An attempt is being made to make something which is so black and white appear to be grey. This article is of a very high standard and worthy of a place on HNN.
A gong (from Indonesian and Malay: gong; Javanese: ꦒꦺꦴꦁ gong; Chinese: 鑼; pinyin: luó; Japanese: 銅鑼, dora) is a percussion instrument of East and Southeast Asian origin, taking the form of a flat or bossed circular metal disc that is struck with a mallet.
The gong's origin is likely China's Western Regions, sixth century; the term gong originated in Java. Scientific and archaeological research has established that Burma, China, Java and Annam were the four main gong manufacturing centres of the ancient world. The gong found its way into the Western World in the 18th century when it was also used in the percussion section of a Western-style symphony orchestra. A form of bronze cauldron gong known as a resting bell was widely used in ancient Greece and Rome, for instance in the famous Oracle of Dodona, where disc gongs were also used.
Gongs broadly fall into one of three types: Suspended gongs are more or less flat, circular discs of metal suspended vertically by means of a cord passed through holes near to the top rim. Bossed or nipple gongs have a raised centre boss and are often suspended and played horizontally. Bowl gongs are bowl-shaped and rest on cushions. They may be considered a member of the bell category. Gongs are made mainly from bronze or brass but there are many other alloys in use.
Gongs produce two distinct types of sound. A gong with a substantially flat surface vibrates in multiple modes, giving a "crash" rather than a tuned note. This category of gong is sometimes called a tam-tam to distinguish it from the bossed gongs that give a tuned note. In Indonesian gamelan ensembles, some bossed gongs are deliberately made to generate in addition a beat note in the range from about 1 to 5 Hz. The use of the term "gong" for both these types of instrument is common.
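As a rough illustration of the arithmetic behind such a beat note (a minimal sketch; the partial frequencies below are hypothetical, not measurements of any particular gong), the perceived beat is simply the difference between two nearly equal frequencies:

    # Minimal sketch: the beat note heard when two partials are nearly equal in
    # frequency is their difference. Both frequencies are hypothetical values.
    f1 = 220.0  # first partial, Hz (hypothetical)
    f2 = 223.0  # second partial, Hz (hypothetical)
    beat_note = abs(f1 - f2)
    print(f"Beat note: {beat_note:.1f} Hz")  # 3.0 Hz, within the 1-5 Hz range cited above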
Suspended gongs are played with hammers and are of two main types: flat-faced discs either with or without a turned edge, and gongs with a raised centre boss. In general, the larger the gong, the larger and softer the hammer. In Western symphonic music, the flat-faced gongs are generally referred to as tam-tams to distinguish them from their bossed counterparts. Here, the term "gong" is reserved for the bossed type only. The gong has been a Chinese instrument for millennia. Its first use may have been to signal peasant workers in from the fields, because some gongs are loud enough to be heard from up to 5 miles (8 km) away. In Japan, they are traditionally used to signal the start of sumo wrestling contests.
Large flat gongs may be 'primed' by lightly hitting them before the main stroke, greatly enhancing the sound and causing the instrument to "speak" sooner, with a shorter delay for the sound to "bloom". Keeping this priming stroke inaudible calls for a great deal of skill. The smallest suspended gongs are played with bamboo sticks or even western-style drumsticks. Contemporary and avant-garde music, where different sounds are sought, will often use friction mallets (producing squeals and harmonics), bass bows (producing long tones and high overtones), and various striking implements (wood/plastic/metal) to produce the desired tones.
Rock gongs are large stones struck with smaller stones to create a metallic resonating sound.
Traditional suspended gongs
Chau gong (Tam-tam)
By far the most familiar to most Westerners is the chau gong or bullseye gong. Large chau gongs, called tam-tams, have become part of the symphony orchestra. Sometimes a chau gong is referred to as a Chinese gong, but in fact, it is only one of many types of suspended gongs that are associated with China. A chau gong is made of copper-based alloy, bronze, or brass. It is almost flat except for the rim, which is turned up to make a shallow cylinder. On a 10-inch (25 cm) gong, for example, the rim extends about 1⁄2 inch (1 cm) perpendicular to the surface. The main surface is slightly concave when viewed from the direction to which the rim is turned. The centre spot and rim of a chau gong are left coated on both sides with the black copper oxide that forms during manufacture; the rest is polished to remove this coating. Chau gongs range in size from 7 to 80 inches (18 to 203 cm) in diameter.
The earliest chau gong is from a tomb discovered at the Guixian site in the Guangxi Zhuang Autonomous Region of China. It dates from the early Western Han Dynasty. Gongs are depicted in Chinese visual art as of the 6th century CE, and were known for their very intense and spiritual drumming in rituals and tribal meetings. Traditionally, chau gongs were used to clear the way for important officials and processions, much like a police siren today. Sometimes the number of strokes was used to indicate the seniority of the official. In this way, two officials meeting unexpectedly on the road would know before the meeting which of them should bow down before the other.
Uses of gongs in the symphony orchestra
The tam-tam was first introduced as an orchestral instrument by François-Joseph Gossec in 1790, and it was also taken up by Gaspare Spontini and Jean-François Le Sueur. Hector Berlioz deployed the instrument throughout his compositional career, and in his Treatise on Instrumentation he recommended its use "for scenes of mourning or for the dramatic depiction of extreme horror." Other composers who adopted the tam-tam in the opera house included Gioachino Rossini, Vincenzo Bellini, and Richard Wagner; Rossini in the finale of act 3 of Armida (1817), Bellini in Norma (1831) and Wagner in Rienzi (1842). Within a few decades the tam-tam became an important member of the percussion section of a modern symphony orchestra. It figures prominently in the symphonies of Peter Ilyich Tchaikovsky, Gustav Mahler, Dmitri Shostakovich and, to a lesser extent, Sergei Rachmaninov and Sergei Prokofiev. Giacomo Puccini used gongs and tam-tams in his operas. Igor Stravinsky greatly expanded the playing techniques of the tam-tam in The Rite of Spring to include short, quickly damped notes, quick crescendos, and a triangle beater scraped across the front of the instrument. Karlheinz Stockhausen used a 60" Paiste tam-tam in his Momente.
A nipple gong has a central raised boss or nipple, often made of different metals than other gongs with varying degrees of quality and resonance. They have a tone with less shimmer than other gongs, and two distinct sounds depending on whether they are struck on the boss or next to it. They are most often but not always tuned to various pitches.
Nipple gongs range in size from 6 to 20 inches (15 to 51 cm) or larger. Sets of smaller, tuned nipple gongs can be used to play a melody.
Nipple gongs are used in Chinese temples for worship and Buddhist temples in Southeast Asia.
In Indonesian gamelan ensembles, instruments that are organologically gongs come in various sizes with different functions and different names. For example, in the central Javanese gamelan, the largest gong is called gong ageng, ranges in size up to 1 meter in diameter, has the deepest pitch and is played least often; the next smaller gong is the gong suwukan or siyem, has a slightly higher pitch and replaces the gong ageng in pieces where gong strokes are close together; the kempul is smaller still, has a higher pitch, and is played more frequently. The gong ageng and some gong suwukan have a beat note.
An essential part of the orchestra for Chinese opera is a pair of gongs, the larger with a descending tone, the smaller with a rising tone. The larger gong is used to announce the entrance of major players or men and to identify points of drama and consequence. The smaller gong is used to announce the entry of lesser players or women and to identify points of humour.
Opera gongs range in size from 7 to 12 inches (18 to 30 cm), with the larger of a pair 1 or 2 inches (3 or 5 cm) larger than the smaller.
A Pasi gong is a medium-size gong 12 to 15 inches (30 to 38 cm) in size, with a crashing sound. It is used traditionally to announce the start of a performance, play or magic show. Construction varies, some having nipples and some not, so this type is named more for its function than for its structure or even its sound.
Pasi gongs without nipples have found favour with adventurous middle-of-the-road kit drummers.
A tiger gong is a slightly descending or less commonly ascending gong, larger than an opera gong and with a less pronounced pitch shift. Most commonly 15 inches (38 cm) but available down to 8 inches (20 cm).
A Shueng Kwong gong is a medium to large gong with a sharp staccato sound.
Wind gongs (also known as Feng or Lion Gongs) are flat bronze discs, with little fundamental pitch, heavy tuned overtones, and long sustain. They are most commonly made of B20 bronze, but can also be made of M63 brass or NS12 nickel-silver. Traditionally, a wind gong is played with a large soft mallet, which gives it a roaring crash to match its namesake. They are lathed on both sides and are medium to large in size, typically 15 to 22 inches (38 to 56 cm) but sizes from 7 to 60 inches (18 to 152 cm) are available. The 22-inch (56 cm) size is most popular due to its portability and large sound.
They are commonly used by drummers in rock music. Played with a nylon-tip drumstick, they sound rather like the coil chimes in a mantel clock. Some have holes in the centre, but they are mounted like all suspended gongs by other holes near the rim. The smaller sizes, 7 to 12 inches (18 to 30 cm), have a more bell-like tone due to their thickness and small diameter.
Sculptural gongs (also known as Gong Sculptures) are gongs which serve the dual purpose of being a musical instrument and a work of visual art. They are generally not disc shaped, but instead take more complex, even abstract forms. Sculptural gongs were pioneered in the early 1990s by Welsh percussionist and metal crafter, Steve Hubback, who was partially inspired by the work of the French Sound Sculptors, Francois and Bernard Baschet.
In older Javanese usage and in modern Balinese usage, gong is used to identify an ensemble of instruments. In contemporary central Javanese usage, the term gamelan is preferred and the term gong is reserved for the gong ageng, the largest instrument of the type, or for surrogate instruments such as the gong komodong or gong bumbung (blown gong) which fill the same musical function in ensembles lacking the large gong. In Balinese usage, gong refers to Gamelan Gong Kebyar.
Besides many traditional and centuries-old manufacturers all around China, including Tibet, as well as Burma, Korea, Indonesia, and the Philippines, gongs have also been made in Europe and America since the 20th century.
Paiste is the largest non-Asian manufacturer of gongs. This Swiss company of Estonian lineage makes gongs at their German factory. Also in Germany, Meinl have gongs made for them by former Paiste employee, Broder Oetken, who also has his own branded range of gongs. Italian company UFIP make a range of gongs at their factory in Pistoia. Michael Paiste, outside of the larger family business, makes gongs independently in Lucerne, Switzerland. Other independent gong manufacturers in Europe include Welshman Steve Hubback, currently based in the Netherlands; Matt Nolan and Michal Milas in the UK; and Joao Pais-Filipe in Portugal.
In North America, Sabian make a small number of gongs and Zildjian sell Zildjian-branded gongs which have in the past been made by Zildjian, but current production looks to be Chinese in origin. Ryan Shelledy is an independent gong maker based in the Midwestern United States.
Gongs – general
Gongs vary in diameter from about 20 to 60 inches (50 to 150 cm). They are made of a bronze alloy composed of a maximum of 22 parts tin to 78 parts copper, but in many cases the proportion of tin is considerably less. This alloy is excessively brittle when cast and allowed to cool slowly, but it can be tempered and annealed in a peculiar manner to alleviate this. When suddenly cooled from red heat, the alloy becomes so soft that it can be hammered and worked on the lathe, then hardened by reheating. Afterwards, the gong has all of the qualities and timbre of the Chinese instruments. The composition of the alloy of bronze used for making gongs is stated to be as follows: 76.52% Cu, 22.43% Sn, 0.26% Pb, 0.23% Zn, 0.81% Fe. In Turkish cymbal making, sulfur and silicon are also present in the alloy.
Turkish cymbals and gamelan gongs share beta-phase bronze as a metallurgical root. Tin–copper phase-transition diagrams show a very narrow up-down triangle at 21–24% tin content and 780 °C (1,440 °F), symbolized by β. This is the secret of all past bronze instrument making. When bronze is mixed and heated, it glows orange-red, which indicates it has been heated to the beta-phase borders, where the metal needs to be submerged in cold water to lock the alloy in the beta phase for cymbal making. The gong is then beaten with a round, hard, leather-covered pad that is fitted on a short stick or handle. It emits a peculiarly sonorous sound which can be varied by particular ways of striking the disk. Its complex vibrations burst into a wave-like succession of tones that can be either shrill or deep. In China and Japan gongs are used in religious ceremonies, state processions, marriages and other festivals.
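The narrow composition window can be expressed as a simple check. This is a minimal sketch for illustration only, using the 21–24% tin band and the 22.43% tin composition quoted above; it is not a metallurgical model, and the 10% comparison value is hypothetical:

    # Minimal sketch: does a bronze composition fall within the narrow beta-phase
    # window (roughly 21-24% tin) described above? The 22.43% value is the gong
    # alloy quoted in the text; 10.0% is a hypothetical low-tin comparison.
    BETA_PHASE_TIN_RANGE = (21.0, 24.0)  # percent tin

    def in_beta_phase_window(tin_percent):
        low, high = BETA_PHASE_TIN_RANGE
        return low <= tin_percent <= high

    print(in_beta_phase_window(22.43))  # True: the quoted gong alloy sits in the window
    print(in_beta_phase_window(10.0))   # False: an ordinary low-tin bronze does not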
The gong has been used in the orchestra to intensify the impression of fear and horror in melodramatic scenes. The tam-tam was first introduced into a western orchestra by François Joseph Gossec in the funeral march composed at the death of Mirabeau in 1791. Gaspare Spontini used the tam-tam in La Vestale's (1807) Act II finale. Berlioz called for 4 tam-tams in his Requiem of 1837. The tam-tam was also used in the funeral music played when the remains of Napoleon were brought back to France in 1840. Meyerbeer made use of the instrument in the scene of the resurrection of the three nuns in Robert le diable. Four tam-tams are used at Bayreuth in Parsifal to reinforce the bell instruments, although there is no indication given in the score. In more modern music, the tam-tam has been used by composers such as Karlheinz Stockhausen in Mikrophonie I (1964–65) and by George Crumb. In Makrokosmos III: Music for a Summer Evening (1974), Crumb expanded the timbral range of the tam-tam by giving performance directions such as using a "well-rosined contrabass bow" to bow the tam-tam. This produced an eerie harmonic sound. Stockhausen created more interesting sounds using hand-held microphones and a wide range of scraping, tapping, rubbing, and beating techniques with unconventional implements such as plastic dishes, egg timers, and cardboard tubes. Gongs can also be immersed into a tub of water after being struck. This effect, called a "water gong", is called for in several orchestral pieces.
Gongs are also used as signal devices in a number of applications.
A vessel over 100 metres (330 ft) in length must carry a gong in addition to a bell and whistle, the volume of which is defined in the International Regulations for Preventing Collisions at Sea. A vessel at anchor or aground sounds the gong in the stern immediately after ringing a bell in her bows so as to indicate her length.
Gongs are present on rail vehicles, such as trams, streetcars, cable cars or light rail trains, in the form of a bowl-shaped signal bell typically mounted on the front of the leading car. It was designed to be sounded to act as a warning in areas where whistles and horns are prohibited, and the "clang of the trolley" refers to this sound. Traditionally, the gong was operated by a foot pedal, but is nowadays controlled by a button mounted on the driving panel. Early trams had a smaller gong with a bell pull mounted by the rear door of these railcars. This was operated by the conductor to notify the driver that it is safe to proceed.
A railroad crossing with a flashing traffic signal or wigwag will also typically have a warning bell. Electromechanical bells, known in some places as a gong, are struck by an electric-powered hammer to audibly warn motorists and pedestrians of an oncoming train. Many railroad crossing gongs are now being replaced by electronic devices with no moving parts.
A bowl-shaped, center mounted, electrically controlled gong is standard equipment in a boxing ring. Commonly referred to as "the gong", it is struck with a hammer to signal the start and end of each round.
Electromechanical, electromagnetic or electronic devices producing the sound of gongs have been installed in theatres (particularly those in the Czech Republic) to gather the audience from the lounge to the auditorium before the show begins or proceeds after interlude.
Gongs have been used in upper-class households as waking devices or to summon domestic help, and in many homes to call the household to a meal.
In the British and Australian military, "gong" is slang for a medal.
In popular music, there was the multi-national psychedelic jazz-rock band Gong led by Australian musician/poet Daevid Allen. Marc Bolan and T. Rex had a hit song on their album Electric Warrior called "Get It On (Bang a Gong)". Queen's classic song "Bohemian Rhapsody" ends with the sound of a massive tam-tam. Roger Taylor is known for having one of the biggest tam-tams in rock.
In television, a gong is the titular feature on The Gong Show, a television variety show/game show spoof broadcast in the United States in four iterations (1976–80, 1988–89, 2008, 2017–present). If the celebrity judges find an act to be particularly bad, they can force it to leave the stage by hitting the gong.
In film, a man hitting a gong twice opens every Rank film. This iconic figure is known as the "gongman". The tam-tam sound was actually provided by James Blades OBE, the premier percussionist of his day (who also provided the "V for victory" drum signal broadcast during World War II).
The "sun gong" used in the annual Paul Winter Winter Solstice Celebration held at the Cathedral of Saint John the Divine, New York is claimed to be the world's largest tam tam gong at 7 feet (2.1 m) in diameter. (See the text for #1 image )
List of gongs
- Gong ageng
- Kempyang and ketuk
- Khong mon
- Chau gong
- Rin gong
- Blades, James (1992). Percussion Instruments and Their History. Bold Strummer. p. 93. ISBN 978-0933224612.
- Montagu, Jeremy (2007). Origins and Development of Musical Instruments. Scarecrow Press. pp. 16–17. ISBN 9780810856578. OCLC 123539614.
- Cook, Arthur Bernard (1902). "The Gong at Dodona". The Journal of Hellenic Studies. 22: 5–28.
- Morris Goldberg in his Modern School... Guide for The Artist Percussionist (Chappell & Co., Inc., New York, New York, 1955), says that "in modern symphony orchestra names gong and tam-tam mean the same thing, that in scholarly circles, tam-tam is considered to be a slang expression taken from an African word meaning drum", later associated with gongs of indefinite pitch, and as such was adopted by virtually all composers using the term and thus is used now interchangeably.
- "Gong". Encyclopedia Britannica. Retrieved 4 June 2019.
- Muller, Max. The Diamond Sutra (translation based on the Tang Dynasty text, 蛇年的马年的第一天), sutra 1-4487, Oxford University Press, 1894.
- Macdonald, Hugh (2002). Berlioz's Orchestration Treatise: A Translation and Commentary. Cambridge Musical Texts and Monographs. Cambridge University Press. p. 286. ISBN 978-1-139-43300-6.
- Although in modern (20th-century and later) performances conductors have sometimes added the tam-tam to the orchestra for Gluck's Alceste and Orfeo ed Euridice (as in historical Metropolitan Opera productions), there is no trace of it in Gluck's original scores, so it must be considered an added effect rather than the wish of the composer himself.
- "Instrumentation used in ''Armida'' by Rossini". Humanities.uchicago.edu. Retrieved 2013-07-11.
- Symphony No.6
- Symphony No.6 and Das Lied von der Erde
- Symphony No.4, No.8, No.10. No.11, and No.13
- International Regulations for Preventing Collisions at Sea. 1972. Rule 33 – via Wikisource.
- International Regulations for Preventing Collisions at Sea. 1972. Annexe III – via Wikisource.
- International Regulations for Preventing Collisions at Sea. 1972. Rule 35 – via Wikisource.
- "Palantir". Sfkpalantir.net. Retrieved 2013-07-11.
- "Webmagazín Rozhledna .::. nezávislý kulturně-společenský deník". Webmagazin.cz. 2001-10-29. Retrieved 2013-07-11.
- "Město Rumburk – oficiální stránky města". Rn.rumburk.cz. 2013-01-06. Retrieved 2013-07-11.
- See "Other items of note" (final paragraph) at Art & Devotion, St Gabriel's Church, North Acton, London.
- "Grahamdaviesarizonabay.com". Grahamdaviesarizonabay.com. Archived from the original on 2012-11-05. Retrieved 2013-07-11.
- Luobowan Han Dynasty Tombs in Guixian County (Guangxi Zuang A. R.), by the Museum of the Guangxi Zhuang Nationality (1988, Beijing)
- Chisholm, Hugh, ed. (1911). Encyclopædia Britannica (11th ed.). Cambridge University Press.
- Traditional Music of the Southern Philippines – An online textbook about Southern Pilipino Kulintang Music with an extensive section devoted to gongs: the kulintang, gandingan, agung and the babendil.
- Video of Cambodian Tribal Gongs being played
- Joel Garten's Beauty of Life Blog – A few examples of slit gongs from Asia, including elephant feet.
- Gooong.com – Interactive online gong, with downloadable gong sound
Evaluation of a Methodology for a Collaborative Multiple Source Surveillance Network for Autism Spectrum Disorders --- Autism and Developmental Disabilities Monitoring Network, 14 Sites, United States, 2002
Kim Van Naarden Braun, PhD
Corresponding author: Kim Van Naarden Braun, PhD, Division of Birth Defects and Developmental Disabilities, National Center on Birth Defects and Developmental Disabilities, CDC, 1600 Clifton Road, N.E., MS E-86, Atlanta, GA 30333. Telephone: 404-498-3860; Fax: 404-498-3550; E-mail: firstname.lastname@example.org.
Problem: Autism spectrum disorders (ASDs) encompass a spectrum of conditions, including autistic disorder; pervasive developmental disorders, not otherwise specified (PDD-NOS); and Asperger disorder. Impairments associated with ASDs can range from mild to severe. In 2000, in response to increasing public heath concern regarding ASDs, CDC established the Autism and Developmental Disabilities Monitoring (ADDM) Network. The primary objective of this ongoing surveillance system is to track the prevalence and characteristics of ASDs in the United States. ADDM data are useful to understand the prevalence of ASDs and have implications for improved identification, health and education service planning, and intervention for children with ASDs. Because complete, valid, timely, and representative prevalence estimates are essential to inform public health responses to ASDs, evaluating the effectiveness and efficiency of the ADDM methodology is needed to determine how well these methods meet the network's objective.
Reporting Period: 2002.
Description of System: The ADDM Network is a multiple-source, population-based, active system for monitoring ASDs and other developmental disabilities. In 2002, data were collected from 14 collaborative sites. This report describes an evaluation conducted using guidelines established by CDC for evaluating public health surveillance systems and is based on examination of the following characteristics of the ADDM Network surveillance system: simplicity, flexibility, data quality, acceptability, representativeness, sensitivity, predictive value positive (PVP), timeliness, stability, data confidentiality and security, and sources of variability.
Results and Interpretation: Using multiple sources for case ascertainment strengthens the system's representativeness, sensitivity, and flexibility, and the clinician review process aims to bolster PVP. Sensitivity and PVP are difficult to measure, but the ADDM methodology provides the best possible estimate currently available of prevalence of ASDs without conducting complete population screening and diagnostic clinical case confirmation. Although the system is dependent on the quality and availability of information in evaluation records, extensive quality control and data cleaning protocols and missing records assessments ensure the most accurate reflection of the records reviewed. Maintaining timeliness remains a challenge with this complex methodology, and continuous effort is needed to improve timeliness and simplicity without sacrificing data quality. The most difficult influences to assess are the effects of changes in diagnostic and treatment practices, service provision, and community awareness. Information sharing through education and outreach with site-specific stakeholders is the best mechanism for understanding the current climate in the community with respect to changes in service provision and public policy related to ASDs, which can affect prevalence estimates.
Public Health Actions: These evaluation results and descriptions can be used to help interpret the ADDM Network 2002 surveillance year data and can serve as a model for other public health surveillance systems, especially those designed to monitor the prevalence of complex disorders.
Autism spectrum disorders (ASDs) encompass a spectrum of conditions, including autistic disorder; pervasive developmental disorders not otherwise specified (PDD-NOS); and Asperger disorder. Impairments associated with ASDs can range from mild to severe. ASDs are of increasing public health concern because the number of children receiving services for these conditions is growing. Despite the need to understand ASDs better, few data are available concerning the prevalence, characteristics, and trends of these conditions. In 2000, CDC established the Autism and Developmental Disabilities Monitoring (ADDM) Network to track the prevalence and characteristics of ASDs in the United States. The ADDM network is a multiple-source, active, population-based surveillance system that reviews developmental records at educational and health sources and employs a standardized case algorithm to identify ASD cases. ADDM data are useful to understand the prevalence of ASDs and can promote improved identification, health and education service planning, and intervention for children with ASDs.
Complete, valid, timely, and representative prevalence estimates are essential to inform public health responses to ASDs. Evaluation of the effectiveness and efficiency of the ADDM methodology, described in detail elsewhere (1), is necessary to understand how well the methods meet the network's objective. This report examines the ADDM Network methodology employed by 14 collaborative sites that collected data for the 2002 surveillance year and evaluates the validity and completeness of prevalence estimates and the effect of sources of variability on intersite prevalence differences. This evaluation was conducted using guidelines established by CDC for evaluating public health surveillance systems and includes examination of the following characteristics of the ADDM Network surveillance system: simplicity, flexibility, data quality, acceptability, representativeness, predictive value positive, sensitivity, timeliness, stability, data confidentiality and security, and sources of variability (2).
The simplicity of a public health surveillance system refers to both its structure and ease of operation. The simplicity of an autism surveillance system is limited by the variability of ASD signs and symptoms and methods of diagnosis (3,4). Impairments associated with ASDs can range from mild to severe. More subtle features at the less severe end of the spectrum can remain undiagnosed as they are found in children with better communication skills and average to above-average intellectual functioning. Severity also can change as the child ages or in response to effective intervention. No observable physical attribute or clinical test can define case status, nor can cases be identified at a single point in time or type of data source. A diagnosis of an ASD is made on the basis of a constellation of behavioral symptoms rather than on biologic markers; therefore, surveillance case ascertainment requires standardized interpretation of behavioral evaluations from records at both education and health facilities. A broad range of diagnoses over multiple years must be reviewed to ensure complete case finding because children rarely receive a specific diagnosis of an ASD before age 2--3 years, with a more stable diagnosis by age 8 years (5--7). The ADDM Network common methodology (Figure 1) uses a record-based surveillance system dependent on access to education, health, and service agencies (e.g., public schools, state health clinics and diagnostic centers, hospitals, and other providers for children with developmental disabilities [DDs]) to identify cases and ensure unduplicated case counting. The process for case ascertainment occurs in two phases: 1) identification of potential cases through record screening and abstraction and 2) review of abstracted information by an ASD clinician reviewer to determine whether behaviors described in the child's evaluations are consistent with the Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition, Text Revision (DSM-IV-TR) (8) criteria for autistic disorder, PDD-NOS (including atypical autism), or Asperger disorder (1,9).
Accurate collection and review of detailed evaluation information from multiple data sources is time-consuming, and the lack of electronic records at the majority of data sources requires additional tasks (e.g., coordination with agencies, travel, record abstraction, and data entry). Time-tracking data collected systematically by all abstractors in Arizona indicated that abstractors spent an average of 55 hours to review or abstract (or both) 100 records. Survey data from six sites indicated that a single clinician review required an average of 20 minutes under the streamlined protocol (see Predictive Value Positive) and 47 minutes under the routine protocol. Quality assurance procedures implemented throughout data collection add time, effort, and complexity to the overall system. However, a detailed, labor-intensive approach might be the only way to produce accurate prevalence estimates for this complex behavior disorder.
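For a sense of scale, the averages above can be combined into a rough workload estimate. This is a minimal sketch with hypothetical record and review counts; only the per-record and per-review averages come from the text:

    # Minimal sketch: rough staff-time estimate from the averages quoted above.
    # The record and review counts are hypothetical illustration values.
    ABSTRACTION_HOURS_PER_100_RECORDS = 55.0  # Arizona time-tracking average
    STREAMLINED_REVIEW_MINUTES = 20.0         # six-site survey average
    ROUTINE_REVIEW_MINUTES = 47.0

    records_reviewed = 2500     # hypothetical site volume
    streamlined_reviews = 400   # hypothetical
    routine_reviews = 600       # hypothetical

    abstraction_hours = records_reviewed / 100.0 * ABSTRACTION_HOURS_PER_100_RECORDS
    review_hours = (streamlined_reviews * STREAMLINED_REVIEW_MINUTES
                    + routine_reviews * ROUTINE_REVIEW_MINUTES) / 60.0
    print(f"Abstraction: {abstraction_hours:.0f} h; clinician review: {review_hours:.0f} h")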
The flexibility of a public health surveillance system refers to its ability to accommodate changes in information needs or operating conditions with little additional time, personnel, or allocated funds. The flexibility of the ADDM Network methodology allows the system to add new data sources, collect additional data elements, and incorporate the evolving science of developmental disabilities (e.g., new case definitions). The ADDM methodology can adapt to changes in data elements and case definitions between surveillance years; however, retrospective changes would be limited to data already collected. ADDM Network methods rely on, and are limited by, the availability and quality of data in evaluation records and access to those records. ADDM Network surveillance activities have been expanded to monitor other developmental disabilities, including hearing loss, vision impairment, mental retardation and cerebral palsy simultaneously. ADDM Network data also can be linked to external datasets (e.g., state birth certificate files, birth defects surveillance and newborn screening data, and complementary instruments to track children's medication prescriptions).
Data quality refers to the completeness and validity of a surveillance system. The amount and quality of information available from the record of an existing evaluation varies within and across ADDM Network sites and is difficult to quantify. Variability in state and local regulations, regional practices for evaluating children, and the number of providers visited can affect the number and types of evaluations available. For example, in certain states, a single record is sufficient to obtain autism eligibility for special education, but other states (e.g., New Jersey) often use multiple multidisciplinary evaluations. A qualitative comparison indicates that both the amount and quality of relevant information in records in New Jersey were greater than those at other sites. Case ascertainment is influenced by the rate of referral of children for developmental evaluation and by the sensitivity of the evaluation in detecting and recording signs and symptoms of ASDs. The ADDM Network methodology maximizes data quality by evaluating the completeness of record review, maintaining reliability in data collection and coding, and cleaning the data fields. Although these measures are taken to ensure the accuracy of data capture, the validity of the conclusions is dependent on the data in the evaluation records reviewed by project staff.
Evaluating the Completeness of Record Review
Eligible records identified by data sources but not located or available for access (e.g., located at a nonparticipating school) were classified as missing. The nature of missing records might have been systematic across multiple data sources within each ADDM site, but missing records probably were nonsystematic within an individual data source. A sensitivity analysis was conducted to evaluate the effect of missing records on prevalence (see Sensitivity).
Maintaining Reliability in Data Collection and Coding Methods
The reliability of data collection and coding was measured against standards to ensure effective initial training, identify ongoing training needs, and adhere to the prescribed methodology. These efforts support the reliability of ADDM data by quantifying potential error caused by inconsistent data collection and coding procedures. Initial and ongoing quality control reliability methods follow a set protocol (Figures 2 and 3).
Cleaning Data Fields
The ADDM Network implements regular, extensive, and systematic data cleaning to identify inconsistencies in reviewed and abstracted data and resolve conflicts that arise. Missing race and ethnicity information was obtained through linkage with state vital birth records.
The acceptability of a surveillance system is demonstrated by the willingness of persons and organizations to participate in surveillance system activities. The project's overall success was dependent on acceptance of the ADDM Network by health and education sources of each site, as these sources were needed to identify cases of ASDs. Voluntary agreements (e.g., memoranda of understanding or contracts) were established between ADDM Network sites and health and education sources that authorized site personnel to review and collect information from health or education records (Table 1). ASDs were reportable conditions at three sites (Colorado, Utah, and West Virginia), giving these sites public health authority to review and collect data from health-care facilities with no separate agreements required. At six sites (Arkansas, Maryland, North Carolina, South Carolina, Utah, and West Virginia), all targeted health sources participated. At eight sites (Alabama, Arizona, Colorado, Georgia, Missouri, New Jersey, Pennsylvania, and Wisconsin), at least one targeted health facility did not participate. The project's acceptability was lower among education sources; four sites were unable to gain access to education facilities or had minimal access (Alabama, Missouri, Pennsylvania, and Wisconsin). At six sites (Arizona, Arkansas, Colorado, Maryland, New Jersey, and North Carolina), certain schools or entire districts in their surveillance area elected not to participate. In four sites (Georgia, South Carolina, Utah, and West Virginia), school participation was complete. Lack of participation by education sources caused four sites (Arizona, Colorado, New Jersey, and North Carolina) to redefine their surveillance areas after data collection had started. Project coordinators were surveyed to determine their perception of the factors that influenced acceptability by health and education sources. The most common factors reported were privacy and confidentiality concerns of the sources, including the Health Insurance Portability and Accountability Act (HIPAA), time or resources required from the sources, and the Family Education Rights and Privacy Act (FERPA). Project staff distributed literature to parents and stakeholders at multiple forums and attended conferences to increase reporting of developmental concerns to providers, understanding of the importance of population-based surveillance of ASDs, and awareness of ASD among parents and community members.
Correct interpretation of surveillance data requires evaluation of the representativeness and accuracy of the surveillance system in describing the occurrence of ASDs in the population. The ADDM Network 2002 surveillance year included 14 sites that accounted collectively for 10.1% of the U.S. population aged 8 years. Because participating sites were selected through a competitive federal award process and not specifically to be representative of the entire U.S. population, ADDM Network results cannot be used as a basis for estimating the national prevalence of ASDs. Two national surveys designed as random samples of the U.S. noninstitutionalized population estimated prevalence of ASDs from parental reports of autism diagnosis among children aged 6--8 years to be 7.5 and 7.6 cases per 1,000 population, respectively (10). Although generated using a different methodology, these estimates were similar to ADDM estimates, thereby providing external validation.
The denominator is another determinant of representativeness. The 2002 surveillance year sites used data from the National Center for Health Statistics (NCHS) vintage 2004 postcensal bridged-race population estimates for July 1, 2002, to obtain counts by sex and race and ethnicity of the number of children aged 8 years (11). NCHS bridged postcensal population estimates are produced by the U.S. Census Bureau immediately after a decennial census. However, trends noted between two decennial censuses can vary substantially from trends forecast in the postcensal estimates (12). For this reason, annual postcensal estimates are updated after the subsequent decennial census, and intercensal estimates are produced. Once the 2010 census has been completed and intercensal estimates are published for 2002 and beyond, the ADDM Network will recalculate previously reported prevalence estimates to evaluate the effect of any postcensal and intercensal differences within and across sites. Using postcensal rather than intercensal estimates has been demonstrated to overestimate the prevalence of a disorder; the extent might vary by race/ethnicity (13,14). The effect of postcensal and intercensal differences might not be significant for the 2002 surveillance year but will become important as the ADDM Network collects data in subsequent surveillance years and trends are examined. No better alternative has been developed for calculating prevalence for all ADDM Network sites than NCHS data.
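A minimal sketch of the underlying calculation follows; the case counts and denominators are hypothetical illustration values, not ADDM or NCHS figures, and serve only to show how a revised intercensal denominator shifts a prevalence estimate:

    # Minimal sketch: prevalence per 1,000 children aged 8 years, and the effect of
    # replacing a postcensal denominator with a later intercensal estimate.
    # All counts are hypothetical illustration values.
    def prevalence_per_1000(case_count, population):
        return case_count / population * 1000.0

    cases = 410                      # hypothetical confirmed ASD cases
    postcensal_denominator = 62000   # hypothetical postcensal population estimate
    intercensal_denominator = 63500  # hypothetical revised intercensal estimate

    print(f"{prevalence_per_1000(cases, postcensal_denominator):.1f} per 1,000")   # 6.6
    print(f"{prevalence_per_1000(cases, intercensal_denominator):.1f} per 1,000")  # 6.5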
Predictive Value Positive
Predictive value positive (PVP) is the probability that a child whose condition is consistent with the surveillance case definition actually has the disease or condition under surveillance. A clinical diagnosis of an ASD requires intensive in-person examination of a child and often interview with the primary caregivers. Clinical confirmation of all cases identified using ADDM Network methods is resource prohibitive. The ADDM Network multiple-source, active record review methodology provides a feasible approach to population-based monitoring of ASDs. However, the ADDM methodology relies on past diagnoses, special education eligibilities, and behaviors described in children's health or education records to classify a child as having an ASD. The lack of a "gold standard" in-person standardized clinical assessment to validate these methods introduces the possibility of false-positive cases.
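In formula terms, PVP is the proportion of surveillance-confirmed cases that would also be confirmed on clinical examination. The sketch below is illustrative only; the counts are hypothetical and are not drawn from any ADDM validation study:

    # Minimal sketch: predictive value positive of surveillance case status.
    # The counts are hypothetical illustration values.
    def predictive_value_positive(true_positives, false_positives):
        return true_positives / (true_positives + false_positives)

    verified_on_exam = 180     # hypothetical surveillance cases confirmed by clinical exam
    not_verified_on_exam = 20  # hypothetical false-positive surveillance cases
    print(f"PVP = {predictive_value_positive(verified_on_exam, not_verified_on_exam):.2f}")  # 0.90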
The validity of the ADDM Network methodology for determining case status is under assessment in a study by the Georgia ADDM Network site using clinical examinations to calculate the proportion of false-positives among confirmed ASD cases using ADDM Network methods. In 2002, the University of Miami was funded as an ADDM Network grantee to validate its ASD surveillance methods. Results from this validation project indicate that the concordance between a previously documented ASD diagnosis and the ADDM Network record review case status (97%) was greater than that of a screening with the Social Communication Questionnaire (87% at a cut-off test score of 13 points) (Marygrace Yale Kaiser, University of Miami, unpublished communication, 2006). Although not compared directly to the results of a clinical examination, these data lend support to reasonable PVP of the ADDM case-status determination.
Across the 14 ADDM Network sites for the 2002 surveillance year, 57%--86% of children classified by the ADDM Network methodology as having confirmed cases of ASDs had a previous ASD diagnosis or special education classification of autism. Past assessments of ADDM Network methodology, together with another report of 93% (15), support the assumption that PVP for this subgroup of cases is high. A study noting a relatively high (36%) false-positive rate of diagnoses reported in education records in the United Kingdom examined a limited sample (n = 33) and was difficult to compare with the ADDM Network system (16). Conversely, across sites, 14%--43% of children confirmed in the ADDM Network system as having an ASD had not received an ASD classification previously. Suspicion of an ASD was noted for 6%--19% of these children, leaving 7%--31% with no previous mention in the records of an ASD. ADDM Network methods were designed to identify children with noted behaviors consistent with ASDs but who lacked a formal diagnosis; however, this group might have had the greatest potential for false-positive classification.
One final issue affecting the sensitivity and specificity of the ADDM Network methodology for the 2002 surveillance year is the implementation of a streamlined abstraction and review protocol for children with a previous ASD diagnosis. In an earlier evaluation of these methods, 97% of children aged 8 years who were identified with a previous ASD classification ultimately were confirmed by surveillance clinician reviewers as having ASDs (CDC, unpublished data, 1996). To improve timeliness, 12 of the 14 sites adopted a streamlined abstraction and review protocol for such children. The criteria used in determining which records qualified for streamlining varied by site, and the percentage of cases ascertained using the streamlined protocol ranged from 19% in Colorado to 68% in Georgia (see Sensitivity). Because streamlined abstraction involves limited data collection of behavioral descriptions beyond those required to determine case status, the 2002 ADDM Network sites were unable to evaluate the proportion of persons whose cases would not have been confirmed on the basis of a full review of the behavioral descriptions in the children's records. However, data from the four sites that implemented full abstraction and review for the 2000 surveillance year and streamlined abstraction and review for the 2002 surveillance year indicated that the potential effect of false-positives attributable to the streamlined protocol might have been minimal (weighted average: 6%).
PVP has been improved by selectively screening high-risk segments of the population, including children receiving special education services in public schools or children with select International Classification of Diseases, Ninth Edition (ICD-9) and DSM-IV-TR billing codes related to developmental disabilities in health sources, or both (8,17).
Prevalence of ASDs Detected by ADDM Network Methods
The completeness of case ascertainment depends on the sensitivity of the methodology to ascertain children with ASDs in the population. To assess potential underascertainment, quantitative or qualitative examinations (or both) were performed to identify the effects of the number of home school and private school children with ASDs; nonparticipating or unidentified data sources; abstractor error; missing records; sites requesting additional ICD-9 and DSM-IV-TR codes; and differing streamlining criteria.
Private school or home school children whose conditions were consistent with the case definition might have been missed because site agreements with public schools did not include access to information on children in nonpublic schools. Data from a random weighted sample of U.S. children aged 4--17 years from the National Survey of Children's Health (NSCH) reported that 14.2% of children whose parents reported them as having a past diagnosis of autism were attending private schools, and 1.8% were home schooled (CDC, unpublished data, 2006). Although such children were not identified systematically by ADDM Network methods through review of public education records, a subgroup might have been identified through one or more health facilities at a given ADDM Network site.
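A minimal sketch of how these percentages could bound the resulting under-ascertainment follows; the site case count and the assumed overlap with health sources are hypothetical, and only the 14.2% and 1.8% figures come from the NSCH data cited above:

    # Minimal sketch: rough bound on under-ascertainment of children outside public
    # schools, using the NSCH shares quoted above (14.2% private school, 1.8% home
    # schooled). The case count and health-source overlap are hypothetical.
    observed_cases = 400                  # hypothetical confirmed cases at a site
    share_outside_public_school = 0.142 + 0.018
    share_already_found_via_health = 0.5  # hypothetical overlap with health sources

    missed_fraction = share_outside_public_school * (1 - share_already_found_via_health)
    adjusted_cases = observed_cases / (1 - missed_fraction)
    print(f"Adjusted case count: {adjusted_cases:.0f}")  # about 435 under these assumptions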
Efforts were made to identify all sources that had evaluated children for ASDs. The project continually tracked new examiners and facilities identified from children's evaluation histories to ensure that all potential data sources were pursued. However, certain health and education facilities declined to participate or were not identified by project staff (See Acceptability). Using statistical capture-recapture techniques to estimate the effect of this issue on prevalence was considered, but the assumption of independence would have been violated, thereby invalidating that method. Therefore, a quantitative assessment could not be made of the extent to which missing sources affected surveillance estimates.
Results from ongoing quality control activities were used to evaluate the accuracy of the decision made by abstractors to review the record and of the final case determination assigned by clinician reviewers at each site. The percentage of concordance regarding the decision to abstract between the quality-control auditor and abstractor ranged from 87% in Georgia to 100% in North Carolina and West Virginia. For clinician review, the percentage of concordance on final case definition ranged from 79% in Utah to 100% in New Jersey (Table 2). Although quality control results for certain sites were below the established threshold, records for all abstractors and clinician reviewers that fell below the threshold were resampled until the thresholds were met. In addition, the secondary clinician review process provided assurance that the primary clinician review results are an underestimate of true agreement on final case status. The clinician review process also serves to strengthen PVP, as discordance on final case status can result in over- or underascertainment.
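Percent concordance of this kind reduces to a simple agreement calculation; the sketch below uses invented decisions, not ADDM quality-control data:

    # Minimal sketch: percent concordance between an abstractor (or clinician
    # reviewer) and a quality-control auditor. The decisions are hypothetical.
    def percent_concordance(decisions_a, decisions_b):
        agree = sum(1 for a, b in zip(decisions_a, decisions_b) if a == b)
        return 100.0 * agree / len(decisions_a)

    abstractor = ["abstract", "skip", "abstract", "abstract", "skip"]
    auditor    = ["abstract", "skip", "abstract", "skip", "skip"]
    print(f"{percent_concordance(abstractor, auditor):.0f}% concordance")  # 80%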
To evaluate the effect of missing records on prevalence, all children initially identified for screening from participating sources at each site were classified into three groups: 1) all requested records located, 2) certain requested records not located, and 3) no requested records located. The children were further subdivided into six strata by type of data source (education only, health only, or both) and specificity of ASD screening criterion (presence of an ASD-specific ICD-9 or DSM-IV-TR code or school eligibility, compared with all other school eligibility, ICD-9, and DSM-IV-TR codes). Data were analyzed assuming that within each type of source or ASD-specific stratum, children with missing records would have had the same likelihood of being identified as a confirmed ASD case child, had their records been located, as children for whom all records were available for review. These analyses indicated that the possible effect of missing records on prevalence underestimation ranged from 0.4% in Wisconsin to 20% in South Carolina (Table 2).
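The stratified assumption described above can be written out as a short calculation. This is a minimal sketch with hypothetical stratum counts; it mirrors the logic (children with missing records are assigned the stratum-specific confirmation rate) rather than reproducing any site's actual analysis:

    # Minimal sketch of the stratified missing-records adjustment: within each
    # stratum, children with missing records are assumed to be confirmed cases at
    # the same rate as fully reviewed children. All counts are hypothetical.
    strata = [
        # (label, confirmed cases, fully reviewed children, children with missing records)
        ("education only, ASD-specific", 60, 300, 40),
        ("health only, ASD-specific", 45, 250, 30),
        ("both sources, other codes", 20, 500, 80),
    ]

    observed = sum(cases for _, cases, _, _ in strata)
    estimated_missed = sum(cases / reviewed * missing for _, cases, reviewed, missing in strata)
    print(f"Observed: {observed}; estimated missed: {estimated_missed:.1f}; "
          f"adjusted: {observed + estimated_missed:.1f}")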
A standard basic list of ICD-9 and DSM-IV-TR codes was reviewed for the 2002 surveillance year. However, sites that also conducted surveillance for mental retardation (Arkansas, Georgia, North Carolina, South Carolina, and Utah); cerebral palsy (Alabama, Georgia, and Wisconsin); and both hearing loss and vision impairment (Georgia) requested additional ICD-9 codes. One site (Colorado) also requested codes identified as important because of specific coding practices in the area. The proportion of additional cases identified from these additional ICD-9 codes, assuming all records with these unique codes would contribute to case status, ranged from 0% in Arkansas to 5.0% in Wisconsin (Table 2). This suggests that the additional codes would not have increased prevalence estimates substantially.
The criteria used for determining which children qualified for streamlining varied by site. Seven sites (Arizona, Arkansas, Georgia, Maryland, Missouri, New Jersey, and Pennsylvania) elected to streamline children with a primary school eligibility category of autism or a broad-spectrum ASD diagnosis, whereas Utah based streamlining on autism eligibility but on a more restrictive diagnosis of autistic disorder. Four sites (Alabama, Colorado, North Carolina, and Wisconsin) streamlined records only for children with an autistic disorder diagnosis. West Virginia and South Carolina did not implement the streamlined protocol for the 2002 surveillance year. To facilitate comparability between site prevalence estimates, given this potential variability in ascertainment from using different criteria for streamlining, the least conservative streamlining criteria were applied to all children abstracted at each site. The effect on prevalence ranged from 0% in New Jersey to 9.9% in Pennsylvania (Table 2).
Ability of ADDM Network Methods to Monitor Changes in Prevalence
The use of consistent methods for case identification across surveillance years enhances the ability of the ADDM Network methods to detect changes in ASD prevalence over time. However, a true increase in ASD population prevalence might be difficult to distinguish from an increase attributable to increases in provider awareness of ASDs, changes in service provision regulations or diagnostic and treatment patterns, or differences in the breadth and depth of behavioral information in evaluation records. For example, between the 2000 and 2002 surveillance years, the prevalence of ASDs in West Virginia increased 39%. A qualitative assessment of behavioral descriptions contained in that site's evaluations indicated that improvements were made in the quality and amount of information in evaluation records during this period, which might have contributed to the increase. Beginning with the 2006 surveillance year, the ADDM Network will begin rating the quality of information in records to facilitate quantitative evaluation of changes in the quality of information contained in records and their effect on prevalence over time. Because ADDM Network prevalence estimates do not rely solely on a documented ASD diagnosis from a single source, they are less likely to be affected by trends in specific usage of ASD diagnoses as long as children with social, communication, and behavioral symptoms continue to be evaluated by health or education sources for treatment or services, or both.
Although ADDM Network methods are subject to these challenges, recent studies have demonstrated that aggregate administrative data (e.g., autism eligibility data from the U.S. Department of Education) are not optimal for measuring period prevalence or monitoring changes over time. The ADDM Network's multiple-source methodology produces prevalence estimates that are more robust to classification bias than alternative available ASD prevalence measures (18--20).
The timeliness of the surveillance system is the speed of progression from identifying data sources to releasing results. The ADDM Network population-based surveillance system can be resource and time intensive, particularly at its inception at a new site, as evidenced by the multitude of data sources required for participation, the high volume of records for review, and the abstraction and clinician review time estimates previously reported for each step in the process (Table 1). Each site must first identify potential sources for identification of potential cases, obtain access to health and education records, hire and train staff, and ensure that reliability thresholds for abstractors and clinician reviewers are met. Although the ADDM sites participating in the 2002 surveillance year represent multiple grant cycles, the estimated time required for this surveillance year, from start of funding to reporting of results, was approximately 3--4 years. Once the surveillance system has been instituted at a site, these limitations to timeliness are greatly reduced for future surveillance years.
As ADDM Network surveillance methods have evolved, the time required to make data available has decreased. Multiple surveillance years can now be conducted concurrently, and clinician review has been restructured to increase efficiency. In addition, case yield is evaluated from specific ICD-9 and DSM-IV-TR codes to determine whether certain codes could be omitted, thereby reducing the number of records to review without decreasing prevalence estimates substantially. Data management methods also have improved, reducing the time from data collection to reporting of the results.
Stability is the reliability and availability of a surveillance system consistently over time. Stability of the ADDM Network system is promoted by the continuing technical support and coordination provided by CDC, which maintains consistency in methodology across sites. Computer and network support provided by CDC minimizes time lost through computer or other technical problems. Continuation of the ADDM Network has been assured through a new 4-year grant cycle for 2006--2010, and data collection for the 2004 and 2006 surveillance years is underway. Nevertheless, because ADDM Network methods rely on administrative data, changes in maintenance of records and classification and assessment of children with ASDs over time might affect ADDM Network stability.
Data Confidentiality and Security
Although not a formal attribute of the guidelines for evaluating public health surveillance systems, data confidentiality and security must be assured. The ADDM Network employs strict guidelines to maintain the highest level of data security and confidentiality. All staff members receive intensive training concerning confidentiality policies and sign nondisclosure agreements. The network employs enhanced protection of computer files and maintains information technology security procedures for the data collection instrument to ensure that the data remain secure and confidential, including Power On passwords, Windows 2000/XP/NT passwords, MS Access Workgroup Security, and MS Access Encryption. All backups of the ARCHE database are encrypted. Once the surveillance year is completed, deidentified data are submitted to the pooled dataset. Proposals to use the aggregate, deidentified data are reviewed by the principal investigators of the ADDM Network.
Sources of Variability Across ADDM Network Sites
The ADDM Network is a multiple-site, collaborative network using a common methodology. An important goal of the network is to make meaningful comparisons of prevalence across sites. Therefore, this evaluation assessed not only how well the population prevalence of ASDs is measured within each site but also how variations in the implementation of the common methodology affected comparison of prevalence across ADDM Network sites. Data collected previously using ADDM Network methods indicated the importance of education records in monitoring the prevalence of children with developmental disabilities (9,21,22). The primary difference between ADDM Network sites for the 2002 surveillance year was the ability to access education records, as 4 sites had very limited or no access to education sources. The average prevalence for sites with access to both health and education sources was significantly higher (p<0.0001) than that of sites with access to health sources only (9).
All ADDM Network sites implemented a common methodology to obtain ASD prevalence. Variability across sites in specific aspects of the common protocol was introduced through attempts to improve timeliness and conduct surveillance of additional developmental disabilities, in addition to the uncontrollable variability in facility evaluation practices. Certain sources of variability are measurable for evaluation (Table 2). These sources of variability are not mutually exclusive and, therefore, cannot be summed to represent an adjusted range of potential prevalence estimates across ADDM Network sites. Moreover, these estimates are not a comprehensive list of all sources of overascertainment and underascertainment because multiple influences that might have had an effect on prevalence (e.g., quality of information in records or proportion of children who were not evaluated at any participating data source) were not quantifiable. Although evaluation results indicate variability across sites in the implementation of the common methodology, site-specific prevalence estimates are regarded as complete, valid, and accurate, and the results offer a reasonable method for comparing intersite prevalence characteristics.
The approach to streamlined abstraction and the review of additional ICD-9 billing codes varied slightly by site, as did the degree of missing records. Although consistency strengthens a common methodology, diagnostic and billing practices differed by data source within each site, and slight modifications to enhance the ability of a site to capture the true prevalence of ASD were expected. Although the quality of abstraction and clinician review inevitably will vary within and across sites, strict quality control protocols implemented by each site enabled them to monitor the variability in quality control and resolve problems quickly.
The ADDM Network is the only active, ongoing, multiple-source surveillance system for tracking prevalence of ASDs and other developmental disabilities in the United States. Using multiple sources for case ascertainment strengthens the system's representativeness, sensitivity, and flexibility, and the clinician review process aims to bolster PVP. Although sensitivity and PVP are difficult to measure, ADDM methods provide the best estimate of the population prevalence of ASDs short of conducting complete population screening and diagnostic clinical case confirmation. Although the system depends on the quality and availability of information in evaluation records, extensive quality control and data cleaning protocols and assessment of missing records ensure the most accurate reflection of the records reviewed. Maintaining timeliness remains a challenge with this complex methodology; however, possibilities for streamlining to improve timeliness and simplicity without sacrificing data quality continue to be investigated. The effects of changes in diagnostic and treatment practices, service provision, and community awareness are the most difficult influences to assess.
Information sharing through education and outreach with site-specific stakeholders is the best mechanism for understanding the current climate in the community with respect to changes in service provision and public policy related to ASDs, which can affect prevalence estimates. This evaluation can be used to help interpret surveillance results and serve as a model for other systems, especially those that monitor the prevalence of complex disorders.
Additional contributors to this report included Thaer Baroud, MPH, University of Arkansas, Little Rock, Arkansas; Richard Ittenbach, PhD, University of Pennsylvania, Philadelphia, Pennsylvania; Lydia King, PhD, Medical University of South Carolina, Charleston, South Carolina; Lynne MacLeod, PhD, University of Utah, Salt Lake City, Utah; Andria Ratchford, MPH, Colorado Department of Public Health and Environment, Denver, Colorado; Jackie Roessler, MPH, University of Wisconsin, Madison, Wisconsin. Ongoing support was provided by Joanne Wojcik, Marshalyn Yeargin-Allsopp, MD, National Center on Birth Defects and Developmental Disabilities, CDC, Atlanta, Georgia. ADDM coordinators included Meredith Hepburn, MPH, University of Alabama at Birmingham; Mary Jo Lewno, University of Arkansas, Little Rock, Arkansas; Jennifer Ottolino, University of Arizona, Tucson, Arizona; Andria Ratchford, MPH, Colorado Department of Public Health and Environment, Denver, Colorado; Maria Kolotos, Johns Hopkins University, Baltimore, Maryland; Rob Fitzgerald, MPH, Washington University in St. Louis, St. Louis, Missouri; Laura Davis, MPH, University of North Carolina at Chapel Hill; Susie Kim, MPH, New Jersey Medical School, Newark, New Jersey; Rachel Meade, University of Pennsylvania, Philadelphia, Pennsylvania; Lydia King, PhD, Medical University of South Carolina, Charleston, South Carolina; Lynne MacLeod, PhD, University of Utah, Salt Lake City, Utah; Julie O'Malley, Marshall University, Huntington, West Virginia; Jackie Roessler, MPH, University of Wisconsin, Madison, Wisconsin; Pauline Thomas, MD, New Jersey Medical School, Newark, New Jersey; Anita Washington, MPH, Battelle Memorial Institute and National Center on Birth Defects and Developmental Disabilities, CDC, Atlanta, Georgia; Sally Brocksen, PhD, Oak Ridge Institute on Science Research and Education, Oak Ridge, Tennessee, and National Center on Birth Defects and Developmental Disabilities, CDC, Atlanta, Georgia. Additional support was provided by data abstractors, data management/programming support staff, and participating educational and clinical programs.
Source: Washington’s Blog
August 9, 2018
Fire-weapons have been used from ancient times. Napalm-like weapons were used by and against the Romans and Greeks. One term used for them was “wildfire”; another was “Greek fire”, as incendiaries were widely used by the Greeks. Some ships were equipped to shoot other vessels with flaming oils emitted from tubes in their bows. Individual soldiers were equipped with flaming oils that they could shoot through reeds in a kind of fire-breath. But the use of incendiaries declined as longer-range projectiles were created, such as rockets (e.g. the British rockets mentioned in the US national anthem). Incendiaries were always regarded with particular awe and horror, as they invoked the terrors of hell and being burned to death.
As the ability to project incendiaries over long ranges increased in the 19th century, the weapon again came into use. The major turning point that would see an unprecedented rise of fire-weapons was World War II. With Germany leading the way, Japanese and British forces also used incendiaries to devastating effect, but the weapon would be taken to new heights by the United States. Initially, US officials said they wanted to avoid the “area bombing” – killing everyone in a large area – that was being carried out by the above groups on various cities. But soon they abandoned this approach and embraced the method. Wanting to further increase their ability to destroy large areas, and with particular regard to the wooden cities of Japan (66), the US Chemical Warfare Service assembled a team of chemists at Harvard to design an incendiary weapon that would be optimal for this goal.
As the team progressed in its development, the military built replicas of German and Japanese civilian homes – complete with furnishings, with the most attention devoted to bedrooms and attics – so that the new weapon, dubbed "napalm" (a portmanteau of the chemicals naphthenate and palmitate), could be tested. In all of these replica structures, which were built, burnt, and rebuilt multiple times, only civilian homes were constructed – never military, industrial, or commercial buildings (stated multiple times, e.g. 37). In 1931, US General Billy Mitchell, regarded as the "founding inspiration" of the US Air Force, remarked that since Japanese cities were "built largely of wood and paper", they made the "greatest aerial targets the world has ever seen. … Incendiary projectiles would burn the cities to the ground in short order." In 1941, US Army chief of staff George Marshall told reporters that the US would "set the paper cities of Japan on fire", and that "There won't be any hesitation about bombing civilians" (66). While napalm was first used against Japanese troops in the Pacific Islands, the campaign of "area bombing" of Japanese civilians was led by a man with the "aura of a borderline sociopath" who had, as a child, enjoyed killing small animals (70): Curtis LeMay. LeMay said the goal was for Japanese cities to be "wiped right off the map" (74). To this effect, on March 9, 1945, the US "burned a flaming cross about four miles by three into the heart" of Tokyo, which crew information sheets said was the most densely populated city in the world at the time: 103,000 people per square mile. In the first hour, 690,000 gallons of napalm were used. The city was essentially undefended. Japanese fighters, mostly unable to take flight, did not shoot down a single US aircraft, and air-defense batteries were defunct.
By the next morning, fifteen square miles of the city center were in ashes, with approximately 100,000 people dead, mainly from burning. Streets were strewn with "carbonized" figures and rivers were "clogged with bodies" of people who had tried to escape the firestorms. The text contains numerous descriptions and survivors' accounts, but here I'll just mention one: A survivor saw a wealthy woman in a fine, gold kimono running from a firestorm. The winds, which reached hundreds of miles per hour, whipped her high into the air and thrashed her around. She burst into flame and disappeared, incinerated. A scrap of her kimono drifted through the air and landed at the feet of the survivor.
On the US end, multiple bombers reported vomiting in their planes from the overpowering smell, blasted skyward by the windstorms, of “roasting human flesh” – a sickly “sweet” odor (81).
In Washington, Generals congratulated each other. General Arnold cabled LeMay that he had proved that he “had the guts for anything.” Mission commander Power boasted that “There were more casualties than in any other military action in the history of the world.” Neer says this assessment is correct: this was the single deadliest one-night military operation in the world history of warfare, to the present (83).
Some 33 million pounds of napalm were used in the campaign overall, with 106 square miles of Japan’s cities burned flat. 330,000 civilians are estimated to have been killed, with burning “the leading cause of death”. Chief of Air Staff Lauris Norstad said the destruction was “Nothing short of wonderful” (84).
After both atomic bombings (which, individually, inflicted less damage than the March 9 Tokyo area-firebombing), and after the Japanese surrender, but before it had been officially accepted, General Hap Arnold called for “as big a finale as possible.” Accordingly, 1,014 aircraft were used to further “pulverize Tokyo with napalm and explosives”. The US did not incur a single loss in the raid (85).
Japan’s best means of attacking the US mainland was to hang bombs from balloons and drift them into the eastward jet stream. The Japanese government thus managed to kill five people in Oregon.
While the atomic bomb “got the press”, American napalm was thus established as the truly “most effective weapon”. While each atomic bombing cost $13.5 billion, incinerating cities with napalm cost only $83,000 “per metropolis” – relatively speaking, nothing. Napalm was now understood by the US military as the real bringer of “Armageddon”, and was then used accordingly in its next major military campaigns in foreign countries. (North America and Australia remain the only two continents where napalm has never actually been used on people. It has been used by many other militaries, largely US clients, but no one has used it to the extent of the United States.)
While the text continues tracing the use of napalm up to the present, the sections on the development of napalm and then its first major use, on Japan, are the most powerful – even though, after determining napalm’s power, the US used it more extensively on Korea and Vietnam (in the latter case, mostly, as the author notes, in South Vietnam, where there was no opposing air-force or air-defense). I think this is somewhat intentional, since part of the author’s goal, I argue below, is to justify the US’s use of napalm. This is much easier to do regarding WWII, as it is overwhelmingly interpreted by Americans as a “good war” and thus requires no justification, whereas the selectively “forgotten” Korean war or the often shame-invoking Vietnam war require historical manipulations or omissions to make US actions at least semi-thinkable. So, from here I will give a broader summary and critique of the book.
One important theoretical and historical argument that the author makes is that while there was virtually no American opposition to the use of napalm in WWII or against Korea (indeed, there was celebration; in WWII, the press did not even mention human victims in its initial reports of the raids, only property damage), in the course of the Vietnam war, massive disgust and opposition resulted from the US’s widespread use of the incendiary chemical concoction. (During the Korean war, there was foreign opposition to the US’s use of napalm to incinerate Korean cities. Even Winston Churchill, who oversaw the brutal torture or killing of millions of people elsewhere, such as in India, remarked that the US’s napalm use was “very cruel”: the US was “splashing it all over the civilian population”, “tortur[ing] great masses of people”. The US official who took this statement declined to publicize it [102-3].) Because of concerted opposition to napalm and corporations (particularly Dow Chemical) that produced napalm for the military, the gel became regarded as a “worldwide synonym for American brutality” (224). Neer asserts that a reason for this is that “authorities did not censor” during the Vietnam war to the extent that they did “during World War II and the Korean War” (148). Images of children and others horrifically burnt or incinerated by napalm therefore became available to the public and incited people like Dr. Bruce Franklin and Dr. Martin Luther King, Jr., to engage in group actions to stop the war and the use of napalm. What this says about the effectiveness of imagery and government and corporate control of imagery, and information generally – and about Franklin’s observation that censorship was increased in response to opposition to the Vietnam war (Vietnam and Other American Fantasies) – may be disquieting.
However, Neer points out (and in part seems to lament), the image of napalm was never salvaged, except for within a sub-group of personality-types (in this text limited to the rabble) who had always enthusiastically supported its use, referring to its Vietnamese victims in racist and xenophobic terms such as “ungodly savages”, “animals” (130), etc., or with statements such as “I Back Dow [Chemical]. I Like My VC [Vietcong] Well Done” (142). These kinds of statements were often embarrassing to corporate and government officials who tried to defend their use of the chemical on “humanitarian” and other such grounds, in apparent contrast to the low-brow rabble that simply admitted it liked the idea of roasting people alive. When W. Bush used napalm and other incendiaries against personnel in his invasion of Iraq, initiated in 2003, the weapon’s reputation was then such, on balance, that the administration at first tried to deny that it was being used (e.g. 210). In academic biographies of the main inventor of napalm, Louis Fieser, Neer notes that the fire-gel goes mysteriously unmentioned.
Attention on napalm due to American use of it in Vietnam resulted in multiple expert and expert-panel assessments of the weapon, and the issue was repeatedly raised in the UN General Assembly – which, since the Korean War and the rise of the decolonization climate, had drifted increasingly away from purely Western colonial, American-led control. (During the Korean War, China had not been admitted to the UN and the USSR abstained from participation.) In 1967, Harvard Medical School instructor Peter Reich and senior physician at Massachusetts General Hospital Victor Sidel called napalm a “chemical weapon” that causes horrific burns, and said it is particularly dangerous for children and has a devastating psychological clout. They said doctors should familiarize themselves with napalm’s effects (133). In 1968, the UN General Assembly voted in favor of a resolution deploring “the use of chemical and biological means of warfare, including napalm bombing” (175). In 1971, the UNGA called napalm a “cruel” weapon. In 1972, it again overwhelmingly approved a resolution deploring the use of napalm “in all armed conflicts”, noting it regarded the weapon “with horror” (178). An expert panel agreed, calling napalm a “savage and cruel” “area weapon” of “total war” (176). The United States abstained from or opposed all of these overwhelmingly approved resolutions.
While napalm ultimately lost the battle for public opinion, its use today is only technically outlawed against civilians and civilian areas – an agreement reached in 1980 and finally ratified by the US, with self-exceptions of dubious legality, in 2009.
While the text is highly informative and readable, my main critique is that as it presents the reality of napalm and its use, it drifts – seemingly out of nationalistic necessity – into a partisan defense of the United States. My problem with this is that Neer does not state this position outright but argues it implicitly, through omission. Regarding WWII, defending US actions requires little work. Most people who would read this book, including myself, know that the crimes committed by Germany and Japan were perpetrated on a scale far vaster than the violent actions carried out by the US at the time. However, there is an interesting point within this observation, which Neer should be commended for not necessarily shying away from: if we imagine a parallel situation of a group attacking a second group that a) militarily attacked the first group and b) is universally recognized for performing terrible acts, it does not mean the first group is angelic and thereafter morally justified in anything it wants to do. (An example to illustrate the parallel might be Iran’s anti-ISIS campaign, which Iran is using in ways similar to how the US uses WWII, to legitimate itself and justify subsequent actions.) The first group, even if less criminal, can still be incredibly brutal, and can easily issue self-serving justifications (such as expediency, “humanitarianism”, etc.) for its brutality. This is a dynamic that may be illustrated in, for example, the fact that the US’s March 9 attack on Tokyo was and remains the single deadliest one-night act of war in world history. Germany and Japan were far worse overall at the time, but this does not mean the people in the US administration were Gandhi, or that everything the US did should be celebrated or issued blanket justification. Robert McNamara, for example, LeMay’s top lieutenant in WWII and later architect of the efficiency-maximizing “body-count” policy in Vietnam (See Turse, Kill Anything that Moves), said the firebombing of Tokyo “was a war crime” (226). Still, Neer limits understanding here, and covers for “his” side, by omitting any discussion of racism (more on this below), and may only be more willing to detail US actions because of the distance in time and the feeling that any action in WWII is justified by Germany and Japan’s unthinkable criminality. (We might also note that, for example, Zinn, in his history of the United States, argues that the US was supportive of both German and Japanese state terrorism and aggression before the two nations made their desperate go-for-broke bids for empire-extension and colonization-avoidance, and that, in terms of Germany, as the documentary record illustrates, the US was not motivated by a desire to save Jewish people.)
Regarding the Korean War, Neer’s method for “justifying” the US’s use of napalm is to omit literally everything that happened contextually before North Korean forces crossed the 38th parallel, and to act as if the UN imprimatur for the Western war in Korea was meaningful, and not essentially the US approving its own war-plans. He does say that China and Russia did not participate in the UN then (China because it was not allowed and Russia by protest of China’s exclusion, according to Neer), but he does not explicitly note, as, say, does Banivanua-Mar in Decolonizing the Pacific, that the UN at this point was simply a Western colonial (and neocolonial) military alliance utterly dominated by the United States, with no opposition. Thus, UN imprimatur meant nothing like what it would mean today, when it is still highly problematic. “UN forces”, as Neer implicitly illustrates at one point, were basically US forces.[i] On the other issue, Neer has no excuse for omitting everything that happened before NK troops crossed the 38th parallel because (for other reasons) he cites Bruce Cumings, whose authoritative seminal study The Korean War: A History points out that before DPRK (NK) troops entered, the US had itself invented the 38th parallel by looking at a map and guessing the halfway point. The line was an arbitrary US creation to serve US interests and tactics, not a Korean one. The US then propped up a dictator in the South and exterminated one or two hundred thousand people before the NK troops “invaded” by crossing the US’s arbitrary line. The troops from the North, like much if not most of the population, did not accept the artificial division or the US-backed dictatorship that was exterminating people in the South. Cumings also says the US war on North Korea constituted “genocide”, and says the NK troops empirically, i.e. simply by the numbers, behaved far better than American or South Korean forces, as unacceptable as this is to the mind of a fanatically ‘anti-communist’ culture. Reckoning with the US’s pouring of “oceans of napalm” [ii] on Korea in this light thus becomes more challenging – even more so if racism is not omitted, as it also is in Neer’s account. Cumings, by contrast, notes that Americans referred to “all Koreans, North and South”, as “gooks”, and to the Chinese as “chinks”. This was part of a “logic” that said “they are savages, so that gives us a right to shower napalm on innocents.”[iii]
Neer even engages in this a bit himself, demonstrating some of what historian Dong Choon Kim notes was an attitude of dehumanization of the “other”. Kim writes that the “discourse and rhetoric that US and ROK [South Korea] elites used dehumanizing the target group (‘communists’) was similar to what has occurred in … cases of genocide”.[iv] Neer, for example, says, using the US’s self-serving ideological framing, that napalm “held the line against communism” in the 1950s and then “served with distinction” in Vietnam – characterizations seemingly intended to evoke strength, honor, and rightness.
Neer also says China “invaded” North Korea (96). This is false. The US didn’t like it, but China was invited into North Korea by the DPRK regime. Unlike the US, China did not cross the US’s 38th parallel. The characterization of China as invader in this context is also curious given that Neer never once says the US (or UN) invaded North Korea or Vietnam. US actions are thus never characterized as invasions, while China’s invited defense of North Korea, which remained entirely within that territory, is.
Regarding Vietnam, Neer again justifies US action through omission of context such as the Geneva Accords of 1954 [v] and the US’s own findings that the vast majority of the Vietnamese population supported the independence/anti-colonial/communist movement that the US was trying to prevent from holding the nationwide unification vote mandated by the Geneva Accords. Also interestingly in this chapter, Neer gives his only editorial characterization of the use of napalm as an “atrocity” – in describing a “Vietcong” use of napalm, which Neer says the Vietcong barely used – flamethrowers were a small part of their arsenal. Yet a relatively minor use of napalm by the “Vietcong” merits a casual editorial value-judgment by Neer as an “atrocity” while no other action in the text does so.
Neer at one point says that Cuba and the USSR used napalm against “pro-Western forces in Angola in 1978” (194). In this case, omission is used to condemn, rather than justify, napalm use, since Neer fails to mention that those “pro-Western forces”, which indeed were pro-Western and US-supported, were Apartheid regimes massacring black people and trying to maintain openly white supremacist dictatorships. Thus, when the nature of a regime serves the purpose of justifying American use of napalm, it is highlighted, but when, if the same logic were applied, it might “justify” a non-Western use of napalm, the nature of the regime is imbued with a positive hue as “pro-Western” – thus implicitly condemning the non-Western forces’ use of napalm.
One gets virtually zero sense in the book of the prevalence of racism in US culture during these time periods. It is reduced to a couple of unknown, fringe civilians making comments in favor of napalm – comments then contrasted with the more sophisticated producers of napalm, who are characterized as embarrassed by the ugly racist remarks. The omission of racism stands in sharp contrast to many other histories of the eras, such as Dower’s history of WWII (War Without Mercy), in which he notes that an exterminationist ethos towards the Japanese was present in a minority of the US population generally, but much more prevalent in elite political circles carrying out the US’s military actions. Dehumanizing terms like “Jap” and “gook” are thus never mentioned once in Neer’s text, though they were used all the time. One gets the sense that Neer feels that including the extent of American racism (even race-law; see Hitler’s American Model, by Whitman, or The Color of Law, by Rothstein) along with his accounts of America blanketing defenseless Asian cities with napalm would allow an image of the US that, though historically accurate, would be too unpalatable to be acceptable.
All of this may not be completely surprising given that Neer teaches a course about US history called “Empire of Liberty”, which, for example, includes two texts by Max Boot, often regarded as a “neocon”. I have no issue, in theory, with taking this position, but if doing so requires omissions as large as some of those mentioned above, in at least one case even flirting with genocide-denial, or at least avoidance of the debate (i.e., completely omitting the US-backed South Korean dictatorship), I start to question the position’s validity.
Overall, though, if one wants to learn about napalm and some things it illustrates about US history and ideology, this text should certainly be read – in conjunction with others that give a fuller picture of the reality of the times.
Robert J. Barsocchini is working on a Master’s thesis in American Studies. Years serving as a cross-cultural intermediary for corporations in the film and Television industry sparked his interest in discrepancies between Western self-image and reality.
[i] Neer notes that Eighth Army Chemical Engineer Corps officer Bode said that of the approx. 70,000 pounds of napalm being thrown on Korea on “a good day”, about 60,000 pounds of this was thrown by US forces. P. 99.
[ii] Cumings, Bruce. The Korean War: A History. Modern Library. 2011. P. 145.
[iii] Ibid. pp. 81, 153.
[iv] Kim, Dong Choon. “Forgotten War, Forgotten Massacres—the Korean War (1950–1953) as Licensed Mass Killings.” Journal of Genocide Research, vol. 6, no. 4, 2004, pp. 523–544. P. 17.
[v] Neer does mention other Vietnam-related events in the 1950s, thus giving at least some broader context.
A recent article on the troubles and tribulations of the F 35 fighter claims that:
“… this flying supercomputer … is a cascading series of compromises that has produced an aircraft inadequate to meet any of its functions …” Critics claim that the F 35 “can’t turn, can’t run, can’t climb”.
While the issue of the true performance of the F35 is something that must be left to the experts (the pilots), we claim that modern (and complex) products cannot and should not be designed without taking complexity into account. Here is why.
The trend to articulate product offerings is putting pressure on manufacturing companies like never before. Therefore, the complexity of modern products and of the associated manufacturing processes is rapidly increasing. High complexity, as we know, is a prelude to vulnerability. It is a fact that in all spheres of social life excessive complexity leads to inherently fragile situations. Humans perceive this intuitively and try to stay away from highly complex situations. But can complexity be taken into account in the design and manufacturing of products? The answer is affirmative. Recently developed technology, which allows engineers to actually measure the complexity of a given design or product, makes it possible to use complexity as a design attribute. Therefore, a product may today be conceived and designed with complexity in mind from day one. Not only stresses, frequencies or fatigue life but also complexity can become a design target for engineers. Evidently, if CAE is to cope with the inevitable increase of product complexity, complexity must somehow enter the design-loop. As mentioned, today this is possible. Before going into details of how this may be done, let us first take a look at the underlying philosophy behind a “Complexity-Based CAE” paradigm. Strangely enough, the principles of this innovative approach to CAE were established in the 14th century by the Franciscan friar William of Ockham when he announced his law of parsimony – “Entia non sunt multiplicanda praeter necessitatem” – which boils down to the more familiar “All other things being equal, the simplest solution is the best.” The key, of course, is measuring simplicity (or complexity). Today, we may phrase this fundamental principle in slightly different terms:
Complexity X Uncertainty = Fragility
This is a more elaborate version of Ockham’s principle (known as Ockham’s razor) which may be read as follows: The level of fragility of a given system is the product of the complexity of that system and of the uncertainty of the environment in which it operates. In other words, in an environment with a given level of uncertainty or “turbulence” (sea, atmosphere, stock market, etc.) a more complex system/product will be more fragile and therefore more vulnerable. Evidently, in the case of a system having a given level of complexity, if the uncertainty of its environment is increased, this too leads to an increase of fragility. We could articulate this simple concept further by stating that:
C_design X (U_manufacturing + U_environment) = F
In the above equation we explicitly indicate that the imperfections inherent to the manufacturing and assembly process introduce uncertainty which may be added to that of the environment. What this means is simple: more audacious (highly complex) products require more stringent manufacturing tolerances in order to survive in an uncertain environment. Conversely, if one is willing to decrease the complexity of a product, then a less sophisticated and less expensive manufacturing process may be used if the same level of fragility is sought. It goes without saying that concepts such as fragility and vulnerability are intimately related to robustness. High fragility = low robustness. In other words, for a given level of uncertainty in a certain operational environment, the robustness of a given system or product is inversely related to its complexity. As mentioned, excessive complexity is a source of risk, also in engineering.
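Purely as an illustration of the trade-off just described, the short Python sketch below evaluates the fragility relation for arbitrary placeholder values; the numbers are not outputs of any complexity-measurement tool.

```python
# Illustrative only: complexity and uncertainty values are arbitrary placeholders.
def fragility(c_design: float, u_manufacturing: float, u_environment: float) -> float:
    """F = C_design * (U_manufacturing + U_environment)."""
    return c_design * (u_manufacturing + u_environment)

baseline = fragility(c_design=8.0, u_manufacturing=0.2, u_environment=0.3)  # 4.0
simpler = fragility(c_design=5.0, u_manufacturing=0.2, u_environment=0.3)   # 2.5

# For the same target fragility as the baseline, the simpler design tolerates a
# rougher manufacturing process: 8.0 * (0.2 + 0.3) == 5.0 * (0.5 + 0.3).
relaxed = fragility(c_design=5.0, u_manufacturing=0.5, u_environment=0.3)
print(baseline, simpler, relaxed)
```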
Now that we understand why measuring complexity may open new and exciting possibilities in CAE and CAD, let us take a closer look at what complexity is and how it can be incorporated in the engineering process by becoming a fundamental design attribute. In order to expose the nature of complexity, an important semantic clarification is due at this point: the difference between complex and complicated. A complicated system, such as a mechanical wristwatch, is indeed formed of numerous components – in some cases as many as one thousand – which are linked to each other but, at the same time, the system is also deterministic in nature. It cannot behave in an uncertain manner. It is therefore easy to manage. It is very complicated but with extremely low complexity. Complexity, on the other hand, implies the capacity to deliver surprises. This is why humans intuitively don’t like to find themselves in highly complex situations. In fact, highly complex systems can behave in a myriad of ways (called modes) and have the nasty habit of spontaneously switching mode, for example from nominal to failure. If the complexity in question is high, not only does the number of failure modes increase, but the effort necessary to cause catastrophic failure decreases in proportion.
Highly complicated products do not necessarily have to be highly complex. It is also true that high complexity does not necessarily imply very many interconnected components. In fact, a system with very few components can be extremely difficult to understand and control. And this brings us to our definition of complexity. Complexity is a function of two fundamental components:
- Structure. This is reflected via the topology of the information flow between the components in a system. Typically, this is represented via a System Complexity Map or a graph in which the components are the nodes (vertices) of the graph, connected via links (see example below).
- Entropy. This is a fundamental quantity which measures the amount of uncertainty of the interactions between the components of the system.
Obtaining a System Complexity Map is simple. Two alternatives exist.
- Run a Monte Carlo Simulation with a numerical (e.g. FEM) model, producing a rectangular array in which the columns represent the variables (nodes of the map) and the rows correspond to different stochastic realizations of these variables.
- Collect sensor readings from a physical time-dependent system, building a similar rectangular array, in which the realizations of the variables are obtained by sampling the sensor channels at a specific frequency (a minimal sketch of building such an array is shown after this list).
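The Python/NumPy sketch below illustrates the first alternative. OntoNet™'s actual map-extraction algorithm is proprietary and not documented here, so the sketch substitutes a simple correlation threshold purely to show how the rectangular array is laid out and turned into a node-link structure; the array sizes, the injected dependency, and the threshold are all assumptions.

```python
import numpy as np

# Build the rectangular array described above: one column per variable (map node),
# one row per Monte Carlo realization (or per sensor sample in the time-domain case).
rng = np.random.default_rng(0)
n_runs, n_vars = 500, 6                # hypothetical sizes
X = rng.normal(size=(n_runs, n_vars))  # stand-in for FE outputs / sensor channels
X[:, 3] = 0.8 * X[:, 0] + 0.2 * rng.normal(size=n_runs)  # make two variables interact

# Stand-in for the map-extraction step: link two nodes whenever their sample
# correlation exceeds an (arbitrary) threshold.
corr = np.corrcoef(X, rowvar=False)
adjacency = (np.abs(corr) > 0.5) & ~np.eye(n_vars, dtype=bool)
print(adjacency.astype(int))           # node-link structure of the resulting map
```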
Once such arrays are available, they may be processed by OntoNet™ which directly produces the maps. A System Complexity Map, together with its topology, reflects the functionality of a given system. Functionality, in fact, is determined by the way the system transmits information from inputs to outputs and also between the various outputs. In a properly functioning system at steady-state, the corresponding System Complexity Map is stable and does not change with time. Evidently, if the system in question is deliberately driven into other modes of functioning – for example from nominal to maintenance – the map will change accordingly. An example of a System Complexity Map of a power plant is illustrated below:
A key concept is that of hub. Hubs are nodes in the map which possess the highest degree (number of connections to other nodes). Hubs may be regarded as critical variables in a given system since their loss causes massive topological damage to a System Complexity Map and therefore loss of functionality. Loss of a hub means one is on the path to failure. In ecosystems, hubs of the food-chain are known as keystone species. Often, keystone species are innocent insects or even single-cell animals. Wipe it out and the whole ecosystem may collapse. Clearly, single-hub ecosystems are more vulnerable than multi-hub ones. However, no matter how many hubs a system has, it is fundamental to know them. The same concept applies to engineering of course. In a highly sophisticated system, very often even the experienced engineer who has designed it does not know all the hubs. One reason why this is the case is because CAE still lacks the so-called systems-thinking and models are built and analyzed in “stagnant compartments” in a single-discipline setting. It is only when a holistic approach is adopted, sacrificing details for breadth, that one can establish the hubs of a given system in a significant manner. In effect, the closer you look the less you see.
Robustness has always been a concern of engineers. But can complexity be used to define and measure robustness? There exist many “definitions” of robustness. None of them is universally accepted. Most of these definitions talk of insensitivity to external disturbances. It is often claimed that low scatter in performance reflects high robustness and vice-versa. But scatter really reflects quality, not robustness. Besides, such “definitions” do not allow engineers to actually measure the overall robustness of a given design. Complexity, on the other hand, not only allows us to establish a new and holistic definition of robustness, but it also makes it possible to actually measure it, providing a single number which reflects “the global state of health” of the system in question. We define robustness as the ability of a system to maintain functionality. How do you measure this? In order to explain this new concept it is necessary to introduce the concept of critical complexity. Critical complexity is the maximum amount of complexity that any system is able to sustain before it starts to break down. Every system possesses such a limit. At critical complexity, systems become fragile and their corresponding Process Maps start to break up. The critical complexity threshold is determined by OntoNet™ together with the current value of complexity. The global robustness of a system may therefore be expressed as the distance that separates its current complexity from the corresponding critical complexity. In other words, R = (C_cr – C)/C_cr, where C is the system complexity while C_cr is the critical complexity. With this definition in mind it now becomes clear why Ockham’s rule so strongly favors simpler solutions! A simpler solution is farther from its corresponding criticality threshold than a more complex one – it is intrinsically more robust.
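In code, the robustness measure is a one-line ratio. The Python sketch below assumes that both the current complexity C and the critical complexity C_cr are already available (e.g., as reported by a tool such as OntoNet™); the critical value used here is an invented placeholder, while the two design complexities reuse the bridge values quoted later in the text.

```python
def robustness(c: float, c_critical: float) -> float:
    """R = (C_cr - C) / C_cr: relative distance from the criticality threshold."""
    return (c_critical - c) / c_critical

# Hypothetical critical complexity of 12.0; design complexities 5.4 and 8.5 are
# the two bridge solutions mentioned in the text.
print(robustness(c=5.4, c_critical=12.0))  # simpler solution -> larger margin
print(robustness(c=8.5, c_critical=12.0))  # more complex solution -> smaller margin
```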
The new complexity-based definition of robustness may also be called topological robustness as it quantifies the “resilience” of the system’s System Complexity Map in the face of external and internal perturbations (noise). However, the System Complexity Map itself carries additional fundamental information that establishes additional mechanisms to assess robustness in a more profound way. It is obvious that a multi-hub system is more robust – the topology of its System Complexity Map is more resilient, its functionality is more protected – than a system depending on a small number of hubs. A simple way to quantify this concept is to establish the degree of each node in the System Complexity Map – this is done by simply counting the connections stemming from each node – and to plot them according to increasing order. This is known as the connectivity histogram. A spiky plot, known also as a Zipfian distribution, points to fragile systems, while a flatter one reflects a less vulnerable System Complexity Map topology.
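The degree count behind the connectivity histogram is straightforward to compute from an adjacency matrix. The small map in the sketch below is hypothetical and chosen only to show the spiky, hub-dominated profile described above.

```python
import numpy as np

# Hypothetical single-hub System Complexity Map as a boolean adjacency matrix.
adjacency = np.array([[0, 1, 1, 1, 1],
                      [1, 0, 0, 0, 0],
                      [1, 0, 0, 0, 0],
                      [1, 0, 0, 0, 1],
                      [1, 0, 0, 1, 0]], dtype=bool)

degrees = adjacency.sum(axis=1)      # number of links stemming from each node
print(np.sort(degrees))              # [1 1 2 2 4] -> spiky, hub-dependent profile
# A Zipf-like (spiky) profile flags fragility: losing the single hub destroys most
# of the map's connectivity. A flatter profile indicates a more resilient topology.
```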
The density of a System Complexity Map is also a significant parameter. Maps with very low density (below 5-10%) point to systems with very little redundancy, i.e. with very little fail-safe capability. Highly dense maps, on the other hand, reflect situations in which it will be very difficult to make modifications to the system’s performance, precisely because of the high connectivity. In such cases, introducing a change at one node will immediately impact other nodes. Such systems are “stiff” in that reaching acceptable compromises is generally very difficult and often the only alternative is re-design.
And how about measuring the credibility of models? Models are only models. Remember how many assumptions one must make to write a partial differential equation (PDE) describing the vibrations of a beam? The beam is long and slender, the constraints are perfect, the displacements are small, shear effects are neglected, rotational inertia is neglected, the material is homogeneous, the material is elastic, sections remain plane, loads are applied far from constraints, etc., etc. How much physics has been lost in the process? 5%? 10%? But that’s not all. The PDE must be discretized using finite difference or finite element schemes. Again, the process implies an inevitable loss of physical content. If that were not enough, very often, because of high CPU-consumption, models are projected onto the so-called response surfaces. Needless to say, this too removes physics. At the end of the day we are left with a numerical artifact which, if one is lucky (and has plenty of grey hair), correctly captures 80-90% of the real thing. Many questions may arise at this point. For instance, one could ask how relevant is an optimization exercise which, exposing such numerical constructs to a plethora of algorithms, delivers an improvement of performance of, say, 5%. This and other similar questions bring us to a fundamental and probably most neglected aspect of digital simulation – that of model credibility and model validation. Knowing how much one can trust a digital model is of paramount importance:
- Models are supposed to be cheaper than the real thing – physical tests are expensive.
- Some things just cannot be tested (e.g. spacecraft in orbit).
- If a model is supposed to replace a physical test but one cannot quantify how credible the model is (80%, 90% or maybe 50%) how can any claims or decisions based on that model be taken seriously?
- You have a model with one million elements and you are seriously considering mesh refinement in order to get “more precise answers”, but you cannot quantify the level of trust of your model. How significant is the result of the mesh refinement?
- You use a computer model to deliver an optimal design but you don’t know the level of trust of the model. It could very well be 70% or 60%. Or less. You then build the real thing. Are you sure it is really optimal?
But is it possible to actually measure the level of credibility of a computer model? The answer is affirmative. Based on complexity technology, a single physical test and a single simulation are sufficient to quantify the level of trust of a given computer model, providing the phenomenon in question is time-dependent. The process of measuring the quality of the model is simple:
- Run a test and collect results (outputs) in a set of points (sensors). Arrange them in a matrix.
- Run the computer simulation, extracting results in the same points and with the same frequency. Arrange them in a matrix.
- Measure the complexity of both data sets. You will obtain a Process Map and the associated complexity for each case, C_t and C_m (test and model, respectively).
The following scenarios are possible:
- The values of complexity for the two data sets are similar: your model is good and credible.
- The test results prove to be more complex than simulation results: your model misses physics or is based on wrong assumptions.
- The simulation results prove to be more complex than the physical test results: your model probably generates noise.
But clearly there is more. Complexity is equivalent to structured information. It is not just a number. If the complexities of the test and simulation results are equal (or very similar) one has satisfied only the necessary condition of model validity. A stronger sufficient condition requires in addition the following to hold:
- The topologies of the two Process Maps are identical.
- The hubs of the maps are the same.
- The densities of the maps (i.e. ratio of links to nodes) are the same.
- The entropy content in both cases is the same.
The measure of model credibility, or level of trust, may now be quantified as:
MC = abs[ (C_test – C_model)/C_test ]
The figure below illustrates the System Complexity Maps obtained from a crash test (left) and simulation (right). The simulation model has a complexity of 6.53, while the physical test 8.55. This leads to a difference of approximately 23%. In other words, we may conclude that according to the weak condition, the model captures approximately 77% of what the test has to offer. Moreover, the System Complexity Maps are far from being similar. Evidently, the model still requires a substantial amount of work.
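The index itself is trivial to compute once the two complexities are known; the harder part — comparing map topology, hubs, density, and entropy for the sufficient condition — is not reducible to a single number. The Python sketch below reuses the crash-example complexities quoted above; everything else about the example is assumed.

```python
def model_credibility(c_test: float, c_model: float) -> float:
    """MC = |(C_test - C_model) / C_test| -- the weak (necessary) condition only."""
    return abs((c_test - c_model) / c_test)

# Crash example from the text: physical test complexity 8.55, simulation 6.53.
mc = model_credibility(c_test=8.55, c_model=6.53)
print(f"complexity gap: {mc:.1%} -> model captures roughly {1 - mc:.1%} of the test")

# The stronger, sufficient condition would additionally require similar map
# topology, the same hubs, comparable density, and comparable entropy content.
```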
But there is more – the same index may be used to “measure the difference” between two models in which:
- The FE meshes have different bandwidth (a fine and a coarse mesh are built for a given problem).
- One model is linear, the other is non-linear (one is not sure if a linear model is suitable for a given problem).
- One model is run on 1 CPU and then on 4 CPUs (it is known that with explicit models this often leads to different results).
And what about complexity and CAD? It is evident to every engineer that a simpler solution to a given problem is almost always:
- Easier to design
- Easier to assemble/manufacture
- Easier to service/repair
- Intrinsically more robust
The idea behind complexity-based CAD is simple: design a system that is as simple as possible but which fulfills functional requirements and constraints. Now that complexity may be measured in a rational manner, it can become a specific design objective and target and we may put the “Complexity X Uncertainty = Fragility” philosophy into practice. One way to proceed is as follows:
- Establish a nominal parametric model of a system (see example below, illustrating a pedestrian bridge)
- Generate a family of topologically feasible solutions using Monte Carlo Simulation (MCS) to randomly perturb all the dimensions and features of the model.
- Generate a mesh for each Monte Carlo realization.
- Run an FE solver to obtain stresses and natural frequencies.
- Process the MCS with OntoNet™.
- Define constraints (e.g. dimensions) and performance objectives (e.g. frequencies, mass).
- Obtain a set of solutions which satisfy both the constraints as well as the performance objectives.
- Obtain the complexity for each solution
- Select the solution with the lowest complexity.
The above process may be automated using a commercial CAD system with meshing capability, a multi-run environment which supports Monte Carlo simulation and an FE solver.
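A rough Python sketch of how such an automation loop might be wired together is shown below. The geometry, FE, and complexity-measurement steps are replaced by toy stand-in functions so the control flow actually runs; in a real implementation they would call the CAD system, the FE solver, and a complexity engine such as OntoNet™, and the constraints, targets, and perturbation range used here are invented.

```python
import random

def fe_solver_stub(params):
    # Stand-in for CAD rebuild + meshing + FE solve: returns mass, frequency, stress.
    span, depth = params["span"], params["depth"]
    return {"mass": span * depth * 7.85, "freq": 30.0 * depth / span, "stress": 200.0 / depth}

def complexity_stub(results):
    # Stand-in for the complexity measurement step (toy score, not a real metric).
    return 0.1 * results["mass"] + 0.01 * results["stress"]

def complexity_based_design(nominal, n_samples=500, seed=1):
    rng = random.Random(seed)
    feasible = []
    for _ in range(n_samples):
        # Randomly perturb all dimensions of the nominal parametric model.
        params = {k: v * rng.uniform(0.9, 1.1) for k, v in nominal.items()}
        results = fe_solver_stub(params)
        # Keep only solutions satisfying the (invented) constraints and targets.
        if results["freq"] >= 2.0 and results["stress"] <= 160.0:
            feasible.append((complexity_stub(results), params))
    # Select the lowest-complexity solution among the acceptable ones.
    return min(feasible, key=lambda t: t[0])[1] if feasible else None

print(complexity_based_design({"span": 20.0, "depth": 1.5}))
```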
In the case of our bridge example, two solutions, possessing very similar mass, natural frequencies, stresses and robustness but dramatically different values of complexity, are illustrated below. The solution on the right has a complexity of 8.5 while the one on the left has 5.4.
Given that the complexity of man-made products, and the related manufacturing processes, is quickly growing, these products are becoming increasingly exposed to risk, given that high complexity inevitably leads to fragility. At the same time, the issues of risk and liability management are becoming crucial in today’s turbulent economy. But highly complex and sophisticated products are characterized by a huge number of possible failure modes – mainly due to discipline interaction – and it is a practical impossibility to analyze them all. An example is illustrated below, where a System Complexity Map of an aircraft is shown.
Therefore, the alternative is to design systems that are intrinsically robust, i.e. that possess built-in capacity to absorb both expected and unexpected random variations of operational conditions, without failing or compromising their function. Robustness is reflected in the fact that the system is no longer optimal, a property that is linked to a single and precisely defined operational condition, but results acceptable (fit for the function) in a wide range of conditions. In fact, contrary to popular belief, robustness and optimality are mutually exclusive. Complexity-based design, i.e. a design process in which complexity becomes a design objective, opens new avenues for the engineering. While optimal design leads to specialization, and, consequently, fragility outside of the portion of the design space in which the system is indeed optimal, complexity-based design yields intrinsically robust systems. The two paradigms may therefore be compared as follows:
- Old Paradigm: Maximize performance, while, for example, minimizing mass.
- New Paradigm: Reduce complexity accepting compromises in terms of performance.
A fundamental philosophical principle that sustains the new paradigm is L. Zadeh’s Principle of Incompatibility: High complexity is incompatible with high precision. The more something is complex, the less precise we can be about it. A few examples: the global economy, our society, climate, traffic in a large city, the human body, etc., etc. What this means is that you cannot build a precise (FE) model of a highly sophisticated system. And it makes little sense to insist – millions of finite elements will not squeeze precision from where there isn’t any. Nature places physiological limits to the amount of precision in all things. The implications are clear. Highly sophisticated and complex products and systems cannot be designed via optimization, precisely because they cannot be described with high precision. In fact, performance maximization (optimization) is an exercise of precision and this, as we have seen, is intrinsically limited by Nature. For this very reason, models must be realistic, not precise. | <urn:uuid:a7ca94ea-8115-45b0-8108-15307456bdb5> | CC-MAIN-2019-47 | https://ontonixqcm.wordpress.com/2014/08/16/the-f35-too-complex-to-succeed/ | s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496670389.25/warc/CC-MAIN-20191120010059-20191120034059-00016.warc.gz | en | 0.942253 | 4,465 | 2.59375 | 3 |
Abu ʿAbdullah Muhammad ibn Idris al-Shafi‘i is the leader of the Shafi’i school of fiqh (jurisprudence). He was born in Gaza, Palestine and died in Egypt. He was active in juridical matters and his teaching eventually led to the Shafi’i school of fiqh (or Madh’hab) named after him. He was prolific writer and one of his works was a collection of his poems. Shafi’i had special passion and love for Ahlul-Bayt as manifested and expressed in his famous poems. He expressly told that salvation and eternal prosperity rested in obedience and adherence to the teachings of the Prophet of Islam and his holy Ahlul-Bayt (household). He even considered them as his intercessors on the Day of Judgment. Indeed, not only did he consider them as his intercessors but he also linked the validity of his acts of worship and prayers to following them. He maintained that if someone did not salute the Prophet (pbuh) and his family in his prayers, his prayers were void.
Abu ʿAbdullah Muhammad bin Idris bin al-Abbas better known as Imam Shafi’i (150 – 204 A.H.) was the leader of the Shafi’i school of fiqh. The Shafi’i school is one of the four major Sunni schools of jurisprudence. Muhammad ibn Idris al-Shafi‘i was born in Gaza of Palestine. He was Quraishi from mother’s side. He was born in Gaza in the town of Asqalan and moved to Mecca when he was about two years old. He is reported to have studied under the scholars of his time including Muslim Ibn Khalid, the Mufti of Mecca at his time. Having studied Arabic literature with the latter, he started learning fiqh (jurisprudence).
Shafi’i was thirteen when he moved to Medina and started acquiring education under Malik bin Anas, the founder of the Maliki School of jurisprudence. At a young age, he memorized the Holy Quran and got acquainted with different religious sciences. His knowledge of Arabic grammar and Arabic literature was unique. He spent time among Huzail, Rabi’ah and Mudar finding precise roots and origins of words. He learnt fiqh under the students of Ibn Abbas who himself was a student of the Commander of the Faithful, Ali and some other students of the Ahlul-Bayt – peace be upon them. He also lived in Iraq and Egypt where he was busy teaching and giving fatawa (verdicts). Shafi’i lived in the era of the Abbasid caliphs. In the year 204 of the Islamic calendar he travelled to Egypt where he died and was buried in Cairo.
Shafi’i was the most prolific writer among the jurisprudents of his time. He benefited from his literary and scientific capacity in his works because he mastered Arabic literature, etymology, jurisprudential sciences and was very well acquainted with the principles of Arabic eloquence. According to some accounts, he reached a high rank in fiqh, hadith, Quranic sciences, poetry and etymology. He authored many works some of which are surviving. As many as 110 volumes of books and treatises have been ascribed to him most of them on fiqh (jurisprudence). Two of his works are most famous. They are Al-Resala — the best known book by al-Shafi'i in which he examined usul al-fiqh (sources of jurisprudence) – and Kitab al-Umm - his main surviving text on Shafi’i fiqh.
Shafi’i travelled to different places far and wide to acquire knowledge; he travelled to Mecca, Medina, Kufa, Baghdad, Egypt and Yemen. Imam Shafi’i got acquainted with the thoughts, views and doctrines of different jurisprudential and theological schools. He benefited from their experience and scientific achievements and he criticized them wherever necessary. At times, he debated with the prominent figures of these schools.
“Shafi’i had love and respect for Ali bin Abi Talib more than the rest of Sunni leaders. He says about Imam Ali (as) that Ali had four virtues and if someone other than him had one of those virtues, he deserved to be respected and dignified. They are piety, knowledge, bravery and dignity. Shafi’i also says that the Prophet (pbuh) reserved Ali for knowledge of Quran because the Prophet (pbuh) called on him, ordered him to judge among people. The Prophet (pbuh) signed (confirmed) his verdicts and would also say that Ali was true and Mu’awiyah was false.”
One of the subjects seen visibly in al-Shafi’i’s works and is well-known and which he emphasizes upon greatly is superiority of Ahlul-Bayt (as) and love of them.
In one of the hajj rituals when pilgrims were at Mina and when some enemies of the Ahlul-Bayt (as) were in attendance, Shafi’i called the love of Ahlul-Bayt an entitlement of the family of the Prophet and addressed the pilgrims as such:
یا راکِباً قِف بالمُحَصَّبِ مِن منیً
وَاهتِف بِقاعِدِ خَیفِها وَالنَّاهِضِ
سَحَراً إذا فاضَ الحَجیجُ إلی مِنیً
فَیضاً کَمُلتَطِمِ الفُراتِ الفائِضِ
إن کانَ رَفضاً حُبُّ آلِ مُحمَّدٍ
فَلیَشهَدِ الثَّقَلانِ أنّی رَافضِی
"O' Pilgrims! On your way to the House of Allah, pause shortly in the sands of Muzdalifah. At dawn, when the caravans of pilgrims move toward Mina, like a roaring river, call upon them and say: "If love of the Prophet's family means “rafdh", then let mankind know, that surely I am a "Rafedhi."
One of the points raised in Shafi’i’s poems is love of Ahlul-Bayt, the family of the Prophet (peace be upon him and his family) which he deems obligatory. Shafi’i maintains that if a person does not declare salawaat for him and his family, his prayer is incomplete and will not be accepted:
یا آلَ بَیتِ رَسولِ الله حُبُّکُمُ
فَرضٌ مِنَ الله فی القُرآنِ أنزَلَهُ
کَفاکُم مِن عظیمِ القَدرِ أنّکُم
مَن لَم یُصلِّ عَلَیکُم لَا صَلَاةَ لَهُ
"O Ahle Bait of the Prophet of Allah! Love for you has been made obligatory for us by Allah, as revealed in the Holy Qur'an (referring to the above verse). It is sufficient for your dignity that if one does not send salutations to you in the ritual prayers, his prayers will not be accepted."
Among the other poems Shafi’i has composed are poems about Imam Ali (as). When asked about Imam Ali (as), Shafi’i said painfully as such:
إنّا عَبــیدٌ لِفتیً أنزلَ فِیـــهِ هَل أتَی
إلی مَتــی أکتُمُهُ؟ إلی مَتی؟ إلی مَتی؟
“I am the servant of that young man about whom Sura Hal Ataa (Chapter Insaan) was revealed. How long should I conceal it? How long? How long?”
He was the one who expressly and openly declared his love and attachment towards the successor of the Prophet. It was at a time when a suffocating atmosphere had been imposed on the lovers of Ali (as):
قَالُوا: تَرَفَّضتَ قُلتُ: کَلَّا
مَا الَّرفضُ دینی وَ لا إعتِقادی
لکِن تَوَلَّیتُ غَیرَ شَکٍّ
خَیرَ إمَامٍ وَ خَیرَ هَادی
إن کَانَ حُبُّ الوَلِیِّ رَفضاً
فإنَّنی أرفَضُ العِبادَ
They say: You are a Rafedhi and heretic, I say: Never did I become a Rafedhi, apostasy is not my religion. But needless to say that in my heart, there is much love (and respect) for the greatest leader (Imam Ali (As). If loving Wali of Allah (the friend of God?) is Rafdh, then I'm Rafedhi of 1st rank!
In other poems, he makes clear and tangible reference to intercession of the Ahlul-Bayt (as) of the Holy Prophet (pbuh) hoping to be interceded with God by them on the Day of Resurrection, the day of reckoning:
لَئِن کانَ ذَنبِی حُبُّ آلِ محمَّدٍ
فذلِکَ ذَنبٌ لَستُ عَنهُ أتوبُ
هُمُ شُفَعائی یومَ حَشری و مَوقِفی إذا
کثرتنی یوم ذاک ذنوب
If loving the household of Prophet (pbuh) is a sin, then I will never repent on this sin!
Of course, on the Day of Judgment, they (Ahl-e-Bait) will be my intercessors on the Day when I shall be resurrected. That is when my sins are too many on that day.
Shafi’i was in a gathering when someone started speaking about Ali (as), his two noble sons and his pure wife. A man who had been from the enemies of Ahlul-Bayt (as) says: It is not good to speak about them. Leave it as it is a talk about Rafedhis. At this time Shafi’i says in his poems:
بَرِئتُ إلَی المُهیمِنِ مِن أُناسٍ
یَرَونَ الرَّفضَ حُبَّ الفاطِمیّه
إذا ذکروا عَلیّا أو بَنیه
أفاضوا بالرّوایات الویّه
عَلی آل الرّسُول صلاة رَبی
وَ لَعنتُه لتلکَ الجَاهلیّه
I disassociate myself from those (people) who believe that remembering the sons of Fatima (AS) is Rafdh. If anyone talks about Ali (as) and Fatima (as) and their sons, they (enemies of Ahl-e-Bayt) mend this way, they think that it is a foolishness (to remember Ali and Fatima). Supplications (Duroods and Salams) of my Allah be upon Prophet (pbuh). And curse of Allah be upon this ignorance and infidelity (hating Ahul-Bayt).
Shafi’i considers love of the Ahlul-Bayt (as) to be in his flesh and blood and this family as a means for his growth, guidance and everything he has. He says:
و سائلی عن حُب أهل البیت هل؟
أُقرّ إعلاناً به أم أجحدُ
هَیهاتَ ممزوجٌ بلحمی و دَمی
حُبُّهم و هو الهُدی و الرشدُ
یا أهل بیتِ المصطفی یا عدتی
وَ مَن علی حبّهُم أعتَمدُ
أنتم إلی اللهِ غَداً وَسیلَتی
و کیف أخشی؟ و بکم اعتضدُ
ولیّکم فی الخُلدِ حَیّ خَالدٌ
و الضدُ فی نارٍ لَظیً مُخلّدُ
“O those who ask me about my love of Ahlul-Bayt (as); should I confess openly that I love them or should I deny that? Never shall I deny their love because their love and affinity is blended in my flesh and blood. Their love is a means of my guidance and growth. O family of Muhammad, O those whom I turn to, O those whose love is my reliance, you are my intercessors on the Day of Judgment. Why should I be afraid when I trust you and have confidence in you? He who loves you will reside eternally in Paradise and your enemies will be for ever in Hell fire”
Else where, he describes, in his poems, the pure family of Prophet Muhammad as the ark of salvation and love of them as the covenant of Allah. So he says:
وَ لَمَّا رَأیتُ النَّاسَ قَد ذَهَبَت بِهِم
مَذَاهِبُهُم فِی أبحُرِ الغَیِّ وَ الجَهلِ
رَکِبتُ عَلَی اسمِ الله فِی سُفُنِ النَّجَا
وَ هُم آل بَیتِ المُصطفَی خَاتَمِ الرُّسُلِ
وَ أمسَکتُ حَبلَ اللهِ وَ هُوَ وَلاءوهُم
کَما قَد أُمِرنَا بالتَمسُّکِ بالحَبلِ
When I saw different religions and jurisprudential schools steering towards ignorance and misguidance, I embarked in the name of God on the ark of salvation i.e. the family of the Seal of Prophets and got hold of the divine covenant which is the very love of them. Indeed, God has commanded us to hold fast to the divine covenant.
Therefore, it is understood from Shafi’i’s poems that he held high respect for the family of the Prophet of Islam (s) and he was not afraid of telling it to others. He considered guidance and prosperity to be found and achievable only through following the Prophet (pbuh) and his noble family. It is his ardent aspiration to be interceded with God on the Day of Judgment. Indeed, not only does he consider them as his intercessors but he also linked the validity of his acts of worship and prayers to following them. He maintained that if someone did not salute the Prophet (pbuh) and his family in his prayer, his prayer were void.
It is noteworthy that one of the reasons and proofs indicating Imam Shafi’i’s fondness and attachment towards Ahlul-Bayt (as) is his making reference to the sayings of Imam Sadiq (as) in his book titled “Al-Umm”.
The four Sunni legals schools or madhhabs which the majority of Muslims follow are Maliki, Hanafi, Shafi’i and Hanbali.
Vide: Badi’ Ya’qub, Emil, Diwan al-Imam al-Shafi’i, Beirut – Lebanon, Dar al-Kitab al-Arabi, 1430 A.H.
Masudi, Ali bin Hussein, Murawij al-Dhahab, vol.3, p. 437, Tehran, Scientific and Cultural Publications, 1374 (1995).
Vide: Tawakkoli, Muhammad Rauf, The Fourth Imam of Ahl al-Sunnah wa al-Jama’ah, Tehran, Tawkkoli, 1377 (A.H.)
Muhammad Abu Zuhra, p. 3, 252, Gulzihar, Al-Aqidah wash-Sharih’ah al-Islam, p. 200, cited from History of Shia and Islamic sects until the fourth century, Muhammad Jawad Mashkoor, third edition, p. 103 – 104 , Ishraqi Bookstore, 1262 (1983).
Collection of Imam Shafi’i Poetry, p. 93, Dar al-Kitab al-Arabi, Beirut, 1414 A.H.
Ibid, p. 115.
Ibid, p. 59.
Ibid, p. 72
Ibid, p. 48.
Ibid, p. 152.
Ibid, p. 222 – 223.
Ibid, p. 278. For more details vide: Praise of Ahlul-Bayt (as) in Muhammad bin Idri al-Shafi’I’s Poems, Ayyub Shafi’i Pour, Taghrib News Agency; Barfi, Muhammad, (article), A Study of the Causes of Convergence and Divergence between Shafi’i and Hanafi Religions with Imamiyah Shi’ah, Resalat News Papaper, No. 6942 dated 20, 12, 88 (Cultural); Suyuti Shafi’i, Jalaluddin Abdur Rahman, Virtues of Ali (as) (911 A.H.). | <urn:uuid:17654e54-2499-41d4-880d-ffd6ee75605b> | CC-MAIN-2019-47 | http://www.islamquest.net/en/archive/question/en18815 | s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496668772.53/warc/CC-MAIN-20191116231644-20191117015644-00258.warc.gz | en | 0.928114 | 4,605 | 2.71875 | 3 |
The morning of June 27th is a sunny, summer day with blooming flowers and green grass. In an unnamed village, the inhabitants gather in the town square at ten o’clock for an event called “the lottery.” In other towns there are so many people that the lottery must be conducted over two days, but in this village there are only three hundred people, so the lottery will be completed in time for the villagers to return home for noon dinner.
This seemingly idyllic beginning establishes a setting at odds with the violent resolution of the story. Early details, such as sun and flowers, all have positive connotations, and establish the theme of the juxtaposition of peace and violence. The lottery is mentioned in the first paragraph, but not explained until the last lines.
The children arrive in the village square first, enjoying their summer leisure time. Bobby Martin fills his pockets with stones, and other boys do the same. Bobby helps Harry Jones and Dickie Delacroix build a giant pile of stones and protect it from “raids” by other children. The girls stand talking in groups. Then adults arrive and watch their children’s activities. The men speak of farming, the weather, and taxes. They smile, but do not laugh. The women arrive, wearing old dresses and sweaters, and gossip amongst themselves. Then the women call for their children, but the excited children have to be called repeatedly. Bobby Martin runs back to the pile of stones before his father reprimands him and he quietly takes his place with his family.
The children’s activities—gathering stones—have a false innocence about them. Because this resembles the regular play of children, the reader may not assume gathering stones is intended for anything violent. The word “raids,” however, introduces a telling element of violence and warfare into the children’s innocent games. Similarly, the reader is lulled into a false sense of security by the calm and innocuous activities and topics of conversation among the adult villagers. We see the villagers strictly divided along gendered lines, even as children.
Mr. Summers, the man who conducts the lottery, arrives. He also organizes the square dances, the teen club, and the Halloween program, because he has time to devote to volunteering. He runs the coal business in town, but his neighbors pity him because his wife is unkind and the couple has no children. Mr. Summers arrives bearing a black box. He is followed by the postmaster, Mr. Graves, who caries a stool.
Mr. Graves sets the stool in the center of the square and the black box is placed upon it. Mr. Summers asks for help as he stirs the slips of paper in the box. The people in the crowd hesitate, but after a moment Mr. Martin and his oldest son Baxter step forward to hold the box and stool. The original black box from the original lotteries has been lost, but this current box still predates the memory of any of the villagers. Mr. Summers wishes to make a new box, but the villagers don’t want to “upset tradition” by doing so. Rumor has it that this box contains pieces of the original black box from when the village was first settled. The box is faded and stained with age.
The details of the lottery’s proceedings seem mundane, but the crowd’s hesitation to get involved is a first hint that the lottery is not necessarily a positive experience for the villagers. It is also clear that the lottery is a tradition, and that the villagers believe very strongly in conforming to tradition—they are unwilling to change even something as small as the black box used in the proceedings.
Much of the original ritual of the lottery has been forgotten, and one change that was made was Mr. Summers’s choice to replace the original pieces of wood with slips of paper, which fit more easily in the black box now that the population of the village has grown to three hundred. The night before the lottery, Mr. Summers and Mr. Graves always prepare the slips of paper, and then the box is kept overnight in the safe of the coal company. For the rest of the year, the box is stored in Mr. Graves’s barn, the post office, or the Martins’ grocery store.
Even though the villagers value tradition, many of the specific parts of their traditions have been lost with time. This suggests that the original purpose of the lottery has also been forgotten, and the lottery is now an empty ritual, one enacted simply because it always has been. When we later learn the significance of the slips of paper, it seems horribly arbitrary that they are simply made by a person the night before.
In preparation for the lottery, Mr. Summers creates lists of the heads of families, heads of households in each family, and members of each household in each family. Mr. Graves properly swears in Mr. Summers as the officiator of the lottery. Some villagers recall that there used to be a recital to accompany the swearing in, complete with a chant by the officiator. Others remembered that the officiator was required to stand in a certain way when he performed the chant, or that he was required to walk among the crowd. A ritual salute had also been used, but now Mr. Summers is only required to address each person as he comes forward to draw from the black box. Mr. Summers is dressed cleanly and seems proper and important as he chats with Mr. Graves and the Martins.
The lottery involves organizing the village by household, which reinforces the importance of family structures here. This structure relies heavily on gender roles for men and women, where men are the heads of households, and women are delegated to a secondary role and considered incapable of assuming responsibility or leadership roles. Even though the setting of this story is a single town, it is generic enough that it might be almost anywhere. In doing this, Jackson essentially makes the story a fable—the ideas explored here are universal.
Just as Mr. Summers stops chanting in order to start the lottery, Mrs. Tessie Hutchinson arrives in the square. She tells Mrs. Delacroix that she “clean forgot what day it was.” She says she realized it was the 27th and came running to the square. She dries her hands on her apron. Mrs. Delacroix reassures her that Mr. Summers and the others are still talking and she hasn’t missed anything.
Tessie Hutchinson’s late arrival establishes her character in a few sentences: she cares little about the lottery and the pomp and circumstance of the ritual. She is different from the other villagers, and thus a potential rebel against the structure of the village and the lottery.
Mrs. Hutchinson looks through the crowd for her husband and children. The crowd parts for her as she joins them at the front, and some point out her arrival to her husband. Mr. Summers cheerfully says that he’d thought they’d have to start without Tessie. Tessie jokes back that Mr. Summers wouldn’t have her leave her dirty dishes in the sink, would he? The crowd laughs.
Mr. Summers says that they had better get started and get this over with so that everyone can go back to work. He asks if anyone is missing and, consulting his list, points out that Clyde Dunbar is absent with a broken leg. He asks who will be drawing on his behalf. His wife steps forward, saying, “wife draws for her husband.” Mr. Summers asks—although he knows the answer, but he poses the question formally—whether or not she has a grown son to draw for her. Mrs. Dunbar says that her son Horace is only sixteen, so she will draw on behalf of her family this year.
Mrs. Dunbar is the only woman to draw in the lottery, and the discussion of her role in the ritual proceedings emphasizes the theme of family structure and gender roles. Women are considered so inferior that even a teenaged son would replace a mother as the “head of household.” The formality surrounding these proceedings shows Mrs. Dunbar’s involvement to be an anomaly for the village.
Mr. Summers asks if the Watson boy is drawing this year. Jack Watson raises his hand and nervously announces that he is drawing for his mother and himself. Other villagers call him a “good fellow” and state that they’re glad to see his mother has “got a man to do it.” Mr. Summers finishes up his questions by asking if Old Man Warner has made it. The old man declares “here” from the crowd.
A hush falls over the crowd as Mr. Summers states that he’ll read the names aloud and the heads of families should come forward and draw a slip of paper from the box. Everyone should hold his paper without opening it until all the slips have been drawn. The crowd is familiar with the ritual, and only half-listens to these directions. Mr. Summers first calls “Adams,” and Steve Adams approaches, draws his slip of paper, and returns to his family, standing a little apart and not looking down at the paper.
The description of the lottery’s formalities builds the reader’s anticipation, as the many seemingly mundane rituals all lead up to a mysterious, ominous outcome. The arc of the story depends on the question of just what will happen to the “winner” of the lottery.
As the reading of names continues, Mrs. Delacroix says to Mrs. Graves that is seems like no time passes between lotteries these days. It seems like they only had the last one a week ago, she continues, even though a year has passed. Mrs. Graves agrees that time flies. Mr. Delacroix is called forward, and Mrs. Delacroix holds her breath. “Dunbar” is called, and as Janey Dunbar walks steadily forward the women say, “go on, Janey,” and “there she goes.”
Snap shots of village life, like the conversation between Mrs. Delacroix and Mrs. Graves, develop the humanity of the characters and makes this seem just like any other small town where everyone knows each other. The small talk juxtaposed against murder is what makes the story so powerful. Janey is taking on a “man’s role,” so she is assumed to need encouragement and support.
Mrs. Graves watches Mr. Graves draw their family’s slip of paper. Throughout the crowd, men are holding slips of paper, nervously playing with them in their hands. “Hutchinson” is called, and Tessie tells her husband to “get up there,” drawing laughs from her neighbors.
The men’s nervousness foreshadows the lottery’s grim outcome. Tessie acts at odds with the pervasive mood, drawing laughs from the crowd. Tessie does not question the lottery at this point, and treats the proceedings lightheartedly—from a position of safety.
In the crowd, Mr. Adams turns to Old Man Warner and says that apparently the north village is considering giving up the lottery. Old Man Warner snorts and dismisses this as foolish. He says that next the young folks will want everyone to live in caves or nobody to work. He references the old saying, “lottery in June, corn be heavy soon.” He reminds Mr. Adams that there has always been a lottery, and that it’s bad enough to see Mr. Summers leading the proceedings while joking with everybody. Mrs. Adams intercedes with the information that some places have already stopped the lotteries. Old Man Warner feels there’s “nothing but trouble in that.”
The conversation between Mr. Adams and Old Man Warner establishes why the lottery is continued in this village, while it has been ended in others: the power of tradition. As the oldest man in the village, Old Man Warner links the lottery to traditional civilization, equating its removal to a breakdown of society and a return to a primitive state. For the villagers, the lottery demonstrates the organization and power of society—that is, a group of people submitting to shared rules in exchange for protection and support. But we see that the lottery also shows the arbitrariness and corruption of many of these social rules.
Mrs. Dunbar says to her oldest son that she wishes everyone would hurry up, and Horace replies that they’re almost through the list of names. Mrs. Dunbar instructs him to run and tell his father once they’re done. When Old Man Warner is called to select his slip of paper, he says that this is his seventy-seventh lottery. When Jack Watson steps forward, he receives several comments from the crowd reminding him to not be nervous and to take his time.
Mrs. Dunbar’s impatience, Old Man Warner’s pride, and Jack Watson’s coming-of-age moment show how integrated the lottery is into this society. No one questions the practice, and they all arrange their lives around it. Jackson shows how difficult it is to give up a tradition when everyone else conforms to it.
Finally, the last man has drawn. Mr. Summers says, “all right, fellows,” and, after a moment of stillness, all the papers are opened. The crowd begins to ask who has it. Some begin to say that it’s Bill Hutchinson. Mrs. Dunbar tells her son to go tell his father who was chosen, and Horace leaves. Bill Hutchinson is quietly staring down at his piece of paper, but suddenly Tessie yells at Mr. Summers that he didn’t give her husband enough time to choose, and it wasn’t fair.
Mr. Summer’s casual language and camaraderie with the villagers contrast with what is at stake. Tessie’s reaction is the first explicit sign of something horrifying at the heart of the lottery. She is as outspoken in her anger as she was in her humor—although rather too late, and it’s assumed she wouldn’t argue if someone else had been chosen. Bill resignedly accepts the power of the tradition.
Mrs. Delacroix tells Tessie to “be a good sport,” and Mrs. Graves reminds her “all of us took the same chance.” Bill Hutchinson tells his wife to “shut up.” Mr. Summers says they’ve got to hurry to get done in time, and he asks Bill if he has any other households in the Hutchinsons’ family. Tessie yells that there’s her daughter Eva and Eva’s husband Don, and says that they should be made to take their chance, too. Mr. Summers reminds her that, as she knows, daughters draw with their husband’s family. “It wasn’t fair,” Tessie says again.
This passage shows the self-serving survival instinct of humans very clearly. Each person who speaks up is protecting his or her own skin, a survival instinct that Jackson shows to be natural to all the villagers, and by extension all humans. Tessie is willing to throw her daughter and son-in-law into harm’s way to have a better chance of saving herself. The other women are relieved to have not been chosen—no one speaks up against the lottery until they themselves are in danger.
Bill Hutchinson regretfully agrees with Mr. Summers, and says that his only other family is “the kids.” Mr. Summers formally asks how many kids there are, and Bill responds that there are three: Bill Jr., Nancy, and little Davy. Mr. Graves takes the slips of paper back and puts five, including the marked slip of paper, in the black box. The others he drops on the ground, where a breeze catches them. Mrs. Hutchinson says that she thinks the ritual should be started over—it wasn’t fair, as Bill didn’t have enough time to choose his slip.
Mr. Summers and Mr. Graves’s calm continuation of the lottery’s ritual shows that they are numb to the cruelty of the proceedings. Tessie’s protests imply that she doesn’t see the choice of the marked slip of paper as fate or some kind of divine decree, but rather as a human failing. Perhaps she sees, too late, that the lottery is only an arbitrary ritual that continues simply because a group of people have unthinkingly decided to maintain it.
Mr. Summers asks if Bill Hutchinson is ready, and, with a glance at his family, Bill nods. Mr. Summers reminds the Hutchinsons that they should keep their slips folded until each person has one. He instructs Mr. Graves to help little Davy. Mr. Graves takes the boy’s hand and walks with him up to the black box. Davy laughs as he reaches into the box. Mr. Summers tells him to take just one paper, and then asks Mr. Graves to hold it for him.
Tessie’s protests have shown the reader that the outcome of the lottery will not be good. Little Davy’s inclusion reinforces the cruelty of the proceedings and the coldness of its participants. Little Davy is put at risk even when he is unable to understand the rituals or to physically follow the instructions.
Nancy Hutchinson is called forward next, and her school friends watch anxiously. Bill Jr. is called, and he slips clumsily, nearly knocking over the box. Tessie gazes around angrily before snatching a slip of paper from the box. Bill selects the final slip. The crowd is silent, except for a girl who is overheard whispering that she hopes it’s not Nancy. Then Old Man Warner says that the lottery isn’t the way it used to be, and that people have changed.
Even a dystopian society like this one doesn’t exclude other aspects of human nature like youth, popularity, friendship, and selfishness. Nancy’s behavior resembles that of many popular teen girls—again emphasizing the universal nature of Jackson’s story. We get the sense that Old Man Warner is perpetually displeased with any kind of change to tradition—even though the omniscient narrator tells us that the “tradition” Warner is used to is very different from the original lottery.
Mr. Summers instructs the Hutchinsons to open the papers. Mr. Graves opens little Davy’s and holds it up, and the crowd sighs when it is clearly blank. Nancy and Bill Jr. open theirs together and both laugh happily, as they hold up the blank slips above their heads. Mr. Summers looks at Bill, who unfolds his paper to show that it is blank. “Tessie,” Mr. Summers says. Bill walks over to his wife and forces the slip of paper from her hand. It is the marked slip of paper with the pencil dot Mr. Summers made the night before.
The inhumanity of the villagers, which has been developed by repeated exposure to the lottery and the power of adhering to tradition, still has some arbitrary limits—they are at least relieved that a young child isn’t the one chosen. They show no remorse for Tessie, however, no matter how well-liked she might be. Even Tessie’s own children are happy to have been spared, and relieved despite their mother’s fate. Jackson builds the sense of looming horror as the story approaches its close.
Mr. Summers tells the crowd, “let’s finish quickly.” The villagers have forgotten several aspects of the lottery’s original ritual, but they remember to use stones for performing the final act. There are stones in the boys’ piles and some others on the ground. Mrs. Delacroix selects a large stone she can barely lift. “Hurry up,” she says to Mrs. Dunbar beside her. Mrs. Dunbar gasps for breath and says that she can’t run. Go ahead, she urges, “I’ll catch up.”
Mrs. Dunbar already sent her son away, perhaps to spare him having to participate in murder this year, and now she herself seems to try and avoid taking part in the lottery as well. The line about the stones makes an important point—most of the external trappings of the lottery have been lost or forgotten, but the terrible act at its heart remains. There is no real religious or practical justification for the lottery anymore—it’s just a primitive murder for the sake of tradition. The use of stones also connects the ritual to Biblical punishments of “stoning” people for various sins, which then brings up the idea of the lottery’s victim as a sacrifice. The idea behind most primitive human sacrifices was that something (or someone) must die in order for the crops to grow that year. This village has been established as a farming community, so it seems likely that this was the origin of the lottery. The horrifying part of the story is that the murderous tradition continues even in a seemingly modern, “normal” society.
The children pick up stones, and Davy Hutchinson is handed a few pebbles. Tessie Hutchinson holds out her arms desperately, saying, “it isn’t fair,” as the crowd advances toward her. A flying stone hits her on the side of her head. Old Man Warner urges everyone forward, and Steve Adams and Mrs. Graves are at the front of the crowd. “It isn’t fair, it isn’t right,” Tessie screams, and then the villagers overwhelm her.
By having children (even Tessie’s own son) involved in stoning Tessie, Jackson aims to show that cruelty and violence are primitive and inherent aspects of human nature—not something taught by society. Tessie’s attempts to protest until the end show the futility of a single voice standing up against the power of tradition and a majority afraid of nonconformists. Jackson ends her story with the revelation of what actually happens as a result of the lottery, and so closes on a note of both surprise and horror. The seemingly innocuous, ordinary villagers suddenly turn violent and bestial, forming a mob that kills one of their own with the most primitive weapons possible—and then seemingly going home to supper. | <urn:uuid:c309385a-85f6-4894-abac-76e0764c6367> | CC-MAIN-2019-47 | https://www.litcharts.com/lit/the-lottery/summary-and-analysis | s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496671260.30/warc/CC-MAIN-20191122115908-20191122143908-00138.warc.gz | en | 0.972962 | 4,699 | 2.953125 | 3 |
Here are some of the most frequently asked questions and answers about the Santa Susana Field Laboratory (SSFL) cleanup, with a focus on questions regarding DTSC and its newly released draft Environmental Impact Statement (EIR) for the cleanup. Please also see our resources page, which has links to other organizational, community, and media stories about the cleanup. Click here to learn more about Boeing’s campaign to greenwash the cleanup.
What is SSFL and where is it located?
SSFL was established in the late 1940s by the Atomic Energy Commission as a testing facility (a “field laboratory”) for nuclear reactor development work too dangerous to perform close to a populated area. Over the following decades, however, the population mushroomed around the area. Currently, over half a million people reside within ten miles of the site.
The 2,850 acre SSFL site sits at an elevation of over 2,000 feet in the hills between Ventura and Los Angeles counties. Cities near SSFL include Simi Valley, Chatsworth, Canoga Park, Woodland Hills, West Hills, Westlake Village, Agoura Hills, Oak Park, Calabasas, and Thousand Oaks. (Click photo on right to enlarge.)
How did SSFL get contaminated?
SSFL was home to ten nuclear reactors, half a dozen atomic critical facilities, a plutonium fuel fabrication facility, and a “hot lab” for decladding and disassembling highly irradiated nuclear fuel shipped in from around the Atomic Energy Commission/Department of Energy (DOE) national nuclear complex as an initial step for reprocessing.
Numerous accidents, releases, and spills resulted. In 1959, the SRE reactor experienced a partial meltdown, in which a third of the fuel experienced melting. In 1964, a SNAP reactor suffered damage to 80% of its fuel. In 1968-9 another reactor suffered similar fuel damage. (See map to the left of SSFL’s nuclear area, click to enlarge.)
Several radioactive fires occurred in and at the Hot Lab. For decades, radioactively and chemically contaminated reactor components and other toxic wastes were routinely burned in open pits (the sodium “burn pit”) with airborne releases of contaminants as well as surface runoff of the pollutants offsite. In addition, tens of thousands of rocket tests were conducted at SSFL, resulting in significant chemical contamination.
What contaminants are at SSFL?
SSFL is contaminated with dangerous radionuclides such as cesium-137, strontium-90, plutonium-239 and tritium. In 2012, the U.S. EPA conducted an extensive radiological survey of SSFL and hundreds of soil samples came back positive for elevated levels of radioactivity. SSFL is also contaminated with highly toxic chemicals such as perchlorate, dioxins, PCBs, heavy metals, and volatile and semi-volatile organic compounds. In addition, hundreds of thousands of gallons of the extraordinarily toxic trichloroethylene (TCE) were used to flush out rocket test engines and then allowed to seep into the soil and groundwater.
The contamination at SSFL is widespread, extending through all four operational areas and into the buffer zones and extending offsite. The radioactive and hazardous chemicals represent a witches’ brew of some of the most toxic substances on earth. Many of these materials are regulated at a few parts per billion (ppb), yet there are very large quantities present in the soil at SSFL. Perchlorate, an exceedingly toxic component of solid rocket boosters, is not permissible in drinking water, for example, at levels greater than 6 ppb. SSFL disposed of literally tons of perchlorate in open-air burnpits, polluting soil, groundwater and surface water. TCE is so dangerous that permissible levels in water are 5 parts per billion; 500,000 gallons are estimated to be polluting the SSFL soil column and aquifer.
What are the health impacts of SSFL contaminants?
Exposure to SSFL contaminants can cause cancers and leukemias, developmental disorders, genetic disorders, neurological disorders, immune system disorders, and more. A study by the UCLA School of Public Health found significantly elevated cancer death rates among both the nuclear and rocket workers from exposures to these toxic materials. Another study by UCLA found that the rocket testing had led to offsite exposures to hazardous chemicals by the neighboring population at levels exceeding EPA levels of concern. A study performed for the federal Agency for Toxic Substances and Disease Registry found the incidence of key cancers were 60% higher in the offsite population near the site compared to further away.
In addition, studies by cancer registries found elevated rates of bladder cancer associated with proximity to SSFL. A cluster of retinoblastoma cases, a rare eye cancer affecting young children, was identified within an area downwind of the site. And the Public Health Institute’s 2012 California Breast Cancer Mapping Project found that the rate of breast cancer is higher in Thousand Oaks, Simi Valley, Oak Park and Moorpark than in almost any other place in the state. Most recently, rare pediatric cancers have been identified by families who live near SSFL, causing tremendous concern.
It is important to note that the National Academy of Sciences’ Report on the Biological Effects of Ionizing Radiation found that there is no safe level of exposure to radiation. Children are more susceptible to radiation-induced cancer than adults, and girls are more radio-sensitive than boys. Similar increased risk of cancer for children and females has been found for many chemical carcinogens as well.
Why is it important that SSFL be fully cleaned up?
Contamination does not stay put on SSFL’s hilltop location. There have been over a hundred exceedances of pollution standards in runoff from the site reported to the LA Regional Water Quality Control Board, resulting in more than a million dollars in fines. A TCE groundwater plume extends offsite. Perchlorate, a component of solid rocket fuels that disrupts human development and which contaminates much of SSFL, has been found in numerous wells in Simi Valley and in soil and sediment in Dayton Canyon. Strontium-90 was found in Runkle Canyon. Other contamination has been found at Brandeis-Bardin and at Sage Ranch, where hundreds of cubic yards of toxic soil contaminated with antimony and asbestos had to be removed.
If the contamination is not cleaned up from its source at SSFL, it will continue to migrate offsite especially with wind and rain, which can carry it for miles. Future visitors to the site could be at risk too. Anyone who has ever hiked in the hills and mountains of Southern California knows that trails get dusty, and people can easily be in contact with the dust or even ingest it. Who would possibly want to bring children to a former nuclear meltdown site that hasn’t been fully cleaned up? Boeing’s risk assessment reports show high ecological risks too. In order to protect people and the environment, the contamination must be fully cleaned up.
Who is responsible for the SSFL cleanup?
NASA owns and is responsible for cleaning up part of Area I and all of Area II, where the rocket engine testing took place.
The DOE is responsible for cleaning up Area IV, which it leases from Boeing. Area IV is where most of the nuclear activities occurred. DOE is also responsible for cleaning up the Northern Buffer Zone, which became part of SSFL in 1997 as part of a settlement between Boeing and the neighboring Brandeis-Bardin Institute, which sued over SSFL contamination polluting its property.
Boeing is responsible for cleaning up the rest of the 2,850 acre site.
Who is in charge of the SSFL cleanup?
The California Department of Toxic Substances Control (DTSC), which is part of the California Environmental Protection Agency (CalEPA), has regulatory oversight for the SSFL cleanup. (CalEPA and DTSC leadership pictured right, click to enlarge.)
SSFL is in fact so contaminated that the U.S. EPA had recommended it for consideration as a federal Superfund site – a status granted to the most polluted sites in the country. However, because the State Superfund provided a better cleanup process, the state declined the dual listing. Community members were pleased because at the time, the DTSC was standing up to Boeing and repeatedly stated it would require a full health protective cleanup for SSFL.
But shortly after historic cleanup agreements were signed between DTSC, DOE, and NASA in 2010, the Brown Administration came into power. Boeing lobbyists included former senior aides to Brown, and immediately upon taking office, started to undermine the cleanup commitments.
- New DTSC leadership was brought in that was decidedly more friendly to Boeing.
- DTSC sabotaged its own defense in a Boeing lawsuit to overturn SB 990, a law passed by the State of California to help assure full cleanup. A lawyer for the state waived its right to contest any Boeing statement of purported material facts, weeks before they were even put forwarded, and thus without even having seen them. Many of these statements were outlandish and clearly false, but the state had given up the right to challenge them and Boeing prevailed in overturning the law.
- DTSC refused to continue the SSFL Work Group, the longtime public participation vehicle for the cleanup. Instead, over the objections of community members and elected officials, it approved a “Community Advisory Group” (SSFL CAG) led by people with ties to Boeing and the other responsible parties. (This was part of Boeing’s greenwashing campaign, which you can read about here.)
- DTSC, after pressure from a Boeing lobbyist, ignored the position of the agency leadership that radiologically contaminated materials from SSFL could not be disposed of in sites not properly licensed to receive it, and repeatedly allowed radioactive waste to be sent out for recycling and disposal in sites not designed for it. Several public interest groups sued, and a Superior Court judge issued a temporary injunction and denied Boeing’s request for summary judgment.
- In May 2017, DTSC denied that harmful SSFL contamination had migrated to the Brandeis-Bardin camp or anywhere, despite significant evidence to the contrary, essentially asserting absurdly that there was a magical glass wall around SSFL that blocked wind and rain from moving contamination offsite.
- DTSC is now backsliding on its own SSFL cleanup commitments, allowing DOE and NASA to propose options that violate their cleanup agreement and breaking its own commitments to hold Boeing to a comparable cleanup level.
SSFL is not the only community in California that DTSC is failing. In 2013, the report “Golden Wasteland” was published by Consumer Watchdog, enumerating the many different ways that DTSC has allowed polluters throughout the state to contaminate and not clean up. A year later Consumer Watchdog published “Inside Job” which specifically examines Boeing’s influence on DTSC.
What cleanup agreements are in place?
In 2010, the Department of Energy and NASA signed historic agreements called Administrative Orders on Consent (AOCs) with the California Department of Toxic Substances Control that committed them to clean up their operational areas to background levels of contamination – meaning return the land to the way it was before they polluted it. The AOC agreements specified that the cleanup was to be completed by 2017. The agreement was proposed by the then Dept. of Energy’s Nobel Prize winning Secretary, Dr. Steven Chu, and his Assistant Secretary of Environmental Management, Dr. Ines Triay. The community was thrilled. Finally, after decades of working for full cleanup, we had an agreement that would ensure that all detectable contamination would be cleaned up. (See photo, right, of community members raising their glasses in celebration.)
Boeing refused to sign the AOC cleanup agreements, and filed suit to overturn the state cleanup law for SSFL, SB 990. DTSC, however, said that even in the absence of SB990 or an AOC, its normal procedures require it to defer to local governments’ land use plans and zoning decisions which for SSFL allow agricultural and rural residential uses, which DTSC said would require a cleanup to background, equivalent to the AOC requirements.
Boeing resisted these standards, and said that it would instead cleanup to a weaker standard, suburban residential standard. It said that although it intended the land to be open space, it would clean the site up so it would be clean enough to live on, as a way of providing assurance to the people who lived nearby. (You might hear DTSC or Boeing mention that they are complying with “the 2007 consent order.” It is important to know that the 2007 consent order does NOT include an agreed upon cleanup standard, just the following of the routine procedures that in 2010 DTSC said required cleanup to the agricultural, background standard because of the County land use designations.)
In August 2017, Boeing broke its word, and abandoned its promise to suburban residential standard, saying it should only have to clean up to recreational standards – which amount to almost no cleanup at all. Recreational standards are based on people only being on the site for short amounts of time. But people who live near SSFL do not live in open space, and if high amounts of contamination remain on site, they will continue to be at risk of exposure to SSFL contamination through offsite migration.
What is an Environmental Impact Report (EIR)?
An Environmental Impact Report (EIR) is a report that must be prepared in California prior to any project that impacts the environment, per the California Environmental Quality Act (CEQA.) Projects undertaken by the federal government are required to produce a similar report called an Environmental Impact Statement (EIS) per the National Environmental Policy Act.)
The purpose of an EIR or an EIS is to examine the impact of proposed projects and identify methods to minimize those effects. The process includes a public scoping period and a draft EIR which are both open for public review, a final EIR, and a Record of Decision. (Pictured right, Rocketdyne Cleanup Coalition member Holly Huff testifies at DTSC’s 2014 public scoping meeting for its EIR.)
NASA has completed its EIS for the SSFL cleanup in 2014, and DOE completed its Draft EIS in March 2017. (You can read about problems with DOE’s Draft EIS here.) Neither were actually required to do so, because NEPA is triggered by a discretionary action and NASA and DOE had signed binding cleanup agreements with the state of California. Both proposed alternatives that violated the agreements. The cleanup decisions rest with their regulator, DTSC, and are bound by the AOCS; NASA and DOE have tried to usurp that authority by doing their own EIS’s for cleanups far less protective than they had promised.
On September 7, 2017, DTSC released its draft EIR which covers the entire SSFL property, including all of the areas that NASA, DOE, and Boeing are responsible for cleaning up.
What’s wrong with the EIR for the SSFL cleanup?
DTSC’s draft EIR is essentially a breach of the commitments DTSC had made to require a full cleanup. It includes proposals that would violate the AOC cleanup agreements it signed with DOE and NASA. For Boeing’s part of SSFL, the EIR blocks from even being considered cleanup to the standards DTSC hand long promised. Instead, it says the very best that would be done would be cleanup standards nearly thirty times less protective than DTSC’s own official residential cleanup levels, and far less than the promised cleanup to agricultural/rural residential and background standards.
Amazingly, the EIR has a thousand pages of all the supposed negative impacts of doing a cleanup, but nothing on the negative impacts of the contamination and the health and environmental harm that would occur if the pollution isn’t cleaned up. By omitting cancer risk information and hyping potential negative impacts of the cleanup, the EIR presents a biased and inaccurate assessment of the SSFL cleanup. It is essentially a PR attack on the cleanup commitments DTSC itself had made.
If that sounds like something Boeing would produce, it’s because in large part it did – the EIR was written by Boeing’s consultant, hardly an honest broker! Indeed, DTSC allowed Boeing and the other responsible parties, DOE and NASA, to write and edit significant parts of what is supposed to be DTSC’s independent environmental review.
Here are more key problems:
- DTSC’s EIR contemplates leaving large amounts of contamination in place, which it refers to as “natural attenuation.” This means just leaving the toxic materials and hoping they lessen over long periods of time. It also violates the AOC cleanup agreements, which prohibit even considering leaving contamination in place.
- DTSC’s EIR says that it intends to exempt from cleanup unspecified but apparently huge amounts of contamination for purported biological and cultural reasons, which appear to be far beyond the narrow exemptions allowed under the cleanup agreements. The real threat to the ecology – which is not examined at all in the EIR – is the radioactive and chemical contamination, which needs to be cleaned up to protect ecological features as well as people.
- Though DTSC in 2010 promised a cleanup for Boeing’s property that would be equivalent to that required for DOE and NASA, the EIR now says Boeing will be allowed to do a less protective than that in the DOE and NASA agreements. Furthermore, it excludes from consideration a cleanup to background or to the rural residential standards previously promised.
- The EIR even excludes consideration of DTSC’s own official suburban residential standard, and puts forward instead one that is nearly thirty times less protective. In other words, the very best DTSC is now considering would leave contamination concentrations nearly thirty times higher than DTSC’s own official goals for what is safe for suburban residences, and far higher than even that compared to the cleanup levels DTSC has long promised. Furthermore, the EIR indicates that Boeing will be allowed to also leave large amounts of contamination in place, for similar “natural attenuation” and unspecified biological and cultural exemptions.
- The EIR also fails to disclose what DTSC is actually proposing to not clean up. It is absurd to release a report that gives no real information about what the proposed cleanup amounts will be. DTSC hides the ball—it keeps hidden how much contaminated soil it contemplates not cleaning up, saying that it will disclose that only after the EIR is finalized. This flies in the face of environmental law, which is to disclose and analyze in the EIR, not shielding from public view its intentions until after the EIR is over. And by only giving information about supposed impacts to the environment from cleanup and excluding information on risks to health and the environment from the contamination itself and from not cleaning it all up as promised, the DEIR misrepresents all risks.
- Along with the draft EIR, DTSC released a Program Management Plan (PMP) for review. Buried in the PMP is that outrageous statement that DTSC now projects that the cleanup will not be completed until 2034! That is 17 years after the AOC requirement of 2017 and nearly 90 years from when the contamination first began being created. It is absolutely unacceptable.
Will, as Boeing claims, a full cleanup cause harm?
Boeing’s new propaganda website–outrageously called “Protect Santa Susana,” when it is of course Boeing and its predecessors that re responsible for the toxic damage to Santa Suana–claims that DTSC’s draft EIR only considers “excessive” cleanup alternatives that “would unnecessarily require the local community to live with decades of transportation and air quality impacts, and destroy critical wildlife habitat and disturb Native American artifacts in the process.”
This is simply not true. First, we must point out that Boeing wasn’t concerned about trucks when SSFL was trucking in large quantities of highly radioactive reactor fuel and other toxic materials from around the country for decades. And it would have been nice if Boeing and the other responsible parties had been concerned about the environment during the over five decades that they wantonly and recklessly polluted the site with radiological and chemical contamination by violating fundamental norms of environmental protection. But now that its time to clean up their toxic mess, now they have magically turned into environmentalists? Give us a break.
The majority of the areas in SSFL that need to be cleaned are areas already disturbed. These are areas where nuclear reactors, buildings that handled radioactive materials, toxic burn pits, and rocket test stands are located. The contamination occurred primarily where topsoil had already been scraped away, where structures were constructed, and huge amounts of pollutants were just dumped in the soil.
If there are any truly sensitive areas, great care is taken not to disturb them during cleanup. (See photo above from 2010, when NASA remediated contaminated soil near a former incinerator and ash pile at SSFL. Some cleanup occurred around oak trees. The contractor noted, “We got at the soil we needed to with very minimal disturbance to the surrounding environment. No oak tree roots were exposed or damaged during the soil removal.”)
In addition, the AOC cleanup agreements provide protection for officially recognized Native American artifacts, as well as endangered plants and animals if a US Fish & Wildlife Biological Opinion directs particular protection. As they said in a prior Biological Opinion, cleaning up the contamination is critical to protect biological features. Thus the greatest risk is if the site isn’t cleaned up and the critters are continued to be exposed to radioactive and toxic chemical pollution.
The responsible parties have been hyping truck traffic for years now in a blatant attempt to manipulate the community into opposing the cleanup. In response, the community has urged them to consider alternative methods such as conveyor to rail, and alternative routes. DOE’s EIS refused to do either. DTSC’s EIR fails to take an honest look at these alternatives.
By far the greatest risk to the community and environment is if the contamination doesn’t get cleaned up, posing a risk both onsite and from contamination migrating offsite. That is why it must all be cleaned up—AS PROMISED.
How can I help ensure SSFL is fully cleaned up?
Despite Boeing’s slick greenwashing campaign and well-heeled lobbyists, and despite DTSC’s tendency to act like its job is to protect the polluter, instead of the public, communities who live near SSFL have a powerful voice and a critical role to play in making sure that SSFL is fully cleaned up.
Though the December 14 deadline to submit comments on DTSC’s draft PEIR for the SSFL cleanup has passed, you can still take action: | <urn:uuid:93d45b40-5fb0-4027-8d47-badd22740cda> | CC-MAIN-2019-47 | https://www.protectsantasusanafromboeing.com/faq/ | s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496671106.83/warc/CC-MAIN-20191122014756-20191122042756-00377.warc.gz | en | 0.962897 | 4,768 | 2.96875 | 3 |
The Ordnungspolizei (German: [ˈʔɔɐ̯dnʊŋspoliˌt͡saɪ], Order Police), abbreviated Orpo, were the uniformed police force in Nazi Germany between 1936 and 1945. The Orpo organisation was absorbed into the Nazi monopoly on power after regional police jurisdiction was removed in favour of the central Nazi government (verreichlichung of the police). The Orpo was under the administration of the Interior Ministry, but led by members of the Schutzstaffel (SS) until the end of World War II. Owing to their green uniforms, Orpo were also referred to as Grüne Polizei (green police). The force was first established as a centralised organisation uniting the municipal, city, and rural uniformed police that had been organised on a state-by-state basis.
|Common name||Grüne Polizei|
|Formed||26 June, 1936|
|Legal personality||Governmental: Government agency|
|Legal jurisdiction|| Nazi Germany|
|Headquarters||Berlin NW 7, Unter den Linden 72/74|
|Elected officers responsible|
|Parent agency||Reich Interior Ministry|
The Ordnungspolizei encompassed virtually all of Nazi Germany's law-enforcement and emergency response organisations, including fire brigades, coast guard, and civil defence. In the prewar period, Reichsführer-SS Heinrich Himmler and Kurt Daluege, chief of the Order Police, cooperated in transforming the police force of the Weimar Republic into militarised formations ready to serve the regime's aims of conquest and racial annihilation. Police troops were first formed into battalion-sized formations for the invasion of Poland, where they were deployed for security and policing purposes, also taking part in executions and mass deportations. During World War II, the force had the task of policing the civilian population of the conquered and colonised countries beginning in spring 1940. Orpo's activities escalated to genocide with the invasion of the Soviet Union, Operation Barbarossa. Twenty-three police battalions, formed into independent regiments or attached to Wehrmacht security divisions and Einsatzgruppen, perpetrated mass murder in the Holocaust and were responsible for widespread crimes against humanity and genocide targeting the civilian population.
Reichsführer-SS Heinrich Himmler was named Chief of German Police in the Interior Ministry on 17 June 1936 after Hitler announced a decree which was to "unify the control of police duties in the Reich". Traditionally, law enforcement in Germany had been a state and local matter. In this role, Himmler was nominally subordinate to Interior Minister Wilhelm Frick. However, the decree effectively subordinated the police to the SS. Himmler gained authority as all of Germany's uniformed law enforcement agencies were amalgamated into the new Ordnungspolizei, whose main office became populated by officers of the SS.
The police were divided into the Ordnungspolizei (Orpo or regular police) and the Sicherheitspolizei (SiPo or security police), which had been established in June 1936. The Orpo assumed duties of regular uniformed law enforcement while the SiPo consisted of the secret state police (Geheime Staatspolizei or Gestapo) and criminal investigation police (Kriminalpolizei or Kripo). The Kriminalpolizei was a corps of professional detectives involved in fighting crime and the task of the Gestapo was combating espionage and political dissent. On 27 September 1939, the SS security service, the Sicherheitsdienst (SD) and the SiPo were folded into the Reich Main Security Office (Reichssicherheitshauptamt or RSHA). The RSHA symbolised the close connection between the SS (a party organisation) and the police (a state organisation).
The Order Police played a central role in carrying out the Holocaust. By "both career professionals and reservists, in both battalion formations and precinct service" (Einzeldienst) through providing men for the tasks involved.
The German Order Police had grown to 244,500 men by mid-1940. The Orpo was under the overall control of Reichsführer-SS Himmler as Chief of the German Police in the Ministry of the Interior. It was initially commanded by SS-Oberstgruppenführer und Generaloberst der Polizei Kurt Daluege. In May 1943, Daluege had a massive heart attack and was removed from duty. He was replaced by SS-Obergruppenführer und General der Waffen-SS und der Polizei Alfred Wünnenberg, who served until the end of the war. By 1941, the Orpo had been divided into the following offices covering every aspect of German law enforcement.
The central command office known as the Ordnungspolizei Hauptamt was located in Berlin. From 1943 it was considered a full SS-Headquarters command. The Orpo main office consisted of Command Department (Kommandoamt), responsible for finance, personnel and medical; Administrative (Verwaltung) charged with pay, pensions and permits; Economic (Wirtschaftsverwaltungsamt); Technical Emergency Service (Technische Nothilfe); Fire Brigades Bureau (Feuerwehren); Colonial Police (Kolonialpolizei); and SS and Police Technical Training Academy (Technische SS-und Polizeiakademie).
Branches of policeEdit
- Administration (Verwaltungspolizei) was the administrative branch of the Orpo and had overall command authority for all Orpo police stations. The Verwaltungspolizei also was the central office for record keeping and was the command authority for civilian law enforcement groups, which included the Gesundheitspolizei (health police), Gewerbepolizei (commercial or trade police), and the Baupolizei (building police). In the main towns, Verwaltungspolizei, Schutzpolizei and Kriminalpolizei would be organised into a police administration known as the Polizeipräsidium or Polizeidirektion, which had authority over these police forces in the urban district.
- State protection police (Schutzpolizei des Reiches), state uniformed police in cities and most large towns, which included police-station duties (Revierdienst) and barracked police units for riots and public safety (Kasernierte Polizei).
- Municipal protection police (Schutzpolizei der Gemeinden), municipal uniformed police in smaller and some large towns. Although fully integrated into the Ordnungspolizei-system, its police officers were municipal civil servants. The civilian law enforcement in towns with a municipal protection police was not done by the Verwaltungspolizei, but by municipal civil servants. Until 1943 they also had municipal criminal investigation departments, but that year, all such departments with more than 10 detectives, were integrated into the Kripo.
- Gendarmerie (state rural police) were tasked with frontier law enforcement to include small communities, rural districts, and mountainous terrain. With the development of a network of motorways or Autobahnen, motorised gendarmerie companies were set up in 1937 to secure the traffic.
- Traffic police (Verkehrspolizei) was the traffic-law enforcement agency and road safety administration of Germany. The organisation patrolled Germany's roads (other than motorways which were controlled by Motorized Gendarmerie) and responded to major accidents. The Verkehrspolizei was also the primary escort service for high Nazi leaders who travelled great distances by automobile.
- Water police (Wasserschutzpolizei) was the equivalent of the coast guard and river police. Tasked with the safety and security of Germany's rivers, harbours, and inland waterways, the group also had authority over the SS-Hafensicherungstruppen ("harbour security troops") which were Allgemeine-SS units assigned as port security personnel.
- Fire police (Feuerschutzpolizei) consisted of all professional fire departments under a national command structure.
- The Orpo Hauptamt also had authority over the Freiwillige Feuerwehren, the local volunteer civilian fire brigades. At the height of the Second World War, in response to heavy bombing of Germany's cities, the combined Feuerschutzpolizei and Freiwillige Feuerwehren numbered nearly two million members.
- Air raid protection police (Luftschutzpolizei) was the civil protection service in charge of air raid defence and rescue victims of bombings in connection with the Technische Nothilfe (Technical Emergency Service) and the Feuerschutzpolizei (professional fire departments). Created as the Security and Assistance Service (Sicherheits und Hilfsdienst) in 1935, it was renamed Luftschutzpolizei in April 1942. The air raid network was supported by the Reichsluftschutzbund (Reich Association for Air Raid Precautions) an organisation controlled from 1935 by the Air Ministry under Hermann Göring. The RLB set up an organisation of air raid wardens who were responsible for the safety of a building or a group of houses.
- Technical Emergency Corps (Technische Nothilfe; TeNo) was a corps of engineers, technicians and specialists in construction work. The TeNo was created in 1919 to keep the public utilities and essential industries running during the wave of strikes. From 1937, the TeNo became a technical auxiliary corps of the police and was absorbed into Orpo Hauptamt. By 1943, the TeNo had over 100,000 members.
- Volunteer Fire Department (Feuerwehren), volunteer fire departments, conscripted fire departments and industrial fire departments were auxiliary police subordinate to the Ordnungspolizei.
- Radio protection (Funkschutz) was made up of SS and Orpo security personnel assigned to protect German broadcasting stations from attack and sabotage. The Funkschutz was also the primary investigating service which detected illegal reception of foreign radio broadcasts.
- Urban and rural emergency police (Stadt- und Landwacht) created in 1942 as a part-time police reserve. Abolished in 1945 with the creation of the Volksturm.
- Auxiliary Police (Schutzmannschaft) was the collaborationist auxiliary police in occupied Eastern Europe.
- Reichsbahnfandungsdienst, the "Railway criminal investigative service", subordinate to the Deutsche Reichsbahn.
- Bahnschutzpolizei, subordinate to the Deutsche Reichsbahn.
- SS-Bahnschutz replaced the Bahnschutzpolizei within the Reich territory from 1944.
- Postal protection (Postschutz) comprised roughly 45,000 members and was tasked with the security of Germany's Reichspost, which was responsible not only for the mail but other communications media such as the telephone and telegraph systems.
- SS-Postschutz; created with the transfer of the Postschutz from the Reichministry of Post to the Allgemeine-SS in 1942.
- Forstschutzpolizei, under the Reichsforstamt.
- Jagdpolizei (Hunting Police), under the Reichsforstamt. It was largely exercised through the Deutsche Jägerschaft.
- Zollpolizei (Customs Police), exercised through the Zollgrenzschutz and the Customs Authorities under the Ministry of Finance.
- Flurschutzpolizei (Agricultural Field Police), under the Ministry of Agriculture.
- Factory protection police (Werkschutzpolizei) were the security guards of Nazi Germany. Its personnel were civilians employed by industrial enterprises, and typically were issued paramilitary uniforms. They were ultimately subordinated to the Ministry of Aviation.
- Deichpolizei (Dam and Dyke Police), subordinated to the Ministry of Economy.
Invasion of PolandEdit
Between 1939 and 1945, the Ordnungspolizei maintained military formations, trained and outfitted by the main police offices within Germany. Specific duties varied widely from unit to unit and from one year to another. Generally, the Order Police were not directly involved in frontline combat, except for Ardennes in May 1940, and the Siege of Leningrad in 1941. The first 17 battalion formations (from 1943 renamed SS-Polizei-Bataillone) were deployed by Orpo in September 1939 along with the Wehrmacht army in the invasion of Poland. The battalions guarded Polish prisoners of war behind the German lines, and carried out expulsion of Poles from Reichsgaue under the banner of Lebensraum. They also committed atrocities against both the Catholic and the Jewish populations as part of those "resettlement actions". After hostilities had ceased, the battalions - such as Reserve Police Battalion 101 - took up the role of security forces, patrolling the perimeters of the Jewish ghettos in German-occupied Poland (the internal ghetto security issues were managed by the SS, SD, and the Criminal Police, in conjunction with the Jewish ghetto administration).
Each battalion consisted of approximately 500 men armed with light infantry weapons. In the east, each company also had a heavy machine-gun detachment. Administratively, the Police Battalions remained under the Chief of Police Kurt Daluege, but operationally they were under the authority of regional SS and Police Leaders (SS- und Polizeiführer), who reported up a separate chain of command directly to Reichsführer-SS Heinrich Himmler. The battalions were used for various auxiliary duties, including the so-called anti-partisan operations, support of combat troops, and construction of defence works (i.e. Atlantic Wall). Some of them were focused on traditional security roles as an occupying force, while others were directly involved in actions designed to inflict terror and in the ensuing Holocaust. While they were similar to Waffen-SS, they were not part of the thirty-eight Waffen-SS divisions, and should not be confused with them, including the national 4th SS Polizei Panzergrenadier Division. The battalions were originally numbered in series from 1 to 325, but in February 1943 were renamed and renumbered from 1 to about 37, to distinguish them from the Schutzmannschaft auxiliary battalions recruited from local population in German-occupied areas.
Invasion of the Soviet UnionEdit
The Order Police battalions, operating both independently and in conjunction with the Einsatzgruppen, became an integral part of the Final Solution in the two years following the attack on the Soviet Union on 22 June 1941, Operation Barbarossa. The first mass killing of 3,000 Jews by Police Battalion 309 occurred in occupied Białystok on 12 July 1941. Police battalions were part of the first and second wave of killings in 1941–42 in the territories of Poland annexed by the Soviet Union and also during the killing operations within the-1939 borders of the USSR, whether as part of Order Police regiments, or as separate units reporting directly to the local SS and Police Leaders. They included the Reserve Police Battalion 101 from Hamburg, Battalion 133 of the Nürnberg Order Police, Police Battalions 45, 309 from Koln, and 316 from Bottrop-Oberhausen. Their murder operations bore the brunt of the Holocaust by bullet on the Eastern Front. In the immediate aftermath of World War II, this latter role was obscured both by the lack of court evidence and by deliberate obfuscation, while most of the focus was on the better-known Einsatzgruppen ("Operational groups") who reported to the Reichssicherheitshauptamt (RSHA) under Reinhard Heydrich.
Order Police battalions involved in direct killing operations were responsible for at least 1 million deaths. Starting in 1941 the Battalions and local Order Police units helped to transport Jews from the ghettos in both Poland and the USSR (and elsewhere in occupied Europe) to the concentration and extermination camps, as well as operations to hunt down and kill Jews outside the ghettos. The Order Police were one of the two primary sources from which the Einsatzgruppen drew personnel in accordance with manpower needs (the other being the Waffen-SS).
In 1942, the majority of the police battalions were re-consolidated into thirty SS and Police Regiments. These formations were intended for garrison security duty, anti-partisan functions, and to support Waffen-SS units on the Eastern Front. Notably, the regular military police of the Wehrmacht (Feldgendarmerie) were separate from the Ordnungspolizei.
Waffen-SS Police DivisionEdit
The primary combat arm of the Ordnungspolizei was the SS Polizei Division the Waffen-SS. The division was formed in October 1939, when thousands of members of the Orpo were drafted and placed together with artillery and signals units transferred from the army. The division consisted of four police regiments composed of Orpo personnel and was typically used to rotate police members into a military situation, so as not to lose police personnel to the general draft of the Wehrmacht or to the full SS divisions of the regular Waffen-SS. Very late in the war several Orpo SS-Police regiments were transferred to the Waffen-SS to form the 35th SS Division.
Orpo and SS relationsEdit
By the start of the Second World War in 1939, the SS had effectively gained complete operational control over the German Police, although outwardly the SS and Police still functioned as separate entities. The Ordnungspolizei maintained its own system of insignia and Orpo ranks as well as distinctive police uniforms. Under an SS directive known as the "Rank Parity Decree", policemen were highly encouraged to join the SS and, for those who did so, a special police insignia known as the SS Membership Runes for Order Police was worn on the breast pocket of the police uniform.
In 1940, standard practice in the German Police was to grant equivalent SS rank to all police generals. Police generals who were members of the SS were referred to simultaneously by both rank titles - for instance, a Generalleutnant in the Police who was also an SS member would be referred to as SS Gruppenführer und Generalleutnant der Polizei. In 1942, SS membership became mandatory for police generals, with SS collar insignia (overlaid on police green backing) worn by all police officers ranked Generalmajor and above.
The distinction between the police and the SS had virtually disappeared by 1943 with the creation of the SS and Police Regiments, which were consolidated from earlier police security battalions. SS officers now routinely commanded police troops and police generals serving in command of military troops were granted equivalent SS rank in the Waffen-SS. In August 1944, when Himmler was appointed Chef des Ersatzheeres (Chief of the Home Army), all police generals automatically were granted Waffen-SS rank because they had authority over the prisoner-of-war camps.
- Burkhardt Müller-Hillebrandt: Das Heer (1933-1945), Vol. III Der Zweifrontenkrieg, Mittler, Frankfurt am Main 1969, p. 322
- Struan Robertson. "The 1936 "Verreichlichung" of the Police". Hamburg Police Battalions during the Second World War. Archived from the original (Internet Archive) on February 22, 2008. Retrieved 2009-09-24.
- Showalter 2005, p. xiii.
- Browning, Christopher R. (1998). Arrival in Poland (PDF). Ordinary Men: Reserve Police Battalion 101 and the Final Solution in Poland. Archived from the original on 19 October 2013. Retrieved 27 June 2014 – via Internet Archive, direct download 7.91 MB. CS1 maint: BOT: original-url status unknown (link)
- Williams 2001, p. 77.
- Weale 2012, pp. 140–144.
- Zentner & Bedürftig 1991, p. 783.
- Browning, Nazi Policy, p. 143.
- McKale 2011, p. 104.
- Williamson, Gordon (2012). "Structure". World War II German Police Units. Osprey / Bloomsbury Publishing. pp. 6–8. ISBN 1780963408..
- McNab 2013, pp. 60, 61.
- Davis, Brian L. (2007). The German Home Front 1939-1945. Oxford, p. 9.
- Goldhagen 1997, p. 204.
- Browning 1998, p. 38.
- Breitman, Richard, Official Secrets, Hill and Wang: NY, 1998, p 5 & Goldhagen, Daniel J., Hitler's Willing Executioners: Ordinary Germans and the Holocaust, Random House: USA, 1996, p 186.
- Williamson, Gordon (2004). The SS: Hitler's Instrument of Terror. Zenith Imprint. p. 101. ISBN 0-7603-1933-2.
- Browning 1992, p. 5 (22/298 in PDF).
- Browning 1992, p. 38.
- Rossino, Alexander B., Hitler Strikes Poland, University of Kansas Press: Lawrence, Kansas, 2003, pp 69–72, en passim.
- Hillberg, p 81.
- Browning 1992, p. 45 (72 in PDF).
- Hillberg, pp 71–73.
- United States War Department (1995) [March 1945]. Handbook on German Military Forces. Louisiana State University Press. pp. 202–203. ISBN 0-8071-2011-1.
- Browning 1998, pp. 11-12, 31-32.
- "A German police officer shoots Jewish women still alive after a mass execution of Jews from the Mizocz ghetto". United States Holocaust Memorial Museum.
- Browning 1998, pp. 9-12 (26/298 in PDF).
- Hillberg, pp. 175, 192–198, en passim.
- Browning 1998, pp. 11-12, 31-32.
- Patrick Desbois (27 October 2008). "The Shooting of Jews in Ukraine: Holocaust By Bullets". Museum of Jewish Heritage, New York, NY. Archived from the original on 25 December 2014. Retrieved 2 January 2015.
- Hillberg, Raul, The Destruction of the European Jews, Holmes & Meir: NY, NY, 1985, pp. 100–106.
- Goldhagen, pp 202, 271–273, Goldhagen's citations include Israel Gutman, Encyclopedia of the Holocaust, NY: Macmillan 1990
- Goldhagen, p 195.
- Hillberg, pp 105–106.
- Stein 1984, pp. 33–35.
- Browning, Christopher, Nazi Policy, Jewish Workers, German Killers, Cambridge University Press, 2000. ISBN 0-521-77490-X.
- Browning, Christopher (1998) . Ordinary Men: Reserve Police Battalion 101 and the Final Solution in Poland. New York: HarperCollins. ISBN 978-0-06-019013-2.
- Goldhagen, Daniel. Hitler's Willing Executioners: Ordinary Germans and the Holocaust (Amazon, Kindle book: look inside). Alfred A. Knopf 1996, Vintage 1997. ISBN 0679772685.
- McKale, Donald M (2011). Nazis after Hitler: How Perpetrators of the Holocaust Cheated Justice and Truth. Lanham, MD: Rowman & Littlefield. ISBN 978-1-4422-1316-6.
- McNab, Chris (2013). Hitler's Elite: The SS 1939-45. Osprey Publishing. ISBN 978-1782000884.
- Showalter, Dennis (2005). "Foreword". Hitler's Police Battalions: Enforcing Racial War in the East. Kansas City: University Press of Kansas. ISBN 978-0-7006-1724-1.
- Stein, George (1984) . The Waffen-SS: Hitler's Elite Guard at War 1939–1945. Cornell University Press. ISBN 0-8014-9275-0.
- Weale, Adrian (2012). Army of Evil: A History of the SS. New York; Toronto: NAL Caliber (Penguin Group). ISBN 978-0-451-23791-0.
- Westermann, Edward B. (2005). Hitler's Police Battalions: Enforcing Racial War in the East. Kansas City: University Press of Kansas. ISBN 978-0-7006-1724-1.
- Williams, Max (2001). Reinhard Heydrich: The Biography, Volume 1—Road To War. Church Stretton: Ulric Publishing. ISBN 978-0-9537577-5-6.
- Williamson, Gordon (2012) . World War II German Police Units. Osprey Publishing. ISBN 1780963408.
- Zentner, Christian; Bedürftig, Friedemann (1991). The Encyclopedia of the Third Reich. (2 vols.) New York: MacMillan Publishing. ISBN 0-02-897500-6.
- Megargee, Geoffrey P., ed. (2009). Encyclopedia of Camps and Ghettos, 1933–1945. Volume II. Bloomington: Indiana University Press. ISBN 0-253-35328-9.
- Nix Philip and Jerome Georges (2006). The Uniformed Police Forces of the Third Reich 1933-1945, Leandoer & Ekholm. ISBN 91-975894-3-8
|Wikimedia Commons has media related to Ordnungspolizei (Nazi Germany).| | <urn:uuid:1b3519c9-9f1e-40d4-9529-ded5d4107908> | CC-MAIN-2019-47 | https://en.m.wikipedia.org/wiki/Ordnungspolizei | s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496670559.66/warc/CC-MAIN-20191120134617-20191120162617-00217.warc.gz | en | 0.895455 | 5,516 | 3.390625 | 3 |
I imagine that the persistence of that question irritated Harry Truman above all other things. The atomic bombs that destroyed the cities of Hiroshima and Nagasaki fifty years ago were followed in a matter of days by the complete surrender of the Japanese empire and military forces, with only the barest fig leaf of a condition—an American promise not to molest the Emperor. What more could one ask from an act of war? But the two bombs each killed at least 50,000 people and perhaps as many as 100,000. Numerous attempts have been made to estimate the death toll, counting not only those who died on the first day and over the following week or two but also the thousands who died later of cancers thought to have been caused by radiation. The exact number of dead can never be known, because whole families—indeed, whole districts—were wiped out by the bombs; because the war had created a floating population of refugees throughout Japan; because certain categories of victims, such as conscript workers from Korea, were excluded from estimates by Japanese authorities; and because as time went by, it became harder to know which deaths had indeed been caused by the bombs. However many died, the victims were overwhelming civilians, primarily the old, the young, and women; and all the belligerents formally took the position that the killing of civilians violated both the laws of war and common precepts of humanity. Truman shared this reluctance to be thought a killer of civilians. Two weeks before Hiroshima he wrote of the bomb in his diary, "I have told [the Secretary of War] Mr. Stimson to use it so that military objectives and soldiers and sailors are the target and not women and children.
" The first reports on August 6, 1945, accordingly described Hiroshima as a Japanese army base.
This fiction could not stand for long. The huge death toll of ordinary Japanese citizens, combined with the horror of so many deaths by fire, eventually cast a moral shadow over the triumph of ending the war with two bombs. The horror soon began to weigh on the conscience of J. Robert Oppenheimer, the scientific director of the secret research project at Los Alamos, New Mexico, that designed and built the first bombs. Oppenheimer not only had threatened his health with three years of unremitting overwork to build the bombs but also had soberly advised Henry Stimson that no conceivable demonstration of the bomb could have the shattering psychological impact of its actual use. Oppenheimer himself gave an Army officer heading for the Hiroshima raid last minute instructions for proper delivery of the bomb.
Don't let them bomb through clouds or through an overcast. Got to see the target. No radar bombing; it must be dropped visually. ... Of course, they must not drop it in rain or fog. Don't let them detonate it too high. The figure fixed on is just right. Don't let it go up or the target won't get as much damage.
These detailed instructions were the result of careful committee work by Oppenheimer and his colleagues. Mist or rain would absorb the heat of the bomb blast and thereby limit the conflagration, which experiments with city bombing in both Germany and Japan had shown to be the principal agent of casualties and destruction. Much thought had also been given to finding the right city. It should be in a valley, to contain the blast; it should be relatively undamaged by conventional air raids, so that there would be no doubt of the bomb's destructive power; an educated citizenry was desired, so that it would understand the enormity of what had happened. The military director of the bomb project, General Leslie Groves, thought the ancient Japanese imperial capital of Kyoto would be ideal, but Stimson had spent a second honeymoon in Kyoto, and was afraid that the Japanese would never forgive or forget its wanton destruction; he flatly refused to leave the city on the target list. Hiroshima and Nagasaki were destroyed instead.
On the night of August 6 Oppenheimer was thrilled by the bomb's success. He told an auditorium filled with whistling, cheering, foot-stomping scientists and technicians that he was sorry only that the bomb had not been ready in time for use on Germany. The adrenaline of triumph drained away following the destruction of Nagasaki, on August 9. Oppenheimer, soon offered his resignation and by mid-October had severed his official ties. Some months later he told Truman in the White House, "Mr. President, I have blood on my hands."
Truman was disgusted by this cry-baby attitude. "I told him," Truman said later, "the blood was on my hands—let me worry about that."
Till the end of his life Truman insisted that he had suffered no agonies of regret over his decision to bomb Hiroshima and Nagasaki, and the pungency of his language suggests that he meant what he said. But it is also true that he ordered a halt to the atomic bombing on August 10, four days before the Japanese Emperor surrendered, and the reason, according to a Cabinet member present at the meeting, was that "he didn't like the idea of killing ... 'all those kids.' "
Was it right? Harry Truman isn't the only one to have disliked the question. Historians of the war, of the invention of the atomic bomb, and of its use on Japan have almost universally chosen to skirt the question of whether killing civilians can be morally justified. They ask instead, Was it necessary?
Those who say it was necessary argue that a conventional invasion of Japan, scheduled to begin on the southernmost island of Kyushu on November 1, 1945, would have cost the lives of large numbers of Americans and Japanese alike. Much ink has been spilled over just how large these numbers would have been. Truman in later life sometimes said that he had used the atomic bomb to save the lives of half a million or even a million American boys who might have died in an island-by-island battle to the bitter end for the conquest of Japan.
Where Truman got those numbers is hard to say. In the spring of 1945, when it was clear that the final stage of the war was at hand, Truman received a letter from former President Herbert Hoover urging him to negotiate an end to the war in order to save the "500,000 to 1 million American lives" that might be lost in an invasion. But the commander of the invasion force, General Douglas MacArthur, predicted nothing on that scale. In a paper prepared for a White House strategy meeting held on June 18, a month before the first atomic bomb was tested, MacArthur estimated that he would suffer about 95,000 casualties in the first ninety days—a third of them deaths. The conflict of estimates is best explained by the fact that they were being used at the time as weapons in a larger argument. Admirals William Leahy and Ernest J. King thought that Japan could be forced to surrender by a combination of bombing and naval blockade. Naturally they inflated the number of casualties that their strategy would avoid. MacArthur and other generals, convinced that the war would have to be won on the ground, may have deliberately guessed low to avoid frightening the President.
It was not easy to gauge how the battle would go. From any conventional military perspective, by the summer of 1945 Japan had already lost the war. The Japanese navy mainly rested on the bottom of the ocean; supply lines to the millions of Japanese soldiers in China and other occupied territories had been severed; the Japanese air force was helpless to prevent the almost nightly raids by fleets of B-29 bombers, which had been systematically burning Japanese cities since March; and Japanese petroleum stocks were close to gone. The battleship Yamato, dispatched on a desperate mission to Okinawa in April of 1945, set off without fuel enough to return.
But despite this hopeless situation the Japanese military was convinced that a "decisive battle" might inflict so many casualties on Americans coming ashore in Kyushu that Truman would back down and grant important concessions to end the fighting. Japan's hopes were pinned on "special attack forces," a euphemism for those engaged in suicide missions, such as kamikaze planes loaded with explosives plunging into American ships, as had been happening since 1944. During the spring and summer of 1945 about 8,000 aircraft, along with one-man submarines and "human torpedoes," had been prepared for suicide missions, and the entire Japanese population had been exhorted to fight, with bamboo spears if necessary, as "One Hundred Million Bullets of Fire." Military commanders were so strongly persuaded that honor and even victory might yet be achieved by the "homeland decisive battle" that the peace faction in the Japanese cabinet feared an order to surrender would be disobeyed. The real question is not whether an invasion would have been a ghastly human tragedy, to which the answer is surely yes, but whether Hoover, Leahy, King, and others were right when they said that bombing and blockade would end the war.
Here the historians are on firm ground. American cryptanalysts had been reading high-level Japanese diplomatic ciphers and knew that the government in Tokyo was eagerly pressing the Russians for help in obtaining a negotiated peace. The sticking point was narrow: the Allies insisted on unconditional surrender; the Japanese peace faction wanted assurances that the imperial dynasty would remain. Truman knew this at the time.
What Truman did not know, but what has been well established by historians since, is that the peace faction in the Japanese cabinet feared the utter physical destruction of the Japanese homeland, the forced removal of the imperial dynasty, and an end to the Japanese state. After the war it was also learned that Emperor Hirohito, a shy and unprepossessing man of forty-four whose first love was marine biology, felt pressed to intervene by his horror at the bombing of Japanese cities. The devastation of Tokyo left by a single night of firebomb raids on March 9–10, 1945, in which 100,000 civilians died, had been clearly visible from the palace grounds for months thereafter. It is further known that the intervention of the Emperor at a special meeting, or gozen kaigin, on the night of August 9–10 made it possible for the government to surrender.
The Emperor's presence at a gozen kaigin is intended to encourage participants to put aside all petty considerations, but at such a meeting, according to tradition, the Emperor does not speak or express any opinion whatever. When the cabinet could not agree on whether to surrender or fight on, the Premier, Kantaro Suzuki, broke all precedent and left the military men speechless when he addressed Hirohito, and said, "With the greatest reverence I must now ask the Emperor to express his wishes."
Of course, this had been arranged by the two men beforehand. Hirohito cited the suffering of his people and concluded, "The time has come when we must bear the unbearable." After five days of further confusion, in which a military coup was barely averted, the Emperor broadcast a similar message to the nation at large in which he noted that "the enemy has begun to employ a new and most cruel bomb. ... "
Are those historians right who say that the Emperor would have submitted if the atomic bomb had merely been demonstrated in Tokyo Bay, or had never been used at all?
Questions employing the word "if" lack rigor, but it is very probable that the use of the atomic bomb only confirmed the Emperor in a decision he had already reached. What distressed him was the destruction of Japanese cities, and every night of good bombing weather brought the obliteration by fire of another city. Hiroshima, Nagasaki, and several other cities had been spared from B-29 raids and therefore offered good atomic-bomb targets. But Truman had no need to use the atomic bomb, and he did not have to invade. General Curtis LeMay had a firm timetable in mind for the 21st Bomber Command; he had told General H. H. ("Hap") Arnold, the commander in chief of the Army Air Corps, that he expected to destroy all Japanese cities before the end of the fall. Truman need only wait. Steady bombing, the disappearance of one city after another in fire storms, the death of another 100,000 Japanese civilians every week or ten days, would sooner or later have forced the cabinet, the army, and the Emperor to bear the unbearable.
Was it right? The bombing of cities in the Second World War was the result of several factors: the desire to strike enemies from afar and thereby avoid the awful trench-war slaughter of 1914–1918; the industrial capacity of the Allies to build great bomber fleets; the ability of German fighters and anti-aircraft to shoot down attacking aircraft that flew by daylight or at low altitudes; the inability of bombers to strike targets accurately from high altitudes; the difficulty of finding all but very large targets (that is, cities) at night; the desire of airmen to prove that air forces were an important military arm; the natural hardening of hearts in wartime; and the relative absence of people willing to ask publicly if bombing civilians was right.
"Strategic bombing" got its name between the wars, when it was the subject of much discussion. Stanley Baldwin made a deep impression in the British House of Commons in 1932 when he warned ordinary citizens that bombing would be a conspicuous feature of the next war and that "the bomber will always get through."
This proved to be true, although getting through was not always easy. The Germans soon demonstrated that they could shoot down daytime low-altitude "precision" bombers faster than Britain could build new planes and train new crews. By the second year of the war the British Bomber Command had faced the facts and was flying at night, at high altitudes, to carry out "area bombing." The second great discovery of the air war was that high-explosive bombs did not do as much damage as fire. Experiments in 1942 on medieval German cities on the Baltic showed that the right approach was high-explosive bombs first, to smash up houses for kindling and break windows for drafts, followed by incendiaries, to set the whole alight. If enough planes attacked a small enough area, they could create a fire storm—a conflagration so intense that it would begin to burn the oxygen in the air, creating hundred-mile-an-hour winds converging on the base of the fire. Hamburg was destroyed in the summer of 1943 in a single night of unspeakable horror that killed perhaps 45,000 Germans.
While the British Bomber Command methodically burned Germany under the command of Sir Arthur Harris (called Bomber Harris in the press but Butch—short for "Butcher"—by his own men), the Americans quietly insisted that they would have no part of this slaughter but would instead attack "precision" targets with "pinpoint" bombing. But American confidence was soon eroded by daylight disasters, including the mid-1943 raid on ball-bearing factories in Schweinfurt, in which sixty-three of 230 B-17s were destroyed for only paltry results on the ground. Some Americans continued to criticize British plans for colossal city-busting raids as "baby-killing schemes," but by the end of 1943, discouraged by runs of bad weather and anxious to keep planes in the air, the commander of the American Air Corps authorized bombing "by radar"—that is, attacks on cities, which radar could find through cloud cover.
The ferocity of the air war eventually adopted by the United States against Germany was redoubled against Japan, which was even better siuited for fire raids, because so much of the housing was of paper and wood, and worse suited for "precision" bombing, because of its awful weather and unpredictable winds at high altitudes. On the night of March 9–10, 1945, General LeMay made a bold experiment: he stripped his B-29s of armament to increase bomb load and flew at low altitudes. As already described, the experiment was a brilliant success. By the time of Hiroshima more than sixty of Japan's largest cities had been burned, with a death toll in the hundreds of thousands.
No nation could long resist destruction on such a scale—a conclusion formally reached by the United States Strategic Bombing Survey in its Summary Report (Pacific War): "Japan would have surrendered [by late 1945] even if the atomic bombs had not been dropped, even if Russia had not entered the war [on August 8], and even if no invasion had been planned or contemplated."
Was it right? There is an awkward, evasive cast to the internal official documents of the British and American air war of 1939–1945 that record the shift in targets from factories and power plants and the like toward people in cities. Nowhere was the belief ever baldly confessed that if we killed enough people, they would give up; but that is what was meant by the phrase "morale bombing," and in the case of Japan it worked. The mayor of Nagasaki recently compared the crime of the destruction of his city to the genocide of the Holocaust, but whereas comparisons—and especially this one—are invidious, how could the killing of 100,000 civilians in a day for a political purpose ever be considered anything but a crime?
Fifty years of argument over the crime against Hiroshima and Nagasaki has disguised the fact that the American war against Japan was ended by a larger crime in which the atomic bombings were only a late innovation—the killing of so many civilians that the Emperor and his cabinet eventually found the courage to give up. Americans are still painfully divided over the right words to describe the brutal campaign of terror that ended the war, but it is instructive that those who criticize the atomic bombings most severely have never gone on to condemn all the bombing. In effect, they give themselves permission to condemn one crime (Hiroshima) while enjoying the benefits of another (the conventional bombing that ended the war).
Ending the war was not the only result of the bombing. The scale of the attacks and the suffering and destruction they caused also broke the warrior spirit of Japan, bringing to a close a century of uncontrolled militarism. The undisguisable horror of the bombing must also be given credit for the following fifty years in which no atomic bombs were used, and in which there was no major war between great powers. It is this combination of horror and good results that accounts for the American ambivalence about Hiroshima. It is part of the American national gospel that the end never justifies the means, and yet it is undeniable that the end—stopping the war with Japan—was the immediate result of brutal means.
Was it right? When I started to write this article, I thought it would be easy enough to find a few suitable sentences for the final paragraph when the time came, but in fact it is not. What I think and what I feel are not quite in harmony. It was the horror of Hiroshima and fear of its repetition on a vastly greater scale which alarmed me when I first began to write about nuclear weapons (often in these pages), fifteen years ago. Now I find I have completed some kind of ghastly circle.
Several things explain this. One of them is my inability to see any significant distinction between the destruction of Tokyo and the destruction of Hiroshima. If either is a crime, then surely both are. I was scornful once of Truman's refusal to admit fully what he was doing; calling Hiroshima an army base seemed a cruel joke. Now I confess sympathy for the man—responsible for the Americans who would have to invade; conscious as well of the Japanese who would die in a battle for the home islands; wielding a weapon of vast power; knowing that Japan had already been brought to the brink of surrender. It was the weapon he had. He did what he thought was right, and the war ended, the killing stopped, Japan was transformed and redeemed, fifty years followed in which this kind of killing was never repeated. It is sadness, not scorn, that I feel now when I think of Truman's telling himself he was not "killing 'all those kids.' " The bombing was cruel, but it ended a greater, longer cruelty.
They say that the fiftieth anniversaries of great events are the last. Soon after that the people who took part in them are all dead, and the young have their own history to think about, and the old questions become academic. It will be a relief to move on.
We want to hear what you think about this article. Submit a letter to the editor or write to email@example.com. | <urn:uuid:31d02eed-d5ab-4d99-a8fd-ebc0ae0c2d36> | CC-MAIN-2019-47 | https://www.theatlantic.com/magazine/archive/1995/07/was-it-right/376364/?utm_source=feed | s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496665976.26/warc/CC-MAIN-20191113012959-20191113040959-00058.warc.gz | en | 0.980052 | 4,213 | 3.046875 | 3 |
Chodosh in Chutz La'aretz - Part 2
Last week’s article, Part I, discussed the source and explanations of the prohibition of eating products containing chodosh flour or grain. The vast majority of poskim through the ages, from the Mishna down, ruled that this prohibition is Biblical even in Chutz La’aretz. This article will attempt to explain why, even so, chodosh observance is not more widespread or even well known, as well as to explore several different approaches, rationales, and leniencies offered by the authorities that allow chodosh products to be eaten in Chutz La’aretz.
1. Compounded Doubt
The Tur and Rema permitted the new grain because the new crop may have been planted early enough to be permitted, and, in addition, the possibility exists that the available grain is from a previous crop year, which is certainly permitted. This approach accepts that chodosh applies equally in chutz la’aretz as it does in Eretz Yisrael, but contends that when one is uncertain whether the grain available is chodosh or yoshon, one can rely that it is yoshon and consume it. Because of this double doubt, called a sefek sefeika, several major authorities permitted people to consume the available grain.
The issue: Rabbi Akiva Eiger questions the validity of this approach, and maintains that there is no compounded doubt here. He explains that the two safekos – when the grain took root and whether it is from a previous year’s crop – are really one safek, since grain that took root before the cutoff date is itself considered the previous year’s crop! Therefore, since the halacha states that chodosh is Biblical, we hold safek deoraysa l’chumra, and a single doubt should not be sufficient to allow it to be eaten. Additionally, even if one would rely on this leniency, it must be noted that this hetter is dependent on available information; if one knows that the grain being used is actually chodosh, one may not consume it.
2. The Taz’s Take - Rely on Minority
The Taz offers an alternate rationale. He permitted the chutz la’aretz grain, relying on the minority opinion that chodosh is a mitzvah that applies only in Eretz Yisrael. This is based on a Gemara that states that when a matter has not been ruled upon definitively (and regarding chodosh the Gemara does not issue an outright ruling), one may rely on a minority opinion under extenuating circumstances. The Taz wrote that in his time, due to the lack of availability of yoshon flour, it was considered Shaas Hadchak (extenuating circumstances) – as apparently ‘let them eat cake’ would not be a sufficient response to the needs of the hungry masses with no bread to eat – and therefore maintained that one may rely on the minority opinion.
The issue: The Shach emphatically rejects this approach, and concludes that one must be stringent when one knows that the grain is chodosh. The Ba’er Heitiv, as well as the Beis Hillel, likewise voice their rejection of this hetter in the strongest of terms – that there are “clear proofs” against this logic, and all poskim (Rif, Rambam, Rosh, Tur, Shulchan Aruch) effectively ruled against it – that chodosh in Chutz La’aretz is prohibited Biblically, period.
3. Near, Not Far
The Magen Avraham puts forward a different approach: it is not so clear-cut that the halacha follows Rabbi Eliezer in the Mishna (that eating chodosh is a Biblical prohibition), and therefore, “in order to answer up for the minhag of the world, we must say that we follow Rabbeinu Baruch, who was of the opinion that the prohibition of chodosh in Chutz La’aretz is a gezeira d’rabbanan (Rabbinical enactment), and Chazal only prohibited chodosh products in countries near Eretz Yisrael, and it therefore would not apply to countries further away.” He concludes by saying that a “ba’al nefesh” should still be stringent as much as possible.
The Aruch Hashulchan ruled this way as well, explaining that in Russia (where he lived), where the land was frozen until past Pesach, there is no hetter of safek or sfeik sfeika (compounded doubt – see #1) to rely upon, for everyone knew that the farmers were unable to plant until after Pesach. Rather, he wrote that the issur of chodosh is interrelated with the Korban Omer, and therefore only applies to places from where the Korban could possibly be brought. Therefore, Chazal were not gozer on lands far away from Eretz Yisrael, for there would be no reason to do so, as those grains will never even reach Eretz Yisrael. He adds that if one would not partake of the chodosh grains, he would be unable to eat any grain product for at least six months of the year, and Chazal would not have made a gezeira that the tzibbur would not be able to withstand – especially regarding grain, which is man’s main sustenance (“chayei nefesh mamash”).
The issue: The same as above – the vast majority of halachic authorities through the ages effectively ruled against this, holding that HaChodosh assur Min HaTorah b’chol makom (chodosh is Biblically prohibited in every place), including Chutz La’aretz.
4. The Beer Necessities of Life (Yes, you read that right!)
Another hetter is that of the Lechem Mishna (cited by the Shach) and the Pnei Yehoshua: drinks made from derivatives of chodosh grain, such as beer – which seems to have been the mainstay drink in those days – should be permitted, as they are not the actual grain itself. Several authorities qualify this by saying that one may only be lenient in a case of whiskey or beer that was derived from a mixture (ta’aruvos) of different grains, including chodosh grains, but not if the drink was made exclusively from chodosh grain.
The issue: However, the Shach himself seems uneasy about using this leniency, as the Rosh implied that such drinks should also be prohibited. The Chacham Tzvi, as well as the Chayei Adam and Aruch Hashulchan, rule that one may not rely on this l’maaseh. The Vilna Gaon is reported to have been so stringent on this that he considered one who buys beer made from chodosh grain for someone else to be transgressing Lifnei Iver.
There are those who took a middle-of-the-road stance on beer, including the Mishkenos Yaakov, who, although disagreeing with the Chacham Tzvi, nevertheless ruled that only for a tzorech gadol and shaas hadchak (extremely extenuating circumstances) may one rely on beer and other drinks derived from chodosh grain. Similarly, the Beis Hillel also disagrees with this hetter, but adds that if someone is weak and sickly, and it would be a danger for him not to drink it, he may rely on this hetter, as the Torah says “V’Chai Bahem” – v’lo sheyamus bahem (one should live by the mitzvos, and not die by them).
5. The Bach’s Hetter - Non-Jewish Owned Grain
The Bach advances a different halachic basis to permit use of the new grain. He opines that chodosh applies only to grain that grows in a field owned by a Jew, and not to grain grown in a field owned by a non-Jew. Since most fields are owned by gentiles, one can be lenient when one does not know the origin of the grain and assume that it was grown in a gentile’s field, and it is therefore exempt from chodosh laws. The Bach notes that many of the greatest luminaries of early Ashkenazic Jewry, including Rav Shachna and the Maharshal, were lenient regarding chodosh use in their native Europe. He shares that as a young man he presented his theory – that chodosh does not apply to a field owned by a gentile – to the greatest scholars of that generation, including the Maharal M’Prague, all of whom accepted it. In fact, the Ba’al Shem Tov is quoted as having had a dream that when the Bach died, Gehinnom was cooled down for forty days in his honor. When the Besh”t woke up, he exclaimed that he had not realized the greatness of the Bach, and ruled that it is therefore worthwhile to rely on his opinion regarding chodosh.
The issue: Even though there are several poskim who rule like the Bach, nevertheless the vast majority of authorities categorically reject this logic and rule that chodosh applies to grain grown in a gentile’s field – including the Rosh, Rambam, Rashba, Ran, Tosafos, Tur, and Shulchan Aruch, as well as many later poskim, including the Shach, Taz, Gr”a, Chid”a, the Pnei Yehoshua, the Sha’agas Aryeh, and the Aruch Hashulchan. Additionally, it is seemingly not widely known that later in his life the Ba’al Shem Tov retracted his opinion and became stringent himself, after he found out that a certain Gadol of his time, Rabbeinu Yechiel of Horodna, ruled stringently on this matter. It is also worthwhile to note that the Chazon Ish quoted the Chofetz Chaim as saying that after someone passes on to the World of Truth, he will be asked why he ate chodosh. If he replies that he relied on the hetter of the Bach, then he will be asked why he spoke lashon hara, as the Bach did not allow that (implying that in Heaven he will be labeled a hypocrite).
Let Them Eat Bread
It should be further noted that even among those who allowed consumption of chodosh based on the Bach’s hetter, the vast majority gave that ruling only because it was sha’as hadchak (extenuating circumstances), as otherwise no grain products would have been permitted to be eaten; barring that, they held that one should not rely on this leniency. This includes such renowned decisors as the Pri Megadim, Chayei Adam, Shulchan Aruch HaRav, Kitzur Shulchan Aruch, Mishna Berurah, and the Kaf HaChaim. This is similar to the Magen Avraham and Aruch Hashulchan’s approach (see #3 above) of finding a hetter so that Klal Yisrael will be “clean of sin” for their actions.
Five separate rationales for allowing leniency in eating chodosh grain in Chutz La’aretz have been offered, along with the issues and difficulties involved in relying on each of them. Yet none seems to fully answer the question posed in last week’s article: “Why has the traditional approach seemed to be lenient when most authorities rule that chodosh is prohibited even outside Eretz Yisrael?” B’Ezras Hashem, the final pieces of the puzzle will be presented in next week’s article.
Y”D 293, 3. This approach was first introduced by the Rosh (Shu”t HaRosh Klal 2, 1; brought by the Tur) and Mordechai (Kiddushin 501). Tosafos (Kiddushin 36b s.v. kol) implies this way as well.
R’ Akiva Eiger in his glosses to Y”D 293, 3, quoting the Shu”t Mutzal Ma’eish (50). This question is also asked by the Kreisi U’Pleisi and Chavaas Daas (brought in Shu”t Beis Avi vol. 4, 138, 7). Although the Aruch Hashulchan (Y”D 293, 16) attempts to address this difficulty (dochek terutz) and explain how our case might still be a safek sfeika, yet Rabbi Akiva Eiger’s kushyos are not to be taken lightly.
Taz Y”D 293, 4.
Gemara Niddah 9b.
Y”D 293, Nekudos HaKessef 1.
Ba’er Heitiv ad loc. 4, Beis Hillel ad loc. 1.
Sources cited in last week’s article - Part I.
Magen Avraham O.C. 489, 17.
Aruch Hashulchan Y”D 293, 19. He actually disagreed with the Magen Avrahem’s proof (similar to the Machatzis HaShekel there – ad loc. end 2), but still paskened l’maaseh like him.
It can be debated that the Aruch Hashulchan’s hetter would no longer apply nowadays, when chodosh products, such as Cheerios are easily purchasable in Israel. For more on this topic of chodosh grain from Chutz La’aretz used in Eretz Yisrael, see Shu”t Achiezer (vol. 2, 39), Shu”t Chelkas Yoav (Y”D 33), Shu”t Har Tzvi (Y”D 239 -240), Shu”t Tzitz Eliezer (vol. 20, 40, 1, in the name of Rav Shmuel Salant, originally printed in Kovetz Kenesses Chachmei Yisrael 126,1) and Orchos Rabbeinu (vol 4, 70, pg 30, quoting the Chazon Ish and the Steipler Gaon).
Lechem Mishna (end of Terumos, cited by the Shach Y”D 293, 6), Pnei Yehoshua (end of Kiddushin, Kuntress Acharon 51, s.v. din hashlishi).
Shulchan Aruch HaRav (Shu”t 20; O.C.489, end 30) and the Beis Lechem Yehuda (Y”D end 293). On the other hand, the Chochmas Adam (Binas Adam 54 ) maintains that even by a ta’aruvos, in order for this to apply, there would need to be present at least 60 times the yoshon grain against the amount of chodosh grain.
Shu”t Chacham Tzvi 80, Chayei Adam (131, 12; see last footnote), Aruch Hashulchan (Y”D 293, 23), Gr”a (Maaseh Rav 89).
Similarly, see Shu”t Rivevos Efraim (vol 8, 199) who quotes Rav Chaim Kanievsky as ruling that if one is stringent on chodosh it is prohibited for him to feed chodosh food to someone who is not machmir. The Minchas Yitzchak (Shu”t vol 8, 113) proves that the Chasam Sofer agreed with the Chacham Tzvi on this, that any derivative of chodosh still maintains the same status and is assur M’deoraysa. His own conclusion is that only one who relies on a hetter of chodosh in Chutz La’aretz being derabbanan may rely on the hetter of beer, as it is improbable to make such a distinction. Other contemporary poskim as well, including Rav Yaakov Kamenetsky (Emes L’Yaakov on Shulchan Aruch O.C. 489, footnote 461) and the Beis Avi (cited above, 19) hold that one should be stringent on beer.
Shu”t Mishkenos Yaakov (Y”D end 68), Bais Hillel cited above.
Bach Y”D 293, 1 s.v. uma”sh bein.
Ba'al Shem Tov al HaTorah Parshas Emor, 6.
See Shu”t Tzitz Eliezer (vol. 20, 40) who states that the Bach used to be the Rav of both Medzhibuzh and Belz, and posits that this is possibly why many Chassidim are lenient when it comes to eatingchodosh products. However, the Shu”t Beis Avi (cited above, 2) quotes that the Sar Shalom of Belz was very stringent with the issur of chodosh, so it seems unlikely that Belzer Chassidim would be meikel exclusively based on the Bach’s shitta. He also cites (ibid, 19) that the Darchei Teshuva quoted that the Divrei Chaim of Sanz was also lenient with chodosh.
Including the Ba’er Hagolah (Y”D 293, 7), Knesses Yechezkel (Shu”t 41), Shev Yaakov (Shu”t 61), Chelkas Yoav (Shu”t Y”D 33, who says that the mekeilim actually rely on a tziruf of sevaros), and Makneh (Kiddushin 38- who qualifies his hetter that in Eretz Yisrael the prohibition would apply by grain owned by a non-Jew); and there are others who try to answer up for his shitta, through sevara, and not psak l’maaseh,including the Avnei Nezer (Shu”t Y”D 386 who wrote a teshuva on this topic when he was 16 (!) where, although not writing for psak l’maaseh, still brings sevaros to be maykel like Rabbenu Baruch; but he does note that the Rambam l’shitaso would not hold of them), and Rav Meshulam Igra (Shu”t vol. 1, O.C. 40 - while not paskening, similarly answers up for the sevara of Rabbenu Baruch to say chodosh in Chu”l could be derabbanan, but also disproves that it is dependant on the Korban Omer).
Shach (Y”D 293, 6), Taz (ibid. 2), Gr”a (ibid. 2 – who writes that the Ba’er Hagolah made such a mistake by paskening like the Bach, that it’s not worth even addressing the issue; see also Maaseh Rav 89 – 90 and Sheiltos 82 on how strict the Gr”a was with this halacha), Chid”a (Birkei Yosef ibid. 1), the Pnei Yehoshua (Shu”t Y”D 34 – also known as the Meginei Shlomo, was the grandfather of the Pnei Yehoshua on Shas who was more lenient regarding this prohibition; he writes extremely strongly against the Bach – calling his hetter “worthless”), the Sha’agas Ayreh (Shu”t HaChadashos, Dinei Chodosh Ch. 1 - 2, who was even makpid on all the dinim of Ta’am K’ikar for chodosh grain) and the Aruch Hashulchan (ibid. 12). See also Shu”t Shoel U’Meishiv (vol.6, 38), who wrote a pilpul (only sevarah and not psak) proving that chodosh should apply by grain owned by a non-Jew.
Ba'al Shem Tov al HaTorah Parshas Emor, 7.
Cited in Orchos Rabbeinu above.
Chayei Adam (131, 12) says since it is difficult for everyone to keep, one may rely on the minority opinion, however anyone who is an “ohev nafsho” would distance himself from relying on this; Shulchan Aruch HaRav (O.C. 489, 30) who calls it a “melamed zchus” and that every ba’al nefesh should be as machmir as possible – since that is the proper halacha; Kitzur Shulchan Aruch (172, 3); the Pri Megadim (O.C. 489, E.A. end 17) – “B’avonoseinu harabbim hadoros chalushim, v’ee efsher lizaher kol kach bzeh”; the Mishna Berurah (O.C. 489, 45, Biur Halacha s.v. v’af) although writing that one may not object against someone who is lenient, however uses very strong words against it and also calls the hetter a “melamed zchus”, since it’s a “davar kasha” to be vigilant from eating chodosh, and maintains that everyone should try to keep it as much as they possibly can. [It is said that Rav Moshe Feinstein - in line with the reasoning of Mishna Berura, was very scrupulous about this and made sure to have at least yoshon oats and barley - since it was much easier to observe yoshon with them than with wheat. See also Shu”t Igros Moshe (Y”D 4, end 46) where although he maintains there is what to rely upon l’maaseh, still one should try to ascertain where he can purchase yoshon flour, as it is preferable.] The Mishna Berura furthermore comments that with the advent of the train (mesilas habarzel), the grain might be coming from faraway lands such as Russia, where it’s vaday chodosh (like the Aruch Hashulchan observed); and the Kaf HaChaim (end O.C. 489 - who tries to find hetterim for why the “oilum is maykel”. Even the Ohr Zarua himself (vol. 1, 328, whom the Maharil and Terumas Hadeshen (191) base their similar lenient psak on), one of the early proponents of ruling that chodosh in Chutz La’aretz is only a Rabbinic enactment qualifies his psak, that his lenient ruling only applies in a case of safek when the grain was planted (and therefore safek drabbanan l’kula), and only since it’s shaas hadchak, for it is impossible not to buy grain and bread, therefore kdai l’smoch bshaas hadchak.
For any questions, comments or for the full Mareh Mekomos / sources, please email the author: firstname.lastname@example.org
Disclaimer: These are just a few basic guidelines and overview of the Halacha discussed in this article. This is by no means a complete comprehensive authoritative guide, but rather a brief summary to raise awareness of the issue. One should not compare similar cases in order to rules in any real case, but should refer his questions to a competent Halachic authority.
Disclaimer: This is not a comprehensive guide, rather a brief summary to raise awareness of the issues. In any real case one should ask a competent Halachic authority.
L'iluy Nishmas the Rosh HaYeshiva - Rav Chonoh Menachem Mendel ben R' Yechezkel Shraga, Rav Yaakov Yeshaya ben R' Boruch Yehuda, and l'zchus for Shira Yaffa bas Rochel Miriam and her children for a yeshua teikef u'miyad! | <urn:uuid:abcb44af-8938-4308-bdb4-9e1e9066ebf4> | CC-MAIN-2019-47 | https://ohr.edu/this_week/insights_into_halacha/4992 | s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496665767.51/warc/CC-MAIN-20191112202920-20191112230920-00178.warc.gz | en | 0.946921 | 5,264 | 2.71875 | 3 |
It is well known that long-term success and predictability of
root canal therapy is dependent on the presence, or absence,
of infection and our ability to disinfect and seal all the main
and accessory canals three-dimensionally. Prognosis for
endodontic therapy in teeth with apical periodontitis is 10-
15% lower than for teeth without periapical lesion.1,2 Once the
tooth becomes infected, it becomes very difficult to completely
eliminate the bacteria from the three-dimensional network of
complicated root canal systems.3 Several pigmented endodontic
pathogens, such as P. gingivalis, P. endodontalis, P. intermedia
and P. nigrescens, have been found to persist after proper
instrumentation of the canals, and are responsible for failure of
endodontic therapy.4 Smear layer produced during mechanical
instrumentation, with either rotary or hand files, reduces
permeability of intra-canal irrigants, like NaOCl(sodium
hypochlorite) and CHX(chlorhexidine) by 25-49%.5 Hence,
anti-bacterial rinsing solutions can only reach the bacteria to
a depth of 100mm into the depth of the dentinal tubules.6
However, microorganisms such as E. faecalis have been found
as deep as 800-1100mm.
In the recent years, different laser wavelengths have been
shown to be advantageous for deeper penetration of dentinal
tubules, compared with chemical irrigants9,10 and therefore,
for better bactericidal effect.11,12 The Er,Cr:YSGG laser
system has been shown to eradicate E. faecalis and E.Coli to
undetectable levels, with average temperature rise at the root
surface of only 2.7-3.2oC, depending on the power used.13
Complete smear layer removal has been shown by both the
Er,Cr:YSGG 2780nm and the Er:YAG 2940nm laser systems,
due to induction of shock waves in aqueous solutions (water,
EDTA) inside the root canals.14,15 Although EDTA by itself
is effective in smear layer removal in straight large canals, its
effectiveness is improved by laser activation in small curved
The clinical case presented here describes a successful
endodontic treatment of a lower premolar, which exhibited
radiographic signs of internal resorption and a buccal
perforation. This microscope observation by an endodontist
suggested a hopeless prognosis for the tooth and, hence,
the recommended extraction. We utilized a Er,Cr:YSGG
wavelength for removal of organic debris and smear layer,
followed by a diode 940nm wavelength, known for its ability
Treatment Of Internal
Cervical Root Resorption
Using Er,Cr:YSGG 2780nm and
Diode 940nm Laser Systems:
A Three-Year Follow-Up Case Report
Marina Polonsky, DDS, MSc Lasers in Dentistry Keywords: endodontics, root canal, internal
resorption, Er,Cr:YSGG, diode, Laser, irrigation,
to penetrate deep into the tubules and disinfect. In this case,
the use of NaOCl, and likely hypochlorite extravasation into
the periodontal ligament in the area of buccal perforation
had to be avoided.
A 52-year-old male presented to the dental office for a
routine prophylaxis appointment, in December 2013, and
was clinically diagnosed with interproximal decay on the
distal surface of lower right second premolar (tooth 45). A
closer radiographic examination revealed an area of internal
resorption in close proximity to the decay (Fig. 1). A composite
resin restoration was completed without pulp exposure,
but close proximity to the area of internal resorption
was explained to the patient. In case of an onset of symptoms
of irreversible pulpitis, the patient was instructed to contact
the office immediately. Two months later, in February 2014,
the patient contacted the office complaining of lingering
cold sensitivity and was referred to the endodontist for
consultation regarding feasibility of root canal therapy. The
endodontist report indicated that buccal perforation was
observed and prognosis was deemed to be hopeless (Fig. 2).
Recommended treatment was extraction of the tooth and
Upon the patient’s return to the office in March 2014 for
possible extraction, the use of lasers in endodontics to help
with extremely compromised cases [18,19] was explained,
and the patient consented to try laser-assisted endodontic
treatment. Inferior alveolar nerve block was administered
using 4% Articaine 1:200,000 Epi. The tooth was accessed
following removal of the temporary filling placed by the
endodontist. Working length was measured to be 20mm.
Mechanical instrumentation of the canal was completed
using Sybron TF adaptive reciprocating motor system
up to file ML2 corresponding to ISO 35 size master file.
FileEze (Ultradent, South Jordan,UT, USA) was used for
file lubricant and BioPure MTAD (Mixture of tetracycline
isomer, citric acid and detergent) by Dentsply Tulsa Dental
Specialties, Tulsa, Okla.,USA was chosen as the intra-canal
irrigation, since the presence of buccal perforation contra-indicated
the use of NaOCl or EDTA. Before final obturation,
dual wavelength laser cleaning and disinfection protocol was
Dual Wavelength Endodontic debridement, decontamination
and disinfection laser protocol:
1. Debridement. 2780nm Er,Cr:YSGG laser (Biolase, IPlus,
Irvine, CA), 60ms pulse (H-mode), RFT2 radial firing
endolase tip 200mm diameter. Power 1.25W, Repetition
rate 50Hz, Pulse energy 25mJ/pulse, 54% water, 34% air.
The tip was measured 1mm short of working length and
fired only on the way out of the canal. The tip was moved
in a corkscrew-like motion at a speed of 1mm/s. Due to
complexity of internal resorption pattern, it was chosen
to repeat for 6 cycles of 15s (instead of recommended 4
cycles). Total time of laser application 90s, total energy
delivered 112.5 Joules. Calibration factor for RFT2 tip is
0.55, so the actual energy delivered to the surface of canal
wall is 62J.
2. Decontamination. 2780nm Er,Cr:YSGG laser, 60ms pulse
(H-mode), RFT2 radial firing endolase tip 200mm
diameter. Power 0.75W, Repetition rate 20Hz, Pulse
energy 37.5mJ/pulse, 0% water, 11% air. The tip was
inserted into the canal 1mm short of the apex and fired
only on the way out of the canal. The tip was moved in a
corkscrew-like motion at a speed of 1mm/sec. 6 cycles of
laser irradiation were applied, 15 seconds each, for a total
Endodontic therapy for teeth diagnosed with irreversible
pulpitis has been in use for many years, and continues to be
the standard of care in the dental practice. In simple cases,
where mechanical instrumentation and chemical irrigation
can achieve sufficient cleaning and disinfection, endodontic
treatment is reported to be as high as 96% successful.
However, in cases where root canal systems are more
complex, proper cleaning of the organic debris, removal of
smear layer and disinfection, has proven to be a challenge.
Long-standing chronic periapical lesions and complex root
canal anatomy, examples of which include the presence of
isthmus, apical deltas and lateral canals, the predictability
of successful endodontic therapy is significantly reduced.
Lasers of different wavelengths have been shown to be
useful in improving disinfection and smear layer removal
in more complicated endodontic cases. In this article, we
describe a case of laser-assisted endodontic therapy of a
lower premolar exhibiting radiographic evidence of internal
cervical resorption and clinical symptoms of irreversible
pulpitis. The case was recommended for extraction following
endodontic consultation, as there was suspected buccal
root perforation. Conventional mechanical instrumentation
was performed, followed by a dual wavelength protocol utilizing
the Er,Cr:YSGG 2780nm laser for smear layer removal
and the diode 940nm laser for deeper disinfection of the
dentinal tubules. Conventional chemical irrigants, such as
NaOCl and EDTA, were not used due to suspected buccal
perforation and the possibility of chemical extrusion outside
the root and into the surrounding tissues. The protocol
was successful and three-year radiographic follow-up is
presented as evidence.
66 oralhealth MAY 2017
of 90s. Total energy delivered 67.5J. Calibration factor
for RFT2 tip is 0.55, so the actual energy delivered to the
surface of canal wall was 37J.
3. Drying. Sterile paper points were used to verify complete
dryness of the canal system, as much as possible with
such complex internal resorption network.
4. Disinfection. 940nm diode laser, continuous wave mode,
EZ200 end-firing tip 200mm diameter. Power 1.0W. Tip
was inserted 1mm short of the apex and moved at the
speed of 2mm/s while being fired on the way out of the
canal. 6 cycles of 7.5s each for a total 30s of irradiation.
Total energy of 30J was delivered.
Following the laser protocol, the canal was filled with
hydrophilic EndoRez UDMA
resin based, self-priming
endodontic sealer (Ultradent,
South Jordan, UT, USA) to
the level of the CEJ (cementum-enamel
#30 (Dentsply, Tulsa
Dental Specialties) softened
gutta percha carrier was used
to complete the obturation.
The tooth was permanently
restored with resin restoration
(FutureBond DC bonding
agent and GrandioSO nanohybrid composite resin by Voco,
Germany) immediately following obturation (Fig. 3). The
patient was instructed to inform the office should any pain
or discomfort persist past first two to three days following
the endodontic therapy. He returned in three months for a
routine follow-up and reported only minor discomfort for
the first two to three days, which quickly subsided, with the
tooth feeling “normal” ever since. Post-op radiographs were
taken at 3, 9, 24 and 36 months after the completion of the
laser-assisted endodontic treatment (Figs. 4-8). The tooth
remains asymptomatic and functional.
The three-year success of this complicated case can be
attributed to a number of clinical decisions: 1. The timely
intervention at the irreversible pulpitis stage and before the
onset of peri-apical infection and bacterial colonization.1,2
2. The effectiveness of the Er,Cr:YSGG laser system in
three-dimensional removal of organic debris and smear layer
from the extensive internal resorption network of canals.13-16
3. The ability of the 940nm diode wavelength to penetrate
deep into the tubular dentin to help dry the internal structure
of resorption network and target
pigmented bacteria, due to its
high absorption in melanin and
hemoglobin.20 4. The hydrophilic
and biocompatible nature
of a UDMA resin sealer like
EndoRez, which is well tolerated
by periapical tissues in case of
overfill or extrusion.21-23
It has been shown that
bacterial contamination of the
root canal system, presence of
necrotic tissue and bacterial
colonies deep inside the dentinal
tubules (as far as 1.1mm)24 are the main contributing factors
to the long-term failure of endodontic therapy. Performing a
root canal treatment prior to onset of deep bacterial colonization
is a valuable service we can provide to our patients, to
ensure long-term success. Unfortunately, early intervention is
not always possible. As well, the difficult three-dimensional
anatomy of the canal systems, such as internal resorption
network, also makes it difficult to achieve complete debride-
on page 69
Pre-op X-ray of internal resorption and
Pre-op photo of internal resorption by
Dr. Thompson. February 2014.
Immediate post-op X-ray.
1. 2. 3.
Cooling effects of
blood circulation should
make this protocol even
safer in vivo
ment and disinfection of the entire surface area of the root
canal walls and into the depth of the tubules. Laser systems,
such as the 2780nm Er,Cr:YSGG and the 940nm diode, add
an advantage to the traditional mechanical debridement and
chemical disinfection. The Er,Cr:YSGG laser is a free-running
pulsed laser with a high absorption in hydroxyapatite
(HA) and water (OH- radical to be more precise). Activation
of aqueous solutions, such as water and EDTA, with this
laser creates cavitation effects and shock waves responsible
for debris and smear layer removal inside the root canal.
Open dentinal tubules then allow laser light conduction and
effective disinfection as far as 500mm deep into the tubules
with this 2780nm laser system alone.25 To achieve an even
greater penetration of laser energy, and accompanying bactericidal
effect, the addition of second wavelength, like the
940nm diode, offers extra benefit. Diode lasers have deeper
penetration in dentin26,27; their high absorption in melanin
and hemoglobin allows for selective killing of pigmented and
pigment producing bacteria, which make up the majority of
endodontic infections.4 A recent study has shown that dual
wavelength protocol (2780nm with 940nm) is safe and does
not result in adverse temperature changes on the external root
canal surface in vitro.28 Temperature rise was recorded to be
5oC to 7oC depending on the thickness of the dentinal wall.
Continuous movement of the laser tip inside the canal and
distilled water irrigation, between laser exposures, ensures
control of temperature rise. The cooling effects of blood circulation
should make this protocol even safer in vivo. Lastly, the
sealer chosen to complete the endodontic obturation is also of
great importance to the long-term success of complicated cases
involving possible perforations and/or open apices. Extrusion
of EndoRez sealer past the apex has been shown to be well
tolerated by periapical tissues and does not interfere with normal
bone healing.22,23 Owing to its hydrophilic nature, good
flow and wetting characteristics21, it is ideal for the filling of
the complex internal resorption network of canals, especially
since complete dryness inside the tooth cannot be accomplished
with the use of paper points, and possibly even diode
laser irradiation. It has even been suggested that EndoRez
increased the fracture resistance of the endodontically treated
roots to internally generated stresses.29
The three-year success of this clinical case may be an indication
that laser systems, such as the Er,Cr:YSGG (IPlus by Biolase)
and 940nm diode (Epic 10 by Biolase), can be beneficial
from page 66
Three-months post-op X-ray.
24 months post-op X-ray
Nine-months post-op X-ray.
36 months post-op X-ray.
15 months post-op X-ray
70 oralhealth MAY 2017
additional tools in the treatment of extremely complicated endodontic
cases, otherwise doomed for extraction. The possible
presence of buccal perforation, in this case of internal cervical
resorption, contra-indicated the use of conventional irrigants,
such as NaOCl and EDTA, due to the likely occurrence of a
painful hypochlorite accident in the periodontal tissues. An inability
of conventional hand or rotary files to completely remove
organic material from such a complex resorption system was
another reason for recommended extraction for this premolar
tooth. The lasers’ ability to remove debris and smear layer with
acoustic shockwaves and the great bactericidal effect30, due to
deep penetration and pigment affinity, helped us treat this case
in a conservative, non-surgical manner and without the use of
potentially harmful chemical irrigants. OH
Oral Health welcomes this original article.
Thank you Dr. Thompson of Capital Endodontics in Ottawa, ON,
Canada for providing pre-op photo of internal resorption.
1. Sjogren U, Hagglund B, Sundqvist G, Wing K. Factors affecting
the long-term results of endodontic treatment. J Endod,
2. Chugal N.M, Clive J.M, Spangberg L.S. A prognostic model for
assessment of the outcome of endodontic treatment: Effect of
biologic and diagnostic variables. Oral Surg Oral Med Oral Pathol
Oral Radiol Endod, 2001;91(3):342-352.
3. Haapasalo M, Endoal U, Zandi H, Coli JM. Eradication of endodontic
infection by instrumentation and irrigation solutions. Endoodont
4. Gomes B, Jasinto R, Pinheiro E, Sousa E, Zaia A, Ferraz C et
al. Porphyromonas gingivalis, Porphyromonas endodontalis,
Prevotella intermedia and Prevotella nigrescens in endodontic
lesions detected by culture and by PCR. Oral Microbiol Immunol
5. Fogel HM, Pashley DH. Dentin permeability: effects of endodontic
procedures on root slabs. J endod, 1990;16(9): 442-445.
6. Orstavik D, Haapasalo M. Disinfection by endodontic irrigants
and dressings of experimentally infected dentinal tubules. Endod
Dent Traumatol, 1990;6:124-149.
7. Berutti E, Marini R, Angeratti A. Penetration ability of different
irrigants into dentinal tubules. J Endod 1997; 23(12):725-727.
8. Vahdaty A, Pitt Ford TR, Wilson RF. Efficacy of chlorhexidine
in disinfecting dentinal tubules in vitro. Endod Dent Traumatol,
9. Klinke T, Klimm W, Gutknecht N. Antibacterial effects of Nd:YAG
laser irradiation within root canal dentin. J Clin Laser Med Surg,
10. Odor TM, Chandler NP, Watson TF, Ford TR, McDonald F. Laser
light transmission in teeth: a study of the patterns in different
species. Int Endod J, 1999;32(4):296-302.
11. Gutknecht N, Moritz A, Conrads G, Sievert T, Lampert F. Bactericidal
effect of the Nd:YAG laser in in vitro root canals. J Clin Laser
Med Surg, 1996;14(2):77-80.
12. Moritz A, Gutknecht N, Goharkhay K, Schoop U, Wenisch J, Sperr
W. In vitro irradiation of infected root canals with a diode laser:
results of microbiologic, infrared spectrometric and stain penetration
examinations. Quintessence Int, 1997;28(3):205-209.
13. Schoop U, Goharkhay K, Klimscha J, Zagler M, Wenisch J, Gourgopoulos
A, Sperr W, Moritz A. The use of the erbium, chromium:yttrium-scandium-gallium-garnet
laser in endodontic treatment:
the results of an in vitro study. JADA 2007;138(7):949-55.
14. Blanken J, De Moor RJ, Meire M, Verdaasdonk R. Laser induced
explosive vapor and cavitation resulting in effective irrigation
of the root canal. Part 1: a visualization study. Lasers Surg Med,
15. De Moor RJ, Blanken J, Meire M, Verdaasdonk R. Laser induced
explosive vapor and cavitation resulting in effective irrigation of
the root canal. Part 2: evaluation of the efficacy. Lasers Surg Med,
16. De Moor RJ, Meire M, Goharkhay K, Moritz A, Vanobbergen J.
Efficacy of ultrasonic versus laser-activated irrigation to remove
artificially placed dentin debris plugs. J Endod, 2010;36(9):1580-
17. Murugesan MS, Rajasekaran M, Indra R, Suganthan P. Efficacy of
Er,Cr:YSGG Laser with Conical Tip Design in Smear Layer Removal
at the Apical Third of Curved Root Canals. Int J Laser Dent,
18. Martins MR, Carvalho MF, Pina-vas I, Capelas J, Martins MA, Gutknecht
N. Er,Cr:YSGG laser and radial firing tips in highly compromised
endodontic scenarios. Int J Laser Dent, 2013;4:10-14.
19. Khetarpal A, Ravi R, Chaudhary S, Talwar S, Verma M, Kathuria
A. Successful Endodontic Management using Er,Cr:YSGG laser
disinfection of root canal in a case of large periapical pathology.
Int J Dent Sci and Research, 2013;1(3):63-66.
20. Gutknecht N, Franzen R, Schippers M, Lampert F. Bactericidal
effect of a 980-nm diode laser in the root canal wall dentin of
bovine teeth. J Clin Laser Med Surg, 2004;22:9-13.
21. Renato L. Obturation of the root canal – Listening to the needs of
the tooth with Science and Simplicity. Oral Health, 2009; 66-70.
22. Zmener O, Banegas G, Pamekjer C. Bone tissue response to a
Methacrylate-baser endodontic sealer: a histological and histometric
study. JOE, 2005;31(6):457-459.
23. Zmener O, Pameijer C. Clinical and radiographical evaluation
of a resin-based root canal sealer: a 5-year follow-up. JOE,
24. Kouchi Y, Ninomiya J, Yasuda H, Fukui K, Moriyama T, Okamoto H.
Location of streptococcus mutans in the dentinal tubules of open
infected root canals. J Dent Res, 1980;59:2038-46.
25. Franzen R, Esteves-Oliveira M, Meister J, Wallerang A, Vaweersch
L, Lampert F, Gutknecht N. Decontamination of deep dentin by
means of erbium,chromium:yttrium-scandium-gallium-garnet
laser irradiation. Lasers Med Sci, 2009;24(1):75-80.
26. Preethee T, Kandaswamy D, Arathi G, Hannah R. Bactericidal
effect of the 908nm diode laser on Enterococcus faecalis in
infected root canals. J Conserv Dent, 2012;15:46-50.
27. Falkenstein F, Gutknecht N, Franzen R. Analysis of laser transmission
and thermal effects on the inner root surface during periodontal
treatment with a 940-nm diode laser in an in vitro pocket
model. J Biomed Opt, 2014;19:128002.
28. Sardar Al-Karadaghi T, Gutknecht N, Jawad HA, Vanweersch L,
Franzen R. Evaluation of temperature elevation during root canal
treatment with dual wavelength laser: 2780nm Er,Cr:YSGG and
940nm diode. Photomed Laser Surg, 2015;33(9):460-466.
29. Hammad M, Qualtrough A, Silikas N. Effect of new obturating
materials on vertical root fracture resistance of endodontically
treated teeth. JOE, 2007;33(6)732-736.
30. Gordon W, Atabakhsh VA, Meza F, et al. The antimicrobial efficacy
of the erbium, chromium:yttrium-scandium-gallium-garnet
laser in endodontic treatment: the results of an in vitro study.
J Am Dent Assoc, 2007;138:949-955.
Dr. Marina Polonsky DDS, MSc is a
gold medal University of Toronto ’99
graduate, she maintains private general
practice in Ottawa, Ontario with
focus on multi-disciplinary treatment
utilizing lasers of different wavelengths.
She holds a Mastership from
World Clinical Laser Institute (WCLI),
Advanced Proficiency Certification
from Academy of Laser Dentistry
(ALD) and Master of Science in Laser
Dentistry from RWTH University in
Aachen, Germany. Dr. Polonsky is a founder of the Canadian Dental
Laser Institute (CDLI) | <urn:uuid:2f3c1c93-ce46-469d-a475-a3a2cbf85ea1> | CC-MAIN-2019-47 | http://cndlaserinstitute.com/cervical-root-resorption/ | s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496664439.7/warc/CC-MAIN-20191111214811-20191112002811-00219.warc.gz | en | 0.828422 | 5,842 | 2.578125 | 3 |
The following is an abbreviated history of Hughes’ communication satellite design evolution during the latter part of the twentieth century. Hughes1 was the leading supplier of communication satellites from the mid 1960’s through the end of the twentieth century. This marketplace dominance was earned through the innovation, insight and timely responses of Hughes’ technical and management leadership coupled with outstanding execution by a staff of extraordinarily capable employees. For those on the Hughes space team fortunate enough to have lived through these heady times, it was truly a unique and exhilarating career experience.
The advent of Syncom in the early 1960’s formed the spacecraft design foundation for the extraordinarily successful family of Hughes spin-stabilized communications satellites which endured for more than 40 years. This 80 pound, spin-stabilized spacecraft is a true masterpiece of design innovation. Syncom is a beautifully integrated design, elegant in its simplicity and efficiency. (See “The Syncom Story” by Harold Rosen.) Syncom was, in short the perfect, and probably the only practical, spacecraft design solution capable of initiating geosynchronous satellite communications through the application of the then current technology and employing the rocket launchers available at the time. Harold Rosen and his colleagues, Don Williams and Tom Hudspeth conceived, promoted and ultimately lead the full scale development of this major innovation in global communications.
The spin-stabilized Syncom design incorporated major advantages not realized in competing geosynchronous spacecraft concepts. Its large, spinning angular momentum facilitated the attitude stabilization of an integral solid rocket “stage” permitting the attainment of a 24 hour orbit consistent with the relatively limited performance capability of the then available launch vehicles.
On-orbit spin axis attitude control and latitude/longitude station-keeping required only two small thrusters.2 The “management” of onboard spacecraft propellant was implemented by the outboard placement of the propellant tanks in Syncom’s spinning, gravity-like centripetal field. The spacecraft’s thermal environment was benign and near room temperature due to spin “toasting” in the sun’s rays. Finally, the Syncom spacecraft provided a “built-in” spin-scan for body mounted sun, and later earth, sensors with no moving parts.
This all-spinning spacecraft design does, however, incorporate three significant performance limitations. The design’s cylindrical solar panel results in a factor of 1/π (3.14) or 31.8% geometric illumination efficiency (with respect to 100% for a flat, sun oriented solar panel). The incorporation of high gain, earth-oriented “pencil” beam antennas necessary for optimum communication performance (and small, inexpensive ground terminals) requires the electrical or mechanical de-spinning of directional, earth oriented communication antennas. Finally, the dynamic spin stability of this all spinning configuration requires the spacecraft to be “disk” shaped (with respect to mass properties) limiting the length of the overall configuration. Over the long history of Hughes’ spin-stabilized spacecraft, most of these performance limitations were largely mitigated by further spacecraft design initiatives as well as through the increase in satellite accommodation due to the ever-growing launcher rocket size and performance.
Syncom, as well as the Syncom “clone”, Early Bird (the first geosynchronous, commercial communication satellite), and Intelsat II, a larger version of Syncom/Early Bird produced by Hughes for Comsat and Intelsat respectively, incorporated a linear slotted array transmit/receive antenna mounted on and collinear with the spacecraft’s spin axis. This antenna array produced a toroidal or “pancake” antenna beam, symmetrical about Syncom’s spin axis. This antenna pattern resulted in a “squinting” or “focusing” of the spacecraft’s transmitted radiated power directed normal to the spacecraft’s spin axis (as well as that small fraction of the beam – about 5% – covering the earth) to be augmented by a factor of four (or 6 dB). This toroidal antenna beam, however, resulted in roughly 95% of Syncom’s precious transmitted power escaping, radiated uselessly into space. This modest antenna focusing (or gain) combined with a 2 watt traveling wave tube amplifier (TWTA) permitted the relay of one television channel employing the very large (85 foot dish antenna) and very sensitive (cryogenically cooled receiver) government receiving ground terminal located at Point Mugu, CA.
Clearly, the improvement in communication performance available through focusing the downlink transmitted power or “Effective Isotropic Radiated Power” (EIRP) via an earth coverage (17.4 degree diameter conical) beam offered a tremendous communications performance enhancement (about a factor of 20) permitting networks to be implemented using much smaller, cheaper and more practical fixed antenna ground receiving terminals. An additional factor of 50 or so could be achieved through the use of even smaller earth-oriented beams covering limited geographical areas (i.e. single countries).
The initial step toward substantially higher gain, earth oriented antenna beams was implemented via NASA’s Advanced Technology Satellite (ATS) program which was awarded to Hughes in 1962. The first spacecraft in this series (ATS I), launched in late 1966 incorporated an electronically despun earth-coverage antenna beam. A mechanically despun earth coverage antenna, (coupled to the spinning transmitters via an RF rotary joint) was demonstrated on ATS III, launched in November 1967, providing an important despun antenna design demonstration which supported a long, evolving line of future Hughes commercial and government communication satellites.
In the mid 1960’s Hughes recognized, and subsequently developed, the market for national communication satellite systems. The first sale was to Telesat of Canada, an entity created by the Canadian Parliament to establish and operate a national communication satellite (Anik) network. In support of this new marketplace, Hughes designed the HS 333 spacecraft series. The steadily improving performance of the Delta launch vehicle permitted an HS 333 spacecraft mass of 650 pounds, a solar panel generating about 300 watts of electrical power and twelve 6 watt TWTA transmitters (Vs. Syncom’s 80 pounds, 28 watts and single 2 watt TWTA).
The major improvement in transmit performance was, however, the 333’s antenna design which incorporated a mechanically despun, five foot diameter, “offset fed” antenna reflector whose beam was shaped and significantly narrowed (with respect to an earth coverage beam) to cover only the Canadian customer’s specified geographical area. In the case of Anik, this antenna beam narrowing provided an antenna gain performance increase of about a factor of 50 with respect to an earth coverage antenna beam or a factor of 1,000 over Syncom’s toroidal antenna pattern. This narrow beam, high antenna gain was also facilitated by the use of the, 4 GHz, C-Band downlink radio frequency (RF) with respect to Syncom’s 1.8 GHz, L-Band downlink frequency (the antenna diameter for a given, fixed beam width being inversely proportional to its operating frequency). The increased number of simultaneous transmitters and their increased power (12 six watt transmitters Vs. Syncom’s single two watt TWTA) provided another factor of 36 for an overall transmit EIRP performance improvement over Syncom and Early Bird of about a factor of 36,000! Anik’s 12 six watt TWTAs implemented up to 7,000 telephone circuits or 12 simultaneous color TV channels. The Anik ground receiving terminals incorporated fixed (~10 foot diameter) parabolic antennas and receivers operating at ambient temperature, a practical and much more modest receiving terminal performance requirement than for any previous satellite communication network. Subsequent HS-333 systems were produced for Indonesia and the Western Union Corporation bringing the total sales of this very innovative and successful design to eight spacecraft.
The Disk and the Pencil
Classical mechanics demonstrates that a spinning rigid body is stable spinning about either its maximum (i.e. a flattened disk) or minimum (i.e. a rod or pencil) axis of inertia. As a practical matter, there is no such thing as a perfectly rigid body, certainly not a spacecraft3 containing liquid fuel and other flexible components. In the presence of these non-rigid spacecraft elements, an all-spun configuration is stable only when spinning about its axis of maximum inertia.
However, if a portion of the spinning configuration is despun, a simple design measure to overcome the constraint of spinning only about the spacecraft’s axis of maximum inertia is available. By incorporating a passive, mechanical damper on the configuration’s despun element tuned to the spacecraft’s “nutation”4 frequency it is practical to spin-stabilize a spacecraft about its spinning axis of minimum inertia.
Tony Iorillo, a young Hughes engineer, had this insight in the mid 1960’s and demonstrated its validity both analytically and experimentally over the course of a few months. This innovative technique for the stabilization of a spinning spacecraft was dubbed “Gyrostat” stabilization. The first flight demonstration of Gyrostat stabilization came with the launch of the Hughes/Air Force TACSAT I on February 9, 1969. The TACSAT configuration’s despun “platform” incorporated its communication antennas as well as the entire suite of communications electronics. The freedom to lengthen TACSAT’s cylindrical solar array resulted in a prime power of one kilowatt, at the time, the most powerful spacecraft ever launched.
The impact of Tony’s insight on the Hughes’ family of spin stabilized spacecraft can hardly be overstated. The ability to lengthen the spinning solar panel largely satisfied the near term pressure for significantly increased spacecraft prime power. Additionally, the despun location of the entire communication “payload” permitted direct connection of the communications electronics to earth-oriented antennas enabling lower RF losses and relieving the communication frequency plan constraints with respect to a single RF rotary joint. This design break-through extended the life of Hughes’ spin-stabilized spacecraft product line into the early 21st century and enabled future contracts for upwards of 100 spacecraft valued well in excess of $5 B!
The Space Shuttle and “Slicing the Bologna”
A key element of NASA’s vision for the Space Transportation System (STS or the Space Shuttle) was that the Shuttle would supplant the government’s family of expendable launchers (Delta, Atlas, Titan, etc.), and become America’s exclusive system for access to space. NASA’s case for the Shuttle projected a dramatic step increase in launcher cost-effectiveness through the recovery and reuse of most of this manned launcher system’s components combined with tens of launches per year. The STS Program was approved by Congress and development was initiated in 1972.
To attract spacecraft customers to come on board, in the mid 1970’s NASA promulgated a pricing formula for the trip to low earth orbit (LEO). This formula was based on the spacecraft customer’s utilization of the Shuttles 60 foot long (by 15 foot diameter) payload bay length or the fraction of the available Shuttle, payload mass capability, about 55,000 pounds whichever was the greater. (Some wag likened this pricing formula to “Slicing a 15’ X 60’ Chub of Bologna” and the characterization stuck.) Based on NASA’s total price for a Shuttle launch, adjusted by their pricing formula, projected launch costs for a typical mid-70’s Hughes spacecraft was a small fraction of that charged for an expendable launch vehicle.
Following this realization, Hughes’ reaction was almost immediate. Harold Rosen’s initial concept for a Shuttle optimized spacecraft was a “wide-body” spin- stabilized spacecraft occupying the full Shuttle payload bay diameter, approximately ¼ of the payload bay length and consuming about 30% of its payload mass capacity. This new spacecraft design, dubbed “Syncom IV”, incorporated the integral propulsion necessary to transfer the spacecraft from LEO to geosynchronous orbit. In addition to a large, solid-propellant perigee kick motor (PKM), a new, high performance bi-propellant propulsion system was incorporated to augment orbit injection, for attitude control and for orbital station-keeping. Syncom IV was secured to the Shuttle’s payload bay using a “cradle” adapter encircling the lower half of the spacecraft’s cylindrical drum. Deployment in orbit was implemented through a “Frisbee” ejection, clearing the payload bay with a small residual separation velocity and slow spacecraft spin.
Hughes conducted a vigorous marketing campaign to demonstrate to potential Syncom IV customers the unmatched cost-effectiveness of the available communication capabilities that could be delivered on orbit at a bargain basement price. One of Hughes’ potential Syncom IV customers was Satellite Business Systems (SBS – a joint venture of IBM, Aetna and Comsat). During a marketing visit in 1977, Harold Rosen was briefing the Syncom IV design to SBS when their CEO interrupted to express great reluctance to enter into Syncom IV procurement as the Shuttle had not yet flown and expressed his concern that it would, perhaps, never fly! In that case, what would SBS do with an expensive satellite that could not be launched?
Prompted by SBS’s (and other customers’) skepticism of the Shuttle program’s integrity and schedule, within a few weeks Dr. Rosen configured a new spacecraft design which was compatible with launch on either the shuttle or the Delta, expendable launcher. This dual launch design was originally only intended to provide a backup capability in the absence of Shuttle launch availability. However, this new spacecraft design became the HS 376/393/Intelsat product line which comprised about 80 spacecraft (many of which were launched on the Shuttle) and carried the HSCC’s family of spin stabilized spacecraft into the early 21st century.
The Shuttle finally became operational in mid-1981 and the wide-body Syncom IV design became basis for the five spacecraft Leasat series for the US Navy. A few additional wide body STS launched spacecraft were also purchased by the U.S. Government in support of classified missions. (These classified versions of Syncom IV adopted a new, high capacity, weight efficient nickel-hydrogen battery design.)
Hughes was at least two years ahead of their competitors in recognizing the outstanding competitive opportunity offered by NASA’s STS launch pricing policy.
These Hughes’ STS compatible spacecraft designs, incorporating spin-stabilized integral propulsion for transfer to geosynchronous orbit, implemented a near perfect match to NASA’s Shuttle payload capabilities and pricing formula. During the mid 1980’s, life for Hughes in the Comsat marketplace was very good indeed!
“The jig’s up!”
This major marketplace advantage was ended abruptly in January of 1986 by the catastrophic Shuttle Challenger explosion shortly after its lift-off from Cape Canaveral. Within months of the loss of Challenger and her crew5, the government suspended STS launches of unmanned payloads. Continuing this Shuttle launch service was judged to be no longer acceptable due to the additional Shuttle launches required and the added risk to the Shuttle and her crews. Hughes, in a single stroke, lost the large pricing and capacious Shuttle payload bay advantage that had provided a significant marketplace edge for their spin stabilized family of spacecraft. With NASA’s decision to suspend STS launches of unmanned spacecraft, S&CG’s management realized that “The jig’s up!” and Hughes would need to initiate a body stabilized spacecraft design to remain competitive utilizing the available, and much more expensive, family of expendable launch vehicles. After many years of external and internal pronouncements that the era of spin-stabilized ComSats was finished due to the greater power efficiency offered by flat, sun-oriented solar array panels and a somewhat more compact, weight-efficient body stabilized configuration, Hughes concluded that the time had come to incorporate a body stabilized design into their Comsat portfolio.6
In 1986, Based on Hughes’ persuasive business case, their new owner, General Motors, endorsed a substantial investment of Hughes’ internal funds to develop a new, state-of-the-art, body stabilized Comsat design aimed at “leap-frogging” their competition by incorporating the very latest, proven, high performance space technology. This new spacecraft design was designated the Hughes Satellite 601 (HS-601). With eventual sales exceeding 90 units, the HS-601 Comsat design (followed by the late 90’s body stabilized HS-702 upgrade) continued Hughes’ position as the world’s leading supplier of communication satellites well into the 21st century.
The 601 design incorporates an internal high-speed momentum wheel gimbaled about two axes to control/stabilize spacecraft attitude. Attitude sensing is via earth and sun sensors augmented by precision inertial gyroscopes. The buildup in the small angular momentum (spin axis) errors due to external disturbances (primarily solar pressure imbalance) is cancelled through the use of magnetic torque rods, which conserves spacecraft propellant. The design provides the integral propulsion necessary to inject the spacecraft into geosynchronous orbit from either a low earth circular orbit (through the incorporation of a spin stabilized, solid propellant “perigee kick motor”) or from a highly elliptical geosynchronous transfer orbit utilizing a 100 pound liquid-propellant thruster. This high performance bi-propellant system, first implemented on Syncom IV, provides orbit injection augmentation as well as on-orbit latitude station-keeping. Later versions of the 601 configuration (601-HP) incorporate Xenon ion propulsion for on-orbit latitude station-keeping saving a substantial mass of liquid propellant. Prime spacecraft power is implemented using dual 3 or 4 section, deployable, sun oriented solar panels initially populated with silicon solar cells. Using advanced gallium arsenide (GaAs) solar cells, this design’s maximum prime power generation capability is approximately 10 KW. (This large prime power availability enables the economic transmission of colored television directly to individual homes utilizing fixed, ~1 meter diameter receiving antennas.) An advanced, highly efficient Nickel-Hydrogen (NiH) battery, proven on previous government programs, provides power during on-orbit earth eclipses. The heat generated by internal electronics is conducted via heat pipes to thermal radiating mirrors located on the north and south faces of the cubic 601 spacecraft.
This new spacecraft represented a major design departure from the previous Hughes, space proven spinning spacecraft “bus” technology. Everyone knew that Hughes had their reputation and a great deal of their customers’ confidence riding on demonstrating the efficacy of this new 601 spacecraft configuration.
Following the HS-601 design/development program and Hughes’ vigorous marketing campaigns in 1987/1988, the first 601 contract was awarded for two HS-601 Comsats by AUSSAT Pty Inc. in July of 1988.7 In the same month Hughes was awarded the US Navy contract for their UHF Follow-On program. This Navy Comsat was also a 601 design and was for the development, manufacture and launch of the first spacecraft with options for 9 additional units over a total period of about ten years. Both the AUSSAT and UHF Follow-On fixed-price contracts incorporated major financial penalties8 for any spacecraft failures and/or performance shortfalls. Both also called for the spacecraft to be delivered on-orbit which made Hughes responsible for the procurement of the launch vehicles and launch services.
Contracting for on-orbit Comsat delivery was not new for Hughes. However, the very first HS-601 launch (scheduled for April of 1992) was from Xichang, China aboard a new Chinese launch vehicle design, the Long March 2E rocket, lifting off from a recently constructed launch pad complex and was contracted with a newly minted Chinese commercial launch supplier (The China Great Wall Industries Corporation). So, with an unproven spacecraft, an unproven launcher, a new launch facility, and a novice commercial launch supplier, Hughes faced several additional and risky unknowns! Moreover, prior to the first HS-601 launch, Hughes had entered into fixed-price contracts for 24 additional HS-601 spacecraft. The risks and stakes with respect to this first spacecraft launch were probably as high as the Hughes’ space enterprise had ever taken on!
The first HS-601 launch (Australia’s Optus IB) was scheduled for April, 1992 from the Xichang launch complex. The Long March 2E rocket ignited on schedule but then, automatically shut down due to the failure of one of the Long March’s four “strap-on” booster assist rockets to ignite. After the identification and correction of this ignition fault, the Optus IB second launch attempt occurred on August 14, 1992. This time the Long March rocket performed flawlessly separating the spacecraft in low earth orbit. Following the firing of the spacecraft’s solid perigee kick motor and several burns of the liquid, 100 pound thruster, the spacecraft was placed into geosynchronous orbit. Subsequent deployments of Optus IB’s’ solar panels and communication antenna reflectors were executed without incident. Hughes’ initial HS-601 mission was, to everyone’s gratification and great relief, an unqualified success! It would be followed by more than ninety additional 601 flights before the product line was gradually supplanted by the upgraded HS-702 series introduced in the late 1990’s.
In response to the quest for even greater spacecraft power and performance the HS-702 body stabilized spacecraft incorporated prime power capability in excess of 16 KW through increasing the maximum number of deployable solar panels from four to six. Larger Xenon ion thrusters capable of augmenting liquid propulsion orbit insertion and for longitude on-orbit station-keeping were incorporated to reduce launcher costs and/or to convert the saved weight of the reduced liquid propellant load into additional communication payload capability. The 702 control system design is a “zero momentum” configuration incorporating multiple reaction wheels for increased attitude control flexibility. A series of “Geo-Mobile” (GEM) Comsats which featured multiple (~200) rapidly switchable spot beams controlled by an advanced on-board digital signal processor (DSP) and interconnecting small mobile or fixed ground terminals was added to the 702 inventory in the early 2000’s. (Six of these GEM Comsats are in operation for HSCC’s Thuraya and Direct TV customers.) The Boeing 7029 spacecraft design is currently in production for multiple commercial and government customers.
Four Decades of Innovation and Excellence
Hughes became the pioneer of geosynchronous satellite communications with the 1963 launch of Syncom. Hughes also became the world’s leading producer of communication satellites during the last four decades of the 20th century. Of the communication satellites launched during this period, 165 were Hughes’ products, more than all the other suppliers combined. At the end of the 20th century upwards of 200 active communication satellites occupied the geosynchronous orbit belt. Geosynchronous communication satellites now implement global and regional communication networks which enable the transmission of television, data, messaging and telephony on a scale and at a modest cost nearly unimaginable in the early 1960’s. Hughes’ communication satellite innovations have spawned a multi-billion dollar space industry which has truly changed the world.
1) The Hughes’ space enterprise was initiated as the “Space Division” of the Hughes Aircraft Company’s Aerospace Group in 1961. In 1970 Hughes’ space activities were consolidated in the “Space and Communications Group” (S&CG) under the leadership of Albert D. (Bud) Wheelon. In 1992, following the Hughes re-organization under their new CEO, Mike Armstrong, it was re-named the “Hughes Space and Communication Company” (HSCC). This history refers to the space component of the Hughes enterprise as simply “Hughes”.
2) This simple spacecraft thruster control system was patented by Don Williams and Hughes. The famous “Williams’’ Patent”, in effect, denied Hughes’ growing competition the use of spin stabilization in their spacecraft designs without the risk of legal sanctions. TRW’s Intelsat III and DSCS designs (as well as several others) did, in fact, infringe on the Williams patent and were the subject of protracted legal actions culminating in penalties of $114 M awarded to Hughes in 1994.
3) The first successful U.S. satellite, Explorer, launched in 1958 and designed to be spun about its minimum inertia axis, clearly showed that a non-ridged spacecraft was indeed unstable spinning about its minimum inertia “pencil” axis. Within a few hours, Explorer’s initial spin axis diverged to transfer its entire spin angular momentum about its axis of maximum inertia which was perpendicular to its “pencil” axis.
4) When disturbed (firing of a thruster, etc.), a spinning body’s axis of spin cones about its undisturbed position (its angular momentum vector). This coning motion is termed “nutation”. The frequency (number of rotations per second) of this coning motion is determined by the spacecraft’s inertial properties. (the ratio of maximum to minimum inertia axes)
5) The Challenger disaster was also a very personal tragedy for the Hughes family. One of the ill-fated Challenger crew members was Greg Jarvis a seasoned, a well-liked, respected and talented Hughes engineer.
6) Their existing contracts plus a few follow-on contracts for the Hughes Gyrostat configuration maintained this, now declining, spin stabilized product line through launches extending into the early 21st century.
7) AUSSAT (an Australian government entity) transferred responsibility for their satellite communications programs to Optus, a newly established, private Australian company, in late 1991.
8) The Optus IB contract called for Hughes to replace any failed spacecraft at no cost to their customer. The UHF Follow-On contract incorporated orbital incentives requiring Hughes to repay 50% of the contracts’ basic and/or option prices (declining to zero over the spacecraft’s 10-year contractual orbital lifetime) for failure of any spacecraft to satisfy on-orbit performance requirements.
9) Following the acquisition of the Hughes’ space business by the Boeing Company in 2000, the Hughes’ spacecraft product line nomenclature (HS-XXX) was revised to Boeing XXX.
1) Boeing Satellite Development Center Web Site
2) “The SYNCOM Story” – Harold Rosen | <urn:uuid:abd64c7b-2792-4122-acd6-1ab1cf24c7eb> | CC-MAIN-2019-47 | http://www.hughesscgheritage.com/slicing-the-bologna-and-other-comsat-design-initiatives-dick-johnson/ | s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496670389.25/warc/CC-MAIN-20191120010059-20191120034059-00017.warc.gz | en | 0.932132 | 5,579 | 2.78125 | 3 |
The Dirt on E-Waste
isn't measured only
by green purchasing.
A healthy, green
disposal method is
the back end of a
POP QUIZ! What happens to your computer
equipment when you've declared it surplus? Does it
get shuffled into a warehouse, awaiting attention at some
unspecified later date? Do you stick it on a pallet and have
it hauled away by a recycler? Do you sell it, refurbish it,
ship it back to a vendor, or drive it to the dump?
Don't know? You're not alone. Most smart technology
leaders can name multiple efforts they've already taken or
expect to pursue in their schools to "green up" IT operations,
such as powering off idle computers and virtualizing the data
center. But one area that many of them may not be so savvy
about is hardware disposal: What to do with the old stuff? After all, it's not something from which they can garner easy
or obvious savings. But, as some districts have figured out,
the disposal end of technology acquisition is as vital a part
of purchasing decisions as choosing energy-efficient devices.
Nobody knows precisely how much e-waste is generated
by schools nationwide. According to the Natural Resources
Defense Council, Americans on the whole throw out about
130,000 computers a day. That tallies up to 47.5 million a
year. And the numbers can only grow. Technology market
researcher Gartner estimates that 15.6 million new PCs were
shipped in the US during just the fourth quarter of 2008--
and that was during an economic slowdown. It's safe to
assume that the work of schools to refresh their technology
contributes a fair share to that count.
So what should you do when you don't want your old
machines anymore? It isn't sufficient to simply say, Recycle!
Those good intentions often come to bad ends. According
to a study by the Silicon Valley Toxics Coalition, which
advocates for a clean and safe high-tech industry, up to 80
percent of e-waste taken to recycling centers in this country
ends up being exported to towns in developing countries for
scrap recovery. There, according to a CBS 60 Minutes report
last November titled "The Electronic Wasteland,"
residents, including children, use crude and toxic means to
dismantle computers, monitors, and other electronics in an
effort to remove precious metals, such as gold.
That's antithetical to what US educators want, explains
Sarah O'Brien, outreach director of the Green Electronics
Council, a Portland, OR-based organization that works for
the environmentally safe use and reuse of electronic products.
O'Brien educates purchasers and the public about
the GEC's EPEAT (Electronic Product Environmental
Assessment Tool), a system that helps green-minded buyers
by establishing criteria that identify just how green a computing
device is. "A lot of the criteria that have to do with
toxics have a direct impact on kids," she says. "Not [just]
the kids in the district-- children across the world."
But districts that approach the disposal of their old,
unwanted computer equipment with the proper diligence are
finding that they have several options, all of which illustrate
why unloading e-waste doesn't have to be dirty work.
RECYCLING AND ASSET RECOVERY are two concepts that people
in the waste disposal industry bandy about, but the two terms actually
have distinct meanings, which you'll want to know when you discuss
the disposal of surplus technology. The word recycling in the
general population means to put something back into play, but has
a different context in the e-waste business.
"We view it as breaking something down to the component
level," says Craig Johoske, director of asset recovery services at Epic
Systems, which calls itself an asset recovery firm.
In that sense, recycling means breaking apart a piece of equipment
to recover its plastic, glass, metals, and other elements. " Asset
recovery [means] selling equipment on behalf of the client and then
splitting the proceeds with them," Johoske explains.
Another School's Treasure
Before the concept of e-waste recycling was better understood,
Union School District in San Jose, CA, would rent giant waste
containers at great expense. The bins would be labeled "recyclable
materials," recalls Mary Allen, supervisor of maintenance
and operations. "But back then nobody paid attention. All we
were told was, 'You can't put concrete or dirt in there.' We
dumped everything. When I first started with the district, we had
piles and piles of this stuff, because nobody knew what to do
with it." Once the district learned that monitors and TVs were
hazardous waste, says Allen, it held on to them.
The 4,000-student district picked up the disposal costs--
about $1,000 dollars a year-- until a company came along that
offered to haul away the whole lot of electronics for free,
including monitors, computers, copy machines, and printers.
"We knew that they broke down every unit and disposed of
them separately," she says. "The glass went one place. They
were actually recycling the units." Now the district could
divert that disposal expense to other purposes.
Then, in 2005, Allen learned about InterSchola, a company
that inventories a district's surplus hardware and handles selling
it on eBay and other auction sites. Suddenly, the job of getting
rid of old equipment could be a money maker.
"It was a great experience," says Allen. "I dealt with one
person in particular. He came out and did a field visit. He took
pictures and kept me posted via e-mail: 'Okay, we're going to
post these on eBay as of this date. This is the starting price
we're going to ask.'" Allen could monitor the auctions to see
how well the bidding was going. On that first sale, she says,
"we made a good $5,000 to $6,000." A recent sale netted
between $3,000 and $4,000.
InterSchola, launched in 2004, has worked with about 250
districts in California and New York, selling not only old electronic
components but also school buses, maintenance carts,
and furniture. For some goods, state law may require that a
school district's board must declare a piece of equipment as
surplus before it can be disposed of at public auction. It's that
process-- from development of the list of items that goes to the
local board for approval on through to the shipping of the equipment
to the final buyer-- that is handled by InterSchola.
Breaking Down an E-Cycler
AT REDEMTECH, a Columbus, OH-based IT asset
recovery provider certified by the Basel Action Network as
an "e-steward," great care is taken, says Jim Mejia, Redemtech's vice president
of environmental affairs, with either route an electronic product can
go once it has been turned over to the company: refurbishing and
reselling, or conversion into its base components.
The process for preparing, for example, a school laptop for resale at
Redemtech goes like this: The company picks up the laptop at the
school, where the machine is labeled, scanned, packaged, then loaded
onto a truck and driven away. It's rescanned upon arrival at the e-waste
facility to ensure that no sensitive data was lost in transit. At that point,
the unit is registered into a database.
Next, the machine is put through an assembly line. A worker typically
does a hard-drive erasure to Department of Defense standards, making
all data on the drive virtually unrecoverable. Redemtech is a Microsoft authorized refurbisher. Therefore, if the computer
has value, it's cleaned up and reloaded with Microsoft Windows XP or
Vista, allowing it to be reused. From that point, it's shipped to one of
the resale channels used by the company, including 21 Micro Center
stores, run by Redemtech's parent company, Micro Electronics.
It's also possible that the unit will be dismantled by hand and resalable
components shipped to a secondary market or overseas for additional
processing and resale. "That's where my responsibility starts," Mejia says.
Each type of e-waste is sent to an appropriate conversion partner. It's
Mejia's job to ensure that the facilities his company works with are "clean
and have good pollution control technology that not only protects the
community but also ensures their employees aren't exposed to toxins."
A "converter," as Mejia calls the partnering company, will take the unit
and convert it into its components. For example, a 67-pound, 19-inch
Sony monitor can be disassembled into the following: 5 pounds of steel,
3 pounds of aluminum, 1 pound of copper, a fraction of an ounce of
brass, 5 pounds of electronic board, 13 pounds of plastic, and 40 pounds
of cathode ray tube. Each part can be sold to a foundry or processor.
Mejia says that essentially the entire unit can be recycled. That includes
the CRT, which has its own composition of elements, including lead and
glass; the circuit card, which contains copper, lead, and chromium; and
the plastic, which is an oil-based derivative.
By-products-- those components of the unit that have no value to
anybody after the e-waste treatment-- are considered hazardous waste.
Those elements end up in large, sealed containers and buried in carefully
regulated hazardous waste landfills with groundwater protection
and other controls.
"A lot of people are ingrained in thinking that once they're
done with something, it must be at the end of its lifecycle," says
Melissa Rich, InterSchola's president and founder. "While it
may not make financial sense for your district to repair those
items, there may be districts for which purchasing your old stuff
is exactly what they need. And it really does extend the life of
technology. That's what we all want to do."
About 80 to 85 percent of the equipment accepted and listed
by InterSchola ends up selling through eBay or another marketplace.
What doesn't find a buyer is released back to the district.
The company recommends recyclers that will, for a fee, remove
equipment from a district.
Often, outdated electronics equipment that has been deemed
as "surplus" by the district's board doesn't have much financial
value. In that case, InterSchola won't attempt to sell it.
"They're very honest," says Allen. "We've had some big TVs.
They've told us, 'There's no market for that.' Those end up
going to the recycling company."
Chasing the Chain
That's the part that worries Rich Kaestner. Since the launch
of the Consortium for School Networking's (CoSN) Green Computing
Leadership Initiative last year, Kaestner, the initiative's
project director, has been waist-deep in e-waste. He has grown
skeptical of the motives of some disposal providers: "How much
of it is greenwash," he asks, referring to business efforts that are
packaged as environmentally motivated but in fact have other
designs, "and how much of it is really doing the right thing?"
Kaestner praises asset recovery efforts, but wonders about the
fate met by the items deemed unfit for recovery and resale and
handed off to a recycler. It's the toxic outcome of a lack of
attention to that end of the process that 60 Minutes exposed. He
says the problem is a lack of transparency at every point on the
trail: What we see is the removal, not the disposal, so we can't
know with certainty whether the leftovers are truly dispensed
of in a safe manner. "I'm not sure if you follow the chain that
everything gets to where it needs to be," he says.
A case in point of taking the good with, potentially, the bad is
the take-back option some vendors, like Apple, HP, and Lenovo,
provide to districts that buy their hardware products; the maker
offers to "take back" a district's old systems. Arlington Public
Schools in Virginia has such an arrangement with Dell.
In the last five years, Arlington has virtualized its network
office, modified technology purchase orders to mandate compliance
with the federal Energy Star program, and taught its
users to shut off workstations at the end of the day to reduce
energy usage. Though the motives were mainly financial, the
outcomes have been greatly environmental.
A recently board-approved initiative will allow the district to
refresh its computers every three years. That means in another
two years all the machines in Arlington will have been replaced.
"We'll be using machines that are more energy-efficient, and
that will allow us to keep up with energy standards in the
industry," Assistant Superintendent Walter McKenzie says.
That in turn means a surge of old machines being put out to
pasture. Fortunately, the district already has a destination in
mind for them, one journeyed by, according to McKenzie, the
9,750 computers, 38 LCD projectors, and two interactive whiteboards
the district has disposed of during the past five years. The
computers will follow one of two routes away from the schools
where they have been in use: They may go through an auction
process or be taken back by Dell, the supplier of the new PCs.
Dell's Asset Recovery and Recycling Services site describes
a multi-part process to customers. The company will pick up the
old equipment, ship it to its facilities, wash it of all data, perform
an audit to determine the remaining value, then help the
district resell it to a third party. Dell can also have the hardware
donated to the National Cristina Foundation, which passes it
along to charities, schools, and public agencies for reuse.
"Everybody takes their piece of usable equipment out of it,
and that's good," Kaestner says. "That's a good start."
It's a third option listed by Dell that raises his doubts. The
company offers districts the choice of having their obsolete
goods broken down and the parts handled by "specific partners
who specialize in the disposal of each unique material." Kaestner
says the promise of an "environmentally sensitive" disposal
is one that can't be taken on faith. "Who takes the pieces out?"
he asks. "Who is concerned that mercury and cadmium and all
the rest of that nasty stuff doesn't go into the groundwater and
eventually into streams? In order to feel like we're really doing
the right job here, you have to chase the whole chain."
Kaestner says he hopes and suggests districts and vendors
follow through with whatever e-cycler receives their unusable
goods so that the items meet the healthy disposition the organizations
intend. "We don't know how much rigor they put into the
processing of stuff that cannot be recycled," he says. "It's more
of a question and not an accusation."
Dell, it should be pointed out, has a "Be a Responsible
Neighbor" provision in its recycling program that prohibits
materials that pose a threat to the environment from being
deposited in developing countries unless its own Asset Recovery
Services Council has approved of the exportation channel.
Kaestner hopes CoSN will be able to research this issue in the
future, but for now it's not a priority, mostly because the takeback
programs are so new. "By new, I mean months old," he
says. "They haven't been around a long time. One announces it,
and then all the others jump onto the bandwagon in a hurry."
"The Electronic Wasteland," 60 Minutes' story on what
happens to many discarded electronic goods, can be
Calling All E-Stewards
Chad Stevens participates in CoSN's Green
Computing Initiative. About the same time
that he moved from being a school principal
into the CTO role for Texas' Clear Creek Independent
School District, located between Houston and Galveston in
Johnson Space Center country, a new energy manager joined
the district. "We were talking about some simple ways we
could save energy without spending money," he recalls. "A
combination of two interests-- sustainability and saving
energy-- led me to volunteer, just to learn more about it." That
participation in the CoSN project, in turn, provided him with
a crash course in green initiatives.
Stevens and his IT team have begun automating the power settings
of monitors and computers, virtualizing the data center,
and piloting a possible thin-client computer transition. One issue
they face is how to maintain a strong obsolescence policy-- no
computer is older than five years-- with a student enrollment
that's growing at 1,000 kids a year. "We're refreshing our
computers, but we're running on a treadmill," Stevens says.
"How long can we sustain our investment?"
Whether the district ultimately replaces existing machines
with comparable models, albeit newer ones, or thin clients, a
lot of equipment will need to be disposed of in their wake. In
2008, the district replaced 2,500 computers between January
and April. The hardware was hauled away by Epic Systems, a
company that offers recycling of computers at the expense of
$10 to $15 apiece. But that invites the question that goes to the
heart of the issue: What does Epic do with the hardware?
Shortly after the 60 Minutes story on e-waste aired, the Basel
Action Network (BAN), an international organization that
focuses on writing policies and legislation dealing with e-waste
and that served as an adviser on the 60 Minutes piece,
announced a formal program to certify electronics recyclers and
asset managers as "e-stewards." Accreditation requires proof that
the company isn't dumping e-waste into landfills or incinerators
and isn't exporting e-waste to developing countries.
It is these standards Epic says it adheres to by virtue of hiring
out to ECS Refining, listed by Basel as an e-steward. Because
ECS Refining bears the BAN stamp of approval, Stevens can be
confident Epic plays by the rules. "I can look anybody in the eye
and say our computers aren't ending up in a landfill," he says.
According to CoSN's Kaestner, that's the way all old electronics
should be handled. "The stuff that can be recycled, if we
want to be good, green world citizens, should go through an
organization that's been approved by BAN," he says. "That's
the best we've got."
A Refreshing Solution
The current drop in commodity pricing with copper, aluminum,
and even crude oil is affecting the sustainability of the
overall value of the makeup of e-waste. "You're losing money
on the process," says Jim Mejia, vice president of environmental
affairs for Redemtech, certified by BAN as an e-steward.
"So you try to make it up on the recycle fees."
In other words, the dismantling
and conversion of
components that have no
resale value, which is a labor-intensive
process, has a price tag. When recyclers can't make
money reselling metal or glass, they will make it up elsewhere
in the supply chain by charging more to the customer needing to
dispose of those materials-- the school district, in this case.
Mejia has a suggestion for districts wanting to offset disposal
costs: Refresh your technology more often, while it still has
reuse value for someone else. "There's a refresh cycle that's
sustainable, when you're going to get the peak value for your
used equipment," he says. "Yes, you could use it longer, but in
the end, you won't recognize value." The moral: Better too
soon than too late. That way, says Mejia, "the district ends up
with a check at the end."
If you would like more information on e-waste disposal, visit
our website at www.thejournal.com. In the
Browse by Topic menu, click on recycling.
Dian Schaffhauser is a freelance writer based in Nevada City, CA.
This article originally appeared in the 03/01/2009 issue of THE Journal. | <urn:uuid:4ea70cf8-b0f1-44db-998d-ef4a031cdf6b> | CC-MAIN-2019-47 | https://thejournal.com/articles/2009/03/01/the-dirt-on-ewaste.aspx | s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496668534.60/warc/CC-MAIN-20191114182304-20191114210304-00097.warc.gz | en | 0.954283 | 4,490 | 2.9375 | 3 |
When you think about diabetes and blood glucose control, the first thing that comes to mind is probably avoiding high blood glucose levels. After all, the hallmark of diabetes is high blood glucose, or hyperglycemia. But controlling blood glucose is more than just managing the “highs”; it also involves preventing and managing the “lows,” or hypoglycemia.
Most people are aware that keeping blood glucose levels as close to normal as possible helps prevent damage to the blood vessels and nerves in the body. But keeping blood glucose levels near normal can carry some risks as well. People who maintain “tight” blood glucose control are more likely to experience episodes of hypoglycemia, and frequent episodes of hypoglycemia — even mild hypoglycemia and even in people who don’t keep blood glucose levels close to normal — deplete the liver of stored glucose (called glycogen), which is what the body normally draws upon to raise blood glucose levels when they are low. Once liver stores of glycogen are low, severe hypoglycemia is more likely to develop, and research shows that severe hypoglycemia can be harmful. In children, frequent severe hypoglycemia can lead to impairment of intellectual function. In children and adults, severe hypoglycemia can lead to accidents. And in adults with cardiovascular disease, it can lead to strokes and heart attacks.
To keep yourself as healthy as possible, you need to learn how to balance food intake, physical activity, and any diabetes medicines or insulin you use to keep your blood glucose as close to normal as is safe for you without going too low. This article explains how hypoglycemia develops and how to treat and prevent it.
Blood glucose levels vary throughout the day depending on what you eat, how active you are, and any diabetes medicines or insulin you take. Other things, such as hormone fluctuations, can affect blood glucose levels as well. In people who don’t have diabetes, blood glucose levels generally range from 65 mg/dl to 140 mg/dl, but in diabetes, the body’s natural control is disrupted, and blood glucose levels can go too high or too low. For people with diabetes, a blood glucose level of 70 mg/dl or less is considered low, and treatment is recommended to prevent it from dropping even lower.
Under normal circumstances, glucose is the brain’s sole energy source, making it particularly sensitive to any decrease in blood glucose level. When blood glucose levels drop too low, the body tries to increase the amount of glucose available in the bloodstream by releasing hormones such as glucagon and epinephrine (also called adrenaline) that stimulate the release of glycogen from the liver.
Some of the symptoms of hypoglycemia are caused by the brain’s lack of glucose; other symptoms are caused by the hormones, primarily epinephrine, released to help increase blood glucose levels. Epinephrine can cause feelings of weakness, shakiness, clamminess, and hunger and an increased heart rate. These are often called the “warning signs” of hypoglycemia. Lack of glucose to the brain can cause trouble concentrating, changes in vision, slurred speech, lack of coordination, headaches, dizziness, and drowsiness. Hypoglycemia can also cause changes in emotions and mood. Feelings of nervousness and irritability, becoming argumentative, showing aggression, and crying are common, although some people experience euphoria and giddiness. Recognizing emotional changes that may signal hypoglycemia is especially important in young children, who may not be able to understand or communicate other symptoms of hypoglycemia to adults. If hypoglycemia is not promptly treated with a form of sugar or glucose to bring blood glucose level up, the brain can become dangerously depleted of glucose, potentially causing severe confusion, seizures, and loss of consciousness.
Some people are at higher risk of developing hypoglycemia than others. Hypoglycemia is not a concern for people who manage their diabetes with only exercise and a meal plan. People who use insulin or certain types of oral diabetes medicines have a much greater chance of developing hypoglycemia and therefore need to be more careful to avoid it. Other risk factors for hypoglycemia include the following:
- Maintaining very “tight” (near-normal) blood glucose targets.
- Decreased kidney function. The kidneys help to degrade and remove insulin from the bloodstream. When the kidneys are not functioning well, insulin action can be unpredictable, and low blood glucose levels may result.
- Alcohol use.
- Conditions such as gastropathy (slowed stomach emptying) that cause variable rates of digestion and absorption of food.
- Having autonomic neuropathy, which can decrease symptoms when blood glucose levels drop. (Autonomic neuropathy is damage to nerves that control involuntary functions.)
- Pregnancy in women with preexisting diabetes, especially during the first trimester.
Hypoglycemia is the most common side effect of insulin use and of some of the oral medicines used to treat Type 2 diabetes. How likely a drug is to cause hypoglycemia and the appropriate treatment for hypoglycemia depends on the type of drug.
Secretagogues. Oral medicines that stimulate the pancreas to release more insulin, which include sulfonylureas and the drugs nateglinide (brand name Starlix) and repaglinide (Prandin), have the potential side effect of hypoglycemia. Sulfonylureas include glimepiride (Amaryl), glipizide (Glucotrol and Glucotrol XL), and glyburide (DiaBeta, Micronase, and Glynase).
Sulfonylureas are taken once or twice a day, in the morning and the evening, and their blood-glucose-lowering effects last all day. If you miss a meal or snack, the medicine continues to work, and your blood glucose level may drop too low. So-called sulfa antibiotics (those that contain the ingredient sulfamethoxazole) can also increase the risk of hypoglycemia when taken with a sulfonylurea. Anyone who takes a sulfonylurea, therefore, should discuss this potential drug interaction with their health-care provider should antibiotic therapy be necessary.
Nateglinide and repaglinide are taken with meals and act for only a short time. The risk of hypoglycemia is lower than for sulfonylureas, but it is still possible to develop hypoglycemia if a dose of nateglinide or repaglinide is taken without food.
Insulin. All people with Type 1 diabetes and many with Type 2 use insulin for blood glucose control. Since insulin can cause hypoglycemia, it is important for those who use it to understand how it works and when its activity is greatest so they can properly balance food and activity and take precautions to avoid hypoglycemia. This is best discussed with a health-care provider who is knowledgeable about you, your lifestyle, and the particular insulin regimen you are using.
Biguanides and thiazolidinediones. The biguanides, of which metformin is the only one approved in the United States, decrease the amount of glucose manufactured by the liver. The thiazolidinediones, pioglitazone (Actos) and rosiglitazone (Avandia), help body cells become more sensitive to insulin. The risk of hypoglycemia is very low with these medicines. However, if you take metformin, pioglitazone, or rosiglitazone along with either insulin or a secretagogue, hypoglycemia is a possibility.
Alpha-glucosidase inhibitors. Drugs in this class, acarbose (Precose) and miglitol (Glyset), interfere with the digestion of carbohydrates to glucose and help to lower blood glucose levels after meals. When taken alone, these medicines do not cause hypoglycemia, but if combined with either insulin or a secretagogue, hypoglycemia is possible. Because alpha-glucosidase inhibitors interfere with the digestion of some types of carbohydrate, hypoglycemia can only be treated with pure glucose (also called dextrose or d-glucose), which is sold in tablets and tubes of gel. Other carbohydrates will not raise blood glucose levels quickly enough to treat hypoglycemia.
DPP-4 inhibitors, GLP-1 receptor agonists, and SGLT2 inhibitors. When taken alone, DPP-4 inhibitors, such as sitagliptin (Januvia), saxagliptin (Onglyza), linagliptin (Tradjenta), and alogliptin (Nesina); GLP-1 receptor agonists, such as exenatide (Byetta and Bydureon), liraglutide (Victoza), albiglutide (Tanzeum), and dulaglutide (Trulicity); and SGLT2 inhibitors, such as canagliflozin (Invokana), dapagliflozin (Farxiga), and empagliflozin (Jardiance) do not usually cause hypoglycemia. However, lows can occur when drugs in these classes are combined with a therapy that can cause hypoglycemia, such as insulin or sulfonylureas.
Although hypoglycemia is called a side effect of some of the drugs used to lower blood glucose levels, it would be more accurate to call it a potential side effect of diabetes treatment — which includes food and activity as well as drug treatment. When there is a disruption in the balance of these different components of diabetes treatment, hypoglycemia can result. The following are some examples of how that balance commonly gets disrupted:
Skipping or delaying a meal. When you take insulin or a drug that increases the amount of insulin in your system, not eating enough food at the times the insulin or drug is working can cause hypoglycemia. Learning to balance food with insulin or oral drugs is key to achieving optimal blood glucose control while avoiding hypoglycemia.
Too much diabetes medicine. If you take more than your prescribed dose of insulin or a secretagogue, there can be too much insulin circulating in your bloodstream, and hypoglycemia can occur. Changes in the timing of insulin or oral medicines can also cause hypoglycemia if your medicine and food plan are no longer properly matched.
Increase in physical activity. Physical activity and exercise lower blood glucose level by increasing insulin sensitivity. This is generally beneficial in blood glucose control, but it can increase the risk of hypoglycemia in people who use insulin or secretagogues if the exercise is very vigorous, carbohydrate intake too low, or the activity takes place at the time when the insulin or secretagogue has the greatest (peak) action. Exercise-related hypoglycemia can occur as much as 24 hours after the activity.
Increase in rate of insulin absorption. This may occur if the temperature of the skin increases due to exposure to hot water or the sun. Also, if insulin is injected into a muscle that is used in exercise soon after (such as injecting your thigh area, then jogging), the rate of absorption may increase.
Alcohol. Consuming alcohol can cause hypoglycemia in people who take insulin or a secretagogue. When the liver is metabolizing alcohol, it is less able to break down glycogen to make glucose when blood glucose levels drop. In addition to causing hypoglycemia, this can increase the severity of hypoglycemia. Alcohol can also contribute to hypoglycemia by reducing appetite and impairing thinking and judgment.
Being able to recognize hypoglycemia promptly is very important because it allows you to take steps to raise your blood glucose as quickly as possible. However, some people with diabetes don’t sense or don’t experience the early warning signs of hypoglycemia such as weakness, shakiness, clamminess, hunger, and an increase in heart rate. This is called hypoglycemia unawareness. Without these early warnings and prompt treatment, hypoglycemia can progress to confusion, which can impair your thinking and ability to treat the hypoglycemia.
If the goals you have set for your personal blood glucose control are “tight” and you are having frequent episodes of hypoglycemia, your brain may feel comfortable with these low levels and not respond with the typical warning signs. Frequent episodes of hypoglycemia can further blunt your body’s response to low blood glucose. Some drugs, such as beta-blockers (taken for high blood pressure), can also mask the symptoms of hypoglycemia.
If you have hypoglycemia frequently, you may need to raise your blood glucose targets, and you should monitor your blood glucose level more frequently and avoid alcohol. You may also need to adjust your diabetes medicines or insulin doses. Talk to your diabetes care team if you experience several episodes of hypoglycemia a week, have hypoglycemia during the night, have such low blood glucose that you require help from someone else to treat it, or find you are frequently eating snacks that you don’t want simply to avoid low blood glucose.
Anyone at risk for hypoglycemia should know how to treat it and be prepared to do so at any time. Here’s what to do: If you recognize symptoms of hypoglycemia, check your blood glucose level with your meter to make sure. While the symptoms are useful, the numbers are facts, and other situations, such as panic attacks or heart problems, can lead to similar symptoms. In some cases, people who have had chronically high blood glucose levels may experience symptoms of hypoglycemia when their blood glucose level drops to a more normal range. The usual recommendation is not to treat normal or goal-range blood glucose levels, even if symptoms are present.
Treatment is usually recommended for blood glucose levels of 70 mg/dl or less. However, this may vary among individuals. For example, blood glucose goals are lower in women with diabetes who are pregnant, so they may be advised to treat for hypoglycemia at a level below 70 mg/dl. People who have hypoglycemia unawareness, are elderly, or live alone may be advised to treat at a blood glucose level somewhat higher than 70 mg/dl. Young children are often given slightly higher targets for treating hypoglycemia for safety reasons. Work with your diabetes care team to devise a plan for treating hypoglycemia that is right for you.
To treat hypoglycemia, follow the “rule of 15”: Check your blood glucose level with your meter, treat a blood glucose level under 70 mg/dl by consuming 15 grams of carbohydrate, wait about 15 minutes, then recheck your blood glucose level with your meter. If your blood glucose is still low (below 80 mg/dl), consume another 15 grams of carbohydrate and recheck 15 minutes later. You may need a small snack if your next planned meal is more than an hour away. Since blood glucose levels may begin to drop again about 40–60 minutes after treatment, it may be a good idea to recheck your blood glucose level approximately an hour after treating a low to determine if additional carbohydrate is needed.
The following items have about 15 grams of carbohydrate:
- 3–4 glucose tablets
- 1 dose of glucose gel (in most cases, 1 small tube is 1 dose)
- 1/2 cup of orange juice or regular soda (not sugar-free)
- 1 tablespoon of honey or syrup
- 1 tablespoon of sugar or 5 small sugar cubes
- 6–8 LifeSavers
- 8 ounces of skim (nonfat) milk
If these choices are not available, use any carbohydrate that is — for example, bread, crackers, grapes, etc. The form of carbohydrate is not important; treating the low blood glucose is. (However, many people find they are less likely to overtreat low blood glucose if they consistently treat lows with a more “medicinal” form of carbohydrate such as glucose tablets or gel.)
If you take insulin or a secretagogue and are also taking an alpha-glucosidase inhibitor (acarbose or miglitol), carbohydrate digestion and absorption is decreased, and the recommended treatment is glucose tablets or glucose gel.
Other nutrients in food such as fat or resistant starch (which is present in some diabetes snack bars) can delay glucose digestion and absorption, so foods containing these ingredients are not good choices for treating hypoglycemia.
If hypoglycemia becomes severe and a person is confused, convulsing, or unconscious, treatment options include intravenous glucose administered by medical personnel or glucagon by injection given by someone trained in its use and familiar with the recipient’s diabetes history. Glucagon is a hormone that is normally produced by the pancreas and that causes the liver to release glucose into the bloodstream, raising the blood glucose level. It comes in a kit that can be used in an emergency situation (such as when a person is unable to swallow a source of glucose by mouth). The hormone is injected much like an insulin injection, usually in an area of fatty tissue, such as the stomach or back of the arms. Special precautions are necessary to ensure that the injection is given correctly and that the person receiving the injection is positioned properly prior to receiving the drug. People at higher risk of developing hypoglycemia should discuss the use of glucagon with their diabetes educator, doctor, or pharmacist.
Avoiding all episodes of hypoglycemia may be impossible for many people, especially since maintaining tight blood glucose control brings with it a higher risk of hypoglycemia. However, the following tips may help to prevent excessive lows:
- Know how your medicines work and when they have their strongest action.
- Work with your diabetes care team to coordinate your medicines or insulin with your eating plan. Meals and snacks should be timed to coordinate with the activity of your medicine or insulin.
- Learn how to count carbohydrates so you can keep your carbohydrate intake consistent at meals and snacks from day to day. Variations in carbohydrate intake can lead to hypoglycemia.
- Have carbohydrate-containing foods available in the places you frequent, such as in your car or at the office, to avoid delays in treatment of hypoglycemia.
- Develop a plan with your diabetes care team to adjust your food, medicine, or insulin for changes in activity or exercise.
- Discuss how to handle sick days and situations where you have trouble eating with your diabetes team.
- Always check your blood glucose level to verify any symptoms of hypoglycemia. Keep your meter with you, especially in situations where risk of hypoglycemia is increased.
- Wear a medical alert identification tag.
- Always treat blood glucose levels of 70 mg/dl or less whether or not you have symptoms.
- If you have symptoms of hypoglycemia and do not have your blood glucose meter available, treatment is recommended.
- If you have hypoglycemia unawareness, you may need to work with your diabetes care team to modify your blood glucose goals or treatment plan.
- Check your blood glucose level frequently during the day and possibly at night, especially if you have hypoglycemia unawareness, are pregnant, or have exercised vigorously within the past 24 hours.
- Check your blood glucose level before driving or operating machinery to avoid any situations that could become dangerous if hypoglycemia occurred.
- Check the expiration date on your glucagon emergency kit once a year and replace it before it expires.
- Discuss alcohol intake with your diabetes care team. You may be advised not to drink on an empty stomach and/or to increase your carbohydrate intake if alcohol is an option for you. If you drink, always check your blood glucose level before bed and eat any snacks that are scheduled in your food plan.
Although hypoglycemia can, at times, be unpleasant, don’t risk your health by allowing your blood glucose levels to run higher than recommended to avoid it. Meet with your diabetes care team to develop a plan to help you achieve the best possible blood glucose control safely and effectively. Think positive, and learn to be prepared with measures to prevent and promptly treat hypoglycemia should it occur. | <urn:uuid:eaca1de7-a685-451e-a5ab-ea7825e73fb7> | CC-MAIN-2019-47 | https://www.diabetesselfmanagement.com/managing-diabetes/blood-glucose-management/understanding-hypoglycemia/ | s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496670729.90/warc/CC-MAIN-20191121023525-20191121051525-00417.warc.gz | en | 0.919769 | 4,263 | 3.890625 | 4 |
Simultaneous searching refers to a process in which a user submits a query to numerous information resources. The resources can be heterogeneous in many aspects: they can reside in various places, offer information in various formats, draw on various technologies, hold various types of materials, and more. The user's query is broadcast to each resource, and results are returned to the user. The development of software products that offer such simultaneous searching relies on the fact that each information resource has its own search engine. The simultaneous searching product transmits the user's query to that search engine and directs it to perform the actual search. When the simultaneous searching software receives the results of the search, it displays them to the user.
Simultaneous searching is also known as integrated searching, metasearching, cross-database searching, parallel searching, broadcast searching, and federated searching. MetaLib, the library portal from Ex Libris, provides such simultaneous searching with its Universal Gateway component. In this paper, we shall refer to these systems as metasearch systems.
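To illustrate the broadcast principle in the simplest possible terms, the following sketch sends one query concurrently to several stand-in "search engines" and collects the results per resource. The resource names and stub functions are invented for the example and do not reflect any actual MetaLib interface.

```python
# A minimal sketch of broadcast ("meta") searching, not an actual MetaLib
# implementation: one user query is sent concurrently to several resource
# search engines, and each engine returns its own result list.
from concurrent.futures import ThreadPoolExecutor

# Stub search engines standing in for real resources (names are illustrative).
RESOURCES = {
    "National Library of Norway": lambda q: [f"NB record for '{q}'"],
    "University of Oslo catalog": lambda q: [f"UiO record for '{q}'"],
    "Television archive":         lambda q: [f"TV program about '{q}'"],
}

def broadcast_search(query):
    """Send the same query to every resource and gather results per resource."""
    with ThreadPoolExecutor() as pool:
        futures = {name: pool.submit(engine, query)
                   for name, engine in RESOURCES.items()}
        return {name: future.result() for name, future in futures.items()}

if __name__ == "__main__":
    for resource, hits in broadcast_search("author = Henrik Ibsen").items():
        print(resource, "->", hits)
```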
Let's take a look at an example of a metasearch process that a user carries out via MetaLib or a similar product.
A student is interested in the works of Henrik Ibsen. Since the student knows that Ibsen is Norwegian, she submits a search query to several Norwegian resources that she knows about, such as the catalog of the National Library of Norway, the catalog of the University of Oslo, and several archives maintained by the National Library of Norway - the television, radio, and newspaper archives. The student submits the query author = Henrik Ibsen to all these information resources. She then receives the results. If they are displayed by resource, she can easily pick out the results that seem most relevant. Let's say that one result from the television archive is a program about the play Peer Gynt, written by Ibsen. Looking at this record, the student decides that she can focus solely on the work Peer Gynt rather than all of Ibsen's works. She then uses additional functions of the system to submit a second query, title = Peer Gynt, to the same information resources. This time she receives different results, including the Peer Gynt Suite, composed by Edvard Grieg - a result from the radio archive that she did not obtain earlier. However, the Ibsen play A Doll's House, from the catalog of the University of Oslo, did not come back this time, although it was on the previous result list.
Let's take the process one step further and ask another question: How did the student know of the resources relevant to her research? Of course, she could have been knowledgeable in this area and aware of pertinent resources. If she was just starting out, however, perhaps she concentrated on the default resources that her library has set, on the basis of her group affiliation, as a component of its gateway. Alternatively, she might have requested resources relevant to her subject, a specific geographic region, a certain type of material, and so on, thus creating a personal searching scope maintained by the system and available for reuse. In MetaLib, such functionality is provided through the Information Gateway component.
Most researchers today deal with content residing in a wide range of materials. For example, our student might want to access materials such as the script of the play in book form or PDF file, literary analyses of the play, various recordings of the suite, the score of the suite, or a video or poster of a specific performance. The immediate search result is typically a bibliographic record or other form of metadata describing the actual material. From the end user's perspective, the bibliographic records serve only as a means of obtaining the material itself. Users do not want to be bothered with technical issues such as the format of the material they seek and the software that they need to access it - the library OPAC, Adobe® Acrobat® Reader®, Microsoft® Word or PowerPoint®, MP3, the MrSid viewer, or any other software that handles specific types of files.
To provide users with convenient access to materials contained in a range of resources, multiple software products need to be integrated, and they should offer a seamless interface to users. The first type of information typically presented to users as a search result is a description of the material - the metadata - such as a bibliographic record representing a video. Ideally, the user should see the material on her screen - in this case, a video - without having to concern herself about how to find the actual material and how to view it.
The link from the bibliographic record to the actual material can be direct, an explicit URL embedded in the metadata, as in the MARC 856 field of a bibliographic record in library catalogs. However, in many instances, the system must perform calculations to create the link - for example, when the bibliographic record resides in one information repository, such as an abstracting and indexing database, but the actual material resides elsewhere, such as in an e-Journal repository or the library's printed collection. The user expects to reach the actual material nevertheless. A library can make this possible by configuring a context-sensitive linking server, such as the Ex Libris SFX server (Van de Sompel & Beit-Arie, 2001), that links the user to the actual material as a part of a set of extended services and onward navigation options. Such links include the appropriate copy of an article, the holdings in the user's library OPAC or any other relevant OPAC, the institution's document delivery service, citation information, a periodical directory, Internet searches, and information about the book in Internet bookstores or content-based services such as those offered by Syndetics. The software determines the list of links on the basis of the information in the specific bibliographic record and the institution's subscriptions and policies as predefined by the librarians.
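As a rough illustration of how such a context-sensitive link might be assembled, the sketch below maps a bibliographic record onto OpenURL-style key/value pairs and appends them to a resolver address. The resolver URL and the record fields are hypothetical; an actual SFX configuration is considerably richer.

```python
# A rough sketch of building a context-sensitive (OpenURL-style) link from a
# bibliographic record. The resolver base URL and field names here are
# illustrative, not the actual SFX configuration of any institution.
from urllib.parse import urlencode

RESOLVER_BASE = "https://resolver.example.edu/sfx"  # hypothetical link server

def build_openurl(record):
    """Map a metadata record onto OpenURL-like key/value pairs."""
    params = {
        "genre":  record.get("genre", "article"),
        "atitle": record.get("title", ""),
        "aulast": record.get("author_last", ""),
        "issn":   record.get("issn", ""),
        "date":   record.get("year", ""),
    }
    # Drop empty values so the link stays clean.
    params = {k: v for k, v in params.items() if v}
    return f"{RESOLVER_BASE}?{urlencode(params)}"

record = {"genre": "article", "title": "Dream content analysis",
          "author_last": "Schredl", "issn": "1234-5678", "year": "2003"}
print(build_openurl(record))
```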
The process of finding relevant materials for research falls, therefore, into two stages. First is the resource discovery phase, when the user locates the resources most relevant to the specific search. Next comes the information discovery phase, when the search is executed in the various information resources and the results are retrieved. Institutions strive to provide their members - students, staff, and researchers - with high quality resources that offer information of real value. It is up to the librarians to determine what constitutes the institution's collections, both physical and virtual, and set the collections' boundaries. Every member of the institution should be able to define a personal scope that derives from the institution's scope.
Once the user sets the scope of the search and submits a query, the information discovery phase begins. The metasearch system delivers the query to the selected information resources and returns the results to the user. The process requires that the system 'understands' the expectations of the resources regarding the form of the query, on the one hand, and the nature of the results, on the other. It is up to the system to convert the unified query and adapt it to the requirements of each searched resource, deliver the query in the form appropriate to each resource, receive the results, and manipulate them so that they comply with the system's unified format.
The first question, therefore, is which resources are available and which of those are appropriate for the institution. No software can replace librarians when it comes to an understanding of the scholarly information arena; only they can select the resources that are appropriate and affordable for their institution. However, the selection of a resource is just the first step. Information about the resource - resource metadata - is necessary as well. The metasearch software needs to obtain descriptive metadata about the resource, such as its coverage and the types of materials that it offers, and make it available to end users so that they can make a knowledgeable decision about the relevance of the resource to their needs. Furthermore, the system needs technical metadata regarding its impending interaction with the resource.
Resource metadata can be made available in several ways:
- Resources can offer their metadata to any metasearch system that attempts to access them for the purpose of information retrieval.
- A central repository can offer resource metadata to any metasearch system.
- Metasearch systems can maintain their own repository of resource metadata.
The first method - that a resource describes itself when relevant - seems the best. Resources provide the most accurate information about themselves, information that other repositories need not replicate. As a matter of fact, the Z39.50 Explain function was based on this premise. The idea was that when external software needed to access an information resource, the software would extract the details of the impending interaction from the resource on the fly and use the information to formulate the exact steps of the interaction. Apparently, few vendors implemented the Z39.50 Explain function, and those who did implemented it in a variety of forms. The Semantic Web approach takes the idea one step further. With this approach, a typical metasearch process involves an interaction between agents that exchange requests and information to construct the final product, which is the information requested by the end user. This is the vision, but today's Web does not allow for such interaction between agents, and, therefore, an automated interaction between the metasearch system and a resource's own search engine cannot be achieved at the present time (Sadeh & Walker, 2003).
The second method - building and maintaining a central repository - is under discussion by the new NISO metasearch committee, MetaSearch Initiative, which was formed in early 2003. Maintaining a central repository would assure the availability of resource metadata but would pose new challenges. First, a decision would need to be made about which kinds of resources such a repository would store. Then a format for the resource metadata would need to be specified, as well as protocols dictating the manner in which resource metadata find their way to and from the repository. Finally, a decision would have to be made about who is responsible for storing information in the repository and keeping it updated - the repository, by means of harvesting programs, or the resource itself. Another undertaking similar to that of the NISO committee is the Information Environment (IE) Service Registry pilot project, driven by MIMAS, in the UK, in collaboration with UKOLN and the University of Liverpool. The purpose of the project is to provide a registry of IE collections and services and examine the feasibility of such a registry in terms of discovery, access, maintenance, sustainability, ownership, and scalability. The information science community is watching these initiatives with interest to see whether such repositories become comprehensive and robust enough to provide services as necessary.
The third method is one that various current metasearch products have already implemented. Each such product holds the metadata, both descriptive and technical, of all the resources that it can access. Products differ in the amount of descriptive metadata that they release to the end user and the way in which they display it. They also differ in the degree to which they implement the search interaction and hence vary in the amount of technical metadata that they store. The method whereby each metasearch system maintains information about the resources has many drawbacks. The most obvious one is that every vendor of a metasearch system has to configure and maintain the resource metadata. Handling such a repository requires considerable effort and therefore depends on the capabilities of the individual vendor.
MetaLib, like other products, provides a repository that includes the metadata of all the resources that it can access. However, the metadata are not maintained as part of the software but stored in the MetaLib Knowledge Base, a repository of resource data and rules. The software itself does not include any information that relies on specific resources: it extracts the information from the Knowledge Base. This information enables the user to select the resources and the MetaLib Information Gateway to perform the actual search and retrieval. If, in the future, one of the first two options regarding the origin of the resource metadata materializes, MetaLib will only need to extract the required metadata from another repository.
The MetaLib Knowledge Base is a proprietary repository provided to institutions along with the MetaLib software. The Knowledge Base holds two types of metadata about resources:
- Descriptive metadata, such as the resource's name, coverage, language, data types, and publisher. The user sees this information and, with it, can make a sensible selection of resources. It is the same information that enables the system to create resource lists based on the user's specifications and display them in a comprehensive way. In short, this information serves the resource discovery phase described earlier.
- Technical metadata, such as the type of protocol that the resource supports, the cataloging format it uses, and the physical and logical structure of the records that it retrieves. We can describe this information as rules that define the flow, interface, and manner of searching and that the software uses for searching, retrieving the results, and manipulating them - that is, for the information discovery phase.
The resource metadata in the MetaLib Knowledge Base can be divided into global metadata and local metadata:
- Global metadata are that part of the resource metadata that is universal and does not depend on the implementation of MetaLib at a specific institution. These metadata include the name of the resource owner, the coverage, and the interfacing rules.
- Local metadata are institution-specific; they relate to the way in which the resource is used in the institution's environment and presented to the institution's members. Such metadata include elements of authentication vis-à-vis the provider of the resource, the authorization rules that apply to it within the institution, and the categorization information that the institution uses to enable the software to offer the resource in specific contexts. For instance, one institution might categorize a certain resource under Medicine, whereas an institution with a different orientation might categorize it under Social Studies.
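The sketch below suggests, in purely illustrative terms, how a single Knowledge Base entry might keep descriptive and technical metadata apart and layer local settings over global ones. The field names and values are invented and do not represent the actual MetaLib Knowledge Base format.

```python
# An illustrative (not actual MetaLib) sketch of how one Knowledge Base entry
# might separate descriptive from technical metadata, and global from local
# settings. All field names and values are invented for the example.
GLOBAL_RECORD = {
    "descriptive": {
        "name": "NLM PubMed",
        "coverage": "Biomedical literature, 1950s to present",
        "language": "English",
        "data_types": ["citations", "abstracts"],
    },
    "technical": {
        "access_mode": "entrez-http",          # interfacing protocol
        "record_format": "xml",
        "field_map": {"author": "AU", "title": "TI", "subject": "MH"},
    },
}

LOCAL_RECORD = {  # institution-specific layer added on top of the global one
    "authentication": {"proxy": "https://proxy.example.edu/login"},
    "authorization": {"restricted_to": ["staff", "students"]},
    "categories": ["Medicine", "Life Sciences"],
}
```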
Ex Libris maintains a master Knowledge Base, which is copied to every MetaLib installation. Automated routines ensure that the Knowledge Base at each installation is updated as necessary. Institutions localize the relevant metadata and add configurations to local resources.
The process of searching and retrieving in a heterogeneous environment is far from trivial. Each resource has its own expectations regarding the form and manner in which it receives queries; even if the resource supports a standard interface, such as the Z39.50 protocol, the metasearch system needs to make further adjustments so that the resource's engine will interpret the query correctly.
The types of information that the Knowledge Base maintains to enable the system to search include the following examples:
- Access mode: What kind of interfacing protocol does the resource employ? Is it a structured, documented interface, such as Z39.50, the PubMed Entrez protocol, or a proprietary XML gateway? Or is it an unstructured HTTP protocol that dictates the use of HTML parsing techniques to access the resource?
- Password control: How does the user access a specific, licensed resource? Are a user ID and password required, which the metasearch system delivers when the connection is established? Should the software redirect the query via a proxy to grant the user access?
- URL creation: If a URL needs to be formulated to hold the specific query, what should the structure of the URL be?
- Character conversion: What character set does the system use at the resource end? Does the character set comply with that of the end user?
- Query optimization: How should the query be structured?
  1. What is the exact syntax that the resource's system expects?
  2. How should fields be mapped to the fields of the resource; for example, to which field should the system map the "author" field selected by the user for a specific query?
  3. How does the system expect to receive an author's name? Should it be <last name><,><first name>; <last name>< ><first initial>; or in some other format?
- Normalization: What should the system do when the search engine at the resource end does not support a specific type of search? For instance, what rules should be applied if the user looks for a specific subject but a certain resource does not support a search by subject?
Once the information is there, the metasearch system can indeed adapt a single, unified query to the requirements of the specific resource, as in the following example.
The user submits a query for title = dreams and author = Schredl, Michael to the following resources:
- Library of Congress (Z39.50 access to Endeavor's Voyager ILS)
- NLM PubMed (the Entrez HTTP protocol)
- HighWire Press® (HTML parsing)
- Ovid MEDLINE® (Z39.50 access via the SilverPlatter ERL platform)
- University of East Anglia (XML access to the Ex Libris ALEPH ILS)
Even when looking at one brick of the process structure - the query syntax - we can clearly see the differences between the resources:
|·||The Library of Congress expects this query string: 1=Schredl, Michael AND 4=dreams|
|·||PubMed expects this query string: term=dreams+AND+Schredl+M|
|·||HighWire expects to see the encoded form of the following URL: author1=Schredl,+Michael&author2=&title=dreams|
|·||Ovid's MEDLINE via ERL, although accessed by the same protocol (Z39.50), expects this query string: 1003=Schredl-M* AND 4=dreams(Note the phrasing of the author's name.)|
|·||The ALEPH system at UEA expects the following encoded request: wau=(Schredl, Michael) AND wti=(dreams)|
Up to now we have discussed only the flow from the user to the resource. However, now that the query has been processed, the metasearch system needs to get back to the user with search results. Typically the interaction between the metasearch system and the resource consists of two phases. The first occurs after the search has been invoked: the resource returns the number of hits and some kind of reference to the result set. This phase is important because it gives the user some information about the search and enables the user to refine the query before browsing through the results. For instance, if a user sees that there are thousands of hits, she can modify the query to be more specific and thus reduce the number of results. The second phase consists of retrieval: The metasearch system retrieves the number of hits along with the first few records for each resource. This information is shown to the user instantly, even though the query might result in hundreds or thousands of hits. Some systems, including MetaLib, allow for further retrieval upon request.
Why do the systems provide such limited retrieval initially? First, retrieval depends on the use of networks, which are still not as rapid as one would like. Retrieving hundreds or thousands of records over a network is an extremely time-consuming process, and users are not likely to wait until it is completed. Second, people have difficulty handling immense result sets; after seeing the number of hits for each resource, users are likely to refine their query to obtain fewer hits. Once retrieved from the resource, each result is converted to a unified format before the user sees it. The rules that define the manipulation of the retrieved data are part of the resource metadata, which, in MetaLib, is stored in the Knowledge Base. These rules include information about the logical format, the cataloging format, the script, and the structure of certain fields, such as the citation field. For further processing to take place, the metasearch system must be able to apply these rules and convert all retrieved records, regardless of their origin.
Such additional processing can include the unified display of the records to end users; the merging of result lists from heterogeneous resources into one list; the comparison of records to eliminate duplicates; the creation of an OpenURL to allow context-sensitive reference linking; and the saving of records in whatever format is required. Consequently, functionality that might have been missing from the native interface of the resource, such as the provision of an OpenURL, is added to the same set of records by the metasearch system. However, the display of result lists is not as straightforward as might be expected. Users are well acquainted with Web search engines and therefore have solid expectations regarding the display. They would like their results ranked, merged into one list, and filtered for a selected resource. Furthermore, they would like to be able to sort results by various attributes, such as title, author, and date.
Given that only the first results are retrieved from the various resources, these expectations are not so easily satisfied. When the result sets are small, all records are in the system's cache memory and so the metasearch system can offer the expected functionality in a comprehensive manner. However, the larger the number of hits, the greater the value of merging, sorting, de-duplication, and ranking - and the more difficult these features are to provide. Consider, for instance, the merging of the lists. How should it be done? The number of hits may vary considerably from resource to resource. Would it be appropriate to merge the two hits received from one resource with the dozens or hundreds of hits received from another resource? And if so, in which order? Every resource returns results in a different sorting order - by date (ascending or descending), title, relevance, or another attribute of which the users are not necessarily aware. Because only the first records are retrieved, the issue of merging the results needs careful consideration.
Other issues are the sorting capability and relevance ranking that users expect to find when looking at results, even when the resource itself does not support such functionality. Does it make sense to rank and sort only those results that have been retrieved? Let's say that the metasearch system applies certain relevance-ranking algorithms to all retrieved records and sequences them accordingly in the display to the user. This display can be rather misleading, because the 'best' hits are not necessarily those that were retrieved first. It could well be that if the user asks for more hits better results will be retrieved. A similar problem applies to sorting: even though a system might enable the user to sort the records according to various parameters, this sorting would apply only to the set already retrieved.
MetaLib handles these issues by always allowing the user to see the results for each resource. If the resource supports sorting, the user can request that the result list be sorted. Then MetaLib submits the search to this resource again, asking that the entire set of results be arranged in the order requested by the user. Hence, the user indeed receives the first records of the whole set. MetaLib also enables users to explicitly request and obtain a merged set at any point. Such a set is already de-duplicated and sortable. Institutions are likely to limit the number of records that can be merged, to avoid lengthy waiting periods caused by the retrieval of large result sets.
End-users may wonder why other searching systems, primarily the Web search engines, are able to provide them with large sets that are merged and ranked. The reason is that these systems use a different type of technology to provide the users with search results. Metasearch systems are based on 'just-in-time' processing. The system does not maintain any indexes of its information landscape locally; only when the information is required does the system access the various resources to obtain the results. The approach of Web search engines is based on 'just-in-case' technology. Huge efforts are invested in preparing the information prior to users' requests so that when the information is needed, it is obtained immediately. Google, for example, holds indexes for the entire World Wide Web, including not only pointers to sites but also information that enables the search engine to evaluate the relevance ranking of a site. When the user searches with Google, only the indexes are scanned - and the information that Google initially displays on the screen is not from the sites themselves but from this vast repository of indexes. The search engine provides the actual access to a certain Web location only when the user selects it from the list. Needless to say, huge computing power and disk space along with sophisticated technologies for harvesting, evaluating, and maintaining the information are necessary for such powerful tools.
The use of local repositories of indexes in the library environment started some time ago. As opposed to union catalogs, which actually replicate the information that is located in local catalogs, repositories such as MetaIndex from Ex Libris hold only the indexes to the bibliographic materials that are kept in the resources. An example is the MetaIndex implementation at the Cooperative Library Network Berlin-Brandenburg (KOBV), which preceded the metasearch systems a few years back: At KOBV, MetaIndex enables each of the consortium members to maintain its library system and cataloging conventions while the consortium provides a single search interface for end users. MetaIndex has now become a resource available to MetaLib at KOBV, along with other resources. No doubt that a local repository of indexes has many advantages. Information that is gathered and processed prior to queries can be organized, evaluated, and de-duplicated and therefore can be accessible to end-users in a rapid and comprehensive manner. However, maintaining such a repository has a major drawback: the repository is another system, with hardware and software, to create and maintain, and personnel must be available to take care of it.
Considering libraries' budget constraints and limitations in the technical expertise available to them, a combination of just-in-case and just-in-time approaches would be optimal for metasearch systems. Local repositories would be useful in the following cases:
|·||When no searching mechanism exists at the resource end. This situation is typical of various types of local repositories, such as those that hold research papers written by institution members or spreadsheets relevant to institutional activities; but it could obviously apply to any other data that have not yet been made available to the public.|
|·||When the information is scattered. A local repository may be worthwhile if several resources that are mutually compliant form a single resource of value to the institution. For example, a worldwide organization that has dozens of branches, each of which holds regionally relevant information, wants to provide a simultaneous search capability that will cover all the local information. Creating an index such as MetaIndex would be preferable to requiring users to search all the repositories simultaneously.|
|·||When the interface is not reliable. Some institutions want to provide access to resources that are not always online or do not offer reliable networking for accessing them. In such cases, an institution might be better off harvesting the information and keeping it as a local repository.|
|·||When preprocessing is important. Preprocessing tasks such as relevance ranking and the elimination of duplicate records can be of value for some institutions. However, a component like MetaIndex can provide a solution only if the search scope is defined and limited. For instance, at KOBV, the consortium catalogs represent a limited search scope; as a result, the mathematics department of the consortium was able to develop a sophisticated de_duplication algorithm that permitted the construction of a comprehensive MetaIndex component.|
MetaIndex from Ex Libris is created through the harvesting of information from other repositories. One of the harvesting mechanisms is the Open Archives Initiative Protocol for Metadata Harvesting (OAI-PMH). The use of such harvesting protocols can facilitate the gathering of data and is applicable to a wide range of resources that are now becoming OAI compliant. Furthermore, MetaIndex itself can become OAI compliant, thus serving as both a resource for MetaLib and an OAI-compliant resource that enables other systems to harvest the data from it.
The promise of a truly integrated environment in a heterogeneous world may not yet be a reality, but with the active involvement of all the stakeholders, significant progress has been made. Just a few years back, metasearch systems seemed like a dream; today they are already a building block in the information resource environment serving the academic and research community.
Ex Libris. http://www.aleph.co.il/
Information Environment (IE) Service Registry. http://www.mimas.ac.uk/iesr/
MIMAS - Manchester Information & Associated Services. http://www.mimas.ac.uk/
NISO MetaSearch Initiative. http://www.niso.org/committees/metasearch-info.html
The Open Archives Initiative Protocol for Metadata Harvesting.
SemanticWeb.org - The Semantic Web Community Portal. http://www.semanticweb.org/
University of Liverpool. http://www.liv.ac.uk/
LIBER Quarterly, Volume 13 (2003), No. 3/4 | <urn:uuid:3c69437f-bdff-4685-9356-9da2a393be70> | CC-MAIN-2019-47 | https://www.liberquarterly.eu/articles/10.18352/lq.7743/ | s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496671260.30/warc/CC-MAIN-20191122115908-20191122143908-00137.warc.gz | en | 0.920421 | 6,017 | 2.796875 | 3 |
Zirconium is a chemical element with the symbol Zr and atomic number 40. The name zirconium is taken from the name of the mineral zircon (the word is related to Persian zargun (zircon;zar-gun, "gold-like" or "as gold")), the most important source of zirconium. It is a lustrous, grey-white, strong transition metal that closely resembles hafnium and, to a lesser extent, titanium. Zirconium is mainly used as a refractory and opacifier, although small amounts are used as an alloying agent for its strong resistance to corrosion. Zirconium forms a variety of inorganic and organometallic compounds such as zirconium dioxide and zirconocene dichloride, respectively. Five isotopes occur naturally, three of which are stable. Zirconium compounds have no known biological role.
|Standard atomic weight Ar, std(Zr)||91.224(2)|
|Zirconium in the periodic table|
|Atomic number (Z)||40|
|Element category||Transition metal|
|Electron configuration||[Kr] 4d2 5s2|
Electrons per shell
|2, 8, 18, 10, 2|
|Phase at STP||solid|
|Melting point||2128 K (1855 °C, 3371 °F)|
|Boiling point||4650 K (4377 °C, 7911 °F)|
|Density (near r.t.)||6.52 g/cm3|
|when liquid (at m.p.)||5.8 g/cm3|
|Heat of fusion||14 kJ/mol|
|Heat of vaporization||591 kJ/mol|
|Molar heat capacity||25.36 J/(mol·K)|
|Oxidation states||−2, +1, +2, +3, +4 (an amphoteric oxide)|
|Electronegativity||Pauling scale: 1.33|
|Atomic radius||empirical: 160 pm|
|Covalent radius||175±7 pm|
|Spectral lines of zirconium|
|Crystal structure||hexagonal close-packed (hcp)|
|Speed of sound thin rod||3800 m/s (at 20 °C)|
|Thermal expansion||5.7 µm/(m·K) (at 25 °C)|
|Thermal conductivity||22.6 W/(m·K)|
|Electrical resistivity||421 nΩ·m (at 20 °C)|
|Young's modulus||88 GPa|
|Shear modulus||33 GPa|
|Bulk modulus||91.1 GPa|
|Vickers hardness||820–1800 MPa|
|Brinell hardness||638–1880 MPa|
|Naming||after zircon, zargun زرگون meaning "gold-colored".|
|Discovery||Martin Heinrich Klaproth (1789)|
|First isolation||Jöns Jakob Berzelius (1824)|
|Main isotopes of zirconium|
- 1 Characteristics
- 2 Production
- 3 Compounds
- 4 History
- 5 Applications
- 6 Safety
- 7 See also
- 8 References
- 9 External links
Zirconium is a lustrous, greyish-white, soft, ductile, malleable metal that is solid at room temperature, though it is hard and brittle at lesser purities. In powder form, zirconium is highly flammable, but the solid form is much less prone to ignition. Zirconium is highly resistant to corrosion by alkalis, acids, salt water and other agents. However, it will dissolve in hydrochloric and sulfuric acid, especially when fluorine is present. Alloys with zinc are magnetic at less than 35 K.
The melting point of zirconium is 1855 °C (3371 °F), and the boiling point is 4371 °C (7900 °F). Zirconium has an electronegativity of 1.33 on the Pauling scale. Of the elements within the d-block with known electronegativities, zirconium has the fifth lowest electronegativity after hafnium, yttrium, lanthanum, and actinium.
At room temperature zirconium exhibits a hexagonally close-packed crystal structure, α-Zr, which changes to β-Zr, a body-centered cubic crystal structure, at 863 °C. Zirconium exists in the β-phase until the melting point.
Naturally occurring zirconium is composed of five isotopes. 90Zr, 91Zr, 92Zr and 94Zr are stable, although 94Zr is predicted to undergo double beta decay (not observed experimentally) with a half-life of more than 1.10×1017 years. 96Zr has a half-life of 2.4×1019 years, and is the longest-lived radioisotope of zirconium. Of these natural isotopes, 90Zr is the most common, making up 51.45% of all zirconium. 96Zr is the least common, comprising only 2.80% of zirconium.
Twenty-eight artificial isotopes of zirconium have been synthesized, ranging in atomic mass from 78 to 110. 93Zr is the longest-lived artificial isotope, with a half-life of 1.53×106 years. 110Zr, the heaviest isotope of zirconium, is the most radioactive, with an estimated half-life of 30 milliseconds. Radioactive isotopes at or above mass number 93 decay by electron emission, whereas those at or below 89 decay by positron emission. The only exception is 88Zr, which decays by electron capture.
Five isotopes of zirconium also exist as metastable isomers: 83mZr, 85mZr, 89mZr, 90m1Zr, 90m2Zr and 91mZr. Of these, 90m2Zr has the shortest half-life at 131 nanoseconds. 89mZr is the longest lived with a half-life of 4.161 minutes.
Zirconium has a concentration of about 130 mg/kg within the Earth's crust and about 0.026 μg/L in sea water. It is not found in nature as a native metal, reflecting its intrinsic instability with respect to water. The principal commercial source of zirconium is zircon (ZrSiO4), a silicate mineral, which is found primarily in Australia, Brazil, India, Russia, South Africa and the United States, as well as in smaller deposits around the world. As of 2013, two-thirds of zircon mining occurs in Australia and South Africa. Zircon resources exceed 60 million tonnes worldwide and annual worldwide zirconium production is approximately 900,000 tonnes. Zirconium also occurs in more than 140 other minerals, including the commercially useful ores baddeleyite and kosnarite.
Zirconium is relatively abundant in S-type stars, and it has been detected in the sun and in meteorites. Lunar rock samples brought back from several Apollo missions to the moon have a high zirconium oxide content relative to terrestrial rocks.
Zirconium is a by-product of the mining and processing of the titanium minerals ilmenite and rutile, as well as tin mining. From 2003 to 2007, while prices for the mineral zircon steadily increased from $360 to $840 per tonne, the price for unwrought zirconium metal decreased from $39,900 to $22,700 per ton. Zirconium metal is much higher priced than zircon because the reduction processes are expensive.
Collected from coastal waters, zircon-bearing sand is purified by spiral concentrators to remove lighter materials, which are then returned to the water because they are natural components of beach sand. Using magnetic separation, the titanium ores ilmenite and rutile are removed.
Most zircon is used directly in commercial applications, but a small percentage is converted to the metal. Most Zr metal is produced by the reduction of the zirconium(IV) chloride with magnesium metal in the Kroll process. The resulting metal is sintered until sufficiently ductile for metalworking.
Separation of zirconium and hafniumEdit
Commercial zirconium metal typically contains 1–3% of hafnium, which is usually not problematic because the chemical properties of hafnium and zirconium are very similar. Their neutron-absorbing properties differ strongly, however, necessitating the separation of hafnium from zirconium for nuclear reactors. Several separation schemes are in use. The liquid-liquid extraction of the thiocyanate-oxide derivatives exploits the fact that the hafnium derivative is slightly more soluble in methyl isobutyl ketone than in water. This method is used mainly in United States.
Zr and Hf can also be separated by fractional crystallization of potassium hexafluorozirconate (K2ZrF6), which is less soluble in water than the analogous hafnium derivative.
The product of a quadruple VAM (vacuum arc melting) process, combined with hot extruding and different rolling applications is cured using high-pressure, high-temperature gas autoclaving. This produces reactor-grade zirconium that is about 10 times more expensive than the hafnium-contaminated commercial grade.
Hafnium must be removed from zirconium for nuclear applications because hafnium has a neutron absorption cross-section 600 times greater than zirconium. The separated hafnium can be used for reactor control rods.
Like other transition metals, zirconium forms a wide range of inorganic compounds and coordination complexes. In general, these compounds are colourless diamagnetic solids wherein zirconium has the oxidation state +4. Far fewer Zr(III) compounds are known, and Zr(II) is very rare.
Oxides, nitrides, and carbidesEdit
The most common oxide is zirconium dioxide, ZrO2, also known as zirconia. This clear to white-coloured solid has exceptional fracture toughness and chemical resistance, especially in its cubic form. These properties make zirconia useful as a thermal barrier coating, although it is also a common diamond substitute. Zirconium monoxide, ZrO, is also known and S-type stars are recognised by detection of its emission lines in the visual spectrum.
Zirconium tungstate has the unusual property of shrinking in all dimensions when heated, whereas most other substances expand when heated. Zirconyl chloride is a rare water-soluble zirconium complex with the relatively complicated formula [Zr4(OH)12(H2O)16]Cl8.
Lead zirconate titanate (PZT) is the most commonly used piezoelectric material, with applications such as ultrasonic transducers, hydrophones, common rail injectors, piezoelectric transformers and micro-actuators.
Halides and pseudohalidesEdit
All four common halides are known, ZrF4, ZrCl4, ZrBr4, and ZrI4. All have polymeric structures and are far less volatile than the corresponding monomeric titanium tetrahalides. All tend to hydrolyse to give the so-called oxyhalides and dioxides.
The corresponding tetraalkoxides are also known. Unlike the halides, the alkoxides dissolve in nonpolar solvents. Dihydrogen hexafluorozirconate is used in the metal finishing industry as an etching agent to promote paint adhesion.
Organozirconium chemistry is the study of compounds containing a carbon-zirconium bond. The first such compound was zirconocene dibromide ((C5H5)2ZrBr2), reported in 1952 by Birmingham and Wilkinson. Schwartz's reagent, prepared in 1970 by P. C. Wailes and H. Weigold, is a metallocene used in organic synthesis for transformations of alkenes and alkynes.
Zirconium is also a component of some Ziegler–Natta catalysts, used to produce polypropylene. This application exploits the ability of zirconium to reversibly form bonds to carbon. Most complexes of Zr(II) are derivatives of zirconocene, one example being (C5Me5)2Zr(CO)2.
The zirconium-containing mineral zircon and related minerals (jargoon, hyacinth, jacinth, ligure) were mentioned in biblical writings. The mineral was not known to contain a new element until 1789, when Klaproth analyzed a jargoon from the island of Ceylon (now Sri Lanka). He named the new element Zirkonerde (zirconia). Humphry Davy attempted to isolate this new element in 1808 through electrolysis, but failed. Zirconium metal was first obtained in an impure form in 1824 by Berzelius by heating a mixture of potassium and potassium zirconium fluoride in an iron tube.
The crystal bar process (also known as the Iodide Process), discovered by Anton Eduard van Arkel and Jan Hendrik de Boer in 1925, was the first industrial process for the commercial production of metallic zirconium. It involves the formation and subsequent thermal decomposition of zirconium tetraiodide, and was superseded in 1945 by the much cheaper Kroll process developed by William Justin Kroll, in which zirconium tetrachloride is reduced by magnesium:
- ZrCl4 + 2 Mg → Zr + 2 MgCl2
Approximately 900,000 tonnes of zirconium ores were mined in 1995, mostly as zircon.
Most zircon is used directly in high-temperature applications. This material is refractory, hard, and resistant to chemical attack. Because of these properties, zircon finds many applications, few of which are highly publicized. Its main use is as an opacifier, conferring a white, opaque appearance to ceramic materials. Because of its chemical resistance, zircon is also used in aggressive environments, such as moulds for molten metals.
Zirconium dioxide (ZrO2) is used in laboratory crucibles, in metallurgical furnaces, and as a refractory material. Because it is mechanically strong and flexible, it can be sintered into ceramic knives and other blades. Zircon (ZrSiO4) and the cubic zirconia (ZrO2) are cut into gemstones for use in jewelry.
A small fraction of the zircon is converted to the metal, which finds various niche applications. Because of zirconium's excellent resistance to corrosion, it is often used as an alloying agent in materials that are exposed to aggressive environments, such as surgical appliances, light filaments, and watch cases. The high reactivity of zirconium with oxygen at high temperatures is exploited in some specialised applications such as explosive primers and as getters in vacuum tubes. The same property is (probably) the purpose of including Zr nanoparticles as pyrophoric material in explosive weapons such as the BLU-97/B Combined Effects Bomb. Burning zirconium was used as a light source in some photographic flashbulbs. Zirconium powder with a mesh size from 10 to 80 is occasionally used in pyrotechnic compositions to generate sparks. The high reactivity of zirconium leads to bright white sparks.
Cladding for nuclear reactor fuels consumes about 1% of the zirconium supply, mainly in the form of zircaloys. The desired properties of these alloys are a low neutron-capture cross-section and resistance to corrosion under normal service conditions. Efficient methods for removing the hafnium impurities were developed to serve this purpose.
- Zr + 2 H2O → ZrO2 + 2 H2
This exothermic reaction is very slow below 100 °C, but at temperature above 900 °C the reaction is rapid. Most metals undergo similar reactions. The redox reaction is relevant to the instability of fuel assemblies at high temperatures. This reaction was responsible for a small hydrogen explosion first observed inside the reactor building of Three Mile Island nuclear power plant in 1979, but at that time, the containment building was not damaged. The same reaction occurred in the reactors 1, 2 and 3 of the Fukushima I Nuclear Power Plant (Japan) after the reactor cooling was interrupted by the earthquake and tsunami disaster of March 11, 2011, leading to the Fukushima I nuclear accidents. After venting the hydrogen in the maintenance hall of those three reactors, the mixture of hydrogen with atmospheric oxygen exploded, severely damaging the installations and at least one of the containment buildings. To avoid explosion, the direct venting of hydrogen to the open atmosphere would have been a preferred design option. Now, to prevent the risk of explosion in many pressurized water reactor (PWR) containment buildings, a catalyst-based recombiner is installed that converts hydrogen and oxygen into water at room temperature before the hazard arises.
Space and aeronautic industriesEdit
Materials fabricated from zirconium metal and ZrO2 are used in space vehicles where resistance to heat is needed.
High temperature parts such as combustors, blades, and vanes in jet engines and stationary gas turbines are increasingly being protected by thin ceramic layers, usually composed of a mixture of zirconia and yttria.
Positron emission tomography camerasEdit
The isotope 89Zr has been applied to the tracking and quantification of molecular antibodies with positron emission tomography (PET) cameras (a method called "immuno-PET"). Immuno-PET has reached a maturity of technical development and is now entering the phase of wide-scale clinical applications. Until recently, radiolabeling with 89Zr was a complicated procedure requiring multiple steps. In 2001–2003 an improved multistep procedure was developed using a succinylated derivative of desferrioxamine B (N-sucDf) as a bifunctional chelate, and a better way of binding 89Zr to mAbs was reported in 2009. The new method is fast, consists of only two steps, and uses two widely available ingredients: 89Zr and the appropriate chelate. On-going developments also include the use of siderophore derivatives to bind 89Zr(IV).
Zirconium-bearing compounds are used in many biomedical applications, including dental implants and crowns, knee and hip replacements, middle-ear ossicular chain reconstruction, and other restorative and prosthetic devices.
Zirconium binds urea, a property that has been utilized extensively to the benefit of patients with chronic kidney disease. For example, zirconium is a primary component of the sorbent column dependent dialysate regeneration and recirculation system known as the REDY system, which was first introduced in 1973. More than 2,000,000 dialysis treatments have been performed using the sorbent column in the REDY system. Although the REDY system was superseded in the 1990s by less expensive alternatives, new sorbent-based dialysis systems are being evaluated and approved by the U.S. Food and Drug Administration (FDA). Renal Solutions developed the DIALISORB technology, a portable, low water dialysis system. Also, developmental versions of a Wearable Artificial Kidney have incorporated sorbent-based technologies.
Sodium zirconium cyclosilicate is used by mouth in the treatment of hyperkalemia. It is a selective sorbent designed to trap potassium ions in preference to other ions throughout the gastrointestinal tract.
A mixture of monomeric and polymeric Zr4+ and Al3+ complexes with hydroxide, chloride and glycine, called Aluminium zirconium tetrachlorohydrex gly or AZG, is used in a preparation as an antiperspirant in many deodorant products. It is selected for its ability to obstruct pores in the skin and prevent sweat from leaving the body.
|NFPA 704 (fire diamond)|
Although zirconium has no known biological role, the human body contains, on average, 250 milligrams of zirconium, and daily intake is approximately 4.15 milligrams (3.5 milligrams from food and 0.65 milligrams from water), depending on dietary habits. Zirconium is widely distributed in nature and is found in all biological systems, for example: 2.86 μg/g in whole wheat, 3.09 μg/g in brown rice, 0.55 μg/g in spinach, 1.23 μg/g in eggs, and 0.86 μg/g in ground beef. Further, zirconium is commonly used in commercial products (e.g. deodorant sticks, aerosol antiperspirants) and also in water purification (e.g. control of phosphorus pollution, bacteria- and pyrogen-contaminated water).
Short-term exposure to zirconium powder can cause irritation, but only contact with the eyes requires medical attention. Persistent exposure to zirconium tetrachloride results in increased mortality in rats and guinea pigs and a decrease of blood hemoglobin and red blood cells in dogs. However, in a study of 20 rats given a standard diet containing ~4% zirconium oxide, there were no adverse effects on growth rate, blood and urine parameters, or mortality. The U.S. Occupational Safety and Health Administration (OSHA) legal limit (permissible exposure limit) for zirconium exposure is 5 mg/m3 over an 8-hour workday. The National Institute for Occupational Safety and Health (NIOSH) recommended exposure limit (REL) is 5 mg/m3 over an 8-hour workday and a short term limit of 10 mg/m3. At levels of 25 mg/m3, zirconium is immediately dangerous to life and health. However, zirconium is not considered an industrial health hazard. Furthermore, reports of zirconium-related adverse reactions are rare and, in general, rigorous cause-and-effect relationships have not been established. No evidence has been validated that zirconium is carcinogenic or genotoxic.
Among the numerous radioactive isotopes of zirconium, 93Zr is among the most common. It is released as a product of 235U, mainly in nuclear plants and during nuclear weapons tests in the 1950s and 1960s. It has a very long half-life (1.53 million years), its decay emits only low energy radiations, and it is not considered as highly hazardous.
- Meija, Juris; et al. (2016). "Atomic weights of the elements 2013 (IUPAC Technical Report)". Pure and Applied Chemistry. 88 (3): 265–91. doi:10.1515/pac-2015-0305.
- "Zirconium: zirconium(I) fluoride compound data". OpenMOPAC.net. Retrieved 2007-12-10.
- Lide, D. R., ed. (2005). "Magnetic susceptibility of the elements and inorganic compounds". CRC Handbook of Chemistry and Physics (PDF) (86th ed.). Boca Raton (FL): CRC Press. ISBN 0-8493-0486-5.
- Pritychenko, Boris; Tretyak, V. "Adopted Double Beta Decay Data". National Nuclear Data Center. Retrieved 2008-02-11.
- Harper, Douglas. "zircon". Online Etymology Dictionary.
- Emsley, John (2001). Nature's Building Blocks. Oxford: Oxford University Press. pp. 506–510. ISBN 978-0-19-850341-5.
- "Zirconium". How Products Are Made. Advameg Inc. 2007. Retrieved 2008-03-26.
- Lide, David R., ed. (2007–2008). "Zirconium". CRC Handbook of Chemistry and Physics. 4. New York: CRC Press. p. 42. ISBN 978-0-8493-0488-0.
- Considine, Glenn D., ed. (2005). "Zirconium". Van Nostrand's Encyclopedia of Chemistry. New York: Wylie-Interscience. pp. 1778–1779. ISBN 978-0-471-61525-5.
- Winter, Mark (2007). "Electronegativity (Pauling)". University of Sheffield. Retrieved 2008-03-05.
- Schnell I & Albers RC (January 2006). "Zirconium under pressure: phase transitions and thermodynamics". Journal of Physics: Condensed Matter. 18 (5): 16. Bibcode:2006JPCM...18.1483S. doi:10.1088/0953-8984/18/5/001.
- Audi, Georges; Bersillon, Olivier; Blachot, Jean; Wapstra, Aaldert Hendrik (2003), "The NUBASE evaluation of nuclear and decay properties", Nuclear Physics A, 729: 3–128, Bibcode:2003NuPhA.729....3A, doi:10.1016/j.nuclphysa.2003.11.001
- Peterson, John; MacDonell, Margaret (2007). "Zirconium". Radiological and Chemical Fact Sheets to Support Health Risk Analyses for Contaminated Areas (PDF). Argonne National Laboratory. pp. 64–65. Archived from the original (PDF) on 2008-05-28. Retrieved 2008-02-26.
- "Zirconium and Hafnium - Mineral resources" (PDF). 2014.
- "Zirconium and Hafnium" (PDF). Mineral Commodity Summaries: 192–193. January 2008. Retrieved 2008-02-24.
- Ralph, Jolyon & Ralph, Ida (2008). "Minerals that include Zr". Mindat.org. Retrieved 2008-02-23.
- Callaghan, R. (2008-02-21). "Zirconium and Hafnium Statistics and Information". US Geological Survey. Retrieved 2008-02-24.
- Nielsen, Ralph (2005) "Zirconium and Zirconium Compounds" in Ullmann's Encyclopedia of Industrial Chemistry, Wiley-VCH, Weinheim. doi:10.1002/14356007.a28_543
- Stwertka, Albert (1996). A Guide to the Elements. Oxford University Press. pp. 117–119. ISBN 978-0-19-508083-4.
- Brady, George Stuart; Clauser, Henry R. & Vaccari, John A. (24 July 2002). Materials handbook: an encyclopedia for managers, technical professionals, purchasing and production managers, technicians, and supervisors. McGraw-Hill Professional. pp. 1063–. ISBN 978-0-07-136076-0. Retrieved 2011-03-18.
- Zardiackas, Lyle D.; Kraay, Matthew J. & Freese, Howard L. (1 January 2006). Titanium, niobium, zirconium and tantalum for medical and surgical applications. ASTM International. pp. 21–. ISBN 978-0-8031-3497-3. Retrieved 2011-03-18.
- Greenwood, Norman N.; Earnshaw, Alan (1997). Chemistry of the Elements (2nd ed.). Butterworth-Heinemann. ISBN 978-0-08-037941-8.
- "Zirconia". AZoM.com. 2008. Retrieved 2008-03-17.
- Gauthier, V.; Dettenwanger, F.; Schütze, M. (2002-04-10). "Oxidation behavior of γ-TiAl coated with zirconia thermal barriers". Intermetallics. 10 (7): 667–674. doi:10.1016/S0966-9795(02)00036-5.
- Keenan, P. C. (1954). "Classification of the S-Type Stars". Astrophysical Journal. 120: 484–505. Bibcode:1954ApJ...120..484K. doi:10.1086/145937.
- MSDS sheet for Duratec 400, DuBois Chemicals, Inc.
- Wilkinson, G.; Birmingham, J. M. (1954). "Bis-cyclopentadienyl Compounds of Ti, Zr, V, Nb and Ta". J. Am. Chem. Soc. 76 (17): 4281–4284. doi:10.1021/ja01646a008.Rouhi, A. Maureen (2004-04-19). "Organozirconium Chemistry Arrives". Science & Technology. 82 (16): 36–39. doi:10.1021/cen-v082n015.p035. ISSN 0009-2347. Retrieved 2008-03-17.
- Wailes, P. C. & Weigold, H. (1970). "Hydrido complexes of zirconium I. Preparation". Journal of Organometallic Chemistry. 24 (2): 405–411. doi:10.1016/S0022-328X(00)80281-8.
- Hart, D. W. & Schwartz, J. (1974). "Hydrozirconation. Organic Synthesis via Organozirconium Intermediates. Synthesis and Rearrangement of Alkylzirconium(IV) Complexes and Their Reaction with Electrophiles". J. Am. Chem. Soc. 96 (26): 8115–8116. doi:10.1021/ja00833a048.
- Krebs, Robert E. (1998). The History and Use of our Earth's Chemical Elements. Westport, Connecticut: Greenwood Press. pp. 98–100. ISBN 978-0-313-30123-0.
- Hedrick, James B. (1998). "Zirconium". Metal Prices in the United States through 1998 (PDF). US Geological Survey. pp. 175–178. Retrieved 2008-02-26.
- "Fine ceramics - zirconia". Kyocera Inc.
- Kosanke, Kenneth L.; Kosanke, Bonnie J. (1999), "Pyrotechnic Spark Generation", Journal of Pyrotechnics: 49–62, ISBN 978-1-889526-12-6
- Gillon, Luc (1979). Le nucléaire en question, Gembloux Duculot, French edition.
- Arnould, F.; Bachellerie, E.; Auglaire, M.; Boeck, D.; Braillard, O.; Eckardt, B.; Ferroni, F.; Moffett, R.; Van Goethem, G. (2001). "State of the art on hydrogen passive autocatalytic recombiner" (PDF). 9th International Conference on Nuclear Engineering, Nice, France, 8–12 April 2001. Retrieved 4 March 2018.
- Meier, S. M.; Gupta, D. K. (1994). "The Evolution of Thermal Barrier Coatings in Gas Turbine Engine Applications". Journal of Engineering for Gas Turbines and Power. 116: 250. doi:10.1115/1.2906801.
- Heuveling, Derek A.; Visser, Gerard W. M.; Baclayon, Marian; Roos, Wouter H.; Wuite, Gijs J. L.; Hoekstra, Otto S.; Leemans, C. René; de Bree, Remco; van Dongen, Guus A. M. S. (2011). "89Zr-Nanocolloidal Albumin–Based PET/CT Lymphoscintigraphy for Sentinel Node Detection in Head and Neck Cancer: Preclinical Results" (PDF). The Journal of Nuclear Medicine. 52 (10): 1580–1584. doi:10.2967/jnumed.111.089557. PMID 21890880.
- van Rij, Catharina M.; Sharkey, Robert M.; Goldenberg, David M.; Frielink, Cathelijne; Molkenboer, Janneke D. M.; Franssen, Gerben M.; van Weerden, Wietske M.; Oyen, Wim J. G.; Boerman, Otto C. (2011). "Imaging of Prostate Cancer with Immuno-PET and Immuno-SPECT Using a Radiolabeled Anti-EGP-1 Monoclonal Antibody". The Journal of Nuclear Medicine. 52 (10): 1601–1607. doi:10.2967/jnumed.110.086520. PMID 21865288.
- Ruggiero, A.; Holland, J. P.; Hudolin, T.; Shenker, L.; Koulova, A.; Bander, N. H.; Lewis, J. S.; Grimm, J. (2011). "Targeting the internal epitope of prostate-specific membrane antigen with 89Zr-7E11 immuno-PET". The Journal of Nuclear Medicine. 52 (10): 1608–15. doi:10.2967/jnumed.111.092098. PMC 3537833. PMID 21908391.
- Verel, I.; Visser, G. W.; Boellaard, R.; Stigter-Van Walsum, M.; Snow, G. B.; Van Dongen, G. A. (2003). "89Zr immuno-PET: Comprehensive procedures for the production of 89Zr-labeled monoclonal antibodies" (PDF). J Nucl Med. 44 (8): 1271–81. PMID 12902418.
- Perk, L, "The Future of Immuno-PET in Drug Development Zirconium-89 and Iodine-124 as Key Factors in Molecular Imaging" Archived April 25, 2012, at the Wayback Machine, Amsterdam, Cyclotron, 2009.
- Deri, Melissa A.; Ponnala, Shashikanth; Zeglis, Brian M.; Pohl, Gabor; Dannenberg, J. J.; Lewis, Jason S.; Francesconi, Lynn C. (2014-06-12). "Alternative Chelator for 89Zr Radiopharmaceuticals: Radiolabeling and Evaluation of 3,4,3-(LI-1,2-HOPO)". Journal of Medicinal Chemistry. 57 (11): 4849–4860. doi:10.1021/jm500389b. ISSN 0022-2623. PMC 4059252. PMID 24814511.
- Captain, Ilya; Deblonde, Gauthier J.-P.; Rupert, Peter B.; An, Dahlia D.; Illy, Marie-Claire; Rostan, Emeline; Ralston, Corie Y.; Strong, Roland K.; Abergel, Rebecca J. (2016-11-21). "Engineered Recognition of Tetravalent Zirconium and Thorium by Chelator–Protein Systems: Toward Flexible Radiotherapy and Imaging Platforms". Inorganic Chemistry. 55 (22): 11930–11936. doi:10.1021/acs.inorgchem.6b02041. ISSN 0020-1669. PMID 27802058.
- Lee DBN, Roberts M, Bluchel CG, Odell RA. (2010) Zirconium: Biomedical and nephrological applications. ASAIO J 56(6):550-556.
- Ash SR. Sorbents in treatment of uremia: A short history and a great future. 2009 Semin Dial 22: 615-622
- Ingelfinger, Julie R. (2015). "A New Era for the Treatment of Hyperkalemia?". New England Journal of Medicine. 372 (3): 275–7. doi:10.1056/NEJMe1414112. PMID 25415806.
- Schroeder, Henry A.; Balassa, Joseph J. (May 1966). "Abnormal trace metals in man: zirconium". Journal of Chronic Diseases. 19 (5): 573–586. doi:10.1016/0021-9681(66)90095-6. PMID 5338082.
- "Zirconium". International Chemical Safety Cards. International Labour Organization. October 2004. Retrieved 2008-03-30.
- Zirconium and its compounds 1999. The MAK Collection for Occupational Health and Safety. 224–236
- "CDC - NIOSH Pocket Guide to Chemical Hazards - Zirconium compounds (as Zr)". www.cdc.gov. Retrieved 2015-11-27. | <urn:uuid:79a9941d-04c8-440c-b09f-39e1bada8c57> | CC-MAIN-2019-47 | https://en.m.wikipedia.org/wiki/Zirconium | s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496668539.45/warc/CC-MAIN-20191114205415-20191114233415-00539.warc.gz | en | 0.785027 | 8,037 | 3.84375 | 4 |
The Challenge of French Diversity
After more than a decade of public debate and sharp media criticism, in February 2004 France adopted a law banning the wearing of ostentatious religious symbols in public schools. That controversial decision, one of many recommendations by a 19-member commission established by President Jacques Chirac, has highlighted the complexity of managing diversity in the context of France's strongly secular traditions.
The so-called headscarf affair demonstrates the challenges of migration management and integration strategies for France, a longstanding country of immigration. The new law, applied only in the French public school system, is one of many strategies the government has considered at the local and national level to allow for individual religious and cultural expression consistent with longstanding national values.
In addition, concern about the place and visibility of Islam in France's secular democracy has prompted public authorities to act against discrimination. The long-anticipated independent body to combat discrimination was approved by parliament in October 2004 and is due to begin work in January 2005. The combination of informal and formal mechanisms that have emerged in France to prevent discrimination on the one hand and promote integration on the other will be of keen interest, not least to other Member States across Europe.
As home to the largest Islamic community in Europe, France and its approach to immigration are likely to define the ongoing debate about successful integration measures.
Setting the Stage
Since the mid-19th century, French immigration policy has had two aims: to meet the needs of the labor market by introducing migrant workers, and to compensate French demographic deficits by favoring the permanent installation of foreign families, while ensuring their integration into the national body.
On the labor market front, the deepening of French colonial relations in the 19th and early 20th centuries laid the groundwork for steady movements of people between France and its colonies. While early information on the foreign population dates back to France's first census in 1851, the first attempts to codify and regulate immigration to France began in the post-World War II era.
The devastation of two world wars and low birthrates thereafter had left France with a limited national labor pool. The country saw a partial answer to its dwindling workforce in the recruitment of foreign labor, initially from Belgium and Germany as well as from Poland, Russia, Italy, and Spain.
Immigration to France increased during the wars of liberation and decolonization in the 1950s and 1960s. For France, the impact was felt especially acutely in the free and unregulated entries of immigrants from Algeria. This was particularly true in the period leading up to and following Algerian independence from France in 1962 with the signing of the Evian Agreement.
New arrivals included former French colonists resident in Algeria, as well as Algerians who had sided with France during the war of independence. In 1962, about 350,000 so-called "French Muslims" were counted in France. The number of Algerians rose to 470,000 in 1968 and to 800,000 in 1982.
The late 1960s and early 1970s, however, ushered in a period of tremendous social change. The maturing of the baby boom generation and the entrance of large numbers of women into the labor force limited the need for foreign workers. Economically, the oil price shock of 1973 further hamstrung economic performance, and led to an extended period of high unemployment.
In July 1974, the French government followed the lead of other European counterparts and officially ended its labor migration programs. The legislation also included provisions for sanctions affecting employers who hired illegal immigrants, a French policy innovation originally developed in the 1930s. Nonetheless, immigration continued and diversified over the following decades.
Reversal of Fortune
Since then, immigrant integration issues and political reversals suffered by both left and right have had an impact on French immigration policies. In 1993, the conservative government's interior minister, Charles Pasqua, put forth the goal of "zero immigration," later qualified to mean zero illegal immigration.
The so-called Pasqua Laws prohibited foreign graduates from accepting in-country employment, increased the waiting period for family reunification from one to two years, and denied residence permits to foreign spouses who had been in France illegally prior to marrying. The legislation also enhanced the powers of police to deport foreigners and eliminated opportunities to appeal asylum rejections. The election of a conservative president in 1995 continued the course of limiting immigration channels.
Indeed, the ambivalence of parts of the French electorate and political leadership was already apparent in the late 1980s, when the far-right National Front, led by Jean-Marie Le Pen, began to make significant gains in local elections on an anti-immigrant platform. By the early 1990s, with the popularity of Le Pen's party on the rise, the conservative right responded by embracing parts of the far-right agenda, particularly on immigration.
The resulting legislative changes altered the migration landscape. In 1990, 102,400 foreigners settled in France (not including undocumented immigrants but including workers, refugees, and those joining their families). The mid-1990s were then marked by a continuous decline in permanent entries, which fell to the lowest levels registered since the end of World War II: 69,300 entries in 1994, 56,700 in 1995, and 55,600 in 1996.
In 1997, the streams began to increase once more and exceeded the 100,000 mark. Of the 102,400 who entered in 1997, 78,000 foreigners came from outside the European Economic Area (the European Union plus Norway, Iceland, and Liechtenstein).
In the streets, however, the Pasqua Laws were not left unchallenged. In the summer of 1996, a group of Africans and Chinese who were unable to obtain residence permits - even though many had resided in France for several years and could not be legally deported - occupied a church in Paris.
These sans papiers (people without legal documents) mobilized the support of over 10,000 people who marched in Paris on their behalf. The police broke up the demonstration, but similar acts of civil disobedience by the sans papiers and their supporters continued throughout the 1995 to 1997 period.
In 1997, the Socialists won control of the National Assembly and began rethinking immigration policy. At the government's request, political scientist Patrick Weil and a team of government experts produced a report on nationality and citizenship. The report asserted that the Pasqua Laws deprived France of human capital by deterring foreign students and professionals from settling in France.
Weil's recommendations served as the basis for new 1997 and 1998 legislation. Ratified in the name of national interest, the new rules aimed to provide highly skilled workers, scholars, and scientists with a special immigrant status, while simultaneously combating illegal immigration.
The March 16, 1998 law on nationality, along with the RESEDA Law of May 11, 1998 on foreign immigration, sought to ease admission procedures for graduates and highly skilled employees. In addition, a regularization procedure launched in June 1997 legalized the status of roughly 87,000 unauthorized immigrants out of some 150,000 applicants.
Since then, the inflow of foreign students has continued to rise, accounting for 25,100 entries in 1999, compared to over 147,000 in 2001. Furthermore, the principle of jus soli that had been modified by the Pasqua Laws was reinstated. Under the Pasqua Laws, children born in France of foreign parents were required to make a "voluntary declaration" of their intention to acquire French citizenship. After 1998, children of foreign parents automatically acquire French citizenship at the age of 18.
The Current Context
More recently, despite poor economic performance and growing concerns about illegal immigration, immigration is on the rise again. According to the 2003 Trends in International Migration Report of the Organization for Economic Cooperation and Development (OECD), from 1999 to 2002, total permanent entries significantly increased, totaling 104,400 in 1999 and 141,000 in 2001 including European Economic Area (EEA) nationals.
The main reason for immigration remains family reunification, accounting for 70 percent of the entries from non-EEA countries and 33 percent of the entries from EEA countries. In addition, another 50,600 people entered as temporary immigrants, who include students and those with a temporary work permit.
In November 2003, the National Assembly passed a law amending legislation on immigration and on the residence of foreigners on French territories. The new law provides stricter regulations to combat illegal immigration and to regulate the admission and stay of foreigners in France.
Applications for asylum increased dramatically at the end of the 1980s, growing from 22,500 in 1982, to 34,300 in 1988, and 61,400 in 1989. The turn towards asylum channels by some immigrants can be partially explained by the stricter visa requirements and stiffer conditions of entry for work, family entry, and settlement.
Increased numbers of asylum seekers and greater scrutiny of approvals have reduced the number of approvals. Between 1980 and 1995, the approval rate for asylum applications fell from 85 percent to less than 20 percent, where it held steady for 1998 and 1999. This figure does not include those claims that were eventually granted an appeal, which would increase slightly the overall approval percentage.
From 1999 to 2003, applications for asylum in France continued to increase. The Office Français de Protection des Réfugiés et Apatrides (OFPRA), the French asylum determination body, received 30,900 applications for asylum in 1999, compared to 52,204 in 2003. Each application may represent more than one individual, since spouses and children of applicants are not counted separately.
In 2003, the OFPRA approved 9,790 initial applications for asylum out of 52,204 applications received. The admission rate in 2003 was 9.8 percent compared to 12.6 percent in 2002.
A 2003 report by the European Council on Refugees and Exiles (ECRE) outlines some of the major trends in asylum. Asylum applications from Asia have been increasing, particularly from China, totaling 5,307 in 2003 compared to 2,885 in 2002. Also notable, applications from individuals from Turkey have steadily increased (7,345 applications in 2003 compared to 6,988 in 2002).
In general, applications from African countries have been decreasing. Applications from Algeria declined from 2,888 in 2002 to 2,448 in 2003. Despite a slight decrease in the number of applications received from the Democratic Republic of the Congo (DRC) - 4,625 applications in 2003 compared to 5,375 in 2002 - the DRC remains France's largest origin country of asylum seekers from Africa.
The rising number of asylum applications has challenged the adequacy of French asylum policies. France does not provide for systematic refugee resettlement, nor does it yet accept the "safe third country" concept whereby an asylum seeker coming from outside the EU but through a "safe" country may be, under certain conditions, returned to this "safe" country. In addition, asylum seekers have limited access to social welfare benefits and are not allowed to work.
In 2003, the government made several moves to reform the asylum application process, including the introduction of the "safe third country" and "domestic asylum" concepts. The latter allows OFPRA to deny asylum to persons who could have found protection in their country of origin.
At the European level, France has expressed reservations about Dublin II, which came into effect in September 2003. Dublin II revised the terms of the 1990 Dublin Convention, which set out common formal arrangements on asylum, stating that people who have been refused asylum in one Member State may not seek asylum in another. Dublin II established criteria and mechanisms for determining the Member State responsible for examining an asylum application, with the goal of expediting application procedures and counteracting potential abuses of the asylum system.
The Sangatte Asylum Issue
In 1999, the opening of the Sangatte emergency center near the Eurotunnel transport terminal in Calais quickly became a source of political tensions and bilateral disputes between France and Great Britain. The center was a transit location for asylum seekers - most of them Afghans, Iraqis, and Kurds - intending to illegally enter Britain and to apply for territorial asylum there. The issue raised security and humanitarian concerns.
In July 2002, French Minister of the Interior, Nicolas Sarkozy, and his British counterpart, David Blunkett, agreed to close the center. In the meantime, bilateral negotiations on immigration issues set the stage for burden sharing on asylum applications and security cooperation, and led to the adoption of the Sangatte Protocol. The Protocol provides for the establishment of additional control offices as well as provisions on asylum applications. In addition, Great Britain adopted a tighter law to control illegal immigration and to restrict the right to asylum in its territories.
In practice, the Sangatte asylum crisis has highlighted underlying difficulties to share responsibilities and to harmonize European asylum and immigration policies under the Dublin Convention. It has also underscored the need to create a comprehensive common asylum system in Europe.
French by Nationality and by Origin
The French census tracks "nationality" as a category and distinguishes between those who are born in France, those who have acquired French citizenship, and those who are foreigners. After the 1990 census, based on recommendations by the government's High Council for Integration, the French National Institute of Statistics and Economic Studies (INSEE) also adopted the category of "immigrant," defined in France as "a person born abroad with a foreign nationality."
French statistics on immigration are best understood in three categories:
French by birth. This includes the offspring of French citizens who were born either in France or abroad.
French by acquisition. This includes individuals who have acquired French nationality, whether by naturalization after moving to France, by declaration (as with children born in France of immigrant parents), or by other means.
Foreigners. This includes individuals in France who were born abroad as well as children, under the age of 18, who were born in France of immigrant parents. It also includes any individual born in France of foreign parents who chooses not to adopt French nationality at the age of 18.
Thus, because of the peculiarities of French nationality law, not all foreigners are immigrants: children born in France to foreign parents generally remain foreigners until the age of 18.
In 1999, the stock of foreigners born in France totaled 510,000. Another 1,580,000 immigrants born abroad had become French by acquisition.
By adding the category of "foreigners" to those who became "French by acquisition," a new category is derived - "foreigners by nationality or origin." This new category includes only a proportion of the descendants of immigrants. In total, this new category included nearly 10 percent of the population of France in 1999 (see Table 1).
The 1999 census shows that new immigrants compose a growing proportion of the foreign-born population in France, which has grown by 6.8 percent over its 1982 level, from 4,037,000 to 4,310,000. Algerians, still the largest group, make up 13.4 percent (575,740) of the immigrant population, slightly less than the 14.8 percent (597,644) in 1982. Similarly, Italian and Spanish populations in France are declining in terms of relative population.
Table 1: Population living in France according to nationality and place of birth, in 1999 (in thousands)
More dynamic currents in the 1999 census include Moroccans (521,000 or 12.1 percent), Turks (176,000 or 4.1 percent) and people from sub-Saharan Africa (400,000 or 9.3 percent). With an increase of 43 percent, these groups have undergone the greatest increase in the period between 1990 and 1999, compared with the 3.4 percent increase of the overall immigrant population over the same period.
It is also important to note that immigrants from Southeast Asia are also increasing in number, especially those from China, Pakistan, India, Sri Lanka, and Bangladesh.
Since the 1990s, naturalizations have been on the rise. In 1995, 92,400 people assumed French nationality, including 61,884 by naturalization and 30,000 children of immigrants who assumed nationality. In 2000, 150,025 applications were approved. The origins of these new French citizens are North Africa (48 percent), Europe (16 percent), sub-Saharan Africa (7.5 percent), and Turkey (8.5 percent).
The results of the next census will be available in 2005.
Managing a Mosaic
The visibility of immigrants, especially from North Africa, has shed new light on the difficulties of integrating immigrants and managing diversity. France's long tradition of equating French citizenship with equal treatment has meant that the government has not tracked ethnic origins in official statistics, unlike in the United States or Great Britain. (France has traditionally viewed the retention of ethnic identity as an obstacle to both integration and national solidarity.)
Yet, since the mid-1990s, discrimination has become a new preoccupation of the authorities and scholars. Breaking with the French model of integration that emphasized French identity over ethnic identities, new terms have emerged to help identify these communities, such as the "second generation" or "persons born in France of immigrant parents." These terms have helped to provide more information on the scope of discrimination and its mechanisms.
The prime minister announced on July 8, 2004 the creation of a "cité nationale de l'histoire de l'immigration" (National Center for Immigration History), akin to the Ellis Island Museum in New York City. This center will promote the memory and heritage of different immigrant groups and explore their contributions to French society. It is an important step towards recognizing the crucial place of immigrants in contemporary French history.
Promoting Secularism in a Religiously Diverse Society
Since 1989, controversies surrounding the wearing of a headscarf (hijab) in public schools have sporadically punctuated social and political debates. The wearing of headscarves and other ostentatious religious symbols in public schools has been denounced by some as incompatible with French republican and secularist values. Others fear that attempts to curtail such religious expression only perpetuate an assimilationist model of integration that is not respectful of personal and religious freedom.
In July 2003, in the midst of intensified debates opposing Islamic religious organizations and political institutions, President Chirac appointed a commission to reflect and engage public thought on the principle of secularism in the French Republic. The commission was composed of a broad cross section of the French public, from parents to company managers.
The emphasis of the commission's resulting report is on the need to respect constitutional secularist and republican values in the public sphere as a unifying factor in a diverse society. Although the commission issued 25 recommendations to President Chirac and the parliament, the decision related to the wearing of headscarves has garnered the lion's share of attention. It is also the only recommendation that has been put into effect.
While supporting the argument that freedom of religion should be respected under constitutional law, the commission recommended that ostentatious religious symbols be banned in public schools. The commission's report argued that the French educational system should be a neutral environment where the principles of secularism, republicanism, and citizenship are taught and reflected. The report also supported developing other policies to fight against discrimination in the public sphere, responding to mounting concerns that discrimination is on the rise.
On February 10, 2004, the French Assembly passed the law prohibiting the wearing of conspicuous religious symbols in public schools with a majority of votes (494 to 36) and broad public support. L'Union pour une Majorité Populaire (UMP) and the Parti Socialiste (PS) have been the strongest supporters of the law while the Union pour la Démocratie Française (UDF) and the Parti Communiste Français (PCF) were divided.
The decision to ban headscarves and other ostentatious religious symbols in schools strikes at the core of French concerns about religious freedom in a diverse society. Some critics have argued that such a decision is a political statement designed to allay public concerns. They view the decision as profoundly anti-Islam rather than pro-secularism. It has also been interpreted as an indirect legitimization of anti-Arab stereotypes, fostering rather than preventing racism.
Others have welcomed the decision. They believe that the legislation frees those girls who do not want to wear a headscarf from growing pressures to comply with Islamic law. They view the legislation as an act against religious persecution of any kind rather than a statement on Islam, per se.
The Ministry of Education has been stepping up its efforts to enforce the new law. In practice, this has affected not only Muslim girls but also Sikh boys who have been asked to remove their turbans. Finding the right balance, and one that reflects the protections that the law is ostensibly designed to uphold, will continue to be a popular and political struggle.
While integration has long been viewed as a national prerogative, it is clear that France's actions in this regard will have ramifications across Europe, within other Islamic communities, and within the broader human rights community.
Ongoing efforts to resolve the headscarf affair reflect the myriad interests that have coalesced in France, shaped by immigration, religion, France's colonial history, emerging multiculturalism, domestic labor needs, and the broader context of post-September 11 security politics.
Whether the new law reaffirms the principles of secularism and republicanism in the French public sphere or simply responds to public and political concerns about extremist Islamist groups in France misses a broader point.
Any effort to manage diversity within a democracy requires fair, feasible, and transparent policies that result from broad consultation with affected communities. France is only at the beginning of that iterative process. It is likely that many more changes and adjustments will come as intended and unintended consequences of such legislation become apparent.
December 2003 expert commission report on the principles of secularism in France delivered to French President Jaques Chirac (in French). Available online.
European Council for Refugees and Exiles (ECRE), "ECRE Country Report 2003: France" Available online.
Hamilton, Kimberly. 1997. "Europe, Africa, and International Migration: an Uncomfortable Triangle of Interests." New Community 23(4): 549-570.
Law banning the wearing of any conspicuous signs in public schools (in French). Available online.
Law on the High Authority to combat discriminations and to promote equality (in French). Available online.
Organization for Economic Cooperation and Development Continuous Reporting System on Migration (SOPEMI). Trends in International Migration (various editions). Paris: OECD Publications.
Organization for Economic Cooperation and Development Continuous Reporting System on Migration (SOPEMI). Trends in International Migration. Paris: OECD Publications, 2003: 196-198
Organization for Economic Cooperation and Development. "Education at a Glance 2003: Table C3.5. Number of foreign students in tertiary education by country of origin and country of destination (2001)." Available online.
Papademetriou, Demetrios and Kimberly Hamilton. 1996. Converging Paths to Restriction: French, Italian, and British Responses to Immigration. Washington, DC: Carnegie Endowment for International Peace.
Simon, Patrick. 2000. Les discriminations ethniques dans la société française. Paris: IHESI.
Simon, Patrick. 1999. "Nationality and Origins in French Statistics : Ambiguous Categories." Population: an English Selection 11: 193-220.
Toubon, Jacques. 2004. "Report to the prime minister: Towards the creation of a national center for immigration history." Available online.
United Nations High Commissioner for Refugees, "Asylum Levels and Trends: Europe and non-European Industrialized Countries, 2003," February 2004. The site can be found under the UNHCR website. Go to Statistics and click on Asylum Trends.
Weil, Patrick. 1997. Mission d'étude de legislations de la nationalité et de l'immigration. Paris: La documentation Française.
Weil, Patrick. 2004. "Lifting the Veil of Ignorance." Progressive Politics. Vol. 3.1. March. | <urn:uuid:61c2c349-4d10-4080-85bf-2077f8e76334> | CC-MAIN-2019-47 | https://www.migrationpolicy.org/article/challenge-french-diversity | s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496670743.44/warc/CC-MAIN-20191121074016-20191121102016-00297.warc.gz | en | 0.952835 | 4,937 | 3.140625 | 3 |
Stunned researchers in Antarctica have discovered fish and other aquatic animals living in perpetual darkness and cold, beneath a roof of ice 740 meters thick. The animals inhabit a wedge of seawater only 10 meters deep, sealed between the ice above and a barren, rocky seafloor below—a location so remote and hostile the many scientists expected to find nothing but scant microbial life.
A team of ice drillers and scientists made the discovery after lowering a small, custom-built robot down a narrow hole they bored through the Ross Ice Shelf, a slab of glacial ice the size of France that hangs off the coastline of Antarctica and floats on the ocean. The remote water they tapped sits beneath the back corner of the floating shelf, where the shelf meets what would be the shore of Antarctica if all that ice were removed. The spot sits 850 kilometers from the outer edge of the ice shelf, the nearest place where the ocean is in contact with sunlight that allows tiny plankton to grow and sustain a food chain.
“I’m surprised,” says Ross Powell, a 63-year old glacial geologist from Northern Illinois University who co-led the expedition with two other scientists. Powell spoke with me via satellite phone from the remote location on the West Antarctic Ice Sheet, where 40 scientists, ice drillers and technicians were dropped by ski-mounted planes. “I’ve worked in this area for my whole career,” he says—studying the underbellies where glaciers flow into oceans. “You get the picture of these areas having very little food, being desolate, not supporting much life.” The ecosystem has somehow managed to survive incredibly far from sunlight, the source of energy that drives most life on Earth. The discovery provides insight into what kind of complex but undiscovered life might inhabit the vast areas beneath Antarctica’s ice shelves—comprising more than a million square kilometers of unexplored seafloor.
The expedition, funded by the National Science Foundation, had ventured to this location to investigate the history and long-term stability of the Whillans Ice Stream, a major glacier that flows off the coast of Antarctica and feeds into the Ross Ice Shelf. The expedition began in December as tractors towed massive sleds holding more than 400 metric tons of fuel and equipment to a remote location 630 kilometers from the South Pole and 1,000 kilometers from the nearest permanent base.
In early January the team began an unprecedented effort to drill through the ice to reach a place called the grounding zone—essentially, a subglacial beach where the glacier transitions from resting on bedrock to floating on sea water as it oozes off the edge of the continent. A team of ice drillers from the University of Nebraska-Lincoln (U.N.L.) used a jet of hot water from the end of a Kevlar hose a kilometer long and as big around as an ankle to melt a hole through the ice into the seawater below.
Until now no one had ever directly observed the grounding zone of a major Antarctic glacier. And from the moment the hole was first opened on January 7 Pacific time, it appeared that this place didn’t hold much in the way of life.
Deceived by lifeless mud
A downward-facing video camera lowered through the borehole found a barren sea bottom—“rocky, like a lunar surface,” Powell says. Even deep “abyssal” ocean floors three or four kilometers deep in the ocean usually show some signs of animal life: the tracks of crustaceans that have scuttled over the mud, or piles of mud that worms have ejected from their burrows. But the camera showed nothing of the sort. Cores of mud that the team gently plucked from the bottom also showed no signs that anything had ever burrowed through underneath. And seawater lifted from the bottom in bottles was found to be crystal clear—suggesting that the water was only sparsely populated with microbes, and certainly not enough of them for animals to graze and sustain themselves on.
“The water’s so clear—there’s just not much food,” says Trista Vick-Majors, on a separate satellite call. Vick-Majors is a PhD microbiology student from Montana State University, who handled samples of water lifted from the bottom. What’s more, sediments in the sea floor were packed with quartz, a mineral that holds little nutritional value for microbes. When mud is raised from the bottom of an ocean or lake, it is often possible to smell gases such as hydrogen sulfide that are produced by microbes—“your nose is a great detector of microbial activity,” says Alex Michaud, a microbiology PhD student also from Montana State who is working with the sediment samples. “But I don’t smell anything.”
The revelation that something larger lived down there in the dark came eight days after the hole was opened, on January 15 Pacific time.
The finding depended on a skinny, 1.5 meter-long robot called Deep-SCINI, with eyes made of reinforced, pressure-resistant sapphire crystal and a streamlined body of aluminum rods and high-tech, “syntactic” foam comprising millions of tiny, hollow glass beads.
Deep-SCINI, a remotely operated vehicle (ROV), is designed to slip down a narrow, icy borehole and explore the water cavity below. It carries sapphire-shielded cameras, a grabber arm, water-samplers and other instruments. Robert Zook and Justin Burnett, from the U.N.L. ice-drilling program, had worked day and night to finish building it in time for the expedition, flying to New Zealand and then Antarctica with it in their carry-on cases.
Just after lunch on January 16 workers in hard hats coupled Deep-SCINI to a fiber optic cable as thick as a garden hose. A winch atop the drill platform hummed into action, unwinding cable from a giant spool, lowering the ROV down the hole. Deep-SCINI had “flown” (as Zook called it) in swimming pools and tested once in a pressure chamber to confirm that it could survive the deep ocean. But this would be its first real dive, deeper down through glacial ice than any ROV had ever ventured.
A dozen people crowded inside a compact control room, built inside a cargo container mounted on skis, to watch the ROV’s maiden flight play out on several video monitors.
The view down the hole was obscured by a block of concrete hanging from Deep-SCINI’s claw—intended to keep the craft vertical in the narrow borehole, only three quarters of a meter across. Instead, for 45 minutes as the ROV crept downward, its side-looking camera caught images of dark debris layers on the walls of the hole, trapped deep in the ice, possibly the remains of volcanic ash or other dust deposited on the ice surface thousands of years ago. The researchers discovered the layers several days earlier when they first drilled the hole. They later found pebbles at the bottom, suggesting that the underside of the ice sheet might be melting faster than people had thought (see my story on that discovery here). Fast melting could allow the massive glacier on land to slide into the sea more quickly that scientists had anticipated.
Finally the walls of the hole, lit by Deep-SCINI’s lamp, fell away into darkness. The ROV emerged into a boundless void of pitch-black water beneath the ice. Bright flecks streamed down like falling stars past the side-looking camera—the light of Deep-SCINI’s lamps reflecting off bits of sand, trapped in the ice for thousands of years, now falling to the seafloor somewhere below after being disturbed by the robot’s descent.
The ROV reached the rocky bottom. Burnett (a PhD student), sitting at the controls in the cargo container, nudged a lever: the claw opened, the concrete weight came to rest on the bottom and Deep-SCINI righted to a horizontal position. Zook, the self-taught engineer who conceived this ROV and designed much of it, sat beside Burnett, operating cameras and displays. People standing in the unlit room stared into the blackness of the video monitors. Here and there they glimpsed hints of motion just past the reach of the lights: a bit of falling debris that suddenly changed direction, or a shadow flitting through a corner.
Burnett and Zook continually worked around problems as they piloted an ROV clearly still in its test stage. An overheating problem—ironic, in this place—forced them to operate the thrusters below capacity. No navigation system had yet been built into the ROV, so they maneuvered using tricks—flying from one large rock on the bottom to another, or having the winch operator reel in a couple meters of cable, to tug the ROV from behind and point it away from the hole. They found themselves working on an unexpectedly short leash—forced to stay within 20 or 30 meters of the hole by a tether cable snagged somewhere above.
At last Burnett and Zook brought Deep-SCINI to a standstill a meter above the bottom, while they adjusted their controls. People in the cargo container stared at an image of the sea floor panned out on one of the video monitors, captured by the forward-looking camera. Then someone started to yell and point. All eyes swung to the screen with the down-looking camera.
A graceful, undulating shadow glided across its view, tapered front to back like an exclamation point—the shadow cast by a bulb-eyed fish. Then people saw the creature casting that shadow: bluish-brownish-pinkish, as long as a butter knife, its internal organs showing through its translucent body.
The room erupted into cheering, clapping and gasps. “It was just amazing,” recalls Powell.
Bored out of their minds
Deep-SCINI stayed down in the wedge of seawater for six hours. When Burnett parked it on the bottom, a fish—watching, sitting motionless far off across the bottom, gradually came closer, swimming from one motionless perch to another over a period of 20 minutes until it came within half an arm length of the camera. These fish, attracted perhaps by the novelty of light, were “curious and docile,” Zook says. “I think they’re bored. I know I would be.”
All told, the ROV encountered 20 or 30 fishes that day. “It was clear they were a community living there,” Powell says, “not just a chance encounter.” The translucent fish were the largest. But Deep-SCINI also encountered two other types of smaller fish—one blackish, another orange—plus dozens of red, shrimpy crustaceans flitting about, as well as a handful of other marine invertebrates that the team has so far declined to describe.
To the microbiologists who were present, the most exciting thing was not the discovery of fish itself, but rather what it says about this remote, unexplored environment. Just three days before the discovery, Brent Christner, a microbiologist from Louisiana State University (L.S.U.) with years of experience studying ice-covered Antarctic lakes, had agreed with Vick-Majors that life in the water would be limited to microbes with sluggish metabolic rates. “We have to ask what they’re eating,” he says, when I asked later on about the fishes. “Food is in short supply and any energy gained is hard-won. This is a tough place to live.”
One source of food could be small plankton, grown in the sunlit waters of the Ross Sea then swept by currents under the ice shelf. But oceanographic models suggest that this food would have to drift six or seven years under the dark of the ice shelf before reaching the Whillans grounding zone, encountering plenty of other by other animals along the way. “The water will be pretty chewed on by the time it gets here,” Vick-Majors says.
The ecosystem could also be powered by chemical energy derived from Earth's interior, rather than sunlight. Bacteria and other microbes might feed on mineral grains dropped from the underside of the ice or flushed into the sea water by subglacial rivers flowing out from beneath the West Antarctic Ice Sheet. The microbes at the bottom of the food chain could also be fed by ammonium or methane seeping up from ancient marine sediments hundreds of meters below. In fact, two years ago when this same team drilled into a subglacial lake 100 kilometers upstream, they found an ecosystem that fueled itself largely on ammonium—although in that case, the ecosystem included only microbes, with no animals present.
People had speculated that the nutrient-poor environment beneath Antarctica’s large ice shelves would resemble another underfed habitat—the world’s vast, abyssal sea floors sitting below 3,000 meters. But important differences are already emerging: The muddy floors of the oceanic abyss are populated by worms and other animals that feed on bits of rotting detritus that rain down from above. But the mud cores brought up so far from the Whillans grounding zone haven’t revealed such animals. Nor did Deep-SCINI’s cameras. “We saw no established epi-benthic community,” Powell says. “Everything living there can move.”
These new results are still extremely preliminary but a similar pattern was seen in the late 1970s when a hole was briefly melted through another part of the Ross Ice Shelf not as far inland—the so-called J9 borehole, which reached a layer of sea water 240 meters thick, sitting 430 kilometers in from the edge of the ice. Fish and crustaceans were seen in the water, but nothing was spotted in the mud. The lack of mud dwellers might indicate that animals living this far under the ice shelf must be mobile enough to follow intermittent food sources from place to place.
Whatever the ultimate energy source, bacteria would serve as food for microscopic organisms called protists, crustaceans would eat the protists and fishes would eat the crustaceans—or sometimes, one another’s young—says Arthur DeVries, a biologist at the University of Illinois at Urbana-Champaign. DeVries was not on this expedition but has spent 50 years studying fishes living near the exposed front of the Ross Ice Shelf.
Whether the fishes themselves represent something truly novel to science remains to be seen. Photographs and videos will have to be extensively analyzed and the results published in a peer-reviewed journal before the team is likely to say much more. The fishes could turn out to belong to a single family, called the Nototheniidae, DeVries says. These fishes began to dominate Antarctica starting around 35 million years ago, when the continent and its surrounding oceans began to cool precipitously, and the fishes evolved proteins that helped them avoid freezing solid.
Years of data to come
Even with the joyful discovery of fishes, the day was far from over. Back in the control room Burnett and Zook struggled to overcome technical difficulties and bring Deep-SCINI back to the surface. A buoyant string, floating like a helium balloon above the concrete block that it was tied to, helped them find their way back to the block—and hence to the borehole that would be the robot’s exit. Even then, the duo had to grab hold of the weight again, in order to put the ROV back in its vertical pose and ascend the hole. One of Deep-SCINI’s cameras had been bumped out of position during the dive, so it no longer focused on the claw. The two operators spent 45 minutes trying to snag it before finally succeeding.
“We had a minor miracle,” says Zook, of Deep-SCINI’s maiden flight. Antarctica’s harsh conditions tend to punish innovation, he notes: “The rule of thumb down here is that any new technological thing does not work for the first deployment.”
Two hours after Deep-SCINI was hoisted into daylight atop the drill platform, another instrument was lowered and parked at the bottom for 20 hours to measure gases, currents, temperatures and salinity– all expected to change as remote ocean tides push and pull at this deep recess of water. Throughout that time, a down-looking lamp and camera repeatedly attracted visitors—reddish crustaceans or inquisitive fishes.
Up top, Zook fashioned some window screening into a trap for crustaceans. Michaud built a fish trap using parts from a lobster trap that Zook had purchased as a joke at a sporting goods store in New Zealand while in route to Antarctica. They have so far caught a handful of crustaceans for further scientific study—but no fishes, at this writing.
Even as all of this went on, work continued for an expedition whose overall goal was to understand the behavior of the glacier as it meets the ocean. Slawek Tulaczyk, a glaciologist from the University of California, Santa Cruz, who co-led the expedition with Powell and another scientist, missed the fish hubbub because he was a short distance away on the ice surface, lowering a string of sensors into another hole that had just been melted through the ice. The hole will refreeze, sealing the string in the ice shelf. For years to come it will record temperatures up and down the ice, and also in the water below. It will record the ebb and flow of tides, and pulses of cloudy water from subglacial rivers flowing into the ocean. Tilt meters will measure how the ice shelf flexes in response to the tide that rises and falls a meter beneath it each day. Seismic sensors will record the pops and snaps as crevasses erupt on the underside of the flexing ice. The goal is to find out how much heat and mechanical stress is being delivered to the grounding zone of the Whillans Ice Stream.
“I know it sounds wonky,” Tulaczyk wrote via email—by which he might have meant, less cute and charismatic than bug-eyed fishes. But this data, he says, will fill in some key unknowns about how quickly ice will melt from the underbelly of this glacier.
Right now the Whillans Ice Stream is actually slowing down a little each year—a rarity among glaciers in Antarctica—part of a complex cycle of intermittent stops and starts that occur over hundreds of years in several glaciers that feed into this part of the Ross Ice Shelf. Knowing the melt rate at the Whillans grounding zone could shed light on the meaning of last week’s discovery of stones raining down from the underside of the ice there. It could determine whether changes are already underway that could overcome the Whillans’s current slowdown and cause it to accelerate its flow into the ocean once more. All of this is important for understanding how glaciers in this part of Antarctica might contribute to global sea level rise.
Even as the downward camera recorded the comings and goings of fishes for 20 hours on January 15 and 16 Pacific time, Tulaczyk was focused on something else, far more subtle, in the camera’s view. A weight planted on the bottom below the hole was sliding past the camera – slowly at first, then faster. The weight was stationary but the glacier above it had begun to slide: The Whillans Ice Stream is known for its bizarre habit of staying still most of the time but lurching forward twice per day—but this was the best measurement that had ever been obtained.
Those layers of dust or ash that Deep-SCINI documented on its way down the hole will also keep ice guys like Tulaczyk and Powell busy for some time. “It was a great trip down, even before the fish,” says Powell. “It will be a great data set.”
As the microbiologists head home with their water and mud samples they will face the unexpected task of figuring out whether this entire ecosystem, including the fishes, really does sustain itself on methane, ammonium or some other form of chemical energy. “That would be really exciting,” L.S.U.'s Christner says. “Our samples can help answer that.” | <urn:uuid:f6f59f69-9b7d-44c4-ba74-10156c1cd5ae> | CC-MAIN-2019-47 | https://www.scientificamerican.com/article/discovery-fish-live-beneath-antarctica/ | s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496665985.40/warc/CC-MAIN-20191113035916-20191113063916-00497.warc.gz | en | 0.958726 | 4,277 | 3.859375 | 4 |
The cavernous sinus is, at least from the angiographic perspective, a metaphysical entity. It is a collection of extradural venous compartments, often functionally separate, which altogether constitute the venous space we have come to regard as a distinct anatomical structure. It is critical for the neurointerventionalist to understand this, because his or her treatments will be, of necessity, targeting these varied and sometimes complex subspaces. Almost everything about this structure is defined by variability — size, extent of compartmentalization, inflow and outflow sources — not to mention its embryology — thus making for a fascinating topic. The classic story is written and illustrated best by the immortal Albert Rhoton — it is a squarely neurosurgical perspective and a must read even for those who need not make a new hole to get inside the head (click here for amazing 3-D videos, the passion for which neuroangio is proud to share).
Credit for the name goes to Jacob Benignus Winslow (1669-1760), (Jacques Benigne Winslow, s’il vous plaît) who writes “The cavernous or lateral Sinuses of the Os Sphenoides are Reservatories of a very particular kind, containing not only Blood, but considerable Vessels and Nerves, as we shall see hereafter; and likewise a spungy or cavernous substance full of blood, much like that of the Spleen or Corpus Cavernosum of the Urethra.” The name stood the test of time. According to dictionary.com, cavernous means “deep-set, containing caverns, full of small spaces”.
For our purposes, the cannon holds that cavernous sinuses are paired, non-urethral dural venous structures located between the sella and Meckel caves, joined by the intercavernous venous plexus, and each individually connected to the superior opthalmic vein, the sphenoparietal sinus / superficial Sylvian veins, the superior and inferior petrosal sinuses, the pterygopalatine venous plexus, and the clival venous plexus. Each sinus contains a variable number of septations which may separate it into functionally distinct compartments. The anatomy of cranial nerves related to the sinus is well-described in a myriad places elsewhere — again Rhoton’s works being the #1 reference in my opinion — and there are many others besides, for those who shun being persons of one book.
Now should be time to offer a suggestion, especially for the reader reactively self-assessed to the “practical” camp — to look for good clinical value in embryology. Yet the classic story of cavernous sinus development is checkered and inconsistent, in my opinion. The main sources are Dorcas Padget — a brilliant researcher and illustrator, whose work remains essentially intact today, and likely to remain so for as long as tiny human embryos are no longer dissected for gross anatomical knowledge; and George L. Streeter — Padget’s predecessor and author of seminal works using the Carnegie embryo collection.
The cavernous sinus develops very early (10 mm embryo) primarily acting as a receptacle of orbital and facial venous drainage (future superior ophthalmic vein). In this way, it comes into existence and forever remains an extradural entity — Padget holds that at birth the cavernous sinus does not participate in drainage of the cerebral hemispheres (via the Sylvian veins) — the connection between such veins and sinus maturing later. This view explains for example why the superior ophthalmic vein is always connected to the cavernous sinus, while the Sylvian veins may or may not be. Others, notably Prof. Raybaud, hold that de novo connections should not evolve for either arteries or veins at this late point in development and somehow the theory will have to be modified to explain the fact of an already-existing brain-to-sinus anastomoses. Additionally, there is disagreement about the role of the so-called sphenoparietal sinus vis a vis both cavernous sinus and the sylvian veins. The classic viewpoint holds that the sphenoparietal sinus evolves from a very early structure called the tentorial sinus which drains the developing cerebral hemispheres. Phillipe Gailloud et al, however, based on specimen and literature analysis (including original manuscripts by the likes of Brechet, Trolard, and others), believe that the sphenoparietal sinus does not in fact exist, and the Sylvian veins drain directly into the cavernous or paracavernous sinus (more on the latter later). The Gailloud viewpoint seems like the better one when angiograms are analyzed with unprejudiced eyes.
My very unproven opinion, to the propagation of which i am entitled by claim of authorship, holds that variability in form and function stem from the physiologic and anatomical location of the cavernous sinus at the crossroads of intra- and extra-cranial venous drainage. Phylogenetically, there is tremendous variability in venous drainage patterns of the intra-and extra-cranial tissues, depending on relative size of the brain to that of the head, the size and position of the eyes in the skull, preference for internal vs. external jugular systems, etc. In the human, the central anatomical location of the sinus and markedly variable development of its major tributary and drainage routes on an individual basis cannot but produce tremendous variation.
The practical lesson of embryology is to appreciate that the cavernous sinus is best understood not by regarding it as a static entity but as a receptacle of varied inflow and source for varied outputs. To understand each cavernous sinus we must understand its unique inputs and outputs. This is a distinctly hemodynamic view, which is complementary to Rhotons.
1) Superior Ophthalmic vein — typically flows into the sinus, but can easily reverse in cases of overall intracranial or specifically cavernous sinus hypertension
2) Sphenoparietal sinus — draining superficial Sylvian veins into the sinus. See discussion below on whether or not a sphenoparietal sinus really exists or whether Sylvian veins drain directly into the sinus.
3) Basal vein of Rosenthal — its first segment, when connected to the CC, typically flows into it, but easily reverses flow in cases of fistula, etc.
4) Petrosal or bridging veins of the prepontine / cerebellopontine angle cisterns
1) Superior Petrosal Sinus
2) Inferior Petrosal Sinus
3) Foramen Ovale, and other skull base foramina to the pterygoid venous plexus
4) Contralateral Cavernous sinus thru intercavernous channels
5) Clival (basilar) venous plexus down to foramen magnum region, and from there into jugular veins or marginal sinus.
Nothing shows cavernous sinus as well as a venous injection — such as during Petrosal Venous Sinus Sampling. Look at this awesome frontal picture, courtesy of Dr. Eytan Raz
And now, with labels
Angiographic Visualization — look at all vessels!
Remember this — angiography does not study the whole head at once — all vessels should be injected before making conclusions about a particular structure. For the cavernous sinus, even though the major tributaries are related to the anterior circulation, sometimes these do not drain into cavernous sinus. However, large portions of the posterior circulation, via petrosal or bridging veins, often do. In the following example, right ICA injection shows no cavernous sinus. The deep sylvian tributaries (light blue) of the basal vein (purple) drain towards the Galen, and another set of deep Sylvian veins (white) drains via a short probably sphenoparietal sinus (black) into the orbit (orange, strange also)
Same thing on the left, there is no cavernous sinus opacification
It is however unwise to conclude that this homo sapiens lacks both cavernous sinuses, which are very much alive and well, as you can see below from the following very neat stereoscopic projections
The legend below explains how in this case the superior petrosal sinuses (purple) are somewhat hypoplastic and not connected to the transverse sinuses. Instead, the anterior group of cerebellar veins and bilateral lateral recess veins drain into the petrosal veins (blue), which then empty into a very much alive and well cavernous sinuses (white) and from there into the inferior petrosal sinuses (black). Notice silhouette of the ICA (red).
Superior Ophthalmic Vein — External Carotid Injection
Even more importantly to the point of injecting every possible tributary is the study of the external carotid artery. As all good medical students know, there is a facial triangle where veins drain via the SOV into the cavernous sinus. Not that any of us in the USA will ever see a cavernous thrombophlebitis from a facial zit (we do have a case of course! — see here). But the nasal soft tissues are the most vascular ones that will drain into the cavernous sinus. So, don’t forget to get a common or external carotid view.
Here is an internal view with only the back of the cavernous sinus (orange) opacified, and of course the SOV is not seen because there is no intracranial hypertension and the flow in it is from outside into the cavernous sinus.
External carotid injection, however, shows prominently vascular nasal soft tissues (red), and in venous phase beautifully shows both SOVs (white) draining into both anterior cavernous sinuses (purple). Notice right angular vein (yellow).
Another example — the ophthalmic vein is not seen on ICA injection, but is quite prominent with ECA. Bilateral filling of SOV is common, reflecting mostly the midline location of the nasal venous apparatus which drains into the SOV. Antegrade flow from the face and orbit into the cavernous sinus is indirect evidence of relatively normal intracranial pressures. One of the angiographic findings of intracranial hypertension, of any cause, is retrograde ophthalmic vein flow. On the other hand, incidental finding of such retrograde flow is not necessarily pathologic, in my experience.
Ipsilateral ECA injection
Angiographic visualization — mask image
A neat way of projecting arterial phase as a mask for venous phase to demonstrate carotid artery relationship to the cavernous sinus. Notice the intracavernous lucency (purple) corresponding to the carotid artery. The dominant inflow arrives from the superficial sylvian veins, and outflow is into the inferior petrosal sinus.
Below is another example of a cavernous sinus opacified mainly by the superficial sylvian veins:
Cavernous Sinus (dark blue). The sylvian network and “sphenoparietal sinus” (orange) are drain into the cavernous sinus with a well-developed inferior petrosal sinus (light blue). Notice superimposition of the basal vein (purple) and the anterior choroidal artery (red)
How low does the cavernous sinus go?
A relatively well-known classification of the internal carotid artery segments by Bouthillier lists the “lacerum” segment. The alternative and unfortunately less well known classification by Zial dispenses with the lacerum segment on the grounds that the cavernous sinus extends all the way down to the petrous bone where the carotid enters the sinus distal to the petrolingual ligament. Well, Zial is right. The cavernous sinus indeed extends very low, in most people. This can be beautifully shown angiographically. Here are ANAGLYPH stereoscopic images of the cavernous sinus in double mask images with carotid artery as mask image, showing its very low extent.
Even better are “triple mask” with bone added to boot
Here are the usual crosseye stereos
Typical appearance of cavernous sinus (black) on an ICA injection — major inflow is via the superficial sylvian veins (pink) and the deep sylvian vein – basal vein conduit (yellow, notice full extent of basal vein between CC and the Galen), and outflow into inferior petrosal sinus (white) and pterygoid venous plexus (purple)
The same in stereo:
Another example of normal cavernous sinus anatomy, silhouetting the vertical and horizontal segments of the cavernous internal carotid artery segment (light blue). Again, remember the typical deep inferior extent of the cavernous sinus, projecting just above the petrous bone.
Stereo image. Notice dominant superficial Sylvian inflow and egress into the inferior petrosal sinus. Notice how, oftentimes, dominant superficial Sylvian veins are multiple in number — there are two coursing in parallel to join the cavernous sinus — to me this argues against presence of a unique sphenoparietal sinus.
Stereo with carotid artery as mask image
Why is the superior ophthalmic vein not seen on the above injections? The answer is again the same — one cannot see the vein if the drainage territory of this vein is not injected. Because the superior ophthalmic vein drains extracranial territory towards the cavernous sinus, the SOV is only rarely seen via ICA injections (it can be seen because ophthalmic artery can extensive supply nasal tissues, as in cases of failed IMAX epistaxis embolization). Usually, though, it is the external injection that shows the SOV.
This case illustrates an internal carotid injection which does not visualize the SOV (catheter marked by white arrows)
A common carotid injection (white arrows external branches) of the same patient shows the SOV (blue arrows)
Another example of SOV visualization via ECA injection, draining in antegrade intracranial fashion
ICA injection of the same patient, in stereo, barely visualizing the SOV draining the intraorbital tissues (notice blush of choroid and extra-ocular muscles, not labeled). There is, however, a normally elusive inferior ophthalmic vein (purple) here
Ophthalmic Vein Cavernous Sinus anatomy via a carotid cavernous fistulogram
Left ICA injection demonstrating enlarged superior (purple) and inferior (light blue) ophthalmic veins draining a carotid-cavernous sinus (dark blue) fistula. The fistula was approached via surgical exposure, gaining access into the superior ophthalmic vein (lateral center and AP right images) and closed by coiling. The microcatheter is labeled in red.
Cavernous Sinus Asymmetry
Dominance of one transverse sinus over another is routine. Cavernous sinuses are usually symmetric in their development — spectrum from very large to essenially absent. Except when they are asymmetric. Rare but still normal. Important to exclude pathology such as thrombosis, tumor, etc. However, when one sinus is bigger than another, the usual reason is simple variation. Here is an example of a patient who got an angio for suspected left cavernous sinus dural fistula.
CTA (poor technique, venous phase contamination) shows asymmetric prominence of left cavernous sinus (arrow) A tail (arrowhead) is seen also (see below).
Notice absence of other dural fistula signs such as congestion of SOV or other nearby veins
Angio of vert and right ICA. The right cavernous sinus is hypoplastic
The left one is definitely not. Sylvian veins drain into the paracavernous sinus which connects (arrowhead) to the main cavernous sinus (arrow). The connection corresponds to the “tail” on CTA. More on paracavernous sinus below.
Cavernous sinus connections demonstrated by cavernous sinus dural fistula.
Nothing demonstrates the limits of anatomical plasticity quite like pathology. What better way to show all available cavernous sinus connections than a high-pressure cavernous sinus state, exemplified by the various complex and challenging fistulas which afflict it. This one, though “dural” or “indirect” in origin (first term being appropriate, and second frankly short-signed), is high-flow enough to expect that every exit will do its duty.
Red= Distal ECA (catheter seen on lateral view below the lower red arrow); Orange=foramen rotundum branch; Yellow=MMA; green=cavernous sinus; dark blue = superior ophthalmic vein; light blue=facial angular vein; purple=basal vein; bright green=straight sinus; black=sigmoid sinus; brown=jugular bulb; pink = petrosal vein congesting inferior cerebellar veins; white=inferior petrosal sinus; double yellow=sphenoparietal sinus; double light blue=reflux into brain veins via basal vein.
As one can see, not every egress is available. For example, the superior and inferior petrosal sinuses are either hypoplastic or thrombosed as a result of the fistula. The superior ophthalmic and contralateral cavernous egress is insufficient, with congestion of the supra- and infra-tentorial brain parenchymal veins.
Although it is a safe bet that this sinus usually exists, its visualization in the nonpathologic state is hampered by lack of pressure gradient for contrast to flow from one side to another. The sinus is best seen in pathologic sates of cavernous sinus hypertension (fistulas again) which frequently either congest both sinuses or lead to a pressure gradient between them. Here is an angiographic example of an intercavernous sinus (yellow) in a patient with a large left temporal lobe AVM. The carotid artery flow voids are inside blue circle/oval. Lots of other good information is here. The interpeduncular vein is shown in white. Basal vein middle and posterior portions (dark blue arrows) outline the midbrain. Anterior portion of the basal vein (light blue arrow) is contiguous with the deep sylvian (a.k.a. middle cerebral ) vein (pink). The pterygopalatine plexi are marked by green arrows.
Another example of a cavernous sinus venogram, with the posterior intercavernous sinus (some would call it the upper segment of the basilar venous plexus) shown in white. Superior petrosal sinuses are pink, and carotid artery silhouettes are red.
Superficial Sylvian veins, “laterocavernous sinus” and “paracavernous sinus”
The debate as to whether the sylvian veins drain directly into the cavernous sinus or by way of the “sphenoparietal sinus” or Brechet is, in my opinion, won by those who believe in direct venous drainage. A sinus does appear to exist along the sphenoid ridge, and is developmentally related to the middle meningeal veins which are called “anterior parietal”, for good embyologic reasons though they in fact are in the frontal bone. That sinus is likely not connected to the Sylvian veins. In many instances, also, several superficial sylvian veins seem to course in parallel to join the cavernous sinus individually. Either these represent multiple sphenoparietal sinuses, or perhaps all but one of them are true veins, or perhaps all are in fact veins and there is no sphenoparietal sinus related to these. The latter seems to be the preferred option. In the following example, three separate superficial Sylvian veins (purple) run adjacent to the sphenoid ridge to join a common channel (white) which may be a common vein or perhaps a short sphenoparietal sinus.
Stereo of the same
So, lets drop the “sphenoparietal sinus” name for now.
The Sylvian veins may be more or less developed in any given individual (see Superficial Venous System page). When prominent, they collect opercular territory and course along the sphenoid ridge. Then, several possibilities exist:
1) Direct drainage into the cavernous sinus, with subsequent outflow via the usual suspects (petrosal sinuses)
2) Drainage into a space which appears to be lateral to the cavernous sinus, from where flow is directed via foramen ovale into the pterygopalatine venous plexus. Some have termed this space “paracavernous or lateral cavernous sinus”, though I am not familiar with such a concept existing in the neurosurgical literature, nor in the opinion of my neurosurgical colleagues (Jafar Jafar, M.D., personal communication). It is likely an example of a hemodynamically isolated compartment (metaphysics).
3) Drainage via an inferior temporal vein into the transverse/sigmoid sinus junction
Below is an example of dominant Sylvian (purple, pink arrows) egress into the pterygopalatine venous plexus (dark blue), via a compartment quite lateral to the internal carotid artery which angiographically marks the location of the cavernous sinus (light blue double arrow) — this (purple arrows) would be considered a “laterocavernous sinus”
Beautiful Examples of Cavernous Sinus Compartments
Below, the compartmentalization of the cavernous sinus is elegantly shown by pathology, again in form of a cavernous sinus dural arteriovenous fistula. Injections of the right and left carotid arteries demonstrate a fistula at the posterior aspect of the left cavernous sinus medial compartment (pink), supplied by various branches of the left and right MHT. The venous drainage of this compartment is directed into the engorged superior ophthalmic vein (red). Despite left cavernous sinus congestion, as evidenced by the precarious state of the superior ophthalmic vein, the patient’s normal hemispheric venous drainage proceeds, completely unimpeded, via dominant superficial sylvian veins (light blue) into the “laterocavernous sinus / lateral cavernous sinus compartment (purple) and through foramen ovale (yellow) into the pterygopalatine venous plexus (black). How’s them apples?
The same arrangement is shown in the lateral projections of early (left) arteriovenous shunting and brain venous phase (right) lateral compartment (purple) drainage. Neither compartment communicates with the other, as evidenced by their mutually separate drainage routes. Understanding this anatomy allows the operator to consider transvenous coiling of the fistulous medial compartment without compromise of the sylvian venous outflow.
Another Example of Cavernous Sinus Comparmentalization — Meningioma
Usually, in cases like these, the ipsilateral cavernous sinus is occluded. However, one value for preoperative angiography is to look for all kinds of potential problems. Here, we see tumor supply arising from the ILT (red arrow) . Venous phase shows that, contrary to expectations, the cavernous sinus is still open, and necessary, receiving superficial sylvian venous drainage — another example of laterocavernous sinus.
Venous phase images show superficial sylvian veins (purple) draining into the lateral cavervnous sinus (blue). The foramen ovale egress into the pterygopalatine fossa is closed (white arrow points to stump). Instead, venous drainage proceeds toward the posterior left cavernous sinus, and into both inferior petrosal sinuses (pink), via the intercavernous sinus (black). All important information for the surgeon.
Yet another example of lateral sinus compartment — giant cavernous aneurysm
In this giant aneurysm the dominant sylvian veins continue to drain via the lateral compartment of the cavernous sinus. The images below are ANAGLYPH STEREOS
Venous phase shows drainage of dominant sylvian veins into lateral compartment around the aneurysm
Same in lateral
Really, it is a different sinus. Instead of running along the lateral wall of the cavernous sinus, it projects along the floor of the middle cranial fossa completely independently and usually drains via ovale or spinosum into pterygopalatine venous plexus. Some believe this to be remnant of the embryonic “primitive parietal sinus” — see venous embryology page for that. Here is an example. Runs along the anterior aspect of middle cranial fossa (black arrows), then along the floor (black arrowheads) and out via ovale (white arrowhead) and spinosum (white arrow). Arterial phase mask images below show relationship to the MMA
Another example — actually same patient on right. More medial course but still not in cavernous sinus (black arrows). Exit via ovale (white arrows)
Deep Sylvian veins and Basal vein to Cavernous Sinus Anastomoses
The basal vein, in its full expression, is an unbroken conduit between the cavernous sinus and the vein of Galen (see Deep Venous System page for dedicated info). Below is an example of a fully contiguous basal vein (purple), with its deep sylvian component (pink) draining into the cavernous sinus (white) near the confluence of the superficial sylvian vein(blue). The main outflow of the cavernous sinus is the inferior petrosal sinus (black).
Another example of the same, also shown previously. Basal vein (yellow) usually connects with superficial sylvian veins (pink) just before the common trunk empties into the cavernous sinus (black)
In the example below, the basal vein (purple) is not connected to the cavernous sinus, which is hypoplastic — the deep sylvian vein (pink), with well-displayed uncal and lenticulostriate tributaries (yellow) drains via the basal vein towards the Galen.
“Case Archives Cavernous Sinus Thrombophlebitis” page for example of this classical entity diagnosed, of course, by catheter angiography(!) | <urn:uuid:c8f02859-b6c9-4ca6-aada-105e71e2df67> | CC-MAIN-2019-47 | http://neuroangio.org/venous-brain-anatomy/cavernous-sinus/ | s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496665976.26/warc/CC-MAIN-20191113012959-20191113040959-00057.warc.gz | en | 0.883133 | 5,725 | 2.71875 | 3 |
A couple of months ago I visited Niah Caves near Miri in Sarawak. You can read about my trip on my Malaysia Traveller website.
This cave complex is one of Malaysia’s most impressive natural wonders and it was nominated for UNESCO World Heritage status in 2010, though it has not yet achieved that honour.
The caves and some 3,100 hectares of the surrounding rainforest and limestone hills were gazetted as Niah National Park in 1974, meaning they should be preserved in pristine condition in perpetuity.
The national park is managed by Sarawak Forestry Corporation which describes Niah as follows:
Niah is one of Sarawak’s smaller national parks, but it is certainly one of the most important, and has some of the most unusual visitor attractions. The park’s main claim to fame is its role as one of the birthplaces of civilisation. The oldest modern human remains discovered in Southeast Asia were found at Niah, making the park one of the most important archaeological sites in the world.
Yet there is much more to Niah than archaeology. A vast cave swarming with bats and swiftlets; the thriving local economy based on birds-nests and guano; ancient cave paintings; a majestic rainforest criss-crossed with walking trails; abundant plant and animal life – all these and more make up the geological, historical and environmental kaleidoscope that is Niah.
Given the importance of the site for tourism you would think that everything possible would be done to protect this valuable, fragile and irreplaceable asset. However, on my recent visit, it was disappointing, but sadly not surprising, to see that the area is under threat from quarrying.
This Google Maps image shows extensive quarrying is already encroaching on the edges of the National Park and some of the limestone cliffs have been broken up and trucked away.
Is this area inside or outside the National Park borders? This map shows the approximate borders of the park:
Since the National Park was intended to protect all the surrounding limestone hills, it is possible that quarrying is already taking place inside the National Park, which would be illegal.
This street level image shows the dirt road turn-off leading towards the quarry, busy with lorries.
Niah is not the only national park in Malaysia under threat. Illegal logging is reported in and around forest reserve areas across the country. Sarawak Forestry and the Department of Wildlife and National Parks should introduce buffer zones surrounding national parks within which certain activities, such as logging and quarrying, are prohibited. Access to these areas by trucks and diggers should be controlled. Strict enforcement and heavy penalties are needed otherwise Malaysia’s natural wonders will not be around for much longer.
After spending 9 months at Simunjan (see last post), Alfred Russel Wallace made a shorter exploration of Bukit Peninjau, a small hill (1,646 feet high) some 20 km, as the bird flies, from Kuching town centre. He was initially accompanied by Sir James Brooke, the Rajah of Sarawak, whom he had met in Singapore and who maintained a small cottage on this hill. ‘Rajah’ was a grand job title, but Brooke had only been granted the role by the Sultan of Brunei some 14 years earlier and Sarawak was still in its rudimentary stage of development. As such, think of the cottage as more of a wooden shack than a palace. Wallace stayed at the cottage from 13–20 December 1855 and between 31 December 1855 and 19 January 1856.
Wallace described the hill as follows:
“On reaching Sarawak early in December, I found there would not be an opportunity of returning to Singapore until the latter end of January. I therefore accepted Sir James Brooke’s invitation to spend a week with him and Mr. St. John at his cottage on Peninjauh. This is a very steep pyramidal mountain of crystalline basaltic rock, about a thousand feet high, and covered with luxuriant forest. There are three Dyak villages upon it, and on a little platform near the summit is the rude wooden lodge where the English Rajah was accustomed to go for relaxation and cool fresh air.”
Wallace would have approached the hill by river, disembarking at the jetty where the village of Siniawan now stands.
“It is only twenty miles up the river, but the road up the mountain is a succession of ladders on the face of precipices, bamboo bridges over gullies and chasms, and slippery paths over rocks and tree-trunks and huge boulders as big as houses. A cool spring under an overhanging rock just below the cottage furnished us with refreshing baths and delicious drinking water, and the Dyaks brought us daily heaped-up baskets of Mangosteens and Lansats, two of the most delicious of the subacid tropical fruits.”
Local government officials announced a few years back an intention to promote Bukit Peninjau (also known as Bung Muan and Gunung Serumbu) as a tourist destination and at least the place is signposted.
My trip to the hill was unfortunately a bit of a wash-out.
The sky looked fairly bright as I approached the foot of the hill.
But as soon as I parked my rental car the heavens opened and the hill disappeared behind the clouds.
I took shelter under the eaves of the Tourist Information Centre. Here you are supposed to be able to hire a local guide for RM50 to take you up the Wallace Trail but the place was locked and there was nobody around. Visitors are advised not to go alone but having no other choice, I dropped my contribution into the donations box and set off up the hill once the rain had eased off somewhat.
There is a map with estimated climb times. According to their estimates it should take nearly 4 hours to reach the peak.
There were quite a lot of arrows pointing the way which was reassuring but the path itself was overgrown with dense foliage which I dislike (I would make a very poor Wallace being scared of snakes, spiders and other creepy crawlies!).
Wallace was impressed with the versatile qualities of bamboo and the ingenious ways in which the local tribesmen put it to good use. In this chapter of The Malay Archipelago he wrote about bamboo bridges and I was pleased to see this example of one at Bukit Peninjau.
The wooden hut here is similar to ones I have seen in Peninsular Malaysia used as watch houses to guard over valuable durian trees during the ripening season. It might perform the same purpose here.
He was also fascinated by ladders made by driving bamboo pegs into a tree trunk:
“I was exceedingly struck by the ingenuity of this mode of climbing, and the admirable manner in which the peculiar properties of the bamboo were made available. The ladder itself was perfectly safe, since if any one peg were loose or faulty, and gave way, the strain would be thrown on several others above and below it. I now understood the use of the line of bamboo pegs sticking in trees, which I had often seen, and wondered for what purpose they could have been put there.”
I was amazed to see a similar ladder in almost the same location 160 years after Wallace’s time, the only difference being that they now use blue plastic twine to secure the pegs instead of strips of wood bark.
By the time I reached Batu Tikopog, a rock with an unusually smooth cleft, the rain started to intensify with thunder and lightning in the air. I decided to abandon my trek to the peak since visibility would have been zero. It’s a shame I didn’t manage to see the site of Brooke’s cottage either. Nothing remains of the cottage now except an indistinct clearing in the jungle. Plans to rebuild the cottage were announced a few years ago but nothing yet seems to have happened. Perhaps I’ll revisit one day once the cottage has been rebuilt.
After Christmas in Kuching, Wallace returned to Bukit Peninjau, this time accompanied by his English assistant and a Malay servant.
“A few days afterwards I returned to the mountain with Charles and a Malay boy named Ali and stayed there three weeks for the purpose of making a collection of land-shells, butterflies and moths, ferns and orchids. On the hill itself ferns were tolerably plentiful, and I made a collection of about forty species. But what occupied me most was the great abundance of moths which on certain occasions I was able to capture. …during the whole of my eight years’ wanderings in the East I never found another spot where these insects were at all plentiful,…It thus appears that on twenty-six nights I collected 1,386 moths.”
The hill is still teeming with insects. Most of Wallace’s moth collecting took place at night but even during the daytime this place has some of the noisiest bugs I’ve ever heard as this ten-second video attempts to show.
“When I returned to Singapore I took with me the Malay lad named Ali, who subsequently accompanied me all over the Archipelago. Charles Allen preferred staying at the Mission-house, and afterwards obtained employment in Sarawak and in Singapore, until he again joined me four years later at Amboyna in the Moluccas.”
Writing in his autobiography many years later, Wallace wrote about Ali:
When I was at Sarawak in 1855 I engaged a Malay boy named Ali as a personal servant, and also to help me to learn the Malay language by the necessity of constant communication with him. He was attentive and clean, and could cook very well. He soon learnt to shoot birds, to skin them properly, and latterly even to put up the skins very neatly. Of course he was a good boatman, as are all Malays, and in all the difficulties or dangers of our journeys he was quite undisturbed and ready to do anything required of him. He accompanied me through all my travels, sometimes alone, but more frequently with several others, and was then very useful in teaching them their duties, as he soon became well acquainted with my wants and habits.
He was less glowing about Charles Martin Allen who was just a teenager when Wallace took him to South East Asia as his collecting assistant. In his letters , Wallace complained about Allen’s carelessness and inability to learn.
Of all the Wallace trails I have visited so far this one is perhaps the most interesting and is fairly easy to access from Kuching. Pity about the weather though! Try to go on a dry day and see if you can get hold of a guide.
A map showing the location of Bukit Peninjau appears on my previous post about Wallace.
To read about another trip up Bukit Serumbu in Wallace’s footsteps, this one in 1912, see here.
Alfred Russel Wallace spent 15 months in Borneo from November 1854 to January 1856. After exploring in the vicinity of Sarawak town (Kuching) he made a journey into ‘a part of the interior seldom visited by Europeans’.
This is how he described the area in The Malay Archipelago:
“In March 1865 I determined to go to the coalworks which were being opened near the Simunjon River, a small branch of the Sadong, a river east of Sarawak and between it and the Batang- Lupar. The Simunjon enters the Sadong River about twenty miles up. It is very narrow and very winding, and much overshadowed by the lofty forest, which sometimes almost meets over it. The whole country between it and the sea is a perfectly level forest-covered swamp, out of which rise a few isolated hills, at the foot of one of which the works are situated. On the slope of the hill near its foot a patch of forest had been cleared away, and several rule houses erected, in which were residing Mr. Coulson the engineer, and a number of Chinese workmen. I was at first kindly accommodated in Mr. Coulson’s house, but finding the spot very suitable for me and offering great facilities for collecting, I had a small house of two rooms and a verandah built for myself. Here I remained nearly nine months, and made an immense collection of insects.”
Click on the expand map symbol in the top right corner to view a larger map.
I thought I would have little chance of tracing this location based on such a scanty description but I found on the map the small town of Simunjan where the Simunjan River meets the Sadong River. A couple of miles from the town is the only hill for miles around which is today known as Gunung Ngeli (though it is more of a Bukit than a Gunung given its modest height).
Further internet searches revealed that this hill was once a coal mining area and this was indeed the place where Wallace spent nine months in 1865.
It is a 170km drive (each way) from Kuching and it took me about 3 hours to get there in my Perodua hire car. But the trip was worth it as Gunung Ngeli was, a few years back, converted into a recreational park with a trail and steps all the way to the top, so I was able to have a good look around.
I made this short video to show how Gunung Ngeli looks today.
Wallace stayed such a long time here because it was so rich in insect life. He adopted the practice of paying locals one cent for each insect brought to him and this yielded great results:
“I obtained from the Dyaks and the Chinamen many fine locusts and Phasmidae (stick insects) as well as numbers of handsome beetles.”
“When I arrived at the mines, on the 14th of March, I had collected in the four preceding months, 320 different kinds of beetles. In less than a fortnight I had doubled this number, an average of about 24 new species every day. On one day I collected 76 different kinds, of which 34 were new to me. By the end of April I had more than a thousand species, and they then went on increasing at a slower rate, so that I obtained altogether in Borneo about two thousand distinct kinds, of which all but about a hundred were collected at this place, and on scarcely more than a square mile of ground. The most numerous and most interesting groups of beetles were the Longicorns and Rhynchophora, both pre- eminently wood-feeders.”
“My collection of butterflies was not large; but I obtained some rare and very handsome insects, the most remarkable being the Ornithoptera Brookeana, one of the most elegant species known. This beautiful creature has very long and pointed wings, almost resembling a sphinx moth in shape. It is deep velvety black, with a curved band of spots of a brilliant metallic-green colour extending across the wings from tip to tip, each spot being shaped exactly like a small triangular feather, and having very much the effect of a row of the wing coverts of the Mexican trogon, laid upon black velvet. The only other marks are a broad neck-collar of vivid crimson, and a few delicate white touches on the outer margins of the hind wings. This species, which was then quite new and which I named after Sir James Brooke, was very rare. It was seen occasionally flying swiftly in the clearings, and now and then settling for an instant at puddles and muddy places, so that I only succeeded in capturing two or three specimens.”
It was while Wallace was at Gunung Ngeli that he hunted and killed more than a dozen orang-utans, which nowadays would be a despicable thing to do but in his era would have been the only way to study the species in detail and besides, he financed his trip by selling skins and specimens to museums and collectors.
He describes these encounters in considerable detail. Here is one such excerpt:
“On the 12th of May I found another, which behaved in a very similar manner, howling and hooting with rage, and throwing down branches. I shot at it five times, and it remained dead on the top of the tree, supported in a fork in such a manner that it would evidently not fall. I therefore returned home, and luckily found some Dyaks, who came back with me, and climbed up the tree for the animal. This was the first full-grown specimen I had obtained; but it was a female, and not nearly so large or remarkable as the full-grown males. It was, however, 3 ft. 6 in. high, and its arms stretched out to a width of 6 ft. 6 in. I preserved the skin of this specimen in a cask of arrack, and prepared a perfect skeleton, which was afterwards purchased for the Derby Museum.”
Ienquired with Derby Museum to see whether they still held any of Wallace’s specimens as I though it would be interesting to visit next time I am in UK. Thiswas their response:
“Derby Museums do not hold any orang-utan specimens collected by Alfred Russel Wallace. This is despite his book, ‘The Malay Archipelago’ (1869), clearly referring to material being killed and collected for Derby Museum. We now know these specimens are in the World Museum Liverpool which was then known as the Derby Museum, named after the main donor, the 13th Earl of Derby (resident of the nearby Knowsley Hall), whose bequeathed natural history collection formed the basis of their collections.”
After my climb up and down Gunung Ngeli, which took about 90 minutes, I drove on to the small town of Simunjan. which was probably non-existent or just getting established in Wallace’s time. No old buildings survive as the town is situated on a bend in the river and is prone to erosion. The earliest structures were washed away and the present town of around 60 shophouses dates mainly from the 1960s.
There are various theories as to how Simunjan got its name. The most plausible, and the one which Wallace might have found interesting, is that it was named after a bird called the Munjan as shown on this billboard.
There was not a lot to see in the town but I had a light lunch and was warmly greeted by the friendly inhabitants.
Gunung Santubong is an 810m high mountain located about 35km north of Kuching, Sarawak.
Like many Malaysian peaks, it is associated with legends, this one involving a heavenly princess who was transformed into a mountain.
Viewed from afar, the profile of the mountain is supposed to resemble a woman lying on her back. I can’t see it myself. From this angle it looks more like the face of Homer Simpson.
The mountain is surrounded by jungle, mudflats and mangrove forests. Kuching’s best beaches are found here and dolphins and porpoises have been known to frequent the waters.
The fit and adventurous can try climbing this hill. It’s harder than it looks. The summit trek involves an energy-sapping climb with lots of rope ladders and scrambling up steep slopes. It can take anywhere between 2 1/2 and 4 hours to ascend, depending on fitness levels and the number of stops, and up to two hours to come down.
The trail starts at the Green Paradise Seafood Restaurant, about 5 minutes walk from Damai Beach. Those not wishing to go to the top can just take the easier jungle trek or visit the waterfall.
Damai Beach is a fine sand beach with a beautiful setting at the foot of the mountain. It can be prone to jellyfish at certain times of the year but during my recent visit the sea was very swimmable. The beach is shared by the Damai Beach Resort and the newly opened Damai Central, a public beach-front shopping, eating and entertainment complex.
The retail outlets in Damai Central are not yet fully occupied but it looks like a good facility and it allows public access to the beach which otherwise might have been turned into another exclusive beach resort.
Noticing something unusual on the horizon in the above picture I zoomed in to see what looked like a wrecked barge.
Also at Damai Beach is the entrance to one of Kuching’s top tourist attractions, the Sarawak Cultural Village. This 17 acre living museum is home to 150 people wearing traditional costumes who show visitors around replica longhouses from all the main ethnic groups of Sarawak and demonstrate their culture and lifestyles . It might not be the real thing but it seems authentic enough for most tourists and it sure is a convenient way to get an overview of Sarawak’s people all in one location.
How do you like my postage stamp design? Perhaps I should ask the Post Office for a job.
I went to Kuching in Sarawak this week where, among other places, I visited Bako National Park. It is the oldest national park in Sarawak (since 1957) and one of the smallest covering an area of 2,727 hectares at the tip of the Muara Tebas peninsula. It is only 37km from Kuching, making it easily accessible for day-trippers.
Getting there is part of the fun. I took a public bus to Kampung Bako and was dropped off right in front of the National Parks Boat Ticketing Counter. Here I chartered a small speed boat (with driver) for a 20-30 minute boat ride through a wide but shallow estuary and then out into the open sea before being deposited on a beach (Telok Assam) where the Park HQ is located.
Before catching the boat you can read a slightly concerning poster about crocodile attacks in Sarawak with a gruesome photo of dismembered human legs being removed from the stomach of a dead croc. Apparently there are 4.2 crocodile attacks per year in Sarawak and this number is increasing. Over half the attacks are in the Batang Lupar River Basin which I must make a note of not visiting.
Within minutes of arriving at Bako I saw more wildlife than I have seen in any other Malaysian national park. This family of Bornean bearded pigs was waiting for me on the beach. I’m a bit wary of wild boars but these guys did not seem concerned by humans and carried on making sandcastles. Nearby a group of proboscis monkeys were wandering about.
There are a number of well marked and maintained trails within the park. I opted for the relatively straightforward Telok Pandan Kecil trail, which, at 5km and 3 hours round trip, would get me back to the Park HQ in time for my rendezvous with the boat driver.
After the mangrove boardwalk at Telok Assam, the trail ascends through thick forest before reaching a plateau covered in scrub vegetation. The path continues along a sandy track lined with carnivorous pitcher plants, before emerging onto a cliff top overlooking the stunning and secluded bay below. Here you can see the snake-shaped sea stack rock formation just offshore. A further 10 minutes descent through thick vegetation and you arrive at one of the best beaches in the park. Some people were swimming but remembering the crocodiles and jellyfish and having no trunks I stayed on dry land.
On my way back I made a short detour to Telok Pandan Besar. The path ends on a cliff top overlooking another beautiful bay but there is no path down to the beach which remains inaccessible except by boat.
Back at Park HQ there is a good canteen and accommodation for those who want to stay overnight. For safety reasons, you have to register at Park HQ before setting out on a trail and sign back in on returning. Overall I was impressed with the efficient organisation of Sarawak Forestry Corporation which manages all the national parks in Sarawak. They have a good website too. | <urn:uuid:131ec059-a66d-4cd9-9bca-5ddd773db00a> | CC-MAIN-2019-47 | https://thriftytraveller.wordpress.com/tag/sarawak/ | s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496668712.57/warc/CC-MAIN-20191115195132-20191115223132-00498.warc.gz | en | 0.975514 | 5,049 | 2.765625 | 3 |
A dialog box providing general program information, such as version identification, copyright, licensing agreements, and ways to access technical support.
above the fold
A screen layout metaphor taken from newspaper journalism. The content above the fold of the newspaper must be particularly engaging to spur sales. Similarly, in screen layout, the most important content must be visible without scrolling. Users must be motivated to take the time to scroll past the content they encounter initially "above the fold."
An alphanumeric key that, when combined with the Alt key, activates a control. Access keys are indicated by underlining one of the characters in the control's label. For example, pressing Alt+O activates a control whose label is "Open" and whose assigned access key is "O". Access keys aren't case sensitive. The effect of activating a control depends on the type of control.
In contrast to shortcut keys, which are intended mostly for advanced users, access keys are designed to improve accessibility. Because they are documented directly within the user interface (UI) itself, they can't always be assigned consistently and they aren't intended to be memorized.
The monitor where the active program is running.
A navigational element, usually appearing at the top of a window, that displays, and allows users to change, their current location. See also: breadcrumb bar.
Visual properties of an object that indicate how it can be used or acted on. For example, command buttons have the visual appearance of real-world buttons, suggesting pushing or clicking.
A program used to perform a related set of user tasks; often relatively complex and sophisticated. See also: program.
A keyboard key with a context menu graphic on it. This key is used to display the context menu for the selected item.
A control that presents a menu of commands that involve doing something to or with a document or workspace, such as file-related commands.
Displays the context menu for the current selection (the same as pressing Shift+F10).
Using related visual techniques, such as customized controls, to create a unique look or branding for an application.
An expression of the relation between the width of an object and its height. For example, high definition television uses a 16:9 aspect ratio.
A type of list used in text boxes and editable drop-down lists in which the likely input can be extrapolated and populated automatically from having been entered previously. Users type a minimal amount of text to populate the auto-complete list.
A text box in which the input focus automatically moves to the next related text box as soon as a user types the last character.
A common Windows control that informs users of a non-critical problem or special condition.
A navigational element, usually appearing at the top of a window, that displays, and allows users to change, their current location. "Breadcrumb" refers to breaking the current location into a series of links separated by arrows that users can interact with directly. Use address bar instead. See also: address bar.
A common Windows control that allows users to decide between clearly differing choices, such as toggling an option on or off.
A small control or button that indicates there are more items than can be displayed in the allotted space. Users click the chevron to see the additional items.
The area in a window where the commit buttons are located. Typically, dialog boxes and wizards have a command area. See also: commit button.
A common Windows control that allows users to initiate an action immediately.
A control used to make a choice among a set of mutually exclusive, related choices. In their normal state, command links have a lightweight appearance similar to hyperlinks, but their behavior is more similar to command buttons.
A command button used to commit to a task, proceed to the next step in a multi-step task, or cancel a task. See also: command area.
A type of wizard page in which users commit to performing the task. After doing so, the task cannot be undone by clicking Back or Cancel buttons.
A wizard page used to indicate the end of a wizard. Sometimes used instead of Congratulations pages. See also: congratulations page.
A wizard page used to indicate the end of a wizard. These pages are no longer recommended. Wizards conclude more efficiently with a commit page or, if necessary, a follow-up or completion page. See also: commit page, completion page, follow-up page.
A dialog box used by User Account Control (UAC) that allows protected administrators to elevate their privileges temporarily.
In controls that involve user input, such as text boxes, input constraints are a valuable way to prevent errors. For example, if the only valid input for a particular control is numeric, the control can use appropriate value constraints to enforce this requirement.
The portion of UI surfaces, such as dialog boxes, control panel items, and wizards, devoted to presenting options, providing information, and describing controls. Distinguished from the command area, task pane, and navigation area.
A tab containing a collection of commands that are relevant only when the user has selected a particular object type. See also: Ribbon.
A Windows program that collects and displays for users the system-level features of the computer, including hardware and software setup and configuration. From Control Panel, users can click individual items to configure system-level features and perform related tasks. See also: control panel item.
control panel item
An individual feature available from Control Panel. For example, Programs and Ease of Access are two control panel items.
A dialog box used by User Account Control (UAC) that allows standard users to request temporary elevation of their privileges.
The highest degree of severity. For example, in error and warning messages, critical circumstances might involve data loss, loss of privacy, or loss of system integrity.
A pictorial representation unique to a program (as opposed to a Windows system icon).
Graphics, animations, icons, and other visual elements specially developed for a program.
default command button or link
The command button or link that is invoked when users press the Enter key. The default command button or link is assigned by the developer, but any command button or link becomes the default when users tab to it.
The monitor with the Start menu, taskbar, and notification area.
delayed commit model
The commit model used by control panel item spoke pages where changes aren't made until explicitly committed by users clicking a commit button. Thus, users can abandon a task, navigating away using the Back button, Close, or the Address bar. See also: immediate commit model.
The onscreen work area provided by Windows, analogous to a physical desktop. See also: work area.
An action that has a widespread effect and cannot be undone easily, or is not immediately noticeable.
The pane at the bottom of a Windows Explorer window that displays details (if any) about the selected items; otherwise, it displays details about the folder. For example, Windows Photo Gallery displays the picture name, file type, date taken, tags, rating, dimensions, and file size. See also: preview pane.
A secondary window that allows users to perform a command, asks users a question, or provides users with information or progress feedback.
dialog box launcher
In a ribbon, a button at the bottom of some groups that opens a dialog box with features related to the group. See also: Ribbon.
A dialog unit (DLU) is the device-independent measure to use for layout based on the current system font.
Direct interaction between the user and the objects in the UI (such as icons, controls, and navigational elements). The mouse and touch are common methods of direct manipulation.
A window that appears at a fixed location on the edge of its owner window. See also: floating window.
The arrow associated with drop-down lists, combo boxes, split buttons, and menu buttons, indicating that users can view the associated list by clicking the arrow.
A common Windows control that allows users to select among a list of mutually exclusive values. Unlike a list box, this list of available choices is normally hidden.
The physical resolution of a monitor normalized by the current dpi (dots per inch) setting. At 96 dpi, the effective resolution is the same as the physical resolution, but in other dpis, the effective resolution must be scaled proportionately. Generally, the effective resolution can be calculated using the following equation:
Effective resolution = Physical resolution x (96 / current dpi setting)
In User Account Control, elevated administrators have their administrator privileges. Without elevating, administrators run in their least-privileged state. The Consent UI dialog is used to elevate administrators to elevated status only when necessary. See also: protected administrator, standard user.
A pop-up window that concisely explains the command being pointed to. Like regular tooltips, enhanced tooltips may provide the shortcut key for the command. But unlike regular tooltips, they may also provide supplemental information, graphics, and an indicator that Help is available. They may also use rich text and separators. See also: tooltip.
A state in which a problem has occurred. See also: warning.
A progressive disclosure chevron pattern where a heading can be expanded or collapsed to reveal or hide a group of items. See also: progressive disclosure.
In list views and list boxes, a multiple selection mode where selection of a single item can be extended by dragging or with Shift+click or Ctrl+click to select groups of contiguous or non-adjacent values, respectively. See also: multiple selection.
A quick, straight stroke of a finger or pen on a screen. A flick is recognized as a gesture, and interpreted as a navigation or an editing command.
A window that can appear anywhere on the screen the user wants. See also: docked window.
A popup window that temporarily shows more information. On the Windows desktop, flyouts are displayed by clicking on a gadget, and dismissed by clicking anywhere outside the flyout. You can use flyouts in both the docked and floating states.
A wizard page used to present related tasks that users are likely to do as follow-up. Sometimes used instead of congratulations pages.
A set of attributes for text characters.
A maximized window that does not have a frame.
A simple mini-application hosted on the user's desktop. See also: Sidebar.
A list of commands or options presented graphically. A results-based gallery illustrates the effect of the commands or options instead of the commands themselves. May be labeled or grouped. For example, formatting options can be presented in a thumbnail gallery.
A quick movement of a finger or pen on a screen that the computer interprets as a command, rather than as a mouse movement, writing, or drawing.
getting started page
An optional wizard page that outlines prerequisites for running the wizard successfully or explains the purpose of the wizard.
A window frame option characterized by translucence, helping users focus on content and functionality rather than the interface surrounding it.
A generic term used to refer to any graph or symbolic image. Arrows, chevrons, and bullets are glyphs commonly used in Windows.
A common Windows control that shows relationships among a set of related controls.
Software that converts ink to text.
User assistance of a more detailed nature than is available in the primary UI. Typically accessed from a menu or by clicking a Help link or icon, this content may take a variety of forms, including step-by-step procedures, conceptual text, or more visually-based, guided tutorials.
A special display setting that provides extreme contrast for foreground and background visual elements (either black on white or white on black). Particularly helpful for accessibility.
In control panel items, a hub page presents high-level choices, such as the most commonly used tasks (as with task-based hub pages) or the available objects (as with object-based hub pages). Users can navigate to spoke pages to perform specific tasks. See also: spoke page.
hybrid hub page
In control panel items, a hybrid hub page is a hub page that also has some properties or commands directly on it. Hybrid hub pages are strongly recommended when users are most likely to use the control panel item to access those properties and commands.
immediate commit model
The commit model used by hybrid hub pages where changes take effect as soon as users make them. Commit buttons aren't used in this model. See also: delayed commit model.
A message that appears in the context of the current UI surface, instead of a separate window. Unlike separate windows, in-place messages require either available screen space or dynamic layout.
indirect dialog box
A dialog box displayed out of context, either as an indirect result of a task or as the result of a problem with a system or background process.
inductive user interface
A UI that breaks a complex task down into simple, easily explained, clearly stated steps with a clear purpose.
A small pop-up window that concisely describes the object being pointed to, such as descriptions of toolbar controls, icons, graphics, links, Windows Explorer objects, Start menu items, and taskbar buttons. Infotips are a form of progressive disclosure, eliminating the need to have descriptive text on screen at all times.
The raw output for a pen. This digital ink can be kept just as written, or it can be converted to text using handwriting recognition software.
Placement of links or messages directly in the context of its related UI. For example, an inline link occurs within other text instead of separately.
The location where the user is currently directing input. Note that just because a location in the UI is highlighted does not necessarily mean this location has input focus.
A program session. For example, Windows Internet Explorer allows users to run multiple instances of the program because users can have several independent sessions running at a time. Settings can be saved across program sessions. See also: persistence.
In a ribbon, the mechanism used to display access keys. The access keys appear in the form of a small tip over each command or group, as opposed to the underlined letters typically used to display access keys. See also: access key.
A presentation option that orients an object to be wider than it is tall. See also: portrait mode.
least-privilege user account
A user account that normally runs with minimal privileges. See also: User Account Control.
A common Windows control that allows users to select from a set of values presented in a list, which, unlike a drop-down list, is always visible. Supports single or multiple selections.
A common Windows control that allows users to view and interact with a collection of data objects, using either single selection or multiple selection.
A preview technique that shows the effect of a command immediately on selection or hover without the user commiting the action. For example, formatting options such as themes, fonts, and colors benefit from live previews by showing users the effect with minimal effort.
The process of adapting software for different countries, languages, cultures, or markets.
A file-based repository for information of various kinds about activity on a computer system. Administrators often consult log files; ordinary users generally do not.
Prominently displayed text that concisely explains what to do in the window or page. The instruction should be a specific statement, imperative direction, or question. Good main instructions communicate the user's objective rather than focusing just on manipulating the UI.
A networked computer environment managed by an IT department or third-party provider, instead of by individual users. Administrators may optimize performance and apply operating system and application updates, among other tasks.
A type of touch interaction in which input corresponds directly to how the object being touched would react naturally to the action in the real world.
To display a window at its largest size. See also: minimize, restored window.
A list of commands or options available to users in the current context.
A secondary window that is displayed to inform a user about a particular condition.
A contextual toolbar displayed on hover.
To hide a window. See also: maximize, restored window.
For check boxes that apply to a group of items, a mixed state indicates that some of the items are selected and others are cleared.
Restrictive or limited interaction due to operating in a mode. Modal often describes a secondary window that restricts a user's interaction with the owner window. See also: modeless.
Non-restrictive or non-limited interaction. Modeless often describes a secondary window that does not restrict a user's interaction with the owner window. See also: modal.
The ability for users to choose more than one object in a list or tree.
non-critical system event
A type of system event that does not require immediate attention, often pertaining to system status. See also: critical.
Information of a non-critical nature that is displayed briefly to the user; a notification takes the form of a balloon from an icon in the notification area of the taskbar.
The ability for users to select optional features explicitly. Less intrusive to users than opt-out, especially for privacy and marketing related features, because there is no presumption of users' wishes. See also: opt out, options.
The ability for users to remove features they don't want by clearing their selection. More intrusive to users than opt-in, especially for privacy and marketing related features, because there is an assumption of users' wishes. See also: opt in, options.
Choices available to users for customizing a program. For example, an Options dialog box allows users to view and change program options. See also: properties.
Any UI displayed in a pop-up window that isn't directly related to the user's current activity. For example, notifications and the Consent UI for User Access Control are out-of-context UI.
A secondary window used to perform an auxiliary task. It is not a top-level window (so it isn't displayed on the taskbar); rather, it is "owned" by its owner window. For example, most dialog boxes are owned windows. See also: child window, owner window.
The source of a tip, balloon, or flyout. For example, a text box that has input constraints might display a balloon to let the user know of these limitations. In this case, the text box is considered the owner control.
A basic unit of navigation for task-based UI, such as wizards, property sheets, control panel items, and Web sites. Users perform tasks by navigating from page to page within a single host window. See also: page flow, window.
page space control
Allows users to view and interact with a hierarchically arranged collection of objects. Page space controls are like tree controls, but they have a slightly different visual appearance. They are used primarily by Windows Explorer.
A modeless secondary window that displays a toolbar or other choices, such as colors, patterns, fonts, or font attributes.
To move a scene, such as a map or photo, in two dimensions by dragging it directly. This differs from scrolling in two ways: scrolled content usually has one predominant dimension and often scrolls only along that dimension; and scrolling content conventionally appears with scroll bars that the user drags in the opposite direction of the scrolling motion.
A rectangular area within a window that users may be able to move, resize, hide, or close. Panes are always docked to the side of their parent window. They can be adjacent to other panes, but they never overlap. Undocking a pane converts it to a child window. See also: window.
The container of child windows (such as controls or panes). See also: owner window.
A stylus used for pointing, gestures, simple text entry, and free-form handwriting. Pens have a fine, smooth tip that supports precise pointing, writing, or drawing in ink. They may also have an optional pen button (used to perform right-clicks) and eraser (used to erase ink).
The principle that the state or properties of an object is automatically preserved.
Customizing a core experience that is crucial to the user's personal identification with a program. By contrast, ordinary options and properties aren't crucial to the user's personal identification with a program.
Detailed descriptions of imaginary people. Personas are constructed out of well-understood, highly specified data about real people.
The horizontal and vertical pixels that can be displayed by a computer monitor's hardware.
pop-up group button
In a ribbon, a menu button that consolidates all the commands and options within a group. Used to display ribbons in small spaces.
A presentation option that orients an object to be taller than it is wide. See also: landscape mode.
Don't use. Use options or properties instead.
A representation of what users will see when they select an option. Previews can be displayed statically as part of the option, or upon request with a Preview or Apply button.
A window pane used to show previews and other data about selected objects.
A central action that fulfills the primary purpose of a window. For example, Print is a primary command for a Print dialog box. See also: secondary command.
A collection of commands designed to be comprehensive enough to preclude the use of a menu bar. See also: supplemental toolbar.
A primary window has no owner window and is displayed on the taskbar. Main program windows are always primary windows. See also: secondary window.
A sequence of instructions that can be executed by a computer. Common types of programs include productivity applications, consumer applications, games, kiosks, and utilities.
A common Windows control that displays the progress of a particular operation as a graphical bar.
A technique of allowing users to display less commonly used information (typically, data, options, or commands) as needed. For example, if more options are sometimes needed, users can expose them in context by clicking a chevron button.
A sequence in which the UI used to inform users becomes progressively more obtrusive as the event becomes more critical. For example, a notification can be used for an event that users can safely ignore at first. As the situation becomes critical, a more obtrusive UI such as a modal dialog should be used.
A label or short instruction placed inside a text box or editable drop-down list as its default value. Unlike static text, prompts disappear once users type something into the control or it gets input focus.
Settings of an object that users can change, such as a file's name and read-only status, as well as attributes of an object that users can't directly change, such as a file's size and creation date. Typically properties define the state, value, or appearance of an object.
Quick Access Toolbar
A small, customizable toolbar that displays frequently used commands.
Quick Launch bar
A direct access point on the Windows desktop, located next to the Start button, populated with icons for programs of the user's choosing. Removed in Windows 7.
A common Windows control that allow users to select from among a set of mutually exclusive, related choices.
A device-independent metric that is the same as a physical pixel at 96 dpi (dots per inch), but proportionately scaled in other dpis. See also: effective resolution.
A visible, partial-screen window, neither maximized nor minimized. See also: maximize, minimize.
A tabbed container of commands and options, located at the top of a window or work area and having a fixed location and height. Ribbons usually have an Application menu and Quick Access Toolbar. See also: menu, toolbar.
A quality of user action that can have negative consequences and can't be easily undone. Risky actions include actions that can harm the security of a computer, affect access to a computer, or result in unintended loss of data.
The route users are likely to take as they scan to locate things in a window. Particularly important if users are not engaged in immersive reading of text.
An assistive technology that enables users with visual impairments to interpret and navigate a user interface by transforming visuals to audio. Thus, text, controls, menus, toolbars, graphics, and other screen elements are spoken by the computerized voice of the screen reader.
A control that allows users to scroll the content of a window, either vertically or horizontally.
A peripheral action that, while helpful, isn't essential to the purpose of the window. For example, Find Printer or Install Printer are secondary commands for a Print dialog box. See also: primary command.
A window that has an owner window and consequently is not displayed on the taskbar. See also: primary window.
A protected environment that is isolated from programs running on the system, used to increase the security of highly secure tasks such as log on, password changes, and UAC Elevation UI. See also: User Account Control.
A shield icon used for security branding.
Chosen by the user in order to perform an operation; highlighted.
For sentence-style capitalization:
- Always capitalize the first word of a new sentence.
- Don't capitalize the word following a colon unless the word is a proper noun, or the text following the colon is a complete sentence.
- Don't capitalize the word following an em-dash unless it is a proper noun, even if the text following the dash is a complete sentence.
- Always capitalize the first word of a new sentence following any end punctuation. Rewrite sentences that start with a case-sensitive lowercase word.
Specific values that have been chosen (either by the user or by default) to configure a program or object.
Keys or key combinations that users can press for quick access to actions they perform frequently. Ctrl+letter combinations and function keys (F1 through F12) are usually the best choices for shortcut keys. By definition, a shortcut key is the keyboard equivalent of functionality that is supported adequately elsewhere in the interface. Therefore, avoid using a shortcut key as the only way to access a particular operation.
In contrast to access keys, which are designed to improve accessibility, shortcut keys are designed primarily for advanced users. Because they aren't documented directly within the UI itself (although they might be documented in menus and toolbar tooltips), they are intended to be memorized and therefore they must be assigned consistently within applications and across different applications.
A region on the side of the user's desktop used to display gadgets in Windows Vista. See also: gadget.
A user input error relating to a single control. For example, entering an incorrect credit card number is a single-point error, whereas an incorrect logon is a double-point error, because either the user name or password could be the problem.
A common Windows control that displays and sets a value from a continuous range of possible values, such as brightness or volume.
In programs, special experiences relate to the primary function of the program, something unique about the program, or otherwise make an emotional connection to users. For example, playing an audio or video is a special experience for a media player.
The combination of a text box and its associated spin control. Users click the up or down arrow of a spin box to increase or decrease a numeric value. Unlike slider controls, which are used for relative quantities, spin boxes are used only for exact, known numeric values.
A control that users click to change values. Spin controls use up and down arrows to increase or decrease the value.
Transitional screen image that appears as a program is in the process of launching.
A bipartite command button that includes a small button with a downward pointing triangle on the rightmost portion of the main button. Users click the triangle to display variations of a command in a drop-down menu. See also: command button.
In control panel items, spoke pages are the place in which users perform tasks. Two types of spoke pages are task pages and form pages: task pages present a task or a step in a task with a specific, task-based main instruction; form pages present a collection of related properties and tasks based on a general main instruction. See also: hub page.
In User Account Control, standard users have the least privileges on the computer, and must request permission from an administrator on the computer in order to perform administrative tasks. In contrast with protected administrators, standard users can't elevate themselves. See also: elevated administrator, protected administrator.
User interface text that is not part of an interactive control. Includes labels, main instructions, supplemental instructions, and supplemental explanations.
An optional form of user interface text that adds information, detail, or context to the main instruction. See also: main instruction.
A collection of commands designed to work in conjunction with a menu bar. See also: primary toolbar.
A color defined by Windows for a specific purpose, accessed using the GetSysColor application programming interface (API). For example, COLOR_WINDOW defines the window background color and COLOR_WINDOWTEXT defines the window text color. System colors are not as rich as theme colors. See also: theme color.
A collection of basic window commands, such as move, size, maximize, minimize, and close, available from the program icon on the title bar, or by right-clicking a taskbar button.
A dialog box that contains related information on separate labeled pages (tabs). Unlike property sheets, which also often contain tabs, tabbed dialog boxes are not used to display an object's properties. See also: properties.
A unit of user activity, often represented by a single UI surface (such as a dialog box), or a sequence of pages (such as a wizard).
A dialog box implemented using the task dialog API. Requires Windows Vista or later.
A sequence of pages that helps users perform a task, either in a wizard, explorer, or browser.
A link used to initiate a task, in contrast to links that navigate to other pages or windows, choose options, or display Help.
A type of UI similar to a dialog box, except that it is presented within a window pane instead of a separate window. As a result, task panes have a more direct, contextual feel than dialog boxes. A task pane can contain a menu to provide the user with a small set of commands related to the selected object or program mode.
The access point for running programs that have a desktop presence. Users interact with controls called taskbar buttons to show, hide, and minimize program windows.
A control specifically designed for textual input; allows users to view, enter, or edit text or numbers.
A color defined by Windows for a specific purpose, accessed using the GetThemeColor API along with parts, states, and colors. For example, the windows part defines a FillColor and a TextColor. Theme colors are richer than system colors, but require the theme service to be running. See also: system color.
For title-style capitalization:
- Capitalize all nouns, verbs (including is and other forms of to be), adverbs (including than and when), adjectives (including this and that), and pronouns (including its).
- Capitalize the first and last words, regardless of their parts of speech (for example, The Text to Look For).
- Capitalize prepositions that are part of a verb phrase (for example, Backing Up Your Disk).
- Don't capitalize articles (a, an, the), unless the article is the first word in the title.
- Don't capitalize coordinate conjunctions (and, but, for, nor, or), unless the conjunction is the first word in the title.
- Don't capitalize prepositions of four or fewer letters, unless the preposition is the first word in the title.
- Don't capitalize to in an infinitive phrase (for example, How to Format Your Hard Disk), unless the phrase is the first word in the title.
- Capitalize the second word in compound words if it is a noun or proper adjective, an "e-word," or the words have equal weight (for example, E-Commerce, Cross-Reference, Pre-Microsoft Software, Read/Write Access, Run-Time). Do not capitalize the second word if it is another part of speech, such as a preposition or other minor word (for example, Add-in, How-to, Take-off).
- Capitalize user interface and application programming interface terms that you would not ordinarily capitalize, unless they are case-sensitive (for example, The fdisk Command). Follow the traditional capitalization of keywords and other special terms in programming languages (for example, The printf Function, Using the EVEN and ALIGN Directives).
- Capitalize only the first word of each column heading.
A graphical presentation of commands optimized for efficient access.
A small pop-up window that labels the unlabeled control being pointed to, such as unlabeled toolbar controls or command buttons.
Direct interaction with a computer display using a finger.
A research technique that helps you improve your user experience by testing your UI design and gathering feedback from real target users. Usability studies can range from formal techniques in settings such as usability labs, to informal techniques in settings such as the user's own office. But the constants of such studies are: capturing information from the participants; evaluating that information for meaningful trends and patterns; and finally implementing logical changes that address the problems identified in the study.
User Account Control
With User Account Control (or UAC, formerly known as "Least-privilege User Account," or LUA) enabled, interactive administrators normally run with least user privileges, but they can self-elevate to perform administrative tasks by giving explicit consent with the Consent UI. Such administrative tasks include installing software and drivers, changing system-wide settings, viewing or changing other user accounts, and running administrative tools.
User Account Control shield
A shield icon used to indicate that a command or option needs elevation for User Account Control.
user input problem
An error resulting from user input. User input problems are usually non-critical because they must be corrected before proceeding.
A description of a user goal, problem, or task in a specific set of circumstances.
A message that describes a condition that might cause a problem in the future. Warnings aren't errors or questions. In Windows Vista and later, warning messages are typically displayed in task dialogs, include a clear, concise main instruction, and usually include a standard warning icon for visual reinforcement of the text.
The first page of a wizard, used to explain the purpose of the wizard. Welcome pages are no longer recommended. Users have a more efficient experience without such pages.
A rectangular area on a computer screen in which programs and content appear. A window can be moved, resized, minimized, or closed; it can overlap other windows. Docking a child window converts it to a pane. See also: pane.
Windows logo key
A modifier key with the Windows logo on it. This key is used for a number of Windows shortcuts, and is reserved for Windows use. For example, pressing the Windows logo key displays or hides the Windows Start menu.
A UI mockup that shows a window's functionality and layout, but not its finished appearance. A wireframe uses only line segments, controls, and text, without color, complex graphics, or the use of themes.
A sequence of pages that guides users through a multi-step, infrequently performed task. Effective wizards reduce the knowledge required to perform the task compared to alternative UIs.
The onscreen area where users can perform their work, as well as store programs, documents, and their shortcuts. See also: desktop.
Z order
The layered relationship of windows on the display.
A “profane” or materialistic view of telepathy became commonplace after World War II. Just as new mathematical models and theories of physics had been brought to bear on development of the atomic bomb, so too new tools were brought to bear on the human mind.
Just as Cold War scientists raced to design rocket engines and missile technologies that would give their country superiority on the nuclear battlefield, so too did scientists rush to develop ever more complex and thorough models of the human brain. They literally began to see the brain as a mental battlefield.
Implicit within this Cold War race to acquire brain “technology” was the crude assumption that the human mind could be mechanically “modeled” or understood as an artificial construct. The brain began to be viewed as a complex “thinking machine” or computer that could be analyzed, broken into component parts, and back-engineered.
Within this context, telepathy began to be seen as an exotic form of mental radio transmission, only one of many communication functions performed by the mental machine. Communication per se was nothing new. But technicians became fascinated by the potential to communicate silently and covertly, at a distance. Likewise, telepathy seemed to offer a powerful means to distract and confuse the enemy, to program assassins, or to forcibly extract secret information from an enemy’s mind.
Put bluntly, the Pentagon began to see telepathy as a powerful multi-task weapon. The rush to develop “artificial telepathy” became a top-priority weapons program within the overall race for total mind control. Artificial telepathy cannot be fully understood outside this military context or the historical context of the Cold War. The research and development really did begin as a Cold War weapons program.
The paragraphs below give a brief summary of the history of mind control research during the past 50 years.
Some of the amazing technologies developed during this time may be found at ‘Synthetic Telepathy And The Early Mind Wars‘.
We will examine some of the specific telepathy programs, and the scientists behind them, in future posts.
The following article combines material from several sources, listed under Footnotes.
The majority of information appeared in David Guyatt's synopsis of the history and development of mind control weapons, first presented at an ICRC symposium on "The Medical Profession and the Effects of Weapons".
First Electromagnetic Beam Weapons
The background to the development of anti-personnel electromagnetic weapons can be traced to the early-middle 1940’s and possibly earlier.
Japanese “Death Ray”
The earliest extant reference, to my knowledge, was contained in the U.S. Strategic Bombing Survey (Pacific Survey, Military Analysis Division, Volume 63) which reviewed Japanese research and development efforts on a “Death Ray.” Whilst not reaching the stage of practical application, research was considered sufficiently promising to warrant the expenditure of Yen 2 million during the years 1940-1945.
Summarizing the Japanese efforts, allied scientists concluded that a ray apparatus might be developed that could kill unshielded human beings at a distance of 5 to 10 miles. Studies demonstrated that, for example, automobile engines could be stopped by tuned waves as early as 1943. (1)
It is therefore reasonable to suppose that this technique has been available for a great many years.
Nazi Experiments in Mind Manipulation
[E]xperiments in behavior modification and mind manipulation have a much more grisly past. Nazi doctors at the Dachau concentration camp conducted involuntary experiments with hypnosis and narco-hypnosis, using the drug mescaline on inmates. Additional research was conducted at Auschwitz, using a range of chemicals including various barbiturates and morphine derivatives. Many of these experiments proved fatal.
Following the conclusion of the war, the U.S. Naval Technical Mission was tasked with obtaining pertinent industrial and scientific material that had been produced by the Third Reich and which may be of benefit to U.S. interests. Following a lengthy report, the Navy instigated Project CHATTER in 1947.
Many of the Nazi scientists and medical doctors who conducted hideous experiments were later recruited by the U.S. Army and worked out of Heidelberg prior to being secretly relocated to the United States under the Project PAPERCLIP program. Under the leadership of Dr. Hubertus Strughold, 34 ex-Nazi scientists accepted “Paperclip” contracts, authorized by the Joint Chiefs of Staff, and were put to work at Randolph Air Force Base, San Antonio, Texas.
Project Moonstruck, 1952, CIA:
Electronic implants in brain and teeth
Targeting: Long range
Implanted during surgery or surreptitiously during abduction
Frequency range: HF – ELF transceiver implants
Purpose: Tracking, mind and behavior control, conditioning, programming, covert operations
Functional Basis: Electronic Stimulation of the Brain, E.S.B.
First Narco-Hypnosis Programs
By 1953 the CIA, U.S. Navy and the U.S. Army Chemical Corps were conducting their own narco-hypnosis programs on unwilling victims that included prisoners, mental patients, foreigners, ethnic minorities and those classified as sexual deviants. (2)
[For a fuller account of the Nazi experiments refer to Resonance No 29 November 1995, published by the Bioelectromagnetic Special Interest Group of American Mensa Ltd., and drawn from a series of articles published by the Napa Sentinel, 1991 by Harry Martin and David Caul.]
Project MK-ULTRA, 1953, CIA:
Drugs, electronics and electroshock
Targeting: Short range
Frequencies: VHF HF UHF modulated at ELF
Transmission and Reception: Local production
Purpose: Programming behavior, creation of “cyborg” mentalities
Effects: narcoleptic trance, programming by suggestion
Pseudonym: Project Artichoke
Functional Basis: Electronic Dissolution of Memory, E.D.O.M.
Project Orion, 1958, U.S.A.F:
Drugs, hypnosis, and ESB
Targeting: Short range, in person
Frequencies: ELF Modulation
Transmission and Reception: Radar, microwaves, modulated at ELF frequencies
Purpose: Top security personnel debriefing, programming, insure security and loyalty
MK-DELTA, 1960, CIA:
Fine-tuned electromagnetic subliminal programming
Targeting: Long Range
Frequencies: VHF HF UHF Modulated at ELF
Transmission and Reception: Television antennae, radio antennae, power lines, mattress spring coils, modulation on 60 Hz wiring.
Purpose: programming behavior and attitudes in general population
Effects: fatigue, mood swings, behavior dysfunction and social criminality
Pseudonym: “Deep Sleep”, R.H.I.C.
It was not until the middle or late 1970’s that the American public became aware of a series of hitherto secret programs that had been conducted over the preceding two decades by the military and intelligence community. (3) Primarily focusing on narco-hypnosis, these extensive covert programs bore the project titles MKULTRA, MKDELTA, MKNAOMI, MKSEARCH (MK being understood to stand for Mind Kontrol), BLUEBIRD, ARTICHOKE and CHATTER.
The principal aim of these and associated programs was the development of a reliable “programmable” assassin. Secondary aims were the development of a method of citizen control. (4)
Dr. Jose Delgado
Particularly relevant was Dr. Jose Delgado’s secret work directed towards the creation of a “psycho-civilized” society by use of a “stimoceiver.” (5)
Delgado’s work was seminal, and his experiments on humans and animals demonstrated that electronic stimulation can excite extreme emotions including rage, lust and fatigue.
In his paper “Intracerebral Radio Stimulation and recording in Completely Free Patients,” Delgado observed that:
“Radio Stimulation on different points in the amygdala and hippocampus in the four patients produced a variety of effects, including pleasant sensations, elation, deep thoughtful concentration, odd feelings, super relaxation (an essential precursor for deep hypnosis), colored visions, and other responses.”
With regard to the “colored visions” citation, it is reasonable to conclude he was referring to hallucinations — an effect that a number of so-called “victims” allude to. (7)
Dr. John C. Lilly
Also of interest is Dr. John C. Lilly (10), who was asked by the Director of the National Institute of Mental Health to brief the CIA, FBI, NSA and military intelligence services on his work using electrodes to stimulate, directly, the pleasure and pain centers of the brain. Lilly said that he refused the request. However, as stated in his book, he continued to do “useful” work for the national security apparatus.
In terms of timing this is interesting, for these events took place in 1953.
First use of computers to communicate with the brain
As far back as 1969, Delgado predicted the day would soon arrive when a computer would be able to establish two-way radio communication with the brain – an event that first occurred in 1974.
Lawrence Pinneo, a neurophysiologist and electronic engineer working for Stanford Research Institute (a leading military contractor),
“developed a computer system capable of reading a person’s mind. It correlated brain waves on an electroencephalograph with specific commands. Twenty years ago the computer responded with a dot on a TV screen. Nowadays it could be the input to a stimulator (ESB) in advanced stages using radio frequencies.” (8)
Drs. Sharp and Frey develop “Microwave Hearing”
Drs. Joseph Sharp and Allen Frey experimented with microwaves seeking to transmit spoken words directly into the audio cortex via a pulsed-microwave analog of the speaker’s sound vibration. Indeed, Frey’s work in this field, dating back to 1960, gave rise to the so called “Frey effect” which is now more commonly referred to as “microwave hearing.” (19)
Within the Pentagon this ability is now known as “Artificial Telepathy.” (20)
[Footnote 20 – Refer to Dr. Robert Becker who has stated "Such a device has obvious applications in covert operations designed to drive a target crazy with "voices" or deliver undetected instructions to a programmed assassin."]
Dr. Ross Adey experiments with EM control of emotional states
In his pioneering work, Dr. Ross Adey determined that emotional states and behavior can be remotely influenced merely by placing a subject in an electromagnetic field. By directing a carrier frequency to stimulate the brain and using amplitude modulation to shape the wave to mimic a desired EEG frequency, he was able to impose a 4.5 CPS theta rhythm on his subjects.
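Stripped of the claims around it, the technique described here is ordinary amplitude modulation: a carrier wave whose envelope follows a low-frequency waveform. The sketch below is only an illustration of that signal construction; the carrier frequency, modulation depth, sample rate and variable names are assumptions for the example, not values taken from Adey's experiments.

```python
import numpy as np

# Illustrative amplitude modulation: a carrier whose envelope follows a
# low-frequency (theta-band) sine wave. All parameter values are
# assumptions for the example, not figures from the work described above.
sample_rate = 10_000          # samples per second
duration = 2.0                # seconds
t = np.arange(0, duration, 1 / sample_rate)

carrier_hz = 450.0            # arbitrary carrier frequency for illustration
envelope_hz = 4.5             # low-frequency modulation rate
depth = 0.8                   # modulation depth, between 0 and 1

carrier = np.sin(2 * np.pi * carrier_hz * t)
envelope = 1.0 + depth * np.sin(2 * np.pi * envelope_hz * t)
am_signal = envelope * carrier   # classic AM: the envelope shapes the carrier

print(am_signal.shape)  # (20000,)
```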
Adey and others have compiled an entire library of frequencies and pulsation rates which can affect the mind and nervous system. (21)
Adey induces calcium efflux in brain tissue with low power level fields (a basis for the CIA and military’s “confusion weaponry”) and has done behavioral experiments with radar modulated at electroencephalogram (EEG) rhythms. He is understandably concerned about environmental exposures within 1 to 30 Hz (cycles per second), either as a low frequency or an amplitude modulation on a microwave or radio frequency, as these can physiologically interact with the brain even at very low power densities.
Dr. Ewen Cameron’s experiments in mental programming
Additional studies, conducted by Dr. Ewen Cameron and funded by the CIA, were directed towards erasing memory and imposing new personalities on unwilling patients.
Cameron discovered that electroshock treatment caused amnesia. He set about a program that he called “de-patterning” which had the effect of erasing the memory of selected patients. Further work revealed that subjects could be transformed into a virtual blank machine (Tabula Rasa) and then be re-programmed with a technique which he termed “psychic driving.”
Such was the bitter public outrage, once his work was revealed (as a result of FOIA searches), that Cameron was forced to retire in disgrace.
From 1965 through to 1970, the Defense Advanced Research Projects Agency (DARPA), with up to 70-80% funding provided by the military, set in motion operation PANDORA to study the health and psychological effects of low intensity microwaves with regard to the so-called "Moscow signal" registered at the American Embassy in Moscow.
Initially, there was confusion over whether the signal was an attempt to activate bugging devices or for some other purpose. There was suspicion that the microwave irradiation was being used as a mind control system.
CIA agents asked scientists involved in microwave research whether microwaves beamed at humans from a distance could affect the brain and alter behavior.
Dr. Milton Zaret, who undertook to analyze Soviet literature on microwaves for the CIA, wrote:
“For non-thermal irradiations, they believe that the electromagnetic field induced by the microwave environment affects the cell membrane, and this results in an increase of excitability or an increase in the level of excitation of nerve cells. With repeated or continued exposure, the increased excitability leads to a state of exhaustion of the cells of the cerebral cortex.”
This project appears to have been quite extensive and included (under U.S. Navy funding) studies demonstrating how to induce heart seizures, create leaks in the blood/brain barrier and production of auditory hallucinations.
Despite attempts to render the Pandora program invisible to scrutiny, FOIA filings revealed memoranda of Richard Cesaro, Director of DARPA, which confirmed that the program’s initial goal was to “discover whether a carefully controlled microwave signal could control the mind.”
Cesaro urged that these studies be made “for potential weapons applications.” (12)
EM Mind Control Research Goes Black
Following immense public outcry, Congress forbade further research and demanded that these projects be terminated across the board. But as former CIA agent Victor Marchetti later revealed, the programs merely became more covert with a high element of “deniability” built in to them, and that CIA claims to the contrary are a cover story. (13)
Despite the fact that many of the aforementioned projects revolved around the use of narcotics and hallucinogens, projects ARTICHOKE, PANDORA and CHATTER clearly demonstrate that “psychoelectronics” were a high priority.
Indeed, author John Marks’ anonymous informant (known humorously as “Deep Trance”) stated that beginning in 1963 mind control research strongly emphasized electronics.
1974: Dr. J.F. Scapitz experiments with remote hypnosis
In 1974, Dr. J. F. Scapitz filed a plan to explore the interaction of radio signals and hypnosis.
He stated that,
“In this investigation it will be shown that the spoken word of the hypnotists may be conveyed by modulate electromagnetic energy directly into the subconscious parts of the human brain — i.e. without employing any technical devices for receiving or transcoding the messages and without the person exposed to such influence having a chance to control the information input consciously.”
Schapitz' work was funded by the DoD. Despite FOIA filings, his work has never been made available. Also it is interesting to note the date of 1974, which almost exactly mirrors the period when the USSR commenced its own program that resulted in "Acoustic Psycho-correction technology."
1976: Soviets use ELF transmissions as mind-control weapon
On July 4, 1976 seven giant transmitters in the Ukraine, powered by the Chernobyl nuclear facility, pumped a 100 megawatt radio frequency at the West, which contained a 10 Hz ELF mind control frequency. According to a US scientist, Dr Andrija Puharich, MD, the Soviet pulses covered the human brain frequencies.
With a Dr Bob Beck, he proved that the Soviet transmissions were a weapon. He found that a 6.65 Hz frequency would cause depression and an 11 Hz frequency would cause manic and riotous behavior. Transmissions could indeed entrain the human brain, and thereby induce behavioral modification such that populations can be mind controlled en masse by ELF transmissions. More importantly, he found that an ELF signal could cause cancer at the flick of a switch. It did this by modifying the function of RNA transferases so that amino acid sequences are scrambled and produce unnatural proteins.
As further reading, I recommend "Mind Control, World Control" by Jim Keith.
1981: Eldon Byrd develops EM devices for riot control
Scientist Eldon Byrd, who worked for the Naval Surface Weapons Office, was commissioned in 1981 to develop electromagnetic devices for purposes including riot control, clandestine operations and hostage removal. (11)
In the context of a controversy over reproductive hazards to Video Display Terminal (VDT) operators, he wrote of alterations in brain function of animals exposed to low intensity fields.
Offspring of exposed animals,
“exhibited a drastic degradation of intelligence later in life… couldn’t learn easy tasks… indicating a very definite and irreversible damage to the central nervous system of the fetus.”
With VDT operators exposed to weak fields, there have been clusters of miscarriages and birth defects (with evidence of central nervous system damage to the fetus). Byrd also wrote of experiments where behavior of animals was controlled by exposure to weak electromagnetic fields.
“At a certain frequency and power intensity, they could make the animal purr, lay down and roll over.”
Low-frequency sleep induction
From 1980 to 1983 […] Eldon Byrd ran the Marine Corps Nonlethal Electromagnetic Weapons project. He conducted most of his research at the Armed Forces Radiobiology Research Institute in Bethesda, Md.
“We were looking at electrical activity in the brain and how to influence it,” he says.
Byrd, a specialist in medical engineering and bioeffects, funded small research projects, including a paper on vortex weapons by Obolensky. He conducted experiments on animals – and even on himself – to see if brain waves would move into sync with waves impinging on them from the outside. (He found that they would, but the effect was short lived.)
By using very low frequency electromagnetic radiation–the waves way below radio frequencies on the electromagnetic spectrum–he found he could induce the brain to release behavior-regulating chemicals.
“We could put animals into a stupor,” he says, by hitting them with these frequencies. “We got chick brains – in vitro – to dump 80 percent of the natural opioids in their brains,” Byrd says.
He even ran a small project that used magnetic fields to cause certain brain cells in rats to release histamine.
In humans, this would cause instant flulike symptoms and produce nausea. “These fields were extremely weak. They were undetectable,” says Byrd.
“The effects were nonlethal and reversible. You could disable a person temporarily,” Byrd hypothesizes. “It [would have been] like a stun gun.”
Byrd never tested any of his hardware in the field, and his program, scheduled for four years, apparently was closed down after two, he says.
“The work was really outstanding,” he grumbles. “We would have had a weapon in one year.”
Byrd says he was told his work would be unclassified, “unless it works.” Because it worked, he suspects that the program “went black.”
Other scientists tell similar tales of research on electromagnetic radiation turning top secret once successful results were achieved. There are clues that such work is continuing.
In 1995, the annual meeting of four-star U.S. Air Force generals–called CORONA–reviewed more than 1,000 potential projects. One was called “Put the Enemy to Sleep/Keep the Enemy From Sleeping.” It called for exploring “acoustics,” “microwaves,” and “brain-wave manipulation” to alter sleep patterns.
It was one of only three projects approved for initial investigation.
PHOENIX II, 1983, U.S.A.F, NSA:
Location: Montauk, Long Island
Electronic multi-directional targeting of select population groups
Targeting: Medium range
Frequencies: Radar, microwaves. EHF UHF modulated
Power: Gigawatt through Terawatt
Purpose: Loading of Earth Grids, planetary sonombulescence to stave off geological activity, specific-point earthquake creation, population programming for sensitized individuals
Pseudonym: “Rainbow”, ZAP
TRIDENT, 1989, ONR, NSA:
Electronic directed targeting of individuals or populations
Targeting: Large population groups assembled
Display: Black helicopters flying in triad formation of three
Power: 100,000 watts
Purpose: Large group management and behavior control, riot control
Allied Agencies: FEMA
Pseudonym: “Black Triad” A.E.M.C
Mankind Research Unlimited
An obscure District of Columbia corporation called Mankind Research Unlimited (MRU) and its wholly owned subsidiary, Systems Consultants Inc. (SCI), operated a number of classified intelligence, government and Pentagon contracts, specializing in, amongst other things:
“problem solving in the areas of intelligence electronic warfare, sensor technology and applications.” (14)
MRU’s “capability and experience” is divided into four fields. These include “biophysics — Biological Effects of Magnetic Fields,” “Research in Magneto-fluid Dynamics,” “Planetary Electro-Hydro-Dynamics” and “Geo-pathic Efforts on Living Organisms.” The latter focuses on the induction of illness by altering the magnetic nature of the geography.
Also under research were “Biocybernetics, Psychodynamic Experiments in Telepathy,” “Errors in Human Perception,” “Biologically Generated Fields,” “Metapsychiatry and the Ultraconscious Mind” (believed to refer to experiments in telepathic mind control), “Behavioral Neuropsychiatry,” “Analysis and Measurement of Human Subjective States” and “Human Unconscious Behavioral Patterns.”
Employing some old OSS, CIA and military intelligence officers, the company also engages the services of prominent physicians and psychologists, including E. Stanton Maxey, Stanley R. Dean, Berthold Eric Schwarz and many more.
MRU lists in its Company Capabilities “brain and mind control.” (15)
1989 CNN Program on EM Weapons
During 1989 CNN aired a program on electromagnetic weapons and showed a U.S. government document that outlined a contingency plan to use EM weapons against “terrorists.” Prior to the show a DoD medical engineer sourced a story claiming that in the context of conditioning, microwaves and other modalities had regularly been used against Palestinians.
RF MEDIA, 1990, CIA:
Electronic, multi-directional subliminal suggestion and programming
Location: Boulder, Colorado (Location of main cell telephone node, national television synchronization node)
Targeting: national population of the United States
Frequencies: ULF VHF HF Phase modulation
Implementation: Television and radio communications, the “videodrome” signals
Purpose: Programming and triggering behavioral desire, subversion of psychic abilities of population, preparatory processing for mass electromagnetic control
Pseudonym: “Buzz Saw” E.E.M.C.
TOWER, 1990, CIA, NSA:
Electronic cross country subliminal programming and suggestion
Targeting: Mass population, short-range intervals, long-range cumulative
Frequencies: Microwave, EHF SHF
Methodology: Cellular telephone system, ELF modulation
Purpose: Programming through neural resonance and encoded information
Effect: Neural degeneration, DNA resonance modification, psychic suppression
Pseudonym: “Wedding Bells”
1992: Maj. Edward Dames and Project GRILL-FLAME
Major Edward Dames, formerly with the Pentagon’s Defense Intelligence Agency until 1992, was a long-serving member of the highly classified operation GRILL-FLAME, a program that focused on some of the more bizarre possibilities of intelligence gathering and remote interrogation.
Known as “remote viewers,” GRILL-FLAME personnel possessed a marked psychic ability that was put to use “penetrating” designated targets and gathering important intelligence on significant figures.
The program operated with two teams: one working out of the top secret NSA facility at Fort George Meade in Maryland, and the other at SRI. Results are said to have been exemplary.
Following the Oliver North debacle, the Secretary of Defense officially terminated GRILL-FLAME, fearing bad publicity if the program were to become known to the public.
The leading members of the project — including Dames — immediately relocated to the privately owned and newly formed Psi-Tech, and continue their work to this day, operating under government contract.
In the course of his work, Dames was (and remains) close to many of the leading figures and proponents of anti-personnel electromagnetic weapons, especially those that operate in the neurological field.
During NBC’s “The Other Side” program, Dames stated that “The U.S. Government has an electronic device which could implant thoughts in people.” He refused to comment further.
The program was broadcast during April 1995.
1993 Report of “Acoustic Psycho-correction”
In 1993, Defense News announced that the Russian government was discussing with American counterparts the transfer of technical information and equipment known as “Acoustic Psycho-correction.”
The Russians claimed that this device involves,
“the transmission of specific commands via static or white noise bands into the human subconscious without upsetting other intellectual functions.”
Experts said that demonstrations of this equipment have shown “encouraging” results “after exposure of less than one minute,” and has produced “the ability to alter behavior on willing and unwilling subjects.”
The article goes on to explain that combined “software and hardware associated with the (sic) psycho-correction program could be procured for as little as U.S. $80,000.”
The Russians went on to observe that,
“World opinion is not ready for dealing appropriately with the problems coming from the possibility of direct access to the human mind.”
Acoustic psycho-correction dates back to the mid 1970’s and can be used to “suppress riots, control dissidents, demoralize or disable opposing forces and enhance the performance of friendly special operations teams.” (18)
One U.S. concern in relation to this device was aired by Janet Morris of the Global Strategy Council, a Washington-based think tank established by former CIA deputy director Ray Cline. Morris noted that “Ground troops risk exposure to bone-conducting sound that cannot be offset by earplugs or other protective gear.”
In recent months I met with and discussed Russian research efforts, with a contact who had visited Russia earlier this year. He, in turn, met with a number of Russian scientists who are knowledgeable in this field. I have few doubts that the Defense News article cited earlier is fundamentally accurate.
1994 Report on “Less Than Lethal” Weapons
The April 1994 issue of Scientific American carried an article entitled “Bang! You’re Alive” which briefly described some of the known arsenal of “Less Than Lethal” weapons presently available. These include laser rifles and low-frequency infrasound generators powerful enough to trigger nausea or diarrhea.
Steve Aftergood of the Federation of American Scientists (FAS) noted that non-lethal weapons have been linked to “mind control” devices and that three of the most prominent advocates of non-lethality share an interest in psychic phenomena. (23)
It is now the opinion of many that these and related programs have been brought under the banner of non-lethal weapons, otherwise known as “less than lethal,” which are now promulgated in connection with the doctrine of low intensity conflict, a concept for warfare in the 21st century.
It is clear that many of these Pentagon and related LTL programs operate under high classification. Others consider many similar or related “black” programs are funded from the vast resources presently available under the U.S. counter-drug law enforcement policy which has a FY 1995 budget of $13.2 billion. (25)
On 21 July 1994, Defense Secretary William J. Perry issued a memorandum on non-lethal weapons which outlined a tasking priority list for use of these technologies. Second on the list was “crowd control”. Coming in at a poor fifth was “Disable or destroy weapons or weapon development/production processes, including suspected weapons of mass destruction.”
It is therefore clear that non-lethality is fundamentally seen as anti-personnel rather than anti-material.
In July 1996, the Spotlight, a widely circulated right-wing U.S. newspaper, reported that well-placed DoD sources have confirmed a classified Pentagon contract for the development of “high-power electromagnetic generators that interfere with human brain waves.” The article cited the memorandum of understanding dated 1994 between Attorney General Janet Reno, and Defense Secretary William Perry for transfer of LTL weapons to the law enforcement sector.
A budget of under $50 million has been made available for funding associated “black” programs.
Dr. Emery Horvath, a professor of physics at Harvard University, has stated in connection to the generator that interferes with human brain waves that,
“These electronic ‘skull-zappers’ are designed to invade the mind and short circuit its synapses… in the hands of government technicians, it may be used to disorient entire crowds, or to manipulate individuals into self destructive acts. It’s a terrifying weapon.” (26)
In a 1993 U.S. Air Command and Staff College paper entitled Non Lethal Technology and Air Power, authors Maj. Jonathan W. Klaaren (USAF) and Maj. Ronald S. Mitchell (USAF) outlined selected NLT weapons. These included "Acoustic" weapons (pulsed/attenuated high-intensity sound, infrasound (very low frequency) and Polysound (high volume, distracting)) as well as high-power microwaves (HPM) that possessed the ability to deter or incapacitate human beings.
These and other classified weapons are being passed to domestic law enforcement agencies, as shown by the 1995 ONDCP (Office of National Drug Control Policy) International Technology Symposium, “Counter-Drug Law Enforcement: Applied Technology for Improved Operational Effectiveness,” which outlined the “Transition of advanced military technologies to the civil law enforcement environment.”
There are some observers who fear that the burgeoning narcotics industry is an ideal “cover” in which to “transit” Non Lethal Technologies to domestic political tasks. Whether this is merely a misplaced “Orwellian” fear remains to be seen. (27)
Have weapons of this nature been developed and field tested?
Judging from the number of individuals and groups coming forward with complaints of harassment, the answer appears to be “yes.” Kim Besley, of the Greenham Common Women’s Peace Camp, has compiled a fairly extensive catalogue of effects that have resulted from low frequency signals emanating from the U.S. Greenham Common base, apparently targeted at the women protesters.
These include: vertigo, retinal bleeding, burnt face (even at night), nausea, sleep disturbances, palpitations, loss of concentration, loss of memory, disorientation, severe headaches, temporary paralysis, faulty speech co-ordination, irritability and a sense of panic in non-panic situations. Identical and similar effects have been reported elsewhere and appear to be fairly common-place amongst so-called “victims.”
Many of these symptoms have been associated in medical literature with exposure to microwaves and especially through low intensity or non-thermal exposures. (22) These have been reviewed by Dr. Robert Becker, twice nominated for the Nobel Prize, and a specialist in EM effects. His report confirms that the symptoms mirror those he would expect to see had Microwave weapons been deployed.
HAARP, 1995, CIA, NSA, ONR:
Electromagnetic resonant induction and mass population control
Location: Gakona, Alaska
Frequencies: Atmospheric phase-locked resonant UHF VHF
Potential: DNA code alteration in population and mass behavior modification
Power: Giga-watt to Tera-watt range
Step-Down reflective frequencies: Approx 1.1 GHz, Human DNA resonant frequency, cellular system phase-lock
PROJECT CLEAN SWEEP, 1997, 1998, CIA, NSA, ONR:
Electromagnetic resonant induction and mass population control
Frequencies: Emotional wavelengths, data gathering through helicopter probes following media events – rebroadcast in order to re-stimulate population emotional levels for recreation of event scenarios.
Ref: LE#108, March 1998
Potential: Mass behavior modification
Power: Unknown. Possibly rebroadcast through GWEN network or cellular tower frequencies, coordinated from NBS in Colorado.
Jack Verona and Project SLEEPING BEAUTY
Current Projects include SLEEPING BEAUTY, directed towards the battlefield use of mind-altering electromagnetic weapons. This project is headed by Jack Verona, a highly placed Defense Intelligence Agency (DIA) officer. Dr. Michael Persinger of Laurentian University is also employed on the project.
Other sources have revealed a project entitled MONARCH which, supposedly, is directed towards the deliberate creation of severe multiple personality disorder. (24)
Guyatt, David G. Synopsis prepared for the ICRC Symposium The Medical Profession and the Effects of Weapons in “Government Mind Control”
Keeler, Anna “Remote Mind Control Technology” Reprinted from Secret and Suppressed: Banned Ideas and Hidden History (Portland, OR: Feral House, 1993)
Leading Edge International Research Group “Major Electromagnetic Mind Control Projects”
Pasternak, Douglas “Wonder Weapons: The Pentagon’s quest for nonlethal arms is amazing, but is it smart?”
U.S. News and World Report, 7 July 1997, in "Government Mind Control"
High-definition television
High-definition television (HDTV) is video that has resolution substantially higher than that of traditional television systems (standard-definition television). HDTV has one or two million pixels per frame, roughly five times that of SD (1280 x 720 = 921,600 for 720p, or 1920 x 1080 = 2,073,600 for 1080p). Early HDTV broadcasting used analog techniques, but today HDTV is digitally broadcast using video compression.
- 1 History of high-definition television
- 2 Inaugural HDTV broadcast in the United States
- 3 European HDTV broadcasts
- 4 Notation
- 5 Contemporary systems
- 6 Recording and compression
- 7 See also
- 8 Notes
- 9 External links
History of high-definition television
On 2 November 1936 the BBC began transmitting the world's first public regular high-definition service from the Victorian Alexandra Palace in north London. It therefore claims to be the birthplace of television broadcasting as we know it today.
The term high definition once described a series of television systems originating from the late 1930s; however, these systems were only high definition when compared to earlier systems based on mechanical scanning with as few as 30 lines of resolution.
The British high-definition TV service started trials in August 1936 and a regular service in November 1936 using both the (mechanical) Baird 240 line and (electronic) Marconi-EMI 405 line (377i) systems. The Baird system was discontinued in February 1937. In 1938 France followed with their own 441-line system, variants of which were also used by a number of other countries. The US NTSC system joined in 1941. In 1949 France introduced an even higher-resolution standard at 819 lines (768i), a system that would be high definition even by today's standards, but it was monochrome only. All of these systems used interlacing and a 4:3 aspect ratio except the 240-line system which was progressive (actually described at the time by the technically correct term "sequential") and the 405-line system which started as 5:4 and later changed to 4:3. The 405-line system adopted the (at that time) revolutionary idea of interlaced scanning to overcome the flicker problem of the 240-line with its 25 Hz frame rate. The 240-line system could have doubled its frame rate but this would have meant that the transmitted signal would have doubled in bandwidth, an unacceptable option.
Colour broadcasts started at similarly higher resolutions, first with the US NTSC color system in 1953, which was compatible with the earlier B&W systems and therefore had the same 525 lines (480i) of resolution. European standards did not follow until the 1960s, when the PAL and SECAM colour systems were added to the monochrome 625 line (576i) broadcasts.
Since the formal adoption of Digital Video Broadcasting's (DVB) widescreen HDTV transmission modes in the early 2000s, the 525-line NTSC (and PAL-M) systems, as well as the European 625-line PAL and SECAM systems, are now regarded as standard-definition television systems. In Australia, the 625-line digital progressive system (with 576 active lines) is officially recognized as high-definition.
In 1949, France started its transmissions with an 819-line system (768i). It was monochrome only, was used only on VHF for the first French TV channel, and was discontinued in 1985.
In 1958, the Soviet Union developed Transformator, the first high-resolution television system, capable of producing an image composed of 1,125 lines of resolution and aimed at providing teleconferencing for military command. It was a research project and the system was never deployed in the military or broadcasting.
In 1979, the Japanese state broadcaster NHK first developed consumer high-definition television with a 5:3 display aspect ratio. The system, known as Hi-Vision or MUSE (for Multiple sub-Nyquist Sampling Encoding, the method used to encode the signal), required about twice the bandwidth of the existing NTSC system but provided about four times the resolution (1080i/1125 lines). Satellite test broadcasts started in 1989, regular testing began in 1991, and regular broadcasting of BS-9ch, featuring commercial and NHK programming, commenced on 25 November 1994.
In 1981, the MUSE system was demonstrated for the first time in the United States, using the same 5:3 aspect ratio as the Japanese system. Upon visiting a demonstration of MUSE in Washington, US President Ronald Reagan was most impressed and officially declared it "a matter of national interest" to introduce HDTV to the USA.
Several systems were proposed as the new standard for the US, including the Japanese MUSE system, but all were rejected by the FCC because of their higher bandwidth requirements. At this time, the number of television channels was growing rapidly and bandwidth was already a problem. A new standard had to be more efficient, needing less bandwidth for HDTV than the existing NTSC.
Demise of analog HD systems
The limited standardization of analogue HDTV in the 1990s did not lead to global HDTV adoption, as technical and economic constraints at the time did not permit HDTV to use bandwidths greater than those of normal television.
Early HDTV commercial experiments such as NHK's MUSE required over four times the bandwidth of a standard-definition broadcast—and HD-MAC was not much better. Despite efforts made to reduce analog HDTV to about 2× the bandwidth of SDTV these television formats were still distributable only by satellite.
In addition, recording and reproducing an HDTV signal was a significant technical challenge in the early years of HDTV (Sony HDVS). Japan remained the only country with successful public analog HDTV broadcasting, with seven broadcasters sharing a single channel.
Rise of digital compression
Since 1972, the International Telecommunication Union's Radiocommunication Sector (ITU-R) has been working on creating a global recommendation for analogue HDTV. These recommendations, however, did not fit in the broadcasting bands that could reach home users. The standardization of MPEG-1 in 1993 also led to the acceptance of recommendation ITU-R BT.709. In anticipation of these standards the Digital Video Broadcasting (DVB) organisation was formed, an alliance of broadcasters, consumer electronics manufacturers and regulatory bodies. The DVB develops and agrees on specifications which are formally standardised by ETSI.
DVB created first the standard for DVB-S digital satellite TV, DVB-C digital cable TV and DVB-T digital terrestrial TV. These broadcasting systems can be used for both SDTV and HDTV. In the US the Grand Alliance proposed ATSC as the new standard for SDTV and HDTV. Both ATSC and DVB were based on the MPEG-2 standard. The DVB-S2 standard is based on the newer and more efficient H.264/MPEG-4 AVC compression standards. Common for all DVB standards is the use of highly efficient modulation techniques for further reducing bandwidth, and foremost for reducing receiver-hardware and antenna requirements.
In 1983, the International Telecommunication Union's Radiocommunication Sector (ITU-R) set up a working party (IWP11/6) with the aim of setting a single international HDTV standard. One of the thornier issues concerned a suitable frame/field refresh rate, the world already having split into two camps, 25/50 Hz and 30/60 Hz, related by reasons of picture stability to the frequency of their mains electrical supplies.
The IWP11/6 working party considered many views and through the 1980s served to encourage development in a number of video digital processing areas, not least conversion between the two main frame/field rates using motion vectors, which led to further developments in other areas. While a comprehensive HDTV standard was not in the end established, agreement on the aspect ratio was achieved.
Initially the existing 5:3 aspect ratio had been the main candidate but, due to the influence of widescreen cinema, the aspect ratio 16:9 (1.78) eventually emerged as being a reasonable compromise between 5:3 (1.67) and the common 1.85 widescreen cinema format. (Bob Morris explained that the 16:9 ratio was chosen as being the geometric mean of 4:3, Academy ratio, and 2.4:1, the widest cinema format in common use, in order to minimize wasted screen space when displaying content with a variety of aspect ratios.)
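The geometric-mean reasoning attributed to Morris is easy to check with a line of arithmetic; the short sketch below is illustrative only and is not part of the source.

```python
import math

# Geometric mean of the narrowest (4:3 Academy) and widest (2.4:1) common formats.
academy = 4 / 3        # 1.333...
widest = 2.4           # 2.4:1 anamorphic widescreen
geometric_mean = math.sqrt(academy * widest)

print(round(geometric_mean, 3))   # 1.789
print(round(16 / 9, 3))           # 1.778 -- close to the geometric mean
```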
An aspect ratio of 16:9 was duly agreed at the first meeting of the IWP11/6 working party at the BBC's Research and Development establishment in Kingswood Warren. The resulting ITU-R Recommendation ITU-R BT.709-2 ("Rec. 709") includes the 16:9 aspect ratio, a specified colorimetry, and the scan modes 1080i (1,080 actively interlaced lines of resolution) and 1080p (1,080 progressively scanned lines). The British Freeview HD trials used MBAFF, which contains both progressive and interlaced content in the same encoding.
It also includes the alternative 1440×1152 HDMAC scan format. (According to some reports, a mooted 750-line (720p) format (720 progressively scanned lines) was viewed by some at the ITU as an enhanced television format rather than a true HDTV format, and so was not included, although 1920×1080i and 1280×720p systems for a range of frame and field rates were defined by several US SMPTE standards.)
Inaugural HDTV broadcast in the United States
HDTV technology was introduced in the United States in the 1990s by the Digital HDTV Grand Alliance, a group of television, electronic equipment, communications companies and the Massachusetts Institute of Technology. Field testing of HDTV at 199 sites in the United States was completed August 14, 1994. The first public HDTV broadcast in the United States occurred on July 23, 1996 when the Raleigh, North Carolina television station WRAL-HD began broadcasting from the existing tower of WRAL-TV south-east of Raleigh, winning a race to be first with the HD Model Station in Washington, D.C., which began broadcasting July 31, 1996 with the callsign WHD-TV, based out of the facilities of NBC owned and operated station WRC-TV. The American Advanced Television Systems Committee (ATSC) HDTV system had its public launch on October 29, 1998, during the live coverage of astronaut John Glenn's return mission to space on board the Space Shuttle Discovery. The signal was transmitted coast-to-coast, and was seen by the public in science centers, and other public theaters specially equipped to receive and display the broadcast.
European HDTV broadcasts
Although HDTV broadcasts had been demonstrated in Europe since the early 1990s, the first regular broadcasts started on January 1, 2004 when the Belgian company Euro1080 launched the HD1 channel with the traditional Vienna New Year's Concert. Test transmissions had been active since the IBC exhibition in September 2003, but the New Year's Day broadcast marked the official start of the HD1 channel, and the start of HDTV in Europe.
Euro1080, a division of the Belgian TV services company Alfacam, broadcast HDTV channels to break the pan-European stalemate of "no HD broadcasts mean no HD TVs bought means no HD broadcasts ..." and kick-start HDTV interest in Europe. The HD1 channel was initially free-to-air and mainly comprised sporting, dramatic, musical and other cultural events broadcast with a multi-lingual soundtrack on a rolling schedule of 4 or 5 hours per day.
These first European HDTV broadcasts used the 1080i format with MPEG-2 compression on a DVB-S signal from SES Astra's 1H satellite. Euro1080 transmissions later changed to MPEG-4/AVC compression on a DVB-S2 signal in line with subsequent broadcast channels in Europe.
The number of European HD channels and viewers has risen steadily since the first HDTV broadcasts, with SES Astra's annual Satellite Monitor market survey for 2010 reporting more than 200 commercial channels broadcasting in HD from Astra satellites, 185 million HD-Ready TVs sold in Europe (60 million in 2010 alone), and 20 million households (27% of all European digital satellite TV homes) watching HD satellite broadcasts (16 million via Astra satellites).
In December 2009 the United Kingdom became the first European country to deploy high definition content on digital terrestrial television (branded as Freeview) using the new DVB-T2 transmission standard as specified in the Digital TV Group (DTG) D-book. The Freeview HD service currently contains 4 HD channels and is now rolling out region by region across the UK in accordance with the digital switchover process. Some transmitters such as the Crystal Palace transmitter are broadcasting the Freeview HD service ahead of the digital switchover by means of a temporary, low-power pre-DSO multiplex.
Notation

HDTV broadcast systems are identified with three major parameters:
- Frame size in pixels is defined as number of horizontal pixels × number of vertical pixels, for example 1280 × 720 or 1920 × 1080. Often the number of horizontal pixels is implied from context and is omitted, as in the case of 720p and 1080p.
- Scanning system is identified with the letter p for progressive scanning or i for interlaced scanning.
- Frame rate is identified as number of video frames per second. For interlaced systems an alternative form of specifying number of fields per second is often used.
If all three parameters are used, they are specified in the following form: [frame size][scanning system][frame or field rate] or [frame size]/[frame or field rate][scanning system]. Often, frame size or frame rate can be dropped if its value is implied from context. In this case the remaining numeric parameter is specified first, followed by the scanning system.
For example, 1920×1080p25 identifies progressive scanning format with 25 frames per second, each frame being 1,920 pixels wide and 1,080 pixels high. The 1080i25 or 1080i50 notation identifies interlaced scanning format with 25 frames (50 fields) per second, each frame being 1,920 pixels wide and 1,080 pixels high. The 1080i30 or 1080i60 notation identifies interlaced scanning format with 30 frames (60 fields) per second, each frame being 1,920 pixels wide and 1,080 pixels high. The 720p60 notation identifies progressive scanning format with 60 frames per second, each frame being 720 pixels high; 1,280 pixels horizontally are implied.
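As a hypothetical illustration of this convention (not an official parser), a short routine can split strings of the form [frame size][scanning system][rate] into their parts; the function name and regular expression below are assumptions for the example.

```python
import re

# Hypothetical parser for notation such as "1080i50", "720p" or "1920x1080p25".
# Rate-only forms like "24p" are ambiguous without context and are not handled here.
PATTERN = re.compile(
    r"^(?:(?P<width>\d+)[x×])?"      # optional horizontal resolution
    r"(?P<height>\d+)"               # vertical resolution (lines)
    r"(?P<scan>[pi])"                # p = progressive, i = interlaced
    r"(?P<rate>\d+(?:\.\d+)?)?$"     # optional frame/field rate
)

def parse_hdtv_notation(text: str) -> dict:
    match = PATTERN.match(text.strip().lower())
    if not match:
        raise ValueError(f"unrecognised notation: {text!r}")
    parts = match.groupdict()
    return {
        "width": int(parts["width"]) if parts["width"] else None,
        "height": int(parts["height"]),
        "scanning": "progressive" if parts["scan"] == "p" else "interlaced",
        "rate": float(parts["rate"]) if parts["rate"] else None,
    }

print(parse_hdtv_notation("1080i50"))
print(parse_hdtv_notation("1920x1080p25"))
print(parse_hdtv_notation("720p"))
```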
50 Hz systems support three scanning rates: 25i, 25p and 50p. 60 Hz systems support a much wider set of frame rates: 23.976p, 24p, 29.97i/59.94i, 29.97p, 30p, 59.94p and 60p. In the days of standard definition television, the fractional rates were often rounded up to whole numbers, e.g. 23.976p was often called 24p, or 59.94i was often called 60i. 60 Hz high definition television supports both fractional and slightly different integer rates, therefore strict usage of notation is required to avoid ambiguity. Nevertheless, 29.97i/59.94i is almost universally called 60i, likewise 23.976p is called 24p.
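The fractional rates mentioned above are the NTSC-derived integer rates divided by 1.001, i.e. rational numbers with denominator 1001. A small sketch using Python's Fraction type (illustrative, not from the source) makes the distinction explicit:

```python
from fractions import Fraction

# NTSC-derived "fractional" rates are exact ratios over 1001.
rates = {
    "23.976p": Fraction(24000, 1001),
    "29.97p": Fraction(30000, 1001),
    "59.94i (fields)": Fraction(60000, 1001),
}

for name, rate in rates.items():
    print(f"{name}: {rate} = {float(rate):.5f} Hz")
```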
For commercial naming of a product, the frame rate is often dropped and is implied from context (e.g., a 1080i television set). A frame rate can also be specified without a resolution. For example, 24p means 24 progressive scan frames per second, and 50i means 25 interlaced frames per second.
There is no standard for HDTV colour support. Until recently the colour of each pixel was defined by three 8-bit colour values, one each for red, green and blue; together the 24 bits defining colour yielded just under 17 million possible pixel colours. Some manufacturers have since produced systems that employ 10 bits for each colour (30 bits in total), which provides a palette of over one billion colours and, they say, a much richer picture, but there is no agreed way to specify that a piece of equipment supports this feature. Human vision can only discern approximately 1 million colours, so an expanded colour palette is of questionable benefit to consumers.
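The palette sizes quoted above follow directly from the bit depths; the quick check below is illustrative only.

```python
# Palette size doubles with every extra bit per pixel.
bits_24 = 2 ** 24   # 8 bits per channel, three channels
bits_30 = 2 ** 30   # 10 bits per channel, three channels

print(f"{bits_24:,}")  # 16,777,216  (just under 17 million colours)
print(f"{bits_30:,}")  # 1,073,741,824  (just over a billion colours)
```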
Most HDTV systems support the resolutions and frame rates defined either in ATSC table 3 or in the EBU specification. The most common are noted below.
High-definition display resolutions

| Video format supported | Native resolution (W×H) | Actual pixels | Advertised (Mpixel) | Image aspect ratio (W:H) | Pixel aspect ratio (W:H) | Description |
|---|---|---|---|---|---|---|
| 720p | 1024×768 | 786,432 | 0.8 | 4:3 | 4:3 | Typically a PC resolution (XGA); also a native resolution on many entry-level plasma displays with non-square pixels. |
| 720p | 1280×720 | 921,600 | 0.9 | 16:9 | 1:1 | Standard HDTV resolution and a typical PC resolution (WXGA), frequently used by high-end video projectors; also used for 750-line video, as defined in SMPTE 296M, ATSC A/53, ITU-R BT.1543. |
| 720p | 1366×768 | 1,049,088 | 1.0 | 683:384 | 1:1 | A typical PC resolution (WXGA); also used by many HD ready TV displays based on LCD technology. |
| 1080p/1080i | 1920×1080 | 2,073,600 | 2.1 | 16:9 | 1:1 | Standard HDTV resolution, used by Full HD and HD ready 1080p TV displays such as high-end LCD, plasma and rear-projection TVs, and a typical PC resolution (lower than WUXGA); also used for 1125-line video, as defined in SMPTE 274M, ATSC A/53, ITU-R BT.709. |

| Video format supported | Screen resolution (W×H) | Actual pixels | Advertised (Mpixel) | Image aspect ratio (W:H) | Pixel aspect ratio (W:H) | Description |
|---|---|---|---|---|---|---|
| 720p | 1248×702 | 876,096 | 0.9 | 16:9 | 1:1 | Used for 750-line video with faster artifact/overscan compensation, as defined in SMPTE 296M. |
| 1080p | 1888×1062 | 2,005,056 | 2.0 | 16:9 | 1:1 | Used for 1125-line video with faster artifact/overscan compensation, as defined in SMPTE 274M. |
| 1080i | 1440×1080 | 1,555,200 | 1.6 | 16:9 | 4:3 | Used for anamorphic 1125-line video in the HDCAM and HDV formats introduced by Sony and defined (also as a luminance subsampling matrix) in SMPTE D11. |
Standard frame or field rates
- 23.976 Hz (film-looking frame rate compatible with NTSC clock speed standards)
- 24 Hz (international film and ATSC high-definition material)
- 25 Hz (PAL, SECAM film, standard-definition, and high-definition material)
- 29.97 Hz (NTSC standard-definition material)
- 50 Hz (PAL & SECAM high-definition material)
- 59.94 Hz (ATSC high-definition material)
- 60 Hz (ATSC high-definition material)
- 120 Hz (ATSC high-definition material)
At a minimum, HDTV has twice the linear resolution of standard-definition television (SDTV), thus showing greater detail than either analog television or regular DVD. The technical standards for broadcasting HDTV also handle the 16:9 aspect ratio images without using letterboxing or anamorphic stretching, thus increasing the effective image resolution.
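The pixel-count comparison behind claims like this can be checked directly; the calculation below is illustrative, with 720×576 and 720×480 taken as the SD frame sizes.

```python
# Compare HD frame sizes with common SD frame sizes (illustrative).
formats = {
    "480i (NTSC SD)": (720, 480),
    "576i (PAL SD)": (720, 576),
    "720p": (1280, 720),
    "1080p": (1920, 1080),
}

sd_576i = 720 * 576   # 414,720 pixels
for name, (w, h) in formats.items():
    pixels = w * h
    print(f"{name}: {pixels:,} pixels ({pixels / sd_576i:.1f}x 576i)")
# 1080p works out to 2,073,600 pixels, about five times the 414,720 of 576i.
```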
The optimum format for a broadcast depends upon the type of videographic recording medium used and the image's characteristics. The field and frame rate should match the source and the resolution. A very high resolution source may require more bandwidth than available in order to be transmitted without loss of fidelity. The lossy compression that is used in all digital HDTV storage and transmission systems will distort the received picture, when compared to the uncompressed source.
There is widespread confusion in the use of the terms PAL, SECAM and NTSC when referring to HD material. These terms apply only to standard definition television, not HD. The only technical reason for keeping 25 Hz as the HD frame rate in a former PAL country is to maintain compatibility between HD and standard definition television systems.
Types of media
Standard 35mm photographic film used for cinema projection has a much higher image resolution than HDTV systems, and is exposed and projected at a rate of 24 frames per second (frame/s). To be shown on standard television, in PAL-system countries, cinema film is scanned at the TV rate of 25 frame/s, causing a speedup of 4.1 percent, which is generally considered acceptable. In NTSC-system countries, the TV scan rate of 30 frame/s would cause a perceptible speedup if the same were attempted, and the necessary correction is performed by a technique called 3:2 Pulldown: Over each successive pair of film frames, one is held for three video fields (1/20 of a second) and the next is held for two video fields (1/30 of a second), giving a total time for the two frames of 1/12 of a second and thus achieving the correct average film frame rate.
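A minimal sketch of the 3:2 pulldown cadence described above follows; the function name and frame labels are illustrative, but the 3-field/2-field alternation is exactly the pattern just described.

```python
from itertools import cycle

def three_two_pulldown(film_frames):
    """Map 24 frame/s film frames onto 60 Hz video fields using a 3:2 cadence.

    Alternate film frames are held for 3 fields and 2 fields respectively,
    so four film frames fill exactly ten fields: 4/24 s of film becomes
    10/60 s of video, preserving the average film frame rate.
    """
    fields = []
    for frame, hold in zip(film_frames, cycle([3, 2])):
        fields.extend([frame] * hold)
    return fields

# Four film frames A, B, C, D become ten video fields.
print(three_two_pulldown(["A", "B", "C", "D"]))
# ['A', 'A', 'A', 'B', 'B', 'C', 'C', 'C', 'D', 'D']
```

Because the repeated fields alternate between top and bottom in an interlaced stream, the original film frames can later be recovered by inverse telecine.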
Non-cinematic HDTV video recordings intended for broadcast are typically recorded either in 720p or 1080i format as determined by the broadcaster. 720p is commonly used for Internet distribution of high-definition video, because most computer monitors operate in progressive-scan mode. 720p also imposes less strenuous storage and decoding requirements compared to both 1080i and 1080p. 1080p-24 frame/s and 1080i-30 frame/s is most often used on Blu-ray Disc; as of 2011, there is still no disc that can support full 1080p-60 frame/s.
Besides an HD-ready television set, other equipment may be needed to view HD television. In the US, cable-ready TV sets can display HD content without using an external box. They have a QAM tuner built-in and/or a card slot for inserting a CableCARD.
High-definition image sources include terrestrial broadcast, direct broadcast satellite, digital cable, IPTV, the high-definition Blu-ray video disc (BD) and internet downloads. Sony's PlayStation 3 has extensive HD compatibility because of its Blu-ray platform, as does Microsoft's Xbox 360 with the addition of Netflix streaming capabilities and the Zune marketplace, where users can rent or purchase digital HD content. The HD capabilities of the consoles have influenced some developers to port games from past consoles onto the PS3 and 360, often with remastered graphics.
Recording and compression
HDTV can be recorded to D-VHS (Digital-VHS or Data-VHS), W-VHS (analog only), to an HDTV-capable digital video recorder (for example DirecTV's high-definition Digital video recorder, Sky HD's set-top box, Dish Network's VIP 622 or VIP 722 high-definition Digital video recorder receivers, or TiVo's Series 3 or HD recorders), or an HDTV-ready HTPC. Some cable boxes are capable of receiving or recording two or more broadcasts at a time in HDTV format, and HDTV programming, some free, some for a fee, can be played back with the cable company's on-demand feature.
The massive amount of data storage required to archive uncompressed streams meant that inexpensive uncompressed storage options were not available in the consumer market until recently. In 2008 the Hauppauge 1212 Personal Video Recorder was introduced. This device accepts HD content through component video inputs and stores the content in an uncompressed MPEG transport stream (.ts) file or Blu-ray format .m2ts file on the hard drive or DVD burner of a computer connected to the PVR through a USB 2.0 interface.
Realtime MPEG-2 compression of an uncompressed digital HDTV signal is prohibitively expensive for the consumer market at this time, but should become inexpensive within several years (although this is more relevant for consumer HD camcorders than recording HDTV). Analog tape recorders with bandwidth capable of recording analog HD signals such as W-VHS recorders are no longer produced for the consumer market and are both expensive and scarce in the secondary market.
In the United States, as part of the FCC's plug and play agreement, cable companies are required to provide customers who rent HD set-top boxes with a set-top box with "functional" Firewire (IEEE 1394) upon request. None of the direct broadcast satellite providers have offered this feature on any of their supported boxes, but some cable TV companies have. As of July 2004[update], boxes are not included in the FCC mandate. This content is protected by encryption known as 5C. This encryption can prevent duplication of content or simply limit the number of copies permitted, thus effectively denying most if not all fair use of the content.
- Extreme High Definition (1440p) (4320p)
- HDTV blur
- List of digital television deployments by country
- Ultra High Definition Television
- ^ "Teletronic – The Television History Site". Teletronic.co.uk. http://www.teletronic.co.uk/tvera.htm. Retrieved 2011-08-30.
- ^ "SBS jubilant with its 576p HD broadcasts". http://www.broadcastandmedia.com/articles/ff/0c0276ff.asp.
- ^ Russian: Трансформатор, Transformer.
- ^ "HDTV in the Russian Federation: problems and prospects of implementation (in Russian)". http://rus.625-net.ru/625/2007/01/tvch.htm.
- ^ "Researchers Craft HDTV's Successor". http://www.pcworld.com/article/id,132289-c,hdtv/article.html.
- ^ "Digital TV Tech Notes, Issue #2". http://www.tech-notes.tv/Archive/tech_notes_002.htm.
- ^ James Sudalnik and Victoria Kuhl, "High definition television"
- ^ "High definition television comes of age thanks to ITU". http://www.itu.int/ITU-R/index.asp?category=information&link=hdtv-25&lang=en.
- ^ "History of the DVB Project". http://www.dvb.org/about_dvb/history/.
- ^ Bob Morris (2003-07-13). "The true origins of the 16:9 HDTV aspect ratio!". rec.arts.movies.tech. (Web link). Retrieved 2010-01-16.
- ^ "Digital TV Tech Notes, Issue #41". http://www.tech-notes.tv/Archive/tech_notes_041.htm.
- ^ The Grand Alliance includes AT&T Bell Labs, General Instrument, MIT, Philips, Sarnoff, Thomson, and Zenith)
- ^ Carlo Basile et al. (1995). "The U.S. HDTV standard: the Grand Alliance". IEEE Spectrum 32 (4): 36–45.
- ^ "HDTV field testing wraps up". Allbusiness.com. http://www.allbusiness.com/electronics/consumer-household-electronics-high/7686036-1.html. Retrieved 2010-10-02.
- ^ "History of WRAL Digital". Wral.com. 2006-11-22. http://www.wral.com/wral-tv/story/1069461/. Retrieved 2010-10-02.
- ^ "WRAL-HD begins broadcasting HDTV". Allbusiness.com. http://www.allbusiness.com/electronics/consumer-household-electronics-high/7691754-1.html. Retrieved 2010-10-02.
- ^ "Comark transmitter first in at Model Station". Allbusiness.com. http://www.allbusiness.com/electronics/computer-electronics-manufacturing/7691367-1.html. Retrieved 2010-10-02.
- ^ a b Albiniak, Paige (1998-11-02). "HDTV: Launched and Counting.". Broadcasting and cable (BNET). http://findarticles.com/p/articles/mi_hb5053/is_199811/ai_n18386452?tag=content;col1. Retrieved 2008-10-24. [dead link]
- ^ "Space Shuttle Discovery: John Glenn Launch". Internet Movie Database. 1998. http://www.imdb.com/title/tt0384554/. Retrieved 2008-10-25.
- ^ "SES ASTRA and Euro1080 to pioneer HDTV in Europe" (Press release). SES ASTRA. October 23, 2003. http://www.ses-astra.com/business/en/news-events/press-archive/2003/23-10-03/index.php.
- ^ Bains, Geoff. "Take The High Road" What Video & Widescreen TV (April, 2004) 22–24
- ^ ASTRA Satellite Monitor research.
- ^ "Scanning Methods (p, i, PsF)". ARRI Digital. http://www.arridigital.com/creative/camerabasics/7. Retrieved 2011-08-30.
- ^ "HDTV information". http://www.hidefster.com/HDTV_blog/?cat=9.
- ^ Nelson, Randy. "Microsoft unveils Zune HD, Zune marketplace headed to Xbox 360". www.Joystiq.com. http://www.joystiq.com/2009/05/26/microsoft-unveils-zune-hd-zune-marketplace-headed-to-360/.
- ^ "5C Digital Transmission Content Protection White Paper" (PDF). 1998-07-14. Archived from the original on 2006-06-16. http://web.archive.org/web/20060616075812/http://dtcp.com/data/wp_spec.pdf. Retrieved 2006-06-20.
- Technology, Television, and Competition (New York: Cambridge University Press, 2004)
- Film indir
- Film izle
- Sony HD TV
- Images formats for HDTV, article from the EBU Technical Review.
- High Definition for Europe – a progressive approach, article from the EBU Technical Review.
- High Definition (HD) Image Formats for Television Production, technical report from the EBU
- HDTV in Germany: Lack of Innovation Management Leads to Market Failure, diffusion of HDTV in Germany from the DIW Berlin
Digital video resolutions Designation Usage Examples Definition (lines) Rate (Hz) Interlaced (fields) Progressive (frames) Low,
Ultra High UHDTV 4320 60 High-definition (HD) Concepts Analog broadcast Digital broadcast Audio Filming and storage HD media and
Connectors Deployments Resolutions Broadcast video formats Television525 lines625 linesHidden signalsDefunct systemsInterlacedMPEG-2 standardsMPEG-4 AVC standardsHidden signals Digital cinema Technical issues
Wikimedia Foundation. 2010.
Look at other dictionaries:
High Definition Television — [haɪ ˌdɛfɪˈnɪʃən ˈtɛlɪvɪʒən] (HDTV, engl. für hochauflösendes Fernsehen) ist ein Sammelbegriff, der eine Reihe von Fernsehnormen bezeichnet, die sich gegenüber dem Standard Definition Television (SDTV) durch eine erhöhte vertikale, horizontale… … Deutsch Wikipedia
High Definition Television — [engl.], HDTV … Universal-Lexikon
high-definition television — noun a television system that has more than the usual number of lines per frame so its pictures show more detail • Syn: ↑HDTV • Hypernyms: ↑television, ↑telecasting, ↑TV, ↑video * * * /huy def euh nish euhn/ a television system having twice the… … Useful english dictionary
high-definition television — raiškioji televizija statusas T sritis automatika atitikmenys: angl. high definition television vok. hochzeiliges Fernsehen, n rus. телевидение высокой четкости, n pranc. télévision à haute définition, f … Automatikos terminų žodynas
high definition television — noun a television format, primarily for digital transmission, that provides a higher vertical resolution than standard definition television and a wider aspect ratio of the screen. Also, HDTV … Australian English dictionary
high-definition television — high′ defini′tion tel′evision n. rtv a television system having a high number of scanning lines per frame, producing a sharper image and greater picture detail Abbr.: HDTV • Etymology: 1980–85 … From formal English to slang
high-definition television — noun a) Any or all of several formats of television system having a higher resolution than traditional ones b) A television set that employs such a system … Wiktionary
high-definition television — /huy def euh nish euhn/ a television system having twice the standard number of scanning lines per frame and producing a sharper image, and greater picture detail. Abbr.: HDTV [1980 85] * * * … Universalium
High Definition TeleVision — new television technology which enables better picture quality, HDTV … English contemporary dictionary
high-definition television — WikiV a) General term for standards pertaining to consumer high resolution TV. b) A TV format capable of displaying on a wider screen (16:9) as opposed to the conventional 4:3) and at higher resolution. Rather than a single HDTV standard the FCC… … Audio and video glossary | <urn:uuid:23b5d8f7-ff08-4c77-988d-378af9411f9b> | CC-MAIN-2019-47 | https://en.academic.ru/dic.nsf/enwiki/9282596 | s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496669431.13/warc/CC-MAIN-20191118030116-20191118054116-00097.warc.gz | en | 0.89163 | 7,232 | 3.8125 | 4 |
Popular Science Monthly/Volume 49/August 1896/Spirit Writing and Speaking with Tongues
|"SPIRIT" WRITING AND "SPEAKING WITH TONGUES."|
THE word "automatism" not only designates a group of phenomena but also connotes a theory as to their origin, and this theory rests upon the popular conception of the relation of "soul" and body. The soul, according to it, is an entity of a peculiar kind, entirely distinct from and independent of the body. The body is a material machine, and does not essentially differ from the machines made by man. The relation between soul and body is one of reciprocal action and interaction. The body is the medium through which material realities external to it are communicated to the soul under the guise of the sensations and conceptions of consciousness; it possesses also the capacity of executing certain movements—as, for example, reflexes—without the concurrence of the soul. But the more complex movements of the body, especially those which adjust it to a constantly shifting environment and those which serve as the exponents of mental life, can not be executed without the co-operation of the soul. Occasionally these normal relations appear to be disturbed. Movements take place of the kind usually ascribed to the activity of the soul, and that soul disavows them. Sensations and perceptions enter into the range of consciousness for which no external reality can be found, and thoughts strangely unlike those proper to the thinker troop through his mind and force themselves upon his unwilling attention. These phenomena are ascribed to the agency of the body as distinguished from that of the soul on the one hand, and that of the material world on the other. The body is a machine out of gear; it is no longer controlled by the indwelling soul, and is constantly executing movements on its own account and forcing upon the soul sensations, perceptions, and ideas which stand for no realities save that of the disordered mechanism which produces them. Thus the three chief forms of automatism are: the automatism of movement, of sensation, and of thought or ideation. While I shall use the word automatism and its derivatives, I do not, of course, wish to be understood as subscribing to the theory which it connotes.
Automatic movements may be of any and all kinds. The simplest are those of which the actor is thinking at the time although himself unaware that his thought is passing over into movement. To this type belong the marvels of the pendulum which swings above a reflecting surface only, of the divining rod, of most forms of table-turning, of "thought transference" as practiced by Bishop and Cumberland, et id genus omne. Space forbids my entering into the discussion of these relatively familiar cases, and I shall turn at once to the more complex types.
Automatic writing is an exceedingly common phenomenon. It took its rise from table-turning. Ordinary tables being found in many cases too heavy for the "spirits" to lift, tiny three-legged tables were made for the purpose and termed "planchettes." Later the device was hit upon of attaching a pencil to one leg and placing a sheet of paper beneath to record the movements of the leg. This is our modern planchette. Two or three persons then put their hands on the instrument and wait to see "what planchette will say." Many automatists need no planchette. It is enough for them to take a pencil in hand and sit quietly with the hand on a sheet of paper. After the lapse of a variable period of time the hand will stiffen, twist, and fall to writing quite of its own accord. Of these methods planchette is the more likely to be successful. In the first place, the chances of finding an automatist among two or three people is obviously greater than in the case of one; furthermore, since all expect planchette to move, the slightest tendency to automatism on the part of any one is likely to be magnified by the unconscious co-operation of the others, and is less likely to be checked by the writer himself, since each ascribes the movement to any one but himself.
The writing produced by either of these methods may be regarded as belonging to one of two main types: 1. That which, although involuntary, is dependent upon the co-operation of the subject's consciousness. 2. That which is produced without the co-operation of the subject's consciousness. The latter, again, may be either intelligible or in "unknown tongues."
Intelligible automatic writing may be produced without the co-operation of the subject's consciousness, either when that consciousness is apparently unimpaired, or when the patient is in a trance state. The latter I need not now discuss, as it belongs to the same category as dreams, but the former calls for some comment.
There are two methods of proving that the automatic messages did not emanate from the subject's upper consciousness. In the first place, it is sometimes found that they become the more clear and copious the more effectually the upper consciousness of the subject is distracted from the writing. Miss G——, for example, whom I studied with some care, always did her best automatic writing when busily engaged in conversation or in reading aloud. I concealed her hand from her eyes, and it was but now and then that she would decipher a word by the sense of touch and movement as it was written. But the messages she wrote were always trivial, silly, and often self-contradictory.
In the second place, the content of the writing may be of such a character that we can scarcely ascribe it to the subject's consciousness. In hysterical patients, for example, the upper consciousness, or at least the consciousness which talks, is often anæsthetic to one or more sensory stimuli, yet the automatic writing betrays consciousness of the lost sensations. Prof. James, of Harvard, has noted the same phenomenon in an apparently normal patient. "The planchette began by illegible scrawling. After ten minutes I pricked the back of the right hand several times with a pin; no indications of feeling. Two pricks on the left hand were followed by withdrawal, and the question, 'What did you do that for?' to which I replied, 'To find out whether you are going to sleep.' The first legible words which were written after this were, 'You hurt me.' A pencil in the right hand was then tried instead of the planchette. Here again the first legible words were, 'No use (?) in trying to spel when you hurt me so.' Next, 'It's no use trying to stop me writing by pricking.' These writings were deciphered aloud in the hearing of S——, who seemed slow to connect them with the two pin-pricks on his left hand, which alone he had felt. . . . I pricked the right wrist and fingers several times again quite severely, with no sign of reaction on S——'s part. After an interval, however, the pencil wrote: 'Don't you prick me any more.'. . . S—— laughed, having been conscious only of the pricks on his left hand, and said, 'It's working those two pin-pricks for all they are worth.'" Yet the hand was not anæsthetic when directly tested.
Sometimes the automatic message is potentially known indeed to the upper consciousness, but not at the time present to it. Take, for example, one of Mr. Gurney's experiences:
"In 1870 I watched and took part in a good deal of planchette writing, but not with results or under conditions that afforded proof of any separate intelligence. However, I was sufficiently struck with what occurred to broach the subject to a hard-headed mathematical friend, who expressed complete incredulity as to the possibility of obtaining rational writing except through the conscious operation of some person in contact with the instrument. After a long argument he at last agreed to make a trial. I had not really the faintest hope of success, and he was committed to the position that success was impossible. We sat for some minutes with a hand of each upon the planchette, and asked that it should write some line of Shakespeare. It began by seesawing and producing a great deal of formless scribble; but then there seemed to be more method in the movements, and a line of hieroglyphics appeared. It took us some time to make it out, the writing being illegible just to that degree which at first baffles the reader, but which afterward leaves no more doubt as to its having been correctly deciphered than if it were print. And there the line indubitably stood: 'A little more than kin and less than kind.' Now, as neither of us had been thinking of this line, or of any line (for we had been wholly occupied with the straggling movements of the instrument), the result, though not demonstrative, is at any rate strongly suggestive of a true underground psychosis."
At other times the information conveyed is at once true and quite unknown to the subject. Some of these cases are undoubtedly due to the automatic reproduction of memories which can not at the time be recalled—a common phenomenon in all forms of automatism. Thus, in the case of B——, to which I shall refer at greater length hereafter, it was stated that a man named Parker Howard had lived at a certain number on South Sixteenth Street, Philadelphia, Upon going to the house, I found that a man named Howard—not Parker Howard, however—had lived there some time, but had moved away about two months before. Moreover, the whole Howard incident proved to be mythical; no such person as Parker Howard ever existed. But B—— told me that after his hand had mentioned the name, and before the address was given, he stepped into a shop and looked through a directory for the name. Probably, as he glanced over the list of Howards, his eye had fallen upon the address which his hand afterward wrote, but he had no recollection of it.
Many other cases are certainly due to accidental coincidence. B——, for example, wrote long accounts of events happening at a distance from him, which were afterward found to be in the main correct; but that this was a mere matter of chance was abundantly proved to B——'s own satisfaction. The chances of coincidence are much increased by the extremely illegible character of much of the script, which leaves wide room for "interpretation." I can not but suspect that the "anagrams" sometimes written automatically often owe their existence to this kind of "interpretation." Yet, after making all allowances for coincidence and forgotten memories, nearly all investigators admit that there remains a residuum which can not plausibly be explained by any accepted theory. I can not discuss this residuum here; it is enough to point to its existence, with the caution that no theory can be regarded as final unless it can explain all the facts.
The importance of this material from a psychological point of view can not be overestimated. If the man's hand can write messages without the co-operation of the man's consciousness, we are forced upon the one horn or the other of a very perplexing dilemma. Either these utterances stand for no consciousness at all, merely recording certain physiological processes, or else they indicate the existence of mentation which does not belong to any recognized human being. The first would seem to deny the doctrine of parallelism, according to which physiological processes of the degree of complexity requisite to the production of writing necessarily generate mental states, and this would lead us toward the old theory of the soul, or something like it. The second would compel the assumption either of personalities distinct from that of the subject, which is the theory of possession, or of segregated mental states. The latter is the theory which I am developing in these pages, and although I am far from satisfied with it, it is more in line with our present scientific conceptions than others, and accounts for some of the facts fairly well.
But this dilemma presents itself only when it can be shown that the subject's upper consciousness has nothing to do with the
production of the writing. I am convinced that experimenters do not pay sufficient attention to this point, and consequently much of the recorded material is to my mind of little significance. As my space is limited, I wish to lay especial stress upon this aspect of the problem.
A few years ago I had the opportunity of studying at leisure a remarkably good case of automatism. The subject, whom I shall call B——, was a man of intelligence and education, with whom I had long been on terms of intimacy, and of whose good faith I can therefore speak with some confidence. The writing was at first a mere scrawl, accompanied by quite violent twisting of the arm; little by little it became intelligible, wrote "Yes" and "No" took to printing in large capitals, and finally fell into an easy script almost identical with B——'s normal hand. The communications always professed to emanate from spirits, and, on the
whole, fulfilled in phraseology, style of script, etc., B——'s notions as to what the alleged spirit ought to say and write. One "spirit," for example, was R——, to whom writing had been ascribed by another automatist whom B—— had seen, and his writing, as executed by B——'s hand (Fig. 1), was clearly a rough imitation of the original (Fig. 2). Fig. 3 represents the script of another mythical spirit. Yet another alleged communicator was the late Stainton Moses; Fig. 4 is his signature as written by B——'s hand; Fig. 5 is a facsimile of his actual signature, which B—— had seen. I think there is here also an attempt at imitation, although a very bad one. Another "communicator" began as shown in Fig. 6; he then announced that he was born in 1629, and died in 1685. Now, B—— knows a little about seventeenth century script, and he instantly saw that this did not resemble it. Scarcely had he noticed the discrepancy when his hand began writing
the script figured as No. 7, which is not unlike that then in use. B—— thought at the time that he could not write this hand voluntarily without taking pains, but upon attempting it he found that lie could do it voluntarily as well as automatically (Fig. 8).
It was easy enough to prove that these communications had nothing to do with spirits. B—— satisfied himself upon that point in a very short time. But we kept on experimenting, to Figs. 4 and 5. determine whether they were of subconscious origin or not. To B—— himself they felt strangely external. To quote his own words:
"When I wish to write automatically I take a pencil and place my hand upon a sheet of paper. After the lapse of a few minutes I feel a tingling sensation in my arm and fingers; this is followed by a stiffening of the arm and by convulsive movements. After scrawling for a while, it will make a mark which suggests to me the beginning of a letter, and usually the letter will be clearly written almost before the thought enters my mind. It is then followed by some word beginning with that letter, and that by other words, constituting a 'communication' from some 'spirit.' The writing then proceeds quite rapidly. It seems to me that I read it as it is written; sometimes I apparently anticipate the writing, but quite often it does not proceed in accordance with my anticipation. Sometimes the writer seems to be at a loss how to complete his sentence, and begins again. At other times an illegible combination of signs will be repeatedly written, until
finally a word is evolved, and this appears to be what the writer had in mind at the outset. I am now satisfied, however, that there is never any foresight; my hand simply develops the illegible scrawl into the word which I think it most resembles, thus fulfilling my expectations. This is curiously shown in the emotion it displays. It will twist violently about, pound on the table, bruise my fingers, break my pencils, and show every sign of the greatest excitement, while I, the spectator, survey it with the coolest and most skeptical curiosity. But it will do this only when such emotion seems to me appropriate, just as the persons I see in my dreams may manifest an emotion which I do not share. My hand sometimes abuses me, especially for my skepticism, and sometimes reproves my faults in a very embarrassing manner. It has frequently urged me, upon very plausible grounds, to do things which I would not dream of doing. In every case save one the reasons given were untrue, and in that one I am satisfied the coincidence was due to chance. On two occasions my hand wrote a short stanza with little hesitation. I have never done such a thing myself, but the verses were so incoherent and so atrocious
that I have no doubt they were developed successively, each being based upon the suggestions of the preceding in the manner above described."
I can see no reason for ascribing B——'s writing to subconscious states. It was never intelligible unless B—— allowed himself to "read" it. If he persistently distracted his attention or refused to wonder what his hand was trying to write, it would make marks resembling writing, but never "wrote sense." It was highly suggestible. If he wondered why it did not print, it would instantly try to print; and if, while trying to print, he refused to wonder what it said, it produced strange characters resembling some unknown language. Fig. 9 is a facsimile of a few of these; they were written as rapidly as the hand could fly. Fig. 10 is a facsimile of some writing executed by a Dr. Mayhew, October 5, 1853 (Neueste spiritualistische Mittheilungen, Berlin, 1862), and purports to be an account written by a spirit from the planet Saturn of the Saturnian mythology. In this case the spirit kindly wrote a "translation" giving the general sense, and in B——'s case, had he for a moment believed that the writing was intelligible to the writer, I have no doubt that a "translation" would have been as promptly forthcoming. This automatic production of mysterious characters is not uncommon. Prof. James, of Harvard, has examined many cases, but neither he nor any one else has ever, so far as I know, found any that could be deciphered.
Thus, the intelligibility of B——'s script is fully accounted for; but its automatic character remains more or less of a puzzle. I am inclined to regard it as due to the spontaneous "running" of some parts of the nervous mechanism which have nothing to do with consciousness. Precisely what parts we can not say, but if we suppose that consciousness accompanies cortical processes only, we may also suppose that they are to be found in the reenforcing and co-ordinating mechanism of the great basal ganglia. If so, this case might be regarded as strictly automatic—i. e., as due to mechanical causes only.
I do not believe that all cases of automatic writing can be explained in this way; but I am convinced that experimenters do not take sufficient pains to eliminate the action of the subject's consciousness. They seem to think that where the sense of voluntary effort is lacking the subject's consciousness can not interfere.
For the first carefully observed and reported case of automatic speech we are indebted to Prof. James, of Harvard. His paper, together with an account written by the subject, will shortly appear in the Proceedings of the Society for Psychical Research. I have not yet seen it, but he has kindly allowed me to make an independent study of the case for myself and to make use of it in this connection. The subject, whom I shall call Mr. Le Baron, is an Englishman thirty-eight years of age, is a man of education, has written a novel, a volume of poems, and a treatise on metaphysics, and is a reporter for a daily paper. In the summer of 1894 he fell in with a group of persons interested in occultism, and his association with them appears to have brought to the surface tendencies to automatism which had already manifested themselves sporadically. Of this association he thus speaks: "Before and almost immediately preceding this 'speaking with tongues' my nature had undergone a most remarkable emotional upheaval, which terminated in a mild form of ecstasy. Credulity and expectation are twin brothers, and my credulity was first aroused by the earnest narration of divers 'spiritualistic' experiences by a cultured lady of beautiful character, fine presence, and the noblest of philanthropic intuitions. A number of persons associated with this lady in her work secretly believed themselves the elected 'spiritual' vanguard of humanity. Not to understand these facts is not to understand the potent factors giving rise to the phenomenon."
In some way or other this grop of occultists, whose leader I shall call Miss J——, got the notion that Mr. Le Baron was the reincarnated spirit of the Pharaoh of the Exodus. Miss X——'s mother, they thought, had loved that king in a previous incarnation, and was still watching over his transmigrations. The time was now ripe for him to be forgiven his sins and to be brought to the light, and she was to make of him an instrument for a fuller revelation of God to humanity. They impressed this delusion upon Mr. Le Baron with all the energy of conviction. "Unless it be borne in mind," he says, "that the air was full of a greedy expectancy concerning the appearance of a reincarnated prophet, no solution of this problem is possible." His common sense protested, and he would not, perhaps, have been much affected had not a traitor within the camp presented itself in the form of his own highly suggestible and excitable nervous system, which caught the ideas with which he was surrounded and reflected them to the confusion of his understanding. This automatism first appeared in the form of writing. "My credulity was as profoundly sincere as it was pitifully pathetic. It was aroused by the narration of the purported history of a finger ring supposed to have been worn ages ago by a vestal virgin in one of the ancient temples of Egypt. Miss J—— believed she wore the ring in those days, and was herself the vestal virgin. On one occasion, in August, 1894, she asked me to place the ring on my finger and attempt automatic writing. I did so. Violent jerks followed, leading to scribbling upon the sheets of paper which were laid before me. This she attributed to spirits, and the placing on of the ring was in some way a sign to call them into activity. The 'invisible brotherhood' were subsequently declared to be en rapport with me, and in the exact ratio of my credulity concerning this assertion did this singular, insentient, emotional mechanism co-operate with the sensations of my common consciousness, and at times assume intelligible proportions."
The circumstances under which automatic speech appeared he was not able to fix with precision. He recollected two occasions, but was not able to say which came first. On one, he was at a seance at Miss J——'s house. He was asked to lie upon a couch upon which Mrs. J—— had lain during her last illness, and to look at a brilliantly illuminated portrait of her. In a short time he was seized with a convulsive paroxysm of the head and shoulders; this was followed by a flow of automatic speech purporting to emanate from the spirit of Mrs. J——, and fully confirming his friends' notions. Upon another occasion he was in a pine wood at night with them. Certain of the ladies professed to see signs and portents in the skies, and he had a similar convulsive attack followed by speech. This began with the words, "O my people! O my people!" and was of a semiprophetic character. As an illustration of the sort of confirmation thus given, I may quote a passage, spoken automatically September 6, 1894, and purporting to come from Mrs. J——: "I am the mother of the Evangel. There are several things which must be done. S—— (Miss J——) must go to the house of the man she got the things from on the day of the coming of the man from the other side of the water. Also tell her that she must tell the man that the work is to be of the kind he said he would help on. And tell her that I say that she must go to him and say that I am the one that sent her to him; and also say that the whole world is now ready for the coming of the day when the coming of the truth shall enlarge the whole possibilities of the race. You may also say that I said that he was the man that the whole of the thing on the day of the fate had to be turned to. Say that I am now with the man whom I shall go with in the spirit to direct him," etc.
Mr. Le Baron had heard of "speaking with tongues," and, believing as he did in transmigration, naturally inferred that he "must have some dead languages lurking away somewhere in the nooks and crannies of his much-experienced soul." Hence, not long after the invasion, his utterances assumed this character. They were poured forth very rapidly in deep, harsh, loud tones, coming apparently from the abdomen; often, he told me, "it seemed as if the malignity of a city were concentrated into a word," and many persons found the sound most startling. In an affidavit made February 2, 1895, he swore that "since the first day of September, 1894, he has experienced an automatic flow of foreign speech the meaning of which he does not understand when he utters it; that he is not a professed medium, and makes no claim to any supernatural or supernormal claims for the same; that he can utter by the command of his will this automatic flow of foreign consonantal and vowel combinations at any place and time to any length; and that the aforesaid automatic flow often assumes other linguistic forms than the following."
One or two illustrations of the "unknown tongues," in prose and verse, must suffice: "Shurumo te mote Cimbale. Ilunu teme tele telunu. Onstomo te ongorolo. Sinkete ontomo. Isa bulu, bulu, bulu. Ecemete compo tete. Olu mete compo. Lete me lu. Sine mete compote. Este mute, pute. Ompe rete keta. Onseling erne ombo lu mu. Outeme mo, mo, mo. Ebedebede tinketo. Imbe, Imbe, Imbe."
"Ede pelute kondo nadode
Igla tepete compto pele
Impe odode inguru lalele
Omdo resene okoro pododo
Igme odkondo nefulu kelala
Nene pokonto ce folodelu
Impete la la feme olele
Igdepe kindo raog japate
Relepo oddo og cene himano."
After the utterance in a "tongue" a "translation" was usually given in the same way, and the "translation" of the above poem, although somewhat incoherent, is of a distinctly higher order than most of the prose utterances. Witness one stanza:
"The coming of man from the roar of the ages
Has been like the seas in the breath of the storm;
His heart has been torn and his soul has been riven,
His joy has been short and his curse has been long.
But the bow of my promise still spreads in the heavens;
I have not destroyed the great sign of my love.
I stand at the door of the ark of creation,
And take in thy world like a storm-beaten dove,
And press to my bosom the world that I love."
Mr. Le Baron has shown traces of sensory automatism, hut very seldom. Once, in a sleeper returning from Chicago, he was awakened by a voice in his ears saying, "Enthusiasm shall fill the hearts of the multitude in the place of the hours of the day." He has also seen flashes of light.
As an illustration of automatic "prophecy" I may quote the following: "I have heard the wail of the dying and I have heard the wail of the man whose heart was broken. I have heard the voice of mirth and I have heard the voice of woe. I have heard the voice of him who is darkness and I have heard the voice of him who is light. I have heard the roar of the ocean and I have heard the song of the bird. I have heard the triumph of peace and I have heard the triumph of woe. I have heard the tears of the nations as they fell and I have heard the songs of the nations as they rose. I have heard the roar of cities and I have heard the music of the woodlands. I have heard the roar of the death of the man who was slain in battle and I have heard the shout of the victor. I have heard the new word and I have heard the old word," etc.
Mr. Le Baron never publicly admitted any belief in the veridical character of these utterances. As he says himself: "All this involved such an unscientific view of things, and was, moreover, so horribly egotistic and full of gall," impudence, and assumption, that I said nothing about it save to the few who had been throwing fuel upon the fire of my reincarnation conceptions and who were ready to believe anything in support of the hypothesis." Yet he was much impressed, as he frankly owns: "I, for the time heing and for months afterward, assented to the statement of my subliminal that my soul had pre-existed; I also believed that it knew when and where it had pre-existed. When it therefore stated that I had been sent through the fires of three thousand years of awful transmigration because, as Rameses or Sesostris, my way had not been 'the way of the Lord,' I either had to assent to the inference that my subliminal was a liar, or that it told the truth, or that it was mistaken. As it insisted upon pouring into my upper consciousness the loftiest of spiritual advice, I concluded that, if it was such a pure teacher of love and justice, it would make no mistake knowingly about a matter of history." Yet he never lost sight of the fundamental point—that, without verification, his automatic utterances were worthless, and he deliberately set himself the task of verifying or disproving them. He sought the advice of linguists and toiled through many a grammar and lexicon of little known languages with a purely negative result. The languages proved to be nothing more than meaningless combinations of sounds, and the supposed lofty communications from the Almighty were found to be the scarcely more intelligent reflection of the ideas with which the air was surcharged. As he himself jokingly phrased it in conversation, "I was like a cat chasing her own tail." I can not do better, in concluding my account of this case, than quote Mr. Myers's comment upon it: "He had the good fortune to meet with a wise and gentle adviser, and the phenomenon which, if differently treated, might have led on to the delusion of many, and perhaps to the insanity of one, became to the one a harmless experience, and to the world an acquisition of interesting psychological truth."
The only other outbreak of automatic speech of which any considerable details have been preserved was that which took place among the followers of the Rev. Edward Irving at the close of the first third of the present century. I have not been able to get access to all the extant information about this outbreak, but there can be little doubt that it was precisely analogous to Mr. Le Baron's experience. The "unknown tongues" were usually followed by a "translation," and all witnesses describe them as uttered in strange and unnatural tones. One witness speaks of them as "bursting forth, and that from the lips of a woman, with an astonishing and terrible crash." Says another, "The utterance was so loud that I put my handkerchief to my mouth to stop the sound, that I might not alarm the house." Another: "There was indeed in the strange, unearthly sound an extraordinary power of voice, enough to appall the heart of the most stout-hearted." Of its subjective side we have a vivid description from the pen of Robert Baxter, who was for a while one of Irving's leading prophets, but afterward, finding that the prophecies which his mouth uttered did not come true, he ascribed them to "lying spirits." He thus describes his own original experience:
"After one or two brethren had read and prayed, Mr. T—— was made to speak two or three words very distinctly and with an energy and depth of tone which seemed to me extraordinary, and fell upon me as a supernatural utterance which I ascribed to the power of God. The words were in a tongue I did not understand. In a few minutes Miss E. C—— broke out in an utterance
in English which, as to matter and manner and the influence it had upon me, I at once bowed to as the utterance of the Spirit of God. Those who have heard the powerful and commanding utterance need no description; but they who have not, may conceive what an unnatural and unaccustomed tone of voice, an intense and riveting power of expression, with the declaration of a cutting rebuke to all who were present, and applicable to my own state of mind in particular, would effect upon me and upon others who were come together expecting to hear the voice of the Spirit of God. In the midst of the feeling of awe and reverence which this produced I was myself seized upon by the power, and in much struggling against it was made to cry out and myself to give forth a confession of my own sin in the matter for which we were rebuked. . . . I was overwhelmed by this occurrence. . . . There was in me at the time of the utterance very great excitement, and yet I was distinctly conscious of a power acting upon me beyond the mere power of excitement. So distinct was the power from the excitement that in all my trouble and doubt about it I never could attribute the whole to excitement. . . . In the utterances of the power which subsequently occurred many were accompanied by the flashing in of conviction upon my mind, like lightning rooting itself in the earth; while other utterances, not being so accompanied, only acted in the way of an authoritative communication." At another time he was reading the Bible. "As I read, the power came upon me and I was made to read in the power, my voice raised far beyond its natural pitch, and with constrained repetition of parts and with the same inward uplifting which at the presence of the power I had always before experienced."
So far as I know, there exists no written record of the "tongues" spoken by the Irvingites, but the few specimens of their "prophecies" which I have seen present identically the same characteristics as those found in Mr. Le Baron's utterances —the same paucity of ideas, the same tendency to hover about one word or phrase with senseless repetitions. One illustration will serve, ex uno discite omnia:
"Ah, will ye despise, will ye despise the blood of Jesus? Will ye pass by the cross, the cross of Jesus? Oh! oh! oh! will ye crucify the Lord of glory? Will ye put him to an open shame? He died, he died, he died for you. He died for you. Believe ye, believe ye the Lamb of God. Oh, he was slain, he was slain, and he hath redeemed you; he hath redeemed you; he hath redeemed you with his blood! Oh, the blood, the blood, the blood that speaketh better things than the blood of Abel—which crieth mercy for you now, mercy for you now! Despise not his love, despise not his love, despise not his love!
"Oh, grieve him not! Oh, grieve not your Father! Rest in his love. Oh, rejoice in your Father's love! Oh, rejoice in the love of Jesus, in the love of Jesus, in the love of Jesus, for it passeth knowledge! Oh, the length! oh, the breadth! oh, the height! oh, the depth, of the love of Jesus! Oh, it passeth knowledge! Oh, rejoice in the love of Jesus! sinner, for what, for what—what, O sinner, what can separate, can separate, can separate from the love of Jesus?" etc.
Mr. Le Baron's "tongues" are constructed upon the same general principle, one phonetic element appearing to serve as the basis or core for a long series of syllables. I believe all these cases to be analogous to that of my friend B——, and I see no reason for ascribing them to subconscious activities of any kind.
- Proceedings of the American Society for Psychical Research, vol. i, p. 540.
- Proceedings of the Society for Psychical Research, vol. iv, p. 301, note.
- Some further details about this case can be found in my paper, The Experimental Induction of Automatic Processes, in the Psychological Review, July, 1805.
- Journal of the Society for Psychical Research, vol. vii, p. 250.
- Mr. Myers has Prof. James in mind. | <urn:uuid:8e074691-a79a-4eed-aa16-e115237c1b12> | CC-MAIN-2019-47 | https://en.m.wikisource.org/wiki/Popular_Science_Monthly/Volume_49/August_1896/Spirit_Writing_and_Speaking_with_Tongues | s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496668954.85/warc/CC-MAIN-20191117115233-20191117143233-00458.warc.gz | en | 0.977545 | 7,900 | 2.953125 | 3 |
Ever since the Sorbonne-educated scholar and Zionist activist Jacques Faitlovitch traveled to Ethiopia to visit the Beta Israel in 1904, the blurred boundaries of the Jewish people have been expanding. Increasingly over the last century, Africans, Indians, South Americans, African Americans, and even New Zealand tribespeople have asserted their Jewish identity. Of these, Ethiopian Jews are the best known because their Jewish identity has been halakhically authenticated by the Israeli rabbinate, and the struggling community of about one hundred and thirty thousand Ethiopian Israelis garners a great deal of support in the United States.
Next in order of public awareness are the Abayudaya (“People of Yehuda”) from Uganda. Other African groups who claim Jewish identity include the Lemba of South Africa and Zimbabwe, the Igbo of Nigeria, and numerous smaller groups in Mali, Timbuktu, Cape Verde, and Ghana. Taken all together, these groups number in the tens of thousands.
Two new books set out to explain the origins of the many African and African American groups who identify themselves as Jews or, at least, as descendants of the ancient Israelites. Both books focus primarily on the origins and evolution of these Judaic and black Israelite identities, rather than the practices, doctrines, and lifestyles that they have generated. Black Jews in Africa and the Americas, by Tudor Parfitt, is based upon a series of lectures he gave in 2011 at Harvard's W.E.B. Du Bois Institute. Parfitt is famous for his many expeditions in Africa and the Middle East to uncover the lore surrounding the Lost Tribes of Israel. In economical and fast-paced prose, he argues that "Judaized" societies on the African continent are the product of "the Israelite trope"—the belief, introduced into Africa by European colonialists and Christian missionaries, that some Africans are descendants of the Lost Tribes. Parfitt contends that Africans who identify themselves as Jews or Israelites have adopted and internalized this trope as a narrative of their origins.
In the United States, African American Jews—as well as some black Christians and Rastafarians—also affirm their descent from ancient Israelites (although not specifically from the Lost Tribes). In Chosen People: The Rise of American Black Israelite Religions, Jacob Dorman, a young historian from the University of Kansas, traces the threads of American Black Judaism, Israelite Christianity, and Rastafarianism in abundantly documented detail. In a thesis that parallels that of Parfitt, Dorman says that these religions arose in late-19th-century America “from ideational rather than genealogical ancestry.” They are, to use Benedict Anderson’s famous phrase, “imagined communities,” and their leaders “creatively repurposed” the Old Testament, along with the ideas of the Pentecostal Church, Freemasonry, and, eventually, an Afrocentric historical perspective in a manner that spoke to black Americans.
The first Europeans to explore Africa read the Bible as universal history. Thus, they explained the existence of African peoples they encountered by conceiving of them as one of the Lost Tribes of Israel. The idea that blacks were Jews was persuasive to them because, as Parfitt writes:
[B]oth Jews and blacks were pariahs and outsiders, and in the racialized mind of Europe this shared status implied that Jews and blacks had a shared “look” and a shared black color.
Parfitt cites the research of a German army officer, Moritz Merker, who studied Masai customs and religious practices in the late 19th century. Merker observed “profound and numerous parallels between the Masai’s myths and customs, social structure, and religion and those of the biblical Hebrews.”
The Bible then was largely responsible for giving the Masai a new racial identity . . . He concluded that the Masai and Hebrews once constituted a single people. The Masai in fact were Israelites.
Parfitt traces how European colonialists and missionaries imposed ideas of Israelite descent upon the Ashanti, Yoruba, Zulu, Tutsi, Igbo, and other peoples across sub-Saharan Africa, prompting a subset of each group to adopt the invented stories of origin as historical truth.
For Parfitt, even the Ethiopian Beta Israel, whose Jewish identity has been granted by the Jewish state, root their Jewish origins in legendary, rather than historical material. The Ethiopians themselves, both Beta Israel and Orthodox Christian, shared a belief in descent from King Solomon and the Queen of Sheba, based on their own Kebra Negast, a compilation of ancient lore about the origins of Ethiopia redacted about seven hundred years ago. In 1973, when Chief Sephardic Rabbi of Israel Ovadia Yosef halakhically validated their Jewish identity, he based his opinion on the 16th-century responsum of Rabbi David ben Solomon ibn Abi Zimra (known as the Radbaz), who concluded they were descendants of the tribe of Dan. The Radbaz inherited the idea of the Danite origins of the Ethiopian Jews from the unverifiable writing of Eldad Ha-Dani, a 9th-century traveler who himself claimed to be a Danite. Parfitt quotes Henry Aaron Stern, a Jewish convert to Christianity who visited the Ethiopian Beta Israel in the mid-1800s and subsequently reported: “There were some whose Jewish features no one could have mistaken who had ever seen the descendants of Abraham either in London or Berlin.”
The Israelite trope was useful for Christian missionaries. If Africans were the descendants of the Israelites who had accepted the original covenant between God and Israel, what could make more sense than that they would now enter into the New Covenant? Occasionally, however, the missionaries’ efforts took an unexpected turn, when some Africans were drawn towards the Jewish side of their putative Judeo-Christian legacy.
The Igbo Jews of Nigeria are a case in point. An estimated thirty thousand Igbo practice some sort of Judaism, and there are nearly forty synagogues in Nigeria, which follow different Jewish or quasi-Jewish practices. Although their opinions about their tribe of ancestry (Gad, Zevulun, or Menashe) vary, virtually all accept the general Lost Tribes theory and believe that customs of Israelite origin, like circumcision on the eighth day, methods of slaughtering meat, and endogamy, predated colonialism.
In 1789, Olaudah Equiano, a former American slave who purchased his freedom and moved to London, published a famous and widely read autobiography in which he described similarities in customs between Jews and his own Igbo tribe based on recollections from his childhood. A half-century later, the British Niger expedition of 1841 brought with it letters from London rabbis to the (lost) Jews they expected to find in Africa. By 1860, part of the Bible had been translated into the Igbo language. In 1868, Africanus Horton, an Igbo nationalist educated in London, published a “vindication” of the African race by arguing that “the Igbo could trace their origins back to the Lost Tribes of Israel and that their language was heavily influenced by Hebrew.” Some believe that the word Igbo is derived from Ivri, the Hebrew word for “Hebrew.”
In the late 1960s, the Biafran War of Independence pitted some thirty million Igbo against the much larger Nigeria. The horror of more than a million deaths, most of them Igbo, propelled many more Igbos to identify not only with Jews, but with Israel:
[T]he fact that they [Igbo] were scattered throughout Nigeria as a minority in many cities created the sense of a Jewish-like diaspora, and the “holocaust” of the Biafra genocide provided an even firmer ground for establishing commonalities with Jews. The publicists of the Republic of Biafra compare the Igbo to the “Jews of old” . . . they termed the anti-Igbo riots as anti-Semitic Cossack “pogroms.”
As Parfitt aptly puts it, “Once a myth is established, it often acts as a magnet to the iron filings of supporting evidence: points of comparison and further proofs accumulate and create a consistent, interlocking, convincing whole.”
It has convinced more than the Igbo. Kulanu (All of Us), an American nonprofit that supports vulnerable Jewish communities, has published the writings of Remy Ilona, an Igbo Jew who retells as historical the narrative Parfitt casts as mythic, and a recent documentary film, Re-emerging: The Jews of Nigeria, has brought the Igbo’s story to several Jewish film festivals.
The Abayudaya of Uganda are unique in not claiming descent from a Lost Tribe, but Parfitt argues that they still owe their origins to imported biblical lore. Protestant missionaries from England, which ruled Uganda as a protectorate from 1894 until 1962, evangelized the native population by telling them that “the British themselves had learned this [biblical] wisdom from Oriental foreigners—a people called the Jews. It followed then that if the all-powerful British could accept religious truth from foreigners, so too could Africans.”
Accepting Christianity, together with the idea that Jesus was a Jew, unexpectedly prompted the formation of the Society of the One Almighty God in 1913, which attracted ninety thousand followers who kept the Saturday Sabbath and became known as Malakites (after their leader, Musajjakawa Malaki). More descriptively, they were also known as "Christian Jews." In 1913, Semei Kakungulu, a Protestant convert and a prominent military advisor to the British in their battles with native Muslims, joined the sect. But the British treatment of Africans, and of his own troops in particular, angered Kakungulu, and he declared himself a Jew, Parfitt suggests, as a protest. Kakungulu, Parfitt writes, "used his self-identification as a Jew as a weapon."
Currently, there are about twelve hundred Abayudaya, many of whom have been converted to normative Judaism by visiting American rabbis. The group's spiritual leader, Gershom Sizomu, was ordained by a Conservative beit din in 2008, after he graduated from the Ziegler School of Rabbinic Studies in Los Angeles.
The Lemba of South Africa and Zimbabwe, with whom Parfitt has been most closely associated, are a complicated case. When he first encountered the Lemba in 1985, they told him that they were “blood relatives” of the Falasha and that they came from a place called Sena, to the north and across the sea. Parfitt quotes several comments by early European travelers on “the lighter skin and Jewish appearance of the Lemba” as well as the “Semitic features of the Lemba.” In 1989, a Lemba woman in Soweto told Parfitt:
I love my people . . . we came from the Israelites, we came from Sena, we crossed the sea. We were so beautiful with beautiful long, Jewish noses . . . We no way wanted to spoil our structure by carelessness, eating pig or marrying non-Lemba gentiles!
Fascinated by the legends, Parfitt spent months on an adventurous search for their Sena, which he documented in Journey to the Vanished City (a bestseller that earned him the nickname "the British Indiana Jones" in the popular media). He didn't turn up the ancient city, but an unexpected finding published in The American Journal of Human Genetics startled Parfitt into looking further. A DNA study showed that "50 percent of the Lemba Y chromosomes tested were Semitic in origin, 40 percent were 'Negroid,' and the ancestry of the remainder could not be resolved." A set of six markers, the Cohen Modal Haplotype (or CMH), was present in high frequency in both the priestly class of Jews and the Lemba.
In 1997, Parfitt conducted his own testing, focused around the deserted ancient town of Sena in Yemen. While it was clearly not the Sena of Lemba legends, Parfitt believed there was a connection between this location and the Lemba "on the basis of [unspecified] historical and anthropological data." His study showed that the Buba clan (the "priests" of the Lemba) had a high incidence of the CMH, as did the kohanim. The New York Times, the BBC, and a NOVA television special announced Parfitt's discovery of this "African Jewish tribe." Parfitt, however, is more circumspect and suggests that "the 'Jewishness' of the Lemba may be seen as a twenty-first-century genetic construction."
How important is the genetic criterion in determining inclusion in Jewish peoplehood? The Ethiopian Jews, who have also undergone genetic testing, which they decisively “failed,” are an instructive case. DNA testing showed them to be similar to Ethiopian (Orthodox Christian) men rather than to Israeli Jewish men. Yet, they are almost universally regarded as part of the Jewish people, while the Lemba are not. Genetic links seem neither necessary nor sufficient for Jewish identity. Could it be that Israel and the Jewish Federations of North America believe the Beta Israel are descended “from the Tribe of Dan”? Is this the slender thread—one that can bear no historical weight—that authenticates their Jewish identity? Or is it rather their consistent practice over the centuries? Parfitt’s concisely crafted Black Jews in Africa and the Americas is an invitation for us to reexamine the criteria we actually use to determine Jewishness.
In Chosen People, Jacob Dorman defines a “Black Israelite” religion as one whose core belief is that “the ancient Israelites of the Hebrew Bible were Black and that contemporary Black people [in America] are their descendants.” Contemporary practitioners of one or another variety of this religion can be found in black congregations in Chicago, New York City, and Philadelphia. They generally identify themselves as Jews and Israelites, unlike the Black Hebrews of Dimona, Israel (originally from Chicago), who identify themselves as Israelites but not as Jews. This, of course, does not include the small but significant number of African Americans who have joined mainstream Jewish congregations or who have converted formally.
Like Parfitt, Dorman denies the historicity of the belief in Israelite descent. The earliest impetus to identify with the ancient Israelites was motivated, he says, by a “search for roots among uprooted people.” Slavery had severed African Americans from their history and demeaned their social and economic status. As can be heard in spirituals of the mid-1800s, such as “Go Down, Moses,” blacks resonated with the biblical Israelites as oppressed slaves in need of liberation, and some of them imagined an actual Israelite past for themselves. Christianity was the “white religion” of the slave-owners, while for some blacks, the Israelite religion stood for freedom.
In Chosen People, Dorman rejects the idea that Black Israelite religions flowed only “vertically” from roots in the past. “People also pick up culture more obliquely or ‘horizontally,’” he writes. His preferred metaphor is the rhizome, a plant fed by “subterranean, many-branching, hyper-connected networks.” Black Israelite religions, then, were created from a “host of ideational rhizomes”: biblical interpretations, the American passion for Freemasonry, the Holiness church, Anglo-Israelism, and Garveyism. The power of Dorman’s sometimes sprawling narrative derives from the way he shows how these disparate strands became interwoven.
The two principal founders of Black Israelite religions, William Christian (1856-1928) and William Saunders Crowdy (1847-1908), were both born into slavery. Both of them first became active Freemasons, and Dorman sees this as one source of the new American religion they would found. “It is likely,” he writes, “that Masonic texts as well as Masonic interest in the Holy Land and in biblical history helped both men to formulate their Israelite beliefs.” William Christian preached that “Adam, King David, Job, Jeremiah, Moses’s wife, and Jesus Christ were all ‘of the Black race,’” and emphasized racial equality before God.
[Christian] referred to Christ as “colorless” because he had no human father . . . He was fond of St. Paul’s statement that Christianity crossed all social divisions . . .
Crowdy was passionate and charismatic, but he also had bouts of erratic behavior that landed him in jail. After moving to the all-black town of Langston, Oklahoma in 1891, he was tormented by hearing voices and, during one episode, fled into a forest and experienced a vision compelling him to create The Church of God and Saints of Christ. Influenced by the idea that the ancient Israelites were black, he introduced Old Testament dietary laws, the seventh-day Sabbath, and the observance of biblical holidays. By 1900, Crowdy had opened churches in the Midwest, New York, and Philadelphia. These churches, as well as those founded by William Christian, adhered to the Holiness (later, Pentecostal) tradition. By 1926, Crowdy’s Church of God and Saints of Christ had thousands of followers, and there were more than two hundred branches of Christian’s Church of the Living God.
In Harlem of the early 1900s, the serendipitous proximity of black Americans and Eastern European Jews gradually transformed Black Israelite Christianity. The new leaders of Black Israelism, Arnold Josiah Ford, Samuel Valentine, Mordecai Herman, and Wentworth Arthur Matthew, were all recent Caribbean immigrants who believed that Sephardic Jews fleeing the Spanish Inquisition had intermarried with West Indian blacks, passing on Jewish customs to their descendants. Not entirely unlike the painters, poets, and musicians of the Harlem Renaissance, these men were artists of the religious experience, creatively mixing an impressive array of ingredients that evolved into Black Judaism, Black Islam, Rastafarianism, and other religious orders:
Black Israelism drew on Caribbean carnival traditions, Pentecostal [Holiness] Christianity, Spiritualism, magic, Kabbalah, Freemasonry, and Judaism, in a polycultural creation process dependent not on an imitation or inheritance of Judaism as much as on innovation, social networks, and imagination.
Marcus Garvey’s elevation of Africa to a kind of black American’s Zion became another cornerstone of Black Israelite thought. Josiah Ford, the first Black Israelite to give himself the title “rabbi,” was the music director of Garvey’s Universal Negro Improvement Association and even tried to convert Garvey to Judaism. At Garvey’s headquarters in Liberty Hall, Ford, Valentine, and Herman, who had studied Hebrew with their Ashkenazi neighbors in Harlem, taught liturgical Hebrew to other Israelites. Ford, along with Valentine, soon founded the Beth B’nai Abraham congregation, which was the earliest locus of Black Judaism.
Whereas the Holiness Black Israelite founders had derived Judaic practices from Christian and Masonic texts and practices, Ford and his collaborators . . . rejected Christianity and practiced Jewish rituals with Jewish prayer books.
In his re-creation of Harlem of the 1920s, Dorman demonstrates the symbolic importance to black Americans of Ethiopia. Ethiopia enjoyed widespread adulation as the African nation that had never been colonized. The country’s luster was further enhanced by the fact that it was mentioned frequently in the Bible (as the translation of the Hebrew Cush). Thus, the exciting news of Jacques Faitlovitch’s recent encounters with Ethiopian Jews was a windfall that provided Black Israelites with a missing link to both an apparently authentic African Judaism and an African Zion. Ford actually “made aliyah” to Ethiopia with a small cadre of Beth B’nai Abraham members, and Dorman recounts the expedition’s tragicomic misadventures in dramatic detail.
With Ford gone, the mantle of leadership fell upon Wentworth Matthew. It was Matthew who eventually stripped Christianity out of what had begun as a Holiness practice. In Harlem in 1919 he founded the Commandment Keepers Congregation of the Living God, later renaming it the Commandment Keepers Royal Order of Ethiopian Hebrews. He also established a school, which, as the International Israelite Rabbinical Academy, still ordains the rabbis of Black Judaism.
Matthew’s own certificate of ordination, or smicha, was mailed to him from Ethiopia by Ford, and is signed only by an official of Ethiopia’s Orthodox Christian Church. This curious detail is representative of Dorman’s unsparing revelations concerning Rabbi Matthew, and they are sure to raise eyebrows and hackles among Black Jews who revere him to this day as a founding figure. Dorman draws from a wide array of archival documents—he calls them Matthew’s “hidden transcripts”—to indicate a more gradual and ambiguous abdication of Christian and “magical” practices than is commonly believed, though he ultimately does describe Matthew as having “transitioned from ‘Bishop’ to ‘Rabbi’ and from Holiness Christianity to Judaism.”
Chosen People is unique in placing Black Israelite religions in the complex context of American history and is the most comprehensive work of scholarship on this topic, but it is not always an easy or straightforward read. For example, Crowdy’s biography and role are introduced in a dozen pages in Chapter 1, but then much of this material is restated in a slightly different context in Chapter 3. Nevertheless, Dorman’s nuanced analysis of Israelite religion and the dramatic stories he tells along the way make it well worth the moments of déjà vu. No one attempting to understand the rise of Black Israelite religions in America can afford to do without Chosen People.
Like Parfitt, Dorman concludes his book by raising the question of religious identity. He argues that “‘Polyculturalism’ is a better term than ‘syncretism’ for describing the process by which African Americans have created new religions in the twentieth century.” Those who have worshipped at black congregations in the Rabbi Matthew tradition can attest to the accuracy of Dorman’s terminology: The service, siddur, and the Torah reading are virtually the same as those of mainstream synagogues, but the call-and-response preaching, the music, and the religious fervor are deeply African American.
As Black Israelites, these faith practitioners believe they are Jews by descent. Chosen People challenges that historical premise. If, however, one were to ask Dorman whether they are Jewish, I suspect that his answer can be found in the final paragraph of the book, where he challenges the reader “to look at the world, its peoples, and cultures as riotously impure.”
Connexions Resource Centre:
Focus on Health
Recent & Selected Articles
- This is a small sampling of articles related to health issues in the Connexions Online Library. For more articles, books, films, and other resources, check the Connexions Library Subject Index, especially under topics such as
and occupational health and safety.
- Air pollution now 'largest health crisis' (November 23, 2018)
The WHO estimates that seven million premature deaths are linked to air pollution every year, of which nearly 600,000 are children who are uniquely vulnerable.
- "Right to Try" Is a Cruel Farce (August 12, 2018)
Drug companies want you to think they're providing glimmers of hope to terminally ill patients. Don't believe them.
- The birth of the Cuban polyclinic (June 28, 2018)
During the 1960s, Cuban medicine experienced changes as tumultuous as the civil rights and antiwar protests in the United States. While activists, workers, and students in western Europe and the United States confronted existing institutions of capitalism and imperialism, Cuba faced the even greater challenge of building a new society.
- Ten Theses on Farming and Disease (June 8, 2017)
There's a growing understanding of the functional relationships health, food justice, and the environment share. They're not just ticks on a checklist of good things capitalism shits on.
- Ontario health-care reform and Community Health Centres (January 7, 2017)
We already have a working model of primary care that targets these populations and that is very good at dealing with complex needs and providing holistic care. Community Health Centres (CHCs) have been in existence for decades all over Canada, providing care to communities that are not well served by other models of primary care.
- The Precautionary Principle: the basis of a post-GMO ethic (April 18, 2016)
GMOs have been in our diets for about 20 years. Proof that they are safe? No way - it took much, much longer to discover the dangers of cigarettes and transfats, dangers that are far more visible than those of GMOs. On the scale of nature and ecology, 20 years is a pitifully short time. To sustain our human future, we have to think long term.
- What is Meant by 'Single-Payer' in the Current Discussion of Health Care Reforms During the Primaries? (March 10, 2016)
Single-payer means that most of the funds used to pay for medical care are public, that is, they are paid with taxes. The government, through a public authority, is the most important payer for medical care services and uses this power to influence the organization of health care. The overwhelming majority of developed countries have one form or another of a single-payer system.
- Uranium Mine and Mill Workers are Dying, and Nobody Will Take Responsibility (February 15, 2016)
To talk to former uranium miners and their families is to talk about the dead and the dying. Brothers and sisters, coworkers and friends: a litany of names and diseases. Many were, as one worker put it, "ate up with cancer," while others died from various lung and kidney diseases.
- How Class Kills (November 8, 2015)
A recent study showing rising mortality rates among middle-aged whites drives home the lethality of class inequality.
- 'Worse Than We Thought': TPP A Total Corporate Power Grab Nightmare (November 5, 2015)
On issues ranging from climate change to food safety, from open Internet to access to medicines, the Trans-Pacific Partnership (TPP) is a disaster.
- TPP is "Worst Trade Agreement" for Medicine Access, Says Doctors Without Borders (October 7, 2015)
The TPP [Trans-Pacific Partnership] will "go down in history as the worst trade agreement for access to medicines in developing countries," said Doctors without Borders/Médecins Sans Frontières (MSF) in a statement following the signing of the TPP trade deal.
- Restrict antibiotics to medical use, or they will soon become ineffective (April 6, 2015)
Antibiotics have saved hundreds of millions of lives since they came into use in the 1930s, but their power is running dry thanks to their massive use in factory farming, horticulture, aquaculture and industry.
- Health Care and Immigration Policies that Kill (March 17, 2015)
Cuts to Canada's Interim Federal Health Program (IFHP), severely curtail access to health-care services for refugee claimants and refugees. Many beneficiaries and practitioners were already critical of the original IFHP because it provided inconsistent access to health care and many services were not covered. The situation only worsened after the cuts.
- What austerity has done to Greek healthcare (January 26, 2015)
The shocking 'austerity'-imposed destruction of Greece's once proud healthcare system is a key reason Greeks have turned to Syriza, finds London GP Louise Irvine in an eye witness account.
- Film Review: Revolutionary Medicine - A Story of the First Garifuna Hospital (September 13, 2014)
A review of the provocative documentary Revolutionary Medicine, which tells the story of the first Garifuna hospital, in Honduras.
- 'Moral bankruptcy of capitalism': UK's top public doctor shames western society over Ebola (August 3, 2014)
Western countries should tackle drugs firms' "scandalous" reluctance to invest in research into the virus which has already killed over 700 people in West Africa, the UK's top public doctor said, adding, "They'd find a cure if Ebola came to London."
- It's Raining Bombs and Shells (July 31, 2014)
I'm still alive. I don't know what this means, but I can say that most of the time I can still walk and do some work with people who need help. It all depends on my luck. And here, for people living in Gaza, luck means how close to you the bombs fall from Israel's tanks, planes, or warships. Some hours it's raining bombs. Americans say "It's raining cats and dogs." In the new Gaza idiom, we say "It's raining bombs and shells."
- Glyphosate is a disaster for human health (April 30, 2014)
Extensive, long-running evidence for the cancer-causing effects of glyphosate, and other toxic impacts, has been ignored by regulators. Indeed, as the evidence has built up, permitted levels in food have been hugely increased.
- The chemical dangers in food packaging (March 1, 2014)
The long-term effects of synthetic chemicals used in packaging, storing, and processing food could be damaging our health, scientists have warned.
- Black Sites across America (February 4, 2014)
There are 2.3 million people in US prisons in conditions that are often inhumane and at worst life threatening. The most striking aspect of this scene is the lack of decent medical care for prisoners, whether in solitary confinement or in the general prison population.
- Scientific journal retracts study exposing GM cancer risk (December 5, 2013)
The Journal of Food and Chemical Toxicology appears to have violated scientific standards by withdrawing a study which found that rats fed on a Monsanto GM corn were more likely to develop cancer than controls.
- The Sleepwalkers Are Revolting (November 22, 2013)
The Centers for Disease Control's finding that sleep deprivation has reached epidemic proportions has failed to generate significant public outcry.
- Cancer is Capitalist Violence (November 15, 2013)
It's been two decades since the publication of Martha Balshem's landmark study, Cancer in the Community: Class and Medical Authority (1993). Balshem, a hospital-based anthropologist, documented how a Philadelphia lay community rejected medical advice to stop smoking, eat fruits and vegetables and schedule regular screening tests. The working class community of Tannerstown (a pseudonym) instead blamed air pollution from highway traffic and nearby chemical plants, as well as fate, for their cancers.
- Held hostage by Big Pharma: a personal experience (2013)
Mike Marqusee looks at how drug firms can make huge profits from their state-enforced monopoly on an essential good.
- Researchers find link between aircraft noise and heart disease (2013)
Exposure to high levels of aircraft noise is associated with an increased risk of cardiovascular disease, two studies find. Researchers found increased risks of stroke, coronary heart disease, and cardiovascular disease for both hospital admissions and mortality, especially among the 2% of the study population exposed to the highest levels of daytime and night time aircraft noise.
- Bad Pharma, Bad Journalism (October 23, 2012)
The drugs don't work: a modern medical scandal, an extract from Ben Goldacre's new book Bad Pharma, presents a disturbing picture of corporate drug abuse.
- Medicare Myths and Realities (May 1, 2012)
Since medicare is an extremely popular social program, the media and right-wing politicians have learned that it is unwise to attack it directly. Instead, they propagate myths designed to undermine public support for, and confidence in, the health care system, with the goal of gradually undermining and dismantling it.
- Island of the Widows (December 12, 2011)
Mysterious kidney disease in Central America.
- Connexions Archive Case Statement (September 24, 2011)
Working together to secure a future for the past
- ER certainties: death and co-pays (September 1, 2011)
Our society has made choices that dehumanize all of us. Dehumanization is felt inside and outside the shop floor. The HMO's bottom line is not about how well the patient's illness is treated, but how to minimize costs. They remind us employees daily that we're a business. The corporate ethos is the survival of the business above all, over anyone else's survival.
- Traffic Noise Increases the Risk of Having a Stroke, Study Suggests (January 27, 2011)
Exposure to noise from road traffic can increase the risk of stroke, particularly in those aged 65 years and over.
- Asia Inhales While the West Bans the Deadly Carcinogen (February 16, 2010)
Asbestos, a known carcinogen banned in much of the world, is a common and dangerous building block in much of Asia's development and construction boom. This white powder causes 100,000 occupational deaths per year, according to Medical News Today.
- Connexions Archive seeks a new home (November 18, 2009)
The Connexions Archive, a Toronto-based library dedicated to preserving the history of grassroots movements for social change, needs a new home.
- Your Money, Or Your Life (October 5, 2009)
Single Payer will save lives, but it also will save money. The exorbitant salaries of Insurance Company CEOs will be eliminated. The profit motive for investors will be eliminated. Administrative costs will be reduced because one single payer will replace a large number of insurance companies - all with different forms, different standards, and different requirements for an endless stream of mind-numbing paper work.
- 5 Things the Corporate Media Don't Want You to Know About Cannabis (September 23, 2009)
Recent scientific reports suggest that pot doesn't destroy your brain, that it doesn't cause lung damage like tobacco -- but you won't hear it in the corporate media.
- Treading the Borders Between Life and Death (September 22, 2009)
During Israel's Operation Cast Lead in December 2008 - January 2009, Israeli forces killed 16 emergency medical staff and injured 57. According to the Palestinian Centre for Human Rights (PCHR), perhaps hundreds of those killed could have survived if emergency services had been able to access them promptly - the access denied to them can be defined as a deliberate violation of the Geneva Conventions and therefore a war crime.
- The misbegotten 'war against cancer' (September 21, 2009)
- Meet the Real Death Panels (September 18, 2009)
Harvard-based researchers found that uninsured, working-age Americans have a 40 percent higher risk of death than their privately insured counterparts, up from a 25 percent excess death rate found in 1993.
- South Africa: Redouble Efforts to Reduce Maternal Mortality (September 10, 2009)
Maternal health has been under the spotlight in South Africa after an analysis of maternal deaths was released in July showing an increase in the country's maternal mortality rate. Researchers found that nearly four out of every 10 deaths (38.4 percent) were avoidable. They identified non-attendance and delayed attendance as common problems, together with poor transport facilities, lack of health care facilities and lack of appropriately trained staff.
- Health Care Around the World (August 31, 2009)
Overview of the various ways health services are provided around the world, as well as accompanying issues and challenges. Topics include health as a human right, universal health care, and primary health care.
- Health and environmental victories for South African activists (August 20, 2009)
In South Africa, major advances in health and the environment during the 2000s were only won by social activists by removing the profit motive.
- Israeli Doctors Collude in Torture (June 30, 2009)
Israeli human rights groups charge that Israel's watchdog body on medical ethics has failed to investigate evidence that doctors working in detention facilities are turning a blind eye to cases of torture.
- Testimony of David U. Himmelstein, M.D. before the HELP Subcommittee (April 23, 2009)
A single-payer reform would make care affordable through vast savings on bureaucracy and profits. As my colleagues and I have shown in research published in the New England Journal of Medicine, administration consumes 31 percent of health spending in the United States, nearly double what Canada spends. In other words, if we cut our bureaucratic costs to Canadian levels, we'd save nearly $400 billion annually - more than enough to cover the uninsured and to eliminate co-payments and deductibles for all Americans.
- Addiction and Control (January 19, 2009)
Prisons are very profitable. There are private prisons nowadays. The people that own them have, as their mission, first and foremost, the making of money. They need as many people as possible in prison to maximize their profits. They also need to spend as little as possible on the inmates and staff. Thus, America has over 2.3 million people incarcerated; more than any other country.
- What We Mean By Social Determinants of Health (September 9, 2008)
Analyzes the changes in health conditions and quality of life in the populations of developed and developing countries over the past 30 years, resulting from neoliberal policies developed by many governments and promoted by international agencies. Critiquing a WHO report on social determinants of health, Navarro argues that it is not inequalities that kill people; it is those who are responsible for these inequalities that kill people.
- How we learned to stop having fun (April 2, 2007)
We used to know how to get together and really let our hair down. Then, in the early 1600s, a mass epidemic of depression broke out - and we've been living with it ever since. Something went wrong, but what?
- Health care and children in crisis in Gaza (March 26, 2007)
These days one hears a lot about Canadian soldiers in Afghanistan, adults who have been specifically trained for warfare, who are nevertheless traumatized by the experience of seeing comrades injured or killed, or suffering injuries or danger themselves. The trauma goes on, long after the experience has ended and they are back in a place of safety. How much worse then for children in Gaza who witness and experience these events day after day, week after week with no end and with no place of safety.
- Noise Pollution: A Modern Plague (2007)
Environmental noise pollution, a form of air pollution, is a threat to health and well-being. It is more severe and widespread than ever before, and it will continue to increase in magnitude and severity because of population growth, urbanization, and the associated growth in the use of increasingly powerful, varied, and highly mobile sources of noise. It will also continue to grow because of sustained growth in highway, rail, and air traffic, which remain major sources of environmental noise. The potential health effects of noise pollution are numerous, pervasive, persistent, and medically and socially significant.
- National Post columnist traumatized by having to wait his turn (December 26, 2006)
Columnist thinks people with money should get quicker treatment in emergency rooms than people who are poor.
- Disaster and Mental Health (2005)
The continuing Israeli military occupation of Gaza is the cause of deep and widespread trauma for Palestinian children and adults.
- Health Disparities By Race And Class: Why Both Matter (2005)
This essay examines three competing causal interpretations of racial disparities in health. The first approach views race as a biologically meaningful category and racial disparities in health as reflecting inherited susceptibility to disease. The second approach treats race as a proxy for class and views socioeconomic stratification as the real culprit behind racial disparities. The third approach treats race as neither a biological category nor a proxy for class, but as a distinct construct, akin to caste. The essay points to historical, political, and ideological obstacles that have hindered the analysis of race and class as codeterminants of disparities in health.
- Inequalities Are Unhealthy (June 1, 2004)
The growing inequalities we are witnessing in the world today are having a very negative impact on the health and quality of life of its populations.
- Anti-Vaccination Fever (January 1, 2004)
Sensationalist media, religious fanatics, and alternative medical practitioners fanned the fires created by questionable research to spawn worldwide epidemics of a disease that has almost been forgotten.
- The Truth About the Drug Companies (2004)
The combined profits for the ten drug companies in the Fortune 500 ($35.9 billion) were more than the profits for all the other 490 businesses put together ($33.7 billion) [in 2002]. Over the past two decades the pharmaceutical industry has moved very far from its original high purpose of discovering and producing useful new drugs. Now primarily a marketing machine to sell drugs of dubious benefit, this industry uses its wealth and power to co-opt every institution that might stand in its way.
- Abandoning the Public Interest (October 7, 2000)
The neo-liberal drive to cut red tape is costing lives. Exposing the hidden costs of deregulation and privatization.
- Contamination: The Poisonous Legacy of Ontario's Environmental Cutbacks (June 4, 2000)
The story of Ontario's right-wing Harris government, which gutted health and environmental protection polices, leading to the Walkerton water disaster.
- Indoor Air Quality: No Scents is Good Sense (January 1, 1998)
Establishing a scent-free workplace.
- Community Noise (1995)
Critically reviews the adverse effects of community noise, including interference with communication, noise-induced hearing loss, annoyance responses, and effects on sleep, the cardiovascular and psychophysiological systems, performance, productivity, and social behaviour.
- Health News Briefs 1992- 1994 (January 1, 1995)
A round-up of health care in the news, 1992 - 1994.
- Health News Briefs 1987 - 1991 (January 1, 1992)
A round-up of health care in the news, 1987 - 1991.
- Connexions Annual Overview: Health (October 1, 1989)
Selected Organizations, Websites and Links
- This is a small sampling of organizations and websites concerned with health issues in the Connexions Directory. For more organizations and websites, check the Connexions Directory Subject Index, especially under topics such as
and occupational health and safety.
- Canadian Centre for Occupational Health and Safety
CCOHS promotes a safe and healthy working environment by providing information and advice about occupational health and safety.
- Canadian Doctors for Medicare
One of the chief aims of Canadian Doctors for Medicare (CDM) is to provide a counterpoint to organizations and interests advocating for two-tier medicine. Equally important, CDM's voice will complement the advocacy of other groups seeking to preserve and improve Medicare and fight against privatization.
- The Canadian Health Coalition
Dedicated to protecting and expanding Canada's public health system for the benefit of all Canadians.
A web portal featuring information and resources about health, with articles, documents, books, websites, and experts and spokespersons. The home page features a selection of recent and important articles. A search feature, subject index, and other research tools make it possible to find additional resources and information.
- HealthWatcher.net Consumer Health Watchdog
Web site which seeks to expose quackery and bogus practices in health care, including cancer quackery and diet scams.
- National Council Against Health Fraud
Focusing on health misinformation, fraud, and quackery as public health problems. Our positions are based upon the principles of science that underlie consumer protection law. We advocate: (a) adequate disclosure in labeling and other warranties to enable consumers to make truly informed choices; (b) premarketing proof of safety and effectiveness for products and services claimed to prevent, alleviate, or cure any health problem; and, (c) accountability for those who violate the law.
- Physicians for Human Rights - Israel
Physicians For Human Rights-Israel was founded with the goal of struggling for human rights, in particular the right to health, in Israel and the Occupied Territories. Human dignity, wellness of mind and body and the right to health are at the core of the world view of the organization and direct and instruct our activities and efforts on both the individual and general level. Our activities integrate advocacy and action toward changing harmful policies and direct action providing healthcare.
Purpose is to combat health-related frauds, myths, fads, and fallacies. Its primary focus is on quackery-related information that is difficult or impossible to get elsewhere. Includes links to other interesting websites.
Other Links & Resources
- Canadian Health Services Research Foundation
Supports the evidence-based management of Canada's healthcare system by facilitating knowledge transfer and exchange - bridging the gap between research and healthcare management and policy.
- Centers for Disease Control and Prevention (U.S.)
Respected source of health information.
- Connexions Library: Food Focus
Selected articles, books, websites and other resources on food.
- Disabilities Topic Index in Sources Directory of Experts
A subject guide to experts and spokespersons on disability topics in the Sources directory for the media.
- Diseases & Illnesses Topic Index in Sources Directory of Experts
A subject guide to experts and spokespersons on topics related to illness and diseases in the Sources directory for the media.
- Dr. med. Mabuse
The books published by Mabuse-Verlag and our journal, Dr. med. Mabuse, are committed to a social and humane approach to medicine and care; they are addressed not only to professionals in the health care sector but aim to make the subject accessible to all interested readers. The difficult but necessary dialogue between the professional groups is especially close to our hearts.
- Drugs & Pharmaceuticals Topic Index in Sources Directory of Experts
A subject guide to experts and spokespersons on topics related to drugs and pharmaceuticals in the Sources directory for the media.
- Food & Nutrition Topic Index in Sources Directory of Experts
A subject guide to experts and spokespersons on topics related to food and nutrition in the Sources directory for the media.
- Health & Safety Topic Index in Sources Directory of Experts
A subject guide to experts and spokespersons on topics related to health and safety in the Sources directory for the media.
- Health Care Politics Economics Topic Index in Sources Directory of Experts
A subject guide to experts and spokespersons on topics related to health care politics and economics in the Sources directory for the media.
- Health Care Workers Topic Index in Sources Directory of Experts
A subject guide to experts and spokespersons on topics related to health care workers in the Sources directory for the media.
- Health Treatments Interventions Procedures Topic Index in Sources Directory
A subject guide to experts and spokespersons on topics related to health care treatments, interventions, and procedures in the Sources directory for the media.
- Healthy Skepticism
Misleading drug promotion harms health and wastes money. An international non-profit organisation for everyone interested in improving health care.
- Hospitals Clinics Health Care Facilities Topic Index in Sources Directory
A subject guide to experts and spokespersons on topics related to hospitals, clinics, and health care facilities in the Sources directory for the media.
- Ontario Health Coalition
Network of grassroots community organizations representing virtually all areas of Ontario. Our primary goal is to empower the members of our constituent organizations to become actively engaged in the making of public policy on matters related to health care and healthy communities.
- Preventive Health Topic Index in Sources Directory of Experts
A subject guide to experts and spokespersons on topics related to preventive health in the Sources directory for the media.
- Profit is not the Cure
Web site sponsored by the Council of Canadians to defend public health care. Turn off the sound when visiting this Web site; a badly misguided Web designer has saddled this site with sound effects apparently intended to drive users away so they never come back.
- Psychology Psychiatry Mental Health Topic Index in Sources Directory
A subject guide to experts and spokespersons on topics related to psychology, psychiatry, and mental health in the Sources directory for the media.
- Safety Topic Index in Sources Directory of Experts
A subject guide to experts and spokespersons on topics related to safety in the Sources directory for the media.
- Sources Select Health Experts
Health experts available to take media calls about their area of expertise.
- World Health Organization
The WHO is the United Nations' specialized agency for health, established in 1948. WHO's objective is the attainment by all peoples of the highest possible level of health. Health is defined in WHO's Constitution as a state of complete physical, mental and social well-being and not merely the absence of disease or infirmity.
Books, Films and Periodicals
- This is a small sampling of books related to health issues in the
Connexions Online Library. For more books and other resources, check the Connexions Library
Subject Index, especially under topics such as
and occupational health and safety.
- Betrayal of Trust
The Collapse of Global Public Health
Author: Garrett, Laurie
The story of recent failings of public health systems across the globe.
- Big Pharma Making a killing
New Internationalist November 2003
A look at big pharmaceutical companies and issues surrounding their pursuit of profit at the expense of people's health.
- Deception By Design
Pharmaceutical Promotion in the Third World
Author: Lexchin, Joel; Kaur, Shila Rani
The authors discuss the workings of the pharmaceutical industry by exposing the unethical marketing practices, double standards and weak marketing codes.
- Health Hazard
New Internationalist January/February 2001
A look into the history of public health and the challenges it is facing.
- How to Dismantle the NHS in 10 Easy Steps
Author: El-Gingihy, Youssef
The story of how the British National Health Service (NHS) has been gradually converted into a market-based healthcare system over the past 25 years. This process is accelerating under the Coalition government and the very existence of a National Health Service is in danger.
- Medical Reform Newsletter
Newsletters published by the Medical Reform Group of Ontario. The newsletters were published under several different names, including Medical Reform Group News; MRG Newsletter, and Medical Reform. The Connexions Archive has an almost-complete set.
- The No-Nonsense Guide to HIV/AIDS
Author: Usdin, Shereen
This book gives an overview of the origins of HIV, the ways in which it spreads, the profits made by drug companies, women's special vulnerability and the positive action being taken by people and communities to fight back.
- The No-Nonsense Guide to World Health
Author: Usdin, Shereen
- Revolutionary Medicine - A Story of the First Garifuna Hospital
Author: Geglia, Beth; Freeston, Jesse (directors)
The story of the building of a hospital in Ciriboya, Honduras -- an authentic, grass-roots, community development project, from the initial community meetings, the organized planning, the community defense committees, to the actual bricks, mortar and staffing. The viewer of Revolutionary Medicine is guided through the process in a series of compelling interviews with doctors, patients and community protagonists.
- Trick or Treatment?
Alternative Medicine on Trial (North American title: Trick or Treatment: The Undeniable Facts about Alternative Medicine)
Author: Singh, Simon; Ernst, Edzard
Evaluates the scientific evidence for acupuncture, homeopathy, herbal medicine, and chiropractic, and briefly covers 36 other treatments. It finds that the scientific evidence for these alternative treatments is generally lacking. Homeopathy is concluded to be completely ineffective: "It's nothing but a placebo, despite what homeopaths say."
Although the book presents evidence that acupuncture, chiropractic and herbal remedies have limited efficacy for certain ailments, the authors conclude that the dangers of these treatments outweigh any potential benefits. Potential risks outlined by the authors include contamination or unexpected interactions between components in the case of herbal medicine, risk of infection in the case of acupuncture, and the potential for chiropractic manipulation of the neck to cause delayed stroke.
Learning from our History
Resources for Activists
The Connexions Calendar - An event calendar for activists. Submit your events for free here.
Media Names & Numbers - A comprehensive directory of Canada’s print and broadcast media.
Sources - A membership-based service that enables journalists to find spokespersons and story ideas, and which simultaneously enables organizations to raise their profile by reaching the media and the public with their message.
Organizing Resources Page - Change requires organizing. Power gives way only when it is challenged by a movement for change, and movements grow out of organizing. Organizing is qualitatively different from simple “activism”. Organizing means sustained long-term conscious effort to bring people together to work for common goals. This page features a selection of articles, books, and other resources related to organizing.
Publicity and Media Relations - A short introduction to media relations strategies.
Grassroots Media Relations - A media relations guide for activist groups.
Socialism Gateway - A gateway to resources about socialism, socialist history, and socialist ideas.
Marxism Gateway - A gateway to resources about Marxism.
This is an excerpt from “E(race)ing Inequities: The State of Racial Equity in North Carolina Public Schools” by the Center for Racial Equity in Education (CREED). Go here to read the full report and to find all content related to the report, including the companion report Deep Rooted.
In this concluding section, we summarize the results of over 30 indicators of educational access and achievement examined, provide interpretations that span the full analysis and the six racial groups studied, discuss the significance of the overall findings, and explain the project’s relationship to the ongoing work of its parent organization, CREED. Numerous directions for change flow from the analysis in this report, but a full explanation of those is beyond the scope of this initial examination. Given that comprehensive analyses of racial equity in North Carolina public schools are not being conducted by other educational institutions in the state, the primary focus of the present work is to:
- Provide an empirical basis for nuanced understanding of how race influences the educational experiences of students,
- Identify key areas for future in-depth study, and
- Indicate directions for intervention intended to provide equitable access to the benefits of public education in our state.
This report asked two broad questions: 1) Does race influence educational access and outcomes? 2) Does race influence access and outcomes after accounting for other factors, such as gender, socioeconomic status, language status, (dis)ability status, and giftedness? In this section we frame our answers to those questions in terms of accumulated (dis)advantage. We seek to assess the overall educational trajectory of racial groups in the state based on aggregate levels of access and achievement/attainment. As we have done throughout the report, White students are the reference group in comparisons.
It is important to note that analyses of data already collected, like those in this report, cannot establish causal links between measurements. That is, we cannot directly link student groups with less opportunity and access to diminished educational success as measured by achievement and attainment outcomes. However, we do ask that readers recognize the clear logical relationship between access and outcomes, as well as the cyclical nature of educational (dis)advantage. Children with less access have enhanced likelihood of school failure (broadly speaking), which in turn diminishes future access/opportunity, and so forth in a fashion that tends to accumulate even more barriers to educational success.
We also call attention to the systemic nature of our findings, which assess racial equity in all schools in the state, across virtually all readily available metrics, and among all U.S. Census designated racial groups. All of this is done in the context of the statutory and policy framework set forth by the North Carolina Constitution, the General Assembly, the State Board of Education, and the Department of Public Instruction.
The full analysis leaves no doubt that race is a powerful predictor of access, opportunity, and outcomes in North Carolina public schools. Furthermore, race affects the educational experiences of students in a very clear and consistent fashion, with Asian and White students tending to accumulate educational advantage and non-Asian student groups of color tending to accumulate disadvantage. Table 15.1 provides a simple visual representation of the relative advantage/disadvantage of student groups of color as compared to White students. A + denotes advantage and a – denotes disadvantage as compared to White students on the same indicator. The … symbol indicates no statistical differences. With 44 points of analysis and six student groups of color, there are a total of 264 possible pairwise comparisons.
Approximately 87% (231 of 264) of comparisons were statistically significant (p≤.05), meaning there is a very low probability that the observed result was due to chance. While this kind of comparison is imprecise by nature, it provides a broad measure of the extent of educational advantage/disadvantage at the state level. Most non-significant comparisons (21 out of 33) were between Pacific Islanders and Whites, which is likely due to the small number of Pacific Islanders in the state rather than because there are not substantial differences. Given the direction of the Pacific Islander vs. White comparisons that were significant, it is likely that with more Pacific Islander students, more significant negative comparisons would be revealed.
If we compare all six student groups of color to Whites, 82% (191 out of 231) of significant comparisons indicated advantage to Whites. Most cases (31 of 41) where students of color had advantage are in comparisons between Asians and Whites (more on this below), leaving only 10 instances (out of 187) of advantage for non-Asian students of color. Thus, if we only look at the five non-Asian student groups of color, approximately 95% of significant comparisons indicated advantage to Whites.
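These shares follow directly from the counts just reported. As a quick arithmetic check only (a minimal sketch that reuses the tallies stated above rather than re-analyzing any underlying data), the percentages can be reproduced as follows:

```python
# Arithmetic check of the comparison tallies reported in this section.
# Every count below is taken from the text; nothing is re-estimated here.

indicators = 44        # points of analysis in Table 15.1
groups_of_color = 6    # racial groups compared against White students

total_comparisons = indicators * groups_of_color   # 264 possible pairwise comparisons
significant = 231                                  # comparisons significant at p <= .05
white_advantage = 191                              # significant comparisons favoring Whites

print(f"{significant / total_comparisons:.1%}")    # 87.5% -> "approximately 87%" significant
print(f"{white_advantage / significant:.1%}")      # 82.7% -> "82%" favoring Whites

# Setting aside the Asian-White comparisons leaves 187 significant comparisons,
# only 10 of which favored the non-Asian group of color.
non_asian_significant = 187
non_asian_favoring_whites = non_asian_significant - 10   # 177
print(f"{non_asian_favoring_whites / non_asian_significant:.1%}")   # 94.7% -> "approximately 95%"
```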
Multiracial students were disadvantaged in every significant comparison. American Indians and Pacific Islanders were advantaged in a single comparison. Black students were disadvantaged in all but three indicators (chronic absenteeism, dropout/graduation, and postsecondary intentions). Hispanic students were disadvantaged in all but four indicators (in-school suspension, out-of-school suspension, suspensions of subjective offenses, and chronic absenteeism). Asians outperformed other groups on all indicators of academic achievement and attainment despite numerous points of comparative disadvantage across the indicators of access.
Most of the symbols (+/-/…) in Table 15.1 represent predicted results of student groups of color compared to White students after controlling for other relevant factors (gender, socioeconomic status, language status, (dis)ability status, giftedness, suspension). In other words, they are not based on simple tallies or statewide averages of the various indicators. For instance, the symbols for GPA do not simply show that average GPAs among Whites are lower than Asians and higher than other groups, but that these same gaps remain after factoring out other predictors in a way that isolates the effect of race.
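The report does not reproduce its model code here, but the kind of adjusted comparison this paragraph describes is conventionally estimated with a regression that includes race alongside the control variables. The sketch below is illustrative only; the data file and column names (gpa, race, gender, econ_disadvantaged, ell, swd, aig, suspended) are hypothetical placeholders, not the report's actual variables or model.

```python
# Illustrative sketch: estimating race gaps in GPA after controlling for other
# student characteristics. All file and column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

students = pd.read_csv("student_level_data.csv")  # hypothetical student-level file

model = smf.ols(
    "gpa ~ C(race, Treatment(reference='White'))"
    " + C(gender) + econ_disadvantaged + ell + swd + aig + suspended",
    data=students,
).fit()

# Each race coefficient is the predicted GPA gap relative to White students
# after factoring out the listed controls -- the kind of adjusted result the
# +/-/... symbols in Table 15.1 summarize.
print(model.params.filter(like="race"))
```

Binary outcomes such as suspension or dropout would be handled in the same spirit with a logistic model (smf.logit) rather than ordinary least squares.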
The remaining indicators measure exposure to benefit/penalty based on the racial composition of schools, such as Honors Courses Access, AP Courses Access, Schools with Novice Teachers, Teacher Vacancy, and Teacher Turnover. In these cases, we are asking if schools with greater proportions of students of color have different levels of access to rigorous coursework and the most effective teachers. As such, all student populations are examined together in a more binary fashion (White/not White).
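To make the school-composition approach concrete, the sketch below shows one common way such an exposure analysis is set up: compute each school's share of students of color (collapsing race to White/not White) and relate it to an access measure. The files and field names (school_id, race, ap_courses_offered) are assumed for illustration and are not drawn from the report.

```python
# Illustrative sketch: relating each school's share of students of color to an
# access measure such as the number of AP courses offered. Names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

students = pd.read_csv("student_level_data.csv")  # hypothetical: one row per student
schools = pd.read_csv("school_level_data.csv")    # hypothetical: one row per school

# Share of students of color per school, with race collapsed to White / not White.
composition = (
    students.assign(student_of_color=lambda d: (d["race"] != "White").astype(int))
    .groupby("school_id")["student_of_color"]
    .mean()
    .rename("pct_students_of_color")
    .reset_index()
)

merged = schools.merge(composition, on="school_id")

# A simple school-level model: does the access measure shrink as the share of
# students of color grows?
model = smf.ols("ap_courses_offered ~ pct_students_of_color", data=merged).fit()
print(model.params["pct_students_of_color"])
```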
While we cannot establish statistical causation, an examination of Table 15.1 and the associated results tables throughout the report makes it clear that, overall, the same racial groups with accumulated disadvantage on access variables (i.e. teachers, rigorous coursework, discipline, EC status, AIG status) also have diminished outcomes (i.e. EOG/EOC scores, SAT, ACT, graduation). This makes it exceedingly difficult not to connect barriers to access and opportunity with attendant achievement and attainment outcomes. It also highlights the systemic nature of racial inequity in North Carolina public schools. Were all students, regardless of racial background, to enter the North Carolina public school system with similar levels of readiness, ability, and educational resources, our results suggest that the current system would function to constrain the educational success of non-Asian student groups of color in such a way that upon exiting the system, these same groups would be less prepared for college, career, and adult life. As such, the core interpretation of the full analyses conducted for this report is that in all but a handful of cases, systemic barriers to access and opportunity feed educational disadvantage among non-Asian student groups of color in North Carolina public schools.
Before we share conclusions related to the state of racial equity for individual racial groups, we should point out two bright spots in the data. Although there are clear racialized patterns in the distribution of novice teachers, racial groups in North Carolina appear to have reasonably equitable access to experienced teachers as measured by years of experience. Although statewide data includes a substantial number of teachers with “unknown” qualifications, North Carolina is clearly committed to staffing qualified teachers, with the vast majority having licenses and college degrees in their content area.
There are, as noted, exceptions to the overarching conclusion of our analysis that systemic barriers to access and opportunity feed educational disadvantage among non-Asian student groups of color. For instance, while they do not face the same level of systemic disadvantage, the achievement and attainment results of Asian students indicates that they, as a group, are insulated from the potentially adverse effects of over-exposure to less effective teachers and under-exposure to rigorous coursework. This may suggest that there is a “tipping point” at which the accumulated disadvantage within a racial group exceeds that group’s ability to overcome educational barriers. It may also indicate that the economic success and attendant social capital attained by Asian Americans as a social group increases their resilience to educational obstacles.
It is also likely that different student groups of color encounter the educational system in different ways. While research and theory have firmly rejected the notion that all Asian children are smarter, harder-working, more docile, and more compliant (Museus & Iftikar, 2013; Teranishi, Nguyen, & Alcantar, 2016), this does not preclude the possibility that this “model minority” mythology continues to leak into the policies and practices of educational actors in our schools. Finally, while all groups of color have experienced state-sanctioned discrimination, exclusion, and violence in the American education system and beyond, the degree to which present and historical racism is infused in public education is likely different across groups. An important step in disentangling the various contributors to Asians’ educational experiences would be to collect disaggregated data within the Asian demographic category to help illuminate the differences among and between the approximately 15 different ethnic Asian subgroups (Chinese, Hmong, Korean, Sri Lankan, Thai, Vietnamese, etc.).
Our results for Black students represent a related exception. The pernicious history of slavery and violence against Black families throughout American history is well-documented (Anderson, 1988; Span, 2015) as is a legacy of negative stereotypes and racism against Black children in the public education system (Ladson-Billings & Tate, 1995; Staats, 2015). Our analyses reiterate these trends. Our results show that within many of the access and opportunity metrics where Black students are disadvantaged compared to other student groups, they tend to have among the highest disparities of any student group. Black students have among the highest exposure to judgmental and exclusionary exceptional children (EC) designations, the largest degree of under-selection for academically and intellectually gifted (AIG) programs, the largest disparities in in-school and out-of-school suspension, and are the most likely to be suspended for subjective offenses. Given the unique history of discrimination against Black students, we draw attention to the substantial degree of subjectivity, discretion, and interpretation on the part of educational actors and school authorities in determining things like EC status, AIG status, punishment for (mis)behavior, and the meaning of subjective disciplinary offenses like disobedience, defiance, and insubordination. These determinations are in large part out of the control of Black students, as are many of the other indicators where they are disadvantaged, such as access to rigorous honors and Advanced Placement courses and numerous measures of access to effective teachers. This provides important context for our finding that Black students consistently have the lowest achievement results on EOG and EOC scores.
However, there are several indicators in our analysis where students and families do exercise a substantial degree of control, specifically attendance (chronic absenteeism) dropout/graduation, and postsecondary intentions. For all of three of these indicators, Black students have similar or better results than Whites and several other racial groups after controlling for factors like gender, socioeconomic status, language status, (dis)ability status, giftedness, and suspension. It is noteworthy that before controlling for those other factors, Black students compare poorly with Whites on all three metrics. This suggests that where Black students and families can exercise control over educational outcomes (attendance, dropout, college intentions), they demonstrate a strong commitment to success in school. However, their achievement outcomes appear to be constrained by disadvantages in access and opportunity, many of which are out of their control and vulnerable to the influence of racial prejudice and discrimination. Of course, this recognition has powerful implications for the experiences of Black students in North Carolina public schools, but we also highlight how it challenges the racialized discourse in education that often suggests that Black students/families, and other non-Asian students/families of color, are somehow less committed to success in school (Anderson, 1988; Jones, 2012). Indeed, our analysis strongly suggests the opposite, and that given equitable access and opportunity, Black students would likely make dramatic gains in achievement and attainment outcomes.
The results presented in this report provide some of the first empirical evidence of systemic racial inequity for previously unexamined or underexamined groups. As mentioned above, the small number of Pacific Islanders in North Carolina public schools led to non-significant results in roughly half of the indicators examined, making it difficult to fully assess their overall level of comparative advantage/disadvantage. However, among the significant indicators, all but one (suspension for subjective offenses) indicated disadvantage compared to White students. Given that trend, and the limitations of the data, it is likely that our analysis underestimates the areas of disadvantage for Pacific Islanders. Furthermore, the state of North Carolina does not collect data on Pacific Islander teachers. This leaves a substantial gap in our understanding of the educational experiences of these youth.
Our results show that American Indian students have among the highest degree of cumulative disadvantage of any group. Across the 44 indicators in Table 1, American Indians are disadvantaged in 38, including every indicator of academic achievement. They have comparative advantage in only 2 indicators (exceptional children designation and highly qualified teachers) and are similar to Whites in 4 indicators (in-school suspension, suspension for subjective offenses, dropout, and highly qualified teachers). American Indian students are the least likely to aspire to college, take the fewest honors and AP courses, and have the highest levels of chronic absenteeism. Their levels of out-of-school suspension are approximately double the rate of White students, and American Indians are among the least likely to take courses with ethnically matched teachers.
Taken together, our analysis of American Indians suggest that they may lack much of the structural support necessary for equitable levels of college and career readiness. As with other groups, attendance problems and over-selection for discipline likely diminish the achievement results of American Indian students. These disadvantages combined with decreased access to honors and Advanced Placement courses and few same-race teachers provide important insight into why so few American Indian students plan to attend college, despite their comparatively low high school dropout rate.
Multiracial students represent another underexamined student group. Given the complexity of their racial background, they are also perhaps the least understood of any student group of color, despite the fact that they make up a larger proportion of the student population than American Indians, Asians, and Pacific Islanders. The minimal research that has been devoted to Multiracial students has suggested that they are among the most vulnerable to accumulated disadvantage in educational settings (Triplett, 2018). Our analysis supports this conclusion.
Multiracial students are the only group in our analysis that is disadvantaged on every significant indicator of access, opportunity, and outcomes. Perhaps owing to the complexity of their racial identity, multiracial students do not represent the most acute levels of disadvantage in any single indicator. However, they do have among the highest levels of suspension, particularly suspension for subjective infractions. As is the case for Pacific Islanders, North Carolina does not collect data on teachers that identify as Multiracial, making it difficult to fully assess their exposure to effective instruction.
Although they appear to differ according to specific indicators, our analysis finds that Hispanic students in North Carolina public schools also have substantial accumulated disadvantage. Of the 44 metrics assessed, Hispanics are disadvantaged (vs. Whites) on 38, advantaged on four, and similarly situated on only two indicators. Three of the four metrics on which they have comparative advantage are related to school discipline (in school suspension, out-of-school suspension, and suspension for subjective offenses). This supports previous literature in suggesting that Hispanic students as a group experience school discipline in a less racialized manner than other non-Asian student groups of color (Gordon, Piana, & Keleher, 2000; Triplett, 2018). The final indicator with comparative advantage for Hispanic students (vs. Whites) is chronic absenteeism. This is not a surprising result considering that our analysis shows that suspension is such a powerful predictor of chronic absenteeism, even when we factor out absences due to out-of-school suspension. In other words, Hispanics’ comparative advantage (vs. Whites) on chronic absenteeism may be in large part due to their relatively low rates of suspension and the heightened levels of absenteeism among Whites.
Hispanics represent the group with the most acute comparative disadvantage on several indicators, including dropout, lack of same-race teachers, and judgmental exceptional children (EC) designations. The dropout rate among Hispanics is substantially higher than any other racial group in our analysis. While the data did not allow us to empirically test the relationship, it is perhaps not a coincidence that Hispanic youth drop out at such high rates in the absence of same-race role models in schools, particularly given the documented pressure that many Hispanic youth feel to pursue employment after high school (see Dropout).
Hispanics’ results on EC designations are also noteworthy. They do not have a particularly high likelihood of being designated EC in comparison to their proportion of the student population, but Hispanic results on EC demonstrate a unique pattern. First, there is a dramatic drop in the number of Hispanic EC designations when we use race alone as a predictor as opposed to when we control for other factors (i.e. gender, socioeconomic status, language status, (dis)ability status, giftedness). Secondly, as mentioned, Hispanics have the highest levels of judgmental exceptional children (EC) designations, which include the developmentally delayed, behaviorally/emotionally disabled, intellectual disability, and learning disabled designations. This may suggest that language status is inappropriately contributing to learning disabled EC designations for Hispanic students. While further study is required, if language status is contributing to EC in this way, it may indicate that school staff lack the resources needed to provide non-Native English speakers with the additional educational support they require and/or that school staff harbor biases that cause them to conflate lack of facility in English with learning disabilities.
While they serve as the comparison group in most of our analyses, our results still indicate comparative levels of (dis)advantage for White youth. With a single exception (Dropout/Graduation), Whites have a clear pattern of results on the indicators related to educational outcomes (EOC scores, EOG scores, GPA, ACT, SAT, and WorkKeys), such that they underperform compared to Asians but outperform all other student groups of color. White youth also tend to accumulate advantage with access and opportunity indicators. On 10 of the 21 indicators related to access, Whites are advantaged or similarly situated to all other groups. Results are mixed for the remaining 11 access indicators. The indicators where Whites compare most poorly to student groups of color are in-school suspension (ISS), chronic absenteeism, and dropout. While virtually all previous research has found that Whites are under-selected for discipline compared to non-Asian student groups of color, studies have also shown that all racial groups have similar misbehavior rates and that Whites tend to be punished less harshly than students of color for similar offenses (Finn, Fish, & Scott, 2008; U.S. Department of Justice & U.S. Department of Education, 2014). Therefore, our results for Whites on ISS may reflect a scenario where less punitive forms of discipline (in-school vs. out-of-school suspension) are rationed for Whites and more punitive forms of discipline are reserved for students of color despite similar rates and types of misbehavior (Welch & Payne, 2010).
More straightforward interpretations appear to apply to Whites and chronic absenteeism and dropout. Whites are over-represented statewide in chronic absenteeism compared to their proportion of the student population, but not in dropout. In our regression models, Whites tend to have much lower odds of both chronic absenteeism and dropout when race alone is used as a predictor. However, when we predict the odds of chronic absenteeism and dropout while controlling for other factors (Gender, Free/Reduced Lunch Eligibility, Language Status, Special Education Status, Giftedness, and Suspension), Whites compare to most student groups of color less favorably. This indicates that compared to similarly situated students of color, Whites exhibit concerning patterns of attendance and persistence to high school graduation. It is likely that attendance problems contribute to dropping out of high school for Whites (and other groups). Our examination of the reasons for dropout provide additional context for interpreting these results.
In addition to attendance, White students were more likely to cite substance abuse, health problems, lack of engagement with school/peers, unstable home environments, and psychological/emotional problems as reasons for dropout. These reasons were relatively unique for Whites, as other groups tended to cite discipline, child care, and the choice of work over school. Overall, our results may suggest that schools lack the structural supports needed to address the unique social, emotional, and psychological needs of many White students vulnerable to disengagement from school.
Summary and Findings, Notable Challenges, and Future Directions
Given the sum of our findings, the state of racial equity in North Carolina public schools should be a point of critical concern and sustained action for all stakeholders in education. Our core conclusion, that systemic barriers to access and opportunity feed educational disadvantage among student groups of color in our state, is a betrayal of the promise of public education. The urgency of fully understanding the matter at hand is further increased by the recognition that those responsible for educational policy and practice in North Carolina do not appear to regularly conduct comprehensive, action-oriented analyses of the state of racial equity intended to produce reform.
Two broad challenges follow from the results of this report. First, all student groups of color have inequitable access to the kinds of rigorous coursework and effective teachers necessary to ensure college and career readiness for all students. The challenges associated with rigorous coursework and effective teachers will require state-level, systematic intervention both because the relevant legal and statutory regulations are enacted on the state level and because equitable access requires policy reform that encompasses the substantial racial, cultural, geographic, and socio-political diversity of our state. Exposure to inequitable forms of school discipline represents a second major challenge. While there is considerable variation reflected in the disciplinary experiences of different student groups, we view discipline reform as a pressing challenge because of the powerful influence that over-exposure to suspension appears to have on critical outcomes such as attendance and dropout, and because the racialized patterns of discipline in North Carolina raise fundamental legal and human rights issues that reach far beyond the field of education.
Group-specific challenges flow from our analysis as well. Asian students reflect the same lack of access to rigorous coursework and effective teachers as other student groups of color. Data indicating that they are the highest achieving group makes them no less deserving of the conditions and resources necessary to reach their full educational potential. The pattern of results for Black students suggests that persistent prejudice and racism is still a key constraint on their educational success, especially in the areas of school discipline, exclusionary and judgmental exceptional children designations, and academically/intellectually gifted designations. It is important to reiterate the implied role that racial subjectivities (beliefs, opinions, biases, ideologies, etc.) of school authorities presumably play in these areas. We also call attention to the contribution our analysis can make to honoring the struggle and reinforcing the commitment of Black students and families to public education.
While Black students appear to have the largest magnitude of disadvantage on many indicators of access, American Indian and Pacific Islander students are disadvantaged across a higher proportion of metrics. While often of a different magnitude, the patterns of disadvantage for American Indian and Pacific Islander students suggests that they face many of the same barriers as Black students related to racial subjectivities. An overall lack of empirical research, and the educational community’s understandable and necessary focus on Black – White inequity, have likely contributed to a lack of clarity about how race influences the educational experiences of American Indian and Pacific Islander youth.
Multiracial students represent an even more extreme example of this. While they are perhaps the least studied and the least understood, they are disadvantaged on the widest collection of access metrics, and thus likely have among the highest cumulative disadvantage of any student group in the state. It is truly astonishing that the fourth largest student racial group has been relegated to little more than an afterthought in the discourse and policymaking in North Carolina.
While Hispanics as a group do not have the highest levels of cumulative disadvantage, our analysis reveals their unique pattern of disadvantage and the attendant challenges that they face. High dropout rates and a dramatic lack of Hispanic educators calls our attention to the relationship between the state’s commitment to a diverse and representative teaching corp and the educational success of its increasingly diverse students.
White students as a group tend to have the least amount of disadvantage across indicators of access and opportunity. With the exception of Asians, Whites also outperform students of color on virtually all indicators of academic achievement. This suggests that in general, White students likely have the benefit of structural supports that lead to educational success. However, our analysis related to dropout and attendance (chronic absenteeism) indicate that North Carolina schools may need additional resources and support in order to address the unique family, social, and psychological circumstances of White students and their communities.
The process of conducting an analysis across so many indicators and racial groups in the state has given us some insights into issues related to data quality. First, taking steps to collect and analyze data within racial groups would contribute to our empirical understanding of patterns of racial (in)equity. Specifically, further disaggregating race data within the Asian and Hispanic racial groups to include racial/cultural subgroups and country of origin for recent immigrants may allow research to parse the unique patterns of educational (dis)advantage for these groups. Doing so may help illuminate questions like: Why do Asians have such achievement success despite numerous structural disadvantages in access and opportunity? Why are there so few Hispanic teachers? Why do so many Hispanic youth leave high school despite relatively high aspirations to attend college? Answering these kinds of questions would increase understanding of the Asian and Hispanic experience but is also likely to bear on the educational journey of other student groups of color.
Our analysis also hints at a need for data that further encapsulates the geographic and regional diversity of the state, particularly in relation to White students. This kind of data could, for instance, help research better delineate between the experiences of rural, poorer White youth and their presumably wealthier urban and suburban counterparts.
There is also a clear need to collect data on teachers that identify as Pacific Islander and Multiracial. This is likely a simple matter of changing the options on a survey item. The lack of data on Pacific Islander and Multiracial teachers and administrators leaves a gap in our understanding of a critical predictor of educational success. In addition, state data on teacher qualifications includes a substantial proportion of teachers (~18%) with “unknown” qualifications. This makes it unclear whether any analysis of the relationship between teacher traits and student success (such as the EVAAS system) are valid. Unknown teacher qualifications take on additional salience today given policy discussions and proposals around such value-added measures.
Beyond the specific challenges discussed above, we believe the results of this report make it clear that the agencies and institutions responsible for fulfilling the mandate of public education laid out in the North Carolina Constitution and statutory law must demonstrate greater commitment to sustained attention, ongoing comprehensive assessment, and data-driven reforms to improve the state of racial equity in North Carolina public schools.
While policymaking bodies are ultimately responsible for the provision of a sound basic education and monitoring the performance of student groups in North Carolina, we contend it is necessary for a third non-governmental entity to take the lead by maintaining an intentional focus on race. Fortunately, racial equity has received increased attention as many stakeholder groups have adopted appropriate lenses when discussing the educational experiences of students. Racialized opportunity gaps require more intense scrutiny and action on the part of policy organizations and think-tanks. Now more than ever there is a need for an organization with the express purpose of measuring and responding to inequities in education across lines of race, not as a peripheral venture, but as a core strategy.
To that end, the parent organization that produced this report, the Center for Racial Equity in Education (CREED), was created. CREED is committed to centering the experience of people of color in North Carolina as it transforms the education system for the betterment of all students. Taking a multi-pronged and purposefully multi-racial approach, CREED has three main branches of activity: Research, Engagement, and Implementation. Through research, coalition building, and technical assistance, CREED works to close opportunity gaps for all children in P-20 education, especially children of color, with the vision that one day race will no longer be a substantial predictor of educational outcomes.
To advance this mission, CREED conducts evidence-based research (the first of which are E(race)ing Inequities and Deep Rooted). Through partnerships with historians, researchers, and policy experts, we produce scholarship that allows for deeper and richer understanding of the issues facing students of color in North Carolina. In addition, CREED builds coalitions of school leaders, educators, parents, policymakers, and community members who have a shared agenda of creating equitable school systems. Through programming, communication and grassroots-organizing strategies, CREED is intent on shifting the atmosphere by providing the education and experiences needed to inform action in meaningful ways. Lastly, we support schools and educators with technical assistance and training designed to improve educational outcomes for students of color. As much as reports such as this one are instrumental in providing foundational knowledge about the myriad ways race influences our school system, direct service and professional development with practitioners is necessary for it to translate into sustainable change. CREED is committed to providing the sort of training and consultation that is often found wanting when engaging in issues racial equity.
In summary, our greatest contribution with respect to the findings of this report is to build an organization suited to respond to what we see. As things stand in North Carolina, no such entity exists that explicitly focuses on race, with interventions spanning the entire research-to-practice continuum. We hope this report may come to represent a watershed moment and believe organizations like CREED are best suited to take up the challenge of enacting racial equity in North Carolina public schools.
Anderson, J. D. (1988). The education of Blacks in the South, 1860-1935. Chapel Hill, NC: University of of North Carolina Press.
Danico & J. G. Golson (Eds.), Asian American Students in Higher Education (pp. 18-29). New York: Routledge.
Finn, J. D., Fish, R. M., & Scott, L. A. (2008). Educational sequelae of high school misbehavior. The Journal of Educational Research, 101(5), 259-274.
Gordon, R., Piana, L. D., & Keleher, T. (2000). Facing the consequences: An examination of racial discrimination in U.S. public schools. Oakland, CA: Applied Research Center.
Jones, B. (2012). The struggle for Black education. In Bale, J., & Knopp, S. (Eds.). Education and capitalism: Struggles for learning and liberation. Chicago, IL: Haymarket Books.
Ladson-Billings, G. (1995). Toward a theory of culturally relevant pedagogy. American Educational Research Journal, 32(3), 465-491.
Museus, S. D. & Iftikar, J. (2013). An Asian critical race theory (AsianCrit) framework. In M. Y.
Span, C. M. (2015). Post-Slavery? Post-Segregation? Post-Racial? A History of the Impact of Slavery, Segregation, and Racism on the Education of African Americans. Teachers College Record, 117(14), 53-74.
Staats, C. (2016). Understanding Implicit Bias: What Educators Should Know. American Educator, 39(4), 29-43.
Teranishi, R. T., Nguyen, B. M. D., & Alcantar, C. M. (2016). The data quality movement for the Asian American and Pacific Islander community: An unresolved civil rights issue. In P. A. Noguera, J. C. Pierce, & R. Ahram (Eds.), Race, equity, and education: Sixty years from Brown (pp. 139-154). New York, NY: Springer.
Triplett, N. P. (2018). Does the Proportion of White Students Predict Discipline Disparities? A National, School-Level Analysis of Six Racial/Ethnic Student Groups (Doctoral dissertation, The University of North Carolina at Charlotte).
U.S. Department of Justice & U.S. Department of Education. (2014, January). Notice of language assistance: Dear colleague letter on the nondiscriminatory administration of school discipline. Retrieved from http://www2.ed.gov/about/offices/list/ocr/letters/colleague-201401-title-vi.html
Welch, K. & Payne, A. A. (2010). Racial threat and punitive school discipline. Social Problems, 57(1), 25-48.
Editor’s note: James Ford is on contract with the N.C. Center for Public Policy Research from 2017-2020 while he leads this statewide study of equity in our schools. Center staff is supporting Ford’s leadership of the study, conducted an independent verification of the data, and edited the reports.E(race)ing Inequities | <urn:uuid:07c8ba89-37c7-45bc-a17c-0cd2eebe2df5> | CC-MAIN-2019-47 | https://www.ednc.org/eraceing-inequities-racial-equity-in-north-carolinas-schools-a-story-of-accumulated-disadvantage/ | s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496670948.64/warc/CC-MAIN-20191121180800-20191121204800-00059.warc.gz | en | 0.944022 | 7,060 | 2.796875 | 3 |
- Research article
- Open Access
- Open Peer Review
Family time, parental behaviour model and the initiation of smoking and alcohol use by ten-year-old children: an epidemiological study in Kaunas, Lithuania
BMC Public Health volume 6, Article number: 287 (2006)
Family is considered to be the first and the most important bond for child development and socialization. Nevertheless, the importance of parental behaviour models for children, and the influence of the amount of family time devoted to shared activities on the development of children's health-related behaviour, have not yet been thoroughly examined. The aim of this paper is to describe the extent to which health-hazardous behaviour is modelled within families and how much time families spend on joint activities, and to examine the importance of time spent on joint family activities for the initiation of smoking and alcohol use among children.
This research was carried out in Kaunas, Lithuania, during the 2004–2005 school year. The research population consisted of 369 fifth-grade schoolchildren (211 (57.2%) boys and 158 (42.8%) girls) and 565 parents: 323 (57.2%) mothers and 242 (42.8%) fathers. The response rate was 80.7% for children, and 96.1% and 90.6% for mothers and fathers respectively.
Eating a meal together was the most frequent joint family activity, whereas visiting friends or relatives together, going for a walk, or playing sports were the least frequent. More than four fifths (81.5%) of parents (248 (77.0%) mothers and 207 (85.9%) fathers; p < 0.05) reported attending alcohol-furnished parties at least once a month. About half of the surveyed fathers (50.6%) and one fifth of the mothers (19.9%) (p < 0.001) were smokers. Boys reported having tried smoking more frequently than girls (23.0% and 6.6% respectively; p < 0.001), as well as alcohol (40.1% and 31.1% respectively; p < 0.05). Child alcohol use was associated both with paternal alcohol use and with the time spent in joint family activities. For instance, boys were more prone to try alcohol if their fathers frequented alcohol-furnished parties, whereas girls were more prone to try alcohol if family members spent less time together.
A deficit of time for joint family activities, together with frequent parental examples of smoking and alcohol use, underlies to some extent the development of smoking and alcohol addictions in children. These issues should be widely addressed in comprehensive family health education programmes.
Family is considered to be the first and the most important bond for child development and socialization. Harmonious child-parent interaction, open communication, and parental support are the factors that underlie a child's successful mental and physical health development. Family time devoted to shared activities is a precondition for successful communication within the family [2, 3].
Nowadays families, like society itself, are affected by rapid changes. The ever-increasing rush brought on by the information age affects the previously settled interaction of family members and alters the communication patterns between parents and children. The margin between work and leisure is notably disappearing, and an ever-increasing number of people are never completely free from work. This seems to have a direct influence upon both the quality and the quantity of family time. Some research data demonstrate that family members who engage more frequently in joint family activities report greater marital satisfaction and greater family stability [5, 6]. When families get involved in joint activities, their children do better at school and grow up to become more successful adults. Several studies have demonstrated a substantial and consistent relationship between family time and habits of smoking and drinking, drug abuse, sexual intercourse experience, and delinquent behaviour in children [8–10].
Parents are the primary and most important models for children's behavioural development. The quality of this primary social bond underlies the child's future well-being, problem behaviour, and emotional disturbances [11, 12]. Nevertheless, the importance of parental behaviour models for children, and the influence of the amount of family time devoted to shared activities on the development of children's health-related behaviour, have not yet been thoroughly examined. The aim of this paper is to describe the opportunities for modelling health-hazardous behaviour within families and the amount of time spent on joint family activities, and to examine the importance of time spent on joint family activities for the initiation of smoking and alcohol use among children.
This cross-sectional research constitutes the fourth stage of a long-term epidemiological study launched in 1999. The study is a follow-up investigation of the lifestyle and health-related behaviour of young people, from the age of six until the end of their adolescence. The study intervals coincide with the main periods of a child's social life, that is, kindergarten, primary school, and secondary school. The original sample of 577 six-year-olds was randomly selected at 12 kindergartens in Kaunas, Lithuania. Each stage of the child's social life implies a focus on different aspects of child health and social well-being. The current research findings were collected during 2004–2005, when the children under investigation were entering adolescence and starting to attend secondary schools.
Participants and study procedures
The research was coordinated by the Laboratory for Social Paediatrics at the Institute for Biomedical Research, Kaunas University of Medicine. Ethical clearance for the research, as well as support and all relevant information, was obtained from the Kaunas Regional Bioethical Committee of Biomedical Research, the Department of Education at Kaunas municipality, and the administrations of the schools. Written parental consent to take part in the investigation was also obtained.
The research was performed at 41 schools in Kaunas, Lithuania. The investigation material consisted of structured questionnaires filled in by 369 fifth-grade schoolchildren aged 10 (211 (57.2%) boys and 158 (42.8%) girls; response rate 80.7%), as well as 565 parents (323 (57.2%) mothers (response rate 96.1%) and 242 (42.8%) fathers (response rate 90.6%)). During the research period, 267 fathers and 336 mothers lived together with the investigated sample of 369 children. The response rates for fathers and mothers were calculated separately according to the distribution of parents of each sex in the families.
With the aim of obtaining authentic research data, the survey instruments were kept confidential. Repetition of identically structured questions in the questionnaires for parents and children allowed parent and child responses to be related. In order to assess the prevalence of substance use among family members, schoolchildren were asked 5 questions, and parents (fathers and mothers) 3 questions, regarding smoking and alcohol consumption [See Additional file 1, Questions]. The schoolchildren's answers allowed us to determine the age of onset of tobacco smoking and alcohol consumption, as well as smoking frequency. Fathers and mothers were asked about their smoking frequency, as well as about their participation in situations involving alcohol use. Daily smoking was the indicator for selecting the regular-smoker group. The frequency of familial alcohol consumption was assessed on the grounds of parental responses to the question: "How often do you participate in alcohol-furnished parties, where you do consume alcohol, even though the least amount of it?" Six alternative response options were suggested: 1) 'almost every day'; 2) 'about 2–3 times a week'; 3) 'about once a week'; 4) 'about once a month'; 5) 'about once a year'; 6) 'never'. The respondents were subdivided into three groups according to the frequency of alcohol consumption:
Group 1 comprised respondents who reported attending alcohol-furnished parties at least once a week, i.e. those who chose the responses 'almost every day', 'about 2–3 times a week', or 'about once a week'.

Group 2 comprised respondents who reported attending alcohol-furnished parties about once a month, i.e. those who chose the response 'about once a month'.

Group 3 comprised respondents who reported seldom participating in alcohol-furnished parties, i.e. those who chose the responses 'about once a year' or 'never'.
The current research employed the same family time evaluation methodology as the WHO Cross-National Study of Health Behaviour in School-aged Children (HBSC). The family time indicator was calculated from the frequency of collectively spent family time on joint activities. Children were asked to answer the following question: "How often do you and your family together do each of these things?" The suggested activities were as follows: a) 'watch TV or a video'; b) 'play indoor games'; c) 'have a meal'; d) 'go for a walk'; e) 'go places'; f) 'visit friends or relatives'; g) 'play sports'; h) 'sit and talk about things'. Each response was assigned a value from 1 (least family time) to 5 (most family time) according to the amount of family time allocated: 'every day' = 5, 'most days' = 4, 'about once a week' = 3, 'less often' = 2, 'never' = 1.
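For readers who want to see how this scoring step works in practice, a minimal sketch is given below. The original analysis was performed in SPSS; the use of Python/pandas and the column names here are illustrative assumptions, not part of the study.

```python
import pandas as pd

# Hypothetical raw answers to "How often do you and your family together do each of these things?"
# Column names are illustrative only; the study itself was processed in SPSS.
activities = ["watch_tv", "play_indoors", "have_meal", "go_for_walk",
              "go_places", "visit_friends", "play_sports", "sit_and_talk"]

score_map = {"every day": 5, "most days": 4, "about once a week": 3,
             "less often": 2, "never": 1}

def score_family_time(answers: pd.DataFrame) -> pd.DataFrame:
    """Recode the verbal response options into 1 (least) .. 5 (most) family-time scores."""
    return answers[activities].apply(lambda col: col.map(score_map))

# Example with two hypothetical children
raw = pd.DataFrame([
    {"watch_tv": "every day", "play_indoors": "less often", "have_meal": "every day",
     "go_for_walk": "never", "go_places": "about once a week", "visit_friends": "less often",
     "play_sports": "never", "sit_and_talk": "most days"},
    {"watch_tv": "most days", "play_indoors": "about once a week", "have_meal": "every day",
     "go_for_walk": "less often", "go_places": "less often", "visit_friends": "about once a week",
     "play_sports": "less often", "sit_and_talk": "every day"},
])
print(score_family_time(raw))
```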
The Statistical Package for the Social Sciences (SPSS) for Windows (version 13.0) was used to conduct the data analysis. The statistical relationship between qualitative variables was assessed by the Chi-square test, with a significance level of 0.05, and by odds ratios with 95% confidence intervals (CI). Mean values and standard deviations of the family time indications were calculated. The higher the mean value, the higher the probability that a given activity was performed together with the family.
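As an illustration of the two statistics used throughout the results (the Chi-square test and the odds ratio with a 95% CI), the sketch below computes both from a hypothetical 2 × 2 table. The counts are invented for demonstration only; the study itself used SPSS.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical 2x2 table: rows = exposure (e.g. 'poor' vs 'good' family time),
# columns = outcome (tried smoking: yes / no). Counts are invented for illustration.
table = np.array([[40, 120],   # exposed:   a, b
                  [15, 130]])  # unexposed: c, d

chi2, p, dof, expected = chi2_contingency(table, correction=False)

a, b = table[0]
c, d = table[1]
odds_ratio = (a * d) / (b * c)

# Woolf (log) method for the 95% confidence interval of the OR
se_log_or = np.sqrt(1/a + 1/b + 1/c + 1/d)
ci_low = np.exp(np.log(odds_ratio) - 1.96 * se_log_or)
ci_high = np.exp(np.log(odds_ratio) + 1.96 * se_log_or)

print(f"chi2 = {chi2:.2f}, p = {p:.3f}")
print(f"OR = {odds_ratio:.2f}, 95% CI {ci_low:.2f} - {ci_high:.2f}")
```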
In order to assess family time overall, the eight joint-family-activity indications were combined into one derived variable, labelled the "Family Time Index" (FTI). Factor analysis was employed to calculate the index; it made it possible to estimate the weight of each item in the linear combination of items and proved to be a more rigorous method than a simple sum of the family time scores. The values of the FTI were distributed within the range of -3.17 to 2.72. Based on the FTI values, family time was divided into two groups. Positive FTI values were typical of families whose members reported more commonly spent time (indicated as 'good family time'); conversely, negative FTI values indicated families that tended to spend less time together (indicated as 'poor family time'). The research findings revealed 41.6% of families with a 'good family time' indication, while 58.4% of families were indicated as having 'poor family time'.
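The exact factor-analysis settings used to derive the FTI are not reported, so the following sketch should be read only as one plausible reconstruction: a single-factor score is extracted from the eight standardized items and dichotomized at zero to mirror the 'good'/'poor' split. The choice of scikit-learn's FactorAnalysis, the simulated scores, and the sign convention are all assumptions.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.preprocessing import StandardScaler

def family_time_index(scores: np.ndarray) -> np.ndarray:
    """Single-factor score over the eight 1-5 family-time items (rows = children)."""
    z = StandardScaler().fit_transform(scores)
    fa = FactorAnalysis(n_components=1, random_state=0)
    # Note: the sign of a factor score is arbitrary; in practice it should be
    # oriented so that higher values correspond to more shared family time.
    return fa.fit_transform(z).ravel()

# Hypothetical matrix: 6 children x 8 activities, each scored 1-5
rng = np.random.default_rng(0)
scores = rng.integers(1, 6, size=(6, 8))

fti = family_time_index(scores)
good_family_time = fti > 0   # positive index -> 'good family time', negative -> 'poor'
print(np.round(fti, 2), good_family_time)
```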
A logistic regression analysis was computed with the aim of ascertaining the risk of children initiating smoking and alcohol use in relation to a set of predictors, including family time.
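A sketch of the kind of model described here (a child's smoking attempt regressed on family time and parental smoking) is shown below using statsmodels; the dataset and variable names are hypothetical and do not correspond to the study data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical analytic dataset; variable names are illustrative only.
rng = np.random.default_rng(1)
n = 300
df = pd.DataFrame({
    "tried_smoking": rng.integers(0, 2, n),       # outcome: 1 = has tried smoking
    "poor_family_time": rng.integers(0, 2, n),    # 1 = 'poor family time' (negative FTI)
    "mother_smokes": rng.integers(0, 2, n),
    "father_smokes": rng.integers(0, 2, n),
})

model = smf.logit(
    "tried_smoking ~ poor_family_time + mother_smokes + father_smokes", data=df
).fit(disp=False)

# Odds ratios with 95% confidence intervals for each predictor
or_table = np.exp(pd.concat([model.params, model.conf_int()], axis=1))
or_table.columns = ["OR", "2.5%", "97.5%"]
print(or_table)
```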
Smoking and alcohol use
The assessment of parental smoking habits revealed 11.5% of families with both parents being smokers, 43.8% of families with one parent being a smoker, and 44.7% of families with no smoking parents. Fathers were smokers more frequently than mothers (50.6% and 19.9% respectively; p < 0.001). Among smokers, 89.0% of fathers and 63.2% of mothers smoked regularly.
The frequency of familial alcohol consumption was assessed on the grounds of parental responses to the question: "How often do you participate in alcohol-furnished parties, where you do consume alcohol, even though the least amount of it?" 77.0% of mothers and 85.9% of fathers (p = 0.008) reported participating in these kinds of parties at least once a month.
The research revealed 55 (15.9%) children who had attempted smoking (more frequently boys than girls; 23.0% and 6.6% respectively; p < 0.001). The mean age of smoking onset was 8.59 ± 0.29 years. The findings revealed 1.2% of schoolchildren to be smokers, all of them boys.
A group of 123 (36.2%) children reported having tried alcohol. The percentage of respondents admitting attempts at alcohol consumption was higher among boys than among girls (40.1% and 31.1% respectively; p = 0.086). The mean age of first attempted alcohol consumption was 8.80 ± 0.17 years. Twenty (5.9%) respondents (7.0% of boys, 4.7% of girls; p = 0.392) reported having experienced being drunk.
The statistical analysis indicated a relationship between mothers' and sons' smoking habits: 19.8% of sons of non-smoking mothers and 38.2% of sons of smoking mothers had tried smoking themselves (Table 1). The odds ratio (OR) calculation revealed that sons of smoking mothers were 2.5 times more likely to have tried smoking than sons of non-smoking mothers. However, fathers' smoking seemed to have no effect on children's smoking attempts.
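For clarity, the reported OR of 2.5 can be recovered directly from the two proportions quoted above (38.2% vs. 19.8% of sons who had tried smoking):

```python
p_smoking_mothers, p_nonsmoking_mothers = 0.382, 0.198  # sons who tried smoking

odds_ratio = (p_smoking_mothers / (1 - p_smoking_mothers)) / (
    p_nonsmoking_mothers / (1 - p_nonsmoking_mothers))
print(round(odds_ratio, 2))  # -> 2.5
```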
The frequency of parental attendance at alcohol-furnished parties was related to the onset of alcohol use among boys (Table 2).
The frequency of first alcohol use experience increased with the reported frequency of parental alcohol use at alcohol-furnished parties. Alcohol use attempts were reported by 8.3% of boys from families where both parents were reluctant to attend alcohol-furnished parties, by 31.6% of boys from families where at least one of the parents tended to attend such parties (compared with the first group, OR = 5.08; p = 0.160), and by 44.6% of boys from families where both parents were prone to attend such parties (OR = 8.84; p = 0.041). Nevertheless, this pattern could not be established in the group of girls.
The findings revealed that eating a meal together was the most frequent joint family activity, whereas visiting friends or relatives together, going for a walk, or playing sports were the least frequent joint family activities (Table 3).
Eating a meal with parents was a more frequent joint family activity among girls (p = 0.014), while playing sports with parents was more frequent among boys (p < 0.001). No gender differences were observed in the other shared activities.
In order to assess the correlation between the joint family activity items, the correlation matrix was examined. The highest correlation values were revealed among the indications 'going places', 'going for a walk', and 'playing at home'. The indication 'watching TV or a video' was the least associated with the other shared-activity indications.
Family time, smoking and alcohol use
The family time indication was found to be strongly associated with both smoking and alcohol consumption by children as well as by parents; however, the nature of the relationship remains ambiguous. The prevalence of the 'poor family time' indication was highest in the group of children with smoking attempts, suggesting that boys and girls who attempt smoking are less involved in shared family activities. Correspondingly, children indicating less collectively spent family time were more prone to try smoking (Fig. 1).
In comparison with children who spent more time with their parents (indicated as 'good family time'), boys who spent less time with their parents (indicated as 'poor family time') were 2.15 times more inclined to start smoking (95% CI 1.09 – 4.27). Girls who spent less time with their parents were 4.48 times more inclined to start smoking than girls who spent more time with their parents (95% CI 0.92 – 21.82). An analogous, although weaker, association was detected among children with attempts to try alcohol (Fig. 2).
A strong relationship was determined between collectively spent family time and parental smoking and alcohol use habits. For example, mothers from families where 'good family time' was indicated reported smoking less frequently than mothers from families where 'poor family time' was indicated (15.0% and 23.7% respectively; p = 0.049).
The findings revealed that the formation of children's smoking and alcohol use habits is closely related to the frequency of parental smoking and alcohol use examples, together with less collectively spent family time. It was therefore necessary to determine which intra-family factors had the greatest impact on children's initiation of smoking and alcohol use. A logistic regression analysis allowed the relationship between the following factors and the formation of children's smoking and alcohol use habits to be measured under the simultaneous impact of the above-mentioned factors (Table 4).
The analysis revealed that the onset of smoking among boys was significantly dependent upon the amount of collectively spent family time. In comparison with boys from families with an indicated 'good family time', boys from families with an indicated 'poor family time' were 3.03 times more inclined to try smoking. According to the analysed data, parental smoking as such had an insignificant impact. (Owing to the small number of girls who had tried smoking, the analogous analysis was not applied to the group of girls.)
Fathers' attendance at alcohol-furnished parties had a substantial impact on boys' initiation of alcohol use. This indicator was statistically significant for all alcohol consumption periodicities, from 'once a week or more often' to 'only once a month'. The amount of collectively spent family time had no impact on boys' alcohol initiation; however, it influenced girls' alcohol initiation. After applying the one-level and multilevel analyses, daughters of mothers who attended alcohol-furnished parties once a month were shown to be less prone to try alcohol (Table 4). The odds ratio for this indicator was less than 1 (OR = 0.19; p = 0.031).
The research highlights the familiar notion that the origins of a child's psychological resilience and vulnerability can be traced back to family circumstances. As the child's first social environment, the family provides the child with a background for attitudes and values, and presents the first and most important life models as well as communication skills. An adolescent child triggers new stages of family development. Adolescents shift their social concern onto their peers, and consequently the family has to recognize these changes as well as the altered relationship between parents and children [15, 16]. In search of their identity, adolescents experiment with various behavioural models, including health-hazardous ones. Substance use among young people is becoming a worldwide problem, with the onset of use occurring at ever-younger ages. Current surveys indicate a steady increase in smoking and alcohol use among Lithuanian adolescents during the last decade [13, 17]. According to the data of the WHO Cross-National Study on Health Behaviour in School-aged Children (HBSC), the prevalence of regular alcohol consumption among schoolchildren increased from 9.4% to 13.6% among boys, and from 4.2% to 6.5% among girls, during the study period (1994–2002). Smoking prevalence among teenagers has increased fivefold during the last decade, and smoking teenagers now outnumber smoking adult women [19, 20].
Although the contribution of the family to child development has been investigated in various respects, the implications of the transitional adolescent period, as well as the type of parent-child relationship, both of which influence the development of health-hazardous behaviour in children, have been poorly analysed. The current survey provides an opportunity to classify the first manifestations of risk behaviour in adolescent children and to assess the first stages of harmful habit development on the grounds of two important aspects of behavioural development, namely the analogous parental behaviour model and the amount of family time available for shared activities.
The current research data are highly representative of the Lithuanian population and reflect the main indicators of family socio-economic status; however, the study sample was drawn from Kaunas inhabitants only. The sample size and response rate were adequate for estimating the statistical significance of the relationships examined. The study instrument for assessing family time for shared activities maintained appropriate internal and external validity, as tested by the authors of the instrument. Moreover, the research presents one stage of findings from a long-term epidemiological follow-up investigation of the lifestyle and health-related behaviour of young people from the age of six until the end of their adolescence. Repeated contacts with the participants while clarifying the aims of the investigation, as well as the long-term investigation itself, encouraged the children and the parents to take an active and open part in the investigation. The given analysis does not include a comparison over the long term and should therefore be considered a cross-sectional study of a randomly selected population. Further analysis of the first manifestations of smoking and alcohol use in adolescence might include variables measured in the previous stages of this longitudinal study.
The importance of parental behaviour models for children has been substantiated and proven by various studies and experiments [21–25]. Children evidently not only become aware of their parents' habits (smoking, drinking alcohol) at an early stage; they also notice even the slightest details of their parents' behaviour. The available data demonstrate that children of smoking parents attempt smoking more frequently than children of non-smokers [26–28].
According to the current research data, families with non-smoking parents and families where at least one of the parents was a smoker formed two groups of similar size. Despite the fact that mothers smoked half as frequently as fathers, maternal smoking was statistically significantly related to their sons' smoking attempts. Maternal smoking (especially during pregnancy) may not only serve as a behavioural model; it may physically precondition the child's demand for nicotine stimulation. Similar observations concerning the stronger influence of maternal, rather than paternal, smoking upon children's smoking habits are also featured in other publications [30–32]. However, there is still no unambiguous answer as to whether maternal or paternal smoking has a more significant modulating effect upon child behaviour, since some data confirm a major paternal smoking influence. Presumably, the intensity of the effect is preconditioned by parental gender and methodological differences between surveys, as well as by the cultural environment and traditional male and female family roles.
The frequency of exposure to a certain behavioural model may become a significant factor in the process of acquiring health-related behaviour. In the authors' view, this factor could be viewed as one possible explanation of the detected relationships between children's and parents' examples of alcohol use. The incidence of boys with alcohol use attempts increased correspondingly with the increasing frequency of parental participation in alcohol-furnished parties.
The fact that the onset of smoking and alcohol use was more common among boys than among girls might be viewed as influenced by socialization with respect to gender differences, and by the established tendency towards externalizing behavioural problems in boys.
The data regarding the psychological family climate and child-parent communication are abundant [1, 35]. Unfortunately, there is little scientific material available on one of the principal elements of communication, that is, the joint activities of parents and children. The last decade has brought about substantial changes in family behaviour patterns, causing a shift in family models and an increase in the number of divorces, which, in turn, have a negative influence upon communication between children and parents [1, 35]. A single-parent household places high demands on the single biological parent, forcing him or her to work more in order to support the family, thus leaving less time for the children. A considerable time deficit with respect to children has been confirmed by various international investigations. According to a recent US survey, two out of three parents stated that if they had more free time, they would spend it with their children. The results of the current survey reflected the familiar tendencies: two thirds of the schoolchildren have daily meals with their parents, and about one third of them have a daily opportunity to talk with their parents about various things. Children and adolescents who spend less time with their parents are more susceptible to the development of risk behaviour. The current research data confirm that adolescents who share less time with their parents in joint activities are more prone to try smoking as well as alcohol.
The results of the current survey revealed the influence of parental behaviour, as well as of child-parent communication, upon the child's behavioural development. The implementation of risk behaviour prevention among schoolchildren requires highlighting the importance of family time devoted to shared activities for the child's healthy development. Educating parents, promoting their motivation in favour of a healthy lifestyle, and encouraging them to spend more time with their children can reduce the manifestations of risk behaviour among adolescents.
A deficit of time for joint family activities, together with frequent parental examples of smoking and alcohol use, underlies to some extent the development of smoking and alcohol addictions in children. These issues should be widely addressed in comprehensive family health education programmes.
Zaborskis A, Žemaitienė N, Garmienė A: Harmony of relationship with parents and its impact on health behaviour and wellbeing of adolescents [in Lithuanian]: Bendravimo su tėvais darna ir jos reikšmė paauglių elgsenai ir savijautai. Lietuvos bendrosios praktikos gydytojas. 2005, 9 (3): 169-174.
Wertlieb D: American Academy of Pediatrics Task Force on the Family. Converging trends in family research and pediatrics: recent findings for the American Academy of Pediatrics Task Force on the Family. Pediatrics. 2003, 111 (6 Pt 2): 1572-87.
Greeff AP, le Roux MC: Parents' and adolescents' perceptions of a strong family. Psychol Rep. 1999, 84 (3 Pt 2): 1219-24.
Eriksen TH: Tyranny of the moment. Fast and slow time in the informational century [in Lithuanian]: Akimirkos tironija. Greitasis ir lėtasis laikas informacijos amžiuje. 2004, Vilnius: Tyto Alba
Hill MS: Marital stability and spouses' shared time. Journal of Family Issues. 1988, 9 (4): 427-51.
Schor EL: American Academy of Pediatrics Task Force on the Family. Family paediatrics: report of the Task Force on the Family. Pediatrics. 2003, 111 (6 Pt 2): 1541-71.
Yeung WJ, Stafford F: Intra-family Child Care Time Allocation: Stalled Revolution or Road to Equality? Paper presented in International Sociological Association meeting. Australia, July 2002. 2003
Sweeting H, West P: Family life and health in adolescence: a role for culture in the health inequalities debate?. Soc Sci Med. 1995, 40 (2): 163-75. 10.1016/0277-9536(94)E0051-S.
Sweeting H, West P, Richards M: Teenage family life, lifestyles and life chances: Associations with the family structure, conflict with parents and joint family activity. Int J Law Policy Family. 1998, 12: 15-46. 10.1093/lawfam/12.1.15.
Granado Alcón MC, Pedersen JM: Family as a child development context and smoking behaviour among schoolchildren in Greenland. Int J Circumpolar Health. 2001, 60 (1): 52-63.
Garniefski N, Diekstra RF: Perceived social support from family, school, and peers: relationship with emotional and behavioural problems among adolescents. J Am Acad Child Adolesc Psychiatry. 1996, 35 (12): 1657-64. 10.1097/00004583-199612000-00018.
Harter S, Whitesell NR: Multiple pathways to self-reported depression and psychological adjustment among adolescents. Development and psychopathology. 1996, 8: 761-777.
Health Behaviour in School-aged Children: a WHO Cross-National Study. Research Protocol for the 2001/2002 Survey. Edinburgh. 2001
Čekanavičius V, Murauskas G: Statistics and its practice [in Lithuanian]: Statistika ir jos taikymai. II dalis. 2002, Vilnius: TEV
Žukauskienė R: Developmental psychology [in Lithuanian]: Raidos psichologija. 1996, Vilnius
Želvys R: Development of adolescent's psyche [in Lithuanian]: Paauglio psichikos vystymasis. 1994, Vilnius
Global Youth Tobacco Survey Collaborative Group: Tobacco use among youth: a cross country comparison. Tob Control. 2002, 11 (3): 252-70. 10.1136/tc.11.3.252.
Šumskas L, Zaborskis A: Alcohol consumption in Lithuanian school-aged children during 1994–2002. Medicina (Kaunas). 2004, 40 (11): 1117-1123.
Grabauskas V, Zaborskis A, Klumbienė J, Petkevičienė J, Žemaitienė N: Changes in health behavior of Lithuanian adolescents and adults over 1994–2002 [in Lithuanian]: Lietuvos paauglių ir suaugusių žmonių gyvensenos pokyčiai 1994–2002 metais. Medicina. 2004, 40 (9): 884-890.
Veryga A: Evaluation of epidemiological situation of tobacco dependence and smoking cessation effectiveness. Summary of the Doctoral Dissertation-Kaunas. 2004
Bandura A: Social learning theory. 1977, Englewood Cliffs: Prentice-Hall
Montgomery KS: Health promotion with adolescents: examining theoretical perspectives to guide research. Res Theory Nurs Pract. 2002, 16 (2): 119-34. 10.1891/rtnp.188.8.131.52001.
Wright DR, Fitzpatrick KM: Psychosocial correlates of substance use behaviours among African American youth. Adolescence. 2004, 39 (156): 653-67.
Li C, Pentz MA, Chou CP: Parental substance use as a modifier of adolescent substance use risk. Addiction. 2002, 97 (12): 1537-50. 10.1046/j.1360-0443.2002.00238.x.
Zhang L, Welte JW, Wieczorek WF: Peer and parental influences on male adolescent drinking. Subst Use Misuse. 1997, 32 (14): 2121-36.
Chassin L, Presson CC, Rose JS, Sherman SJ, Todd M: Maternal socialization of adolescent smoking: the intergenerational transmission of parenting and smoking. Dev Psychol. 1998, 34 (6): 189-201. 10.1037/0012-16184.108.40.2069.
Engels RCME, Knibbe RA, Vries de H, Drop MJ, Breukelen van GJP: Influences of parental and best friends' smoking and drinking on adolescent use: a longitudinal study. J Appl Soc Psychol. 1999, 29: 337-361. 10.1111/j.1559-1816.1999.tb01390.x.
den Exter EAW, Blokland MA, Engels RCME, Hale WW, Meeus W, Willemsen MC: Lifetime parental smoking history and cessation and early adolescent smoking behaviour. Preventive Medicine. 2004, 38: 359-368. 10.1016/j.ypmed.2003.11.008.
Law KL, Stroud LR, LaGasse LL, Niaura R, Liu J, Lester BM: Smoking during pregnancy and newborn neurobehaviour. Pediatrics. 2003, 111: 1318-23. 10.1542/peds.111.6.1318.
Hover SJ: Factors associated with smoking behaviour in adolescent girls. Addictive Behaviours. 1988, 13: 139-145. 10.1016/0306-4603(88)90003-2.
Brenner H, Scharrer SB: Parental smoking and sociodemographic factors related to smoking among German medical students. European Journal of Epidemiology. 1996, 12: 171-176. 10.1007/BF00145503.
Herlitz C, Westholm B: Smoking and associated factors among young Swedish females. Scandinavian Journal of Primary Health Care. 1996, 14: 209-215.
Shamsuddin K, Abdul Harris M: Family influence on current smoking habits among secondary school children in Kota Bharu, Kelantan. Singapore Medical journal. 2000, 41: 167-171.
Pastavkaitė G: Mental health of junior school aged children and links with the social factors. summary of the Doctoral Dissertation-Kaunas. 2005
Garmienė A, Žemaitienė N, Zaborskis A: Family structure and communication between children and parents [in Lithuanian]: Šeimos struktūra bei tėvų ir vaikų bendravimas. Lietuvos bendrosios praktikos gydytojas. 2004, 8 (11): 708-712.
Wolff EN: Recent Trends in Wealth Ownership, 1983–1998. Jerome Levy Economics Institute Working Paper No.300. 2000, [http://ssrn.com/abstract=235472]
Talking Points – State Services. Children, the Internet and the family time. Media Release Points, Ask-Alabama Poll, Fall. 2004, 1 (7): [http://web6.duc.auburn.edu/outreach/ask_alabama/december2004/Children%20&%20Internet%20Media%20talking%20points.pdf]
Garmienė A, Žemaitienė N, Zaborskis A: Schoolchildren's health behaviour and their relationship with social integration into peer group [in Lithuanian]: Moksleivių gyvensenos ir socialinės integracijos bendraamžių grupėje ryšys. Visuomenės sveikata. 2003, 4 (23): 39-44.
The pre-publication history for this paper can be accessed here: http://www.biomedcentral.com/1471-2458/6/287/prepub
The author(s) declare that they have no competing interests.
AG carried out the investigation, performed the statistical data analysis, participated in the data interpretation, and drafted the manuscript. NZ substantially contributed to the investigation project and design development, participated in the data interpretation and manuscript preparation. AZ coordinated the investigation, helped to perform the statistical analysis as well as to prepare the draft of the manuscript. All authors read and approved the final manuscript.
Asta Garmienė, Nida Žemaitienė contributed equally to this work.
Electronic supplementary material
Additional file 1: We presented the questions used in the survey for the assessment of the advanced health-hazardous behaviour modelling possibilities in the families, as well as time spent for joint family activities, and for the examination of the importance of time spent for joint family activities for the smoking and alcohol use habit initiation among children. (DOC 34 KB)
About this article
Cite this article
Garmienė, A., Žemaitienė, N. & Zaborskis, A. Family time, parental behaviour model and the initiation of smoking and alcohol use by ten-year-old children: an epidemiological study in Kaunas, Lithuania. BMC Public Health 6, 287 (2006) doi:10.1186/1471-2458-6-287
- Shared Activity
- Family Time
- Paternal Smoking
- Smoking Addiction
- Alcohol Consumption Frequency | <urn:uuid:8f984fb8-7142-490e-8827-45c53c87b9a0> | CC-MAIN-2019-47 | https://bmcpublichealth.biomedcentral.com/articles/10.1186/1471-2458-6-287 | s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496669809.82/warc/CC-MAIN-20191118154801-20191118182801-00298.warc.gz | en | 0.93172 | 7,575 | 2.546875 | 3 |
Does ISO 27001 Cover the Requirements of GDPR?
In today's ever–evolving digital environment, the protection of personal data has become more critical than ever. As data breaches occur with greater frequency, the cyber–security standards that were put into law 20 years ago are no longer enough to protect the information of businesses and customers that interact online. The larger the database, the graver the consequences of a breach for parties at both ends.
In the present technological landscape, a more advanced set of regulations has been needed to mitigate the dangerous and costly consequences that often come with online data collection. To that end, new regulations are set to take effect across Europe that will impact all businesses around the world that do business with residents of that region. As companies prepare to bring their systems into compliance, best–practice standards are becoming increasingly popular.
What Is the GDPR?
The General Data Protection Regulation (GDPR) was developed by the European Union over a four–year period to serve as a legislative solution to issues regarding data protection in the present day. Currently, laws regarding data protection in the United Kingdom are based on the Data Protection Act of 1998 — an update of the 1995 EU Data Protection Directive — which itself was designed to handle security issues as understood by lawmakers and programming experts in the years leading up to the millennium.
The GDPR is intended to solve security issues that have emerged over the past two decades since the development of cloud technology and its impact on data security. The new regulations, which go into effect in 2018, will impose strict fines throughout the EU for breaches in data security. The law will also offer more power to citizens in regards to what companies can do with private data. While the new law will be beneficial on all sides, the GDPR has ostensibly been designed to protect consumers.
What Is the Purpose Behind the GDPR?
There are two main reasons for the new protective regulations of the GDPR. First and foremost, the regulations are designed to protect customer data in the new digital environment. In an age where companies like Facebook and Google share the personal data of account holders in exchange for site access and features, the GDPR seeks to return more control of the situation back to the user. This could make EU users less wary of sharing information on such platforms, would, in turn, allow the companies behind such platforms to serve the European public better.
The current laws that are in place were drafted long before the current digital environment, in which cloud technology has made it easier than before to exploit private data. With the heightened restrictions imposed by the GDPR, the EU aims to renew the public's trust in the ever–developing digital arena. Fears of advanced online hackers could be reduced, if not eliminated, by the new regulations.
The other reason for the GDPR is to establish a clear–cut set of regulations under which businesses can operate in regards to the handling of customer data. With these new rules, the boundaries would be easier to understand on both the corporate and consumer end, which would make it easier for businesses to earn and hold the trust of customers.
With the laws on data protection more clearly defined throughout the EU, the GDPR could save the European business economy roughly €2.3 billion a year. That savings, in turn, could be passed onto consumers.
When Will the GDPR Go Into Effect?
The GDPR will go into effect across the EU on May 25, 2018. While the new law was agreed upon two years before the date as mentioned earlier, businesses have been given 24 months to bring their systems into compliance with GDPR regulations. For thousands of companies worldwide, the new law has led to widespread adoption of best–practice standards.
Still, it remains to be seen just how immediate the new regulations will be implemented across the board. Despite the fact that most IT security professionals have acknowledged the scope of the GDPR, less than half of them have readied their systems for the new law. According to a recent Imperva survey of 170 security staff professional, only 43% have examined the probable effects of the GDPR and the standards it will set for corporate legal compliance in the realm of data privacy. A lot of organizations have yet to understand the effects of the new law.
Granted, many of the respondents in the Imperva survey were based in the US, but the impact of the GDPR will still be felt across the pond. More than a fourth (28%) of those surveyed said that they didn't understand how the GDPR would impact the way that American companies handle the data of European customers. Consequently, few of the stateside parties for whom the law concerns have taken steps to prepare for the implementation of the GDPR.
The trouble is, organizations in the US, Canada and other parts of the world that gather the data of EU residents will be subject to the same penalties as European companies that fail to comply with the new regulations.
Who Does the GDPR Effect?
The two parties in the realm of data security that are directly impacted by the implementation of the GDPR are the controllers and processors of digital information. The first of those parties, the controllers, are the entities that determine the methods and reasons for the processing of user data, while the latter side, the processors, are the entities that are responsible for the processing of user data. Together, the two parties are the balancing act behind personal data security on a global scale.
The controlling party would be represented by any organization — be it a company, a charity or a government entity — that handles user information. The processing party would be represented by the IT firms that actually handle the technical functions through which user data is processed.
The GDPR will affect all controllers and processors that handle the personal data of EU residents, regardless of whether the controlling or processing parties in question are based in Europe or abroad. As such, the new law affects all online businesses and platforms that accept international customers or members.
Regarding GDPR fulfillment, the balancing act between controllers and processors works as follows — the controllers must ensure that their processors function in accordance with the new regulations, while the processors themselves must make sure that their activities abide the new law and maintain applicable records.
If the latter party holds full or even partial responsibility for a data breach, the processor will be penalized much more strictly under the incoming regulations than under the pre-existing Data Protection Act. The actual source of a breach won't even matter under the new law, as the processor will bear most of the blame.
How Is Data Processed Under the GDPR?
As soon as the GDPR is implemented on May 25, 2018, controllers will be required under law to process EU user data for specific purposes with complete transparency. Once the purpose is completed, and the controlling/processing entities have no lawful need for the data of a given user, the data must be deleted. Therefore, personal data will no longer be stored idly and indefinitely on servers that could be hacked at any time.
What Does the Word "Lawful" Entail Under the GDPR?
The word "lawful" covers a spectrum of meaning within the parlance of GDPR application. On the one hand, the use of private data could be considered lawful if the person in question has consented to the use of his or her personal information. Alternately, "lawful" could apply to the following four justifications:
compliance with a legal contract
the protection of an interest deemed essential to the life of an individual
the processing of data within the interest of the public
the prevention of fraud
For the personal data of EU residents to be processed lawfully under the GDPR, at least one of the justifications mentioned above will have to apply. The law is designed to put the interests of online users above the entities that could be intent on exploiting or sharing personal data.
How Is Consent Granted Under the GDPR?
For controllers to get the agreement of an individual, the person must give consent through a direct, confirmed action. The pre–existing standard of justification, which allows controllers to use data with only passive acceptance on the part of the person, will not suffice under the GDPR. Therefore, consent cannot be gained through means that would only be understood by users who parse the fine print of a given set of terms.
For example, an interface that obtains consent in a manner that could be confusing to a customer — or that automatically renews a customer without consent unless the customer in question follows a set of complex steps to explicitly withdrawal — will not be allowed under the GDPR.
It's the controller's obligation to keep a record of the time, date and means through which a person has given his or her consent, and to respect the individual's wish to withdrawal at any time. Any business, charity or government agency that doesn't currently conform to these new regulations will have to bring their protocols into compliance by the date at which the GDPR takes effect. For many companies, the implementation of best–practice standards has made this transition a whole lot easier.
What Is Considered Private Data Under the GDPR?
The category that constitutes private data within the EU has become a whole lot broader under the GDPR. In response to the type of information that companies now gather from individuals, information about a user's computer and location, as indicated by an IP address, will now be considered private data. Other information, such as the financial, psychological or ethnic history of an individual, would also be defined as personal information. Anything that could be used to identify an individual would qualify.
Additionally, anonymized information about an individual could also be defined as private data, as long as the information in question could easily be traced back to the real identity of the person in question. If a person has lived under an alias or pseudonym, yet the duo identity is widely known, that would be considered personal information. Of course, anything already defined as private under the Data Protection Act will also fall under the definition of personal information/private data under the GDPR.
When Can an Individual Access His or Her Stored Private Data?
A person can request to see his or her private data at reasonable intervals, as defined by the GDPR. Under the new law, controllers will be obligated to respond to a user's request within 30 days. The new regulations will also require controllers and processors to maintain policies of transparency regarding the means through which data is gathered, used and processed. The language used to explain these processes to people must be worded in simple, clear layman's terms and not be littered with confusing jargon. It can't read like a formal, legal document.
Each individual is entitled to access any private data held by a company. Furthermore, each individual has the right to know just how long his or her information will be stored, which parties will get to view it and the reasons for which the data is being used. Whenever possible, controllers will be encouraged to offer secure viewing access for any account holder who wishes to see his or her personal information, as held in a company's database. People will also be able to request that incorrect or incomplete data be corrected at any time.
What Is the "Right to be Forgotten"?
A person can request that his or her data be deleted at any time, for any reason, under a clause of the GDPR known as the "right to be forgotten." If an individual feels that his or her data is no longer essential for the original purpose of its collection — such as when an address is collected to verify that a person meets the geographical requirements for participation in a contest or survey — a request can be made to have the information removed from a database.
In keeping with this principle, a person can also request that his or her data be deleted from a server if the individual withdraws consent for the data collection, or if the individual directly opposes the way that the data has been processed. When a request to be forgotten has been made, the controller is obligated to inform Google and other data–gathering organizations that all copies and links to said data must be deleted. The risk of having dormant personal info leaked to third–party marketers will be greatly minimized under the new law.
What If a Person Wants to Transfer His or Her Data Elsewhere?
Under the GDPR, EU residents will be able to request that information be moved from one database to another, free of charge, for any reason. The law stipulates that controllers will be required to store private data in formats of common use, such as CSV, that make it easy to transfer data from one organization to another. All such requests for personal data transfer must be honored within 30 days. The process for making such requests will also be simplified under the new law.
What If an Organization Is Hit With a Data Breach?
Any organization that collects the private data of users is required to report news of a data breach to a protection authority. The news must be reported within 72 hours of when the breach first becomes known to the organization. In the United Kingdom, the Information Commissioner's Office (ICO) serves as the authority on such matters. Once the GDPR goes into effect, the UK is expected to expand oversight on such issues.
The 72–hour deadline to report a breach will not always give an organization enough time to learn the full nature of a particular offense, but it should provide sufficient time to gather adequate information for the authority about the kind of data that will be affected by said breach. Just as importantly, an organization should be able to give a rough estimate of the number of people that will be impacted by the breach. This way, potentially affected parties will have more time to react.
Before a call is even placed to the data protection authority, the people who could be affected by the breach should be notified. Failure to notify the data protection authority within 72 hours could result in a fine of as much as 2% of a company's global annual revenue, or a fine of €10 million — whichever happens to be the larger amount. Compared to current ICO fines, which only go as high as £500,000, the penalties under the GDPR are far stricter.
What Additional Non–compliance Fines Do Organizations Face Under the GDPR?
If an organization fails to abide by the core principles of the GDPR — such as gaining the consent, respecting the rights, and obeying the requests of individuals — the organization could face fines twice as high as those imposed for failure to report a data breach. Under the GDPR, the fine for failing to follow the new law could be as high as 4% of a company's global revenue, or a fine of €20 million — whichever happens to be the larger amount.
How Will the GDPR Impact The UK After Brexit?
Until the UK actually withdraws from the EU, companies based in England, Scotland and Wales will still have to be in full compliance with the new laws. British citizens, meanwhile, will be protected under the GDPR until Brexit takes effect. Once the UK has completed its exit from the EU, British companies will still be required to follow the new regulations when handling the private data of EU citizens.
Because the GDPR will go into law before Brexit takes place, any new data–protection legislation that the UK implements will likely be designed to work in compliance with EU regulations, rather than force the EU to accommodate UK standards. As such, there won't likely be much difference between the GDPR and British laws on information protection in the foreseeable future, if ever.
How Will the GDPR Affect Smaller Businesses?
Compliance with the GDPR could be complicated at first for companies that haven't even begun the process of updating systems to meet the new regulations. This holds especially true for smaller businesses, which might lack the infrastructure to complete such changes before the law takes effect on May 25, 2018.
For small companies that are just beginning — or have yet to prepare — for implementation of the GDPR, it will likely be best to hire a third–party entity, such as a security or consultancy firm, to help with the process of bringing systems into full compliance. Better yet, the adoption of best–practices standards can contribute to ensuring that compliance is largely met in advance of any audits or assessments.
One of the clearest–cut consequences for organizations that fail to comply with the new regulations are the penalties that follow a security breach. The party responsible for a data breach — be it an outside hacker, an inside rogue or an unidentified source — won't matter under the GDPR, which places full responsibility on the organization itself. For small businesses in particular, these policies and the resulting fines will make compliance especially important. To that end, the early adoption of best–practices standards can save smaller businesses a whole lot of money.
What Benefits Do Companies Gain Under the GDPR?
When a company's database is brought into compliance with the GDPR, the benefits are multifaceted. For starters, the regulations can help a company establish better practices for the handling and security of collected information. After all, when customers or site members reap benefits, the businesses and platforms in question earn better reputations, which lead to more sales and sign ups.
Compliance can also ensure that a company adopts protocols that keep processes updated in a timely manner. Furthermore, the GDPR will likely motivate companies to improve the integrity of collected data. Over time, companies may even be inspired to develop better methods for capturing and storing leads and customer information.
How Will the Changes Affect Businesses That Are Already Regulated?
Organizations that are already regulated under best–practice models like the FCA or PRA will probably see few changes under the GDPR. Likewise, if a business is accredited with ISO 27001 certification, few changes are likely to be necessary once the EU regulations take effect.
For companies that lack these models or certifications and have yet to adapt to newer, higher standards of security, the future law will be a lot harder to come into compliance with by May 25, 2018. The sooner a company begins the preparation process, the easier and less risky or costly it will be to collect and handle private data once the regulations are implemented. This is one of the primary reasons why more companies across the globe are implementing ISO 27001 as a best–practices standard.
How Can ISO 27001 Help With GDPR Compliance?
Organizations around the world that have studied the GDPR are likely aware that the regulations are an encouragement to adopt best–practice schemes. As businesses and government agencies prepare for the implementation of the EU law, new systems are being developed to further enhance best–practice models in the areas of data security.
ISO 27001 is an information security standard that helps companies come into compliance with international best–practice models. The standard covers three key components of data security — people, processes and technology. When steps are taken to safeguard data with these three components in mind, businesses are better equipped to protect information, mitigate risks and rectify procedures that are deemed ineffective. As such, a growing consensus has emerged in the corporate sector that deems ISO 27001 to be the gold standard in best–practice schemes.
By putting the ISO 27001 standard into effect, an organization activates an information security management system (ISMS) that works within the business culture of the company in question. The standard is regularly updated and enhanced, and these ongoing improvements allow the ISMS to stay abreast of changes both within and outside of the company, all the while spotting and eliminating new risks.
How Does ISO 27001 Apply to Articles in the GDPR?
In Article 32 of the GDPR, policies are outlined for the encryption of data, the assurance of confidentiality and availability and the testing of security. So how does ISO 27001 work the incoming EU law? In the following ways:
Data encryption. This is encouraged by ISO 27001 as the primary method to reduce the possibility of risks. In ISO 27001:2013, more than one hundred controls are outlined for use, each of which can be implemented to lower the possibility of security risks. With each control based on the result of a previous risk assessment, any organization that utilizes the standard can pinpoint the at–risk assets and apply the necessary encryption.
Confidentiality, integrity and availability of data. This is one of the fundamental principles of ISO 27001. While the confidentiality of data is the bedrock of customer trust in an online company, the integrity and availability of private information is also crucial. If the data can easily be accessed, but the format is unreadable due to system errors, the data has lost its integrity. By the same token, if the data is safe yet out of reach to a person who needs it, the data can't be considered available.
Risk assessment. According to ISO 27001, an organization must enact complete evaluations of all possible vulnerabilities that could impact a company's data, and to leave no stone unturned in the effort to safeguard the privacy, accessibility and integrity of that information. At the same time, the standard discourages the use of overbearing security protocols that could hinder a company's ability to operate efficiently.
Business continuity. In ISO 27001, the fundamentals of continuity in business management are laid out, whereby controls are implemented to help a company keep vital information readily available in the event of a system interruption. These same protocols can assist a business in its quick recovery from what would otherwise be a lengthy and costly shutdown. With the standard in place, companies experience little, if any, downtime.
Testing and assessments. An organization that gains ISO 27001 certification receives assessments and audits of its ISM by a third–party certification firm. This ensures that the ISM is compliant with the standard. Any company that implements the standard must keep its ISM under constant review to ensure that it remains protective of private data.
Compliance. According to control A.18.1.1 of ISO 27001 — which concerns the identification of applicable legislation and contractual requirements — an organization must list all legislative, regulatory and contractual requirements. If an organization is required to comply with the GDPR, this must be listed as one of the regulatory requirements. Even without the EU law, control A.18.1.4 leads organizations that utilize the standard through the process of enacting data protection.
Breach notification. ISO 27001 control A.16.1 ensures the efficient management of security incidents, which helps organizations come into compliance with the GDPR, under which authorities must be notified within 72 hours of the discovery of a data breach. Companies that have implemented the standard are faster to respond to such incidents, which makes compliance in this area easier to meet.
Asset management. This is addressed under ISO 27001 control A.8, which includes personal data as an information security asset. Organizations that implement the standard are given an understanding of which types of data are essential, as well as where such data must be stored and for how long. Control A.8 also covers the origin of personal data and the party that gets to access said data, all of which are requirements of GDPR.
Of course, the requirements of ISO 20071 extend far beyond the principles as mentioned earlier. As an all–encompassing best–practices standard, various other areas are also covered, such as methods that apply to staff training. The standard has been implemented by countless organizations throughout the world. With the frequency and consequences of today's data breaches, the standard has become an essential part of data security in the digital marketplace.
Is ISO 27001 Enough for GDPR Compliance?
While there are some areas covered under the GDPR that are not controlled under the ISO 27001 standard — such as the right of a data subject to have his or her data moved or deleted — the standard covers most of the requirements of the new law by virtue of the fact that private data is recognized as an information security asset under ISO 27001. As such, the standard and the new regulations share like–minded views on data security.
Essentially, any company that interacts with EU residents will need to reach compliance with the GDPR. Because ISO 27001 is considered far and wide as the most secure of all the best–practice standards, it will likely prove to be the most applicable standard under the new regulations. Already, more and more companies are swiftly implementing the standard in advance of the new law.
How Can an ISO 27001–certified Company Ensure That It's Also GDPR Compliant?
ISO 27001 is one of the most applicable standards for GDPR compliance. If the standard has already been implemented by a company, said company is already more than half prepared for the incoming EU regulations. To verify whether full compliance has been reached, any concerned company should run a GDPR GAP analysis, which will pinpoint any requirements that still need to be added to the ISM of an already implemented ISO 27001.
How Will the GDPR Affect Companies Based in the US?
Any company in the US that profiles of sells products to EU residents will need to bring its data system into compliance with the GDPR by May 25, 2018. Granted, while many stateside companies already have privacy policies in place that are designed to meet compliance with pre–existing EU laws, revisions will need to be made by these companies in preparation for the upcoming regulations. Any company that has thus far relied on pre–cloud security standards will need to bring its systems up to speed and fast.
How Will the GDPR Impact Companies Around the Globe?
No matter where in the world a company is based outside the EU, that company will still need to be in compliance with the GDPR as long as it gathers the personal data or sells goods or services to EU residents. This would apply to any company worldwide that collects money or information from people on the Internet.
The penalties for non–EU companies that fail to meet GDPR compliance will be the same as for European–based companies. Therefore, failure to follow the incoming EU regulations could result in whichever of the following fines constitutes the greater amount — a fine that equals €20 million, or a fine that takes 4% of a company's annual revenue. These penalties will apply to companies in nations across the globe — Canada, Japan, Brazil, China, Australia, India, South Africa, Argentina — as long as said companies do business with the residents of any EU nation.
At NQA, we help companies bring systems into compliance with the current standards of the digital environment. To learn more about how we can help you prepare for the GDPR, contact us today to request a quote. | <urn:uuid:bff0c776-1dc2-43f5-856f-a2e8fcf84578> | CC-MAIN-2019-47 | https://www.nqa.com/en-ca/resources/blog/august-2017/iso-27001-gdpr-requirements | s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496667262.54/warc/CC-MAIN-20191113140725-20191113164725-00259.warc.gz | en | 0.950338 | 5,422 | 3.0625 | 3 |
|Other names||wilderness diarrhea, or backcountry diarrhea|
Wilderness-acquired diarrhea is a variety of traveler's diarrhea in which backpackers and other outdoor enthusiasts are affected. Potential sources are contaminated food or water, or "hand-to-mouth", directly from another person who is infected. Cases generally resolve spontaneously, with or without treatment, and the cause is typically unknown. The National Outdoor Leadership School has recorded about one incident per 5,000 person-field days by following strict protocols on hygiene and water treatment. More limited, separate studies have presented highly varied estimated rates of affliction that range from 3 percent to 74 percent of wilderness visitors. One survey found that long-distance Appalachian Trail hikers reported diarrhea as their most common illness. Based on reviews of epidemiologic data and literature, some researchers believe that the risks have been over-stated and are poorly understood by the public. U.S. adults annually experience 99 million episodes of acute diarrhea in a population of about 318 million. A very small fraction of these cases results from infections acquired in the wilderness, and all infectious agents occur in both wilderness and non-wilderness settings.
The average incubation periods for giardiasis and cryptosporidiosis are each 7 days. Certain other bacterial and viral agents have shorter incubation periods, although hepatitis may take weeks to manifest itself. The onset usually occurs within the first week of return from the field, but may also occur at any time while hiking.
Most cases begin abruptly and usually result in increased frequency, volume, and weight of stool. Typically, a hiker experiences at least four to five loose or watery bowel movements each day. Other commonly associated symptoms are nausea, vomiting, abdominal cramping, bloating, low fever, urgency, and malaise, and usually the appetite is affected. The condition is much more serious if there is blood or mucus in stools, abdominal pain, or high fever. Dehydration is a possibility. Life-threatening illness resulting from WAD is extremely rare but can occur in people with weakened immune systems.
Some people may be carriers and not exhibit symptoms.
Infectious diarrhea acquired in the wilderness is caused by various bacteria, viruses, and parasites (protozoa). The most commonly reported are the protozoa Giardia and Cryptosporidium. Other infectious agents may play a larger role than generally believed and include Campylobacter, hepatitis A virus, hepatitis E virus, enterotoxogenic E. coli, E. coli O157:H7, Shigella, and various other viruses. More rarely, Yersinia enterocolitica, Aeromonas hydrophila, and Cyanobacterium may also cause disease.
Giardia lamblia cysts usually do not tolerate freezing although some cysts can survive a single freeze–thaw cycle. Cysts can remain viable for nearly three months in river water when the temperature is 10 °C and about one month at 15–20 °C in lake water. Cryptosporidium may survive in cold waters (4 °C) for up to 18 months, and can even withstand freezing, although its viability is thereby greatly reduced. Many other varieties of diarrhea-causing organisms, including Shigella and Salmonella typhi, and hepatitis A virus, can survive freezing for weeks to months. Virologists believe all surface water in the United States and Canada has the potential to contain human viruses, which cause a wide range of illnesses including diarrhea, polio and meningitis.
Modes of acquiring infection from these causes are limited to fecal-oral transmission, and contaminated water and food. The major factor governing pathogen content of surface water is human and animal activity in the watershed.
It may be difficult to associate a particular case of diarrhea with a recent wilderness trip of a few days because incubation of the disease may outlast the trip. Studies of trips that are much longer than the average incubation period, e.g. a week for Cryptosporidium and Giardia, are less susceptible to these errors since there is enough time for the diarrhea to occur during the trip. Other bacterial and viral agents have shorter incubation periods, although hepatitis may require weeks.
A suspected case of wilderness-acquired diarrhea may be assessed within the general context of intestinal complaints. During any given four-week period, as many as 7.2% of Americans may experience some form of infectious or non-infectious diarrhea. There are an estimated 99 million annual cases of intestinal infectious disease in the United States, most commonly from viruses, followed by bacteria and parasites, including Giardia and Cryptosporidium. There are an estimated 1.2 million U.S. cases of symptomatic giardiasis annually. However, only about 40% of cases are symptomatic.
Since wilderness acquired diarrhea can be caused by insufficient hygiene, contaminated water, and (possibly) increased susceptibility from vitamin deficiency, prevention methods should address these causes.
The risk of fecal-oral transmission of pathogens that cause diarrhea can be significantly reduced by good hygiene, including washing hands with soap and water after urination and defecation, and washing eating utensils with warm soapy water. Additionally a three-bowl system can be used for washing eating utensils.
Water can be treated in the wilderness through filtering, chemical disinfectants, a portable ultraviolet light device, pasteurizing or boiling. Factors in choice may include the number of people involved, space and weight considerations, the quality of available water, personal taste and preferences, and fuel availability.
In a study of long-distance backpacking, it was found that water filters were used more consistently than chemical disinfectants. Inconsistent use of iodine or chlorine may be due to disagreeable taste, extended treatment time or treatment complexity due to water temperature and turbidity.
Because methods based on halogens, such as iodine and chlorine, do not kill Cryptosporidium, and because filtration misses some viruses, the best protection may require a two-step process of either filtration or coagulation-flocculation, followed by halogenation. Boiling is effective in all situations.
Iodine resins, if combined with microfiltration to remove resistant cysts, are also a viable single-step process, but may not be effective under all conditions. New one-step techniques using chlorine dioxide, ozone, and UV radiation may prove effective, but still require validation.
Ultraviolet (UV) light for water disinfection is well established and widely used for large applications, like municipal water systems. Some hikers use small portable UV devices which meet the U.S. EPA Guide Standard and Protocol for Testing Microbiological Water Purifiers, for example, the SteriPEN. Another approach to portable UV water purification is solar disinfection (also called sodis). Clear water is sterilized by putting it in a clear polyethylene (PET) bottle and leaving it in direct sunlight for 6 hours.
Water risk avoidance
Different types of water sources may have different levels of contamination:
- More contamination may be in water that
- likely could have passed through an area subject to heavy human or animal use
- is cloudy, has surface foam, or has some other suspicious appearance.
- Less contamination may be in water from
- springs (provided the true source is not surface water a short distance above)
- large streams (those entering from the side may have less contamination than those paralleling the trail)
- fast-flowing streams
- higher elevations
- lakes with undisturbed sediments (10 days undisturbed water storage can result in 75–99% removal of coliform bacteria by settling to the bottom)
- freshly melted snow
- deep wells (provided they aren't subject to contamination from surface runoff)
- regions where there was a heavy snow year when streams run full and long compared to dry years.
Rain storms can either improve or worsen water quality. They can wash contaminants into water and stir up contaminated sediments with increasing flow, but can also dilute contaminants by adding large amounts of water.
Unfortunately, there have not been any epidemiological studies to validate the above, except possibly for the case of spring water.
WAD is typically self-limited, generally resolving without specific treatment. Oral rehydration therapy with rehydration salts is often beneficial to replace lost fluids and electrolytes. Clear, disinfected water or other liquids are routinely recommended.
Hikers who develop three or more loose stools in a 24-hour period – especially if associated with nausea, vomiting, abdominal cramps, fever, or blood in stools – should be treated by a doctor and may benefit from antibiotics, usually given for 3–5 days. Alternatively, a single dose azithromycin or levofloxacin may be prescribed. If diarrhea persists despite therapy, travelers should be evaluated and treated for possible parasitic infection.
Cryptosporidium can be quite dangerous to patients with compromised immune systems. Alinia (nitazoxanide) is approved by the FDA for treatment of Cryptosporidium.
The risk of acquiring infectious diarrhea in the wilderness arises from inadvertent ingestion of pathogens. Various studies have sought to estimate diarrhea attack rates among wilderness travelers, and results have ranged widely. The variation of diarrhea rate between studies may depend on the time of year, the location of the study, the length of time the hikers were in the wilderness, the prevention methods used, and the study methodology.
The National Outdoor Leadership School (NOLS), which emphasizes strict hand-washing techniques, water disinfection and washing of common cooking utensils in their programs, reports that gastrointestinal illnesses occurred at a rate of only 0.26 per 1000 program days. In contrast, a survey of long-distance Appalachian Trail hikers found more than half the respondents reported at least one episode of diarrhea that lasted an average of two days. (Infectious diarrhea may last longer than an average of two days; certain forms of non-infectious diarrhea, caused by diet change etc., can be of very brief duration). Analysis of this survey found occurrence of diarrhea was positively associated with the duration of exposure in the wilderness. During any given four-week period, as many as 7.2% of Americans may experience some form of infectious or non-infectious diarrhea. A number of behaviors each individually reduced the incidence of diarrhea: treating water; routinely washing hands with soap and water after defecation and urination; cleaning cooking utensils with soap and warm water; and taking multi-vitamins.
A variety of pathogens can cause infectious diarrhea, and most cases among backpackers appear to be caused by bacteria from feces. A study at Grand Teton National Park found 69% of diarrhea affected visitors had no identifiable cause, that 23% had diarrhea due to Campylobacter and 8% of patients with diarrhea had giardiasis. Campylobacter enteritis occurred most frequently in young adults who had hiked in wilderness areas and drunk untreated surface water in the week prior. Another study tested 35 individuals before and after a trip to the Desolation Wilderness of California. Giardia cysts were found in fecal samples from two people after the trip, but they were asymptomatic. A third person was empirically treated for symptoms of giardiasis.
Fecal-oral transmission may be the most common vector for wilderness acquired diarrhea. There are differing opinions regarding the importance of routine disinfection of water during relatively brief backcountry visits.
Backcountry water quality surveys
Infection by fecal coliform bacteria, which indicate fecal pollution, are more common than giardiasis. Risks are highest in surface water near trails used by pack animals and cattle pastures.
Most samples of backcountry water in the Desolation Wilderness in California have found very low or no Giardia cysts. The infectious dose of giardia, however, is very low, with about 2% chance of infection from a single cyst. Also, very few studies have addressed the issue of transient contamination. According to one researcher, the likely model for the risk of Giardia from wilderness water is pulse contamination, that is, a brief period of high cyst concentration from fecal contamination.
Diarrhea acquired in the wilderness or other remote areas is typically a form of infectious diarrhea, itself classified as a type of secretory diarrhea. These are all considered forms of gastroenteritis. The term may be applied in various remote areas of non-tropical developed countries (U.S., Canada, western Europe, etc.), but is less applicable in developing countries, and in the tropics, because of the different pathogens that are most likely to cause infection.
- Hargreaves JS (2006). "Laboratory evaluation of the 3-bowl system used for washing-up eating utensils in the field". Wilderness Environ Med. 17 (2): 94–102. doi:10.1580/PR17-05.1. PMID 16805145.
Diarrhea is a common illness of wilderness travelers, occurring in about one third of expedition participants and participants on wilderness recreation courses. The incidence of diarrhea may be as high as 74% on adventure trips. ...Wilderness diarrhea is not caused solely by waterborne pathogens, ... poor hygiene, with fecal-oral transmission, is also a contributing factor
- Boulware DR (2004). "Influence of Hygiene on Gastrointestinal Illness Among Wilderness Backpackers". J Travel Med. 11 (1): 27–33. doi:10.2310/7060.2004.13621. PMID 14769284.
- McIntosh SE, Leemon D, Visitacion J, Schimelpfenig T, Fosnocht D (2007). "Medical Incidents and Evacuations on Wilderness Expeditions" (PDF). Wilderness and Environmental Medicine. 18 (4): 298–304. doi:10.1580/07-WEME-OR-093R1.1. PMID 18076301.
- Zell SC (1992). "Epidemiology of Wilderness-acquired Diarrhea: Implications for Prevention and Treatment". J Wilderness Med. 3 (3): 241–9. doi:10.1580/0953-9859-3.3.241.
- Boulware DR, Forgey WW, Martin WJ (March 2003). "Medical risks of wilderness hiking". The American Journal of Medicine. 114 (4): 288–93. doi:10.1016/S0002-9343(02)01494-8. PMID 12681456.
- Welch TP (2000). "Risk of giardiasis from consumption of wilderness water in North America: a systematic review of epidemiologic data". International Journal of Infectious Diseases. 4 (2): 100–3. doi:10.1016/S1201-9712(00)90102-4. PMID 10737847. Archived version April 20, 2010
- Backer, Howard (1992). "Wilderness acquired diarrhea (editorial)". Journal of Wilderness Medicine. 3: 237–240. doi:10.1580/0953-9859-3.3.237.
- Derlet, Robert W. (April 2004). "High Sierra Water: What is in the H20?". Yosemite Association. Archived from the original on 2007-10-12.
- "Acute Diarrhea".
- CDC Division of Parasitic Diseases (2004). "CDC Fact sheet: Giardiasis". Centers for Disease Control. Retrieved 2008-10-13.
- National Center for Zoonotic, Vector-Borne, and Enteric Diseases (2008-04-16). ""Crypto" - Cryptosporiodosis". Centers for Disease Control. Retrieved 2008-10-13.CS1 maint: multiple names: authors list (link)
- (Backer 2007, p. 1371)
- (Backer 2007, p. 1369)
- EPA, OEI, OIAA, IAD, US. "Water Resources" (PDF).CS1 maint: multiple names: authors list (link)
- Prepared by Federal-Provincial-Territorial Committee on Drinking Water of the Federal-Provincial-Territorial Committee on Health and the Environment (2004) (2004). "Protozoa: Giardia and Cryptosporidium" (PDF). Guidelines for Canadian Drinking Water Quality: Supporting Documentation. Health Canada. Retrieved 2008-08-07.
- Dickens DL, DuPont HL, Johnson PC (June 1985). "Survival of bacterial enteropathogens in the ice of popular drinks". JAMA. 253 (21): 3141–3. doi:10.1001/jama.253.21.3141. PMID 3889393.
- Backer H (2000). "In search of the perfect water treatment method" (PDF). Wilderness Environ Med. 11 (1): 1–4. doi:10.1580/1080-6032(2000)011[0001:isotpw]2.3.co;2. PMID 10731899.
- Gerba C, Rose J (1990). "Viruses in Source and Drinking Water". In McFeters, Gordon A. (ed.). Drinking water microbiology: progress and recent developments. Berlin: Springer-Verlag. pp. 380–99. ISBN 0-387-97162-9.
- White, George W. (1992). The handbook of chlorination and alternative disinfectants (3rd ed.). New York: Van Nostrand Reinhold. ISBN 0-442-00693-4.
- (Backer 2007, p. 1374)
- Boulware DR, Forgey WW, Martin WJ 2nd (2003). "Medical Risks of Wilderness Hiking". Am J Med. 114 (4): 288–93. doi:10.1016/S0002-9343(02)01494-8. PMID 12681456.
- Scallan, E. J.; A. Banerjee; S. E. Majowicz; et al. (2002). "Prevalence of Diarrhea in the Community in Australia, Canada, Ireland and the United States" (PDF). CDC. Retrieved 2008-10-15.
- Garthright WE, Archer DL, Kvenberg JE (1988). "Estimates of incidence and costs of intestinal infectious diseases in the United States". Public Health Rep. 103 (2): 107–15. PMC 1477958. PMID 3128825.
- "Giardiasis Surveillance — United States, 2009–2010".
- Howard Backer (1992). "Wilderness acquired diarrhea". Journal of Wilderness Medicine. 3 (3): 237–240. doi:10.1580/0953-9859-3.3.237.
- (Backer 2007, pp. 1368–417)
- Johnson, Mark (2003). The Ultimate Desert Handbook : A Manual for Desert Hikers, Campers and Travelers. International Marine/Ragged Mountain Press. p. 46. ISBN 0-07-139303-X.
- Backer H (February 2002). "Water disinfection for international and wilderness travelers". Clin. Infect. Dis. 34 (3): 355–64. doi:10.1086/324747. PMID 11774083.
- (Backer 2007, p. 1411)
- "Steripen - Proven Technology". Hydro-Photon, Inc. 2008. Retrieved 2008-10-14.
- "Steripen - Microbiological Testing". Hydro-Photon, Inc. 2008. Retrieved 2008-10-14.
- "Household Water Treatment Options in Developing Countries: Solar Disinfection (SODIS)" (PDF). Centers for Disease Control and Prevention (CDC). January 2008. Retrieved 2010-07-31.
- (Backer 2007, pp. 1373–4)
- Sanders JW, Frenck RW, Putnam SD, et al. (August 2007). "Azithromycin and loperamide are comparable to levofloxacin and loperamide for the treatment of traveler's diarrhea in United States military personnel in Turkey". Clin. Infect. Dis. 45 (3): 294–301. doi:10.1086/519264. PMID 18688944.
- Gardner TB, Hill DR (2002). "Illness and injury among long-distance hikers on the Long Trail, Vermont". Wilderness & Environmental Medicine. 13 (2): 131–4. doi:10.1580/1080-6032(2002)013[0131:iaiald]2.0.co;2. PMID 12092966.
- McIntosh, Scott E.; Drew Leemon; Joshua Visitacion; et al. (2007). "Medical incidents and evacuations on wilderness expeditions" (PDF). Wilderness and Environmental Medicine. 18 (4): 298–304. doi:10.1580/07-WEME-OR-093R1.1. PMID 18076301.
- Taylor, D. N.; K. T. McDermott; J. R. Little; et al. (1983). "Campylobacter enteritis from untreated water in the Rocky Mountains". Ann Intern Med. 99 (1): 38–40. doi:10.7326/0003-4819-99-1-38. PMID 6859722. Retrieved 2008-10-16.
- Zell SC, Sorenson SK (1993). "Cyst acquisition rate for Giardia lamblia in backcountry travelers to Desolation Wilderness, Lake Tahoe" (PDF). Journal of Wilderness Medicine. 4 (2): 147–54. doi:10.1580/0953-9859-4.2.147.
- Derlet, Robert W.; James Carlson (2003). "Sierra Nevada Water: Is it safe to drink? - Analysis of Yosemite National Park Wilderness water for Coliform and Pathologic Bacteria". SierraNevadaWild.gov. Sierra Wilderness Education Project. Archived from the original on May 13, 2008. Retrieved 2008-10-15.
- Derlet RW (2008). "Backpacking in Yosemite and Kings Canyon National Parks and neighboring wilderness areas: how safe is the water to drink?". Journal of Travel Medicine. 15 (4): 209–15. doi:10.1111/j.1708-8305.2008.00201.x. PMID 18666919. Lay summary (May 2008).
- Derlet, Robert W. (April 2004). "High Sierra Water: What is in the H20?". Yosemite Association.
- Rose JB, Haas CN, Regli S (1991). "Risk assessment and control of waterborne giardiasis". Am J Public Health. 81 (6): 709–13. doi:10.2105/ajph.81.6.709. PMC 1405147. PMID 2029038.
- (Backer 2007, p. 1372)
- Backer, Howard D. (2007). "Chapter 61: Field Water Disinfection". In Auerbach, Paul S. (ed.). Wilderness Medicine (5 ed.). Philadelphia, PA: Mosby Elsevier. pp. 1368–417. ISBN 978-0-323-03228-5.
- "Sources of Infection & Risk Factors| Giardia | Parasites | CDC". www.cdc.gov. Retrieved 3 August 2018. | <urn:uuid:2b083f37-d8c5-48b8-9194-29649c352756> | CC-MAIN-2019-47 | https://en.wikipedia.org/wiki/Wilderness-acquired_diarrhea | s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496669755.17/warc/CC-MAIN-20191118104047-20191118132047-00418.warc.gz | en | 0.850557 | 4,940 | 3.390625 | 3 |
Porterville, California facts for kids
| City of Porterville | |
|---|---|
| Location | Tulare County, California |
| Incorporated | May 7, 1902 |
| City area | 17.679 sq mi (45.790 km2) |
| Land area | 17.607 sq mi (45.603 km2) |
| Water area | 0.072 sq mi (0.188 km2), 0.41% |
| Metro area | 4,839 sq mi (12,530 km2) |
| Elevation | 459 ft (140 m) |
| Population density (July 1, 2016) | 3,397.8/sq mi (1,311.86/km2) |
| Metro density | 94.946/sq mi (36.659/km2) |
| Time zone | Pacific (UTC-8) |
| Summer (DST) | PDT (UTC-7) |
| GNIS feature IDs | 1652779, 2411470 |
Since its incorporation in 1902, the city's population has grown dramatically as it annexed nearby unincorporated areas. The city's July 2014 population (not including East Porterville) was estimated at 55,466.
Porterville serves as a gateway to a vast recreational area that includes the Sequoia National Forest, the Giant Sequoia National Monument and Kings Canyon National Park.
During California's Spanish period, the San Joaquin Valley was considered a remote region of little value. Emigrants skirted the eastern foothills in the vicinity of Porterville as early as 1826. Swamps stretched out across the valley floor, lush with tall rushes, or "tulares," as the Indians called them.
Gold discovered in 1848 brought a tremendous migration to California, and prairie schooners rolled through Porterville between 1849 and 1852. Starting in 1854, Peter Goodhue operated a stopping place on the Stockton - Los Angeles Road on the bank of the Tule River. Wagon trains of gold seekers passed through the village, but other travelers found the land rich and remained to establish farms. A store was set up in 1856 to sell goods to miners and the Indians, who lived in tribal lands along the rivers. From 1858 to 1861 it was the location of the Tule River Station of the Butterfield Overland Mail.
Royal Porter Putnam came to the village in 1860 to raise cattle, horses and hogs. Putnam bought out Goodhue in 1860, turning the station into a popular stopping place and hotel called Porter Station. He bought 40 acres of land and built a two-story store and a hotel on the highest point of the swampy property, which is now the corner of Oak and Main. The town of Porterville was founded there in 1864. The town took its name from the founder's given name because another Putnam family lived south of town.
In 1862, 20.8 inches of rain fell in the area, causing the Tule River to change course. Putnam's acres drained, and he had his property surveyed, staking out lot lines and establishing streets. Settlers were offered a free lot for every one purchased. The food needs of a burgeoning California population gave the impetus for permanent development of the east side of the southern San Joaquin Valley. The long, dry, hot summers prompted irrigation of the land.
In 1888, the Southern Pacific Railway brought in the branch line from Fresno. The Pioneer Hotel and Bank were built by businessmen from San Francisco. The town incorporated in 1902, as miners moved into the area to extract magnetite ore, and the Chamber of Commerce was formed in 1907. A city manager–council form of government, along with a city charter, was adopted in 1926. The city has grown substantially from a community of 5,000 people in 1920. Agriculture, supplemented by the Central Valley Water Project, has been the major source of economic growth in the area. The city is the center of a large farming area noted especially for citrus and livestock.
Industry has become a significant factor in the development of the community. The Wal-Mart Distribution Center, National Vitamin, Beckman Instruments, Standard Register, Sierra Pacific Apparel, Royalty Carpeting, and other small companies have facilities in Porterville. Several large public facilities are also located here. These include the Porterville Developmental Center, Sequoia National Forest Headquarters, the Army Corps of Engineers Lake Success Facility, and the Porterville College campus of the Kern Community College District.
The Tule River Indian War of 1856
The Native Americans living in the foothills of the Sierra Nevada mountains were relatively undisturbed by early Spanish colonization. During the late 1840s and into the 1850s, once gold was discovered in California, miners began encroaching on traditional lands. Although a treaty was signed with the local tribes in 1851, defining a proposed reservation and providing for two hundred head of cattle per year, the US Senate failed to ratify it, with every member either abstaining or voting no.
In the spring of 1856, a rumor began to circulate that 500 cattle had been stolen by Native Americans. Upon further investigation, it turned out that a single yearling calf had been taken as a bridal gift. Mobs of armed settlers were nonetheless organized to counter the perceived menace despite the peaceful intentions of the Native Americans. These mobs began raiding Native camps and killing their inhabitants.
One mob, under the leadership of Captain Foster DeMasters, failed to dislodge a numerically superior Native encampment while wearing ineffective makeshift body armor consisting of cotton-padded jackets. Reinforcements were obtained from Keyesville, and the resulting force, now under the leadership of Sheriff W.G. Poindexter, was similarly repulsed. After falling back, the mob proceeded to wage a scorched-earth campaign, destroying Native American supply caches.
News of these engagements spread throughout California, exaggerating the degree of menace and misrepresenting its causes. Finally, in May 1856, army soldiers under the command of LaRhett Livingston assaulted the encampment and succeeded in driving off its defenders. The war's duration was approximately six weeks.
In retrospect, George Stewart wrote "Thus ended the Tule river war of 1856; a war that might have been prevented had there been an honest desire on the part of the white settlers to do so, and one that brought little glory to those who participated therein. The responsibility cannot now be fixed where it properly belongs. Possibly the Indians were to blame. Certainly the whites were not blameless, and it is too seldom, indeed, that they have been in the many struggles with the aboriginal inhabitants of this continent."
Historian Annie Mitchell later wrote in the Tulare County Historical Society bulletin (Los Tulares No. 68, March 1966): "Over the years it has been assumed that the Tule River War was a spontaneous, comic opera affair. It was not and if the Indians had been armed with guns instead of bows and a few pistols they would have run the white men out of the valley."
Porterville is located at (36.068550, -119.027536).
According to the United States Census Bureau, the city has a total area of 17.7 square miles (46 km2), of which 17.6 square miles (46 km2) is land and 0.1 square miles (0.26 km2), or 0.41%, is water.
Porterville is located on the Tule River at the base of the western foothills of the Sierra Nevada, in the easternmost section of California's Central Valley. In the foothills above Porterville is the man-made Lake Success.
Porterville, lying along the foothills of the Sierras at an elevation of 455 feet, is located on State Highway 65, 165 miles north of Los Angeles and 171 miles east of the Pacific Coast. The city has a strategic central location relative to major markets and ready access to major transportation routes.
Porterville is subject to earthquakes and aftershocks due to its proximity to the Pacific Ring of Fire. The geologic instability produces numerous fault lines both above and below ground, which altogether cause approximately 10,000 earthquakes every year. One of the major fault lines is the San Andreas Fault. A few major earthquakes have hit the Porterville area like the Kern County Earthquake of 1952 and the Bakersfield Earthquake of 1952 causing serious aftershocks and earthquakes in the area. All but a few quakes are of low intensity and are not felt. Most of the city are also vulnerable to floods. The San Joaquin Valley and metropolitan areas are also at risk from blind thrust earthquakes.
Porterville, CA, gets almost 13 inches of rain per year. The US average is 37. Snowfall is 0.01 inches. The average US city gets 25 inches of snow per year. The number of days with any measurable precipitation is 46.
On average, there are 271 sunny days per year in Porterville, CA. The July high is around 100.5 degrees. The January low is 35.6. The comfort index, which is based on humidity during the hot months, is a 54 out of 100, where higher is more comfortable. The US average on the comfort index is 44.
|Climate data for Porterville, California|
|Average high °F (°C)||57.9
|Average low °F (°C)||35.6
|Precipitation inches (mm)||2.17
|Avg. precipitation days (≥ 0.01 in)||7.2||6.9||6.3||2.8||1.6||0.4||0.3||0.3||1.7||1.7||3.9||5.4||38.5|
Owing to geography, heavy reliance on automobiles, and agriculture, Porterville suffers from air pollution in the form of smog. The Porterville area and the rest of the San Joaquin Valley are susceptible to atmospheric inversion, which holds in the exhausts from road vehicles, airplanes, locomotives, agriculture, manufacturing, and other sources. Unlike other cities that rely on rain to clear smog, Porterville gets only 13.00 inches (330.20 mm) of rain each year: pollution accumulates over many consecutive days. Issues of air quality in Porterville and other major cities led to the passage of early national environmental legislation, including the Clean Air Act. More recently, the state of California has led the nation in working to limit pollution by mandating low emission vehicles. Smog levels are only high during summers because it is dry and warm. In the winter, storms help to clear the smog and it is not as much of a problem. Smog should continue to drop in the coming years due to aggressive steps to reduce it, electric and hybrid cars, and other pollution-reducing measures taken.
As a result, pollution levels have dropped in recent decades. The number of Stage 1 smog alerts has declined from over 100 per year in the 1970s to almost zero in the new millennium. Despite improvement, the 2006 annual report of the American Lung Association ranked the city as the 11th most polluted in the country with short-term particle pollution and year-round particle pollution. In 2007 the annual report of the American Lung Association ranked the city as the 4th most polluted in the country with short-term particle pollution and year-round particle pollution. In 2008, the city was ranked the third most polluted and again fourth for highest year-round particulate pollution.
Porterville is also experiencing environmental issues due to California's extreme drought. Most of the city of Porterville has run out of their supply of groundwater, an unfortunate consequence of the entire city relying heavily on private wells. Citizens receive shipments of bottled water and bathe in government-provided public showers.
The 2010 United States Census reported that Porterville had a population of 54,165. The population density was 3,076.3 people per square mile (1,187.77/km²). The racial makeup of Porterville was 31,847 (58.8%) White, 673 (1.2%) African American, 1,007 (1.9%) Native American, 2,521 (4.7%) Asian, 64 (0.1%) Pacific Is lander, 15,482 (28.6%) from other races, and 2,571 (4.7%) from two or more races. Hispanic or Latino of any race were 33,549 persons (61.9%).
The Census reported that 53,018 people (97.9% of the population) lived in households, 207 (0.4%) lived in non-institutionalized group quarters, and 940 (1.7%) were institutionalized.
There were 15,644 households, out of which 8,177 (52.3%) had children under the age of 18 living in them, 8,032 (51.3%) were opposite-sex married couples living together, 2,962 (18.9%) had a female householder with no husband present, 1,315 (8.4%) had a male householder with no wife present. There were 1,424 (9.1%) unmarried opposite-sex partnerships, and 115 (0.7%) same-sex married couples or partnerships. 2,679 households (17.1%) were made up of individuals and 1,193 (7.6%) had someone living alone who was 65 years of age or older. The average household size was 3.39. There were 12,309 families (78.7% of all households); the average family size was 3.78.
The population was spread out with 18,154 people (33.5%) under the age of 18, 5,879 people (10.9%) aged 18 to 24, 14,266 people (26.3%) aged 25 to 44, 10,773 people (19.9%) aged 45 to 64, and 5,093 people (9.4%) who were 65 years of age or older. The median age was 28.8 years. For every 100 females there were 97.9 males. For every 100 females age 18 and over, there were 95.2 males.
There were 16,734 housing units at an average density of 946.5 per square mile (365.4/km²), of which 8,966 (57.3%) were owner-occupied, and 6,678 (42.7%) were occupied by renters. The homeowner vacancy rate was 2.9%; the rental vacancy rate was 6.3%. 30,016 people (55.4% of the population) lived in owner-occupied housing units and 23,002 people (42.5%) lived in rental housing units.
As of the census of 2000, there were 39,615 people, 11,884 households, and 9,174 families residing in the city. The population density was 2,828.4 people per square mile (1,091.8/km²). There were 12,691 housing units at an average density of 906.1 per square mile (349.8/km²). The racial makeup of the city was 49.8% White, 1.3% African American, 1.7% Native American, 4.6% Asian, 0.2% Pacific Islander, 32.7% from other races, and 4.8% from two or more races. Hispanic or Latino of any race were 54.5% of the population.
There were 11,884 households out of which 47.5% had children under the age of 18 living with them, 53.1% were married couples living together, 17.7% had a female householder with no husband present, and 22.8% were non-families. 19.1% of all households were made up of individuals and 8.3% had someone living alone who was 65 years of age or older. The average household size was 3.20 and the average family size was 3.62.
In the city, the population was spread out with 34.3% under the age of 18, 10.8% from 18 to 24, 28.0% from 25 to 44, 17.5% from 45 to 64, and 9.4% who were 65 years of age or older. The median age was 29 years. For every 100 females there were 96.4 males. For every 100 females age 18 and over, there were 93.0 males.
The median income for a household in the city was $32,046, and the median income for a family was $35,136. Males had a median income of $31,171 versus $23,737 for females. The per capita income for the city was $12,745. About 20.3% of families and 25.7% of the population were below the poverty line, including 33.7% of those under age 18 and 6.4% of those age 65 or over.
- The sheriff in Big Top Pee-wee (1988) received a report from Porterville about a windstorm approaching Pee-wee Herman's local town.
- It received the All-America City Award in 1994.
- In the science fiction novel The Santaroga Barrier (1968) Porterville is the nearest "normal town" to the fictional Santaroga situated in a valley 25 miles to the east of Porterville.
- In the science fiction novel Lucifer's Hammer (1977), this city is destroyed by the collapse of the dam at Lake Success.
- Porterville is the home of the Persian Lime
- Porterville has three structures that are listed in the National Register of Historic Places (NRHP); The First Congregational Church, US Post Office- Porterville Main, The Zalud House Museum.
Highways and freeways
California State Route 65, known as The All American City Highway or Porterville Freeway, is a major north-south freeway that heads north to Lindsay and south to Bakersfield. California State Route 190, is a major east west freeway in Porterville that heads west to California State Route 99 and east bypassing East Porterville to Springville.
- (CR J15) – Porterville
- (CR J26) – Porterville
- (CR J27) – Porterville
- (CR J28) – Porterville
- (CR J29) – Porterville
- (CR J37) – Porterville
- (CR J42) – East Porterville
The Porterville Transit operates environmentally-friendly and convenient public transportation to Porterville and the surrounding communities. Porterville COLT Paratranit service designed for transit riders with disabilities that prevent them from using regular bus services. Porterville Transit and COLT services are provided within the city limits and to designated unincorporated urban areas of the county, including "county islands" within the city limits.
The Tulare County Area Transit (TCaT) provides the public transit services between Porterville and smaller communities throughout the greater Porterville Area. Service includes Fixed Route and Demand Responsive services that are offered Monday through Saturday
- (IATA: FAT, ICAO: KFAT, FAA LID: FAT) Fresno Yosemite International Airport, owned by the City of Fresno; serves the San Joaquin Valley.
- (IATA: BFL, ICAO: KBFL, FAA LID: BFL) Meadows Field Airport, also known as Kern County Airport #1, serves the South Valley and the Greater Metropolitan Bakersfield.
- Porterville Municipal Golf Course
- River Island Country Club
- Eagle Mountain Casino
- Porterville Historical Museum
- Zalud House
- Porterville off- Highway Vehicle Park
- Rocky Hill Speedway
- Barn Theatre
- Frank "Buck" Shaffer Auditorium (Porterville Memorial Auditorium)
- Deenie's Dance Workshop
- Porterville Community Strings
- Monache High School Band
- Porterville Panther Band
- Granite Hills Grizzlies Band
- The Porterville Marketplace
- Riverwalk Marketplace
- Mainstreet Boutiques
- Special Occasion Gifts
- Fashion Network
- Earth Angel
Festivals and events
- Band-O-Rama (November)
- Sierra Winter Classic Livestock Jackpot (January
- Orange Blossom Klassic Livestock Jackpot (February)
- Iris Festival (April)
- Porterville Celebrates Reading Fair (April)
- Springville Rodeo (Last Full weekend in April)
- Porterville Fair (May 15–19)
- Springville Apple Festival (October)
- Pioneer Days & Rib Cook-Off (October)
- Annual Veterans Day Parade (November 11)
- Annual Christmas Children's Parade
- Porterville Municipal Pool
- Sequoia National Forest
- Sequoia National Monument
- Tule River Indian Reservation
- Lake Success
- Golden Trout Wilderness Pack Train
- Balch Park Pack Station
- Tule River
- Bartlett Park
Porterville has two sister cities,
- [http:// Official website]
Porterville, California Facts for Kids. Kiddle Encyclopedia. | <urn:uuid:6dde7020-bbef-424c-9fd7-03ecd090a2d0> | CC-MAIN-2019-47 | https://kids.kiddle.co/Porterville,_California | s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496669225.56/warc/CC-MAIN-20191117165616-20191117193616-00338.warc.gz | en | 0.948203 | 4,440 | 3.25 | 3 |
Macclesfield Canal construction
© Copyright 2002 Graham Cousins and the Railway & Canal Historical Society.
The Act empowered the Company to raise the money required for building the canal and its associated structures this sum was to be a maximum of £300,000. The money was to pay for the costs of obtaining the Act of Parliament, for the surveys, plans and estimates, and for the construction and maintenance of the canal. The sum of £300,000 was to be divided into 3,000 shares of £100 each. The estimated cost of building the canal and its associated works was put at £295,000. Of this, £260,900 had already been subscribed by the time of the Act. It was enacted that the whole of the sum of £295,000 had to be subscribed before the powers given in the Act could be put into force. The Company was enabled to raise a further sum of £100,000 for the completion of the canal by raising a mortgage upon the credit of the navigation.
The tolls which the Company could charge for the carriage of goods on the canal were set out in the Act as follows:
For every ton of sand, gravel, paving stones, bricks, clay, coal for burning lime, limestone, and rubble stone for roads – 1d per mile.
For every ton of ashlar stone, slate, flags, spar, coal (except for burning lime), and other minerals – 1½d per mile.
For every ton of timber, lime, goods, wares, and all other merchandise, articles, matters, and things not mentioned above – 2d per mile.
Tolls were payable to the next quarter of a mile travelled and to the next quarter of a ton as loaded. The Company was required to set up posts every quarter of a mile along the canal – where such a post was not fixed the Company would forfeit up to £10. The Company had to erect boards listing the tolls at all the places along the canal where tolls would be charged or collected.
The first General Meeting of Canal Company s hareholders took place on Thursday 25 May 1826 at the Macclesfield Arms Hotel at 12 noon.35 Sir H. M. Mainwaring was the Chairman. The business of the day consisted mainly in the appointment of officers of the Company. Mr William Cririe was elected as Clerk. The next appointment to be made was that of Treasurer. The Rev Edward Stanley proposed Mr William Brocklehurst and this was seconded by John Ryle. A counter proposal for this post was a Mr Edward Smythe, his name being put forward by John Daintry and a Mr Jones. However, it was felt that a banker would be the best person to appoint as Treasurer since no salary would be paid. William Brocklehurst was finally appointed without opposition, but it was then found that he wished his brother, Mr Thomas Brocklehurst, to be appointed instead. Thomas Brocklehurst was proposed as Treasurer by Mr Stanley and appointed. Mr Cririe explained the work of the Committee to the assembled shareholders – covering the early stages when the Parliamentary Bill was being promoted to the present time when they were met to put the Act into force. The negotiations with the Trent & Mersey Canal Company were detailed. The Trent & Mersey Company was a powerful opponent of the Macclesfield Canal project and it won the right to excavate the last mile of the canal and to take the tolls on that length when it was completed. The cost of obtaining the Act, including Engineer's, Surveyor's and other fees, was said to be between £5,000 and £6,000.
Work creation scheme
In October 1826 an editorial in the Courier raised concerns about the lack of employment for the poor during the coming winter. It suggested that an application should be made by the Poor Law Overseers of the town to the Committee of Proprietors to contract with them for the cutting of that part of the canal which passed through the Macclesfield area.36
During that same month the Canal Company was advertising for contractors to start excavating the canal between Marple and the head of the flight of locks at Bosley, a length of about sixteen miles. The work was to be let in five lots. Plans and specifications were to be ready for inspection at the Macclesfield Arms Hotel from Wednesday 25 October until Monday 13 November. William Crosley, the Company's Engineer, would be available during this period to provide further information.37, 38 People who were interested in contracting for the work were invited to send tenders for each lot to Mr Cririe, no later than 13 November 1826.
The Committee met on Wednesday 15 November to discuss the various tenders which had been received. There was evidently so much difference between the various tenders that they were referred to Thomas Telford for a final decision. It was estimated that the work, when it began, would bring £500 per week into the town of Macclesfield. The contractors for the Marple to Bosley section of the canal were decided upon at a meeting of the Committee on 27 November and it was estimated that within a month about 800 ‘navigators' would be employed.39
The first sod
The ceremony of 'turning the first sod' was performed at Bollington on Monday 4 December 1826 by John Ryle. The navvies evidently celebrated the occasion in their usual manner because it was reported that 'the navigators, upon the occasion, were regaled to their heart's content with libations of the barley-beer '. No doubt little progress was made that day with the digging! The Courier also reported that Mr William Wrigg of Macclesfield was in treaty, on behalf of the Poor Law Overseers, for a part of the lot that was to be cut over Macclesfield Common, contracted for by a Mr Jenkinson. The newspaper felt that if Mr Wrigg was successful it would be a great benefit to the town.40
In April 1827 the Courier reported that 'The Macclesfield Canal is in a state of great forwardness, under the very able direction of Mr Crosley, the Engineer. We understand the line will be shortened nearly two miles by the skilful management of that gentleman.'41 During the week of 2 July the Committee inspected the work being carried out on the canal and were very pleased with the progress being made.42 The second Annual General Meeting of the Company was held on Thursday 19 July 1827. The Committee was re-elected, with the additions of Messrs Randle-Wilbraham, Kinnersley, and the Rev F. Brandt. The business of the day was followed by dinner with Mr Oldknow as Chairman.43 A letter from James Potter to Thomas Telford dated 19 October 1827 requested that he inspect the new Harecastle Tunnel and also informed him that the cutting of that part of the Macclesfield Canal which was being built by the Trent & Mersey Canal Company had begun.44
The Committee surveyed the whole line of the canal during the two days Thursday and Friday 22 and 23 May 1828. They were able to travel by boat from Marple to High Lane. Again they expressed their satisfaction with the work being carried out.45 Thursday 17 July 1828 was the day of the third Annual General Meeting of the Macclesfield Canal Company. The accounts were presented to the forty or so shareholders in attendance and Crosley made his report on the progress of the work.46, 47 The line between Marple and the locks at Bosley had been split into five sections. The first section ran from the junction with the Peak Forest Canal to Lyme Hanley, and had been contracted to Messrs Seed & Son. A good length of this section was now navigable. The embankment at Middlewood was in this section and it was estimated that 220,000 cubic yards of earth would be required for its construction. The second section was 3¼ miles in length and included two embankments, one at Hagg Brook and the other between Adlington and Pott Shrigley. The third section, contracted to William Soars, ran to Tytherington. This was a distance of 2¾ miles and included the large embankment at Bollington. This was also estimated to require 220,000 cubic yards of earth. However, because of the rocky nature of the ground, it had been possible to reduce the width of the base by making the slopes more perpendicular. It had been calculated that 150,000 cubic yards of earth would now be needed. The fourth lot terminated at Sutton, being three miles in length, and was also contracted to Messrs Seed & Sons; some two miles of this section were nearly ready for the water to be let in. The fifth section terminated at the top of the locks at Bosley, being about four miles in length. Two miles were full of water and a further 1½ miles was ready for filling. Messrs Jennings, Jenkinson & Otley were the contractors on this length. Of the summit level 12¾ miles, out of 16¼, were nearly ready to contain the full depth of water. Forty arched stone bridges, six swivel bridges, five aqueducts and large culverts and fifty-one smaller culverts had been completed. There remained to be built four arched bridges, the arch of the road aqueduct at Bollington, part of the large culvert at Middlewood and eight swivel bridges, five of which were in progress.
Crosley then came to report on the rest of the line to its junction at Harding's Wood with the Trent & Mersey Canal. The first section included the locks and terminated at Buglawton, a distance of three miles. This length was contracted to Messrs Nowell & Sons. Work on the locks had not yet started, the contractors having been occupied by opening a quarry on Bosley Cloud for the required stone, and building a railway from the Cloud to the line of the canal. The second section on this level was some three miles in length and was contracted to William Soars. There were to be four embankments in this section, three requiring 180,000 cubic yards of earth, with the fourth at Dane Henshaw (Dane-in-Shaw) needing 240,000 cubic yards. The third section was contracted to Messrs Pearce & Tredwell and was just over four miles in length.
In August 1828 the Company was advertising contracts to build two reservoirs near Macclesfield (Sutton and Bosley), together with associated feeders to and from the reservoirs, new brook courses and associated works. Plans and specifications could be inspected at the Macclesfield Arms Hotel between Tuesday 16 September and Tuesday 23 September. Sealed tenders for these contracts were to be sent to Mr Cririe no later than 23 September 1828.48
The T&M length, 1829
On 20 March 1829 Thomas Telford wrote to James Caldwell reporting on an inspection which he had made of Knypersley Reservoir, Harecastle Tunnel, and the part of the Macclesfield Canal which was being built by the Trent & Mersey Canal Company.49 Telford was generally satisfied with the work being carried out, although he objected to the use of 'dense blue clunch ' to cover the clay puddle. He ordered that this should be removed and a lining of gravel be laid. At one point on the line water was found to be getting below the clay lining. It had been proposed to pave the canal bed with flat stones below the clay lining in order to prevent the water from rising. Telford thought this to be an uncertain and expensive operation and advised that the water should be cut off by drain xxx concluded by saying that he had 'fully explained these matters to the Superintendent and the contractor Mr Pritchard '.
A letter from William Faram, on behalf of James Caldwell, to Thomas Telford, dated 6 May 1829 indicated that problems with water still existed on this length.50 Faram advised Telford that they had found water in the sand when they had started excavation. The clay puddle would not stay in place so they had covered some seven yards with stone. This appeared to be successful with the water seeping up between the stones, but bringing little sand with it. It was also planned to face the sides of the canal with stone at this point.
Committee inspection, 1829
The Committee carried out the annual inspection of the canal on Tuesday 30 June 1829.51 They were especially pleased with the work on the locks at Bosley, and during the day the first stone of the aqueduct over the river Dane at Bosley was laid by Randle Wilbraham, the Chairman of the Committee. The ceremony was watched by numerous spectators and upwards of 400 of the workmen.
The Annual General Meeting of the Company was held on 16 July 1829.52 Things were going well and it was estimated that a saving of up to 10% would be made on the cost of construction. This arose in part because only two reservoirs would be needed, rather than the five which had been originally projected. Crosley's report to the Company, however, did highlight two or three areas of concern, namely the embankment at Bollington, the aqueduct over the river Dane at Bosley and the embankment at Dane in-Shaw. His concern about the embankment at Bollington is quite evident from this extract from his report.53
A slip has taken place in this embankment, which has not only retarded the work, but has injured the masonry of the culvert. The cause appears to be this – Finding that the embankment would be chiefly composed of stone, I concluded it would stand upon a much narrower base than was intended by the original contract, and that, by making the slopes twelve inches to a foot, instead of twenty-four inches to a foot, a saving of about £1,700 might be effected; and, I was the more induced to recommend this alteration to the Committee, as, in the event of its not answering, I considered that the original plan might be resorted to, and the work completed at a sum not exceeding the amount of the original contract. When I found the work giving way, a stop was put to it for about six weeks, in order that the effect produced by time and the heavy rains which were then falling, might be seen. The work was resumed several weeks ago; and, as no further slip has taken place, I am of opinion that the embankment will stand without any extension of the base. But, if I should be deceived upon this point, and it should be found desirable, either as a matter of necessity, or of prudence, to adopt the original contract, the extra work will be performed for the sum which has been deducted from the contract, so that no additional expense will fall upon the Company. With respect to the injury of the culvert to which I have before alluded, an inner arch has been constructed in that part; which, if continued entirely through, will render it quite safe. Should the work stand without any further alteration, about 20,000 yards will complete the embankment.
The report goes on to document his thoughts about the aqueduct at Bosley and the completion of the canal as follows:
The masonry of two locks is completed, and two others are in progress. The large aqueduct over the river Dane is included in this division. In consequence of the foundations proving very unsound, it was thought advisable to consult Mr Telford as to the propriety of a deviation from the original plan. Mr Telford being of opinion, after considering the circumstances, that the sort of aqueduct originally intended to be adopted could not be relied upon, and having suggested various alterations, the work is now proceeding according to the plan drawn out by him; by which all doubts that were entertained of the safety of the aqueduct are now removed. By the terms of the contracts for the execution of the lower line of the canal, that portion of it between the Trent & Mersey Canal and Congleton was stipulated to be finished by 1 January next. This I have no doubt will be the case. The remaining part of the line of this level, which includes the very great and important embankment at Dane in-Shaw, the different locks, and the aqueduct over the Dane, was to be completed by 1 January 1831; but in consequence of the unavoidable delay occasioned by the deficiencies in the foundations for the aqueduct over the Dane, and the necessity of making some alterations, the time for completing the aqueduct has been extended to 1 May following; and I feel quite satisfied that every part of the works will be completed within the periods that have been fixed upon. The locks are executing by Messrs Nowell & Sons, the contractors, in a very superior manner, and with respect to the whole line, with the exceptions which have been pointed out, the whole of the work is going on without accident; and the different contractors are executing their respective portions in a manner perfectly satisfactory. The reservoir at Bosley is proceeding rapidly: the pipes are laid for taking the water through the embankment, and the masonry at the end of the same is in progress. In consequence of the difficulty of finding a good foundation for the puddle, in the centre of the embankment, I have concluded to have a lining puddle under its seat, and along the bottom of the reservoir, till it can be tied into firm and watertight ground. The forming of the feeders has also been commenced. The reservoir, together with the feeders, will be completed by the time they are required for the use of the canal.
After the business of the meeting was concluded the 'proprietors sat down to a sumptuous dinner, provided in Mrs Foster's usual style of excellence '.
Telford's inspection, 1829
Thomas Telford inspected the line of the canal during the autumn of 1829 and made the following comments on its construction.54
The canal is in general laid out with much judgement, with very proper curves for navigation, except near the Macclesfield School land, where there is an inconveniently quick bend which will be required to be altered. The canal banks and towing path have also been very properly executed excepting in some places, where in crossing valleys by embankments of sand which have been much injured by the late heavy rains, should be remedied by covering the surface with a coating of clay. The forty seven arched stone bridges and eleven swivel bridges are all judiciously placed and well executed with their approaches properly protected. The locks at Bosley are executed in a very perfect manner, and when complete will, for materials and workmanship, exceed any others in the kingdom and be a great credit to the Canal Company. The materials of which the aqueduct over the river Dane is composed are singularly good, and the workmanship equally so. All the operations of the contractor for this aqueduct and the locks before mentioned are carried on in a most masterly manner. A few minor improvements are suggested for securing the foundations of the aqueduct, the culverts and the embankments. The river arch at Sutton will require to be repaired by one of greater strength and curvature, placed upon a better foundation. The culvert at Bollington to be altogether abandoned, and a new channel for the river constructed through the free stone rock on the north side by tunnelling, from immediately below where the two brooks meet at Bollington village, and rejoining the present river channel on the lower side above the mill. The road archway and the mill culvert near it are perfectly strong, and when the new channel is constructed and the present culvert filled up, the embankment will be secured.
Telford was obviously pleased with the workmanship and materials of the locks at Bosley and the nearby aqueduct over the river Dane. These thoughts can still be appreciated many years later.
On 26 June 1830 the Macclesfield Courier carried an article which again described the benefits which the canal would bring to the district, and commented upon the building of a corn mill by the banks of the canal. Macclesfield had been dependant upon other towns for its supply of flour and it was suggested that these millers had made a good living at Macclesfield's expense. The money that had previously been spent in other towns would at last be retained and circulated within the immediate neighbourhood.55 The Annual General Meeting of 1830 had been scheduled for Thursday 15 July. However, King George IV died on 26 June and his funeral was held on that day. The meeting was adjourned and was subsequently held on Saturday 24 July 1830. The arch of the aqueduct over the river Dane was completed on Saturday 23 October 1830. The Macclesfield Courier of 30 October 1830 gives some interesting details of its construction the arch is a semi circle of forty two feet span and it springs twenty-four feet from the bed of the river. It contains 10,212 cubic feet of stone. 'The very superior stone of which the aqueduct is composed, and also the twelve adjoining locks, which are now complete, have been procured from the adjacent mountain ' (Bosley Cloud).56
The canal was eventually opened throughout on 9 November 1831 – a discussion of the events of that year, the opening ceremony and the early trading days of the canal form the basis of a subsequent article.
Webmaster's addition - one day in 2006 a couple from New Zealand visited the Discovery Centre at Clarence Mill, Bollington. They said they had called on chance and did we know where in Bollington the first sod of the canal had been cut. I didn't, but thought it likely that it had been at the company wharf opposite Adelphi Mill. They said their interest was raised by the fact that the gentleman's great, great, great grandfather was William Wrigg! Mr & Mrs Wrigg, how nice to meet you! Thanks very much for visiting us. | <urn:uuid:39a7b491-ee0b-4403-b36f-71b1c321527f> | CC-MAIN-2019-47 | https://www.macclesfieldcanal.org.uk/construction | s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496667262.54/warc/CC-MAIN-20191113140725-20191113164725-00258.warc.gz | en | 0.983656 | 4,541 | 2.578125 | 3 |
A recent study published in the Lancet finds Millennials to be at much higher risk for cancer than their parents and grandparents ever were.
Those born between 1981 and 1997 appear to be at increased risk of cancer of the:
Study authors cite obesity as the main culprit.
The CDC reports the prevalence of obesity was 35.7% among young adults aged 20 to 39 years.
In 2016 the International Agency for Research and Cancer listed multiple cancers in which obesity plays a role. They include the above as well as breast, ovarian, and esophageal cancer.
Studies have found obesity to alter hormone levels which could incite cells to rapidly divide. Fat acts as if it's another organ, inducing signals that can affect insulin, sugar and fat metabolism and can induce inflammation when it accumulates around other organs.
Moreover it could be an associative relationship in which those who are obese may have poor diets and exercise habits which are linked to cancer as well.
In the above study, non-obesity related cancer, such as lung, appears to be at less risk for millennials as many are saying no to tobacco products.
However, other causes could be at play such as radiation exposure. The verdict is not yet out on vaping either.
Study authors state:
Skeletal muscle insulin resistance is the root cause of reactive elevated insulin levels. Muscle utilizes fatty acids for fuel, rather than glycogen converted to sugar. We only have several thousand calories stored in muscle as glycogen, but many hundreds of thousands potentially stored as fatty acids in adipose tissue, both white and brown fat under skin and around internal organs. This adaptive change probably arose several hundred thousand years ago, to survive famines and ice ages. It is the famine response triggered by mineral depleted soils, high sugar and carbohydrate diets, and stress hormonal responses.
Obese individuals have not only elevated insulin levels fasting and after meals, but counter-insulin hormones, glucagon and cortisol. The pancreas makes insulin but also glucagon, that promotes liver conversion to sugar, especially overnight. Cortisol pushes sugar upward countering surging insulin, and comes from the adrenals, showing a mild stress hormone response to swinging blood sugar levels.
The solution to Obesity and Diabetes is dietary, metabolic, specific kinds of exercise. Paleo and Ketogenic diets lower insulin stimulation, with low glycemic index dietary foods. Supplements that lower insulin resistance in muscles and other target organs to a lesser extent, lowers insulin output. Special supplements lower Glucagon and Cortisol, counter-insulin hormones, and thus lower sugar production. Excess glucose production is converted to fatty acids, that link on glycerol to make triglyceride or fat, that is exported from the liver to adipose tissues.
Supplements that reverse type 2, or insulin resistant hypothyroidism can raise metabolic rate, suppressed by elevation of Reverse Thyroid Hormone RT3, that can be lab evaluated. Core temperature will return to normal when Free RT3 levels are reduced. It can be monitored with an alcohol Geratherm or two digit Electronic oral thermometer, in the AM, before getting up, as muscle twitches will produce some heat to core temperature. The best exercise is non-traumatic vibration platform exercise with resistance bands or small hand weights to boost metabolic activity and fat burning actions, release NO nitric oxide, and raise GH Growth Hormone levels.
Stay well with wise natural care of functional metabolic medicine
Dr. Bill Deagle, MD, AAEM, ACAM, A4M
The World Health Organization (WHO) finds the number of obese children in the world to be 10 times greater than what it was 4 decades ago.
They estimate currently 50 million girls and 74 million boys are obese worldwide.
Back in 1975 only 11 million children worldwide were obese. Now the number sits at 124 million.
True, population has grown since then, but the percentage of children obese is exploding -- 19% of girls and 22.4% of boys in the US are considered obese.
Adult obesity is skyrocketing as well. In 1975 there were 100 million obese adults worldwide. This jumped to 671 million in 2016 and doesn’t include the 1.3 billion “overweight” adults.
The Center for Disease Control and Prevention (CDC) states the following:
Obesity is defined as having excess body fat. Overweight is defined as having excess body weight for a particular height from fat, muscle, bone, water, or a combination of these factors. Body mass index, or BMI, is a widely used screening tool for measuring both overweight and obesity. BMI percentile is preferred for measuring children and young adults (ages 2–20) because it takes into account that they are still growing, and growing at different rates depending on their age and sex. Health professionals use growth charts to see whether a child’s weight falls into a healthy range for the child’s height, age, and sex.
Children with a BMI at or above the 85th percentile and less than the 95th percentile are considered overweight.
Children at or above the 95th percentile have obesity.
We’re successfully fighting the war on tobacco. Adults especially can’t turn to a stick of nicotine as easily as they once could to curb their appetite. Teen smoking is down as well, so their appetites may be up.
We like fast food. Its cheap, yummy and convenient. For 99 cents you can get a small burger that is served to you in a matter of minutes and can be eaten before your next meeting or class. Fast food contains excess calories, fat and preservatives that our body doesn’t need.
We eat too quickly. The speed at which we eat may affect our metabolism. Eating too quickly prevents a satiety signal from reaching the brain, hence we will gulp down more food than is needed. For more on this read here.
We don’t move around as much. We can all agree that children and adults these days don’t play outside as much as we did in previous generations. And even if we did get some exercise in each day during PE or at the gym, we lose much of the ground gained when we sit on our computers at night for hours on end.
More hormones are in our food. Hormones such as steroids and recombinant bovine growth hormone (rBGH) that enhance food production in our food-producing animals may affect our metabolism.
Sugar isn’t a treat anymore, it’s considered a food group. In the 70’s if you got dessert one night at dinner it would be a rare treat. Today kids have dessert at lunch and even breakfast has sugar levels over flowing the cereal bowl. Excess sugar leads to fat storage.
Our portions have gotten bigger. Remember when the Quarter Pounder came out in the early 70’s and we thought it was the biggest burger ever? Now people will eat two in one sitting.
Below is a table showing the difference in portion sizes today vs. the 1950’s.
Make exercise not a choice but a daily necessity. Schools should have English class conducted on walks around the school rather than sitting in desks. A 30 minute workout should be a given every morning without excuses. We brush our teeth, we wash our hair, we gas up our truck, we exercise.
Eat fresh, avoid fast food. The more junk food the more junk in your trunk. Avoid preservatives and processed foods. Your body was designed to eat the basics. Give it what it needs.
Eat slowly. No need to chow down on the run. If you’re in a hurry then eat half the sandwich as save the rest for later. Which brings us to…
Eat smaller portions. Get rid of the platters you call plates these days and eat your dinner off of a saucer dish. You’ll still fill up your tummy.
Swap vegetables for carbs. It’s healthier, filling, and helps you poop.
Just say NO to sugar. This will be a hard one for me but if you do it, I will.
Daliah Wachs is a guest contributor to GCN news. Doctor Wachs is an MD, FAAFP and a Board Certified Family Physician. The Dr. Daliah Show , is nationally syndicated M-F from 11:00 am - 2:00 pm and Saturday from Noon-1:00 pm (all central times) at GCN.
Fast food has become the staple of many American and European diets and we’ve seen obesity rise. True more people take public or private transportation to work over walking, and many have given up smoking every time they had a hunger itch, but the most popular reason for our waistline increase is fast food. But is it the caloric content of the fast food that’s fueling the obesity epidemic, or the speed at which its ingested?
What is Fast Food?
According to the Merriam-Webster dictionary, Fast Food is “food that can be prepared and served quickly”. A burger, shake and fries is considered fast food but so is a take away salad or sandwich. It’s implied that fast food is a meal that is not made fresh but made previously and preserved such that it can taste fresh when needed to be served.
According to CalorieKing, a McDonald’s Big Mac is 540 calories. A large order of fries is 510 calories. So a meal over 1000 calories is obviously not the healthiest choice.
But let’s return back to the sandwich alone. While a Big Mac is 540 calories, CalorieKing finds Chick-Fil-A’s Cobb Salad (without dressing) 500 calories. Bob Evans Restaurant’s Cobb Salad is 516 calories.
Now on the same site a Tuna Salad Sandwich (5 oz) w. mayo, 3 oz Bread is 679 calories.
So are we becoming obese eating cobb salads and tuna salad for lunch just as one would eat a Big Mac? We don’t know since people don’t study cobb and tuna salad eating consumers. My guess is no.
Yes, and so fast that I believe it could be messing with our metabolism.
Think back to caveman days. We had to chew. And not on a soft sesame seed bun, but chew our meat. Nuts and vegetables took a chewing as well. Food was more scarce so it was savored and meals weren’t on the run while on a subway or at a stop light in one’s car.
Previous studies have shown that eating slowly and chewing it multiple times allow the body’s signals to trigger the satiety sensation sooner, hence one would eat less.
So gulping down a burger in 5 bites could be accomplished prior to the brain receiving the signal that it should be satisfied.
Now the metabolism issue. Fast food could contain sugars, fats and preservatives that alter metabolism. But eating on the run could cause metabolism issues in and of itself.
When a body senses that the food source is short-lived, unpredictable, and coming at a speed preventing proper absorption of nutrients, it may slow down metabolism to allow the body to make the most of what it has. Eating a meal slow and methodical may be the most successful way to not only feel full but to eat less and lose weight.
I suggest a study be done looking at two groups of people eating the same food with the same caloric content but differing on the speed at which they eat it.
I suggest to you all to take an extra 15 minutes to complete your meal than what you’re accustomed to and determine if you see results after a few weeks.
Of course avoiding fast food would be the most beneficial for our weight but if you must eat fast food, eat it slowly.
LearnHealthSpanish.com / Medical Spanish made easy.
Daliah Wachs, MD, FAAFP is a Board Certified Family Physician. The Dr. Daliah Show , is nationally syndicated M-F from 11:00 am - 2:00 pm and Saturday from Noon-1:00 pm (all central times) at GCN.
Obesity is an epidemic in America. Overall, 38 percent of U.S. adults are obese and 17 percent of teenagers are obese, the Center for Disease Control reported in 2016. More than two-thirds of Americans are at least overweight. There is a difference between obese and overweight, though.
The obese are less likely to be physically active or are physically unable to be physically active, which is why this complete nutrition guide is for the immobile. You can lose weight without exercising. Use the following tips to start losing weight without knowingly altering your calorie intake and without exercising.
When overcoming obesity, you have to start somewhere, and if you have trouble moving, you have to start with the way you eat. I’m not talking about a diet or counting calories. There are things you can do before, during and after consuming food that will help keep you from overeating.
America’s obesity problem stems from increases in portion size since the 1980s, and those portions continue to grow as body weights increase. It’s corporate food taking advantage of an addiction it created, much like the tobacco industry. Don’t be a pawn in their game.
A serving of meat is three ounces, which is the size of a bar of soap. A hamburger serving is the size of a hockey puck. A serving of pasta is the size of your fist. A serving of vegetables is the size of a baseball, and a serving of fruit is the size of a tennis ball. A serving of peanut butter is the size of a ping pong ball. If you guide your portions based on the recommended serving sizes, chances are you’ll end up consuming less and losing weight. If you use smaller plates, you’ll also end up eating fewer calories, and research shows that people eat less off red plates than white or blue plates.
Plan what you’ll eat for breakfast, lunch, dinner and snacks for an entire week. Having a plan keeps you from replacing potentially nutritious meals with fast food and puts you in control of your nutrition goals instead of some high school kid inside a drive-thru window. Having a plan will also help you avoid skipping meals, which isn’t good for you either. I log my meals for the next day using the MyPlate app from Livestrong. Logging meals a day in advance gives me an idea of my calorie, fat, sugar and sodium wiggle room for snacks throughout the day. It also helps me save money because I’m less likely to eat out when my meals are already planned.
Impulse buying contributes to the American obesity epidemic. If it never seems like the healthy foods are on sale, it’s because they seldom are. But if you enter the store with a list of foods you know you need, and you don’t waver from that list, you’ll leave having saved some calories and some money.
Drink a glass of water before every meal and more water in general. Are you drinking half a gallon of water each day? Chances are you’re not. The daily recommended water intake is eight, eight-ounce glasses. With one before every meal that’s just three per day, so be sure to stay hydrated. It’s literally the easiest way to lose weight.
Eating in the proper environment can help prevent overeating. A study conducted by a Cornell researcher found that people eating in fast food restaurants where the lighting was dimmer and the music more soothing ate 175 fewer calories than those who ate in the same place with the lights brighter and the music louder. And don’t eat in front of the television, as you’ll be more likely to forget how much you’ve eaten.
It takes 20 minutes for your stomach to send a message to your brain that you’ve eaten enough, so eat slower and you’ll be less likely to overeat. And chew your food thoroughly.
People who eat more in the morning and less at night lose more weight, and starting your day with warm food high in protein helps you feel fuller and less hungry later. Consume 350 to 400 calories and 25 grams of protein every morning and you’ll be on your way to losing weight.
Eggs are my go-to breakfast food because they’re cheap, quick to make, high in protein and are delicious when mixed with vegetables. Non-fat Greek yogurt is also a great breakfast food if you’re on the go. Mix it with granola and fruit for the perfect parfait.
If you have a blender, a plant-based, protein shake is a great way to get a serving of fruits and vegetables along with protein without the fat. I use hemp-based protein because it improves heart health, and BodyBuilding.com put together multiple lists of delicious shake recipes here and here so you never get sick of them. If you can push back your breakfast to later in the day, it lowers the amount of time you’ll have to eat throughout the day, too. This way you’re less likely to consume too much in one day.
Another reason obesity is a problem is the amount of time Americans have to actually sit down and eat. It’s very important that you sit down to eat, and that you actually eat more often. You just want to tone down the size of your meals and spread them out throughout the day so your metabolism stays high and you burn fat throughout the night. Eating smaller meals more frequently also keeps your appetite in check so you don’t wake up starving. Try to eat five smaller meals per day instead of three large meals.
Eating foods that satisfy your hunger is a key to eating fewer calories and overcoming obesity. WebMD put together a chart with examples of satisfying foods, as well as unsatisfying foods. I bet you can guess where Twinkies, Snickers, potato chips, cheese puffs and french fries fell. A turkey sandwich on wheat bread topped the list of satisfying foods, with oatmeal on its heels and bean burrito coming in third. A vegetarian refried bean burrito is an even healthier option.
Avoiding food before bedtime can actually keep you from losing weight. Just don’t overeat before going to bed and make sure you’re consuming protein instead of carbohydrates and fat. Your body burns more calories digesting protein than carbs and fat. Another protein shake is perfect before bed because it might boost your metabolism, according to a Florida State study. Adding a cup of rooibos tea could reduce stress hormones that trigger fat storage and hunger. Some of the best midnight snacks are turkey and cottage cheese, because they’re both high in protein and contain tryptophan, the amino acid that puts you to sleep on Thanksgiving. Speaking of sleep…
Fitbits wouldn’t monitor sleep quality if it wasn’t important to fitness. It’s incredibly important to get at least seven hours of sleep each night because people who get more sleep have the proper balance of leptin and ghrelin hormones that help control appetite. If you create a routine that you do an hour before sleep each night, like brushing your teeth and then reading for an hour, your body will be better prepared to sleep, and you won’t be counting sheep.
If you can’t fall asleep in 20 minutes, leave the bedroom and do something unstimulating. That doesn’t mean watch television or stare at your phone or tablet. Looking at a screen before bed not only makes it harder to fall asleep, but can make you more tired and less alert in the morning. If you still struggle sleeping or can’t seem to breathe while sleeping, get checked for sleep apnea. Oh, and the colder you can handle the bedroom while sleeping, the more calories you’ll burn in your sleep.
If you’re looking to overcome obesity and aren’t physically able to be physically active, week one of the “Overcoming Obesity” nutrition guide for the immobile can help you become mobile. We still won’t advocate exercise in week two, either, because you don’t need to exercise to lose weight. Week two of the “Overcoming Obesity” program will focus on nutrition -- not a diet.
If you like this, you might like these Genesis Communications Network talk shows: America’s Healthcare Advocate, The Bright Side, The Dr. Daliah Show, Dr. Asa On Call, Dr. Coldwell Opinion Radio, Good Day Health, Health Hunters, Herb Talk, Free Talk Live | <urn:uuid:b06b6921-68d7-4c5f-9bbf-8f3c3eb6c2ff> | CC-MAIN-2019-47 | http://www.gcnlive.com/JW1D/index.php/back-catalog/itemlist/tag/obesity | s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496669967.80/warc/CC-MAIN-20191119015704-20191119043704-00058.warc.gz | en | 0.944839 | 4,308 | 3.328125 | 3 |
Structure and Secretion of the Salivary Gland
The swallowing process moves food from the mouth to the stomach. Before this happens, the salivary glands secrete saliva into the oral cavity.
Saliva consists of 99.5 % water and 0.5 % dissolved substances such as sodium, potassium, and bicarbonate. The main function of the secretion of saliva is to lubricate food as well as to start the chemical digestion of carbohydrates and lipids.
The autonomic nervous system controls the secretion of saliva in the oral cavity, throat, and esophagus. Altogether, the salivary glands secrete 1,000–1,500 ml of saliva every day.
There are numerous salivary glands in and around the oral cavity.
A distinction is made:
Minor salivary glands
- Glandulae labiales in the lips’ mucosa
- Glandulae buccales in the cheeks’ mucosa
- Glandulae palatinae in the palatal mucosa
- Glandulae linguales in the tongue's mucosa
Major salivary glands
- Glandula parotidea (parotid gland, below and in front of each ear)
- Glandula submandibularis (submandibular gland, lower jaw)
- Glandula sublingualis (sublingual gland)
The minor salivary glands secrete saliva through short ducts directly or indirectly into the oral cavity. Their contribution to the total saliva is small.
The major salivary glands drain directly into the oral cavity, secrete most of the saliva, and comprise three pairs of glands.
Glandula Parotidea (parotid gland)
These glands are located inferior and anterior to the ears; they cover the M. masseter, reach the Arcus zygomaticus cranially, and border the Meatus acusticus externus dorsally. The largest part of the gland lies deep in the Fossa retromandibularis. The glands secrete saliva through the Ductus parotideus (parotid duct), which passes through the M. buccinator and opens into the vestibule opposite the second upper molar tooth.
The parotid glands consist solely of serous acini (parts of the gland that secrete a watery, serous fluid). Their saliva is thin and contains many proteins and enzymes. It also contains immunoglobulins, which the gland cells secrete as complexes and which act as an immunological defence against germs within the oral cavity.
Glandula Submandibularis (submandibular gland)
At the bottom of the mouth, we find the submandibular gland, which is located in a canal between the inner side of the mandible, M. mylohyoideus, M. hyoglossus, and Lamina superficialis fasciae cervicalis.
Its duct (Ductus submandibularis) runs on either side of the midline beneath the mucosa and opens into the oral cavity next to the lingual frenulum. The submandibular gland consists mainly of serous acini and a few mucous acini (parts of the gland that secrete mucus).
Glandula Sublingualis (sublingual gland)
The sublingual gland lies lateral to the M. genioglossus, on the M. mylohyoideus and above the submandibular gland. Its small ducts (Ductus sublinguales minores) open directly into the oral cavity on the floor of the mouth. It consists mainly of mucous acini and only a few serous acini.
Structure of the Teeth
Teeth (dentes) belong to the accessory digestive organs; they sit in the sockets of the alveolar processes of the upper and lower jaw.
The alveolar processes are covered by the gums (gingiva), which reach slightly into each socket. The inner side of each socket is lined by the periodontal ligament (periodontium), which consists of connective tissue fibers that hold the tooth in place.
A tooth consists of three main parts:
The visible part above the gingiva is called the crown. One to three roots sit hidden inside the sockets. The constricted connection between crown and roots is called the neck.
The main inner part of a tooth is the dentin, a calcified connective tissue that gives the tooth its form and consistency.
At the crown, the dentin is covered by enamel (substantia adamantinea), which consists of calcium phosphate and calcium carbonate. Due to its high concentration of calcium, enamel is harder than bones, making it the hardest substance in the body.
The main function of enamel is to protect teeth against wear from chewing and against acids which could dissolve the dentin. The dental cement is connected to the periodontal ligament and covers the dentin at the roots.
The dentin encloses a space within each tooth. The enlarged part of this space inside the crown is the pulp cavity, which is filled with dental pulp. The dental pulp contains blood vessels, nerves, and lymph vessels.
The root canals are narrow extensions of the pulp cavity that traverse the root of the tooth and open at its tip through the apical foramen, where nerves, blood vessels, and lymph vessels enter.
Over the course of their lives, humans have two dentitions, the primary dentition, and the permanent dentition. The first milk teeth (dentes decidui) bursts through after 6 months. Each month of a baby’s life, one or two more teeth appear until all 20 teeth are in place. Generally, all milk teeth shed between the age of 6 and 12 after which they are then replaced by the permanent dentition.
The permanent dentition (dentes permanentes) consist of 32 teeth, 16 in the upper jaw and 16 in the lower jaw respectively. Each half of the jaw contains 3 molars, 2 bicuspids, 1 canine, and 2 incisors.
The first molars (16, 26, 36, 46) burst through at the age of six years, the second molars at the age of twelve, and the so-called wisdom teeth (third molars) after the age of 17. Their function is to crunch and grind up food.
In many cases, the human jaw does not provide enough space behind the second molars to allow the wisdom teeth to burst through. When this occurs, the third molars remain embedded in the alveolar bone. Sometimes, this causes pressure and pain to the extent that they have to be surgically removed. Some people experience that the third molars atrophy or do not develop at all.
The Layers of the GI Tract
The wall of the GI tract consists of four layers each constructed in the same basic order—from the lower esophagus to the anus.
From the luminal surface to the outer surface, four layers can be distinguished:
The mucosa is the innermost layer of the GI tract and it is a mucous membrane consisting of an epithelial layer, a thin layer of connective tissue, and another thin layer consisting of smooth muscle cells.
The epithelial layer serves as a protective layer and with the help of the single-layered columnar epithelium, it contributes to secretion and resorption. Additionally, exocrine cells are located between the epithelial cells, which secrete mucus and liquid into the lumen (the inner space of a cavity) of the GI tract.
The submucosa or lamina propria (lat. lamina = thin, flat layer; propria = own) contains many blood and lymph vessels. The lamina propria enables nutrients which are reabsorbed in the GI tract to reach other parts of the body. They also contain the majority of cells of the mucosa-associated lymphoid tissue (MALT), which in turn contain cells of the immune system that help protect the body from diseases.
The muscularis mucosae is a thin layer of smooth muscle fibres that enlarges the surface for better digestion and resorption and further ensures that all reabsorbing cells get in contact with the substances in the GI tract.
The submucosa consists of loose reticular fibres which strike through numerous fenestrations (areolar connective tissue) and connect the mucosa to the muscularis. Its structure comprises of many blood and lymph vessels which contain absorbable food molecules. The special feature of the submucosa is the included Meissner’s plexus, a vast network of neurons. It can also contain glands and lymphoid tissue.
The muscularis of the GI tract consists of skeletal muscles as well as smooth muscle fibres. The arbitrary swallowing is generated by the muscularis of the mouth, the pharynx, and the upper and middle part of the esophagus. On the other hand, the deliberate control of defecation is possible due to the skeletal muscles found at the outer anal sphincter.
The smooth muscles are located in the rest of the GI tract and are characterized by an inner layer of circular fibres and an outer layer of longitudinal fibres. They help to degrade food, mix up digestive secretions, as well as move food along the tract. The second plexus of neurons, the so-called Auerbach’s plexus, is located between the layers of the muscularis.
The serosa or surface layer is a serous membrane of areolar connective tissue and single-layered squamous epithelium. The surface layer of the esophagus is comprised of just one single layer of areolar connective tissue (adventitia) with the absence of a serosa.
Specifics of the Wall Structure of the Esophagus, Small Intestine and Large Intestine
The esophagus has the same wall structure as all the other segments of the GI tract.
The mucosa (mucous membrane) of the esophagus consists of non-keratinized multi-layered squamous epithelium, lamina propria, and a muscularis mucosa (smooth muscle), which is responsible for the onward movement of food (peristalsis) to the stomach.
The muscularis consists of fasciated skeletal muscles (voluntary muscles) in the upper third, whereas the lower third consists entirely of smooth muscles. In between are voluntary as well as smooth muscle cells.
The small intestine is organized in 3 parts, namely the duodenum, jejunum, and ileum. The wall of the small intestine also consists of the same four layers as the larger part of the GI tract. However, the small intestine has some specifics.
The epithelial layer of the small intestine’s mucosa is built of single-layered cylindrical epithelium, which contains many different cell types, e.g.
- Resorption cells: responsible for digesting and reabsorbing nutrients in the bolus
- Goblet cells: responsible for secreting mucus
The small intestine mucosa contains many deep fissures.
Intestinal glands (crypt of Lieberkühn) are the cells furnishing the fissures and secreting the intestinal juice. Apart from resorption and goblet cells, intestinal glands contain Paneth cells and enteroendocrine cells.
The submucosa of the duodenum contains Brunner’s glands that secrete alkaline mucus to neutralize the stomach acid in the chymus.
The muscularis of the small intestine consists of two layers of smooth muscles.
- Longitudinal fibres—outer, thinner layer
- Circular fibres—inner, thicker layer
Apart from the main part of the duodenum, the serosa covers the entire small intestine.
Specifics of the Small Intestine
Special structural characteristics of the small intestine facilitate digestion and resorption—the so-called valves of Kerckring. They are mucosal and submucosal folds. They boost the resorption by extending the surface and allow the chymus to move helically instead of straight through the intestine.
The small intestine also has villi, which do some resorption and digestion and give the intestine mucosa its velvety appearance.
Apart from valves of Kerckring and villi, the small intestine also contains microvilli, extensions of the free membrane of the resorption cells, which build a fuzzy line (brush border) and reach into the lumen of the small intestine.
Specifics of the Large Intestine
Just like the esophagus and small intestine, the large intestine contains four typical layers: mucosa, submucosa, muscularis, and serosa.
The epithelium of the mucosa contains most notably resorption cells whose purpose is to reabsorb water and goblet cells that secrete mucus thus lubricating the bogus. The resorption and goblet cells are located in the intestinal glands (crypts of Lieberkühn), which are found in the entire diameter of the mucosa.
As opposed to the small intestine, there are no valves of Kerckring or villi. However, microvilli of the resorption cells for reabsorbing purposes are present.
The muscularis consists of the following layers:
- Outer layer of the smooth longitudinal muscles
- Inner layer of the circular muscles
One characteristic of the GI tract is the presence of longitudinal muscles, which are thickened into three well-visible longitudinal ligaments (Taeniae coli). These ligaments run across the whole length of the large intestine. Clonical contractions give rise to a range of pockets (haustra coli, singular: haustrum), which give the colon its puckered appearance.
Between the longitudinal ligaments, there is one single layer of smooth circular muscles.
Gastric mucosa and its glands’ Histology
Apart from some deviations, the gastric mucosa consists of the same four layers as the whole GI tract.
One single layer of cylindrical epithelial cells constitutes the surface of the mucosa. Their function is to secrete mucus. The mucosa consists of a lamina propria with areolar connective tissue and muscularis mucosae, which is characterized by smooth muscles.
Many epithelial cells are located in the lamina propria, forming columns with secretory cells. These are called gastric glands, which in turn are lined with many narrow canals, the so-called gastric pits. Each of these gastric pits is filled with the secretion of several gastric glands before the secretion flows into the lumen of the stomach.
The gastric glands contain three types of exocrine gland cells:
- Foveolar cells: They secrete mucus just like the mucous cells at the surface
- Gastric chief cell: Their main function is the secretion of pepsinogen and gastric lipase.
- Parietal cells: They produce hydrochloric acid and the intrinsic factor, which is essential for reabsorption of vitamin B12.
Gastric juice consists of the secretions of foveolar, parietal, and gastric chief cells. They produce 2—3 liters of gastric juice per day.
The G cell, an enteroendocrine cell which secretes the hormone, gastrin, into the blood, is also one of the gastric glands and is mainly located in the pyloric glands of the stomach antrum.
Three additional layers, namely the submucosa, muscularis, and serosa, lay beneath the mucosa.
The submucosa is characterized by areolar connective tissue.
The muscularis of the stomach has three layers of smooth muscles. An outer longitudinal layer, a middle circular layer, and an inner angular layer that is primarily limited to the corpus of the stomach.
The serosa consists of single-layered squamous epithelium and areolar connective tissue. It covers the stomach and is part of the peritoneum viscerale. At the small curvature of the stomach, it reaches into the liver.
Liver – Lobules and Glisson’s Capsule
The liver is the heaviest organ in a human body and is located below the diaphragm. At the bottom of the liver, there is a pear-shaped, 7—10 cm long pouch: the gallbladder.
The liver can be divided into two main lobes separated from each other by the falciform ligament:
- Right lobe
- Smaller left lobe
The falciform ligament spreads from the bottom of the diaphragm through the two lobes to the upper surface of the liver. Its function is to “fasten” the liver in the abdominal cavity.
The lobes of the liver consist of many small functional units, the so-called liver lobules. These lobules have a hexagonal structure and are made of specialized epithelial cells that are called hepatocytes. The cells are arranged in irregularly branched and linked panels having a vein at the center.
The liver lobules additionally contain highly permeable capillaries through which blood flows. They are called liver sinusoids.
Kupffer cells are found in the liver sinusoids and are responsible for phagocytosis. They destroy dead white and red blood cells and other bacteria as well as impurities that originate from the gastro-intestinal tract.
The hepatocytes secrete bile that reaches the small biliary canal and empty into the small bile ducts (ductus biliferi). The bile ducts create the large right and the left hepatic duct, which incorporates and leaves the liver as common hepatic duct (ductus hepaticus communis). Together with the ductus cysticus (cystic duct), it creates one single bile duct after the gall bladder, the so-called ductus choledochus.
Blood Supply of the Liver
The liver receives blood from two different sources:
- Blood rich in oxygen from the hepatic portal vein (A. hepatica)
- Blood poor in oxygen from the hepatic arteries (V. portae), including nutrients, pharmaceutical substances and possibly microbes and toxins from the gastro-intestinal tract
The hepatic portal vein and the hepatic arteries carry blood into the sinusoidal liver capillaries, where oxygen, nutrients and specific toxins are absorbed by the hepatocytes. Through the central vein, which ends in a liver vein, specific nutrients required by different cells are carried through the blood.
Glisson’s Capsule (Trias hepatica)
The Glisson’s capsule is located at the edge of the liver lobules and consists of the branching of the portal vein, the proper hepatic artery, and the bile duct.
Common Exam Questions on Oral Cavity and Gastrointestinal Tract
Solutions can be found below the references.
1. Which saliva gland serves as an immunological defence against germs in the oral cavity?
- Glandula parotideae (ear)
- Glandula submandibularis (lower jaw)
- Glandula sublingualis (sublingual)
- Glandulae buccales in the mucosal of the cheeks
- Glandulae palatinae in the mucosal of the pharynx
2. Which are the three special features of the mucosa of the small intestine help in the resorption of nutrients?
- Villi, microvilli, and Brunner’s glands
- Microvilli, Brunner’s glands, and haustra coli
- Valves of Kerckring, villi, and microvilli
- Valves of Kerckring, villi, and haustra coli
- Adventitia, Brunner’s glands, and haustra coli
3. Which of the following carries out the process of phagocytosis of dead white and red blood cells in the liver?
- Peyer’s patches
- Lobuli hepatis
- Kupffer cells
- Alveolar monocytes | <urn:uuid:15d298b0-6961-416c-816e-d803cb807074> | CC-MAIN-2019-47 | https://www.lecturio.com/magazine/oral-cavity-gastro-intestinal-tract/?appview=1 | s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496667767.6/warc/CC-MAIN-20191114002636-20191114030636-00018.warc.gz | en | 0.906584 | 4,309 | 3.71875 | 4 |
Some risks, however significant, are simple to understand. Homeowners and their insurance companies agree that smoke detectors and fire extinguishers reduce the risk that a house will burn down and lives will be lost. When it comes to firearms, however, both the problems and the proposed solutions are considerably more complex.
In the wake of recent highly publicized school shootings, school districts and legislators across the country have been working to determine how best to protect children and school staff from gun violence at schools. As part of this conversation, at least two states have enacted legislation in 2018 that would facilitate the carrying and use of firearms on school grounds,1 with Florida recently passing legislation allowing school personnel (excluding classroom teachers) to be trained and armed.2 At least 24 states across the country have policies that allow security personnel to carry weapons in schools, and at least nine states have policies that allow other school employees to do the same.3 But how could these laws affect school districts’ insurance policies and coverage? It’s not quite so easy, as the state of Kansas found out when it passed a law in 2013 allowing school staff to carry guns, and an insurer that covers most districts in the state subsequently issued a letter denying coverage to schools that took on this risk. Five years later, no Kansas school employee has carried a gun into a K-12 school.4
Arming school staff and allowing guns in schools both pose challenging issues of risk and liability. As with any legislation, the ramifications of a new policy can be complicated, and there are a variety of factors that governments and school boards would be wise to consider as they debate this divisive and weighty issue. This paper discusses risk and insurance considerations for school districts and legislators tackling this difficult subject across the United States.
What does K-12 insurance look like?
Currently, most schools are insured in one of two ways—with commercial liability policies or, more often, as part of formal intergovernmental risk pools. Risk pools, generally structured by state regulation, are a type of self-insurance in which schools or other public entities join forces to spread the cost of risk and share best practices on risk management. The Association of Governmental Risk Pools (AGRiP) estimates up to 80% of public entities nationally, including K-12 schools, buy one or more coverages through a risk pool.5
When it comes to arming school personnel, a district’s insurance needs will vary widely based on a number of factors. Two federal laws prohibiting firearm possession in or near K-12 schools apply to all states, but exceptions to the federal laws are permitted, and state-by-state legislation varies. More often than not, allowing guns on campus remains a matter for each school to decide. Some school boards in isolated or rural regions, such as public school districts in Harrold and Argyle, Texas, have voted to allow certain personnel to carry guns because those schools are 15 minutes or more from first responders.6,7
Governmental immunity laws also come into play when assessing the potential liability of public schools. Governmental immunity holds that certain public entities may be immune from some tort lawsuits.8 For public entities such as public schools, these laws may dictate caps on losses from certain risks, such as natural disasters. In the case of the Marjory Stoneman Douglas High School shooting in Parkland, Florida, parents of the victims looking to sue were informed by the school’s insurer that the school’s liability for damages could not exceed $300,000, according to local law.9 These limits protect public schools from a certain amount of financial peril in the event of a catastrophe. But for many public entities, there may be exceptions to these local laws that could open public school districts up to increased risk. Geography, population, and individual local laws vary widely and, as a result, liability coverages should be tailored to the needs of the individual district or risk pool.
What do we know about firearm risk in schools from claims data?
When it comes to employing armed personnel in schools, we do not know much about firemarm risk in schools from the claims data. Anecdotally, news reports have noted instances of injuries due to accidental firearm discharge in schools.10 And in April, CNN released a catalogue of 20 school shootings that have occurred since the beginning of this year.11 But hard and fast data is scarce. Based on recent conversations with only a small sample of its member pools, AGRiP reports it has not heard about any liability claims resulting from schools with armed staff. A couple anecdotal stories exist within AGRiP membership about workers' compensation claims for accidental wounds during gun safety training by school staff. Similarly, according to Milliman research, gun-related liability claims in one large risk pool for schools amount to just a tiny fraction of all sizable claims—about one incident in a thousand. Without a larger sampling of data (for instance, the number of armed personnel throughout U.S. schools or the number of gun-related claims), we cannot assume this data is representative or draw conclusions as to risk.
With a relative lack of liability claims data in schools, law enforcement experience can shed some light on liability risk and pricing issues. In evaluating law enforcement risk, shooting claims are often found to be expensive regardless of whether the victim was the one breaking the law. Further, law enforcement experience has shown that even if a police officer was not shown to have used excessive force, juries tend to award damages to the survivors. Of course, juror attitudes may vary–law enforcement personnel are not schools and teachers, and jurors may not see schools as such deep pockets of funds as city or state governments. Also, teachers may be less likely to be portrayed as the “bad apples” in law enforcement that motivate some awards. But the experience is not to be ignored; a liability claim of the kind seen in law enforcement could cause large increases in any entity’s excess insurance costs. Multiple claims could mean a dramatic change to coverage.
Liability considerations for arming school personnel
As states and districts grapple with the issue of having armed personnel on K-12 campuses, insurers and risk managers will ask a variety of questions to assess the risk:
- How big is the school?
- Where is the school located?
- How many guns are in the school?
- Where are the guns kept, during the day and at night?
- Who has access to guns and how?
- What is the level of safety training available, and is it required?
- Are there background checks of armed staff?
- Are protocols in place for when a gun may or may not be used?
There is also an important distinction to be made as to who is being armed. The liability considerations for arming security personnel (such as a law enforcement officer or trained safety officer) in schools are different than for arming or providing firearm access to teachers or school staff with little or no professional firearm training or state-specific certifications. While armed security personnel may present a higher cost to school districts than armed teachers, this approach likely would mitigate accident risk and provide better loss control.
Most schools that decide to arm staff or employ armed personnel on campus will likely need to structure and implement safety training programs that address gun use, though the state legislative requirements around this issue vary. Schools in Texas, for instance, must choose one of two programs if they want to allow armed personnel in schools: The Guardian Plan allows local school boards to determine training standards and authorize specific employees to carry on campus at all times, and The School Marshal Program allows local boards to authorize employees, but they must be trained and licensed by the Texas Commission on Law Enforcement.12
State legislation varies, but the liability issues depend on many factors. What will the program look like? Who will manage it? How will it be rolled out? Who will be responsible for it? How will results be measured? Safety training is rarely a one-and-done proposition—as a best practice, it often requires periodic retraining and recertification. In addition, accidents in training and the workers' compensation claims that follow are not unusual and should be anticipated. In California, for example, when sheriffs went to a lighter gun to accommodate people who had less grip strength, a large number of misfiring accidents and workers' compensation claims occurred. Costs of workers' compensation claims can include medical and surgical treatment, time lost from work, ongoing pain management and other pharmaceutical needs, long-term rehabilitative therapy, and potential post-traumatic stress disorder (PTSD) issues.
Liability arising from access to guns on school property and appropriate use must also be taken into account when calculating risk. The risk of gun theft if the firearm were not properly secured would become an insurance consideration if more of them were in schools. Schools may also need to consider the liability implications of a teacher who, in a certain situation, could or should have used a gun but chose not to. Or schools may have to consider the ramifications of an armed teacher who, in attempting to address a situation, mistakenly shoots an innocent student. As a practical matter, schools would need to look at making rules and providing thoughtful documentation around all these issues.
Alternately, a necessary part of the discussion on liability should address the question that looms largest in the debate around arming school staff: could increased firearm access by trained personnel decrease injury/crime (and therefore liability) in schools in the case of a violent event? Is there a cost to not providing armed security personnel? A 2013 commissioned report by the National School Shield Task Force found that a “properly trained armed school officer, such as a school resource officer, has proven to be an important layer of security for prevention and response in the case of an active threat on school campus.”13 However, the effectiveness of armed school personnel to prevent crime is difficult to measure. Proponents of this view point to data that looks at gun use by citizens in self-defense situations to prevent crimes in progress. A study conducted by researchers at Harvard shows that people used a gun in self-defense in 0.9% of crimes; another study, published by criminologists Gary Kleck and Marc Gertz in 1995, put that number much higher: between 2.2 and 2.5 million defensive gun uses annually.14 A 2015 Washington Post op-ed catalogues a number of anecdotal instances where armed citizens were able to stop crimes in process.15 But crime control and prevention in schools using armed security staff is a complex issue, with ramifications for student populations,16 and without specific data it is hard to draw liability conclusions. What is without question is that preventing crime would reduce not only the liability costs to school districts but more importantly the cost to human life, however it is accomplished.
Because there is so little data available on the risks of arming school staff, insurers or pools interested in pricing liability risk must use informed judgment to best model what the additional exposure will look like. According to news reports, districts that have allowed armed staff in schools have experienced a range of coverage responses and pricing. In Oregon, the risk pool Property and Casualty Coverage for Education (PACE) has implemented a number of pricing structures for school district members interested in contracting with armed security personnel. Districts that contract with law enforcement officers with specific state certification see no premium change as long as the district is not liable for the officer’s actions; if the district does assume liability for the officer’s actions, a premium charge between $1,500 and $2,500 per FTE would be incurred, depending on whether that officer is a member of law enforcement. According to the risk pool’s FAQ document on firearm liability, PACE will not provide coverage for armed personnel who did not receive certification through the state’s standards.17 Certain schools in Kansas, on the other hand, were informed by their commercial insurer that members would be denied coverage if school employees were allowed to carry handguns.18 AGRiP, in a memo from 2013, noted that as of that writing, it was unaware of any member pools that had excluded liability coverage as a result of a member carrying concealed weapons.19
For actuaries and insurers working to price this risk, the situation is comparable to the introduction of “self-driving” vehicles, where the frequency and severity of the risk is still a relative unknown. Without historical data around frequency and severity, it may not yet be possible to derive actuarially sound rates—but managing risk is still crucial. Similarly, risk mitigation efforts may be seen as ways to lower liability premium costs for school districts, such as the Oregon example above. Ongoing safety training, proper gun storage, and professional firearm experience may all be important factors for actuaries looking to price this risk. But as we’ve seen, uncertainty around risk is a common cause of increased pricing in liability coverage premiums.
What might this discussion look like five years from now? As states and districts continue to grapple with the safety of their schools and populations, the private sector has begun to respond with options for those looking to insure against catastrophic gun risk. On the commercial insurance side, new products have begun emerging, such as XL Catlin’s "Workplace Violence and Stalking Threat Insurance" in 2016.20 A recent Risk Management article featured a 2015 product from Beazley, underwritten by Lloyd’s, dubbed "active shooter" coverage, which is a standalone policy that began as an active shooter product, but evolved to encompass active assailants or malicious attacks.21
When it comes to gun use, insurance products are available as well. The National Rifle Association (NRA) offers personal firearm liability insurance, underwritten by Lloyd’s, designed to provide coverage for unintentional injuries or damage caused while hunting, shooting at private ranges, or shooting in competitions. It’s not beyond the realm of possibility that this coverage might one day be expanded to cover schools if the demand existed.22 On the other hand, a growing chorus of advocates have put forth the idea of mandatory firearm insurance—that is, insuring the ownership and use of firearms as we do cars, both mandatory and with varying premiums based on risk and other factors.23
As a number of legislators and school boards across the country weigh the risks of arming personnel on campus, it’s important to understand the various insurance and liability considerations inherent in such a heavy decision. The risks of arming staff will vary by geography, population, training programs, school-specific rules, and state laws. What won’t change is the common desire of all stakeholders to keep their students and staff out of harm’s way.
1Education Commission of the States. School policy tracker, “School Safety: Guns and Employees.” Retrieved June 13, 2018, from https://www.ecs.org/state-education-policy-tracking/.
2“Governor Rick Scott signs Marjory Stoneman Douglas High School Public Safety Act.” (March 9, 2018) Retrieved June 13, 2018, from https://www.flgov.com/2018/03/09/gov-rick-scott-signs-marjory-stoneman-douglas-high-school-public-safety-act/.
3Thomsen, J. (May 3, 2018). “State policy responses to school violence.” Education Commission of the States. Retrieved June 14, 2018 from https://www.ecs.org/wp-content/uploads/State-Policy-Responses-to-School-Violence.pdf.
4McCausland, P. (April 2, 2018). “Guns in schools: Insurance premiums could present hurdle in arming teachers.” NBC News. Retrieved May 3, 2018, from https://www.nbcnews.com/news/us-news/guns-schools-insurance-premiums-could-present-hurdle-arming-teachers-n859846.
5AGRiP (2014). PR Toolkit for Public Entity Pools. Retrieved May 3, 2018, from http://www.agrip.org/assets/1/6/PR_Toolkit_Messaging_Document.pdf.
6Dugyala, R. (March 22, 2018). “This Texas school began arming teachers in 2007. More than 170 other districts have followed.” The Texas Tribune. Retrieved May 3, 2018, from https://www.texastribune.org/2018/03/22/rural-texas-harrold-school-teachers-guns-before-parkland/.
7Hennessy-Fiske, M. (February 24, 2018). “What gun debate? Many Texas school teachers are already armed.” Sarasota Herald-Tribune. Retrieved May 3, 2018, from http://www.heraldtribune.com/zz/news/20180224/what-gun-debate-many-texas-school-teachers-are-already-armed.
8Matthiesen, Wicker, Lehrer Attorneys at law. State Sovereign Immunity tort liability in all 50 states. Retrieved May 8, 2018, from https://www.mwl-law.com/wp-content/uploads/2013/03/STATE-GOVERNMENTAL-LIABILITY-IN-ALL-50-STATES-CHART-GLW-00211981.pdf.
9Pesantes, E. (April 22, 2018). “Parkland shooting families outraged over cap on school district liability.” The Sun-Sentinel. Retrieved June 14, 2018, from http://www.sun-sentinel.com/local/broward/parkland/florida-school-shooting/fl-reg-florida-school-shooting-liability-reax-20180427-story.html.
10Ortiz, E. (March 18, 2018). “3 Students injured when California high school teacher fires gun during safety course.” Retrieved May 29, 2018, from https://www.nbcnews.com/news/us-news/3-students-injured-when-california-high-school-teacher-fires-gun-n856481.
11Ahmed, S. & Walker, C. (April 20, 2018). “There has been, on average, 1 school shooting every week this year.” CNN. Retrieved May 3, 2018, from https://www.cnn.com/2018/03/02/us/school-shootings-2018-list-trnd/index.html.
12Dugyala, R. (March 22, 2018). “This Texas school began arming teachers in 2007. More than 170 other districts have followed.” The Texas Tribune, ibid.
13Hutchinson, A. (April 2, 2013). Report of the National School Shield Task Force. Retrieved, May 31, 2018, https://www.nationalschoolshield.org/media/1844/summary-report-of-the-national-school-shield-task-force.pdf.
14Raphelson, S. (April 13, 2018). “How often do people use guns in self-defense?” Retrieved June 6, 2018, from https://www.npr.org/2018/04/13/602143823/how-often-do-people-use-guns-in-self-defense.
15Volokh, E. (October 3, 2015). “Do citizens (not police officers) with guns ever stop mass shootings?” The Washington Post. Retrieved June 14, 2018, from https://www.washingtonpost.com/news/volokh-conspiracy/wp/2015/10/03/do-civilians-with-guns-ever-stop-mass-shootings/?utm_term=.ea90837a88ea.
16Cook, P. “School Crime and Prevention,” Sanford School of Public Policy, Duke University. Retrieved June 6, 2018, from https://papers.ssrn.com/sol3/papers.cfm?abstract_id=1368292.
17Ohio School Board Association. Topics: “Guns in schools – FAQ: Liability Concerns.” Retrieved June 6, 2018, from http://www.osba.org/-/media/Files/Resources/Legal/FirearmLiabilityLangaugeFor
18McCausland, P. (April 2, 2018). Guns in schools: Insurance premiums could present hurdle in arming teachers, ibid.
19Association of Governmental Risk Pools. http://www.agrip.org/files/Cybrary/ConcealedCarryMemo.pdf.
20XL Catlin (February 3, 2016). XL Catlin unveils workplace violence insurance coverage for US businesses. Retrieved May 3, 2018, from http://xlcatlin.com/insurance/news/xl-catlin-unveils-workplace-violence-insurance-coverage-for-us-businesses.
21McDonald, C. (April 2, 2018). “Active Shooter Coverage Gaining Traction.” Risk Management. Retrieved May 3, 2018, from http://www.rmmagazine.com/2018/04/02/active-shooter-coverage-gaining-traction/.
22The National Rifle Association, Retrieved May 9, 2018, from https://mynrainsurance.com/insurance-products/liability-personal-firearms.
23Garson, R. (March 2, 2018). Why mandatory firearm insurance could be a hugely powerful gun control play. Observer. Retrieved May 3, 2018, from http://observer.com/2018/03/why-mandatory-firearm-insurance-could-be-a-powerful-gun-control-play/. | <urn:uuid:a53baef7-1ffe-47cc-82bb-88eb4c0f1676> | CC-MAIN-2019-47 | http://www.milliman.com/insight/2018/Arming-school-staff-Insurance-considerations/ | s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496664439.7/warc/CC-MAIN-20191111214811-20191112002811-00218.warc.gz | en | 0.953403 | 4,578 | 2.765625 | 3 |
A young veteran reminded me of the truly ancient roots of conflict in the Middle East, pointing to lines we do not even see on the sand and soil. This prompted me to return to a summary sketch I laid aside months ago, after fleshing out an account of what we now call Iran. Then the House of Representatives passed a non-binding resolution condemning the Ottoman Empire for committing the first genocide of the 20th Century…and a dozen Republicans effectively joined Rep. Ilhan Omar in declining to support the resolution! What? Why? What follows is a single summary of the other three big players, historically, now known as Turkey, Egypt, and Saudi Arabia.
Iran and Egypt can point to the most ancient civilizations, as their progenitors were contemporary regional powers. The clash between them was captured in the ancient Hebrew texts, as the Jewish people were caught in the middle. Saudi Arabia comes next, with claims of punching far above its weight with armies fired by the fervor of a new faith, and more recently of being the secular and religious guardian of the faith. Finally, the Turks can claim to have been the most successful, and latest, power to rule the region for centuries after imposing final defeat on the (Christian) Eastern Roman Empire.
The Saudi claim is the oldest of the Muslim claims, as armies swept out of the desert around 600 years after the birth of Christianity and notably conquered both Egypt and Persia:
In the seventh century AD the weakening Eastern Roman empire, which had inherited [Egypt], lost control to the Islamic empire when the latter’s fervently enthusiastic forces swept through in 639-640, taking Libya at the same time. For the next century the region was governed directly by the Umayyad caliphate to the east, restoring a situation that had existed periodically between the rise of the Assyrian empire until the division of Alexander the Great’s Greek empire.
Over the centuries, the Arabian dynasties were battered, weakened, and lost the appearance of uncontested divine blessing, as Christians stopped and rolled back some of the conquests, and as the Mongols ravaged kingdoms Christian and Muslim alike. From this arose a non-Arab, and yet not ethnically North African, movement that set its headquarters in Egypt. Under the Mamlūks, Egypt became the cultural center of Islam for more than two and a half centuries, a status it would reclaim in the 20th Century, with universities and modern media, movies, radio, and television broadcast content:
During the Mamlūk period [1250–1517] Egypt became the unrivaled political, economic, and cultural centre of the eastern Arabic-speaking zone of the Muslim world. Symbolic of this development was the reestablishment in 1261 under the Mamlūk rulers of the ʿAbbāsid caliphate—destroyed by the Mongols in their sack of Baghdad three years earlier—with the arrival in Cairo of a youth claiming ʿAbbāsid lineage. Although the caliph enjoyed little authority, had no power, and was of dubious authenticity, the mere fact that the Mamlūks chose to maintain the institution in Cairo is a measure of their determination to dominate the Arab-Islamic world and to legitimize their own rule. It is curious that the Mamlūks—all of whom were of non-Arab (most were Turks and, later, Circassians), non-Muslim origin and some of whom knew little if any Arabic—founded a regime that established Egypt’s supremacy in Arab culture.
Mamlūk legitimacy also rested on the regime’s early military successes, particularly those against the Mongols, who were seen by many contemporaries as undefeatable and as a threat to the very existence of Islam as a political culture.
With the Ottomans’ defeat of the Mamlūks in 1516–17, Egyptian medieval history had come full circle, as Egypt reverted to the status of a province governed from Constantinople (present-day Istanbul). Again the country was exploited as a source of taxation for the benefit of an imperial government and as a base for foreign expansion. The economic decline that had begun under the late Mamlūks continued, and with it came a decline in Egyptian culture.
It was the Revolutionary French who, by seizing and holding Egypt as a threat to English global ambitions, triggered a long chain of administrative and governing changes, together with a mix of secular and Islamic learning spread by new educational institutions and printing presses. The old domination by regional foreigners gave way to local faces negotiating sovereignty with contending Western European powers as well as Ottoman Turks and rival North African peoples.
With the onset of the Cold War and the fading of British and French empires, Middle Eastern states started aligning with Moscow or Washington. A secular pan-Arab movement imagined an alliance of modernizing states getting military technology from Moscow. It was expressed as Baathism in Syria and Iraq, and Nasserism in Egypt. Nasser led a coup against the last king of Egypt, starting the long reign of military officers who went through the form of popular election to be president–for life.
Under Nasser and his successor Anwar Sadat, Egypt achieved cultural dominance in the region, producing movies, television, and books. To this day, if you take an Arabic language class, it will likely be Egyptian Arabic, just as Spanish classes tend to be based in Castilian Spanish. It would take until the turn of the 21st Century for the original Arabs to mount their own international influence campaign through funding of mosques, religious schools, and broadcast.
Yet, the Egyptian generals rode a tiger that the formerly Ottoman, militantly modern, secular Turkish state did not face. When Sadat made peace with Israel, he signed his own death warrant. Sadat was assassinated by soldiers who were influenced by the Muslim Brotherhood:
President Anwar el-Sadat of Egypt was shot and killed [6 October 1981] by a group of men in military uniforms who hurled hand grenades and fired rifles at him as he watched a military parade commemorating the 1973 war against Israel.
A devout Moslem, Mr. Sadat was harsh toward fundamentalist groups, such as the Moslem Brotherhood and the Islamic Association. He banned both groups, calling them illegal. He said that he would not tolerate mixing religion and politics and that these groups were using mosques to denounce him.
Hosni Mubarak, Sadat’s successor, managed an uneasy domestic balance, aligned with Washington instead of Moscow. He did not break the peace treaty with Israel, but he allowed the country to become more fundamentalist, with Egyptian women’s rights severely eroded, driven back under the headscarf and out of anything approaching equal civil status in reality. The current strongman, President al-Sisi, is the first in Egypt to plainly say that Islam must reform itself and stop blaming outsiders, but this has, so far, not resulted in significant change on the street or in external influence.
Constantinople, now Istanbul and still Turkey’s greatest city, was established as the capital of the Roman Empire, and then of the Eastern Roman Empire, when the empire split to manage excessive complexity in an increasingly dangerous threat environment. After Rome itself was sacked in 410 AD, roughly four centuries after the birth of Christianity, Constantinople, named for the first Roman emperor to worship Christ, remained an obstacle to Islam’s advance until Tuesday, 29 May 1453. Yes, that is over a millennium of withstanding threats from all sides, even the sack of the city by a Roman pope’s minions in 1204.
You can see, then, that Constantinople would be a great jewel in the crown of Muslim conquerors, a prize beyond all others because everyone else had failed to take and hold the city. It finally fell to a young Ottoman Turkish commander, with the technical assistance of a Hungarian Christian named Orban. Conquering Constantinople took a massive siege cannon, Chinese technology rapidly advanced with European metal founding, to finally reduce the impenetrable walls.
In a thousand years the city had been besieged some 23 times, but no army had found a way to crack open those land walls.
Accordingly, Orban’s arrival at Edirne must have seemed providential. The sultan welcomed the master founder and questioned him closely. Mehmed asked if he could cast a cannon to project a stone ball large enough to smash the walls at Constantinople. Orban’s reply was emphatic: “I can cast a cannon of bronze with the capacity of the stone you want. I have examined the walls of the city in great detail. I can shatter to dust not only these walls with the stones from my gun, but the very walls of Babylon itself.” Mehmed ordered him to make the gun.
Freed at last, Islam almost snuffed out Christianity, until the miracle before the gates of Vienna, where Polish lances arrived just in time to save Christian Europe. John III Sobieski, the King of Poland, led the Winged Hussars, who fell on the Ottoman army like an army of avenging angels. When was this? Pay attention to the date: September 11, 1683. While that date was purged from American education, others most certainly did not forget it, even if Bush the Second and his minions desperately denied this by diversion.
The Ottoman Empire persisted, even flourished, then fell further and further behind Western Europe. Yet, the Ottomans were never in a position to project influence the way the successive European states did through the Age of Discovery, the naval exploration race that started over two centuries before the Ottomans were permanently outmatched as a land force. Oh, they could hold their own on their own turf, but in the end they needed (German) European equipment and advisors to give the British a bloody nose in World War I.
In the dying days of the empire, from the outset of World War I until 1923, when a young Army officer named Ataturk put an end to the remains of the Ottoman regime, the last two sultans presided over the industrial-scale murder of ethnically Armenian Christian populations, who had persisted under Muslim dhimmitude for four and a half centuries.
Disgusted with the complete helplessness of the late Ottoman empire against late Christian Europe, competent Turkish Army officers seized power, led by the Turkish hero of Gallipoli, Mustafa Kemal Ataturk. He forcibly instituted a program of secular modernization which held until the rise of Erdogan.
Under the Ataturk model, urban Turkey became the Germany of the Middle East. As Germany struggled to recover from the self-inflicted man shortage of World War II, they invited Turkish men in as Gastarbeiter, guest workers. I met such men in the mid-1980s settled in small towns as “Greek” gasthaus proprietors and employees. It was from this relatively successful model that the foolish political class projected success when they opened the flood gates to Muslim migrants from very different cultures.
However, the Eurocrats had contempt for the Turks’ relatively conservative beliefs, and repeatedly refused EU membership on the excuse that Turkey refused to ban capital punishment. Apparently the Turks were good enough to die to protect Europeans’ homelands from Russian tanks thrusting into their underbelly, but not to be part of the European club. This was not headed in a good direction.
With the end of the Cold War and the temporary collapse of the Russian Empire, the logic of NATO was severely strained. Why should Turkey not cooperate with Russia against other players in regions of mutual interest? The rise of computerized, networked weapons eventually meant that choosing one major arms producing country’s equipment would expose other countries’ equipment to data collection readily transmitted back to the producers. So, when it came time for bids for the next generation of equipment, there was going to be a tension between allies. Buying S-400 anti-aircraft missile systems necessarily conflicts with F-35s and other semi-stealthy aircraft.
By the same token, cooperating with large U.S. military force movements through Turkey places the Turkish government on the opposite side of rural, more religiously devout Muslim Turks, and reinforces the old image of decadent Ottoman rulers dominated by European military powers. Just as Bush the Second and his entire team of diplomatic, intelligence, and military “experts” were blind to the meaning of September 11, so too were they blind to the implications of planning to send a large military ground force attacking out of eastern Turkey into northern Iraq, to catch Saddam Hussein’s regime in a pincer move.
Look, this was not just spaghetti-on-the-wall theory. I personally know soldiers and units that were mobilized from their civilian home communities and then left sitting at military bases in the continental United States while the august experts, whom we are supposed to trust more than President Trump, gawped, shuffled, and then hastily re-planned how to get the troops into theater and how to open that second front.
Bottom line: the American forces all came in through Kuwait, and only civilian contractor forces ran limited logistics out of Turkey and Jordan. Oh, but when we needed quality (I mean Western European/American/NATO quality) construction on a petroleum fuel quality control laboratory that would be our liquid logistics soldiers’ ticket out of the Iraqi theater, I was not at all surprised to see a Turkish crew.
I took a photograph of that crew near nightfall. They took internally timed breaks: all the equipment would stop, and one worker would be seen running with a tray of coffee or tea around a semicircle of the other workers. Five minutes later, he ran back to the coffee/tea urn with the empty cups, and the cutting and welding sparks flew again. Nobody sauntered or lazed about, nobody. The Turks definitely hold themselves above the Arabs and are prepared to demonstrate the difference.
At the same time, within Turkey, the divide between urban and rural populations grew as it did in America. The more rural Turks were more traditional, more devout, less secularized. These Turks finally got an effective voice in the party that Erdogan rode to power. It was the Justice and Development Party (AKP), which voters swept into power in 2002, that stood up against President George W. Bush, refusing to allow the movement of U.S. troops and supplies through Turkey in Operation Iraqi Freedom. Erdogan’s hand was not directly on that decision, because he had been temporarily banned from office, but he was clearly the party leader and became prime minister in March 2003.
Since then, Erdogan has consistently moved to consolidate power, including breaking the previously independent military, which had acted repeatedly to preserve the Ataturk secular reform system. Even as Erdogan has redefined the presidency into a strong executive office, he has ridden popular support from the forgotten people of Turkey, mobilizing enough voters to overcome major urban population preferences. These moves, and his speeches, have raised the image of Erdogan as the sultan of 21st-century Turkey:
On July 9, President Recep Tayyip Erdogan will take his oath of office in parliament. Turkey will thus officially move from a parliamentary system to a presidential system. Just 13 years have passed since Turkish officials started EU accession negotiations. At the time, it seemed that democracy, freedom of expression and social harmony were growing.
Now, however, Turkey is preparing to endow its increasingly Islamist, nationalist and authoritarian president with an unprecedented amount of power. The abolition of parliamentary control gives Erdogan sole power over the executive branch of government. And, through his power to appoint important judges, he will also control the judiciary.
President Erdogan has been characterized as the first 21st-century populist, and critics point to a turn from reform to “New Sultan”:
Erdoğan’s speeches since he assumed the presidency, particularly after an attempted coup in 2016, have been the most consistently populist of his career. Much of his fury has been directed at perceived enemies within. But Erdoğan has also sharpened his critique of foreign adversaries, complaining Turkey has been betrayed by the international order.
H. Res. 296, a resolution recognizing the Armenian Genocide, passed the House on Tuesday by a vote of 405 to 11, with 3 members voting present.
This is the fourth time such a resolution was introduced to Congress since 2000, but the first time it received a House vote. The previous three times the various resolutions were pulled due to pressure from the executive branch.
As Slate correctly notes, the decision to change the House policy position on Turkey and the Armenians is inherently political, as was the old position:
Turkish pressure is often cited as the reason the U.S. government has been reluctant to use the G-word in recent decades, though blaming Turkey lets several U.S. administrations off the hook. These administrations, for understandable reasons, didn’t think a fight over a century-old event was worth alienating a NATO ally and key security partner. George W. Bush lobbied against an American genocide resolution in 2007. Barack Obama called the Bush administration out for this as a senator and then did the exact same thing as president, using terms like “difficult and tragic history.”
So what changed? Lawmakers didn’t suddenly have an epiphany about the events of 1915 to 1923 or the definition of genocide. And this issue has long been a priority for Armenian American voters. What changed is Turkey’s image in Washington. Members of Congress have little patience for arguments about the importance of the U.S.-Turkey alliance after Turkey’s recent offensive against the Syrian Kurds, U.S. allies, a campaign that has itself been referred to as ethnic cleansing.
President Donald Trump’s role in facilitating that offensive and his enthusiastic embrace of authoritarian Turkish President Recep Tayyip Erdogan likely made this an easy vote for many Democrats. But a government that held an American evangelical pastor as a hostage for two years isn’t all that popular with Republicans either.
Never mind that President Trump used the far more effective real tool of economic sanctions to squeeze Erdogan into releasing the American evangelical pastor without “quid pro quo.” This was a long popular position finally seen as politically useful in the context of President Trump and a NATO ally now under the near dictatorial control of the man who would be sultan. Note well that there was no visible effort by the State Department deep state, or Secretary of State Pompeo, or President Trump to stop this resolution. The real adults in the room are under no illusions now about the current leadership of Turkey, nor of the electoral base supporting Erdogan.
Because a House resolution is not an expression of American government policy, unlike a joint resolution signed by the president, President Trump can use this gesture in negotiation. After all, everyone knows the House is in conflict with both the Senate and the president. If President Erdogan says he must act to represent his electorate, President Trump can point to the “people’s house” also reflecting feelings of their districts, and so suggest that both nations should find some way forward acceptable to both populations.
On the other hand, we might use the failure of Turkey to take responsibility for its direct predecessor state’s actions as a reason to take further negative actions. The closest comparison would be West Germany taking ownership of Nazi Germany’s genocidal policy. Since reunification, Germany has not cast off that historical responsibility, in contrast to Turkey’s consistent failure to acknowledge that Ottoman actions were genocidal, even as its leader aspires to revive some portion of Ottoman-era influence.
Meanwhile, Republicans are in no position to weaponize the House vote against Rep. Omar, since only Republicans joined her in not voting for the resolution. Instead, this is an issue within the Democratic Party coalition, as reflected in the disappointed, disingenuous, or ducking comments inside Minnesota. As the Star Tribune reported, “abstention on Armenian genocide vexes Omar supporters”:
Omar’s decision to abstain and the subsequent explanation she gave has triggered another round of intense criticism for the freshman Democrat, in Minnesota and across the nation. Many members of the Twin Cities Armenian community expressed shock and deep dismay.
Omar’s defense also drew rebukes from some leading Minnesota Democrats, who argued the current conflict in Syria makes the resolution all the more important. House Majority Leader Ryan Winkler, DFL-Golden Valley, called the vote “deeply troubling.”
“The current Turkish regime is a dictatorship and is bent on destroying the Kurdish people in what could be a genocide in present time. …[All] Americans, especially progressive Americans, should be speaking with one voice against Turkish genocide historically and currently,” said Winkler, who lives in Omar’s district.
DFL Gov. Tim Walz, who sponsored a similar resolution as a member of Congress, tweeted that “the Armenian Genocide is historical fact, and the denial of that fact is a continuation of the genocide.” Both Walz and Lt. Gov. Peggy Flanagan, who is the highest-ranking American Indian woman serving in elected office nationwide, declined to comment further.
Jaylani Hussein, who leads the Minnesota chapter of the Council on American-Islamic Relations, defended Omar’s track record on human rights. Hussein argued that as a refugee, Omar is uniquely qualified to understand the complexities of such issues.
The latest controversy also appeared to further strain relations between Omar and members of the local Jewish community concerned about her support for sanctions against Israel and her past criticism of pro-Israel lobbying groups in Congress, which some interpreted as anti-Semitic. “Our local Armenian and Jewish communities celebrate together, commemorate together, learn together and now we are appalled together by this manifest example of suborning Armenian Genocide denial,” said Steve Hunegs, executive director of the Jewish Community Relations Council of Minnesota and the Dakotas.
As resolutions go, this one was very clear and not at all tinged with partisan overtones. Both President Wilson and President Reagan are approvingly acknowledged. America is praised for large private fundraising for relief efforts between 1915 and 1930. The House urges that the good things America did should be taught when facts of the genocide are taught. Here is the official text [links added]:
In the House of Representatives, U. S.,
October 29, 2019.
Whereas the United States has a proud history of recognizing and condemning the Armenian Genocide, the killing of 1.5 million Armenians by the Ottoman Empire from 1915 to 1923, and providing relief to the survivors of the campaign of genocide against Armenians, Greeks, Assyrians, Chaldeans, Syriacs, Arameans, Maronites, and other Christians;
Whereas the Honorable Henry Morgenthau, United States Ambassador to the Ottoman Empire from 1913 to 1916, organized and led protests by officials of many countries against what he described as the empire’s “campaign of race extermination”, and was instructed on July 16, 1915, by United States Secretary of State Robert Lansing that the “Department approves your procedure * * * to stop Armenian persecution”;
Whereas President Woodrow Wilson encouraged the formation of the Near East Relief, chartered by an Act of Congress, which raised $116,000,000 (over $2,500,000,000 in 2019 dollars) between 1915 and 1930, and the Senate adopted resolutions condemning these massacres;
Whereas Raphael Lemkin, who coined the term “genocide” in 1944, and who was the earliest proponent of the United Nations Convention on the Prevention and Punishment of Genocide, invoked the Armenian case as a definitive example of genocide in the 20th century;
Whereas, as displayed in the United States Holocaust Memorial Museum, Adolf Hitler, on ordering his military commanders to attack Poland without provocation in 1939, dismissed objections by saying “[w]ho, after all, speaks today of the annihilation of the Armenians?”, setting the stage for the Holocaust;
Whereas the United States has officially recognized the Armenian Genocide, through the United States Government’s May 28, 1951, written statement to the International Court of Justice regarding the Convention on the Prevention and Punishment of the Crime of Genocide, through President Ronald Reagan’s Proclamation No. 4838 on April 22, 1981, and by House Joint Resolution 148, adopted on April 8, 1975, and House Joint Resolution 247, adopted on September 10, 1984; and
Whereas the Elie Wiesel Genocide and Atrocities Prevention Act of 2018 (Public Law 115–441) establishes that atrocities prevention represents a United States national interest, and affirms that it is the policy of the United States to pursue a United States Government-wide strategy to identify, prevent, and respond to the risk of atrocities by “strengthening diplomatic response and the effective use of foreign assistance to support appropriate transitional justice measures, including criminal accountability, for past atrocities”: Now, therefore, be it
Resolved, That it is the sense of the House of Representatives that it is the policy of the United States to—
(1) commemorate the Armenian Genocide through official recognition and remembrance;
(2) reject efforts to enlist, engage, or otherwise associate the United States Government with denial of the Armenian Genocide or any other genocide; and
(3) encourage education and public understanding of the facts of the Armenian Genocide, including the United States role in the humanitarian relief effort, and the relevance of the Armenian Genocide to modern-day crimes against humanity. | <urn:uuid:4362c1d9-3a39-46fc-893c-819ec1527654> | CC-MAIN-2019-47 | https://ricochet.com/692006/turkish-trick-or-treat/ | s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496668787.19/warc/CC-MAIN-20191117041351-20191117065351-00137.warc.gz | en | 0.963898 | 5,239 | 2.640625 | 3 |
A Mathematical Analysis of the Star of Bethlehem
One of the most enigmatic stories of the New Testament is that of the Wise Men of the East, who followed a star to pay tribute to the newborn king of the Jews :
“After Jesus was born in Bethlehem in Judea during the time of King Herod, Magi from the East came to Jerusalem and asked, ‘Where is the one who has been born king of the Jews? We saw his star in the east and have come to worship him.’ When King Herod heard this he was disturbed, and all Jerusalem with him. ... Then Herod called the Magi secretly and found out from them the exact time the star had appeared. He sent them to Bethlehem and said, ‘Go make a careful search for the child. As soon as you find him, report to me, so that I too may go and worship him.’ After they had heard the king, they went on their way, and the star they had seen in the East went ahead of them until it stopped over the place where the child was. When they saw the star, they were overjoyed. On coming to the house, they saw the child with his mother Mary, and they bowed down and worshipped him. Then they opened their treasures and presented him with gifts of gold and of incense and of myrrh. And having been warned in a dream not to go back to Herod, they returned to their country by another route.”
This story is not mentioned in the other synoptic gospels, but is repeated with a slightly different slant in the apocryphal Infancy Gospel (Protovangelium) of James ,
“And there was a great commotion in Bethlehem of Judaea, for Magi came, saying: Where is he that is born king of the Jews? for we have seen his star in the east, and have come to worship him. And when Herod heard, he was much disturbed, and sent officers to the Magi. And he sent for the priests, and examined them, saying: How is it written about the Christ? where is He to be born? And they said: In Bethlehem of Judaea, for so it is written. ... And he examined the Magi, saying to them: What sign have you seen in reference to the king that has been born? And the Magi said: We have seen a star of great size shining among these stars, and obscuring their light, so that the stars did not appear; and we thus knew that a king has been born to Israel, and we have come to worship him. And Herod said: Go and seek him; and if you find him, let me know, in order that I also may go and worship him. And the Magi went out. And, behold, the star which they had seen in the east went before them until they came to the cave, and it stood over the top of the cave. And the Magi saw the infant with His mother Mary; and they brought forth from their bag gold, and frankincense, and myrrh. And having been warned by the angel not to go into Judaea, they went into their own country by another road.”
Doubt exists whether such an event ever occurred and it is widely recognized that the author of Matthew may deliberately have introduced the story as the supposed fulfilment of the prophecy of Balaam , that
“A star will come out of Jacob; a sceptre will rise out of Israel.”
Accepting henceforth that such an event did occur in some way or another, the story of the Star of Bethlehem raises many questions. What was so special about this star that caused the Magi to interpret it as a sign of any kind, not to mention that it heralded the birth of the King of the Jews? Who were the Magi? Were they located in the east when they first saw the star, or in Palestine when the saw the star rising in the east? Did the star guide them from the east to Bethlehem, or to Bethlehem only from Jerusalem, where they were questioned by Herod? Why did Herod and his office not notice this particularly bright star?
Much has also been speculated about the star itself, with suggestions including the planet Venus, Halley’s and other comets, supernovas and the conjunction of planets [4,5], but virtually no consideration appears to have been given to the practical aspects of following a star in order to locate a specific place below it. The Catholic Encyclopedia goes as far as stating that
“The position of a fixed star in the heavens varies at most one degree each day. No fixed star could have so moved before the Magi as to lead them to Bethlehem; neither fixed star nor comet could have disappeared, and reappeared, and stood still. Only a miraculous phenomenon could have been the Star of Bethlehem,”
thereby recognising some practical aspects of the problem, but ultimately having to resort to divine intervention as an explanation for how such a star could have existed. The latter possibility is not considered here.
From Matthew’s account we can deduce that the Magi had not followed the star to either Jerusalem or Bethlehem, but that they were sent to the latter by Herod because of the prophecy that Bethlehem would be the birthplace of the new king of the Jews. This article therefore addresses from a mathematical perspective the physical problem of locating any specific place in Bethlehem by following a star.
Looking straight up
For argument’s sake we will assume that a particularly bright star did indeed ‘stop’ over Bethlehem and the movement of all observable celestial objects was frozen for that period of time (the rotation of the earth around its axis stopped and its travel around the sun halted). We now need to define in mathematical terms how the Magi could have located Christ’s place of birth by looking upward at the star above them. The problem can be depicted as shown in Figure 1, where an object O is located at a distance d from the earth of which the radius is R. The angle θ denotes the angular offset between the object O and a person located at point P2 , as measured from the centre of the earth C. The angle α represents the angle at which this person would observe the object O. At point P1, the point directly below the object O, α = 0°. The distance along the surface of the earth between points P1 and P2 is given by t = Rθ, θ in radians.
Elementary trigonometry applied to Figure 1 yields the equation
tan α = (R+d) sin θ / [ (R+d) cos θ - R]
≈ tan θ for d much larger than R and θ small
⇒ α ≈ θ = t / R, α and θ in radians
from which the angle α can now be calculated. In practical terms the angle α defines how accurately one needs to look ‘up’ in order to get within a distance t from point P1.
Figure 1. Mathematical definition of parameters for a person observing an object in space
Apart from the sun and the moon, the brightest celestial object in the sky is the planet Venus, also called the Morning Star or Evening Star because of the time of day it is brightest. The minimum distance between the earth and Venus is approximately 38.2 million kilometers (d in Figure 1), while the average distance between Venus and the earth is 41.4 million kilometers . Proxima Centauri, the nearest true star, is about 4.2 light-years (3.97 × 1013 km) from the sun . The average radius of the earth (R in Figure 1) is a mere 6378 km by comparison , which means that whichever celestial object the Magi were following, the simplified equation above can be used for calculations.
In terms of the mathematical problem, one now needs to numerically quantify what ‘looking straight up’ would mean, as this presumably was how the Wise Men found the birthplace of Christ. If ‘up’ is defined by a circular area around P1 within which a maximum deviation of 1° from directly above is allowed (α ≤ 1°), it would translate to a circular area of radius t = Rα ≈ 111 km, or 38,700 square kilometers! The maximum width of Israel is 112 km, but through Bethlehem it is only about 85 km, significantly less than what a view angle of 1° would allow. If one would be able to tell what ‘directly above you’ is with an accuracy of 0.1°, the radius reduces to 11.1 km and the surface area to 387 km². For a person to identify a specific venue at point P1, he will probably have to come within 15m of that venue, i.e. t ≤ 15m. In this case one will have to be able to discern what is ‘above you’ with a maximum error of 0.0001° (1.35x10-4°)! Such accuracy will be near impossible to achieve even with the most sophisticated measurement equipment available today.
Rotation of the earth
Assuming that it is indeed possible to look upward with an accuracy of 0.0001°, we now need to take into account the practical problem of the rotation of the earth around its axis. As before we will assume that all observable celestial bodies are motionless relative to each other and specifically relative to the earth, which is only allowed to rotate around its axis. If we were to be looking down at the earth from a star located directly above as shown in Figure 2, Bethlehem would appear in the position of the red dot. As the earth rotates around its axis, we will observe Bethlehem moving along the red trajectory in Figure 2, all the way around until it reaches this same position 24 hours later.
The earth rotates at an angular velocity of one revolution per day and a point on the equator will travel a distance of 2πR= 2π (6378) = 40074 km in 24 hours (along the green trajectory in Figure 2), at a speed of 1667 km/h. Bethlehem is located at latitude 31.7°N and will therefore travel a distance of 2πR cos(31.7°) = 34096 km in one day, at a speed of 1421 km/h, or 394.6 m/s (the speed of sound is 340 m/s). This then is the speed at which a person in Bethlehem will move through the spot that is directly below us looking down from the stationary star to the rotating earth.
If we allow the spot to have a radius of 15m, corresponding to the upward look angle of ±0.0001° we need to locate the birthplace of Christ, a person will remain within that spot for 2x15/394.6 = 0.076 seconds. The human eye takes about 0.3 to 0.4 seconds to blink, which means that if you are looking up at the star, waiting for the moment that you will be directly below it within 0.0001°, you will miss that moment in less than the blink of an eye. Five seconds later you will have moved almost 2 km past that spot. Conversely, if you want to ‘follow’ the star to the birthplace of Christ, i.e. remain directly below the star all the time, you will have to travel at a speed of 1421 km/h (Mach 1.2), and you will have to step on the brakes rather heavily once you get there!
Trajectory of the Star
Finally, assuming for the moment that we can indeed look up with an accuracy of 0.0001° and travel faster than the speed of sound to keep up with the star’s projection on earth, we are faced with one more obstacle, the orientation and position of the earth relative to the star at any given moment in time. Whichever ‘star’ it might have been, the problem remains the same. As an example we will assume the Star of Bethlehem to have been the planet Venus, which is also known as the morning or evening star. Venus has essentially been ruled out as a candidate for the Star of Bethlehem because it was and still is a regular sight, which could hardly have been interpreted as a singular event by the Magi . However, Christ is actually referred to as the Morning Star in the New Testament 10, which potentially links him to the star the Wise Men may have followed.
As a first step, we must determine whether Venus would ever have passed directly over Bethlehem. The orbit of the earth around the sun is depicted in Figure 3 and it is important to note the axial tilt angle of 23.4° of the earth with respect to a line perpendicular to its orbital plane (the black line through the planet in Figure 3). Even though the earth revolves around the sun, the orientation of its rotational axis (the red line in Figure 3) does not change significantly within one year. It does rotate very slowly in space through a process known as axial precession, taking approximately 26 000 years to complete a full revolution .
The orbit of the earth around the sun is almost perfectly circular as depicted in Figure 4, with the orbit of Venus essentially also circular and concentric with the orbit of the earth. Note the constant orientation of the rotational axis of the earth as it revolves around the sun (the freezing up of the northern region during winter not depicted).
As the earth rotates around its axis, the trajectory of the surface normal of Bethlehem, i.e. the line looking straight up from Bethlehem, will form a cone in space (Figure 5). The red lines depicting this cone are infinitely long, but have been truncated for the purpose of illustration. For any star ever to be directly above Bethlehem, its trajectory in space must cross this cone.
Figure 5. Conical trajectory in space of the Bethlehem surface normal
The next step is to determine whether the orbit of Venus will ever intersect this imaginary cone in space. The orbits of Venus and the earth both define a circular plane which can be viewed from the side as shown in Figure 6 (Figure 4 shows the view from above). Although the orbits of the earth and Venus are essentially concentric, the plane of Venus is inclined at an angle of 3.39° with respect to the orbital plane of the earth. The radii of the orbits of the earth and Venus are RE = 149.6 million km and RV = 108.2 million km, respectively . The earth and sun are shown out of proportion in Figure 6 in order to illustrate the orientation of the Bethlehem cone with respect to the orbits of Venus and the earth.
From Figure 6 it is evident that had the orbit of Venus not been inclined with respect to the orbit of the earth, Venus would never have passed directly (within ±0.0001°) over Bethlehem. As it is, its trajectory intersects the Bethlehem cone C only just at point I in the sector near B. During the remainder of its trajectory Venus will be visible from Bethlehem all the way around to point A and back, but will not appear directly above Bethlehem.
Having established that it is theoretically possible for Venus to appear directly over Bethlehem even though for only the briefest part of its journey around the sun, we now have to incorporate the aspect of time into the problem.
The earth completes its orbit around the sun in 365.26 days, while Venus completes its orbit in 224.7 days 7. The angular velocity ω of an object in a circular orbit is typically expressed in terms of the number of revolutions made per time unit. The angular velocity of the earth is ωe = 360°/365.26 days = 0.99°/day and ωv = 360°/224.7 days = 1.60°/day for Venus. The angular velocity of Venus relative to the earth is therefor ωr = ωv - ωe = 0.616°/ day, and Venus passes the earth every 360°/ (0.616°/ day) = 583.9 days. Accordingly, we can redefine the mathematical problem as the earth being stationary in space, while Venus travels along its orbit at a speed of 0.616°/ day, as depicted in Figure 7. From the relationship v=ωr for the linear speed v of an object revolving at angular velocity ω (in radians) at a distance r from the centre of revolution, the speed V at which Venus will pass the stationary earth is
V= RV x 0.616°/day = 108.2M x 0.616° x π/180 km/day = 1.163 M km/day = 48470 km/h.
We have already shown that one needs to look ‘up’ with an accuracy of better than ±0.0001° in order to come within ±15m of the house below the star. If the star happens to be traveling relative to the earth as shown in Figure 7, we now have to define the sector D on the orbit of Venus in which it will be visible from Bethlehem within this required accuracy.
With reference to Figure 1, we have seen that the angles α and θ are approximately equal when the distance to the star (d) is very large. This situation is shown in Figure 8, where the red line at B corresponds to the direct overhead position of the star (at Bethlehem), while positions A and C correspond to a distance of 15m to the left and right of B, respectively.
A person standing upright at position A will have to look to his right by α = 0.0001° in order to see the Star of Bethlehem when it is located directly above the house. If we think of the angle α as being defined by a narrow tube through which one has to look upward, limiting the view to ±α °, the person at position A will be able to observe Venus from the moment it enters his ±α ° field of view from the left. He will continue to observe Venus through this tube from position A until it has moved to a position directly above point B. In order to follow the star, he then has to move from point A towards his right until he reaches point C, which is 15m to the right of B. Here he will be able to observe Venus for a further 0.0001° to the right, beyond which it can no longer be deemed to be ‘above’ him. From Figure 8 it is therefore clear that ±15m allowable range around the target house translates to a ±2α° (i.e. 4α° total) field of view in which Venus can be regarded as being above Bethlehem. This is indicated by the sector D in Figure 7.
The orbits of Earth and Venus are separated by a distance L= RE -RV = 41.4 million km in space, and simple trigonometric calculation for α = 1.35x10-4 ° yields D = 2 tan(2α) L = 390 km. Traveling at a speed of 48470 km/h, Venus will remain visible ‘above’ Bethlehem for a total of 390/48470 = 0.008 hours, or 29 seconds. The orbit of Venus will intersect the Bethlehem cone twice, which means that momentarily ignoring the rotation of the earth, Venus can be in a position directly above Bethlehem for a total of 58 seconds in 584 days! This of course requires Bethlehem to be in exactly the right position the very moment Venus traverses its projected cone, which is practically impossible to happen as explained next.
Figure 9 shows the geometry for calculating the speed at which the Bethlehem surface normal will rotate through space. At the radius RI = 41.4 M km, the speed VS of the surface normal can be calculated as
VS = RR x 360°/day = 35.22 M km x 360° x π/180 km/day = 221.3 M km/day = 9.22 M km/h.
The time T to traverse D then is
T=390 / 9.22x106 hours = 0.152 seconds, for 4α. For 2α, corresponding to ±15m discussed earlier, the time will be 0.076 seconds, as expected.
To be strictly correct, one needs to take into account the exact point of intersection of the orbits of Venus and the earth. As indicated in Figure 6, the orbit of Venus extends fractionally into the Bethlehem cone. The view from Venus at that furthest point of its orbit is shown in Figure 10. Venus travels through the cone along the trajectory A-A (the actual points of intersection) from right to left at speed V as derived above, while the cone is rotating in the opposite direction at speed VS.
The actual speed VA at which Venus will pass through the ‘above Bethlehem’ sector from east to west (as seen from Bethlehem) is therefore VA = V + VS ≈ VS since VS >> V. Should Venus pass through the cone along the trajectory A’- A’, Venus will still appear to be crossing this sector from east to west since VA ≈ VS and Bethlehem has rotated 90° further relative to its position in Figure 10.
Finally, one has to take into account the exact orbital positions of Venus and the earth relative to each other at any given moment in time. For Venus to appear ‘above’ Bethlehem at any point, both have to be in the correct position, i.e. Venus must be at the intersection between its orbit and the Bethlehem cone, and the earth must have rotated such that Bethlehem is exactly below Venus at that very moment. The calculation of the actual position and time of appearance of Venus above Bethlehem is a complicated mathematical problem and falls beyond the scope of this article. The reader will however realize the near impossibility of such an alignment of Venus, the earth and Bethlehem ever occurring.
The principles applied here for Venus are equally valid for any other star, comet or conjunction of planets to be identified as the Star of Bethlehem.
The above mathematical analysis of the concept of following a star clearly demonstrates that whoever wrote the account had no understanding whatsoever of astronomy.
Taking into account that
∙ It is impossible to look up with sufficient accuracy to bring you within 100 km of the house (α = 1°), and far less 10 km (α = 0.1°);
∙ The speed at which one has to travel to follow the projection of the star on the earth exceeds the speed of sound;
∙ The star would remain above the house in Bethlehem for less than a second;
∙ The time frame during which star could appear above the house in Bethlehem is fractionally small. For Venus this period is 58 seconds every 485 days (±15m from the house);
∙ The chance that the star, the earth and Bethlehem could ever align correctly in space and time is infinitesimally small; and
∙ All of this had to coincide with the birth of Christ,
one can categorically state that the possibility of the Magi having located the birthplace of Christ by following a star is identically zero.
Whatever the reason might have been for the inclusion of the Magi in the story of the birth of Christ, the following of a star had nothing to do with it, i.e. the Magi could never have followed a star to any place in Bethlehem and the Star of Bethlehem therefore never existed. Any attempt to identify a specific star, comet or conjunction of planets as the Star of Bethlehem is therefore a complete and utter waste of time.
References and Notes
1. Matthew 2:1-12, NIV Study Bible.
2. Numbers 24:17, cf. Matthew 2:6, Micah 5:1-4, 2 Samuel 5:2
3. The Infancy Gospel (Protovangelium) of James, Christian Classics Ethereal Library (CCEL[http://www.ccel.org/
4. Mark Kidger, The Star of Bethlehem, An Astronomer’s View, Princeton University Press, 1999. Kidger presents an excellent overview of historical and current views on the topic.
5. Wikipedia (online): Star of Bethlehem [http://en.wikipedia.org/wiki/
6. Catholic Encyclopedia, ‘Magi’[http://www.newadvent.
7. National Space Science Data Center (online): Venus Fact Sheet [http://nssdc.gsfc.nasa.gov/
8. Encyclopedia Britannica, Proxima Centauri
9. Encyclopedia Britannica, Earth
10. 2 Peter 1:19; Revelation 2:28, 22:16
11. Wikipedia (online): Axial Precession [http://en.wikipedia.org/wiki/
The article can be downloaded in PDF format (1MB) here.
- Hits: 9746 | <urn:uuid:44fd5888-d6d3-40d6-8e5b-320ce8ffe252> | CC-MAIN-2019-47 | https://riaanbooysen.com/the-star-of-bethlehem | s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496670948.64/warc/CC-MAIN-20191121180800-20191121204800-00059.warc.gz | en | 0.954105 | 5,148 | 3.1875 | 3 |
When Captain Picard of the USS Enterprise wants something done, he points his finger and orders someone to “make it so.” This catchphrase is what the retail giant Amazon almost called itself: “MakeItSo.com.”
Doesn’t have the same ring to it, right?
While there’s no finger-pointing involved, this is basically how home delivery services work these days. We browse through a catalog of options, we pick one, we press a button, and poof—it becomes a reality. Whether it’s a box of paper towels, a new book, or a hoverboard, you can get it delivered pretty quickly.
And though this technology seems really like something straight out of Star Trek, the processes underlying it actually go back hundreds of years, for as long as humans have been trying to make home delivery services work.
But the systems and processes that Amazon has become synonymous with really rely on an appetite created by a name that was once a retail giant that cast a shadow as large as Amazon’s: Sears, Roebuck & Co. In fact, Amazon’s success can largely be credited to their ability to perfect processes that Sears innovated and honed throughout the 20th century.
And where Sears failed, Amazon is still succeeding…
The Mail-Order Revolution
For most of human history, ordering something from far away was really tricky. Even in an era of globalization, the systems that transported things like dry goods, food, and mail were often unreliable. Entire ships got lost. Packages were mixed up. Mail was sent to the wrong address.
Still, merchants tried to make it work for them, from Dutch furniture builders to Welsh book peddlers in the 1600’s. Historians have found that for the most part, nothing sold, because the processes were so inefficient and untrustworthy.
Ben Franklin’s catalog contribution
That is, until they offered “satisfaction guaranteed,” a promise Benjamin Franklin, of all people, is credited with.
In addition to inventing bifocals, the founding father also invented the customer satisfaction promise. In 1744, he published “A Catalogue of Choice and Valuable Books, Consisting of Near 600 Volumes, in most Faculties and Sciences.” It was one of the first mail-order catalogs to be circulated in America.
Most importantly, it famously included an invaluable piece of text:
“Those Persons that live remote, by sending their Orders and Money to said B. Franklin, may depend on the same Justice as if present.”
Voila. The first ever mail-order guarantee.
With your purchase assured, mail-order catalogs started sounding really appealing. They were primarily used by book publishers and plant nurseries for a long time, since they were small, non-perishable items. These companies started printing and mailing in color to get more attention, even though it was really expensive.
Still, catalogs were only mailed to previous customers, and contained a pretty specialized selection of items. Until the Sears’ catalog came along.
In 1888, Richard Sears, along with his partner, Alvah Roebuck, mailed pamphlets advertising their watches to people all over the mid-Atlantic with mail-order service. They quickly expanded into other goods like bicycles, clothing, and furniture. With no brick-and-mortar store, Sears, Roebuck & Co. was able to spend a lot of money compiling a catalog that was really appealing.
A natural at writing ad copy, Richard Sears declared his 1894 catalog the ultimate “Book of Bargains: A Money Saver for Everyone,” and the “Cheapest Supply House on Earth.” He also made bold claims like “Our trade reaches around the World,” even though they only shipped within the U.S.
The catalog caused the company to take off. His partner, Roebuck, had to sell his shares of the business after the Panic of 1893, and remained attached to Sears, Roebuck, and Co. Lucky for him, it became a household name across the States.
Americans treated Sears the way we now treat Amazon. It was the go-to place for any item, from basics to hard-to find goods. It was the first ever “everything store.” Post World War II, in a boom of American consumerism (especially in malls), Sears opened brick-and-mortars from coast to coast.
A Sears in LA was the same as a Sears in New York was the same as the Sears catalog—they created brand consistency in a largely inconsistent market.
“The catalog was pure brilliance at a time when (America) was a far-flung nation without a lot of stores. It was really the Internet of the day — a place where anyone, at any time, in any place could take a look, say, ‘Oh my gosh, I need that’ — and get it.”
–James Schrager, a professor of entrepreneurship and strategy at the University of Chicago’s Booth School of Business.
Why Sears (and Amazon) Soared
Even though they’ve taken radically different directions, Sears and Amazon actually had a lot in common. Their early growth strategies and ability to become synonymous with an “everything store” is strikingly similar—especially when you look at some of the processes they put into place.
They took advantage of public roadways
America’s public post system was officially founded in 1775, when the Continental Congress finalized deals that had been in the works for a while—coincidentally, also implemented by Ben Franklin. But it didn’t really take off until the Homestead Act of 1863, when Americans moved West en masse and needed better ways to communicate.
As the federal government encouraged westward expansion in a sweeping frenzy of manifest destiny, they wanted to reassure Americans that they could stay in touch with their families back east. The government introduced free rural delivery in 1896 (just 8 years into Sears’ operation), as well as reduced postage for “aids in the dissemination of knowledge.” Because mailed marketing materials were so new, the Sears catalog qualified.
They could send their catalogs for just a penny—and they were basically the only ones delivering catalogs to such far-flung distances. Sales skyrocketed.
Amazon did an analogous process when the Internet rolled around—the next major overhaul in public information transportation. the same with the Internet—capitalized on something that was changing with the times.
Reliability’s the name of the game
Building reliable processes changed the game.
Even though Franklin promised “satisfaction or your money back,” not a lot of other people picked up on the phrase. There was a high chance that you would ship off your hard-earned money, get a bad product in the mail in a few weeks, and have nowhere to return it.
The Sears Catalog changed that. Even from the very first issue, it stated in bold letters: “We guarantee every article you select to be absolutely satisfactory to you, or you can return it to us at our expense and we will, without question, refund your money.”
Consumers loved that the transaction didn’t end when their goods arrived in the mail. They now had the opportunity to exchange goods, as if Sears—hundreds of miles away from their remote location—were a local dry goods store down the street. It was invaluable to settlers in the West.
Amazon now does the same thing with its online sales. If you’re not satisfied with a purchase, you can return it, no questions asked. Whereas other stores might just give you credit, or not accept a return, Amazon is know for its reliable refund process.
They turned customers into addicts
Sears also instituted rewards programs in their catalog that encouraged people to buy in bulk, buy with their neighbors, or buy goods on a calendar basis. Not only did the “consumer bible” carry everything, it rewarded big spenders.
In 1903, Sears initiated the first “customer profit sharing” program, which gave them a small amount of credit for every dollar they spent with Sears that could then be redeemed for other products.
With that kind of rewards system, it made sense for consumers to buy everything they needed at Sears—even the blueprints and materials for their new “department store house.”
Amazon Prime curated the same kind of customer loyalty.
Amazon’s innovative buy-in delivery system, Amazon Prime, was criticized at first because it made very little financial sense. But according to Brad Stone, Bloomberg journalist and author of The Everything Store: Jeff Bezos and the Age of Amazon, Prime paid dividends when it turned “customers into Amazon addicts who gorged on the almost instant gratification of having purchases reliably appear two days after they ordered them.”
Just take a look at one-click buying, or the Amazon dash button. And it really works. Prime members, it turns out, spend a whopping $1,500 on Amazon per year, which is almost double what average customers spend.
And it’s not just Amazon Prime. Other companies, like the discount shopping site Jet, have managed to coax would-be one-time buyers into consumers who keep coming back for more based solely on their rewards programs. Jet rewards customers for purchasing by discounting future purposes–kind of like a coupon book, but only for things they know customers want. It’s a huge success. Just one month after Jet launched this summer, it was already the #4 marketplace.
They re-invented customer happiness
Sears was one of the first companies to popularize customer reviews. While it was still a marketing ploy, each catalog came with glowing reviews for products from “unbiased” buyers as far back as 1893. Next to a girl wearing her brand new rain boots, you could find a review written by her mother saying how great they were.
Amazon extended this by being one of the first companies to regularly post any customer reviews, positive or negative. This feels counter-intuitive. Why allow consumers to post negative reviews of products, especially since it might deter potential customers from buying?
But Amazon, like Sears, is certain enough of its ability to satisfy the customer that it allows even negative reviews. It creates brand trust and loyalty. Customers rely on reviews to choose what to buy, and know what to avoid based on the 5-star system.
Where Sears Failed (and Amazon is Succeeding)
Sears used to be a staple. By the 1960s, 1 out of nearly 200 U.S. workers received a Sears paycheck, and 1 out of every 3 carried a Sears credit card. The company’s strengths came from anticipating American needs and honing processes that fit them.
The real problems arose when they stopped paying attention to those processes.
The company took a nosedive in the 80’s and 90’s, and the catalog that was once America’s go-to “store” has now been reduced to a struggling appliances and home goods market that doesn’t get a lot of attention or thought these days. It’s the place people don’t want to shop.
The reason for the company’s sudden failure can be boiled down to a few areas. Areas where, if you take a close look, it’s clear Amazon is picking up on and making sure they don’t make the same mistake with their processes in their similar quest to dominate American consumer spending.
Sears’ terrible delivery process
Although you might be used to seeing brick and mortar stores around, Sears actually started as just a mail-order businesses. Their stock was located in a couple of fulfillment centers—one in Chicago, one in Atlanta, one in LA, and a couple on the East Coast. It was faster than other companies, but still not really efficient.
In the early days, staffers received mail orders from customers, opened them, and assembled packages by rolling around the facilities on roller skates to collect the stuff, like waitresses in a bad 50’s-themed diner.
It didn’t make the process run very smoothly. Workers had to travel to different stories of the building to get items, often slipping in elevators because of their roller skates. In addition, things were cataloged by hand, so there was a lot of human error.
They were the best processes available at the time. But as more technology came out, including automation tools that could help with cataloging and delivery, Sears didn’t adapt. Rather than fix the processes, Sears relied on its power and name to continue being a goliath in the field. What did a few wrong addresses matter when you were the unstoppable Sears? The customer would have to come back to them, right?
Unfortunately, Sears wasn’t worrying about the rising number of competitors like J.C. Penney and Macy’s that were quickly catching up to them and stealing customers left and right. This kind of ego is exactly what led to their downfall. As scholar James Schrager of UChicago writes:
“Sears was so powerful and so successful at one time that they could build the tallest building in the world that they did not need. The Sears Tower stands as a monument to how quickly fortunes can change in retailing, and as a very graphic example of what can go wrong if you don’t ‘watch the store’ every minute of every day.”
The Sears tower was renamed the Willis tower in 2009—a symbol of changing times and Sears’ inability to change processes.
Amazon’s amazingly efficient delivery process
Amazon, on the other hand, is managing to stay agile while still constantly looking for ways to fill orders faster. They get about 35 orders per second, so they have no choice but to hone their process.
Here’s what goes down. Amazon fulfillment centers are located all over the world—some are the size of about seventeen football fields. They’re staffed by “pickers” who go through crazy hoops to fulfill orders, each equipped with a tracking device that tells them what the next item they have to fetch is, where in the building it’s located, and what’s the fastest route there. They put items in a trolley and bring them to a conveyor belt.
But Amazon’s “pickers” don’t operate anything like Sears’ roller-skating employees. They don’t assemble the packages for customers. They don’t even gather stuff for the same customer.
In fact, all the logistics are decided by a robot called the “mechanical sensei.” It’s a process they’re constantly trying to improve, which is why they’re obsessive about expanding the number of fulfillment centers they have worldwide.
“The invisible hand that orchestrates the symphony that is the Amazon fulfillment center is called the mechanical sensei.”
In addition, it’s not organized at all like Sears’ fulfillment center. Items in the fulfillment center are seemingly randomly placed (eg books are never next to other books). Even though it feels counterintuitive, it makes a lot of sense. This is actually to prevent human error and make sure that items are efficiently placed on the conveyor belt to save time.
You’re less likely to confuse an iPad case with a book than two similar brands of iPad cases.
Sears was a stagnant giant
In addition to not adapting processes, Sears didn’t adapt to the changing American economy.
American appetites were changing. Sears used to be really on top of the ball, anticipating consumers’ desires for mail-order delivery, then the shift to mall-shopping. But when stand-alone box stores hit it big in the 90’s, Sears didn’t jump on the bandwagon. Soon, the store was losing customers to rising companies like Walmart and Target.
Even people within the company recognized that the problems were because Sears was turning a blind eye to process. Arthur Martinez, Sears’ CEO from 1995 to 2000, writes:
“The challenges the company faces today are far worse than ever before, but they’re very much self-inflicted.”
In addition, other companies adapted to Sears’ model. Stores like JC Penney’s, Macy’s, and Land’s End began using the same mail-order catalog. Instead of changing with the times to keep customers loyal, they watched as customers slowly drifted away, until it was too late to do anything about it.
Amazon sets its eyes on the future
Amazon, on the other hand, is always looking for ways to change and improve. For example, they’re finding ways to digitize and computerize the system so that delivery systems are more reliable and more process-driven. One such process: drones.
They’re already in place in some Amazon fulfillment centers. And they’re developing more. They’ve even initiated the “Amazon picking challenge,” an open contest to design better warehouse robots. Still, the company reassures that there will always be humans working at the centers, just in tandem with robots.
In addition, Amazon is always branching out into more fields. Amazon Studios—which they spent over $100 million in one quarter alone to produce original content—is now the unlikely underdog that’s actually winning Golden Globes for its original programming, and Prime Music, which is seeking to take down rival streaming player Spotify.
By focusing on growing into new sectors and taking risks, Amazon ensures they won’t suffer the same fate as Sears—a sudden change in American appetites that they’re unprepared for.
The real difference here is that Amazon has a different model for growth than Sears did. Sears struck gold and relied on that model. Amazon uses a flywheel model, where growth propels more growth. It’s the not-so-secret to their business success—the process that keeps helping them dominate the field.
What’s particularly interesting to note is that the elements of Amazon’s flywheel are dynamic. Customer experience, for example, looked very different in 2000 than in 2016. By making sure that each element of the flywheel process is constantly being watched, they ensure that the model keeps working, and spurring more growth.
Amazon’s processes aren’t static. They adhere to them rigorously, but when things stop working, they fix it. If Sears had done that with their stores, maybe they’d still be the behemoth they once were.
Testing the mechanical sensei’s limits
Still, it seems like there might be some limits to what Amazon—and its impressive fleet of process-defying robots—can accomplish. For example, one of Amazon’s latest forays has been into food: Amazon Fresh. Can robots really pick out our groceries? John Mackey, co-founder of Whole Foods, predicts that AmazonFresh will be “Amazon’s Waterloo.”
This might be a frontier where drones simply don’t cut it.
All good things must come to an end?
Amazon is getting so good at these processes—in part because of their flywheel model—and building up so much customer loyalty, it might reach its logical extension: cornering the market. Will anti-trust authorities break up Amazon? It’s a highly debated subject, but worth noting because it might change how Amazon operates its business in the future.
Those processes that Dutch traders and Ben Franklin only dreamed of have become a reality thanks to efficient processes—and in the eyes of some anti-trust authorities—maybe too efficient.
It seems fitting, then, that Amazon wound up being named after the river with the most volume of water and not a command from Star Trek. Because while it almost embodies “MakeItSo.com,” it most certainly embodies volume and mass—perhaps so much that it will become a problem in the future.
Download our FREE 111 Page Ebook on Process Automation
Ever wished you could automate the stuff you hate doing at work? Then you need to check out The Ultimate Guide To Business Process Automation with Zapier!
We’ve created the perfect resource to get you saving time and money by automating your business’ processes.
From basic tasks such as saving Gmail attachments into Dropbox to shipping your Salesforce leads into Mailchimp, the Ultimate Guide will guide you through setting up the perfect Zaps to automatically handle the tasks that clog up your schedule.
Plus, with Zapier’s 500+ integrated apps, chances are that your favorite programs are just waiting to be linked!
What’s in the Ebook?
- What is Zapier – A Brief Introduction to Business Automation
- The Real Power of Zapier – Lookups, Filters and Multi-Step Zaps
- Zapier vs IFTTT: The Best Way to Automate Your Life?
- 222 Zaps You Can Start Using Right Now
- 50 Examples of Business Process Automation from the World’s Most innovative Companies
- 50 more Examples of Workflow Automation using Process Street | <urn:uuid:7f4d8b86-141a-4d28-ad16-4976d39a4592> | CC-MAIN-2019-47 | https://www.process.st/delivery-process/ | s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496669225.56/warc/CC-MAIN-20191117165616-20191117193616-00337.warc.gz | en | 0.96362 | 4,469 | 2.671875 | 3 |
Learn More About How Mouth Cancer Can Affect Your Dental Health
Are your dental habits leading you down the path towards mouth cancer? Discover the signs, symptoms, and risks associated with oral disease.
Did you know that your dentist can screen for cancer? Cancer of the mouth is one of the most common forms of the disease, and adults over 20 should get tested regularly! The Oral Cancer Foundation reported approximately 49,750 cases of mouth cancer in the United States last year. Oral cancer can wreak havoc on both your dental and general health, especially if you are a smoker! Take the time to learn the warning signs of oral diseases, get tested regularly, and avoid harmful cigarettes.
What is Oral Cancer?
Cancer, simply put, is an abnormal growth of malignant cells. Throughout our lives, our cells are always splitting and regrowing, though typically in a controlled fashion. However, whenever that process is altered, it causes the cells to grow in a rapid manner, clustering to form what is known as a tumor. Oral cancer can happen anywhere on or around the mouth, including on your lips, gums, cheeks, and soft palate tissues. It often appears in the form of a lesion or sore, and if left untreated, the malignant cells can spread to other parts of your body, and the result can be life-threatening.
So, how can you tell if you are developing signs of cancer and not another oral disease, such as gingivitis? While some symptoms, such as pain, bleeding, or swelling, can be present in both, other indications that you may have oral cancer include:
Sores that develop near or in the mouth that bleed persistently and do not heal
Swollen areas, such as lumps, bumps, or thickenings
Developing abnormal white or red patches on the skin or inside the mouth
Loose teeth or dentures
Pain in the tongue, jaw, or throat
Stiffness, numbness, loss of feeling, or significant tenderness in the face, jaw, or inside the mouth (which may cause difficulty breathing, eating, or talking)
Soreness or swelling in the back of the throat
Significant, unexplained weight loss
Who is at Risk?
Studies have shown that oral cancer is typically found more often in men than women. Men over the age of 50, specifically, should take extra care to take preventative measures against the disease. However, the most significant correlation is between oral cancer and tobacco use. The use of tobacco, whether smoked or chewed, can significantly increase your chances of getting oral cancer. Around 90% of mouth cancer patients have also reported tobacco use.
Tobacco Use and Oral Cancer
All types of tobacco, including those found in cigars, cigarettes, and pipes, come from tobacco leaves, and cigarette companies use a myriad of additives to make the experience even more “enjoyable” for users. The truth is, tobacco smoke is made up of over 7,000 chemicals, and at least 70 of them are known carcinogens (that is, cancer-causing). Not only can they cause diseases in your mouth, lungs, and heart, but cigarettes can ultimately be fatal. Roughly 30% of all cancer deaths in the United States are attributed to smoking, and there is no safe way to consume tobacco.
However, even non-smokers can develop cancerous tissue in their mouth, so it helps to know what else can increase your risk of oral cancer. Other causes of oral cavity cancer include:
Excessive alcohol use. In addition to being detrimental to your overall health, drinking a lot of alcohol can also erode your tooth enamel, making you more susceptible to other infections.
Exposure to the sun. As with other areas of your body, applying sunscreen to your lips and around your mouth can reduce your risk of melanoma and other cancers.
Certain diseases, like HPV. Known as the human papillomavirus, HPV is a sexually transmitted disease, so practicing safe sex and getting tested regularly can help to significantly reduce this risk.
Other factors, such as a family history of cancer or a weakened immune system may also put you at higher risk. Patients with these factors should be especially vigilant about taking preventative measures.
Protect Yourself, and Reduce Your Chances
The most proactive way to reduce your chance of infections, diseases, or cancer is by taking preventative measures. The best place to start is by quitting smoking! If you have never smoked or chewed tobacco, don't start! Even using tobacco once can negatively impact your health, including the integrity of your teeth and gums. Other ways you can lessen your risk of developing cancer are by eating healthy, by properly brushing and flossing your teeth, and by visiting your dentist on a regular basis! At Williams Dental & Orthodontics, we offer general services, including cleanings and oral cancer screenings. If you or a family member needs to schedule a dental consultation in Skiatook, OK, call us today at (918) 396-3711!
Eight Easy Home Remedies for Fast Toothache Relief
For many people, toothaches are an unfortunate part of life. A toothache can vary wildly in severity; some aches cause mild discomfort, while other aches are so intense that they keep a person up at night. A toothache can cause other symptoms such as jaw soreness, swelling, headaches, or even fever. If these symptoms persist, it is recommended that you see your dentist, as they may signal a dental emergency.
In the meantime, living with the pain of a toothache can be very frustrating. A toothache may last for a long time, depending on the individual's case. However, you don't have to live with the pain. There are some easy home remedies you can use for fast toothache relief. Here are eight ways to relieve a toothache at home:
1. Swish Salt Water in Your Mouth.
Infection is often the culprit behind a toothache, and salt water is a great solution to infection. Salt works as a cleaning agent, drawing out any infection and killing it. Salt water will also reduce inflammation with the gums, leading to a decrease in pain.
The beauty of this remedy is its simplicity. All you need to do is grab a glass of water and add one tablespoon of salt. Stir until the salt dissolves into the water. Once dissolved, take a mouthful of the salt water and swish it around for at least 30 seconds. Don't swallow the solution; instead, spit it out when you are done. Repeat this every few hours until the toothache disappears.
2. Dab Clove Oil Over the Pained Area.
Clove is an essential oil used for a variety of health problems: skin irritation, indigestion, headaches, stress, and dental issues. Clove oil contains antimicrobial, antiseptic, and antiviral properties that fight off infection. Clove oil also works as a mild anesthetic, leading to a slight loss of sensation when applied to the gums. Because of these effects, clove oil is great for countering a toothache.
If you happen to have this essential oil in your household, dab some clove oil drops onto a cotton cloth. Then gently rub this over the painful area. If you don’t have clove oil, you can buy this product from a pharmacy, a health store, or an online retailer.
3. Apply a Cold Compress or a Hot Pack.
Did you know that dental pain can be reduced by temperature? A cold application may be able to ease swollen gums by constricting blood flow to the affected area. This constriction will help to dull the pain of a toothache. Meanwhile, a hot compress is supposed to help draw out an infection.
If you have an ice pack or some ice cubes, hold them against the cheek near the aching tooth. If you're interested in using the hot compress, take a rag, soak it in warm water, and hold it against the affected area. Apply one of these compresses for at least 15 minutes.
4. Try Tea Bags.
Tea bags, particularly peppermint-flavored bags, can be used to soothe sensitive gums. Some tea bags even have numbing properties. When it comes to a toothache, you have two options with tea bags: you can make a cup of tea to drink, or you can place an unused tea bag directly over the affected tooth.
5. Swish Hydrogen Peroxide in Your Mouth.
If you have a bottle of hydrogen peroxide at home, then you are in luck. Hydrogen peroxide is a cleaning solution that can stop toothache pain in its tracks. Its antiseptic properties help to clean wounds and prevent infection.
Like salt water, hydrogen peroxide is a fairly simple remedy. Mix one tablespoon of hydrogen peroxide with one tablespoon of water, swish the solution around your mouth for about a minute, and then spit it out; do not swallow it. Your hydrogen peroxide bottle should also come with instructions for oral use, so be sure to follow those directions as well.
6. Use Over-the-Counter Oral Care Products.
In this modern age, there are products designed specifically for toothache relief. Oral pain products are often formulated with strong numbing ingredients that bring quick relief. Check your local pharmacy or an online retailer to find a quality oral analgesic. Any oral care product will typically come with instructions for application.
7. Employ Preventive Dental Care.
If you would like to avoid toothaches in the long term, then keeping up decent oral hygiene will help you achieve this. Visiting your dentist (https://www.williamsdentalok.com/) on an annual basis will ensure that any dental problems get taken care of in their early stages. Brushing and flossing twice a day will keep your gums healthy. Avoiding sugary foods will reduce the risk of cavity development.
8. Try Oil Pulling
This ancient remedy is surprisingly effective. Oils such as coconut oil, olive oil, or sesame oil can be used to clean the mouth. The oil works to reduce plaque, ease bad breath, and wash bacteria out of your mouth.
This is how it works: you take one of these household oils, swish it around your mouth for at least 20 minutes, and spit it out. Make sure that you don't accidentally swallow any of the oil during or after the 20-minute period; the oil is supposed to be pulling harmful bacteria out at this time. Once you are done oil pulling, brush your teeth as you normally would.
Key Steps for Preventing Tooth Decay and Boosting Oral Health
It is vital for everyone, including you, to keep your teeth clean and take care of your oral health. Otherwise, you may suffer from tooth decay, cold sores, and even thrush. There are key steps you can take to prevent tooth decay and improve your oral health.
Brushing and Flossing Your Teeth
If you want to keep your teeth and gums healthy, be sure to brush your teeth twice per day with a fluoride-based toothpaste. Brush your teeth before going to bed in the evening and after you have breakfast in the morning. Don’t forget to replace your toothbrush every three or four months. Use dental floss or interdental cleaners every day to remove any food that may be stuck between your teeth.
Use a mouthwash containing fluoride every day to boost your oral health. This will help you keep your mouth free of bacteria. Mouthwashes often have antiseptic ingredients that destroy the bacteria known to cause plaque.
The Right Diet to Keep Your Mouth Healthy
Another way to keep your teeth and gums healthy is to eat nutritious meals and snacks. You should avoid eating hard candy, chocolates, and junk food such as chips or pretzels. You will especially want to limit the amount of sugar you consume.
Add fruits and vegetables to your diet instead. You will want to follow the recommendations of the American Dietetic Association and the National Institutes of Health when it comes to nutrition.
Essentially, you will want to eat a balanced diet built on the five food groups, which include milk and dairy products; chicken, meat, fish, or beans; breads and cereals; fruits; and vegetables. It is important to avoid any new diets on the market that tell you to cut entire food groups from your meals.
When snacking, you will want to avoid sticky, sweet foods that cling to your teeth. This includes cake, candy, and even dried fruit. Instead, you may want to consume yogurt, nuts, and raw vegetables. Have a combination of different foods in order to neutralize the acids that form in your mouth.
If you eat something that gets stuck between your teeth, be sure to floss and brush your teeth at the first opportunity. Avoid smoking or chewing tobacco in order to keep your teeth clean, white, and healthy. Smoking and chewing tobacco can stain your teeth and cause cancer.
Drink plenty of water to keep your mouth moist and prevent dehydration. Saliva protects both your teeth and your gums. You should also drink fluoridated water to keep your teeth healthy. This will prevent tooth decay and other dental problems.
In addition, you may want to ask your dentist if you should use supplemental fluoride. This may also help you keep your oral health in top shape. Fluoride is known to strengthen teeth.
Visit Your Dentist Twice Per Year
Along with eating right, flossing, and brushing your teeth, be sure to visit your dentist twice per year so that future dental problems could be prevented. If you have a toothache or other dental issues, make an additional appointment with your dentist. Regular visits with your dentist can help you live a long life of superior oral health. You may live your whole life with your original teeth if you regularly see your dentist.
Also, you may want to talk to your dentist about getting dental sealants, which are plastic coatings that protect your teeth. You may want to get dental sealants to protect your back teeth from decaying. In addition, ask your dentist if any medicine you may be taking could damage your teeth and tell your healthcare professional about any mouth sores that haven’t healed.
How to Keep Your Child’s Teeth Healthy
If you want to help your child take good care of their teeth, start taking them to the dentist around one year of age, once their first teeth have come in. By age two, children should start brushing their teeth with toothpaste. If your child is older and plays contact sports, make sure they wear a mouthguard or other protective gear to shield their mouth and teeth from injury.
The Health Problems Associated with Poor Dental Hygiene
If you do not follow the recommendations above for taking care of your teeth and gums, there are serious dental problems you may face. The most common is cavities. When your tooth decays, you end up with a cavity that needs to be filled.
If you do not brush or floss your teeth regularly, the food and bacteria left in your mouth will decay your teeth. If you do not get a cavity treated by a dentist quickly, your mouth will be in pain and you may get an infection or even lose your tooth.
Another common health problem associated with inferior dental hygiene is gum disease. The plaque left on your teeth could cause gum disease in the future. This means that the tissue supporting your teeth becomes infected and your teeth may become loose over time. More importantly, gum disease is linked to heart disease, which is an excellent reason to floss and brush your teeth to keep your gums healthy.
Another serious medical concern that can occur due to smoking, drinking too much alcohol, or chewing tobacco is oral cancer. While poor oral hygiene will likely not increase your risk of oral cancer, consuming toxins in the form of tobacco or alcohol does raise your risk.
However, if you follow the advice above explaining how to keep your oral health in top shape, you will likely not have to worry about gum disease, cavities, or oral cancer. If you don’t have a dental provider, you will want to make an appointment as soon as possible. Please call Williams Dental & Orthodontics at 918-396-3711 Mondays to Fridays from 8:00 AM to 5:00 PM.
When dental crowns are lost, try not to panic. The experience can be unsettling and sometimes painful, but there are some actions that you can take in order to keep your crown and your tooth safe until you can see a dentist.
There are many reasons why your crown may have fallen out such as chewing hard foods or in one of the worst-case scenarios, tooth decay. In some cases, the adhesive holding on the crown eventually wears down and leaks out from underneath the crown after years of use. Trauma and grinding of the teeth are also big culprits when it comes to crowns falling off as well, so make sure to take precautions like using a mouth guard at night or during sports. If you notice that your crown is starting to become loose, your dentist can help to reattach it before it completely comes off.
If you don’t take care of the problem, the tooth below can quickly decay and cause additional dental problems. No matter what the condition of the tooth underneath the crown, it’s important to take action as quickly as possible to make the situation as easy to handle as possible.
Find the Crown
This seems like it should be obvious, but some people panic when they lose their crown and don’t think about saving it. In some cases, the porcelain crowns can be reattached to your tooth as long as your dentist at Williams Dental and Orthodontics is seen fast enough.
If you accidentally swallow one of your porcelain crowns, don’t worry. You probably don’t want to have it reattached, but it probably won’t cause any problems on its way out. In most cases, dental crowns fall out during eating or while brushing or flossing your teeth. When you find it, clean it carefully with a toothbrush and store it somewhere safe until you decide what to do next.
Call Your Dentist
Timing is everything when it comes to dental care. Try to make an appointment as quickly as you can. It’s likely that you won’t be able to get it on the same day that you call, but they may be able to get you an emergency appointment for a date in the very near future.
Protect Your Tooth or Replace the Crown
As long as you have the crown still and assuming there isn’t any severe decaying of the tooth it was attached to, it should be relatively easy to attach. You can purchase dental cement at most pharmacies that can temporarily reattach the crown in order to keep bacteria from entering the tooth and causing further damage.
Temporarily reattaching the crown with dental cement is a relatively simple process. After the crown and tooth are cleaned thoroughly, coat the tooth and crown with the cement and firmly press the crown back into place. Before you apply the cement, place the crown on the tooth and gently bite down to make sure that it is placed onto the tooth correctly.
If you can’t find the crown, it’s possible to just coat the tooth itself with dental cement or wax to protect the tooth.
It’s important to remember that this is only a temporary solution. While you may be able to reattach the crown, only a dental professional can ensure that it is properly placed and bonded to the tooth below to prevent it from coming off again.
Take Care of the Pain
Losing a crown can be painful because it can expose the nerves inside the tooth, and the pain can get worse while eating and drinking. The discomfort is usually relatively minor, but for some people it can become quite severe and hard to bear.
To dull the pain, you can either use local anesthetics that contain benzocaine or even clove oil if you would prefer a natural solution. These are to be applied directly to the affected area with a cotton swab in order to numb part of the pain.
In addition, over the counter pain medication that contains aspirin, ibuprofen, or acetaminophen can also help relieve some of the pain that is associated with the loss of a dental crown.
Take Care of Your Teeth
Oral hygiene is always important, but even more so after you lose a crown as the internal part of your tooth is exposed to bacteria and food debris. Make sure to brush the affected area gently as too much pressure and friction can loosen the crown again and cause further irritation. It’s also a good idea to rinse your mouth out with salt water after you are finished eating to help keep the area clean and infection free.
Watch Your Diet
As mentioned before, certain foods can cause additional pain after you have lost a crown. They can also damage the tooth underneath, which may still be exposed even if you have temporarily replaced the crown. It's important to avoid any additional damage before you can get in to see a dentist, so that replacing the crown goes as smoothly as possible and does not require further dental procedures.
Avoid food and drinks that are too sugary, hard to chew, extreme temperatures, or anything that is erosive or acidic. If the tooth below the crown is not taken care of, you may require a root canal or that it be completely removed.
Contact Williams Dental and Orthodontics in Skiatook, Oklahoma
Even if the crown simply needs to be replaced, you’ll still need to speak to a dental professional. If there are any signs of decay, they will need to be treated by a trained professional like those at Williams Dental and Orthodontics. Contact us today to make an appointment!
The one procedure that has captured the attention of many of our patients, is dental implants. But, what are dental implants? How do they work? How can you get them? These are the questions that everyone should ask before receiving a cosmetic dental treatment, and in this article we hope to give you a few quick answers to your most frequently asked questions.
What are implants?
In short, dental implants are regarded as a semi-permanent treatment alternative to many dental concerns. The procedure involves attaching a customized façade securely to your existing teeth, thereby filling in the gaps in your smile, correcting any discoloration, and leaving the teeth with a unified glistening-white appearance. This façade is wafer thin, and should be indistinguishable from natural, healthy looking teeth by the end of treatment.
Once you have received your implants, your teeth will still function normally, and usually require very little additional maintenance. In many cases implants can last decades with little-to-no specialized assistance.
What’s involved in the procedure?
Receiving dental implants is a complex treatment option that should only be done by a licensed and skilled dentist. Given the surge of interest in dental implants in particular, you can now find a general dentist to perform the treatment; for example, Dr. Brad Williams of WilliamsDentalOK.com in Skiatook, Oklahoma.
Receiving a dental implant is an outpatient surgical procedure that is (most often) performed in steps …
The unhealthy tooth is extracted
The jawbone is prepped for the procedure
After the jawbone has some time to heal, your doctor will outfit your jaw with a metallic dental implant post which will be used to support a prosthesis (like veneers)
The jaw is allowed further time to heal in preparation to receive veneers or some other kind of tooth-replacement
The doctor installs an abutment, which is an addition to the implant post
After taking a little more time to heal, your dentist will take impressions of your teeth (and jawbone) and implant the replacement teeth.
The entire treatment can require several months (from start to finish) to complete, however, most of this time is spent during the healing stages between trips to your doctor.
Should I consider dental Implants?
You should consult directly with WilliamsDentalOK.com before contemplating any type of voluntary treatment, like implants . Only qualified dentists, who are equipped with your full medical track-record and a professional relationship with you, can tell you for certain if implants are a good option for you. However, implants are used to resolve many common oral health problems. So, if you have experienced …
Chipped, broken, or worn-down teeth
Misaligned or uneven teeth
Then implants could be a real game-changer for your oral health, since they are commonly used to treat those specific conditions. And the benefits are tremendous …
Although brand new, your smile will look extremely natural
Teeth will be better able to resist staining
Color and alignment correcting for all of your teeth without painful braces or other orthodontic options.
Teeth will feel stronger and, in some cases, less sensitive
If you feel that dental implants are the right decision for you, then contact Dr. Williams today!
George Washington is arguably one of the most celebrated presidents we've ever had: not only a founding father of our country, but a great leader. His legacy isn't just leadership achievements, though; it also comes with some interesting myths. We've all heard the cherry tree story, in which he famously stated he "cannot tell a lie". But have you heard the claim that George Washington had wooden teeth? Well, it's sort of true.
It is no myth that George Washington had dental issues. At the time it wasn't hard to develop dental problems, especially in the military, and since dental technology was primitive at best, poor dental health was a constant problem throughout his entire career. Because of these issues, he wore various sets of dentures constructed of ivory, gold and lead. Wood was never even an option at the time. Myth busted.
So why do we think he had wooden teeth? Well, dental scientists and historians believe it was due to the ivory set he used. As with all of us, teeth become stained over time. Because they were made of ivory, these studies lead us to believe the stained ivory gave the impression of wood.
Believe it or not, when George Washington delivered his First Inaugural Address in 1789 he had only one natural tooth, so the dentures were definitely necessary. Because they weren't constructed as well as dentures are today, they were very painful to wear, and the pain forced him to keep a dour expression at all times. It is possible the myth was constructed to make President Washington seem more relatable and less remote.
If you have any questions or concerns regarding dental restorations, contact Williams Dental and Orthodontics at (918)396-3711 or williamsdentalok.com.
Williams Dental and Orthodontics proudly serves Skiatook, Sperry, Collinsville, Hominy, Owasso and all surrounding areas.
As soon as October hits, the air turns from Indian summer to chilly Fall. And just like that, out come the cutest little ghouls and goblins waiting to trick-or-treat. Yes, the tiny humans in your family are especially excited about this sugary nightmare of a holiday and there is little a parent can do. After all, we know that too much of a sugary thing will only lead to dental issues in the future. But there is no need to deprive your child of this sweetest of holidays, just keep an eye on what it could do to their teeth. The following is a list of your child’s Halloween dental health enemies:
Sour. Sour candies can ruin tooth enamel by combining sugar with acid; some reach a pH of around 2.5, and acidity at that level can cause serious damage to developing teeth, so keep your little ones clear of this fruity confection.
Sticky. It may seem fairly harmless, but sticky candy is some of the worst for teeth. Not only does the sticky texture stick to your teeth’s surface and in their crevices, but they can also loosen dental fixtures, like braces or fillings.
Gummi. Similar to caramels and other sticky candies, gummy candies also stick to the crevices of your teeth, making it difficult to remove with a brushing or a quick rinse of water.
Hard. Hard candy can damage teeth when bitten, and even if you don't bite it, sucking on hard candy isn't a quick process, which gives the sugar extended time to settle onto a tooth and cause cavities.
If you have any questions or concerns regarding pediatric dentistry, contact Williams Dental and Orthodontics at (918)396-3711 or williamsdentalok.com.
Williams Dental and Orthodontics proudly serves Tulsa, Skiatook, Sperry, Collinsville, Hominy and Owasso.
Getting New Natural Looking Teeth through Dental Implants
Dental implants are one of the best, if not the best, choices when it comes to replacing missing teeth. The results don’t just look natural, as if you never lost any teeth at all, but they are also very secure and are a lot more comfortable compared to bridges and dentures. While it is true that getting an implant is not by any means a quick process, the fact that the newly restored tooth or teeth look and feel natural are well worth the hassle.
Before treatment begins, your dentist will first assess whether your teeth and jaw are suitable for implants. This usually includes taking x-rays to check the bone density and mass of your jaw. Once the evaluation is finished and you are found to be a suitable candidate, the dentist will go over with you the different stages involved in planning the surgery.
The Placement of the Dental Implant
To start things, the dental surgeon will insert an implant into your jaw. This implant is usually made from titanium because this element rarely, if at all, produces any negative reactions with the human body. The implant will then be left to heal to allow osseointegration, or for the implant to fuse naturally into your jawbone.
The surgery won’t really take long, but you should still rest after. If you feel woozy or if there’s any discomfort, the use of over-the-counter painkillers is allowed.
After the implant has healed, the second stage involves uncovering the implant and, if necessary, performing another minor surgical procedure to attach a post; this step is not always required.
Lastly, the crown will be custom-made so that it fits right over the post. The dentist will take an impression of your implant and then send it over to a dental laboratory. The color or shade of your teeth will also be measured to make sure that the crown doesn’t stand out once it’s fitted in.
The final process usually takes around two weeks as the crown has to be custom-made first in the laboratory and then sent back to your dentist. Once it’s finished, the crown will be fitted and it should look and even feel as if you had your old tooth put right back in there.
This is where the process ends and where the dentist will tell you how to take proper care of your dental implant, as well as your teeth. If taken care of properly, dental implants can last for a decade or even more! In fact, it’s not surprising for the implant to last for the rest of an adult patient’s life.
For more information on dental implants or to make an appointment call Williams Dental in Skiatook, OK at 918-396-3711. Visit our website at http://www.williamsdentalok.com.
Here we describe a quick and simple method to measure cell stiffness. The general principle of this approach is to measure membrane deformation in response to well-defined negative pressure applied through a micropipette to the cell surface. This method provides a powerful tool to study biomechanical properties of substrate-attached cells.
Oh, M. J., Kuhr, F., Byfield, F., Levitan, I. Micropipette Aspiration of Substrate-attached Cells to Estimate Cell Stiffness. J. Vis. Exp. (67), e3886, doi:10.3791/3886 (2012).
Growing number of studies show that biomechanical properties of individual cells play major roles in multiple cellular functions, including cell proliferation, differentiation, migration and cell-cell interactions. The two key parameters of cellular biomechanics are cellular deformability or stiffness and the ability of the cells to contract and generate force. Here we describe a quick and simple method to estimate cell stiffness by measuring the degree of membrane deformation in response to negative pressure applied by a glass micropipette to the cell surface, a technique that is called Micropipette Aspiration or Microaspiration.
Microaspiration is performed by pulling a glass capillary to create a micropipette with a very small tip (2-50 μm diameter depending on the size of a cell or a tissue sample), which is then connected to a pneumatic pressure transducer and brought into close vicinity of a cell under a microscope. When the tip of the pipette touches a cell, a step of negative pressure is applied to the pipette by the pneumatic pressure transducer, generating a well-defined pressure on the cell membrane. In response to pressure, the membrane is aspirated into the pipette and progressive membrane deformation or "membrane projection" into the pipette is measured as a function of time. The basic principle of this experimental approach is that the degree of membrane deformation in response to a defined mechanical force is a function of membrane stiffness. The stiffer the membrane is, the slower the rate of membrane deformation and the shorter the steady-state aspiration length. The technique can be performed on isolated cells, both in suspension and substrate-attached, large organelles, and liposomes.
Analysis is performed by comparing maximal membrane deformations achieved under a given pressure for different cell populations or experimental conditions. A "stiffness coefficient" is estimated by plotting the aspirated length of membrane deformation as a function of the applied pressure. Furthermore, the data can be further analyzed to estimate the Young's modulus of the cells (E), the most common parameter to characterize stiffness of materials. It is important to note that plasma membranes of eukaryotic cells can be viewed as a bi-component system where membrane lipid bilayer is underlied by the sub-membrane cytoskeleton and that it is the cytoskeleton that constitutes the mechanical scaffold of the membrane and dominates the deformability of the cellular envelope. This approach, therefore, allows probing the biomechanical properties of the sub-membrane cytoskeleton.
1. Pulling Glass Micropipettes
Equipment: Micropipette Puller, Microforge.
Glass: Borosilicate glass capillaries (~1.5 mm external diameter, ~1.4 mm internal diameter).
- Micropipettes are pulled using the same basic approach that is used to prepare glass microelectrodes for electrophysiology recordings. Briefly, a glass capillary is heated in the middle and when the glass starts to melt the two halves of the capillary are pulled apart generating two micropipettes. Multiple commercial pullers are available to perform this process ranging from relatively simple vertical pullers that use gravity to pull the two pipettes apart to highly-sophisticated horizontal pullers that offer multiple programmable options to vary the velocity and other parameters of the pull. Both types of pullers were used in our experiments.
- Requirements for the geometry of the pipette tip: The tips of the pipettes used in these experiments typically range between 2 to 6 μm external diameter depending on the size of the cell. Another important parameter is the shape of the tip, which should approximate a cylindrical tube (see Figure 1). This can be achieved by optimizing the parameters of the pull and verifying the shape of the tip under the microscope until the desired shape is obtained. The optimal length of the pipette shank depends on the amount of expected deformation: if the deformations are small <10 μm, it is enough that the cylinder-like part of the pipette is also relative short (the same order of magnitude), for larger deformations adjust accordingly. In general, increasing the heat and/or increase in "pull" decreases the diameter of the tip. The increase in "pull" also generates a tip with a longer taper. In our experiments, using a Sutter P-97 horizontal pipette puller, the program was optimized to the following parameters: Heat of 473; Pull 22; Velocity 22, Time 200; Pressure 500. It is also possible to create cylindrical tips by pulling a very long shank and then breaking and polishing it. Detailed instructions of how to prepare different types of the pipettes are given in the Sutter manual.
- Microforge: It is also recommended to fire-polish the tip of the pipette to generate a smooth glass surface that makes a good seal with the plasma membrane. This is done by bringing the tip of the pipette close to a heated glass ball for a fraction of a second using a microforge. A similar technique is routinely used for preparing microelectrodes for electrophysiological recordings.
- Filling the Micropipette: Micropipettes should be filled with a physiological saline solution, such as PBS or non-fluorescent growth media. Importantly the solution should be supplemented with 30% serum that will allow the cell membrane to move smoothly into the pipette. Two approaches can be used to get rid of air bubbles in the tip of the pipette: (i) a tip of the pipette can be immersed in the solution first to allow the liquid to fill the tip by the capillary forces followed by backfilling the pipette from the other end or (ii) the whole pipette can be filled from the back end by gently tapping on the shank of the pipette to remove the bubbles from the tip.
- Note: Pipettes have to be prepared on the day of the experiment.
2. Preparation of Cells
- Seeding the cells: Microaspiration is performed on single cells that are either maintained in suspension or are attached to the substrate. To aspirate cells in suspension, cells are lifted from their substrates and pipetted into a shallow longitudinal chamber that is mounted on the inverted microscope right before the experiment. To aspirate substrate-attached cells, cells are seeded on small cover-slips (~10 mm diameter) that can also be placed into the microaspiration chamber before the experiment. The rationale to use a shallow longitudinal chamber is to allow a micropipette to approach the cells at a very shallow angle, as close to horizontal as possible. This is done to allow the membrane pulled into the micropipette to be visualized on a single plane of focus (see Figure 2).
- Visualizing cell membrane: To observe the membrane projection into the pipette, cellular membranes are stained with a lipophilic fluorescent dye, such as diI using a standard staining protocol.
- Warm PBS solution.
- Dilute the stock diI to a working concentration (5 μM) with the warmed PBS solution.
- Sonicate for 5 min to break dye aggregates.
- Spin down for 5 min and take supernatant.
- Wash cells in PBS 3 times, 5 min each.
- Incubate cells with the dye solution for 30 min in a 37 °C incubator.
- Wash the cells with PBS 3 times for 5 min each.
- Note: It is possible to substitute diI staining of the membrane with visualizing the sub-membrane cytoskeleton that is also being pulled together with the membrane into the pipette. It needs to be taken into the account, however, that perturbing the cytoskeleton may alter the biomechanical properties of the cell. Moreover, when performing microaspiration experiments with substrate attached cells, it is recommended to use 3D imaging to estimate the length of membrane projection into the pipette that is positioned at an angle to the focal plane of the cells.
3. Microaspiration and Image Acquisition
Equipment: Inverted Fluorescent Microscope, preferably with 3D deconvolution capabilities (Zeiss Axiovert 200M with computer controlled Z-axis movement of the objectives or an equivalent); videocamera connected to a computer (AxioCam MRm or an equivalent), Pressure Transducer (BioTek or an equivalent), Vibration-free station (TMD or an equivalent), Micromanipulator (Narishige, Sutter, Burleigh or equivalent; manipulators can be mechanical, hydraulic or piezoelectric). It is also important to emphasize that microaspiration can be performed using a microscope without 3D capabilities to estimate the stiffness of cells in suspension, such as red blood cells 1,2 or neutrophils 3, isolated organelles, such as nuclei 4 or artificial liposomes 5.
Image Acquisition software: Zeiss AxioVision or an equivalent.
- Mount the cells into a microaspiration chamber, as described above, on an inverted fluorescent microscope. Position the cells choosing a cell for an experiment and place it in the center of the visual field. It is important to perform these experiments in a vibration-free environment, particularly for the experiments with substrate-attached cells because minute vibrations that typically occur on benches and regular tables are likely to completely compromise the creation of the seal, break the tip of the micropipette or result in significant shifts in the position of the tip that will skew the analysis of the results.
- Place a micropipette filled with PBS/media w/ serum solution into a pipette holder connected to a power transducer by flexible tubing with the diameter adjusted to the connecter of the pipette holder for a tight fit. In the beginning of each experiment the pressure in the pipette is equilibrated to the atmospheric pressure. The pipette is mounted onto a micromanipulator that allows fine control of the pipette movements in a micron range. Position a pipette at a shallow angle to the bottom of the chamber and bring the tip of the pipette to the center of the visual field. The shank of the pipette, a cylindrical part of the pipette tip into which the membrane is aspirated, is aligned horizontally to the focal plane by (1) positioning the pipette at the shallowest angle possible (10-15°) and (2) by flexing the shank of the pipette against the bottom of the chamber. Because the shank is very thin it is flexible enough to slide on the bottom of the chamber while approaching a cell, as shown schematically in Figure 1. Slowly bring down the micropipette to the side of a single cell using the course manipulator until near the plane of focus for the cell. Then, using a fine manipulator move the micropipette to the edge of the cell until the tip of the pipette gently touches the membrane. Take one image to observe the position of the pipette. Good seals are created when the whole tip of the pipette is in full contact with the cell surface and the contact is stable. There is no strong objective criterion, however, about how good the seal is except for the visual examination.
- Apply a step of negative pressure using the transducer and maintain it until membrane projection is stabilized. The amount of pressure needed to aspirate the membrane into the pipette varies depending on cell type and specific experimental conditions. In our experiments, initial deformation is typically observed when applying pressure in the range between -2 to -15mm Hg. When the pressure is applied, the membrane is gradually deformed into the pipette until it is stabilized at certain length, a process that typically takes 2-3 min. During this time, images of membrane deformation are acquired every 30 sec to track the progression of the membrane that is pulled into the pipette.
- Increase the pressure to the next level in 2-5 mm Hg steps and repeat the whole procedure until membrane projection detaches from the cell and moves into the pipette, at which point the experiment is stopped.
To quantify the degree of membrane deformation, the aspirated length (L) is measured from the tip of the pipette to the vertex of the circumference of the membrane projection. It is important to note, however, that a larger pipette will apply more force on the cell membrane at the same level of pressure. To account for the variability between the diameters of the pipettes, therefore, the aspirated length is normalized for the pipette diameter (D) measured for each experiment.
The data can be further analyzed using a standard linear viscoelastic half-space model of the endothelial cell, as described in the earlier studies 6,7. Specifically, the elastic modulus of the cells was estimated using the equation E = 3aΔp·φ(η) / (2πL),
where E is Young's modulus, a is the inner radius of the pipette, Δp is the pressure difference, L is the corresponding aspirated length, and φ(η) is a wall function calculated using the force model, as described by Theret et al 7. It is important to note that multiple models have been used to analyze the microaspiration data including a finite element model that assumes that a cell is a deformable sphere with isotropic and homogenous material properties and liquid drop models, which assume that cells form a spherical shape, can deform continuously, and recover upon release, as described in several excellent reviews: 8-10. Microaspiration can also be used to investigate other biomechanical parameters of cells and tissues, such as cellular viscoelastic properties, cortical tension and contribution of different structural elements to cell and tissue biomechanics (see the reviews listed above for more information).
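To make the analysis step concrete, here is a minimal sketch, not part of the original protocol, of how the normalized aspirated length and the Young's modulus could be computed from a single pressure step using the half-space relation above. The wall-function value φ(η) ≈ 2.1 and the numerical inputs are illustrative assumptions only.

```python
import math

MMHG_TO_PA = 133.322  # conversion from mm Hg to pascals

def youngs_modulus(delta_p_pa, aspirated_len_m, inner_radius_m, phi=2.1):
    """Half-space model: E = 3 * a * delta_p * phi(eta) / (2 * pi * L)."""
    return 3.0 * inner_radius_m * delta_p_pa * phi / (2.0 * math.pi * aspirated_len_m)

# Illustrative measurement: -10 mm Hg step, 2 um projection, 4 um inner-diameter pipette
dp = 10.0 * MMHG_TO_PA   # magnitude of the applied suction, Pa
L = 2.0e-6               # steady-state aspirated length, m
a = 2.0e-6               # pipette inner radius, m

print("normalized length L/D:", L / (2.0 * a))
print("estimated E:", round(youngs_modulus(dp, L, a)), "Pa")  # on the order of 1 kPa
```

With these assumed inputs the estimate falls in the 0.1-10 kPa range typically reported for cultured cells, which is a useful sanity check on the units.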
5. Representative Results
In earlier studies, micropipette aspiration was performed either on liposomes 5 or on cells that were not attached to the substrate 2,11-13. In our studies, however, the cells are typically maintained attached to the substrate to avoid changes in the cytoskeletal structure that are likely to occur when cells detach 14-16. To validate the use of microaspiration technique for substrate-attached cells, we tested whether disruption of F-actin results in the decrease in cell stiffness of bovine aortic endothelial cells (BAECs), as estimated by this approach. Figure 3 shows that, as expected, this is indeed the case. Specifically, Figure 3A shows a typical series of fluorescent images of an endothelial membrane undergoing progressive deformation in response to negative pressure applied through a micropipette. As expected, the membrane is gradually aspirated into the pipette and the aspirated length increases as a function of applied pressure. The time-courses of the deformation show that disruption of F-actin significantly increases the aspirated lengths of the projections under all pressure conditions (Figure 3B) 14.
Using this approach, we discovered that cell stiffness increases when cellular membranes are depleted of cholesterol whereas cholesterol enrichment had no effect 14. Figure 4 shows a cholesterol-enriched cell, a control cell, and a cholesterol-depleted cell after reaching maximal aspiration lengths at -15 mm Hg (4A). The projections typically started to develop at -10mmHg and the time-courses of membrane deformation could be measured for the negative pressures of -10, -15 and -20 mm Hg (4B). Application of pressures above -25mmHg resulted in detachment of the aspirated projection forming a separate vesicle. The pressure level that resulted in membrane detachment was similar under different cholesterol conditions. This observation was highly unexpected because previous studies showed that in membrane lipid bilayers an increase in membrane cholesterol increases the stiffness of the membrane 5,17. Our further studies confirmed these observations using several independent approaches, including Atomic Force Microscopy 18,19 and Force Traction Microscopy 20.
Figure 1. Schematic side view of a recording pipette. The pipette is pulled to generate a cylindrical shank at the tip (side view). Micropipette parameters: D=2a=internal diameter and ED=2b=external diameter.
Figure 2. Micropipette approaching a substrate-attached cell. (A) Schematic side view; (B) Bright contrast image of a micropipette shank touching a typically shaped cell used in aspiration experiments; (C) Fluorescent image of the same cell labeled with DiIC18. The micropipette is still present but is invisible (From 14).
Figure 3. Validation of measuring cell stiffness in substrate-attached cells using microaspiration. A: Images of progressive membrane deformation of BAECs under control conditions and after exposure to latrunculin. The pipette is invisible on the images because it does not fluoresce. The cells were exposed to 2 μM latrunculin A for 10 min, which dramatically reduced the amount of F-actin, as measured by rhodamine-phalloidin fluorescence (not shown) but had no significant effect on the cell shape. In a latrunculin-treated cell, there is a thinning of the membrane in the middle of the aspirated projection but the projection is still attached to the cells. B. Effect of latrunculin A on the time courses of membrane deformation where L is aspirated length of the membrane projection and D is the diameter of the pipette for control cells (n=14) and cells exposed to 2 μM latrunculin A for 10 min (n=5). The cells were aspirated with -10 mm Hg (diamonds), -15 mm Hg (squares) and -20 mm Hg (triangles). (From 14.)
Figure 4. Effect of cellular cholesterol levels on membrane deformation of BAECs. A. Typical images of membrane deformation of cholesterol-enriched, cholesterol-depleted and control cells (control cells were exposed to MβCD:MβCD-cholesterol mixture at 1:1 ratio that had no effect on the level of free cholesterol in the cells (see inset). The images shown depict the maximal deformation at -15 mm Hg. The arrow indicates the position of the aspirated projection. The bar is 30 μm. B. Average time-courses of aspirated lengths for the three experimental cell populations. C. Maximal aspirated lengths plotted as a function of the applied pressure. The maximal normalized length in depleted cells was significantly lower than that of control cells for pressures -15 mm Hg and -20 mm Hg (P < 0.05). (From 14.)
Microaspiration provides a simple and highly reproducible method to estimate cell stiffness/deformability by applying negative pressure to a cell membrane and measuring membrane deformability in response to well-defined pressure. It was first developed by Mitchison and Swann (1954) to characterize the elastic properties of sea-urchin eggs to provide insights into the mechanisms of cell division 21 and then to look at the mechanical properties in red blood cells 1. This method has been used in multiple studies to assess the biomechanical properties of various cell types (e.g. 2,3,12,13). Our recent studies extended this method to estimate the stiffness of substrate-attached cells by combining microaspiration with 3D imaging 14,15,22. Membrane stiffness and elasticity provide critical information regarding cell phenotype and how cells respond to a dynamic environment, particularly to a variety of mechanical clues that are generated by hemodynamic forces of the blood flow or by changing the viscoelastic properties of the extracellular matrix. Indeed, earlier studies have shown that cell stiffness increases in response to fluid shear stress 13,23 and as a function of the stiffness of the substrate 16,24. Changes in cell stiffness are expected to have a major impact on cell mechanosensitivity and mechanotransduction. Specifically, we have shown recently that an increase in endothelial stiffness facilitates their sensitivity to flow 25. We have also shown that endothelial stiffening is associated with an increase in their angiogenic potential 22. Furthermore, a decrease in membrane stiffness and viscosity characterize proliferative late-stage ovarian cancer cells indicating that these properties can help differentiate early stage versus an aggressive malignant phenotype 26.
Measuring cell stiffness also may provide major insights into the structure and organization of the cytoskeleton. A key factor in determining cell stiffness is the sub-membrane cytoskeleton, particularly F-actin 6,14. Changes in cell stiffness may also reflect cytoskeletal re-arrangement in response to different signaling events, such as activation of Rho-GTPases 27, a major mechanism that couple between the membrane and the cytoskeleton 28. Furthermore, changes in cell stiffness can be detected even when no obvious changes in the structure of the cytoskeleton networks are observed 14 suggesting that this approach is more sensitive to subtle changes in the cytoskeleton structure than visualization of the networks.
In terms of alternative approaches, microaspiration provides a useful and inexpensive alternative to Atomic Force Microscopy (AFM) that is typically used to measure cell stiffness 29,30. In contrast to AFM that measures local membrane rigidity, microaspiration provides a global measure of the deformability of a cell. Other methods to estimate cellular biomechanical properties include various bead/particle approaches, such as magnetic twisting cytometry 31-33, particle tracking 34,35, as well as several other approaches, such as cytoindenter, a method that is somewhat similar to the AFM 36 and optical tweezers 37. While detailed comparative analysis of these techniques is beyond the scope of this manuscript, the main differences between them and microaspiration are the following: (i) Magnetic twisting cytometry involves applying an external force to a bead attached to the cells surface to cause local membrane deformation at the site where the bead is attached which is most important to determine the force-induced stiffening response 31; (ii) Particle tracking can be used for several different purposes, such as estimating the force that cells exert on the substrate (Traction Force Microscopy) 34 or estimate the stiffness of the internal "deep" layers of the cell by analyzing the motion of the beads or organelles inside the cell 35. In contrast, optical tweezesrs are used to pull membrane nanotubes (tethers) to estimate cortical tension and membrane-cytoskeleton adhesion 37. An alternative method to pull membrane tethers is to use the AFM, a method that has both significant similarities and differences as compared to optical tweezers 18,38. The advantage of most of these methods is that they may provide detailed spatial information about the biomechanical properties of the individual cells. However, all of these techniques require either very expensive equipment or sophisticated and not readily available software packages. The strength of the microaspiration, on the other hand, is that it is a good as other methods at providing quantitative results and differentiating among specific mechanical models of the cell without the requirement of the highly specialized equipment or software. In summary, while there are multiple approaches to assess the biomechanical properties of cells and tissues, microaspiration remains a useful and powerful approach to investigate cellular biomechanics.
No conflicts of interest declared.
|Name|Company|Catalogue Number|Comments|
|Sutter pipette puller|Sutter Instruments|P-97| |
|Inverted Fluorescent Microscope|Zeiss|Axiovert 200M|The microscope should preferably be equipped with 3D/deconvolution capabilities.|
|Image Acquisition software|Zeiss|AxioVision| |
|Pneumatic Pressure Transducer|BioTek|DPM-1B|The DPM1B Pneumatic Transducer Tester can now be found under FLUKE.|
|Pipette glass|Richland|Customized glass|Pipettes were customized with a 1.2 mm inner diameter and 1.6 mm outer diameter.|
|DiI Dye|Invitrogen|D282|Dissolves well in DMSO.|
- Rand, R. P., Burton, A. C. Mechanical properties of the red cell membrane. I. Membrane stiffness and intracellular pressure. Biophys. J. 4, 115-135 (1964).
- Discher, D. E., Mohandas, N., Evans, E. A. Molecular maps of red cell deformation: hidden elasticity and in situ connectivity. Science. 266, 1032-1035 (1994).
- Schmid-Schönbein, G. W., Sung, K. L., Tözeren, H., Skalak, R., Chien, S. Passive mechanical properties of human leukocytes. Biophys. J. 36, 243-256 (1981).
- Guilak, F., Tedrow, J. R., Burgkart, R. Viscoelastic properties of the cell nucleus. Biochem. Biophys. Re.s Commun. 269, 781-786 (2000).
- Needham, D., Nunn, R. S. Elastic deformation and failure of lipid bilayer membranes containing cholesterol. Biophys. J. 58, 997-1009 (1990).
- Sato, M., Theret, D. P., Wheeler, L. T., Ohshima, N., Nerem, R. M. Application of the micropipette technique to the measurement of cultured porcine aortic endothelial cell viscoelastic properties. Journal of Biomechanical Engineering. 112, 263-268 (1990).
- Theret, D. P., Levesque, M. J., Sato, F., Nerem, R. M., Wheeler, L. T. The application of a homogeneous half-space model in the analysis of endothelial cell micropipette measurements. J. of Biomechanical Engineering. 110, 190-199 (1988).
- Hochmuth, R. M. Micropipette aspiration of living cells. J. Biomech. 33, 15-22 (2000).
- Lim, C. T., Zhou, E. H., Quek, S. T. Mechanical models for living cells--a review. Journal of Biomechanics. 39, 195 (2006).
- Zhao, R., Wyss, K., Simmons, C. A. Comparison of analytical and inverse finite element approaches to estimate cell viscoelastic properties by micropipette aspiration. Journal of Biomechanics. 42, 2768 (2009).
- Chien, S., Sung, K. L., Skalak, R., Usami, S., Tozeren, A. Theoretical and experimental studies on viscoelastic properties of erythrocyte membrane. Biophys. J. 24, 463-487 (1978).
- Evans, E., Kuhan, B. Passive material behavior of granulocytes based on large deformation and recovery after deformation tests. Blood. 64, 1028-1035 (1984).
- Sato, M., Levesque, M. J., Nerem, R. M. Micropipette aspiration of cultured bovine aortic endothelial cells exposed to shear stress. Arteriosclerosis. 7, 276-286 (1987).
- Byfield, F., Aranda-Aspinoza, H., Romanenko, V. G., Rothblat, G. H., Levitan, I. Cholesterol depletion increases membrane stiffness of aortic endothelial cells. Biophys. J. 87, 3336-3343 (2004).
- Byfield, F. J., Hoffman, B. D., Romanenko, V. G., Fang, Y., Crocker, J. C., Levitan, I. Evidence for the role of cell stiffness in modulation of volume-regulated anion channels. Acta. Physiologica. 187, 285-294 (2006).
- Byfield, F. J., Reen, R. K., Shentu, T. -P., Levitan, I., Gooch, K. J. Endothelial actin and cell stiffness is modulated by substrate stiffness in 2D and 3D. Journal of Biomechanics. 42, 1114 (2009).
- Evans, E., Needham, D. Physical properties of surfactant bilayer membranes: thermal transitions, elasticity, rigidity, cohesion and colloidal interactions. Journal of Physical Chemistry. 91, 4219-4228 (1987).
- Sun, M., Northup, N., Marga, F., Huber, T., Byfield, F. J., Levitan, I., Forgacs, G. The effect of cellular cholesterol on membrane-cytoskeleton adhesion. J. Cell. Sci. 120, 2223-2231 (2007).
- Shentu, T. P., Titushkin, I., Singh, D. K., Gooch, K. J., Subbaiah, P. V., Cho, M., Levitan, I. oxLDL-induced decrease in lipid order of membrane domains is inversely correlated with endothelial stiffness and network formation. Am. J. Physiol. Cell. Physiol. 299, 218-229 (2010).
- Norman, L. L., Oetama, R. J., Dembo, M., Byfield, F., Hammer, D. A., Levitan, I., Aranda-Espinoza, H. Modification of Cellular Cholesterol Content Affects Traction Force, Adhesion and Cell Spreading. Cell Mol. Bioeng. 3, 151-162 (2010).
- Mitchison, J. M., Swann, M. M. The Mechanical Properties of the Cell Surface: I. The Cell Elastimeter. J. of Experimental Biology. 31, 443-460 (1954).
- Byfield, F. J., Tikku, S., Rothblat, G. H., Gooch, K. J., Levitan, I. OxLDL increases endothelial stiffness, force generation, and network formation. J. Lipid Res. 47, 715-723 (2006).
- Ohashi, T., Ishii, Y., Ishikawa, Y., Matsumoto, T., Sato, M. Experimental and numerical analyses of local mechanical properties measured by atomic force microscopy for sheared endothelial cells. Biomed. Mater. Eng. 12, 319-327 (2002).
- Solon, J., Levental, I., Sengupta, K., Georges, P. C., Janmey, P. A. Fibroblast adaptation and stiffness matching to soft elastic substrates. Biophysical Journal. 93, 4453 (2007).
- Kowalsky, G. B., Byfield, F. J., Levitan, I. oxLDL facilitates flow-induced realignment of aortic endothelial cells. Am. J. Physiol. Cell. Physiol. 295, 332-340 (2008).
- Ketene, A. N., Schmelz, E. M., Roberts, P. C., Agah, M. The effects of cancer progression on the viscoelasticity of ovarian cell cytoskeleton structures. Nanomedicine: Nanotechnology, Biology and Medicine. Forthcoming (2011).
- Kole, T. P., Tseng, Y., Huang, L., Katz, J. L., Wirtz, D. Rho kinase regulates the intracellular micromechanical response of adherent cells to rho activation. Mol. Biol. Cell. 15, 3475-3484 (2004).
- Hall, A. Rho GTPases and the Actin Cytoskeleton. Science. 279, 509-514 (1998).
- Okajima, T. Atomic Force Microscopy for the Examination of Single Cell Rheology. In. Methods Mol. Biol. 736, 303-329 (2011).
- Wang, N., Butler, J. P., Ingber, D. E. Mechanotransduction across the cell surface and through the cytoskeleton. Science. 260, 1124-1127 (1993).
- Fabry, B., Maksym, G. N., Butler, J. P., Glogauer, M., Navajas, D., Fredberg, J. J. Scaling the microrheology of living cells. Physical Review Letters. 87, 148102 (2001).
- Park, C. Y., Tambe, D., Alencar, A. M., Trepat, X., Zhou, E. H., Millet, E., Butler, J. P., Fredberg, J. J. Mapping the cytoskeletal prestress. American Journal of Physiology - Cell Physiology. 298, C1245-C1252 (2010).
- Munevar, S., Wang, Y., Dembo, M. Traction force microscopy of migrating normal and H-ras transformed 3T3 fibroblasts. Biophys. J. 80, 1744-1757 (2001).
- Wirtz, D. Particle-tracking microrheology of living cells: principles and applications. Annu. Rev. Biophys. 38, 301-326 (2009).
- Shin, D., Athanasiou, K. Cytoindentation for obtaining cell biomechanical properties. J. Orthop. Res. 17, 880-890 (1999).
- Ou-Yang, H. D., Wei, M. T. Complex fluids: probing mechanical properties of biological systems with optical tweezers. Annu. Rev. Phys. Chem. 61, 421-440 (2010).
- Hosu, B. G., Sun, M., Marga, F., Grandbois, M., Forgacs, G. Eukaryotic membrane tethers revisited using magnetic tweezers. Phys. Biol. 4, 67-78 (2007).
Many industrial processes require variable-speed drives for a wide range of applications. The shaded-pole motor is a split-phase type of single-phase induction motor. In a single-phase motor we have only a single field winding excited with alternating current; therefore, it does not have a revolving field like three-phase motors. A common laboratory objective, and the theme of work on V/f control of induction motor drives, is to observe and compare the performance of a three-phase motor with a VVVF (variable-voltage, variable-frequency) drive. This work introduces the design of a new three-phase squirrel-cage induction motor with the objective of achieving good efficiency. Forced-air heating systems have a fan motor. Refer back to this diagram as the operational requirements of the single-phase motor are discussed. The project is by no means trivial.

The equivalent circuit of a single-phase induction motor can be obtained by two methods, known as the double revolving field theory and the cross-field theory. A single-phase brushed motor is easy enough to control by simply varying the voltage reaching it, through phase-angle control, or with PWM (which in the end amounts to the same thing), but an induction motor is an entirely different animal. Although these electromechanical devices are highly reliable, they are susceptible to many types of faults. An electric motor converts electrical energy into mechanical energy, which is then supplied to different types of loads. The three-phase squirrel-cage induction motor can, and many times does, have the same armature (stator) winding as the three-phase synchronous motor; the difference in a single-phase machine is that it has only one active winding on the stator. A three-phase induction motor can be used with a PLC for almost any application. Because there is a voltage drop between the power source and the electric motor, single-phase motors are rated at either 115 volts or 230 volts.

Single-phase induction motors are not self-starting, whereas three-phase induction motors are self-starting. Here we will learn how a single-phase induction motor works. Alternating flux acting on a squirrel-cage rotor cannot produce rotation; only a revolving flux can. With a capacitor motor, simply connect a capacitor and plug the motor into an AC power supply to operate. Because polyphase motors are the most commonly used in industrial applications, we shall examine them in detail. Squirrel-cage induction motors sometimes exhibit a tendency to run at very low speeds (as low as one-seventh of their synchronous speed), a behaviour known as crawling. The single-phase stator winding produces a magnetic field that pulsates in strength in a sinusoidal manner rather than rotating.

(Figure: speed-torque characteristics of a three-phase induction motor.)
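Since synchronous speed, crawling at a fraction of synchronous speed, and the speed-torque characteristic all come up here, a small worked sketch may help tie them together. This is not taken from any of the sources quoted; the 4-pole, 60 Hz machine and the 1745 rpm operating point are illustrative assumptions.

```python
def synchronous_speed_rpm(freq_hz: float, poles: int) -> float:
    # Ns = 120 * f / P: speed of the rotating stator field, in rpm
    return 120.0 * freq_hz / poles

def slip(ns_rpm: float, rotor_rpm: float) -> float:
    # Fractional slip s = (Ns - N) / Ns
    return (ns_rpm - rotor_rpm) / ns_rpm

ns = synchronous_speed_rpm(60.0, 4)                      # 1800 rpm for a 4-pole, 60 Hz machine
print("synchronous speed:", ns, "rpm")
print("slip at 1745 rpm:", round(slip(ns, 1745.0), 3))   # about 0.031, i.e. 3.1 %
print("crawling speed near Ns/7:", round(ns / 7.0), "rpm")  # about 257 rpm
```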
Single phase induction Motor – Construction, Principle of Operation and Starting methods The single phase induction motor machine is the most frequently used motor for refrigerators, washing machines, clocks, drills, compressors, pumps, and so forth. For the same output a 1-phase motor is about 30% larger than a corresponding 3-phase motor. It reduces the starting current to the AC induction motors and also reduces the motor. With access to a large stock we are able to despatch most motors for next day to ensure a minimal downtime period. Instead, it pulses, getting first larger and then smaller but always remaining in the same direction. power supplies that are readily available at homes, and remote rural areas . Umanand, Principal Research Scientist, Power Electronics Group, CEDT, IISC Bangalore For more detail. HP Motor Voltage Service Factor. Here we will learn how does single phase induction motor work. Current search Single Phase Induction Motor. induction motor with a PWM modulated sinusoidal voltage. 00 180 370 750 750 1100 1100 1500 1500. 5 hp motor options are available to you, such as asynchronous motor, induction motor, and servo motor. The motor then runs as a single-phase induction motor. 5 A Locked rotor current : 87. This project uses a system that is specifically design to assess the different switching strategies to evaluate the performance parameters of a three-phase induction motor. Numerical Problems Induction Motors. Induction Motors • An induction motor has two magnetically coupled circuits: the stator and the rotor. At Industrybuying. /! • name the basic types of electric motors. develops and supplies state-of-the-art testing solutions, including different types of dynamometers, such as hysteresis brake, regenerative dynamometer, inertial dynamometers, (ranging from low-cost, single-function devices to high-performance modular systems), along with other testing devices for electric motors and engines. Disadvantages: While single-phase motors are simple mechanics-wise, this does not mean that they are perfect and nothing can go wrong. Circuitry is designed and external hardware selected to facilitate the control of a variety of induction motors. It is discussed later. A low-cost solution is delivered, complete with a bill of materials (BOM), schematic, code and PCB artwork files. They are: a) Split phase induction motor. Hence instead of d. Induction motor, also known as asynchronous motor, is a kind of AC electric motor. The main difference between single phase and three phase motors is that a single phase motor runs on a single phase power source, whereas a three-phase motor runs on a three-phase power source. For different loads and set speeds, voltage of. According to the different power phase, it can be divided into single-phase and three-phase. Basically,. The single phase to 3 phase VFD is the best option for a 3 phase motor running on single phase power supply (1ph 220v, 230v, 240v), it will eliminate the inrush current during motor starting, make the motor run from zero speed to full speed smoothly, plus, the price is absolutely affordable. Single-phase induction motor - Squirrel cage rotor Customer : Product line : Single-Phase : ODP (IP22) Output : 3 HP Frame : 184T Full load speed : 1745 Frequency : 60 Hz Voltage : 208-230 V Insulation class : F Rated current : 17. Over a power range up to 1500 kW, the range of IP65/IP55 enclosed or IP23 protected induction motors satisfies the demands of industrial processes for standard or special environments. 
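One of the nameplate-style fragments in this material quotes a 3 HP, 60 Hz motor with a full-load speed of 1745 rpm, which invites a quick back-of-the-envelope check of rated torque. The sketch below is illustrative only and simply applies T = P / ω to those figures; it is not part of any of the quoted sources.

```python
import math

HP_TO_W = 745.7  # mechanical horsepower to watts

def rated_torque_nm(power_hp: float, speed_rpm: float) -> float:
    # T = P / omega, with omega (shaft speed) expressed in rad/s
    power_w = power_hp * HP_TO_W
    omega = speed_rpm * 2.0 * math.pi / 60.0
    return power_w / omega

# Figures from the 3 HP / 1745 rpm nameplate fragment
print(round(rated_torque_nm(3.0, 1745.0), 1), "N*m")  # roughly 12.2 N*m at full load
```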
Single Phase induction Motor [1/Ch. Single Phase Motors. SINGLE-PHASE MOTORS 13. In addition, there are end bells, bearings, motor frame and other components. drive a three phase half H-Bridge power stage (or inverter) of the type commonly used in AC induction motor drives. It is logical that the least expensive, low-est maintenance type motor should be used most often. Explains NEMA motor standards. The negative effects of voltage unbalance on the performance of three-phase induction motors include: higher losses, higher temperature rise of the machine, reduction in efficiency and a reduction in developed torque . Space vector motion in an induction motor under step loads: 21. The main function of the PFD is to generate a carr. armature (c) commutator and (d) a set of brushes, which are short-circuited and remain in contact with the commutator at all times. Lets take a look at the motor stator that utilizes this power source. In general, the induction motor is cheaper and easier to maintain compared to other alternatives. 4 Magnetizing current, Flux and Voltage 60 6 Induction. Single Phase and Three Phase Motors are two different types of AC motors. voltages have on the operation and performance of a three-phase induction motor. Reversing and Dynamic Braking of Single-Phase Induction Motors By Dick Kostelnicek August 17, 2001 Introduction Single-phase induction motors drive many arbor-mounted cutting tools in the home workshop. Simply connect a capacitor and plug the motor into an AC power supply to operate. WEG-three-phase-induction-motors-master-line-50019089-brochure-english. Hysteresis Motor is a Single Phase Synchronous Motor with Salient Poles and without DC Excitation:. 17 A single-phase, 400V, 60Hz, series motor has the following standstill impedance at 60Hz. Stator: The stator is the outer most component in the motor which can be seen. Motors in that range are either capacitor-start with an internal switch or are capacitor start, capacitor run with an internal switch. • In a three-phase motor, the three phase windings are placed in the slots • A single-phase motor has two windings: the main and the starting windings. Keyword: Diagnosis, Electric Motor, Faults. If a 3-phase motor is spinning and you remove one of it's phases (as you did in lab last week), the motor keeps. The single-phase AC induction motor bests fit this description. What are the four main components of a singlephase motor?- 2. (ii) Because of constant torque, the motor is vibration free. With the U, V, W pairs that looks more like a three-phase motor modified to run on single phase. Three-Phase Induction Motors Single-Phase Induction Motors Reversible Motors Electromagnetic Brake Motors Clutch & Brake Motors Low-Speed Synchronous Motors Torque Motors Watertight, Dust-Resistant Motors Right-Angle Gearheads Linear Heads Brake Pack Accessories Installation CAD Data Manuals www. ) 6 : 60 mm sq. They are easier to maintain, cheaper and got rugged construction. The rotor in the induction motor is not energized. Split Phase Induction Motor. This formula then is changed to be the following. Contents: Fundamentals of single phase transformers : Introduction; Basics of magnetic circuits, Amperes law, linear and nonlinear magnetic circuits; Faradays law of electromagnet induction - Concept of an ideal transformer, assumptions; ideal transformer at no load and on load, phasor diagram, voltage current and power relations; basic construction of a practical single phase. 
This inverter converts DC output of the solar PV array in AC supply. control and soft-start to a single and multi-phase AC induction motor by using a three-phase inverter circuit. To appreciate the complexity of the drive for using 3 phase squirrel cage induction motor for traction application, let us start with speed torque characteristic of a conventional fixed frequency, fixed voltage squirrel cage induction motor shown in fig. EbookNetworking. Speed control in induction motors is difficult. The motor then produces a constant torque and not a pulsating torque as in other single-phase motors. Thanks for your reply Marvo. They are generally fitted in smaller electrical devices, such as power tools. Alternating flux acting on a squirrel cage rotor can not produce rotation, only revolving flux can. Capacitor Calculation For Single Phase Motor 07/13/2015 5:15 AM Gents, I have one 220 volts 0. The stator of a single phase induction motor is wound with single phase winding. thyristors acts like diodes. AC induction motors are optimal for uni-directional and continuous operation such as a conveyor system. Electrical engineering Mcqs for Preparation of Job Test and interview, freshers, Students, competitive exams etc. Lets take a look at the motor stator that utilizes this power source. However, before springing for a replacement, you may want to conduct some simple tests to see if the motor can still be repaired. The speed control of three phase induction motor is essential because the motor control industry is a dominant sector. are squirrel cage induction type which can be reversed by re-arranging connections to their terminals. to design water cooled single phase submersible For such motor the recommended winding wire as per motor which can be easily applicable to rapidly IS 8783(Part 2) is PVC insulated. CAPACITOR-START. This work is introducing the designing of new three phase squirrel cage induction motor with an objective of getting good efficiency. Explain the need for starters for three phase induction motors. SOLAS Electrical Course Notes – Module 2. According to the different power phase, it can be divided into single-phase and three-phase. Forced-air heating systems have a fan motor. A capacitor start three phase induction motor (PDF Download Available) 18 дек. They operate such devices as starters, flaps, landing gears, and hydraulic pumps. Motor Data and Wire Size Specifications Single and Three Phase Jet/Centrifugal Motors FW0016 0113 Supersedes 1109 Single Phase Motor Data 60 Hz 3450 RPM( ¹) Chart A Motor Model No. The single-phase high efficiency motor, or SHE-motor, uses capacitor banks to operate at a balanced three phase induction motor using a single phase source. Previously we talked about 3 phase induction motor construction, in this article, we will explain Working principle of single phase induction motor and construction of single phase induction motor … it's clear that you are interested in single phase motors. Single Phase Motors. Single phasing occurs as a result of several possibilities. Design of Single Phase Linear Induction Motor with Toroidal Winding Rahul A. Themotorisallowedtorunatnoload,andissuppliedwithvarious voltages at nofcmal frequency,rangingfrom 25 percentabove normaltoa value at which the motorwill just continuerunning. If the polarity of the line terminals of a dc series motor is reversed, the motor will continue to run in the same direction. Thus the rotor rotates in the same direction as that of stator flux to minimize the relative velocity. 
When three phase induction motor runs continuously, it is necessary to protect the motor from these anticipated faults. Single phase induction motors have types according to the way of their starting. This method is normally limited to smaller cage induction motors, because starting current can be as high as eight times the full load current of the motor. It presents a design of a low-cost, high-efficiency drive capable of supplying a single-phase a. A single phase brushed motor is easy enough to control by just varying the voltage reaching it, or through phase-angle control, or PWM (which in the end amounts to the same thing) but an induction motor is an entirely different animal. All you need is to connect a capacitor and plug the motor into an AC power supply and the motor can be easily operated. The stator has the three phase distributed winding. • list the common types of single-phase induction motors. In this module we will be discussing the three phase induction machine. 2019-05-19 (360) Rotary drive for gate valve from AUMA-Armaturenantriebe Ges. Among the conventional methods there is no one method by which single‐phase induction motors can be started, speed‐controlled and reversed. 3-Phase AC Induction Motor Vector Control, Rev. Information in this issue applies only to this type and may not be applicable to other types. Motors in that range are either capacitor-start with an internal switch or are capacitor start, capacitor run with an internal switch. A pulsating magnetic field is produced, when the stator winding of the single-phase induction motor shown below is energised by a single phase supply. 0 A Insulation class : F Locked rotor current (Il/In) : 6. There are a number of single phase motors on various pieces of equipment and what I will try to do here is to explain it as easily as possible. These poles are shaded by copper band or ring which is inductive in nature. An induction motor uses the principle of electromagnetic induction to cause the rotor to turn. The rotor connects the mechanical load through the shaft. Schematic diagram of a 4-phase 8/6 switched-reluctance motor: 22. 8 Duty cycle : S1 Service factor : 1. Single-phase induction motor - Squirrel cage rotor Customer : Product line : Single-Phase : ODP (IP22) Output : 7. of Electrical Engineering, Maharastra [email protected] Stationary grinders, table and radial arm circular saws frequently turn abrasive or cutting disks that are directly mounted on the motor's spindle. By proper control of the transistor switching,. In a split phase motor, the running winding should have (a) high resistance and low inductance (b) low resistance and high inductance (c) high resistance as well as high inductance (d) low resistance as well as low inductiance Ans: b 2. A three phase, 12 pole, salient pole alternator is coupled to a diesel engine running at 500 rpm. The hysteresis motor has a low noise figure compared to the Single phase Synchronous motors such that the load runs at uniform speed. They are a little trickier to make but will need single-phase or three-phase AC power to make them work. As mentioned above that, due to the rotating magnetic field of the stator, the induction motor becomes self starting. Syarikat Cathay Letrik, Johor Bahru, JB Single Phase and Three Phase Induction Motor Johor Bahru, JB Repairing, Rewinding, Services, Supplier. Matthias Wandel 541,386 views. EbookNetworking. This must then be. 0 No-load current : 6. 
Induction Motor Rotor • Owing to the fact that the induction mechanism needs a relative difference between the motor speed and the stator flux speed, the induction motor rotates at a frequency near, but less than that of the synchronous speed. The input power to a three phase induction motor is given by, Pin = 3 Vph Iph cos φ Where, φ is the phase angle between stator phase voltage Vph and the stator phase current Iph. Cycloconverter Drive for AC Motors Manishkumar M. PDF | The single-phase induction motor of the Problem No. What is common to all the members of this fam-ily is that the basic physical process involved in their operation is the conversion of electromagnetic energy to mechanical energy, and vice versa. In order to reduce the power consumption of single-phase AC motors, their speed can be regulated as required. 15 Design : ---. • AC induction motors are also the most common motors used in main powered home appliances. This complete article is for you. The General Electric Company began developing three-phase induction motors in 1891. At light load conditions, the induction motors take large starting currents and operate at a poor lagging power factor. The stator of a split phase induction motor has two windings, the main winding and s r m r s m r s L L i 2. and characteristics of commonly used single-phase motors. Single-Phase AC Induction Motors If two stator windings of unequal impedance are spaced 90 electrical degrees apart and connected in parallel to a single-phase source, the field produced will appear to rotate. series motor. These motors work at a power factor which is extremely small on light load (0·2 to 0·3) and rises to 0·8 or 0·9 at full load. Single phase induction motors generally have a construction similar to that of a three phase motor: an ac windings is placed on the stator, short-circuited conductors are placed in a cylindrical rotor. Efficiency determination and losses segregation of single-phase induction motors 61 If the parts of individual power losses Pli are calculated then: 1 (3) where Pli is the I part of individual loss, which number is k. Single Phase or ¼ Voltage Testing utilizes an AC voltage source (approximately 25% of the operating voltage) which is applied across a single phase of a three-phase motor. Nptel is a joint initiative from IITs and IISc to offer online courses & certification. The main advantages of induction motors are. Three phase induction motors are used where large amounts of power are required. It possess frame No. synchronous machine armatures and induction - motor stators above a few kW, are wound with double layer windings if the number of slots per pole per phase 𝒒𝒒= 𝑺𝑺 𝒎𝒎𝑷𝑷. M and Squirrel cage I. Three-phase induction motors are the most common and frequently encountered machines in industry - simple design, rugged, low-price, easy maintenance - wide range of power ratings: fractional horsepower to 10 MW - run essentially as constant speed from no-load to full load - Its speed depends on the frequency of the power source • not easy to have variable speed control • requires a variable-frequency power-electronic drive for optimal speed control Page 4 Three Phase Induction Motor ?. Lamparter electing a motor and connecting the electricals are the first challenges encountered after purchasing that long coveted machine tool. Single-phase motors generally operate at one speed and don't contain a device to select variable speeds. 
The motor then produces a constant torque and not a pulsating torque as in other single-phase motors. A 3 phase induction motor has two main parts (i) stator and (ii) rotor. Apart from lower power consumption, it allows easy extension of functionalities from power data logging, remote monitoring and control, emergency alarm etc. The main benefits of. Hi, do you have a good memory? If you; you will remember that single-phase induction motors are the most common in every place around us and the most used types of single phase motors are single phase induction motors and it’s the focus of our attention today working principle of single phase induction motors and types of single. These devices offer local motor disconnect means, manual ON/OFF control, and protection against short circuit, overload, and phase loss conditions. Each of these windings are constructed with numerous coils interconnected to form a winding. Why we use Capacitor in Single Phase Induction Motor?. all you want to know about transformers motors and windings Download all you want to know about transformers motors and windings or read online books in PDF, EPUB, Tuebl, and Mobi Format. ELECTRIC MOTORS OBJECTIVES After studying this unit, the student will be able to • state the purpose of an electric motor. This site is like a library, Use search box. Figure 4-11 shows a simplified schematic of a typical. They include – Brushless DC Motors – Stepping Motors – Single Phase Motors • The Universal Motors • Single-Phase Induction Motors. In a split phase motor, the running winding should have (a) high resistance and low inductance (b) low resistance and high inductance (c) high resistance as well as high inductance (d) low resistance as well as low inductiance Ans: b 2. Single phase induction motors generally have a construction similar to that of a three phase motor: an ac windings is placed on the stator, short-circuited conductors are placed in a cylindrical rotor. Either type may be single phase, two phase, or three phase. Traditionally, the stator resembles that of an induction motor; however, the windings are distributed in a different manner. In this lab, I am asking you to measure the properties of an intact copy of that motor to deter-. Check Answer, C. Speed and Direction of Rotation. Reversing and Dynamic Braking of Single-Phase Induction Motors By Dick Kostelnicek August 17, 2001 Introduction Single-phase induction motors drive many arbor-mounted cutting tools in the home workshop. Electrical engineering Mcqs for Preparation of Job Test and interview, freshers, Students, competitive exams etc. Single-phase AC Motors Single phase AC motors require a "trick" to generate a 2nd "phase" to develop starting torque Three common methods: split-phase (auxiliary winding is rotated 90°) capacitor shaded-pole. This is a single-phase induction motor, with main winding in the stator. In this case, one of the motor terminal gets connected to the other, when the motor is running. When it comes to controlling the speed of induction motors , normally matrix converters are employed, involving many complex stages such as LC filters, bi-directional arrays of switches (using IGBTs) etc. The Thermal overload is supplied as a separate item. This paper investigates the speed control performance of single-phase induction motor using microcontroller 18F2520. The rotor is separated from the stator by a small air-gap which ranges from 0. Construction of Three Phase Induction Motor: Figure 8. 
THE AC MOTOR Figure 5 Figure 6 Application Technology D Fundamentals of Polyphase Electric Motors. Some single-phase induction motors are also called squirrel cage motors because of the rotor's similarity. Numerical Problems Induction Motors. Induction motor with. Comparing with single phase motor, three phase induction motor has higher power factor and efficiency. sm sm sm ci sm ci sm ci ci ci 0. The Selection, Connection, Reversing and Repair of Electric Motors by RobertW. The system has been tested on different loads. 7) creates the net magnetic field Bnet. There are two general types of ac motors used in aircraft systems: induction motors and synchronous motors. The best solution for starting the 3-phase induction motor on a single phase supply (for Wye connection standard), ‘Cs’ must be at setting about 92% of the formula (4). Unlike polyphase induction motors, the stator field in the single-phase motor does not rotate. At light load conditions, the induction motors take large starting currents and operate at a poor lagging power factor. These machine are used to drive fans, pumps, air compressors, refrigeration compressors,. However, if the motor shaft is "manually" spun it may begin to turn and accelerate slightly. The electrical diagram below illustrates both a star (wye) connected and a delta connected three phase AC induction motor supplied with a single phase source voltage which shows the connection of a capacitor on different points of the motor terminal to achieve the dual rotational direction of the motor. Power Supply Test For single phase motors, the expected voltage is about 230V or 208V depending whether you are using the UK or America voltage system. All the thyristors are fired up at a=0° firing angle, i. Characteristically, linear induction motors have a finite primary or secondary length, which generates end-effects. current (dc) motor or generator, the induction motor or generator, and a number of derivatives of all these three. Traditionally, the stator resembles that of an induction motor; however, the windings are distributed in a different manner. But this project is for single phase induction motor only. Load Test on Single - Phase Squirrel Cage Induction Motor Aim: To conduct the load test on single phase squirrel cage induction motor and to draw the performance characteristics curves. on more efficient single-phase and three-phase alternating current (AC) electric motors, permanent magnet motors, and variable speed drives. 1 What is the formula to calculate the slip speed (N slip) of 3 phase squirrel cage. This decreases the starting current, giving a reasonably good accelerating torque at a good power factor even with low resistance cage motors. Advantages of 3 phase induction motor • Generally easy to build and cheaper than corresponding dc or synchronous motors • Induction motor is robust. This is called phase splitting. 1 Introduction The popularity of 3 phase induction motors on board ships is because of their simple, robust construction, and high reliability factor in the sea environment. voltage, locked rotor, phase reversal, ground fault, under voltage and over voltage. For this machine, however, because the stator must be connected to the three-phase circuit, the difference between being a motor or functioning as a generator lies in the speed of the rotor. 5 hp motor products. Exico cannot be held responsible for a damage caused by incorrect wiring !! Note !. 
Lamparter electing a motor and connecting the electricals are the first challenges encountered after purchasing that long coveted machine tool. Single-phase Induction Motor The winding used normally in the stator (Fig. Forced-air heating systems have a fan motor. 1 Balanced, Symmetric Three-phase Currents 58 5. The Three-Phase KIIS Series offers an optimally designed, high efficiency next generation Three-Phase motor that. The unstable region of operation is shown dotted in the Fig. how the rotor rotates when. Lets take a look at the motor stator that utilizes this power source. The output of this inverter is fed to the single phase induction motor. The focus will be on PSpiceTM, which is. Why it is not self starting ? it's expaination using double feild revolving theory. An induction motor's rotor can be either wound type or squirrel-cage type. While a single-phase motor is normally resilient and can operate unhindered for decades, there will come a time when it will break down and cause problems. AC induction motors are optimal for uni-directional and continuous operation such as a conveyor system. cycloconverter was used on a single phase induction motor and the result shows that it can vary the speed of an induction motor. Proposed Work. Define the Slot Data. A 3 phase AC induction motor photoed by Hordu Motor. local single phase a. furnishing, pump driving and crane driving etc. Answer: 1. Single-Phase AC Induction Motors Single-phase induction motors are optimal for uni-directional and continuous operation such as a conveyor system. The latter is short-circuited. Cycloconverter Fed Split Phase Induction Motor Fig. it's construction and working. Therefore, the motor is called a split-phase motor because it starts like a two-phase motor from a single-phase line. com Technical Support TEL: (800. A guide about how to identify starting and running winding of single phase induction motor or identifying start - run and common of one phase motor Electrical Online 4u A platform to learn electrical wiring, single phase, 3 phase wiring, controlling, HVAC, electrical installation, electrical diagrams. Motor Data and Wire Size Specifications Single and Three Phase Jet/Centrifugal Motors FW0016 0113 Supersedes 1109 Single Phase Motor Data 60 Hz 3450 RPM( ¹) Chart A Motor Model No. Unlike polyphase induction motors, the stator field in the single-phase motor does not rotate. Since the rotor impedance is low, the rotor current is excessively large. Comparing with single phase motor, three phase induction motor has higher power factor and efficiency. There are two general types of ac motors used in aircraft systems: induction motors and synchronous motors. Induction Motor Rotor • Owing to the fact that the induction mechanism needs a relative difference between the motor speed and the stator flux speed, the induction motor rotates at a frequency near, but less than that of the synchronous speed. There are single phase induction motors and three phase induction motors. Single Phase Induction Motor. The main difference between single phase induction motor and three phase induction motor is that single phase motors are NOT self-starting so they require some starting mechanism while three phase motors are self-starting. Almost 80% of the mechanical power used by industries is provided by three phase induction motors because of its simple and rugged construction, low cost, good operating characteristics, the absence of commutator and good speed regulation. 
The contacts are opened by pressing the "STOP" button. jpg 4,000 × 3,000; 2. The machine is started in the usual way and runs unloaded from normal voltage mains. Starting of 3-Phase Induction Motors The induction motor is fundamentally a transformer in which the stator is the primary and the rotor is short-circuited secondary. The angular position dependency of the rotor shaft is. Single-phase induction motor - Squirrel cage rotor Customer : Product line : Single-Phase : ODP (IP22) Output : 3 HP Frame : 184T Full load speed : 1745 Frequency : 60 Hz Voltage : 208-230 V Insulation class : F Rated current : 17. The terms single or two-phase configurations refer to the supply voltage systems applied to the windings terminals. We know that 'single phase induction motors are not self-starting '. • In a three-phase motor, the three phase windings are placed in the slots • A single-phase motor has two windings: the main and the starting windings. Let’s move on to induction motor drives. Huge variety of single phase motor online. Speed control of single-phase induction motors is desirable in most motor control applications since it not only provides variable speed but also reduces energy consumption and audible noise. Therefore, these. Since the rotor impedance is low, the rotor current is excessively large. com Introduction Three phase motors can be classified into two types: induction and synchronous. Single Phase Induction Motor Wiring Diagram Electronic starter for single phase motor is used for protecting motor from over A single- phase-motor starter wiring diagram is shown in the below figure. Manual motor Protectors. The motor will continue to try to drive the load…until the motor burns out…or until the properly sized overload elements and/or properly sized dual-. After analyzing this document, the reader can easily integrate the reference design and associated. Firing angle for positive converter. Single-phase induction motors are used extensively for smaller loads, such as household appliances like fans. and characteristics of commonly used single-phase motors. Motors, AC motor, DC motor, Induction motors, Brake motors, Speed Variable motor, Single phase motor, 3 phase motor, Gear motor, Gear reducer, Small motors. MachinesConstructional details - emf equation - Methods of excitation - Self and separately excited generators - Characteristics of series, shunt and compound generators - Principle of operation of D. The single phase induction motor is not self starting and hence the winding is splited into two as main and auxiliary winding which are 90 degrees electrically displaced. • The motor uses a squirrel cage rotor, which has a laminated iron core with slots. Induction motors are still among the most reliable and important electrical machines. 8 Duty cycle : S1 Service factor : 1. At the very least, a full H bridge drive will likely be needed. The system of differential equations representing the single phase induction motor model is developed and formulated. Single phase series motor, theory of operation performance and application, Shcrage motor,. Single Phase AC System vs Three Phase AC System (with Picture) Motor operates at 400 V or 300 V. Efficiency determination and losses segregation of single-phase induction motors 61 If the parts of individual power losses Pli are calculated then: 1 (3) where Pli is the I part of individual loss, which number is k. 
Multiple choice questions here are on topics such as Electrical Machines, Power Electronics, Electrical measurement & units, Utilization of Electrical Energy, Basic Electrical Engineering, Electrical. A low-cost solution is delivered, complete with a bill of materials (BOM), schematic, code and PCB artwork files. 8 16 20 24 34. Analyze the effect of changing different parameters of the induction motor on its performance and design. TECO Single Phase Induction Motors B56 Totally Enclosed Fan-Cooled Rolled Steel Imperial Frame BEGY Totally Enclosed Fan-Cooled Rolled Steel Metric Frame 71 - 80. Both types are popular but for larger sizes the induction motor is the most highly. | <urn:uuid:61895c9b-b2b9-471a-a8c8-6831b605b62a> | CC-MAIN-2019-47 | http://pbsb.top-jeunes-talents.fr/single-phase-induction-motor-pdf-nptel.html | s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496670601.75/warc/CC-MAIN-20191120185646-20191120213646-00097.warc.gz | en | 0.895288 | 7,445 | 2.65625 | 3 |
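As a brief worked example of the speed relations described above, take the 60 Hz, 1745 r/min full-load rating quoted in the motor data and assume a 4-pole machine (the pole count is not stated, so this is an illustrative assumption). The synchronous speed and slip are then:

$$N_s = \frac{120 f}{P} = \frac{120 \times 60}{4} = 1800 \ \text{r/min}$$

$$s = \frac{N_s - N}{N_s} = \frac{1800 - 1745}{1800} \approx 0.031 \quad (\text{about } 3\%)$$

A slip of only a few percent at full load is typical of squirrel-cage machines, which is why they are described as running at essentially constant speed from no load to full load.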
For those who don't know, I am currently enrolled at the Vermont Center for Integrative Herbalism in Montpelier, Vermont. This is the first essay in a series of articles that I will publish on the subject of Holistic Human Physiology. Please feel free to comment below.
- Define Human Physiology
Human physiology is the study of the normal functioning of the human organism and its processes.
- Differentiate between allostasis and homeostasis
It is essential for herbalists to understand and be able to communicate the concepts of allostasis and homeostasis. Homeostasis is the more commonly used term and encompasses the self-regulating actions of an organism. We can see this in basic bodily functions such as the regulation of mineral salts in the body. If we spend all day outside in the hot sun, our bodies will sweat in order to cool us down and regulate our temperature. Once we start sweating, we lose water and electrolytes. Our bodies recognize the need for these critical substances, so we become thirsty and seek out water and salts.
Allostasis is a more recent term which takes into account all manner of inputs to the organism. Genetics, nutrition, parents, environment, education: all of these things and more make up the foundation that we are built on. The stronger the foundation, the higher the levels of stress we can endure. Organisms with poorer foundations can bear a smaller allostatic load and are more prone to disease.
All systems have an input and an output; this is the nature of reality. Living organisms must obtain a variety of factors in order to survive, thrive, and reproduce. If the organism is unable to obtain everything it needs, or if it is obtaining poor-quality versions of what it needs, the system breaks down faster than it would otherwise, and numerous other problems arise as the organism tries to compensate for the lack of proper input.
There are a variety of ways in which human beings can push themselves beyond their allostatic load. We all start with the genetic framework that is bestowed upon us by our parents. This may be a blessing or a curse, but hopefully we enter the world with parents who are able to provide us with the safety, love, nutrition, interaction, and wisdom needed to thrive in this world.
Let us take the example of two different children and assume that they have roughly the same genetic predisposition. Now let's put these children into two entirely different environments. In the first environment, the child is raised as healthily as possible: they are not vaccinated, they do not experience trauma from circumcision or parental abandonment, they are breastfed by a healthy mother, they receive plenty of proper nutrition, they live and sleep in a non-toxic environment, and they are exposed to plenty of fresh air, sunshine, and playtime. In the second environment, the child is heavily vaccinated, is not breastfed but is instead fed formula made from GMO corn and soy and synthetic vitamins, and sleeps in a toxic environment; perhaps the parents are smokers or use drugs or alcohol; the list goes on and on. All of these factors will establish the foundation for the allostatic load that the child is able to bear throughout life. This does not even address the complexity of genetic variance, which is also essential to consider.
This breast milk article describes the importance of breastfeeding. It cannot be overstated how important it is for mothers to breastfeed their babies.
If the child grows up in a poor environment, the overall allostatic load they will be able to bear will be significantly lower than that of their counterpart with the solid foundation. The child who grows up in the healthy environment will have an easier time dealing with stress, whereas the child from the poor environment will have a more difficult time and will be much more likely to exceed their allostatic load. When the children do exceed their allostatic load, the child with the strong foundation will recuperate faster than the child with the poor foundation. These things seem like common sense, yet they have been forgotten or misunderstood by much of the Western world.
It would be intriguing to build a long-term study that compares the allostatic load of different people and analyzes how they were raised, what kind of diet they were fed as children, what sort of environment they were brought up in, and so on.
- Identify the components of a cell and compare their basic functions
The mechanics of the cell are fascinating and even more so when we look at the organism from a holistic point of view. Let’s examine each component piece by piece.
Plasma membrane – The plasma membrane is a semi-permeable protective barrier that allows certain things to enter and leave the cell. The primary building blocks of this membrane are phospholipids. Phospholipids are composed of three components: 1) a phosphate head, 2) a glycerol backbone, and 3) two fatty acid tails. The two fatty acid tails are held to the phosphate head by the glycerol backbone, making the molecule look like a tadpole sort of creature. The phosphate headgroup is hydrophilic, or water-loving. Because the fatty acid tails are long carbon chains, they are what is called hydrophobic, or water-repelling. The phosphate heads stay in contact with water, while the fatty acid tails are directed away from water, thus creating the inner space of the membrane. This dynamic is what allows these molecules to form bilayers.
Within the cell is an array of components that all have important roles. Let's first look at the Nucleus, the control center of the cell. The nucleus contains the DNA of the cell, which is the information required for the cell to copy itself. DNA is essentially the blueprint that is used to make proteins over and over again. Damage to the organism over time results in degradation of the DNA, which continues to be copied until the organism can no longer sustain itself.
The Nucleus also contains the Nucleolus, which is responsible for making ribosomes. Ribosomes are organelles made up of a large and a small subunit. Together the subunits form a mechanism that reads a string of messenger RNA and creates proteins for use throughout the body. The nucleus itself is commonly called the "control center" of the cell, primarily because it contains the DNA, which is the blueprint for every protein that the organism needs to continually rebuild itself.
Outside of the nucleus is the Cytoplasm, which is composed of the Cytosol, various organelles, and protein fibers that make up the physical structure of the cell. The Cytosol is the fluid in which all of these other structures float. Within this fluid is a concentration of sodium and potassium ions that are involved in a variety of communication processes throughout the cell.
Each cell is able to maintain its physical structure thanks to a Cytoskeleton. The cytoskeleton is a network of protein fibers that literally act as a skeleton and give shape to the cell. It is made up of four components: 1) the Microvilli, which increase the cell surface area; 2) the Microfilaments, which form a network just inside the cell membrane; 3) the Microtubules, which are the largest of the fibers making up the cytoskeleton; and 4) the Intermediate filaments, which include proteins such as keratin. All of these structures can essentially be seen as a kind of scaffolding or interior frame that provides structure for the cell as well as assisting in communication and transport within the cell.
Mitochondria are the energy producers for the entire cell. They exist somewhat autonomously within the cell, but they have a synergistic relationship with it, so they are not perceived as a threat. The cell provides a safe haven for the mitochondria to live in, and the mitochondria create the ATP that the body needs to live. We are just beginning to study how certain herbs can benefit the mitochondria and our production of ATP. The link below describes the use of Cordyceps and Ginseng as potential ATP boosters.
The Endoplasmic reticulum is the network of membranes where the majority of protein synthesis occurs. "Rough" endoplasmic reticulum is covered in ribosomes, while "smooth" endoplasmic reticulum is not. Rough endoplasmic reticulum is primarily where protein synthesis occurs, via the ribosomes. Smooth endoplasmic reticulum manufactures fats, and in some cells it stores calcium ions. It also helps with the detoxification of substances that are harmful to the cell.
The job of the Golgi apparatus is to take proteins from the Endoplasmic reticulum and deliver them correctly to their destination in the body. Transport vesicles are secreted by the ER in order to move proteins into the Golgi. From here, the proteins are “sorted and shipped” navigating through cisternae and toward the cell membrane. Special secretory vesicles are created which allow the proteins to pass through the cell membrane and get to where they are needed elsewhere in the body.
One type of vesicle created by the Golgi is the Lysosome. Lysosomes are small, spherical storage vesicles that act as the digestive system of the cell by creating powerful digestive enzymes that are used to break down bacteria or old components of the cell that are no longer functional. One of the fascinating things about lysosomes is that they only release their digestive enzymes when they detect a very acidic environment. This seemingly innate intelligence is another example of the cell's ability to self-regulate. Interestingly, lysosomes will sometimes release their digestive enzymes outside of the cell to dissolve extracellular material, such as the hard calcium carbonate in bones (Silverthorn, 71). This is one reason why people who are excessive coffee drinkers (or who eat an abundance of acid-forming foods) may sometimes experience bone density loss as well as inflammatory conditions like rheumatoid arthritis. In the case of the coffee drinker, we create a more acidic environment; our lysosomes detect this and use their digestive enzymes to free some of the calcium from our bones in order to re-establish alkalinity. This is not without negative consequences.
Arizona State University has a helpful diagram of cell components which I printed and brought to my teammates. https://askabiologist.asu.edu/content/cell-parts
- Describe the structure and function of the cell membrane, and the modes of transportation into and out of cells
The cell has two mechanisms of transport allowing movement in or out of the cell. These are Active transport and Passive transport.
Passive transport requires no energy, allowing essential substances like oxygen and water to flow in and out of the cell. This occurs through the process of diffusion. Diffusion is essentially the equal dispersing of molecules in a given substrate. When there is an abundance of oxygen molecules in a given area, they will naturally diffuse until the concentration of oxygen is relatively even throughout that area. We can see this on the macro level with essential oils in a large room. If you open a bottle of essential oil, at first the oil molecules will be concentrated in one area, but over a long enough period of time you will be able to smell them on the other side of the room because the molecules have dispersed themselves throughout the room.
Water, on the other hand, moves passively through the process of Osmosis. Water is constantly seeking an isotonic state, in which water molecules are relatively equally dispersed in a given area. The kidneys are constantly regulating the concentration of blood plasma. Since the phospholipid bilayer does not allow water to pass through it easily, the cell has cleverly evolved Channel Proteins that allow for the easy passage of water molecules in and out of the cell. Water-specific channels are known as aquaporins.
Active transport, on the other hand, requires energy in order to move material in or out of the cell.
The cell membrane prevents certain substances from freely entering the cell, while it allows other substances to enter through one of two routes. One mode of transportation is through transport proteins built directly into the cell membrane. For example, the sodium-potassium pump controls the flow of sodium and potassium into and out of the cell. The cell is constantly trying to maintain a concentration gradient between sodium and potassium, so this pump is constantly working. In order for this pump to function properly, there needs to be a healthy supply of ATP (adenosine triphosphate). This is why it is essential to live a lifestyle that encourages healthy mitochondria, as they are the creators of ATP.
The reason these transport pumps require so much energy is that they are working against the concentration gradient and the electrochemical gradient. The concentration gradient is the concentration of various molecules on either side of the cell membrane, whereas the electrochemical gradient is the difference in electrical charge on either side of the cell membrane.
Another form of active transport is known as Cytosis, literally meaning cell action. There are two kinds of cytosis: Endocytosis and Exocytosis.
When the cell needs to move materials from inside the cell to the outside, the process is known as Exocytosis. This process is also known as vesicular transport and occurs when a substance is packaged into a vesicle. A vesicle is a small transport sac made up of phospholipids that contains the material being transported. The vesicle makes its way to the cell membrane; at this point the bilayers rearrange and fuse with the cell membrane, allowing the substance to escape into the environment outside the cell.
When materials are needed inside the cell, this action happens in reverse and is referred to as Endocytosis. An example of this is Phagocytosis, where a cell literally consumes the material. We see this with white blood cells devouring foreign invaders. Pinocytosis is similar, where the cell membrane folds in on itself and forms a vesicle. Cells can also form vesicles through Receptor-Mediated Endocytosis. This process is activated when specific receptor proteins connect with the molecules they are designed for. See more on Cytosis from John Munro.
The Plasma Membrane – Dr. Marilyn Shopper
Endocytosis Exocytosis – John Munro
- Compare characteristics of the 4 tissue types
There are four different tissue types that make up all tissue in the body. Let’s examine them:
Epithelia are the sheets of cells that cover the exterior surfaces of the body and protect the internal cavities within the body. They also make up the secretory glands and ducts, and they are found within certain sensory organs such as the ears and nose. There are five categories of epithelial tissue.
The Protective Epithelium is made up of the cells that are exposed to the environment, such as the skin and the lining of our mouth. These cells are tightly connected (think brick wall) in order to prevent potentially damaging substances from getting into the bloodstream.
Ciliated epithelial tissues are found in the lubricated parts of the body, such as the nose, throat, upper respiratory system, and the female reproductive tract. These tissues are covered in cilia, which allow fluid to move across the surface of the tissue. Ciliated epithelium allows for the removal of foreign particles that would otherwise damage the tissue if it were exposed. For instance, mucus in the lungs helps trap bacteria and move it out of the body.
Exchange epithelium is found in the lungs and the lining of blood vessels. This tissue type allows molecules to pass through easily. The lungs and blood vessels are porous so that oxygen and carbon dioxide can flow freely into and out of the tissues. This explains why the upper respiratory tract must be protected with a coating of mucus that prevents foreign matter from getting into the lungs, where it would have easy access to the bloodstream. This is another reason why it is ideal to breathe through your nose and not through your mouth!
Secretory epithelial tissues are complex tissues that make up vital glands. These glands produce secretions that are released either into the body's external environment or directly into the blood and that are used throughout the body in a variety of ways. Exocrine (think external) glands release their secretions into the body's external environment via ducts that connect to the tissue surface. For instance, sweat glands, salivary glands, mammary glands, the liver, and the pancreas are all examples of glands that produce substances which are essential to the human organism. Endocrine (think internal) glands do not have ducts but instead secrete hormones directly into the blood.
Transporting epithelium is the tissue that makes up the digestive tract, the intestines, and the kidneys. These tissues are made up of cells joined by tight junctions that prevent movement between the cells. They must be tightly bound in order to prevent food particles and other substances from improperly leaking into the bloodstream and wreaking havoc. We can see that the erosion of this tissue results in issues like Crohn's Disease, Leaky Gut Syndrome, or other painful digestive diseases.
Connective tissue is what gives us structure and is therefore incredibly important. Our connective tissue is made up of blood, bone, fat, and proteins. These tissues are fibrous in nature and act as a sort of scaffolding which gives the organism shape and definition. I tend to think of this tissue as similar to the skeletal structure of a skyscraper, in that it is designed to be both strong and rigid, yet flexible enough that it can bend and sway with the wind. This always makes me think of the old saying that it is better to be like the Willow, which can bend with the wind, than the Oak, which is stiff and will break in a strong windstorm.
Muscle tissue is what allows us to run and jump! This tissue is able to contract and produce force. There are voluntary muscular contractions and involuntary contractions. Cardiac muscle in the heart operates in an involuntary fashion, although there is some research that shows we can influence our heart rate. Skeletal muscle operates voluntarily (for the most part), while smooth muscle, like cardiac muscle, operates involuntarily.
Neural tissue is made of neurons and glial cells. Neurons are found most heavily in the brain and spinal cord, but they are found everywhere in the body. Neurons communicate with each other through electrical signaling. When we choose to use our muscles in a given way, our brain first has to send the message to the muscle, which then responds to the message. More recently we are seeing an increase in neurodegenerative diseases like Multiple Sclerosis, in which the degeneration of the myelin sheath (the protective layer that covers the axon of the neuron) prevents the message from being properly received. This is often compared to the copper wiring that runs through any electrical system. As long as the wiring is covered with a protective coating, there are no problems, but if the coating deteriorates then the bare metal can touch and the electrical signals can be crossed, causing all kinds of problems. In the human being, this can make daily movement, speech, or even the involuntary processes of life difficult. Many herbs and natural substances can be utilized to lessen the symptoms of MS or even to slow or reverse the degradation of the Myelin Sheath.
- Explain the importance of this module's material in the study of herbal medicine.
Clearly it is important for herbalists to have a solid comprehension of human physiology. It is not enough to know which herbs are good for which ailments; we have to know the why and the how. The better we understand the physiological processes of the human body, the better we will be able to choose and apply the correct herbs for specific clients.
For instance, an inexperienced herbalist might recommend Echinacea because they have heard it is good for colds, but do they understand why it is good? Do they understand the circumstances in which this herb might not be a good choice, and in which long-term tonification might be better? Does the body need immediate immune system stimulation, or does it need nourishing Yin tonics that will help to build immunity?
Studying human physiology from a holistic point of view provides us with another tool for assessing the constitutions of our clientele. Instead of strictly looking at a client from an energetic perspective (which is critical as well), we can integrate our knowledge of physiology and begin to build a sharper image of what is happening on both the micro and macro levels.
This tool was created in response to a special request. (Open group identification-Bonsais) Print and display in specific areas or in your circle time area.
Find a large plant or small tree that you can set on the floor in your daycare, in a corner where children are sure to see it. Use the tree to spark a conversation with your group. Name the various parts of a tree as you point to them. Ask children questions about what trees need to grow and be healthy. Fill a bin with leaves, cover it with a lid, and set it on a table before children arrive in the morning. When they discover the bin, use it to spark a conversation and introduce your theme.
(Open picture game-Trees) Print and laminate the pictures in the format you prefer. Use them to spark a conversation and ask children questions about the theme. Use adhesive paper to arrange leaves in a circle on the floor. Children can sit on the leaves for circle time. Before they arrive, you can also use leaves to create a path from the door to your circle time area.
- What happens to trees in the fall?
- What makes fall different from other seasons?
- What elements of nature are associated with fall?
- Can you name things that we see only during fall?
(Open picture game-Trees) Print and laminate the pictures in the format you prefer. Use a hole-punch to make a hole in the upper right and left corner of each picture. Stack the pictures and insert a ring through each set of holes. The flipogram is easy to manipulate. Simply show children how they can lift a picture and flip it under the stack. Name each item with your group. Use the flipogram to encourage children to talk during circle time and to ask them questions about the theme.
(Open word flashcards-Trees) Print and laminate the word flashcards. Have each child pick a word flashcard. They can take turns presenting their flashcard to the group. Ask them questions to see what they know about each element associated with the theme.
Poni discovers and presents-Trees
(Open Poni discovers and presents-Trees) Print, laminate, and cut out the cards. Use a Poni puppet, or another puppet that children are familiar with to present the different types of trees to your group.
(Open educa-chat-Trees) (Open giant word flashcards-Trees) Print the questions and the giant word flashcards. Laminate them. Insert the questions in a box so that children can take turns picking one. Spread the word flashcards out on the table or display them on a wall. Also print the "It's my turn" card. Laminate it and glue it on a stick. It will help children respect the child whose turn it is to speak. You can also use a Poni puppet or a stuffed animal related to the theme. The questions will help children develop their observation skills, their ability to cooperate, their thinking skills, and their ability to wait for their turn. This tool is a great way to animate circle time and explore your theme.
Point to (or name) the picture
(Open giant word flashcards-Trees) Print, laminate, and display the word flashcards on a wall, next to your circle time area or glue them on a large piece of cardboard that you can move around. Ask children questions and have them identify the corresponding word. birch tree, maple tree, oak tree, spruce tree, white pine tree, larch tree, apple tree, cedar tree, buds, bark, tree trunk, stump
The woodworkers make…
Explore the woodworker profession with your group. Together, identify a variety of wooden items within your daycare. Explain to your group that the wood needed to build and create furniture comes from trees.
(Open thematic poster-Trees) Print and display within your daycare.
(Open educa-theme-Trees) Print and laminate the different elements representing the theme. Use them to present the theme to your group (and their parents) while decorating your daycare.
(Open educa-decorate-Trees) Print, laminate, and cut out the illustrations. Use them to decorate your walls and set the mood for the theme.
(Open garland-Trees) Print and let children decorate the garland elements. Cut out the items and use them to create a garland that can be hung near your daycare entrance or within your daycare.
(Open stickers-Trees) Print the illustrations on adhesive paper and use them to create original stickers for your group.
A forest in your daycare
Draw several trees on a long white paper banner or on open brown paper grocery bags. Decorate the trees with your group. Add leaves on the branches and at the bottom of each tree. If you prefer, trace children’s hands on orange, red, and yellow construction paper and cut out the shapes to represent leaves. Add construction paper pinecones, acorns, apples, etc. If you prefer, collect fallen branches with your group and use them to represent a forest in your daycare. Hang fabric leaves from the branches and from the ceiling. Display pictures of trees on the walls of your daycare.
(Open educa-numbers-Trees) Print and laminate the cards. Display them on a wall to decorate your daycare.
(Open educa-letters-Trees) Print and laminate the cards. Display them on a wall to decorate your daycare.
The pictures may be used as a memory game or to spark a conversation with the group. Use them to decorate the daycare or a specific thematic corner. (Open picture game-Trees) Print, laminate, and store in a “Ziploc” bag or in your thematic bins.
(Open picture game-Trees) Print the pictures twice and use them for a memory game.
ACTIVITY AND WRITING SHEETS
Activity sheets are provided for each theme. Print and follow instructions. (Open activity sheets-Trees)
Creating your own activity binder
Laminate several activity sheets and writing activities and arrange them in a binder along with dry-erase markers. Leave the binder in your writing area and let children complete the pages as they wish. At the end of the day, simply wipe off their work so the activity binder can be reused.
Writing activities-T like tree
(Open writing activities-T like tree) Print for each child or laminate for use with a dry-erase marker.
(Open word flashcards-Trees) (Open giant word flashcards-Trees) Print several word flashcards. Glue them on pieces of paper, laminate them, and arrange them in a binder. Show children how they can trace the words using dry-erase markers. If you wish, leave room under each word so children can try to write the words without tracing the letters.
(Open stationery-Trees) Print. Use the stationery to communicate with parents, in your writing area, or to identify your thematic bins.
(Open educa-nuudles-Trees) Print for each child. Have children color the sheet and use Magic Nuudles to give it a three-dimensional look. Variation: You don’t have Magic Nuudles? Have children fill the spaces designed for Magic Nuudles with bingo markers or stickers. To order Magic Nuudles.
(Open string activities-Trees) Print for each child. Children use white glue to trace the lines and press colorful pieces of yarn in the glue.
- Natural-colored wooden blocks (or colored ones).
- A few branches and pinecones.
- Tiny logs.
- Wooden sticks of all kinds that can be used for different types of constructions.
- Forest animal figurines.
Arts & crafts:
- Cardboard, tissue paper, empty egg cartons, recycled material, etc. Children can use them to represent a cabin in the woods.
- Hang a large piece of paper on a wall to create a mural. You can inspire children by drawing a few trees and letting them add leaves, animals, etc.
- A tree drawing printed on paper and children have to glue leaves and bark on it (torn pieces of green and brown construction paper). Glue sticks are best for this activity since liquid glue might seep through the paper.
- An easel with a large piece of paper (or paper on a wall) along with poster paint. Children can paint a forest.
- Popsicle sticks and white glue for building a log cabin.
- Discuss animal tracks with your group, apply paint to the bottom of children’s feet, and invite them to walk on paper. Once the paint is dry, encourage them to compare their footprints.
- A piece of waxed paper and white glue children can use to draw a spider web. Once the glue is dry, they can peel the web off the waxed paper and hang it.
- Discuss forest fires as you explore orange and yellow paint.
- Coloring pages related to forest animals, nature, birds, etc.
- Musical drawing: draw a forest as you listen to a CD of forest-related sounds.
- Provide recycled paper for children to draw on and explain the importance of preserving trees!
- A picnic basket filled with plastic dishes and food items, a blanket, a radio with a CD to listen to chirping birds as you pretend to have a picnic in the forest. This activity can be organized at lunch or snack time. Simply sit on a blanket on the floor, in your daycare.
- Camping in the forest:
- A tent, sleeping bags, utensils, plastic or disposable dishes, plastic food items, pyjamas, etc.
- No matter which theme you choose, decorate your area with giant paper trees, pictures of forests found in old calendars, fabric leaves, etc. The goal is literally to transform your area to make it look like a forest.
- Forest animal-themed memory game with educatall picture game or a store-bought game.
- Puzzles related to the theme.
- Brown and green modeling dough to create a forest. If you wish, you can use homemade modeling dough and leave children’s creations out to dry. They will enjoy building their very own miniature forest with the trees and animals.
- Fabric leaves that can be sorted by color, size, shape, etc.
- A felt board with felt trees, animals, etc. that can be used to invent stories and scenes.
- A variety of pre-cut mushroom shapes on which you have glued theme-related pictures for a unique memory game.
- An association game in which children must associate animals to the correct habitat.
- Set a variety of items related to the theme on a table (acorn, pinecone, squirrel figurine, pine needle, etc.). Ask children to observe the items closely. Cover them with a blanket and remove one item. Children must identify the missing item.
- Pieces of rope children can use to tie knots.
- Sorting game involving animals with fur and animal with feathers.
- Books about forest animals.
- Tales and fables with a forest setting: The Three Little Pigs, Little Red Riding Hood, Snow White and the Seven Dwarves, Hansel and Gretel, etc.
- Headphones and CDs with sounds of nature, chirping birds, animal sounds, etc.
- Puppets representing forest animals and birds.
- Connect the dots or dotted lines children can trace to reveal trees.
- Games with educatall.com word flashcards.
- Tracing activities that involve forest animal names. Associate pictures to each word to help children identify them.
- Various activity sheets related to the theme.
- An obstacle course throughout which children are encouraged to move like different forest animals.
- A treasure hunt where children must find pictures of forest animals.
- Try to whistle like a bird.
- Act out different actions associated with forest animals or insects.
- Pretend you are firefighters extinguishing a forest fire. Have children stand in line and pass a bucket filled with water down the chain, attempting to have as much water as possible in the bucket when it reaches the end of the line.
- Sing songs alongside a pretend campfire and explain the importance of properly extinguishing a campfire to avoid causing a forest fire.
- A large container filled with dirt.
- A container filled with pine needles.
- A bin filled with pinecones.
- A large container filled with autumn leaves (real or fabric).
- A container filled with sunflower seeds.
- As a group, prepare a fruit salad with different types of fruit that grow in trees (pears, apples, plums, pineapples, etc.).
- Let children cut mushrooms (white and brown) into tiny pieces and mix them with sour cream or plain yogurt to prepare a dip that can be served at lunch or snack time with a veggie platter.
- Prepare a recipe with berries that can be found in the forest.
- Prepare your own trail mix by mixing seeds, nuts, and dried fruit. Explain how this simple snack is great for hikes in the forest, since it provides energy.
- Fill a large container with leaves, pieces of bark, branches from coniferous trees, and pinecones.
- Arrange different types of mushrooms in clear containers and invite children to observe them.
- Have children use raffia, hay, and pieces of yarn to create nests for birds.
- Set up your very own vivarium and add any insects children find while playing outside to it. Be sure to cover your vivarium to avoid unpleasant surprises.
- Show children a compass, a map, etc.
- Plant flowers and different types of vegetables with your group.
- Build a birdfeeder. There are many simple models to try!
The flashcards may be used during circle time to spark a conversation with the group or in your reading and writing area. They may also be used to identify your thematic bins. (Open word flashcards-Trees) (Open giant word flashcards-Trees) roots, branch, leaves, tree trunk, bark, buds, acorn, nest, pine needles, tree sap, forest, wood
(Open sequential story-Trees) Print and laminate. Invite children to place the illustrations in the correct order.
Giant word flashcards-Trees
(Open word flashcards-Trees) (Open giant word flashcards-Trees) Print. birch tree, maple tree, oak tree, spruce tree, white pine tree, larch tree, apple tree, cedar tree, buds, bark, tree trunk, stump
(Open sequential story-Autumn) Print the story, laminate the illustrations, and cut them out. Children must place them in the correct order.
(Open forest scene) Print, laminate, and cut out the pieces. Children use them to decorate the scene.
ROUTINES AND TRANSITIONS
Let’s hop from tree to tree
(Open educa-decorate-Trees) Print. Laminate the pictures and use adhesive paper to arrange them on the floor. Play music. When the music stops, children must quickly find a picture to sit on (variation of musical chairs).
Fill a bin or basket with fabric leaves. Whenever children must wait for their turn, for example to wash their hands, give them two leaves that they can hold over their head as they hold tree pose (yoga). This exercise will keep them busy while providing them with a relaxing moment.
Our leaf board
When you go for walks with your group, collect a variety of pretty leaves. When you get back to daycare, sort the leaves together and arrange them in a large chart divided into sections. Write the name of a different type of tree at the top of each section, making sure to choose trees that can be found in your neighborhood. Help children associate the leaves to the corresponding tree.
Giant hopscotch game
Using colorful adhesive tape, draw a giant hopscotch game on the floor. It could, for example, connect two different areas within your daycare. Show children how they can alternate hops on one foot and on two feet. Add pictures related to the theme in each square.
Game-This is my spot-Trees
(Open game-This is my spot-Trees) Print each illustration twice. Use adhesive paper to secure one copy of each illustration on the table. Deposit the second copy of each illustration in an opaque bag and invite children to pick a card that will determine their spot at the table (corresponding illustration). The illustrations can also be used to determine children’s naptime spots or their place in the task train.
My leaf path
(Open my leaf path) Print, laminate, and arrange the pictures on the floor to create a path leading to various areas within your daycare. The path can lead to areas frequently visited by children throughout the day such as the bathroom, the cloakroom, etc. or, if you prefer, delimit your workshops.
ACTIVITIES FOR BABIES
A walk in the woods
Go for a walk in a nearby forest with your group. Name the things you see (squirrel, bird, pinecone, etc.). Encourage children to touch leaves, pine needles, etc.
Replace the balls in your ball pit with fabric leaves. Children will have a lot of fun manipulating the colourful leaves. Variation: You may also fill your ball pit with leaves that have fallen to the ground.
Hang a large piece of adhesive paper on a wall, with the sticky side facing you. Let children press leaves on the paper.
PHYSICAL ACTIVITY AND MOTOR SKILLS
From one tree to the next
Create an obstacle course. Add obstacles like a rope that children must walk over, without touching it. Set leaves on a chair and have children crawl under it. Incorporate whatever you have on hand, for example hats representing forest animals that they can wear to complete the course. Plan your course so that children explore different ways of moving about (jump, crawl, walk, etc.). Use your imagination to make your obstacle course fun!
Collect several pictures of trees and display them throughout your daycare. Use the walls, cupboards, the floor, etc. Children will discover the pictures as they go about their day.
Provide boxes (different sizes) and let children hide in them. They can pretend they are squirrels hiding in trees.
Replace the balls in your ball pit with fabric leaves. Children will have a lot of fun manipulating the colourful leaves.
Variation: You may also fill your ball pit with leaves that have fallen to the ground.
Cut several leaf shapes out of tissue paper. Give each child a drinking straw and show them how they can use them to transport leaves. They can breathe in over a leaf and hold their breath until they reach a designated area or container, not very far away. As soon as they resume breathing, the leaf will fall.
Hang a large piece of adhesive paper on a wall, with the sticky side facing you. Let children press leaves on the paper.
Playing in the leaves
Children love to jump in a big pile of leaves. This simple activity is best done outside, but if you really need to, you can bring a garbage bag full of leaves indoors and use the contents of the bag to create a pile on your daycare floor. Cleanup will be necessary, but children are sure to have a lot of fun!
Set several leaves on your parachute. Have children raise and lower the parachute very gently and encourage them to observe the leaves. Slowly, let them increase the speed at which they move the parachute to send the leaves flying through the air.
(Open lacing-Leaves) Print, laminate, and punch holes around each shape, where indicated. Children thread a shoelace, a piece of yarn, or ribbon through the holes.
Use adhesive tape to determine a start and finish line. Place two leaves 10 cm apart. Provide children with straws or empty toilet paper rolls they can use to blow on the leaves to move them towards the finish line. The first child to successfully cross his leaf over the finish line wins. The winner may try again with another child.
In your yard, find a tree that has a low branch that children can swing on, like monkeys. Help older children wrap their legs over the top of the branch. Of course, constant supervision is required throughout this activity. For added safety, set a thick exercise mat under the branch.
Hang a large paper banner on a fence or wall in your yard. Draw tree trunks and branches. Encourage children to collect leaves and have them glue them on the branches that you drew using glue or adhesive putty.
To the sawmill
Divide your group into two teams. Each team will need a large dump truck. Have each team roll around the yard, collecting fallen branches in their truck. At the end of the activity, count the branches collected by each team to determine which truck transported the most wood.
Our leaf home
Provide small toy rakes. Help children rake a large pile of leaves. Next, encourage them to use the leaves to represent the divisions in a house. They will like using the leaves to delimit a bedroom (or many bedrooms), a kitchen, a living room, a game room, a bathroom, etc. Let them play in their leaf home as they wish.
Our colorful tree
Fill a large bin with pieces of colorful ribbon. Have children take turns picking a ribbon they can tie on a branch of a tree that is in your yard. Name the colors together.
You will need a large cardboard box (appliance). Let children decorate it to represent a treehouse. They can, for example, draw windows and glue pieces of fabric on either side to represent curtains. They can set small plastic furniture items in the box. Set your treehouse under a large tree in your yard or at the top of a large play structure. Let children play in their treehouse and invent all kinds of scenarios. They could even eat their snack or sleep in their treehouse.
Leaves in water
Fill a large bin with water and add leaves. Encourage children to use drinking straws to blow on the leaves to make them move around.
You will need three empty bins. Glue a different color leaf on each one (ex. green, red, yellow). Set a large bag of leaves next to the bins. Children will have fun sorting the leaves by color.
Cabin in the woods
Children love playing in cabins. Drape old bedsheets over tables, chairs, and other furniture items to represent tents or cabins. Add objects that are normally found in the woods. Let children play in their tents and cabins.
A walk in the forest
(Open rally-Forest) Print and laminate so you can check the items on the list using a dry-erase marker. Go for a walk in your neighborhood or a nearby forest. Invite children to search for the items on the list. You can collect them in a bag or wagon and bring them back to daycare. Children can use them for a craft or observation activity.
Playing in the leaves
Children love jumping in large piles of leaves. This is a simple outdoor activity. For variety, why not bring leaves into the daycare and create your very own leaf storm. You will have a hefty cleanup job, but oh what fun!
Hide several different types of leaves throughout your yard. Have children search for them. This activity can be used to associate leaves with different types of trees.
Set several leaves on your parachute. Invite children to gently raise and lower your parachute to make the leaves bounce up and down. Gradually have them increase the speed at which they move the parachute until they send the leaves flying.
Ask children to build a bed of leaves on which they can lie down. Encourage them to observe the stars and enjoy this relaxing pause. Invite them to look for shapes in the clouds, to take the time to listen to the wind, the birds…
MUSICAL AND RHYTHMIC ACTIVITIES
A musical tree
Gather items related to trees (bark, branches, pinecones, dry leaves, etc.). With the help of the children in your group, have fun using the items to produce interesting sounds.
(Open educ-pairs-Trees) Print. Children must draw a line between identical items or color identical items using the same color. For durable, eco-friendly use, laminate for use with a dry-erase marker.
(Open educ-trace-Trees) Print for each child. Children must trace the lines with the correct colors and then color the corresponding items using the same colors.
Color by number-Trees
(Open color by number-Trees) Print for each child. Children must color the picture according to the color code.
Educ-big and small-Trees
(Open educ-big and small-Trees) Print and laminate. Children must place the illustrations in the correct order, from smallest to biggest.
(Open counting cards-Trees) Print and laminate. Prepare a series of wooden clothespins on which you can paint or draw numbers 1 to 9. Children count the items on each card and place the corresponding clothespin on the correct number.
Educ-same and different-Trees
(Open educ-same and different-Trees) Print and laminate for durable, eco-friendly use. Children must circle the illustration that is different in each row.
Roll and color-Trees
(Open roll and color-Trees) Print for each child. This game can be enjoyed individually or as a group. Children take turns rolling a die, counting the dots, and coloring the corresponding part.
(Open educ-math-Trees) Print and laminate for durable, eco-friendly use. Children must count the items in each rectangle and circle the correct number.
A tree for everything
(Open a tree for everything) Print. Children must cut out the items and associate them to the correct tree.
(Open shape forest) Print. Children must cut out the trees and glue them in the correct row.
(Open tree sections) Print. Children must cut the four sections and assemble them by gluing them on a piece of construction paper using a glue stick. When they are done, they can color their tree.
From a tree to paper
(Open from a tree to paper) Print. Invite children to cut the illustrations and glue them in the rectangles in the correct order to help them understand how paper comes from trees.
Squirrels in the trees
(Open squirrels in the trees) Print. Children must count the leaves in each tree and add the corresponding number of squirrels.
(Open game-Tree association) Print and laminate the game. Using Velcro, children associate the cards to the correct tree.
(Open game-Four trees) Print, glue the cards on opaque cardboard and cut them out. Arrange all the cards upside down on the floor or table (so you can’t see the illustrations). Children take turns rolling a die. Every time a child rolls a “1”, he can turn a card. If he doesn’t already have this tree in front of him, he keeps it and places it in front of him for everyone to see. The first child who has collected all four trees wins.
I am inventing my own tree
(Open I am inventing my own tree) Print, laminate, and cut each tree in half. Hand children the pieces and let them create original trees. They don’t have to match the trunk and the leaves that would normally go together.
MORAL AND SOCIAL ACTIVITIES
Homemade wooden puzzle
Before children arrive, stack several pieces of wood (2 in x 4 in x 8 in). Draw a tree on the side of the pieces, from top to bottom. The leaves will be on the side of the top pieces and the roots will be drawn on the side of the bottom pieces. Next, set all the pieces of wood in a bin and encourage children to stack them to assemble the tree.
(Open logging truck) Print for each child. Cut several empty toilet paper rolls into rings. Print and assemble the die. Children take turns rolling the die. Every time the die lands on the axe, the child who rolled it takes one cardboard ring (log) and sets it in his truck. The first child who fills his truck wins.
For story time, sit under a large tree with your group. Provide small blankets they can sit on. If you wish, you can even enjoy naptime in the shade. Children can also simply lie on their back and look up at the leaves gently swaying in the wind, the birds on the branches, etc.
I am going for a walk in the forest and I am bringing…
Sit in a circle with your group. Begin the game by saying, “I am going for a walk in the forest and I am bringing a flashlight.” The child next to you must repeat this sentence and add another item. Each child must repeat all the items listed by others before adding one of their own.
Cut several branches into sections. Set a large block of floral foam (or green modeling dough) on the table and invite children to prick the branches in it to represent a tiny forest. Set it on a windowsill or shelf to decorate your daycare.
My first herbarium
(Open my first herbarium) Print for each child. Throughout the theme (or season), children collect flowers and leaves they can add to their herbarium. Write the date under each new addition and help children identify the items they find. With younger children, print a single herbarium and have them complete it as a group.
Go for a walk with your group and collect pinecones. Glue a string to each pinecone and let children use a plastic knife to spread peanut butter all over them. Hang them in a tree (beware of allergies).
Leaves in water
You will need a bin filled with water or a water table. Add leaves. Children use straws to blow on the leaves to make them move about.
Use three empty storage bins. On each bin, glue a different coloured leaf (green, red, and brown for example). Place a large bag of leaves next to the bins. Children sort the leaves per their color.
Purchase maple water. Explain to your group how maple water comes from a tree. If possible, show them a maple tree when you go for a walk. Let them smell the maple water to appreciate its sweet scent. Talk about how maple syrup is made. Give each child a glass of maple water. Next, give each child a spoonful of maple syrup that they can pour on a pancake or waffle.
Give each child an empty cardboard milk carton. Let them decorate it as they wish to represent a birdfeeder. Next, fill them with pumpkin seeds, sunflower seeds, etc. Encourage children to eat the seeds and chirp like birds at snack time.
Blossoming fruit trees
You will need pretzel sticks, icing (or cream cheese), cherry-flavored Jell-O powder, and popcorn. Add the popped popcorn to a Ziploc bag along with the Jell-O powder. Shake the bag to color the popcorn. Have children press the pretzel sticks in the icing and then use it to “stick” popcorn on the tip to represent fruit tree blossoms.
Maple leaf cookies
Purchase maple leaf cookies and serve them with fruit at snack time.
ARTS & CRAFTS
A forest in the hallway
Trace each child’s silhouette on a large piece of paper. Ask children to stand with their arms open above their head and to press their legs together. Have children color their silhouette that will represent a tree trunk and branches. Let them glue Fun Foam or fabric leaves on the branches. Hang the trees in the hallway.
My stamped tree
Give each child four or five brown pipe cleaners. Help them twist the bottom of the pipe cleaners together to represent a tree trunk. Help them separate the upper extremity of the pipe cleaners to represent branches. Have children glue their tree on a piece of construction paper. Provide autumn-colored stamps pads and leaf-shaped stampers. Invite children to stamp leaves around their tree’s branches.
Before children arrive, saw a log to create several wooden disks. Invite children to observe the surface of the disks. Let each child paint on a disk. Let dry before varnishing their work and using hot glue to stick a piece of jute rope behind their masterpieces so they can be hung.
Open several pages of newspaper on the floor and use a black marker to draw tree outlines. Have children cut them out and glue them on a piece of green construction paper. Use this activity to explain to your group how newspaper is made from trees.
Set several leaves you collected with your group under pieces of paper and encourage children to color over the leaves to see the leaf veins appear like magic. Next, older children can cut out the leaves and hang them on an indoor clothesline to create an original garland.
(Open models-Trees) Print the models and use them for various crafts and activities throughout the theme.
My leafy hat
(Open educa-decorate-Trees) Print and cut out. Glue items on a paper hat or headband.
(Open stencils-Trees) Print and cut out the various stencils. Children can use them to trace and paint trees throughout the theme.
My crumpled tree
Trace a tree trunk with four (4) branches on construction paper. Have children fill the tree trunk with crumpled pieces of brown tissue paper. Make tiny balls of red, yellow, orange, and green tissue paper to add leaves to the tree.
With your group, stick several leaves on a large piece of paper. Provide sponges children can use to completely cover the leaves with paint. Once the paint is almost dry, gently remove the leaves.
Variation: Use old toothbrushes instead of sponges.
(Open puppets-Trees) Print the various models on cardboard. Ask children to cut them out and decorate them with arts & crafts materials. Glue a Popsicle stick behind each one to complete the puppets.
(Open models-Trees) Print several copies and use them as the base for various crafts and activities.
(Open tree trunk) Print. Apply red, yellow, or orange poster paint on children’s hands and encourage them to press them on the top of their tree to represent leaves.
Have children draw tree trunks on heavy cardboard and glue crumpled pieces of tissue paper on the branches to represent leaves.
(Open tree trunk) Print for each child. Have children glue real leaves on the branches to complete their tree.
Collect branches with your group. Let children paint the branches and sprinkle glitter on them. If you prefer, they can use the branches as paintbrushes for different activities or just for painting freely on construction paper.
Scrapbook-Walk in the forest
(Open scrapbook-Walk in the forest) Print for each child. Go for a walk in the forest and collect leaves and branches that chidldren can glue on their scrapbook page. If you wish, photograph your group during your walk, print the pictures and give each child a copy.
(Open coloring pages theme-Trees) Print for each child.
DIFFERENT WAYS TO USE THE COLORING PAGES
Identical coloring pages-Trees
Print the same coloring page for each child and an additional copy for your model. Color only certain parts of your picture. Present the model to your group and ask them to color their picture to make it look exactly like yours.
Print and laminate several coloring pages and arrange them in a binder with a few dry-erase markers. Leave everything on a table for children to explore.
Play musical drawing with your group. Give each child a coloring page. Have children sit around a table. When the music starts, they must pass the coloring pages around the table. Every time the music stops, they must color the picture in front of them until the music starts again.
Give each child a picture to color. When they are done, cut each picture into pieces to create unique puzzles.
Complete the drawing-Trees
(Open complete the drawing-Trees) Print for each child. Children must draw the missing items.
I am learning to draw-A tree
(Open I am learning to draw-A tree) Print and laminate the model sheet. Invite children to practice their drawing technique on the model sheet before attempting to draw a tree on their own.
(Open creative coloring-Trees) Print for each child. Have children complete the drawing as they see fit.
The educatall team | <urn:uuid:1d49e4f0-9a2c-4044-88e7-d0c45fa5e30d> | CC-MAIN-2019-47 | https://www.educatall.com/page/1519/Trees.html | s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496664439.7/warc/CC-MAIN-20191111214811-20191112002811-00217.warc.gz | en | 0.9124 | 7,818 | 3.625 | 4 |
Amazing things happen in the heavens. In the hearts of distant galaxies, black holes swallow stars. Once every 20 years or so, on average, a star somewhere in our Milky Way galaxy explodes. For a few days, that supernova will outshine entire galaxies in our night sky. Near our solar system, things are thankfully quiet.
Nevertheless, awesome events happen in our neighborhood too.
Eclipse means to overshadow. And that’s exactly what happens during a solar or lunar eclipse. These celestial events take place when the sun, moon and Earth briefly make a straight (or very nearly straight) line in space. Then one of them will be fully or partially shrouded by another’s shadow. Similar events, called occultations and transits, occur when stars, planets, and moons line up in much the same way.
Scientists have a good handle on how planets and moons move through the sky. So these events are very predictable. If the weather cooperates, these events easily can be seen with the unaided eye or simple instruments. Eclipses and related phenomena are fun to watch. They also provide scientists with rare opportunities to make important observations. For instance, they can help to measure objects in our solar system and observe the sun’s atmosphere.
Our moon is, on average, about 3,476 kilometers (2,160 miles) in diameter. The sun is a whopping 400 times that diameter. But because the sun is also about 400 times further from Earth than the moon is, both the sun and moon appear to be about the same size. That means that at some points in its orbit, the moon can entirely block the sun’s light from reaching Earth. That’s known as a total solar eclipse.
This can happen only when there is a new moon, the phase that appears fully dark to us on Earth as it moves across the sky. This happens about once per month. Actually, the average time between new moons is 29 days, 12 hours, 44 minutes and 3 seconds. Maybe you’re thinking: That’s an awfully precise number. But it’s that precision that let’s astronomers predict when an eclipse will occur, even many years ahead of time.
So why doesn’t a total solar eclipse occur each and every full moon? It has to do with the moon’s orbit. It is slightly tilted, compared to Earth’s. Most new moons trace a path through the sky that passes near to — but not over — the sun.
Sometimes the new moon eclipses only part of the sun.
The moon creates a cone-shaped shadow. The totally dark part of that cone is known as the umbra. And sometimes that umbra doesn’t quite reach Earth’s surface. In that case, people along the center of the path of that shadow don’t see a totally darkened sun. Instead, a ring of light surrounds the moon. This ring of light is called an annulus (AN-yu-luss). Scientists call these events annular eclipses.
Not all people, of course, will be directly in the center path of an annular eclipse. Those in line with only a portion of the shadow, its antumbra, will see a partially lit moon. The antumbra is also shaped like a cone in space. The umbra and antumbra are lined up in space but point in opposite directions, and their tips meet at a single point.
Why won’t the umbra reach Earth every time there’s a solar eclipse? Again, it's due to the moon’s orbit. Its path around Earth isn’t a perfect circle. It’s a somewhat squished circle, known as an ellipse. At the closest point in its orbit, the moon is about 362,600 kilometers (225,300 miles) from Earth. At its furthest, the moon is some 400,000 kilometers away. That difference is enough to make how big the moon looks from Earth vary. So, when the new moon passes in front of the sun and is also located in a distant part of its orbit, it’s won’t be quite big enough to completely block the sun.
These orbital variations also explain why some total solar eclipses last longer than others. When the moon is farther from Earth, the point of its shadow can create an eclipse lasting less than 1 second. But when the moon passes in front of the sun and is also at its closest to Earth, the moon’s shadow is up to 267 kilometers (166 miles) wide. In that case, the total eclipse, as seen from any one spot along the shadow’s path, lasts a little more than 7 minutes.
The moon is round, so its shadow creates a dark circle or oval on Earth’s surface. Where someone is within that shadow also affects how long their solar blackout lasts. People in the center of the shadow’s path get a longer eclipse than do people near the edge of the path.
Story continues below image.
People completely outside the path of the moon’s shadow, but within a few thousand kilometers on either side of it, can see what’s known as a partial solar eclipse. That’s because they’re within the partially lit portion of the moon’s shadow, the penumbra. For them, only a fraction of the sun’s light will be blocked.
Sometimes the umbra completely misses the Earth but the penumbra, which is wider, doesn’t. In these cases, no one on Earth sees a total eclipse. But people in a few regions can witness a partial one.
On rare occasions, a solar eclipse will start and end as an annular eclipse. But in the middle of the event, a total blackout occurs. These are known as hybrid eclipses. (The change from annular to total and then back to annular happens because Earth is round. So part of Earth’s surface will fall inside the umbra halfway through the eclipse. People in this region are almost 13,000 kilometers (8,078 miles) closer to the moon than are those at the edge of the shadow’s path. And that difference in distance can sometimes be enough to bring that spot on Earth’s surface from the antumbra into the umbra.)
Fewer than 5 in every 100 solar eclipses are hybrids. A bit more than one in three are partial eclipses. Somewhat more than one in three are annular eclipses. The rest, slightly more than one in every four, are total eclipses.
There are always between two and five solar eclipses every year. No more than two can be total eclipses — and in some years there will be none.
Why total solar eclipses excite scientists
Before scientists sent cameras and other instruments into space, total solar eclipses provided unique research opportunities to astronomers. For example, the sun is so bright that its glare normally blocks sight of its outer atmosphere, the corona. During a total solar eclipse in 1868, however, scientists collected data on the corona. They learned about the wavelengths — colors — of light it emits. (Such emissions helped identify the corona’s chemical make-up.)
Among other things, the scientists spotted a weird yellow line. No one had seen it before. The line came from helium, which is created by reactions inside the sun and other stars. Similar studies have since identified many known elements in the solar atmosphere. But those elements exist in forms not seen on Earth — forms in which many electrons have been stripped away. These data have convinced astronomers that temperatures in the solar corona must reach millions of degrees.
Scientists also have used eclipses to look for potential planets. For instance, they’ve looked for planets that orbit the sun even closer than Mercury does. Again, the sun’s glare normally would block the ability to see anything that close to the sun, at least from Earth. (In some cases, astronomers thought they had seen such a planet. Later studies showed they had been wrong.)
In 1919, scientists gathered some of the most famous eclipse data. Astronomers took photos to see if distant stars looked out of place. If they were shifted slightly — compared to their normal positions (when the sun wasn’t in the way) — that would suggest that light zipping past the sun had been bent by its huge gravitational field. Specifically, that would provide evidence supporting Albert Einstein’s general theory of relativity. That theory had been proposed only a few years earlier. And indeed, the eclipse did provide such evidence for relativity.
Sometimes the moon almost disappears for a short while as it falls into Earth’s shadow. Such lunar eclipses happen only at full moon, the phase when the moon is opposite the sun in our sky. It now appears as a completely lit disk. (From our vantage on Earth, it’s when the moon is rising as the sun is setting.) Just as with solar eclipses, not every full moon creates a lunar eclipse. But lunar eclipses happen more often than solar ones because Earth’s shadow is so much broader than the moon’s. In fact, Earth’s diameter is more than 3.5 times that of the moon. Being so much smaller than Earth, the moon can more easily fit completely within our planet’s umbra.
Although total solar eclipses temporarily black out only a narrow path on Earth’s surface, a total lunar eclipse can be seen from the entire nighttime half of the planet. And because Earth’s shadow is so wide, a total lunar eclipse can last up to 107 minutes. If you add in the time that the moon spends entering and leaving our planet’s penumbra, the entire event can last as much as 4 hours.
Unlike a total solar eclipse, even during a total lunar eclipse the moon remains visible. Sunlight travels through Earth’s atmosphere during the whole event, illuminating the moon in a ruddy hue.
Sometimes only a portion of the moon enters Earth’s umbra. In that case, there’s a partial lunar eclipse. That leaves a circular shadow on the moon, as if a chunk had been bitten away. And if the moon enters Earth’s penumbra but totally misses the umbra, the event is called a penumbral eclipse. This latter type of eclipse is often faint and hard to see. That’s because many portions of the penumbra are actually pretty well lit.
More than one-third of all lunar eclipses are penumbral. Some three in every 10 are partial eclipses. Total lunar eclipses make up the rest, more than one in every three.
An occultation (AH-kul-TAY-shun) is a sort of an eclipse. Again, these happen when three celestial bodies line up in space. But during occultations, a really large object (usually the moon) moves in front of one that appears much smaller (such as a distant star).
The moon has no real atmosphere to block light from behind it. That’s why some of the most scientifically interesting occultations occur when our moon moves in front of distant stars. Suddenly, the light from an object occulted by the moon disappears. It’s almost as if a light switch flicked off.
This sudden absence of light has helped scientists in many ways. First, it has let astronomers discover that what they first thought was a single star might actually be two. (They would have orbited each other so closely the scientists couldn’t separate the stars visually.) Occultations also have helped researchers better pin down distant sources of some radio waves. (Because radio waves have a long wavelength, it can be hard to tell their source by looking at that radiation alone.)
Finally, planetary scientists have used occultations to learn more about lunar topography — landscape features, such as mountains and valleys. When the ragged edge of the moon barely blocks a star, light can briefly peek through as it emerges from behind mountains and ridges. But it shine unimpeded through deep valleys that are pointed toward Earth.
On rare occasions, other planets in our solar system can pass in front of a distant star. Most such occultations don’t yield much new information. But big surprises occasionally turn up. Take 1977, when Uranus passed in front of a distant star. Scientists who meant to study the atmosphere of this gas planet noticed something weird. Light from the star flickered 5 times before the planet passed in front of the star. It flickered another five times as it was leaving the star behind. Those flickers suggested the presence of five small rings around the planet. But no one could confirm they existed until NASA’s Voyager 2 spacecraft flew by the planet nine years later, in 1986.
Even asteroids can occult the light from distant stars. Those events let astronomers measure the diameter of asteroids more accurately than with other methods. The longer that light from a star is blocked, the larger the asteroid must be. By combining observations taken from several different spots on Earth, researchers can map out the form of even oddly shaped asteroids.
Story continues below image.
Like an occultation, a transit is a type of eclipse. Here, a small object moves in front of a distant object that appears much larger. In our solar system, only the planets Mercury and Venus can transit across the sun from Earth’s viewpoint. (That’s because the other planets are farther than us from the sun and thus can never come between us.) Some asteroids and comets, however, can transit the sun from our point of view.
Scientists have always been interested in transits. In 1639, astronomers used observations of a transit of Venus — and simple geometry — to come up with their best estimate until that time of the distance between the Earth and the sun. In 1769, British astronomers sailed halfway around the world to New Zealand to see a transit of Mercury. That event couldn’t be seen in England. From data the astronomers collected, they were able to tell that Mercury has no atmosphere.
When an object passes in front of the sun, it blocks a little bit of light. Usually, because the sun is so large, much less than 1 percent of the light will be blocked. But that small change in light can be measured by ultra-sensitive instruments. In fact, a regular and repeated pattern of slight dimming is one technique that some astronomers have used to detect exoplanets — ones orbiting distant stars. The method doesn’t work for all distant solar systems, however. For transits to occur, such solar systems have to be oriented so that they appear edge-on as seen from Earth.
annulus (adj. annular) A ring-shaped object or opening.
antumbra That part of the moon's shadow during an eclipose that continues on beyond its umbra. As with a penumbra, the moon's only partly blocks the sun. For someone in the antumbra, the sun appears bigger than the moon, which will appear in silhouette. An annular eclipse occurs when someone in the moon's shadow on Earth passes through the antumbra.
asteroid A rocky object in orbit around the sun. Most asteroids orbit in a region that falls between the orbits of Mars and Jupiter. Astronomers refer to this region as the asteroid belt.
atmosphere The envelope of gases surrounding Earth or another planet.
average (in science) A term for the arithmetic mean, which is the sum of a group of numbers that is then divided by the size of the group.
black hole A region of space having a gravitational field so intense that no matter or radiation (including light) can escape.
celestial (in astronomy) Of or relating to the sky, or outer space.
chemical A substance formed from two or more atoms that unite (bond) in a fixed proportion and structure. For example, water is a chemical made when two hydrogen atoms bond to one oxygen atom. Its chemical formula is H2O. Chemical can also be used as an adjective to describe properties of materials that are the result of various reactions between different compounds.
comet A celestial object consisting of a nucleus of ice and dust. When a comet passes near the sun, gas and dust vaporize off the comet’s surface, creating its trailing “tail.”
corona The envelope of the sun (and other stars). The sun’s corona is normally visible only during a total solar eclipse, when it is seen as an irregularly shaped, pearly glow surrounding the darkened disk of the moon.
diameter The length of a straight line that runs through the center of a circle or spherical object, starting at the edge on one side and ending at the edge on the far side.
eclipse This occurs when two celestial bodies line up in space so that one totally or partially obscures the other. In a solar eclipse, the sun, moon and Earth line up in that order. The moon casts its shadow on the Earth. From Earth, it looks like the moon is blocking out the sun. In a lunar eclipse, the three bodies line up in a different order — sun, Earth, moon — and the Earth casts its shadow on the moon, turning the moon a deep red.
electron A negatively charged particle, usually found orbiting the outer regions of an atom; also, the carrier of electricity within solids.
element (in chemistry) Each of more than one hundred substances for which the smallest unit of each is a single atom. Examples include hydrogen, oxygen, carbon, lithium and uranium.
ellipse An oval curve that is geometrically a flattened circle.
excite (in chemistry and physics) To transfer energy to one or more outer electrons in an atom. They remain in this higher energy state until they shed the extra energy through the emission of some type of radiation, such as light.
exoplanet A planet that orbits a star outside the solar system. Also called an extrasolar planet.
field (in physics) A region in space where certain physical effects operate, such as magnetism (created by a magnetic field), gravity (by a gravitational field), mass (by a Higgs field) or electricity (by an electrical field).
galaxy A massive group of stars bound together by gravity. Galaxies, which each typically include between 10 million and 100 trillion stars, also include clouds of gas, dust and the remnants of exploded stars.
geometry The mathematical study of shapes, especially points, lines, planes, curves and surfaces.
helium An inert gas that is the lightest member of the noble gas series. Helium can become a solid at -272 degrees Celsius (-458 degrees Fahrenheit).
hybrid An organism produced by interbreeding of two animals or plants of different species or of genetically distinct populations within a species. Such offspring often possess genes passed on by each parent, yielding a combination of traits not known in previous generations. The term is also used in reference to any object that is a mix of two or more things.
lunar Of or relating to Earth’s moon.
Milky Way The galaxy in which Earth’s solar system resides.
moon The natural satellite of any planet.
NASA Short for the National Aeronautics and Space Administration. Created in 1958, this U.S. agency has become a leader in space research and in stimulating public interest in space exploration. It was through NASA that the United States sent people into orbit and ultimately to the moon. It also has sent research craft to study planets and other celestial objects in our solar system.
new moon The phase of the moon that appears fully dark, when viewed from Earth. At this time, the moon will sit between the earth and sun. So the lunar face lit by the sun is turned away from us.
New Zealand An island nation in the southwest Pacific Ocean, roughly 1,500 kilometers (some 900 miles) east of Australia. Its “mainland” — consisting of a North and South Island — is quite volcanically active. In addition, the country includes many far smaller offshore islands.
occultation A celestial eclipse-like event in which an object that appears large from Earth, such as the moon, obscures a smaller-seeming object, such as a distant star.
orbit The curved path of a celestial object or spacecraft around a star, planet or moon. One complete circuit around a celestial body.
penumbra The outer edges of the moon's shadow, a zone that is not completely dark. During a solar eclipse, people within the moon's penumbra will see only a partial blockage of the sun's light.
phenomena Events or developments that are surprising or unusual.
planet A celestial object that orbits a star, is big enough for gravity to have squashed it into a roundish ball and has cleared other objects out of the way in its orbital neighborhood. To accomplish the third feat, the object must be big enough to have pulled neighboring objects into the planet itself or to have slung them around the planet and off into outer space. Astronomers of the International Astronomical Union (IAU) created this three-part scientific definition of a planet in August 2006 to determine Pluto’s status. Based on that definition, IAU ruled that Pluto did not qualify. The solar system now includes eight planets: Mercury, Venus, Earth, Mars, Jupiter, Saturn, Uranus and Neptune.
radiation (in physics) One of the three major ways that energy is transferred. (The other two are conduction and convection.) In radiation, electromagnetic waves carry energy from one place to another. Unlike conduction and convection, which need material to help transfer the energy, radiation can transfer energy across empty space.
radio To send and receive radio waves, or the device that receives these transmissions.
radio waves Waves in a part of the electromagnetic spectrum. They are a type that people now use for long-distance communication. Longer than the waves of visible light, radio waves are used to transmit radio and television signals. They also are used in radar.
relativity (in physics) A theory developed by physicist Albert Einstein showing that neither space nor time are constant, but instead affected by one’s velocity and the mass of things in your vicinity.
solar eclipse An event in which the moon passes between the Earth and sun and obscures the sun, at least partially. In a total solar eclipse, the moon appears to cover the entire sun, revealing on the outer layer, the corona. If you were to view an eclipse from space, you would see the moon’s shadow traveling in a line across the surface of the Earth.
solar system The eight major planets and their moons in orbit around our sun, together with smaller bodies in the form of dwarf planets, asteroids, meteoroids and comets.
star The basic building block from which galaxies are made. Stars develop when gravity compacts clouds of gas. When they become dense enough to sustain nuclear-fusion reactions, stars will emit light and sometimes other forms of electromagnetic radiation. The sun is our closest star.
sun The star at the center of Earth’s solar system. It’s an average size star about 26,000 light-years from the center of the Milky Way galaxy. Also a term for any sunlike star.
supernova (plural: supernovae or supernovas) A massive star that suddenly increases greatly in brightness because of a catastrophic explosion that ejects most of its mass.
theory (in science) A description of some aspect of the natural world based on extensive observations, tests and reason. A theory can also be a way of organizing a broad body of knowledge that applies in a broad range of circumstances to explain what will happen. Unlike the common definition of theory, a theory in science is not just a hunch. Ideas or conclusions that are based on a theory — and not yet on firm data or observations — are referred to as theoretical. Scientists who use mathematics and/or existing data to project what might happen in new situations are known as theorists.
transit (in astronomy) The passing of a planet, asteroid or comet across the face of a star, or of a moon across the face of a planet.
umbra The darkest part of the moon's shadow during a solar eclipse. For people on Earth passing within the umbra, the moon will appear to totally cover the sun, briefly blacking out its light.
unique Something that is unlike anything else; the only one of its kind.
Venus The second planet out from the sun, it has a rocky core, just as Earth does. Venus lost most of its water long ago. The sun’s ultraviolet radiation broke apart those water molecules, allowing their hydrogen atoms to escape into space. Volcanoes on the planet’s surface spewed high levels of carbon dioxide, which built up in the planet’s atmosphere. Today the air pressure at the planet’s surface is 100 times greater than on Earth, and the atmosphere now keeps the surface of Venus a brutal 460° Celsius (860° Fahrenheit).
wave A disturbance or variation that travels through space and matter in a regular, oscillating fashion.
wavelength The distance between one peak and the next in a series of waves, or the distance between one trough and the next. Visible light — which, like all electromagnetic radiation, travels in waves — includes wavelengths between about 380 nanometers (violet) and about 740 nanometers (red). Radiation with wavelengths shorter than visible light includes gamma rays, X-rays and ultraviolet light. Longer-wavelength radiation includes infrared light, microwaves and radio waves. | <urn:uuid:8301e1dc-7108-4120-a919-9ca968b03adc> | CC-MAIN-2019-47 | https://www.sciencenewsforstudents.org/article/eclipses-come-many-forms | s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496667333.2/warc/CC-MAIN-20191113191653-20191113215653-00138.warc.gz | en | 0.93889 | 5,351 | 3.71875 | 4 |
It’s a warm fall day outside Mission Dolores, where an Ohlone man whose baptismal name is Francisco is tied to a whipping post. A week ago he ran away to the village where he grew up, but the soldiers hunted him down and brought him back in chains. A priest has gathered the other Indians at the mission to witness Francisco’s punishment. “Remember that this is for your own good, my children,” he says as he raises the leather whip. “The devil may tempt you to run away. But you must fight off temptation to gain eternal life.” He brings down the whip on Francisco’s bare back. After applying 25 lashes, he drops the whip, bows his head, and says a prayer.
This is not a side of mission life that’s taught in the fourth grade. But scenes like this took place at every one of the 21 missions in the chain begun in 1769 by a diminutive Franciscan friar named Junípero Serra. Every schoolchild knows that California Indians at Serra’s missions were taught the Gospel, fed, and clothed; few know that many were also whipped, imprisoned, and put in stocks. Junípero Serra’s pious hope to convert pagan Indians into Catholic Spaniards resulted not only in the physical punishment of countless Indians, but in the death of tens of thousands of them—and, ultimately, in the eradication of their culture. So it was understandable that when Pope Francis announced plans to canonize Junípero Serra in January, some California Indians felt, at least figuratively, as if they were being whipped by a priest again.
“I felt betrayed,” says Louise Miranda Ramirez, tribal chairwoman of the Ohlone Costanoan Esselen Nation, whose people occupied large parts of northern California at the time of Serra’s arrival in 1769. “The missions that Serra founded put our ancestors through things that none of us want to remember. I think about the children being locked into the missions, the whippings—and it hurts. I hurt for our ancestors. I feel the pain. That pain hasn’t gone away. And it needs to be corrected.”
But the pain is not being corrected. In fact, say many Native American leaders, it’s being exacerbated. Since the announcement of the pope’s plan, Indians across California have risen up in protest. On Easter, representatives of the Ohlone, Amah Mutsun, Chumash, and Mono peoples gathered at Serra’s home mission in Carmel, San Carlos Borromeo, to denounce the canonization. Protests have also been held at Mission Dolores in San Francisco, Mission Santa Barbara, and Mission San Juan Bautista. When Francis canonizes Serra in Washington, D.C., on September 23, more demonstrations will likely take place (though the actions of a pope—who is infallible by definition—are not subject to any trappings of democracy, least of all public protests).
The conflict juxtaposes two radically different perceptions of the soon-to-be saint. In one, he is a selfless “evangelizer of the West,” as Francis called Serra when he announced the canonization: a man who forfeited his worldly possessions and traveled to the ends of the earth to save souls. In the other, he is a zealous servant of the Inquisition and agent of colonialism whose coercive missions destroyed the indigenous peoples who encountered them. These two versions of history not only force us to ponder whether a man who carries a mule train’s worth of toxic historical baggage should be declared a saint, but also raise difficult questions about Latino identity and the founding myths of the United States—because there is a political dimension to Francis’s choice: Well aware that Latinos are now the largest ethnic group in California and constitute a third of American Catholics, the Catholic church is making much of the fact that Serra will be America’s first Hispanic saint. His canonization may indeed promote greater acceptance of Latino Americans, especially immigrants, and challenge the Anglocentric creation myth that starts American history with Plymouth Rock. But it’s far from clear whether Serra can, or should, serve as an exemplar for Latinos. And the church’s attempt to weave Spain into the nation’s DNA raises as many questions as it answers.
In short, Francis kicked an enormous historical, theological, and ethical hornet’s nest when he made his announcement. Whether he did so wittingly or not, only he knows. But the dustup about Serra’s canonization gives California, and the nation, an opportunity to learn a lot more about California’s deeply tragic Spanish and colonialist origins than most ever knew before.
If any two people embody the contradictions and complexities of the Serra controversy, they’re Andrew Galvan and Vincent Medina. Galvan, 60, a curator at San Francisco’s Mission Dolores, is a descendant of Ohlone, Coast and Bay Miwok, and Patwin tribal groups; like many California Indians, he also has Mexican ancestry and is a devout Catholic, but unlike most, he emphatically supports the Serra canonization, or “cause” in church nomenclature. Medina, Galvan’s 28-year-old cousin, is a fellow curator at Mission Dolores and another devout Catholic—but also a staunch and vocal opponent of Serra’s cause.
I meet with Galvan and Medina in Fremont at Mission San Jose, which Galvan proudly tells me his ancestors helped build. A loquacious man with a neatly trimmed beard, wearing a thick necklace of Indian beads, he is as effusive as Medina is reserved and soft-spoken. Sitting on a pew inside the reconstructed mission, near a baptismal font where his great-great-great-grandmother was baptized in 1815, Galvan explains the roots of his love of Serra. “My family home is across the street. My parents were both devout Catholics, and on summer vacation, our hobby was to visit California missions. I can remember the family asking, ‘What mission haven’t we been to yet?’”
Paradoxically, Galvan goes on to describe the mission system as a monstrosity: “In California schools, the fourth-grade kids make papier-mâché or sugar-cube missions. But they’re never asked to build a slave plantation or a concentration camp with incinerators.” Given his equation of the missions with such hideous institutions, why does he support canonizing the Father-President of the missions, who could be seen as the Spanish colonialist equivalent of Heinrich Himmler?
“Because Junípero Serra is an evangelist,” he says. “Yes, the missions were part of colonialism, and colonialism is rotten to the core. But I blame the system, not the individual. Junípero Serra was a very good person operating in a very bad situation.”
Galvan admits that he lives with internal contradictions. “I can stand outside on the steps of this mission and say to you that because of colonial institutions like this, the traditional lifeways of my ancestors were almost eradicated. Then I can walk through those doors over to that baptismal font and say proudly, ‘Here’s where my family became Christians.’ That is the strangest juxtaposition in which a person can find himself. To do that balancing act and to be able to sleep at night is what I’ve had to do all these years.”
When Galvan excuses himself to check his email, Medina, who has been patiently waiting in an adjoining pew, comments that as a practicing Catholic, he sees nothing wrong with criticizing the church. “The traditional Ohlone world and the Catholic faith are not incompatible,” he points out. Many Ohlone attend both native dances and mass on Sundays. Medina doesn’t support Serra’s cause, he says, because he feels that “saints should be people who transcend their time, and Serra didn’t.”
“Serra had whips sent up to San Francisco, to different missions, and he would tell soldiers and priests to whip Indian people if they were acting out of line,” Medina continues. “And Indian people had never seen whips. That wasn’t part of our reality in the pre-contact world.”
For Medina, the cultural devastation caused by the missions couldn’t be any more personal. “Back in the 1930s, there were just two remaining speakers of our language, Chochenyo,” he says. “When I went to my grandfather and asked him about the language, he said, ‘We don’t know the language anymore. Why don’t you go learn it?’” Medina did, and now he teaches it. “But there’s a reason why my grandfather doesn’t speak Chochenyo,” he says. “There’s a reason why it wasn’t passed down to him from his mother. The decline started with Junípero Serra’s policies. He used to gripe about how Indians wouldn’t stop speaking their languages. He wanted them to speak Spanish.”
For most of the Golden State’s history, Serra, who headed the religious wing of Spain’s 1769 Sacred Expedition to colonize California, was considered the greatest Californian of them all. He and Ronald Reagan are the only Californians honored with a statue in the National Statuary Hall in Washington, D.C. Generations of schoolchildren were taught that Serra was a kind and beneficent figure. A 1957 state elementary school textbook, California Mission Days, managed to avoid mentioning religion altogether when introducing Serra as a boy named José: “[In] a faraway country called California lived darkskinned Indians. They had nobody to help or teach them. This José longed to do... He wanted to help the Indians of California.”
Streets and schools were named after Serra across the state, making him essentially a secular saint. In 1934, the Vatican began the process of declaring him an actual one by giving the Congregation for the Causes of Saints, the Vatican organization responsible for canonization, the green light to initiate the case for Serra’s sainthood. But from the very beginning, Serra’s cause faced strong criticism from scholars. Following the lead of pioneering anthropologists like UC Berkeley’s Alfred Kroeber, historians had begun to look at the Spanish colonial enterprise from the Indian point of view. From that perspective, the missions, however well-meaning they might have been, were instruments of mass death and cultural devastation. In 1946, journalist Carey McWilliams compared the missions to Nazi death camps, a trope that has remained popular in some quarters ever since.
But despite the inexorable triumph of the darker, revisionist view of the missions over the sentimental, Eurocentric one, Serra’s cause continued to advance through the slow-grinding machinery of the Vatican. In 1985, Pope John Paul II declared Serra venerable, the second of four stages required for sainthood. Indian activists exploded with outrage, but the church dismissed their objections. In 1988, the pope beatified Serra, the third stage, after the congregation approved the first of Serra’s two requisite miracles: the curing of a St. Louis nun suffering from lupus. A second miracle, however, proved elusive, leaving Serra in saintly limbo. And there matters stood until Pope Francis took the unusual step of waiving the second miracle when he made his surprise announcement in January.
Just how bad was Serra? And how much of a stretch is it to call him a saint? Leading Serra scholars say that the Franciscan’s view of Indians as childlike heathens who needed the lash for their own good was commonplace in his era. But they also acknowledge that the mission system he founded with the best of intentions had cataclysmic consequences for native peoples.
Steven Hackel, associate professor of history at UC Riverside and author of Junípero Serra: California’s Founding Father, rejects the equation of missions with slave labor camps. “Indians came into the missions for a lot of different reasons,” he says, including hunger, the attraction of the impressive Spanish technology, and because relatives had joined. “They weren’t forced there by soldiers with guns and lances.” But he also affirms perhaps the single most damning fact about the missions: Indians, once baptized, were not permitted to leave except for brief visits home (and sometimes not at all), and they were whipped for infractions, including running away. “In Serra’s day,” Hackel says, “coercion was a central component of the California mission system.”
“You’re not going to find evidence of Father Serra himself beating up an Indian,” Hackel notes. “But it’s clear from his own words that he endorsed corporal punishment as a means of ‘correcting,’ his word, wayward Indians. He viewed them as children, and so did Spanish law. Minors were to be corrected with flogging.” Hackel’s work also challenges the image of Serra as a benign, helpful figure, a kind of ur-Unitarian in a brown robe. Pointing to Serra’s ardent work in Mexico for the Inquisition, Hackel paints him as a self-flagellating zealot who verged on medieval fanaticism.
Robert Senkewicz, professor of history at Santa Clara University (a Jesuit Institution) and coauthor with Rose Marie Beebe of a new book on Serra, Junípero Serra: California, Indians, and the Transformation of a Missionary, says, “It’s not fair to say that Serra supported excessive punishment by the standards of his time. But nobody at the time argued that flogging was illegitimate. The major issue was who was going to be ordering it: soldiers or priests.” This hardly supports the notion that Serra was “an intrepid defender of the rights of native people,” as an official in the Congregation for the Causes of Saints claimed. Or, to return to Medina’s point, that Serra transcended his time.
For its part, the Catholic hierarchy seems to be speaking with two tongues about Serra. Francis’s praise of Serra as a great evangelizer, and Vatican statements such as the one by the congregation, ignore the mission system’s dark legacy and promulgate the traditional vision of Serra as a selfless servant of the Lord. But in a sign of the evolution of Catholic thinking on this issue, Sacramento’s Father Ken Laverone, one of the two American clerics most intimately involved with the Serra cause, takes a view that’s basically indistinguishable from Andrew Galvan’s: Serra and evangelizing, good; Spanish colonialism and destruction of Indians, bad.
“The critics are not going to buy this, I know, but the way we would look at the notion of sin is one’s intent,” Laverone says. “If you intend to do something evil, that’s wrong. But if you do something that you intend for good purposes, but it has evil consequences in the long run, that’s different. And Serra’s first intent was to bring the Gospel to the Americas.”
The crux for Laverone is that while Serra sparked the wildfire that immolated California’s indigenous peoples, he didn’t mean to. And besides, continues Laverone, California Indians were doomed anyway. “If it were not the Spaniards who did this in conjunction with the church, it would have been the Russians or the British, and the same thing or worse would have happened,” he says. While this might be true—many historians regard the Spanish era as less calamitous for Indians than the Mexican era or, certainly, the openly genocidal American era—being the least awful of an awful lot is hardly a ringing endorsement for anything, let alone sainthood. But Laverone is not moved by this argument. Confronted with it, he comments only that it’s “curious” that so many non-Catholics have taken an interest in what he paints as an internal Catholic matter.
Generally speaking, choices of saints are not controversial. The beatification of Salvadoran archbishop Oscar Romero, who was gunned down at the altar in 1980 by a suspected right-wing death squad, was held up for decades by conservative Latin American prelates who didn’t want to endorse liberation theology. But that was an internal church matter: Nobody protested when Francis beatified Romero in May. Given that even supporters of Serra’s canonization have been forced to introduce so many caveats that Saint Serra may ascend to a sky filled with asterisks, an obvious question arises: Why did Pope Francis, the most progressive pope since John XXIII died in 1963, choose to canonize this deeply problematic figure?
During his brief tenure, the Argentinean pope has been an outspoken advocate for the poor and the environment, a critic of global capitalism, and a defender of indigenous people. While visiting Bolivia in July, Francis said, “Many grave sins were committed against the native people of America in the name of God. I humbly ask forgiveness, not only for the offense of the church herself, but also for crimes committed against the native peoples during the so-called conquest of America.” That the man who made this statement should move to canonize Junípero Serra does not appear to make sense.
Professor Senkewicz believes that Francis “may not have been fully aware of the California controversy.” Other observers, however, are convinced that the pope knew the risks. Whatever the case, his motivations are clearly—if paradoxically—progressive. The Vatican has made it plain that it sees the canonization of Serra as a means of empowering the growing American Latino population, defending the rights of immigrants, and challenging the idea that the United States was crafted exclusively by Northern Europeans. All laudable goals—but it’s far from clear that Serra is the right instrument to achieve them.
First, while Serra was indeed Hispanic, he was certainly not Latino (the term refers to someone of Latin-American descent). About the only characteristic that he shared with the vast majority of the world’s present-day Latinos, other than Catholicism, was a common language. It’s a considerable reach to claim that a priest born on the Spanish island of Mallorca in 1713, who was part of a colonial enterprise that helped wipe out indigenous peoples across Latin America and California, is the spiritual ancestor of the mostly mixed-race Latinos of the 21st century. As for the church’s desire to revise the Anglocentric story about the creation of the United States, it’s true that Spanish missionaries and explorers were in North America long before the Puritans landed in New England. But Spanish colonialism here was a dead end, the last gasp of Spain’s expiring feudal empire, and its legacy has more in common with romantic myth than with strong founding-father material.
Whatever its effect on Latinos, immigrants, and Americans’ conceptions of their nation’s origins, the Serra controversy has already led to some commendable outreach from the Catholic church in California to native people. In the wake of Francis’s announcement, Laverone and his fellow vice-postulator told the California bishops that it created an opportunity to address the issue of church-Indian relations. The bishops agreed, and specific committees were set up to address education, curriculum, museums, missions, and liturgies. Native descendants of mission Indian peoples have been invited to visit the missions, and the whitewashed history presented in many mission museums is being revisited. Most important, Laverone hopes that his order will soon issue an apology. He sees these steps as the “silver lining” of the controversy.
Even so, Deborah Miranda, a professor of English at Washington and Lee University who is of Ohlone, Esselen, and Chumash tribal descent (and is Louise Ramirez’s sister), is wary. “Apologies that aren’t followed by a change of behavior, in general, don’t carry a lot of weight,” she warns.
Only one consequence of l’affaire Serra is agreed upon by all parties as a positive: It has opened up discussion of a tragic chapter in California history, one of which even many educated people are ignorant. “I think it would be good for all of us to tell the whole truth about the missions,” Father Laverone says. “Let’s get it out on the table and deal with it, and move forward.”
Knowledge is surely a good thing. But for many natives, the church’s hope that canonizing the author (albeit well-intentioned) of their forebears’ destruction will help atone for that destruction is as fruitless as its search for Serra’s elusive second miracle. Some things simply cannot be undone.
“I really don’t give a hoot whether Serra is canonized or not,” says Jonathan Cordero, a sociology professor at California Lutheran University who is descended from the only surviving lineage of Ohlone people from the San Francisco peninsula. Cordero opposes the canonization and regrets the pain that it has caused many native people. And yet, “it’s not going to make one bit of difference in my life, or, I believe, in the lives of California Indians from this point forward. The damage done to California Indians was done a long time ago.”
Originally published in the September issue of San Francisco
Department of Pediatric Nursing, Faculty of Health Sciences, Sakarya University, Sakarya, Turkey
*Address for Correspondence: Nursan Cinar, Sakarya University, Faculty of Health Sciences, Esentepe Campus, 54187, Sakarya, Turkey
Dates: Submitted: 29 November 2016; Approved: 20 February 2017; Published: 22 February 2017
Cite this article: Cinar N, Menekse D. Affects of Adolescent Pregnancy on Health of Baby. Open J Pediatr Neonatal Care. 2017;2(1): 019-023
Copyright: © 2017 Cinar N, et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Keywords: Adolescent; Adolescent mothers and their child; Health outcomes; Pregnancy
The effects of adolescent pregnancies on child health are discussed in this paper. In recent decades adolescent pregnancy has become an important health issue in many countries, both developed and developing. According to WHO data from 2010, there are nearly 1.2 billion adolescents in the world, making up 20% of the world population, and 85% of these adolescents live in developing countries. Pregnancy in adolescence, a period of transition from childhood to adulthood marked by physical, psychological and social changes, has become a public health issue of increasing importance. Individual, cultural, social, traditional and religious factors play a great role in adolescent pregnancies, which are among the risky pregnancies. Studies clearly show that adolescent pregnancies, compared to adult pregnancies, carry a higher prevalence of health risks such as premature delivery, low birth weight newborns, neonatal complications, congenital anomalies, problems in mother-baby bonding and breastfeeding, and infant neglect and abuse. In conclusion, it is clear that adolescent pregnancies have negative effects on the health of children. Both society and health professionals have major responsibilities in this regard. Careful prenatal and postnatal monitoring of pregnant adolescents and the provision of necessary education and support would have positive effects on both maternal and child health. In this review, we discuss the effects of adolescent pregnancy on the health of the baby.
In recent decades adolescent pregnancy has become an important health issue in many countries, both developed and developing. According to WHO data from 2010, there are nearly 1.2 billion adolescents in the world, making up 20% of the world population, and 85% of them live in developing countries. Among adolescent girls aged 15-19, nearly 16 million births occurred in 2008, accounting for roughly 11% of all births worldwide. Although adolescent pregnancy is a serious health problem (anemia, hypertension, preeclampsia and eclampsia, abortion, assisted delivery, stillbirth, maternal complications and postpartum depression) and social problem (decreased self-confidence, disruption to the life of the adolescent mother, withdrawal from social activities) worldwide, the great majority of cases occur in developing countries and carry considerable risk [4,5]. In the United States, 9% of women aged 15 to 19 years become pregnant each year, of whom 5% deliver a baby, 3% choose to have an induced abortion, and 1% experience a miscarriage or stillbirth [6,7]. The USA has the highest incidence (52.1 per 1000 15 to 19 year olds) in the developed world and the UK has the highest incidence (30.8 per 1000 15 to 19 year olds) in Europe [4,8].
The vast majority of these births (95%) occur in low- and middle-income countries. The 2014 World Health Statistics indicate that the average global birth rate among 15 to 19 year olds is 49 per 1000 girls. The Adolescent Delivery Rate (ADR) ranges from 1 to 299 births per 1000 adolescent girls, with the highest rates in countries of sub-Saharan Africa.
According to Turkey Demographic and Health Survey (TDHS) 2013 data, 17.2% of the population, nearly one fifth, consists of adolescents aged 10-19. The percentage of motherhood in adolescence increases with age: 0.0% at age 15, 0.5% at 16, 3.4% at 17, 4.6% at 18 and 16.2% at 19. According to TDHS 2008 data the percentage of adolescent mothers was 6%, while in TDHS 2013 it had fallen to 5%. In Turkey, a study by the Ministry of Health conducted across 81 provinces examined adolescent births; the number and rate of adolescent births were higher in the eastern provinces and, in general, in provinces identified as district health centers and in provinces receiving immigrants.
Individual, familial and social factors play a great role in the increase in adolescent pregnancies. In the studies conducted, low maternal education [1,5,13-15], lower economic status, mothers not in employment [1,4,5,16], maternal smoking, lack of information about contraception and early sexual activity are all factors associated with adolescent pregnancy. Some studies indicate a significant correlation between ethnicity and adolescent pregnancy; however, other studies report no association between them.
In a study conducted in Turkey, 73.8% of adolescent pregnancies at ages 12-17 and 24.4% at ages 18-19 occurred within illegal (unregistered) marriages. Under the Turkish civil law currently in force such marriages are not valid, but a considerable number of these couples preferred religious marriages, which are not recognized as a legal marital status. In Thailand, 91.6% of adolescent mothers below 20 are single parents. Illegal marriage or divorce among adolescent mothers is another factor that negatively affects maternal and infant health.
Giving birth to a child during the adolescent years is frequently associated with long-term adverse consequences for the child. Together with biological immaturity, factors such as unintended pregnancy, inadequate perinatal care, poor maternal nutrition and maternal stress may cause adverse obstetric and neonatal outcomes in pregnant adolescents.
The effects of adolescent pregnancies on infant health can be examined under several categories (Figure 1), which we explain in the following sections.
Preterm birth / premature infant / preterm delivery
Preterm birth is defined as a live birth before 37 weeks of pregnancy. The rates of preterm birth, low birth weight and asphyxia are higher among the newborns of adolescent mothers, all of which increase the risks of death and of future health problems for the baby. Preterm birth is an important perinatal challenge faced in clinical practice. Several studies have explored long-term health outcomes for premature infants, including chronic lung disease, visual and hearing problems and neurodevelopmental delays [23,24].
In the studies conducted, the rates of preterm birth in adolescents versus adults are reported as follows: Mukhopadhyay, et al. 27.7% vs 13.1%; Omar, et al. 22.5% vs 2.9%; Rasheed, et al. 13.2% vs 5.6%; Edirne, et al. 28.4% vs 14.7%; Duvan, et al. 18.5% vs 8.7%; Kovavisarach, et al. 12.1% vs 7.1%; Gupta, et al. 8.7% vs 7.6%; Keskinoglu, et al. 18.2% vs 2.1%; Akdemir, et al. 16% vs 1.8%. There is a significant difference between them (Table 1); a simple illustrative comparison of these rates is sketched after the table.
|Table 1: Rates of preterm birth (%) in adolescent and adult mothers.|
|Study|Adolescent mothers|Adult mothers|
|Mukhopadhyay, et al.|27.7|13.1|
|Omar, et al.|22.5|2.9|
|Rasheed, et al.|13.2|5.6|
|Edirne, et al.|28.4|14.7|
|Duvan, et al.|18.5|8.7|
|Kovavisarach, et al.|12.1|7.1|
|Gupta, et al.|8.7|7.6|
|Keskinoglu, et al.|18.2|2.1|
|Akdemir, et al.|16|1.8|
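As a rough aid to interpreting Table 1, the adolescent and adult percentages can be compared as crude relative risks (adolescent rate divided by adult rate). The short Python sketch below is purely illustrative and is not part of the original studies: it takes the percentages at face value and, because the underlying sample sizes are not reported here, makes no attempt at confidence intervals or significance testing.

```python
# Illustrative only: crude relative risks computed from the percentages in Table 1.
# Sample sizes are not available here, so no confidence intervals or
# significance tests are attempted.

TABLE_1 = {
    "Mukhopadhyay, et al.": (27.7, 13.1),
    "Omar, et al.": (22.5, 2.9),
    "Rasheed, et al.": (13.2, 5.6),
    "Edirne, et al.": (28.4, 14.7),
    "Duvan, et al.": (18.5, 8.7),
    "Kovavisarach, et al.": (12.1, 7.1),
    "Gupta, et al.": (8.7, 7.6),
    "Keskinoglu, et al.": (18.2, 2.1),
    "Akdemir, et al.": (16.0, 1.8),
}

def crude_relative_risk(adolescent_pct: float, adult_pct: float) -> float:
    """Ratio of preterm-birth rates: adolescent mothers vs. adult mothers."""
    return adolescent_pct / adult_pct

if __name__ == "__main__":
    for study, (adolescent, adult) in TABLE_1.items():
        rr = crude_relative_risk(adolescent, adult)
        print(f"{study:25s} adolescent {adolescent:5.1f}%  adult {adult:5.1f}%  RR {rr:4.1f}")
```

Run as a script, it simply prints one line per study with the two reported rates and their ratio.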
Low birth weight
Low birth weight is defined as a birth weight below 2500 grams, and very low birth weight as a birth weight below 1500 grams. Low birth weight is closely associated with fetal and neonatal mortality and morbidity, inhibited growth and cognitive development, and chronic diseases later in life. Many studies report a positive relationship between adolescent pregnancy and low infant birth weight [4,14,15,28].
Low Apgar score
Some studies report that the average 1st- and 5th-minute Apgar scores of infants born to adolescent mothers are below 7 [25,29], while others find no difference between adolescents and adults in the proportion of 1st- and 5th-minute Apgar scores below 7 [16,30]. One comparison reports the rate of infant Apgar scores below 7 as 10.1% in adolescents and 1% in adults.
Congenital anomalies
The rates of congenital abnormalities in adolescent deliveries are reported as 1.1%, 2.51% and 0.9% in different studies. Among major congenital abnormalities, cardiovascular and central nervous system anomalies are the most widely seen in adolescent deliveries.
In adolescent pregnancies the risk increases for central nervous system abnormalities such as anencephaly, spina bifida/meningocele and hydrocephalus/microcephalus; gastrointestinal abnormalities such as omphalocele and gastroschisis; and musculoskeletal abnormalities such as cleft lip and cleft palate, polydactyly and syndactyly. When adult and adolescent mothers are compared in terms of iron-folic acid supplementation, adult mothers (49.1%) use iron-folic acid tablets more often than adolescent mothers (40%), a significant difference.
Neonatal complications
In the study by Keskinoglu, et al., conducted in Izmir with adolescent mothers, meconium aspiration (8.7%), respiratory distress (2.3%), cord presentation (2%), Rh isoimmunization (1.8%), infection (0.9%) and postpartum traumatic stress (0.5%) were listed among the neonatal complications. In the study by Omar, et al., the rate of perinatal complications within the first 24 hours was 18.2% in adolescents and 4.9% in mothers aged 20-35. Across studies, the rate of premature rupture of membranes in adolescent pregnancies is reported as 2.2%, 20.9% and 16.39%.
Infant mortality
Infant mortality is an important indicator of a country's health status. According to TDHS 2013 data for Turkey, the infant death rate is higher among mothers younger than 20 (25 per thousand) and mothers aged 30-39 (25 per thousand) than among mothers aged 20-29 (14 per thousand). Infant mortality observed in adolescent pregnancies is given in Table 2.
|Table 2: The rate of infant deaths in adolescent pregnancies.|
|Study|Age group|Rate of infant deaths (%)|
|Akdemir, et al. (Sakarya, Turkey)|Ages 10-19|2.5|
|Akdemir, et al. (Sakarya, Turkey)|Ages 20-35|0.1|
|Malabarey, et al.|Ages below 15|0.86|
|Malabarey, et al.|Ages over 15|0.41|
|Chena, et al.|Ages 10-15|7.3|
|Chena, et al.|Ages 16-17|4.9|
|Chena, et al.|Ages 18-19|4.1|
|Mukhopadhyay, et al.|Ages 13-19|5.1|
|Mukhopadhyay, et al.|Ages 20-29|1.7|
A study in the city of Sakarya in Turkey found a postnatal infant death rate of 2.5% in adolescent pregnancies versus 0.1% in adult pregnancies. Malabarey, et al., in a US study evaluating the effect of young maternal age on adverse obstetrical and neonatal outcomes, report an infant death rate of 0.86% in pregnant adolescents below 15 and 0.41% in pregnant adolescents over 15. In another US study, neonatal mortality is reported as 7.3% at ages 10-15, 4.9% at ages 16-17 and 4.1% at ages 18-19; the younger the mother, the higher the death rate. In a study conducted in India, the rate of infant death within the first 48 hours was 5.1% in adolescents and 1.7% in adults, a significantly higher rate. The increased neonatal mortality in infants born to teenage mothers might be mediated by low weight gain during pregnancy, preterm birth and/or low birth weight. Adolescent mothers may gain too little weight during pregnancy, receive inadequate prenatal care and run a higher risk of hypertension and preeclampsia.
Problems in the mother-baby bonding process
Bonding, which develops over three periods (pregnancy, labor and the period after birth), is a mutual emotional relationship [32,33]. It deeply affects the physical, psychological and intellectual development of the child and retains its effect throughout life. Parents have to play a key role in order to maintain a healthy bonding process. Motherhood in adolescence is accepted as a risk factor for the adequacy of the mother-infant relationship and for the subsequent development of the infant.
The mother-child bond can deteriorate in the first years of the child's life, especially because the mother is still immature and is herself undergoing a period of development. Adolescent mothers, compared to adults, show a lower tendency to touch, talk to, smile at and accept their babies. Crugnola, et al. report that adolescent mothers spend more time in poor-quality interaction and play less with their babies. In our literature review we found only a limited number of studies on this subject; further informative studies are required.
Duration and success of breastfeeding
Breastfeeding among adolescent mothers is a biopsychological process involving both negative and positive factors, and social support plays an important role in the intention to breastfeed and in the initiation and continuation of breastfeeding. Despite substantial evidence of the maternal and infant benefits of breastfeeding, adolescent mothers initiate breastfeeding less often and maintain breastfeeding for shorter durations than their adult counterparts.
The intention to breastfeed is an important determinant of breastfeeding initiation and continuation. McDowell, Wang, & Kennedy-Stephenson report that 43% of adolescent mothers, 75% of mothers aged 20-29 and 75% of mothers above 30 intend to breastfeed. Kyrus, Valentine, & DeFranco emphasize that adolescent mothers (44%) have a lower rate of breastfeeding intention than adult mothers (65%), influenced by insufficient social support and poor socioeconomic conditions. Mothers who intend to breastfeed are more likely to start breastfeeding. Kyrus, et al. also state that breastfeeding rates in deliveries before 37 weeks are 20.9% for mothers younger than 15, 40.7% for mothers aged 15-19 and 56.8% for mothers older than 20, with the breastfeeding rate decreasing significantly as maternal age falls. Teenage mothers' breastfeeding experiences may be similar to adult women's, but teenage mothers may require additional breastfeeding support. Oddy, et al. report that 12.6% of mothers below 20, 27.2% of mothers aged 20-24, 29.9% of mothers aged 25-30, 21.5% of mothers aged 30-34 and 8.9% of mothers older than 35 breastfed their babies for less than 6 months. As for breastfeeding for more than 6 months, mothers younger than 20 have the lowest rate (3.2%).
A study conducted in a bistate metropolitan area in the Midwestern United States emphasizes that interactive education provided to adolescent mothers by a team of lactation and peer consultants has a positive effect on breastfeeding initiation and on duration up to 6 months postpartum. It is essential that breastfeeding education is given to adolescent mothers and that family and social support is increased.
Infant abuse and negligence
Infants of adolescent mothers, often the result of unplanned pregnancies, may face various problems. They are at risk of abuse, neglect and school failure and are more likely to engage in criminal behavior later on [5,44]. Adolescent mothers may not possess the same level of maternal skill as adults. There is debate in the literature regarding the association between maternal age and child abuse. It is emphasized that children of adolescent mothers have a higher rate of maltreatment. Most studies report that children of adolescent mothers are exposed to maltreatment of many kinds at higher rates than children of adult mothers [47,48].
Children of adolescent mothers are also reported to be at risk in terms of cognitive and social development. Negative environmental conditions, including lack of stimulation or of close and affectionate interaction with primary caregivers, child abuse, violence within the family, or even repeated threats of physical and verbal abuse during these critical years, can have a profound influence on the developing brain's nerve connections and neurotransmitter networks, potentially resulting in impaired brain development. A mother's lack of knowledge and experience of motherhood and baby care is a further risk factor for child neglect.
Adolescent pregnancy is a common public health issue for both mother and child in terms of health, emotional and social outcomes, and it should be evaluated carefully. In this review the effect of adolescent pregnancies on child health has been discussed. The literature shows that pregnant adolescents experience more health problems during pregnancy, labor and the postpartum period than pregnant adults. For the baby, premature birth, low birth weight, neonatal complications, congenital abnormalities, problems in mother-infant bonding and breastfeeding, and child abuse and neglect are widely reported consequences of adolescent pregnancy. The factors influencing these outcomes include the educational and occupational status of the pregnant adolescent, socioeconomic conditions, marital status, family structure, and racial and ethnic background. The risk factors for adolescent pregnancy are multiple and complex. To clarify this issue, more comprehensive epidemiological studies evaluating the effect of adolescent pregnancies on child health are needed.
As a result, it is clear that adolescent pregnancies have negative effects on the health of children. Both the society and the health professionals have major responsibilities on this subject. Careful prenatal and postnatal monitoring of pregnant adolescents and providing of necessary education and support would have positive effects on child health.
- Gokce B, Ozsahin A, Zencir M. Determinants of adolescent pregnancy in an urban area in Turkey a population-based case-control study. Journal of Biosocial Science 2007; 39: 301-311.
- World Health Organization. WHO [Internet]. 10 facts on adolescent health. 2010. [cited 2016 March 16]. Available from: https://www.who.int/features/factfiles/adolescenthealth/facts/en/index.html,
- World Health Organization. WHO [Internet]. Early marriages, adolescent and young pregnancies, [cited 2016 March 16]. Available from: https://apps.who.int/gb/ebwha/pdf_files/EB130/B130_12-en.pdf.
- Gupta N, Kiran U, Bhal K. Teenage pregnancies obstetric characteristics and outcome. European Journal of Obstetrics & Gynecology and Reproductive Biology. 2008; 137: 165-171
- Omar K, Hasim S, Muhammad NA, Jaffar A, Hashim SM, Siraj HH. Adolescent pregnancy outcomes and risk factors in Malaysia. International Journal of Gynecology and Obstetrics. 2010; 111: 220-223
- Darroch JE. Adolescent pregnancy trends and demographics. Curr Womens Health Rep. 2001; 1: 102-110.
- Malabarey OT, Balayla J, Klam SL, Shrim A, Abenhaim HA. Pregnancies in young adolescent mothers a population-based study on 37 million births. North American Society for Pediatric and Adolescent Gynecology. 2012; 25: 98-102.
- UNICEF. A league table of teenage births in rich nations. Innocenti Report Card No. 3; 2001: https://www.unicef-irc.org/publications/pdf/repcard3e.pdf
- World Health Organization. WHO [Internet]. Adolescent pregnancy. [cited 2016 March 16]. Available from: https://www.who.int/mediacentre/factsheets/fs364/en/.
- Turkey Population and Health Survey, 2013. Ankara. [cited 2016 March 16]. Available from: https://www.hips.hacettepe.edu.tr/tnsa2013/rapor/TNSA_2013_ana_rapor.pdf
- Guney R, Eras Z, Ayar B, Saridas B, Dilmen U. Adolescent births in Turkey. Sakarya Medical Journal. 2012; 3: 91-92.
- Santos MI, Rosario F. A score for assessing the risk of first-time adolescent pregnancy. Family Practice. 2011; 28: 482-488.
- Ahmed MK, Ginneken J, Razzaque A. Factors associated with adolescent abortion in a rural area of Bangladesh. Tropical Medicine and International Health. 2005; 10: 198-205
- Edirne T, Can M, Kolusari A, Yildizhan R, Adali E, Akdag B. Trends, characteristics, and outcomes of adolescent pregnancy in eastern Turkey. International Journal of Gynecology and Obstetrics. 2010; 110: 105-108
- Mukhopadhyay P, Chaudhuri RN, Paul B. Hospital-based perinatal outcomes and complications in teenage pregnancy in India. J Health Popul Nutr. 2010; 28: 494-500
- Raatikainen K, Heiskanen N, Verkasalo PK, Heinonen S. Good outcome of teenage pregnancies in high-quality maternity care. European Journal of Public Health. 2005; 16: 157-161.
- Demirgoz M, Canbulat N. [Adolescent pregnancy: Review]. Turkiye Klinikleri J Med Sci. 2008; 28: 947-952.
- Keskinoglu P, Bilgic N, Picakciefe M, Giray H, Karakus N, Gunay T. Perinatal outcomes and risk factors of Turkish adolescent mothers. J Pediatr Adolesc Gynecol. 2007; 20: 19-24.
- Kovavisarach E, Chairaj S, Tosang K, Asavapiriyanont S, Chotigeat U. Outcome of teenage pregnancy in Rajavithi Hospital. J Med Assoc Thai. 2010; 93: 1-8
- Centers for Disease Control and Prevention (CDC) [Internet]. [cited 2016 March 18]. CDC Health Disparities and Inequalities Report -United States, 2011. Available from: https://www.cdc.gov/mmwr/pdf/other/su6001.pdf
- World Health Organization (WHO). [Internet]. [cited 2016 March 18]. Preterm birth, 2013, https://www.who.int/mediacentre/factsheets/fs363/en/
- World Health Organization (WHO). [Internet]. [cited 2016 March 18]. Available from: https://www.who.int/maternal_child_adolescent/topics/maternal/adolescent_pregna cy/en/.
- Kuo CP, Lee SH, Wu WY, Liao WC, Lin SJ, Lee MC. Birth outcomes and risk factors in adolescent pregnancies: results of a Taiwanese national survey. Pediatrics International. 2010; 52: 447-452.
- Rasheed S, Abdelmonem A, Amin M. Adolescent pregnancy in Upper Egypt. International Journal of Gynecology and Obstetrics. 2011; 112: 21-24
- Duvan CI, Turhan NO, Onaran Y, Gumus I, Yuvaci H, Gozdemir E. Adolescent pregnancies: maternal and fetal outcomes. The New Journal of Medicine. 2010; 27: 113-116.
- Akdemir N, Bilir F, Cevrioglu AS, Ozden S, Bostanci S. Investigation of obstetric outcomes of adolescent pregnancies in Sakarya Region. Sakarya Medical Journal. 2014; 4: 18-21.
- Chena XK , Wen SW, Fleming N, Yanga Q, Walker MC. Increased risks of neonatal and postneonatal mortality associated with teenage pregnancy had different explanations. Journal of Clinical Epidemiology. 2008; 61: 688-694.
- Bukulmez O, Deren O. Perinatal outcome in adolescent pregnancies: a case-control study from a Turkish university hospital. European Journal of Obstetrics &Gynecology and Reproductive Biology. 2000; 88: 207-212.
- Dane B, Arslan N, Batmaz G, Dane C. Does maternal anemia affect the newborn? Turk Arch Ped. 2013; 195-199.
- Thato S, Rachukul S, Sopajaree C. Obstetrics and perinatal outcome of Thai pregnant adolescents: a retrospective study. Int J Nurs Stud. 2007; 44: 1158-1164.
- Yildirim Y, Inal MM, Tinar S. Reproductive and obstetric characteristics of adolescent pregnancies in Turkish women. J Pediatr Adolesc Gynecol. 2005; 18: 249-253.
- Kavlak O, Sirin A. The Turkish version of Maternal Attachment Inventory. Journal of Human Sciences. 2009; 6: 189-202.
- Kose D, Cinar N, Altinkaynak S. Bonding process of the newborn and the parents. STED. 2013; 22: 239-245.
- Crugnola CR, Lerardi E, S Gazzotti S, Albizzati A. Motherhood in adolescent mothers: Maternal attachment mother-infant styles of interaction and emotion regulation at three months. Infant Behavior & Development 2014; 37: 44-56.
- Molina RC, Roca CG, Zamorano JS, Araya EG. Family planning and adolescent pregnancy. Best Practice & Research Clinical Obstetrics and Gynaecology. 2010; 24: 209-222
- Deutscher B, Fewell R, Gross M. Enhancing the interactions of teenage mothers and their at-risk children: effectiveness of a maternal-focused intervention. Topics Early Childhood Educ. 2006; 26: 194-205.
- Wambach KA, Cohen SM. Breastfeeding experiences of urban adolescent mothers. J Pediatr Nurse. 2009; 24: 244-254.
- Wambach KA, Aaronson L, Breedlove G, Domian EW, Rojjanasrirat W, Yeh HW. A Randomized controlled trial of breastfeeding support and education for adolescent mothers. West J Nurs Res. 2010: 33; 486-505.
- McDowell MA, Wang CY, Kennedy-Stephenson J. Breastfeeding in the United States: Findings from the National Health and Nutrition Examination Surveys 1999-2006. (NCHS data briefs, no. 5). Hyattsville, MD: National Center for Health Statistics. 2008. Retrieved from https://www.cdc.gov/nchs/products/databriefs/db05.htm
- Kyrus KA, Valentine C, DeFranco DO. Factors associated with breastfeeding initiation in adolescent mothers. J Pediatr. 2013; 163: 1489-1494.
- Sipsma H, Phil M, Biello KB, Cole-Lewis H, Kershaw T. Like father, like son: the intergenerational cycle of adolescent fatherhood. American Journal of Public Health. 2010; 100: 517-524.
- Nelson A, Sethi S. The breastfeeding experiences of canadian teenage mothers. Journal of Obstetric, Gynecologic & Neonatal Nursing. 2005; 34: 615-624.
- Oddy WH, Kendall GE, Jianghong L, Jacoby P, Robinson M, Psych H, et al. The long-term effects of breastfeeding on child and adolescent mental health a pregnancy cohort study followed for 14 years. J Pediatr. 2010; 156: 568-574.
- Barnet B, Liu J, DeVoe M, Duggan AK, Gold MA, Pecukonis E. Motivational intervention to reduce rapid subsequent births to adolescent mothers: a community-base randomized trial. Ann Fam Med. 2009; 7: 436-445.
- Klein JD. Adolescent pregnancy: current trends and issues. Pediatrics. 2005: 116.
- U.S. Department of Health and Human Services, Administration on Children, Youth and Families Child maltreatment 2007. Washington, DC: U.S: Government Printing Office; 2010.
- Dixon L, Browne K, Hamilton-Giachritsis C. Risk factors of parents abused as children: A mediational analysis of the intergenerational continuity of child maltreatment. Journal of Child Psychology and Psychiatry. 2005; 46: 47-57.
- Sidebotham P, Golding J. Child maltreatment in the "children of the nineties": A longitudinal study of parental risk factors. Child Abuse & Neglect. 2001; 25: 1177-1200.
- Pinzon JL, Jones VF. Care of adolescent parents and their children. Pediatrics. 2012; 130: e1743-1756.
CBI - China: Day 1 of 2,987 of the Second Sino-Japanese War, the full-scale invasion of China by Japan and full-scale resistance by China. It will become the largest Asian war fought in the 20th century and a major front of what will broadly be known as the Pacific War.
A surviving Chinese baby after a 1937 Japanese air raid on Shanghai
China was a divided country in 1937. Chiang Kai-Shek had formed a Nationalist Government in 1927, but his dictatorial regime was opposed by Mao Tse Tung’s Communists. Civil war between the two erupted.
In 1931, Japan, eager for the vast natural resources to be found in China and seeing her obvious weakness, invaded and occupied Manchuria. It was turned into a nominally independent state called Manchukuo on 08 Mar 32, but the Chinese Emperor who ruled it was a mere puppet of Japan.
The Japanese regarded the Chinese as racial inferiors. From their base in Manchuria, they pushed their territorial encroachment into China, and the whole north of the country was gradually taken over. Resistance was scant at first, as Chinese Nationalist and Communist infighting seemed more important to both sides than uniting to oppose the Japanese aggression. Chiang and Mao operated largely on their own against the Japanese, rarely cooperating with each other.
By 1940, the war against Japan descended into stalemate. The Japanese seemed unable to force victory and the Chinese were unable to evict the Japanese from the territory they had conquered. But western intervention in the form of economic sanctions (most importantly oil) against Japan would transform the nature of the war. It was in response to these sanctions that Japan decided to attack America at Pearl Harbor, and so initiate WWII in the Far East.
The Second Sino-Japanese War was, after the German-Soviet War, the second biggest and most costly war in human history. Over 20 million Chinese were killed, with an additional 10 million wounded. It is also known as the Double Seven War because of this date (7-7).
After the 07 Dec 41 attack on Pearl Harbor that pulled the US into war against Japan, the 2nd Sino-Japanese War would merge into the greater conflict of WWII and would continue until a week after Japan unconditionally surrenders to the Allies on 02 Sep 45, thanks to the Soviet Union's last minute power grabs.
The length, scale and nature of the Second Sino-Japanese War had debilitated China, which emerged politically unsettled, economically exhausted and scarred by an enormous amount of human suffering.
The First Sino-Japanese War, by the way, was fought in 1894-95, resulting in a Japanese victory, forcing China to cede Formosa and to recognize the nominal independence of Korea.
Day 1 of 3 of the Lugou Bridge Incident. After last night's accidental skirmish, one of the Japanese soldiers comes up missing and is assumed to have been captured. Turns out he had gotten lost, then found his way back. However, no one informed his superiors that he had returned. The Japanese mount an attack in revenge for their missing man.
Spain: Day 356 of 985 of the Spanish Civil War.
1938 — , July 7
Spain: Day 721 of 985 of the Spanish Civil War.
CBI - China: Day 366 of 2,987 of the 2nd Sino-Japanese War. China and Japan enter the second year of this, the Double Seven War, which began on 07 Jul 37.
Day 27 of 139 of the Battle of Wuhan.
1939 — , July 7
CBI - China: Day 731 of 2,987 of the 2nd Sino-Japanese War. China and Japan enter the third year of this, the Double Seven War, which began on 07 Jul 37.
Day 24 of 68 of the Battle of Tianjin.
CBI - Mongolia: Day 58 of 129 of the Battle of Khalkhin Gol, a border dispute between the Soviet Union and Japan.
1940 — , July 7
Atlantic: German sub U-99 sinks the British ship SEA GLORY and later toward the end of the day, sinks the Swedish ship BISSEN south of Cape Clear, Ireland.
Atlantic: German sub U-34 sinks the Dutch tanker LUCRECIA 100 miles west of Land's End in southwestern England.
Atlantic: During the night, the British sub HMS H43 lands Hubert Nicolle on the Channel Island of Guernsey to collect intelligence for the planned commando raid code named Operation AMBASSADOR.
Atlantic: Over the Channel, six British fighters are shot down while downing four enemy planes during aerial battles with the Luftwaffe.
MTO - Italy: Italy grants Vichy France permission to keep her Mediterranean bases armed.
West Africa: Operation CATAPULT: A British warplane off the carrier HMS HERMES attacks the French battleship RICHELIEU in dock at Dakar, sinking her in shallow waters.
East Africa: Day 28 of 537 of Italy's East African campaign in the lands south of Egypt.
CBI - China: Day 1,097 of 2,987 of the 2nd Sino-Japanese War. China and Japan enter the fourth year of this, the Double Seven War, which began on 07 Jul 37.
Day 236 of 381 of the Battle of South Guangxi.
1941 — , July 7
South America: Day 3 of 27 of the Ecuadorian-Peruvian War, a territorial dispute between Peru and Ecuador.
Atlantic: Under the pretext of defending the western hemisphere against Axis incursions, the US 1st Marine Brigade lands in Iceland to relieve the British garrison that has been there since the previous year.
ETO: During an RAF bombing raid on Münster, Germany, a Wellington bomber catches fire. New Zealander Sgt James Ward ties a rope around himself, climbs out onto the wing and puts out the flames, earning himself the Victoria Cross.
ETO - UK: After sundown, the Luftwaffe bombs Southampton.
Russian Front - Finland: Day 9 of 142 of Operation SILVER FOX, a joint German-Finnish campaign to capture the Russian port of Murmansk in the Arctic.
Russian Front - Finland: Day 7 of 140 of Operation ARCTIC FOX, a joint German-Finnish campaign against Soviet Northern Front defenses at Salla, Finland.
Russian Front - Finland: Day 16 of 164 of the Battle of Hanko.
Russian Front: Day 16 of 167 of Germany's Operation BARBAROSSA, the invasion of the USSR.
Russian Front - North: German Army Group North continues its advance toward Leningrad, capturing Pskov, Russia.
Russian Front - Center: Day 2 of 31 of the 1st Battle of Smolensk, Russia. German Army Group Centre needs to take this region before taking Moscow.
Russian Front - South: Day 6 of 21 of the Battle of Bessarabia, Russia. German and Romanian troops continue their attack at Bessarabia to take the land and city that Romania was forced to cede to the USSR a year ago.
Russian Front - South: In Ukraine, German Army Group South continues their drive toward southern Russia.
Russian Front - South: German and Romanian troops keep advancing toward Vinnitsa and the Black Sea port of Odessa, Ukraine.
MTO - Yugoslavia: Occupied Yugoslavia is carved up between the Axis powers of Germany, Italy, Hungary and Bulgaria, with Croatia becoming an independent state.
MTO - Libya: Day 89 of 256 of the Siege of Tobruk.
Middle East: Day 30 of 37 of the Battle for Syria and Lebanon.
East Africa: Day 393 of 537 of Italy's East African campaign in the lands south of Egypt.
CBI - China: Day 1,462 of 2,987 of the 2nd Sino-Japanese War. China and Japan enter the fifth year of this, the Double Seven War, which began on 07 Jul 37.
1942 — , July 7
USA: In an effort to make more new B-29 bombers available to the US Army, the US Navy cancels its orders for B-29s in exchange for a number of existing B-24s and B-25s from the US Army.
Atlantic: Day 11 of 14 of Germany's Hunt for Allied Convoy PQ-17. During this hunt, U-boats and the Luftwaffe will sink 24 merchant ships. During the night, 5 more ships are sunk as the remaining convoy runs off into the Arctic Ocean.
Atlantic: US 1st Air Force: An A-29 Hudson light bomber sinks German sub U-701 southeast of Cape Hatteras, North Carolina.
ETO - UK: Two pro-German spies, Jose Key and Alphons Timmerman are hanged at Wandsworth prison in England.
Germany: Himmler authorizes sterilization experiments to take place at Auschwitz Concentration Camp.
Russian Front - North: Day 303 of 872 of the Siege of Leningrad.
Russian Front - North: Day 64 of 658 of the Siege of the Kholm Pocket.
Russian Front - Center: Day 6 of 22 of Germany's Operation SEYDLITZ, a plan to trap and capture numerous Soviet troops. German Army Group A begins their offensive in the Donets Basin in eastern Ukraine.
Russian Front - South: Day 10 of 27 of the Battle of Voronezh, Russia. The 4th Panzer Army enters Voronezh. The Soviets respond by filling the gap between the Bryansk and Southwest Fronts, thus creating the Voronezh Front.
Russian Front - South: Day 10 of 150 of Germany's CASE BLUE, the failed offensive to take the Caucasus oil fields.
MTO - Egypt: Day 7 of 27 of the 1st Battle of El Alamein.
East Africa: Day 64 of 186 of the Battle of Madagascar.
CBI - China: Day 1,827 of 2,987 of the 2nd Sino-Japanese War. China and Japan enter the sixth year of this, the Double Seven War, which began on 07 Jul 37.
Day 54 of 124 of Japan's Zhejiang-Jiangxi Campaign, launched to punish anyone suspected of aiding the Doolittle raiders in China. Roughly 250,000 Chinese will be killed.
PTO - Alaska: Day 31 of 435 of the Battle of Kiska, Aleutian Islands.
PTO - Malaya: Day 139 of 357 of the Battle of Timor Island.
1943 — , July 7
Atlantic: The German sub U-951 is sunk in the eastern Atlantic by a B-24 of the 1st Antisubmarine Squadron.
Atlantic: Off the coast of Brazil, German sub U-185 sinks 3 merchant ships. East of Jamaica, U-759 sinks the Dutch cargo ship POELAU ROEBIAH.
Germany: At Wolf's Lair, German rocket research team leader Walter Dornberger presents his research to Hitler and convinces him to give rocket research and production a high priority.
Russian Front - North: Day 668 of 872 of the Siege of Leningrad.
Russian Front - North: Day 429 of 658 of the Siege of the Kholm Pocket.
Russian Front - Center: Day 3 of 50 of the Battle of Kursk, Russia. The German pincers are getting bogged down as Soviet resistance increases.
MTO - Italy: US 9th and 12th Air Forces bombs several targets in Sicily and Sardinia.
CBI - China: Day 2,192 of 2,987 of the 2nd Sino-Japanese War. China and Japan enter the seventh year of this, the Double Seven War, which began on 07 Jul 37.
US 14th Air Force attacks shipping at Canton.
PTO: Day 42 of 47 adrift in a raft for the survivors of B-24 GREEN HORNET that crashed 850 miles from Hawaii.
PTO - Alaska: Day 396 of 435 of the Battle of Kiska, Aleutian Islands.
PTO - New Guinea: Day 77 of 148 of the 2nd Battle of Lae-Salamaua. US 5th and 13th Air Forces provide air support. Australian troops capture Observation Hill, an important terrain feature west of Mubo.
PTO - Solomon Islands: Day 18 of 67 of the Battle of New Georgia. US 5th and 13th Air Forces provide air support.
1944 — , July 7
ETO - Germany: The 8th Air Force sends 1,100 heavy bombers from bases in England to various targets in Germany. An entire squadron of B-24 Liberators of the 492nd Bomb Group from North Pickenham is wiped out by the Luftwaffe on a bombing mission to Bernburg, Germany.
Crashed B-24 Liberator of the 492nd Bomb Group
Many heavy bombers were lost on July 7, 1944, particularly west of Bernburg, Germany, in a fierce air battle with the Luftwaffe. One B-24 that dropped out of formation after being shot up was never seen again... until 59 years later.
The McMurray Crew 801 of the 492nd Bomb Group was presumed to have crashed into the North Sea and all nine crewmen were listed as MIA until the wreckage and remains were found in the former East Germany by the Missing Allied Air Crew Research Team (MAACRT), headed by German Enrico Schwartz. He and his team spent two years looking for them and another two getting the proper permits to excavate the crash site. Today, all the members of McMurray Crew are buried in Arlington National Cemetery.
The 492nd Bomb Group was operational for a mere 89 days from April 18 to August 7, 1944. Three particular bad days earned them the unwanted nickname the Hard Luck Group...
The 3 deadliest missions for the 492nd Bomb Group were flown on 19 May 44, 20 Jun 44 and 07 Jul 44.
Casualties per 1,000 Combatants in WWII: a sidebar table compares the rates for the US Army Air Force, the US Marine Corps, the 467th Bomb Group and the 492nd Bomb Group (the figures themselves are not reproduced here).
Whenever a bomb group had an easy "milk run" mission, it was usually because some other bomb group was unfortunate enough to be attacked by the Luftwaffe that day. As the Luftwaffe's numbers continued to dwindle, the odds of surviving a mission improved, but was never completely without danger. As the Germans were forced back into Germany, they moved more and more anti-aircraft batteries into greater concentrations to protect targeted areas.
492nd Bomb Group related dates...
18 Apr 44: 492nd Bomb Group arrives at Station 143, North Pickenham
09 May 44: 492nd BG does exhibition formation flight over 2nd AD bases
11 May 44: 492nd BG flies its first mission; no casualties
19 May 44: 492nd BG Mission 05 to Brunswick; first of their three deadliest
20 Jun 44: 492nd BG Mission 34 to Politz; second of their three deadliest
07 Jul 44: 492nd BG Mission 46 to Bernburg; third of their three deadliest; entire 859th Bomb Squadron is wiped out
15 Jul 44: The bomb dump at nearby Metfield mysteriously explodes, rocking the countryside and destroying the 491st BG's base
07 Aug 44: 492nd BG flies its final mission (no casualties) and is disbanded
13 Aug 44: 801st Provisional Group is redesignated the 492nd Bomb Group
15 Aug 44: 491st Bomb Group moves into the base at North Pickenham
Charles W Arnett
The 492nd Bomb Group has a special place in the hearts of brothers Paul and David Arnett, the two guys behind this ScanningWWII website. Their father, Charles Arnett, served in the 492nd BG as the pilot of a B-24 Liberator. He was shot down on his third mission and spent a year as a POW at Stalag Luft III where the "Great Escape" took place. It is because of our love for him that we developed a passion for WWII.
Dates related to Charles Arnett...
26 Jul 42: US Lt Col Clark shot down flying an RAF fighter over France
19 May 44: 1st of 3 deadly missions for the Hard Luck 492nd Bomb Group
Ruth Register volunteered for the American Red Cross after her husband was killed in the Pacific, taking her from North Dakota to Europe and back, including being assigned to the 492nd Bomb Group at North Pickenham.
ETO - UK: Day 25 of 86 of the V-1 "Buzz Bomb" offensive on Britain.
ETO - France: Day 32 of 49 of Operation OVERLORD, the Allied invasion of Normandy, France, known forever simply as D-Day. D-Day+31: Allied Air Forces provide air support. Attacks near Carentan by the US 7th Army are held off by German counterattacks.
ETO - France: Day 32 of 62 of the Battle of Caen. The RAF bombs German positions in and around Caen.
Russian Front - Finland: Day 13 of 15 of the Battle of Tali-Ihantala. This becomes the largest battle in Scandinavian history.
Russian Front - Finland: Day 17 of 50 of the Battle of Karelia. Soviet troops continue their offensive against the Finns in eastern Karelia between Lake Ladoga and Lake Onega in northern Russia.
Russian Front - North: Day 157 of 191 of the Battle of the Narva Isthmus, Estonia. Both German and Soviet troops remain locked in their defensive positions.
Russian Front - Center: Day 3 of 16 of the Battle of Vilnius, Lithuania.
Russian Front - Center: Day 3 of 27 of the Battle of Siauliai, Lithuania.
Russian Front - Center: Day 3 of 23 of the Battle of Belostock, Poland.
Hungary: Miklós Horthy suspends the deportation of Hungarian Jews to concentration camps.
MTO: US 15th Air Force bombs targets in Germany and Yugoslavia.
MTO - Italy: Day 22 of 34 of the Battle of Ancona (north of Rome). Allied Air Forces provide air support. The US 5th Army completes its clearing of the Rosignano area.
CBI - Burma: Day 125 of 166 of the UK's Operation THURSDAY. US 10th Air Force provides air support.
CBI - Burma: Day 120 of 147 of the Battle of Myitkyina. US 10th Air Force provides air support.
CBI - Burma: Day 98 of 302 of the Chinese Salween Offensive. US 14th Air Force provides air support.
CBI - China: Day 2,558 of 2,987 of the 2nd Sino-Japanese War. China and Japan enter the eighth year of this, the Double Seven War, which began on 07 Jul 37.
Day 82 of 259 of Japan's Operation ICHI-GO.
Day 16 of 48 of the Battle of Hengyang. US 14th Air Force provides air support.
CBI - Japan: US 20th Air Force B-29s night bomb at Sasebo, Omura and Tobata.
PTO - Caroline Islands: US 7th Air Force night bombs in the Truk Atoll and again during the day.
PTO - Dutch New Guinea: Day 6 of 61 of the Battle of Noemfoor. The island is declared secured; however, bitter fighting from Japanese holdouts will continue until 31 Aug.
PTO - Mariana Islands: Day 23 of 25 of the Battle of Saipan. US 7th Air Force provides air support. Vice-Admiral Nagumo and General Saito both commit suicide as the Japanese position on Saipan deteriorates.
PTO - New Guinea: Day 42 of 83 of the Battle of Biak. There are still 3,000 Japanese soldiers on the island who won't give up.
PTO - New Guinea: Day 206 of 597 of the Battle of New Britain. US 13th Air Force provides air support.
PTO - New Guinea: Day 77 of 481 of the Battle of Western New Guinea. 5th Air Force provides air support.
PTO - New Guinea: Day 24 of 80 of the Battle of Lone Tree Hill.
PTO - Philippines: The USS MINGO attacks a Japanese convoy off Luzon, sinking the destroyer TAMANAMI.
PTO - Solomon Islands: Day 250 of 295 of the Battle of the Bougainville Islands. US 13th Air Force provides air support.
1945 — July 7
CBI: US 14th Air Force disrupts the Japanese withdrawal in French Indochina and in China.
CBI - China: Day 2,923 of 2,987 of the 2nd Sino-Japanese War.
PTO: The USS TREPANG sinks the Japanese freighter KOUN MARU NUMBER TWO.
PTO - Borneo: Day 28 of 67 of the Battle of North Borneo. US 5th and 13th Air Forces provide air support.
PTO - Dutch East Indies: Day 7 of 21 of the 2nd Battle of Balikpapan.
PTO - New Guinea: Day 558 of 597 of the Battle of New Britain. US 10th Air Force provides air support.
PTO - New Guinea: Day 442 of 481 of the Battle of Western New Guinea. US 10th Air Force provides air support.
PTO - Philippines: Day 260 of 299 of the 2nd Battle of the Philippines, aka the Liberation of the Philippines or the Philippines Campaign.
PTO - Philippines: Day 205 of 244 of the Battle of Luzon. The battle is said to be over, but hold-outs will continue fighting until the end of the war.
PTO - Philippines: Day 120 of 159 of the Battle of Mindanao Island. The battle is said to be over, but hold-outs will continue fighting until the end of the war.
PTO - Philippines: Day 112 of 135 of the Battle of the Visayas region. The battle is said to be over, but hold-outs will continue fighting for several weeks.
Day-By-Day listings for July 7 were last modified on Sunday, January 31, 2016
Reducing the size of government.
Streamlining the bureaucracy.
Returning power to the states.
And Equality of opportunity. These are all principles of the Republican Party. Throughout our history, the Republican Party has consistently demonstrated that our party was founded on equality for all. Republicans fought the battle for equality on behalf of our fellow Americans long before it was popular to do so. The Republican Party, since its inception, has been at the forefront of the fight for individuals’ Constitutional rights.
As the party of the open door, while steadfast in our commitment to our ideals, we respect and accept that members of our Party can have deeply held and sometimes differing views. This diversity is a source of strength, not a sign of weakness, and so we welcome into our ranks those who may hold differing positions. We commit to resolve our differences with civility, trust, and mutual respect, and to affirm the common goals and beliefs that unite us.
The Missouri Compromise & Dred Scott
In an effort to preserve the balance of power in Congress between slave and free states, the Missouri Compromise was passed in 1820 admitting Missouri as a slave state and Maine as a free state. Furthermore, with the exception of Missouri, this law prohibited slavery in the Louisiana Territory north of the 36° 30´ latitude line.
In 1854, the Missouri Compromise was repealed by the Kansas-Nebraska Act.
The Supreme Court decision Dred Scott v. Sandford was issued on March 6, 1857. Delivered by Chief Justice Roger Taney, this opinion declared that slaves were not citizens of the United States and could not sue in Federal courts. In addition, this decision declared that the Missouri Compromise was unconstitutional and that Congress did not have the authority to prohibit slavery in the territories.
The Dred Scott decision was overturned by the 13th Amendment (Ratified in December, 1865) and 14th Amendment (Ratified in July of 1868) to the Constitution.
Appeal of the Independent Democrats
[Photo: Salmon P. Chase]
In 1854, Salmon P. Chase, Senator from Ohio, Charles Sumner, Senator from Massachusetts, J. R. Giddings and Edward Wade, Representatives from Ohio, Gerrit Smith, Representative from New York, and Alexander De Witt, Representative from Massachusetts, authored the “Appeal of the Independent Democrats” as a response to Senator Stephen Douglas, who introduced the “Kansas-Nebraska Act” in the same year. This act superseded the Missouri Compromise and has been considered by some to be the “point of no return” on the nation’s path to civil war.
In fact, after the bill (Kansas-Nebraska Act) passed on May 30, 1854, violence erupted in Kansas between pro-slavery and anti-slavery settlers, a prelude to the Civil War.
The act passed Congress, but it failed in its purposes. By the time Kansas was admitted to statehood in 1861 after an internal civil war, southern states had begun to secede from the Union.
The Independent Democrats and many northern Whigs abandoned their affiliations for the new antislavery Republican party, leaving southern Whigs without party links and creating an issue over which the already deeply divided Democrats would split even more.
In the Beginning
In Ripon, Wis., under the leadership of lawyer Alvan E. Bovay, representatives of various political groups took a strong stand against the Kansas-Nebraska Act and suggested the formation of a new party. Other anti-Nebraska meetings in Michigan, New York, and throughout the North that spring also recommended the organization of a new party to protest the bill.
In July of 1854, a convention was held in Madison to organize the new party. The members resolved, “That we accept this issue [freedom or slavery], forced upon us by the slave power, and in the defense of freedom will cooperate and be known as Republicans.” The Wisconsin Republican Party was dominated by former Whigs, yet they played down their backgrounds to concentrate solely on the issue of slavery, the one issue on which they knew all Republicans could agree.
When the 1854 election returns were in, Wisconsin Republicans had captured one of the two U.S. Senate seats, two of the three U.S. House of Representatives seats, a majority of the state assembly seats, and a large number of local offices. The next year, Wisconsin elected a Republican governor.
The first official Republican meeting took place on July 6th, 1854 in Jackson, Michigan.
Today, in the town of Jackson, Michigan, at the corner of Second & Franklin Streets, there is a historical marker that reads:
“On July 6, 1854, a state convention of anti-slavery men was held in Jackson to found a new political party. “Uncle Tom’s Cabin” had been published two years earlier, causing increased resentment against slavery, and the Kansas-Nebraska Act of May, 1854, threatened to make slave states out of previously free territories. Since the convention day was hot and the huge crowd could not be accommodated in the hall, the meeting adjourned to an oak grove on “Morgan’s Forty” on the outskirts of town. Here a state-wide slate of candidates was selected and the Republican Party was born. Winning an overwhelming victory in the elections of 1854, the Republican party went on to dominate national parties throughout the nineteenth century.”
It is estimated that over 1,000 people attended this meeting, far exceeding the capacity of the hall that event organizers had secured. So, as an alternative, the meeting was adjourned to an oak grove on “Morgan’s Forty” on the outskirts of town.
The name “Republican” was chosen because it alluded to equality and reminded individuals of Thomas Jefferson’s Democratic-Republican Party.
So, was the Republican Party born in Wisconsin or Michigan?
Was Wisconsin the birthplace of the Republican Party? Or was it in Michigan? Perhaps it was in Pennsylvania? That debate continues even to this day.
The name was first publicly applied to the movement in a June 1854 editorial by New York editor Horace Greeley, who said:
“It would fitly designate those who had united to restore the Union to its true mission of champion and promulgator of Liberty rather than propagandist of slavery.”
Local meetings were held throughout the North in 1854 and 1855. The first national convention of the new party was not held until February 22, 1856, in Pittsburgh. Whether one accepts Wisconsin’s claim depends largely on what one means by the words “birthplace” and “party.” Modern reference books, while acknowledging the ambiguity, usually cite Ripon as the birthplace of the organized movement to form the party.
If not born in Ripon, the party was at least conceived there.
February 22nd, 1856
In January of 1856, a pro-Republican newspaper in Washington, D.C. published a call for representatives of the various state Republican organizations to meet in convention in Pittsburgh, PA with the dual purposes of establishing a national Republican organization and setting a date for a future nominating convention.
Representatives from northern states, as well as a number from several southern states and western territories, gathered in that city on Washington’s Birthday to set forth their common beliefs and political goals and to lay the groundwork for a national party. The Pittsburgh convention neither made nominations nor established a formal party platform; its representatives were not even formal delegates. Yet the Pittsburgh convention served as the first national gathering of the young Republican Party and successfully established the party as a legitimate national political institution while providing the Republicans with a sense of identity and unity.
This convention became the seminal event in the Republican Party’s formative years through its direct impact upon both the party’s early policies and the subsequent nominating convention in Philadelphia in June of that year.
The first organizational convention for the Republican Party was held in the Lafayette Hall (on the corner of Fourth Avenue and Wood Street) in downtown Pittsburgh.
June 17, 1856, Philadelphia, Pennsylvania – Let’s Get This Party Started
In 1856, the Republicans became a national party under the Republican Party Platform of 1856, nominating John C. Fremont of California for president. At the convention, Abraham Lincoln lost his bid as a vice presidential candidate to William L. Dayton, a former senator from New Jersey.
Fremont was a national hero who had won California from Mexico during the Mexican-American War and had crossed the Rocky Mountains five times.
Fremont, known more as an explorer than for his brief time as a U.S. senator, became the front runner after two major contenders withdrew from the race before the balloting began: Salmon P. Chase of Ohio and William H. Seward of New York. The 600 voting delegates at the convention represented the Northern states and the border slave states of Delaware, Maryland, Virginia, Kentucky and the District of Columbia. The symbolically important Territory of Kansas was treated as a full state.
The Republican platform advocated, like the Democrats’ platform, construction of a transcontinental railroad and welcomed improvements of river and harbor systems. The most compelling issue on the platform, however, was opposition to the expansion of slavery in the free territories and urgency for the admission of Kansas as a free state, calling upon “Congress to prohibit in the Territories those twin relics of barbarism — Polygamy and Slavery.”
The Republicans united under the campaign slogan:
“Free soil, free labor, free speech, free men, Fremont.”
John Fremont was defeated in the Presidential election of 1856 by Democrat James Buchanan.
The Republicans of that time worked to pass…
- 13th Amendment, which outlawed slavery
- 14th Amendment, which guaranteed equal protection under the laws
- 15th Amendment, which helped secure voting rights for African-Americans
Women as Leaders
In 1896, Republicans were the first major party to favor women’s suffrage. The Republican Party played a leading role in securing women the right to vote. When the 19th Amendment was finally added to the Constitution, 26 of the 36 state legislatures that had voted to ratify it were under Republican control.
Jeannette Rankin – Montana
The first woman elected to Congress was a Republican, Jeannette Rankin of Montana, elected in 1916 and seated in 1917.
Jeannette Rankin’s life was filled with extraordinary achievements: she was the first woman elected to Congress, one of the few suffragists elected to Congress, and the only Member of Congress to vote against U.S. participation in both World War I and World War II.
“I may be the first woman member of Congress,” she observed upon her election in 1916. “But I won’t be the last.”
As the first woman Member, Rankin was on the front-lines of the national suffrage fight. During the fall of 1917 she advocated the creation of a Committee on Woman Suffrage, and when it was created she was appointed to it.
When the special committee reported out a constitutional amendment on woman suffrage in January 1918, Rankin opened the very first House Floor debate on this subject. “How shall we answer the challenge, gentlemen?” she asked. “How shall we explain to them the meaning of democracy if the same Congress that voted to make the world safe for democracy refuses to give this small measure of democracy to the women of our country?”
The resolution narrowly passed the House amid the cheers of women in the galleries, but it died in the Senate.
Margaret Chase Smith – Maine
The first woman to serve in both the United States House of Representatives and the United States Senate was Margaret Chase Smith of Maine.
For more than three decades, Margaret Chase Smith served as a role model for women aspiring to national politics. As the first woman to win election to both the U.S. House and the U.S. Senate, Smith cultivated a career as an independent and courageous legislator. Senator Smith bravely denounced McCarthyism at a time when others feared speaking out would ruin their careers. Though she believed firmly that women had a political role to assume, Smith refused to make an issue of her gender in seeking higher office. “If we are to claim and win our rightful place in the sun on an equal basis with men,” she once noted, “then we must not insist upon those privileges and prerogatives identified in the past as exclusively feminine.”
On June 1, 1950, Margaret Chase Smith delivered in the Senate Chamber a “Declaration of Conscience” against McCarthyism, defending every American’s “right to criticize…right to hold unpopular beliefs…right to protest; the right of independent thought.” A Republican senator from Maine, Smith served 24 years in the U.S. Senate beginning in 1949, following more than four terms in the House of Representatives. She was the first woman to serve in both houses of Congress. At a time when it was unusual for women to serve in Congress, Smith chose not to limit herself to “women’s issues,” making her mark in foreign policy and military affairs. She established a reputation as a tough legislator on the Senate Armed Services Committee. In 1964 she also became the first woman to have her name placed in nomination for the presidency at a major party’s convention. When she left the Senate in 1973, Smith retired to her home in Skowhegan, Maine, where she died in 1995, at the age of 97.
Sandra Day O’Connor – United States Supreme Court Justice
Born on March 26, 1930, in El Paso, Texas, Sandra Day O’Connor went on to become the first female justice of the United States Supreme Court in 1981. Long before she would weigh in on some of the nation’s most pressing cases, she spent part of her childhood on her family’s Arizona ranch. O’Connor was adept at riding and assisted with some of the ranch duties. She later wrote about her rough and tumble childhood in her memoir, Lazy B: Growing Up on a Cattle Ranch in the American Southwest, published in 2003.
In 1973 while serving in the Arizona State Senate, Sandra Day O’Connor was chosen by her fellow Senators to become the first female majority leader of any state senate in the United States.
In 1974, she took on a different challenge. O’Connor ran for the position of judge in the Maricopa County Superior Court. As a judge, Sandra Day O’Connor developed a solid reputation for being firm, but just. Outside of the courtroom, she remained involved in Republican politics. In 1979, O’Connor was selected to serve on the state’s court of appeals.
On September 25th, 1981 Justice Sandra Day O’Connor was sworn into office as a Justice of the United States Supreme Court, nominated to the Supreme Court by President Ronald Reagan.
O’Connor blazed a trail into the highest court in the land, winning unanimous support from the Senate as well as then-United States Attorney General William French Smith.
After Judge O’Connor’s appointment, she served from 1981 until 2006 and was the valued “swing” vote on many important legal and social issues that came before the court at that time.
More Republican Achievements
- In 1860, one of the “planks” in the Republican Party Platform called for building the Transcontinental Railroad, and in 1862 the Republican-controlled Congress passed the Pacific Railway Act, authored by Rep. Samuel Curtis (R-IA) and signed into law later the same day by Abraham Lincoln.
- In 1862, the Republican-controlled 37th Congress passed the Land-Grant College Act. The law, written by Representative Justin Morrill (R-VT), distributed federal land to states to fund the establishment of colleges and universities throughout the country.
- In 1863, the statue atop the U.S. Capitol was hoisted into place. Among the onlookers was the African-American who made it, Philip Reid. Mr. Reid had been a slave until freed by the Republican Party’s DC Emancipation Act the year before.
- In 1863, Romualdo Pacheco was elected state treasurer of California, and then to the state legislature. In 1871, he was elected Lt. Governor. Four years later, the incumbent governor was elected to the U.S. Senate, making Pacheco the 12th Governor of California and the first Hispanic Governor in U.S. History.
- In 1867, with the purpose of establishing an institution of higher learning for emancipated slaves and other African-Americans, Senator Samuel Pomeroy (R-KS) and Representative Burton Cook (R-IL) wrote the charter for Howard University, in Washington, D.C.
- In 1870, Hiram Revels, born a free man, and a former military chaplain, began his political career as a Republican, on the Natchez City Council. He then won a seat in the state senate. When the state was re-admitted to the Union in 1870, the legislature elected Revels to the U.S. Senate.
- In 1871, the Republican-controlled 42nd Congress passed a Civil Rights Act aimed at the Ku Klux Klan. Guilty of murdering hundreds of African-Americans, this terrorist organization had also eradicated the Republican Party throughout most of the South. The law empowered the Republican administration of Ulysses Grant to protect the civil rights of the former slaves in federal court, bypassing the Democrat-controlled state courts.
- The 1871 Civil Rights Act, along with the GOP’s 1870 Civil Rights Act, effectively banned the Klan and enabled Republican officials to arrest hundreds of Klansmen. Though the U.S. Supreme Court would eventually strike down most of the 1871 Civil Rights Act, the Ku Klux Klan was crushed. The KKK did not rise again until the Democratic administration of President Woodrow Wilson.
- In 1887, Susanna Salter (R-KS), daughter-in-law of a former Lt. Governor, was elected mayor of Argonia, a Kansas town of some 500 people. Support from the local Republican Party was key to her victory. The first woman to serve as mayor, Salter became a national celebrity. On March 2, 1960, President Dwight Eisenhower honored her with a proclamation celebrating her 100th birthday.
- In 1906, President Theodore Roosevelt (R-NY) nominated Oscar Straus for Secretary of Commerce and Labor. The German-born Straus would be the first Jewish person to serve as a Cabinet Secretary. While in office, he strongly denounced Democrats’ attempts to incite class hatred.
- In 1924, Republican President Calvin Coolidge signed the Indian Citizenship Act, granting citizenship to all Native Americans. The law had been written by Rep. Homer Snyder (R-NY), who had been a delegate to the 1916 and 1920 Republican National Conventions. It was passed by the Republican-controlled 68th Congress.
- In 1940, the Republican National Convention approved a plank in its platform calling for racial integration of the armed forces: “Discrimination in the civil service, the army, navy, and all other branches of the Government must cease.”
- For the next eight years, Democratic presidents Franklin Delano Roosevelt and Harry Truman refused to integrate. Not until 1948 did President Truman finally comply with the Republicans’ demands for racial justice in the U.S. military.
- In 1954, the U.S. Supreme Court ruled that racial segregation in public schools is unconstitutional. The author of Brown v. Board of Education was a Republican, Chief Justice Earl Warren.
Members of the Republican Party also:
- Established the Federal Highway System under President Dwight D. Eisenhower.
- Passed the Civil Rights Act in 1957, establishing the Civil Rights Division within the Department of Justice, over the Democrats' attempt to filibuster the passage of this legislation.
- Republican President Eisenhower ordered soldiers of the 101st Airborne Division to enforce school integration in Little Rock, Arkansas.
- Elbert Tuttle, an Eisenhower appointee to the federal bench and former Georgia Republican Party leader, recognized that Brown v. Board of Education was a “broad mandate for racial justice” and ruled in 1962 that the University of Mississippi must admit its first African-American student.
- Elected the first Asian-American Senator
- Elected the first Hispanic U.S. Senator
- Passed the Indian Citizenship Act
- Proposed and established Yellowstone National Park
The Republican Party Creed
As Republicans, we believe:
- That the free enterprise system is the most productive supplier of human needs and economic justice,
- That all individuals are entitled to equal rights, justice, and opportunities and should assume their responsibilities as citizens in a free society,
- That fiscal responsibility and budgetary restraints must be exercised at all levels of government,
- That the Federal Government must preserve individual liberty by observing Constitutional limitations,
- That peace is best preserved through a strong national defense,
- That faith in God, as recognized by our Founding Fathers, is essential to the moral fiber of the Nation.
Many thanks to the Republican Party of Virginia Beach for sharing content from their website.
Please note: This older article by our former faculty member remains available on our site for archival purposes. Some information contained in it may be outdated.
Siding isn't weather-proof. A second line of defense is a critical component in smart weather-protecting wall designs.
by Paul Fisette 2001
The shell of a house serves as the first line of defense between the occupants and the outdoor environment. Walls function as a weather barrier, nail base for finish materials and an energy conserving boundary. A sensible wall system is durable. And this requires all components in a wall assembly to be compatible for the long haul. Siding, siding finishes, housewraps, insulation and wall frames must work together while achieving distinctive goals. So it is in this light that we should view a primary, but often overlooked, component in residential wall systems: weather-resisting wall wraps.
Wood, brick, masonry, vinyl, and other sidings do not function as barriers to driving rain. Siding is porous. There are a multitude of joints, laps, and connections making it discontinuous. Water and air are driven through these leakage points by wind, gravity and capillary forces. Also, we generally use water-sensitive materials for siding and structural elements. Leaking water rots wood, grows mold, corrodes steel and lowers insulating R-values. Another concern is that leaking air strips heat from homes and dollars from energy budgets. So air-tight construction is desirable.
Force of Nature
Most of us live in climates influenced by rain and wind. During a storm, a thin film of water clings to windward surfaces. Porous materials, like unfinished shingles, stained wood clapboards, and masonry veneers soak up water. Non-porous materials like freshly painted wood, aluminum and vinyl don't. But the film of water sticks to all siding products. As the wind's speed and direction shifts, water moves up, down and sideways under the influence of air pressure. It moves from areas of high pressure to areas of low pressure. The area directly behind a wind-blown wall surface is at a lower pressure than its exterior face. This pressure difference works to suck the water inward through any hole it finds. I've stripped problem walls immediately after heavy rain to monitor rain intrusion and establish moisture profiles. It is perfectly clear that butt-joints, seams, holes, and siding overlaps are siphon points driven by air pressure, gravity and capillary suction. If there is no building paper, water will get wicked up into the wood sheathing where it often causes structural problems.
Many carpenters make the mistake of thinking that siding (wood, brick, vinyl, stucco) is an impenetrable barrier against the elements. The truth is, whether water is propelled by wind, capillary attraction, gravity, or some combination of these forces, sooner or later it finds its way behind, around or through the siding. Your local code may not require you to use felt or housewrap, but unless you live in an extremely arid climate you need to use it. Typically, building paper is installed as soon as the sheathing is installed. But to be effective, it must be integrated with the flashing that follows in later stages of the job. This means, for example, having to slit the housewrap above windows to tuck under the upper leg of a metal cap flashing, then taping the wrap to the flashing. And the wrap itself must be properly layered, overlapped and taped where necessary to provide a clear drainage path (see Watertight Walls article).
The Problem with Caulked Joints
I think the majority of builders and siding manufacturers believe that caulking around windows, between the end-joints of siding and along corner boards constitutes the development of an impenetrable rain barrier. Building codes even prescribe caulked siding as an acceptable weather protection system. The argument goes: If you caulk every joint, hole and seam in the siding, how can it leak? I am not a fan of this approach, especially if caulked joints involve wood or wood-based products. The best silicone sealants boast elongation rates up to 75%. Lesser caulks, like some acrylics, move a paltry 15%! This means that when a 1/8-inch wide caulked joint, made with the highest-grade sealant, moves 3/32-inch it fails!
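The arithmetic is easy to check. A minimal sketch, assuming only that a sealant tolerates movement up to its rated elongation times the original joint width:

```python
# Rough check of the joint-movement arithmetic above.
# Assumption: a sealant rated for X% elongation tolerates a gap change
# of X% of the original joint width before it fails.

def allowable_movement(joint_width_in, elongation_pct):
    """Movement (inches) a caulked joint can absorb before the sealant fails."""
    return joint_width_in * elongation_pct / 100.0

joint = 1.0 / 8.0  # the 1/8-inch-wide joint from the example above

print(allowable_movement(joint, 75))  # 0.09375 in = 3/32 in (top-grade silicone)
print(allowable_movement(joint, 15))  # 0.01875 in, well under 1/32 in (lesser acrylic)
```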
There are two things wrong with the sealed-face approach: First, dynamic joints, like siding joints, move dramatically as a result of moisture and thermal loading. For example, a 6-inch wooden corner board will shrink and swell 1/4-inch when exposed to normal weather conditions. By the way, vinyl moves too. Nail slots in vinyl siding are elongated for a reason: to allow for nail slippage as the vinyl siding expands and contracts (thermally). Secondly, even if a joint doesn't move enough to make the caulk itself fail, in time, repetitive movement and prolonged exposure cause failure at the bonded connection. Look closely at caulked joints that have been in service for several years and you will see hairline cracks where the caulk once bonded securely to wood, masonry and vinyl components. A hairline crack is large enough to admit pressurized water, but not large enough to encourage drying. In the short term caulking can help block water penetration. In the long run it actually traps moisture behind the siding. Can an effective sealed-face barrier system be constructed? Yes, but it is too risky and requires vigilant and costly maintenance.
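The corner-board figure can be estimated the same way. The sketch below assumes a generic flat-sawn shrinkage coefficient of about 0.25% of width per 1% change in moisture content and a seasonal moisture swing of roughly 16 points; both are illustrative handbook-style assumptions, not measured values:

```python
# Hypothetical estimate of seasonal movement in a 6-inch wood corner board.
# Both coefficients below are assumed, illustrative values.

width_in = 6.0              # face width of the corner board
shrink_per_pct_mc = 0.0025  # ~0.25% of width per 1% moisture-content change (assumed)
mc_swing_pct = 16.0         # seasonal moisture-content swing, percentage points (assumed)

movement_in = width_in * shrink_per_pct_mc * mc_swing_pct
print(round(movement_in, 2))  # 0.24 in, i.e. about 1/4 inch

# A 1/8-inch joint caulked with a 75%-elongation sealant tolerates only
# ~0.09 inch of movement, so a caulk joint bridging this board will fail.
```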
Barrier Design
There are basically 3 types of weather-barrier systems: the sealed-face method; the vented rain-screen approach; and the redundant-barrier system. The sealed-face method is simply not effective. The vented rain-screen approach is clearly the Mother of all weather-barrier systems. However, the redundant-barrier approach works well and is the most cost effective option.
The vented rain-screen is a system where lengths of strapping are fastened to housewrap-protected wall sheathing. Siding is attached to the strapping leaving an air space between the back of the siding and the face of the sheathing. This design does 2 very important things: First, the air pressure on the outside of the siding and the air pressure in the space created behind the siding are similar (if the siding is leaky to air). Therefore, rainwater is not sucked through the penetrations in the siding. No driving force! The second strength of this system is that the air space behind the siding promotes rapid drying if any water does get past the siding.
Constructing a rain screen is somewhat costly and labor intensive. Installation is unconventional, so it requires rethinking of some details. Window and door trim must be padded out. Flashing should be extended back to the sheathing beyond the air space and under the housewrap. Door hinges may need to be extended, so doors can be fully opened. Roof overhangs at gable ends must be extended to cover thicker wall sections. The bottom of the air space must be covered with screening to prevent critters from entering the vent chamber. These and other accommodations are certainly doable, but involve more labor and materials than typical construction. In my opinion, rain screens are required fare for wet, wind-blown areas like the Pacific Northwest, exposed coastal environments and hilltop exposures. But, this approach is not required or cost-effective for most climates and construction budgets.
The redundant-barrier approach works well for the vast majority of homes built today. And this system has the advantage of being familiar to builders. Basically, putting tar paper or approved housewrap on the exterior walls before siding is installed is the first step to building an effective redundant-barrier system. Proper installation is required to make this system work. You must design a drainage plane that keeps water out! When water penetrates the siding, it must have a clear path to follow downward. Water must remain outside of the protective wrap. Be sure that tops of windows, doors and penetrations are flashed properly (see Making Walls Watertight). All water must be directed outward. Also, we must choose materials that are capable of providing the protection we expect and need. The barrier should be resistant to liquid water and air infiltration, while being permeable to water vapor.
It should be noted that the redundant barrier approach works reasonably well with sidings that overlap like clapboards, lap siding and vinyl siding. These siding applications leave small air spaces between the sheathing wrap and siding. This provides a minimal drainage plane and promotes some drying. However, panel siding, T1-11, and board siding lie flat against the sheathing wrap and do not provide any drainage or drying space. Water that gets past the siding can remain trapped between the siding and wrap for longer periods of time, raising the potential for moisture problems.
Building Code Requirements
I am very respectful of building code development and the enforcement process, but I don't think building codes provide clear direction in this case. Basically, all Model codes agree on the need for a weather-resistant barrier paper (usually specified as #15 felt or Grade D Kraft paper) behind stucco, brick, stone and other porous veneers. The paper requirement is typically omitted for other types of siding when they're installed over rated structural sheathing. Alone among the codes, BOCA, in its 1998 supplement, requires a layer of #15 felt over the sheathing regardless of the siding type. BOCA has also beefed up its flashing requirements, spelling out nine areas needing flashing, and getting rid of an earlier exception for leakproof caulking (apparently in recognition that no caulking is leakproof for long). (See BOCA 1405.3.6 and 1405.3.10.)
Though 15-pound felt is usually cited, all the codes allow for the substitution of equivalent materials, opening the door for plastic housewraps. To qualify as an equal, the housewrap must pass performance tests conducted by an independent lab and paid for by the manufacturer. The manufacturer submits the test data to the evaluation services of the various code bodies, which issue reports describing the material's properties and stating which code performance requirements it meets. Assuming it meets the right criteria, the housewrap can then be used instead of the felt or building paper specified in the code.
Be careful: As in most code matters, it's up to your local inspector to approve an equivalent material. Chances are, given the wide use and acceptance of housewrap, you won't have a problem. But if it's an unfamiliar brand, the inspector may ask you to provide the evaluation service report for the product.
So far, we've just been talking about the structural codes, all of which reference the Model Energy Code. Under the MEC you either have to use caulk, tape, and gaskets to seal up seams and penetrations in the building shell against air infiltration, or, the easier route, you can install a vapor-permeable housewrap. If you live in a state or locale that has adopted and enforces MEC, this may be the reason you use a housewrap. Felt will also meet the criteria, since its perm rating is typically around 5 in the dry state.
Making Sense of Housewrap Specifications
Testing
ASTM (the American Society of Testing & Materials) has recently convened a task force on weather-resistive barriers (asphalt-treated kraft paper, asphalt-saturated organic felt, and housewrap) in an effort to bring some consistency to the performance criteria by which these products are measured. A recent memo from the chairman of the group states that the three materials, any of which may meet the code criteria for building paper or weather-resistive barrier, are described by different standards and that there is no way to compare materials by a common set of criteria. The memo goes on to list no less than 24 test standards that manufacturers may pick and choose from to gain code approval for their products.
Apples to Oranges
A basic problem is that even if two manufacturers use the same test, the results can't be compared because the tests are often set up differently. For example, ASTM E 283, commonly used to test resistance to air infiltration, requires that the weather barrier be stretched over an 8x8-foot wall frame. However, the manufacturer can instruct the testing lab to put the wrap over anything from an open-stud wall to a fully-sheathed, sided, insulated, and drywalled frame. Plywood can be oriented horizontally, so the seams fall between studs, or vertically, so the seams fall over the studs. To make a comparison, you would have to buy a copy of the code report for each product. Unless the test assemblies were exactly the same, a comparison of the specs would be meaningless.
There are many test procedures that can be used to qualify wall wraps as water resistant, but ASTM D 779, commonly called the boat test, is recognized as the industry standard. In this test, a small sample of wall wrap is folded like a piece of origami and floated on water in a petri dish. A powdered substance, called an indicator, is sprinkled on top of the wrap in a fine-layered, 1-inch circle. As water soaks up through the wrap, the indicator begins to change color. When an observer determines that the indicator is changing color at the fastest rate (a sign that water is passing through the wrap at the most rapid rate), the test is over and the elapsed time is noted. To qualify as a Grade D wrap, it must take at least 10 minutes for the color to change at its fastest rate. If a wall wrap claims a rating of 60, that means it took 60 minutes.
A problem with the boat test is that water vapor can also trigger the indicator's change of color, meaning that a highly vapor-permeable wrap like Tyvek fails. As an alternative, DuPont put Tyvek through AATCC 127, the hydro-head test, to prove its water resistance. In this test, the material is subjected to a 22-inch column of water (the same force exerted by a 200-mph wind) and must not leak a drop for 5 hours. This is a far more demanding test for water resistance than the boat test, yet as far as I know, among the plastic wraps, only Tyvek and R-Wrap have passed. Some researchers claim that felt has also passed, though inconsistently.
How Much Is Enough?
Here again, product literature can be misleading. Some manufacturers may list hydro-head test values like 186 cm. This is the height that the water column reached before the material began to leak.
One tested value that actually can be compared between brands of housewrap is vapor permeance, which is usually tested according to ASTM E 96, with the results expressed in perms. The higher the value, the more permeable the material. (A material with a perm rating of 1 or less is considered a vapor barrier.) Unfortunately, the wide spread in perm ratings among brands (from 5 perms to over 200 perms) makes it a little difficult to assess the importance of this number. The codes require wall wraps to match or exceed Grade D building paper, which has a minimum perm value of 5.
To complicate things, the permeance of felt paper is a moving target. Felt paper absorbs water and ranges from a low of around 5 perms when it's dry to over 60 perms when it's exposed to relative humidity above 95%. The perm values of engineered wall wraps, however, are moisture-stable. Although high permeance is generally desirable in a wrap, excessively high ratings are not as important as resistance to air and water.
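For anyone comparing perm ratings against metric data sheets, the conversion can be derived from the definition of the unit (1 US perm = 1 grain of water vapor per hour per square foot per inch of mercury). A minimal sketch:

```python
# Convert a US perm rating to SI units, ng/(s·m²·Pa), from the definition
# 1 perm = 1 grain of water vapor per hour per square foot per inch of mercury.

GRAIN_NG = 64_798_910.0   # 1 grain = 64.79891 mg, expressed in nanograms
HOUR_S   = 3600.0
FT2_M2   = 0.092903
INHG_PA  = 3386.39

PERM_TO_SI = GRAIN_NG / (HOUR_S * FT2_M2 * INHG_PA)

def perm_to_si(perms):
    return perms * PERM_TO_SI

print(round(PERM_TO_SI, 1))  # ~57.2 ng/(s·m²·Pa) per perm
print(round(perm_to_si(5)))  # Grade D minimum of 5 perms -> ~286
print(round(perm_to_si(1)))  # the 1-perm vapor-barrier threshold -> ~57
```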
The Products
There is no shortage of housewrap products. The last time I counted there were at least 14 brands. The knee-jerk reaction is to think that all products work the same: wrap the house; apply the siding; and you're warm and dry. Plastic housewraps are engineered materials. They are designed to prevent air infiltration and keep out liquid water, while allowing water vapor to escape from inside of the home. That's a tall order. Felt paper and all of the plastic housewraps display these properties to one degree or another. The difficulty comes in distinguishing between them. The question is: how well do these materials work? And if you choose to use a housewrap, does it matter which brand?
With all of the code test data available, you'd think it would be easy to evaluate performance and compare one product to another. Unfortunately, there is no consistency in the testing procedure or in how the results are reported, so comparisons are difficult or meaningless. As an alternative, my students and I recently decided to do some testing of our own in the lab at UMass.
Lab Bench
My current work at the University of Massachusetts includes laboratory study and field investigation of construction problems. I receive hundreds of questions regarding the performance of building materials each year. Many questions are related to siding performance and moisture intrusion. Most water intrusion problems I see are clearly related to the improper installation of materials. Usually, flashing details around doors, windows and penetrations are to blame. But I was roused by my field work to test some of the more popular housewrap brands and see how they performed when exposed to a few basic laboratory conditions.
Products tested:
- Amowrap (Tenneco Building Products): woven polypropylene with a perforated coating
- Barricade (Simplex Products Division): woven polyethylene with a perforated coating
- Pinkwrap: woven polypropylene with a perforated coating
- R-Wrap (Simplex Products Division): porous polyethylene film laminated to scrim
- Typar: spun-bonded polypropylene with a perforated coating
- Tyvek (E.I. DuPont de Nemours & Co.)
- 15 Pound Felt
Our goal was not to establish quantifiable data that predicted real-world performance. But we did want to explore the character or tendencies of these wraps when exposed to clean water, soapy water, and cedar-extractive-rich water. We subjected each wrap to a 3-1/2 inch hydro head instead of the 22-inch head used in the AATCC 127 test. A 3-1/2 inch head delivers a force to the wrap that is roughly equivalent to a 70 mph wind. We recorded the loss of water over a 2-hour period for each test we performed. Wind pressure and hydro-head conditions are certainly 2 different things, but we felt this was a reasonable level of stress to apply since wind commonly exerts a similar force on rain-covered walls.
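As a rough, back-of-the-envelope way to see why a water column is a reasonable wind-load proxy, the sketch below converts column height to pressure and compares it with the simple stagnation pressure of a steady wind. It ignores gust factors and pressure coefficients, so the equivalences are order-of-magnitude only:

```python
RHO_AIR = 1.2            # kg/m^3, assumed sea-level air density
PA_PER_IN_WATER = 249.1  # pressure of a 1-inch water column, in pascals
MPH_TO_MS = 0.44704

def water_column_pa(inches):
    """Pressure exerted by a water column of the given height, in Pa."""
    return inches * PA_PER_IN_WATER

def wind_stagnation_pa(mph):
    """Simple stagnation pressure q = 1/2 * rho * v^2 for a steady wind, in Pa."""
    v = mph * MPH_TO_MS
    return 0.5 * RHO_AIR * v * v

print(round(water_column_pa(3.5)))     # ~872 Pa  (the 3-1/2 inch lab head)
print(round(wind_stagnation_pa(70)))   # ~588 Pa  (steady 70 mph wind)
print(round(water_column_pa(22)))      # ~5480 Pa (the 22-inch AATCC 127 head)
print(round(wind_stagnation_pa(200)))  # ~4796 Pa (steady 200 mph wind)
```

The column pressures come out somewhat higher than the bare stagnation pressures, which is consistent with calling the equivalence rough rather than exact.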
Our test results showed that after a series of 2-hour test runs, clean water never leaked through Tyvek or R-Wrap; 15-pound felt lost 30% of its water on average; and all other products drained completely. It was especially noteworthy that the perforated wraps (Amowrap, Pinkwrap and Barricade) lost more than 80% of the water in the first 15 minutes. The performance of Felt and Typar was highly variable. Typar and Felt often held water for 30 minutes or more before leaking.
There was speculation that surfactants (soaps) could make housewraps more water permeable. And we found this to be true. Surfactants, which break down the surface tension of water, making it flow more easily, are present in soaps and oils that can be found on the surface of construction materials and hands of installers. This may be significant since people regularly powerwash their homes, perhaps making them more likely to leak. Also, cedar and other wood sidings contain water soluble extractives that are thought to act as surfactants. Paints and stucco have surfactants as part of their formulation too. So surfactants seemed like an interesting thing to investigate.
We ran a series of hydro tests using soapy water and then another series using a cedar-extractive solution. We limited our tests to Tyvek, R-Wrap and Felt, since these were the winners of the first round of clean-water tests. Tyvek and R-Wrap lost about 10% of the soapy water column in 2 hours. Felt seemed unaffected by soap, still losing 30% of its water. Tyvek and R-Wrap lost about 3% of the cedar-extractive mix in 2 hours, while Felt again lost 30%. It does appear that soaps and extractives do have at least some effect on the water resistance of housewraps.
NOTE: Typar introduced a new non-perforated housewrap in 2003. We tested this new version in our lab during the spring semester of 2003 using the same tests described above. We found that the new Typar performed as well as Tyvek and R-Wrap in the hydro-head testing. In fact, it demonstrated superior resistance to surfactants when compared with the performance of Tyvek.
Housewrap or Felt?
Based on our testing, if I were buying a housewrap today, I would choose either Tyvek or R-Wrap, because they display the best water resistance. But so far, I've avoided the million-dollar question: housewrap or felt? The truth is, there's no million-dollar answer. In general, I don't think it matters a whole lot. If you get the flashing details right, and are careful installing the building paper, you will prevent 99% of the moisture problems caused by wind-driven rain and snow. Either product, housewrap or felt, will provide an adequate secondary drainage plane. And either product is permeable enough to allow interior moisture to escape.
As it happens, I have felt paper on my own home, and if I could choose between felt and housewrap and do it over again, I'd still choose felt. That's because I believe that under certain circumstances, felt outperforms housewrap. For example, an ice dam or roof leak may allow liquid water to get behind the felt or housewrap. It's also possible for the sun's heat to drive water vapor through the housewrap from the outside, where it can condense on the sheathing. In either of these cases, you now have liquid water on the wrong side of the wrap. Under these conditions, the liquid water would be trapped by the housewrap, which is permeable only to water vapor. Felt, on the other hand, will absorb the water, and more quickly dry to the outside.
End Notes
Despite your best efforts, some water will make it through the siding, so you ought to plan for it. If you choose the right housewrap and install it correctly, you should have dry wall cavities. One associated issue that deserves special mention is the installation of wood siding over housewraps.
Wood is an absorbent material. It stores water. Since rain is sucked through butt-joints, seams and even upward past overlapping edges, it has access to the back surface. We usually paint the face of siding to reduce water absorption. But many builders leave the backside raw. You don't want to store water in a place that has direct contact with vapor permeable housewraps. The sun's heat can turn the stored liquid water into vapor. The vapor moves inward when the temperature of the siding face is warmer than the air behind the siding. And since housewraps are vapor permeable, they can allow vapor to pass into the building envelope from the outside. As the sun sets or moves to another side of the house, the temperature of the wall may drop below the dewpoint temperature, changing the vapor back to liquid. And guess what? The reconstituted liquid is on the wrong side of a water-resistant barrier! This set of conditions is suspected to have caused wet sheathing in several unusual cases.
In short: Backprime wood siding so it doesn't absorb water and bleed extractive juice onto potentially sensitive housewraps! The best advice is to pre-treat all sides of the wood with a coating of clear water repellent preservative. Water repellents block liquid water much better than paint. And they allow vapor to pass out of the wood if any water happens to get sucked into the siding through splits and cracks. Very forgiving! After the water repellent has dried, install the siding, prime and apply 2 top coats of 100% acrylic latex paint. Don't forget to treat the ends, edges and backs of wood siding.
always use housewrap (even with a vented rain screen)
determine if climate requires a vented rain screen or redundant barrier system.
for redundant barriers I would choose Tyvek, R-Wrap or 15-pound felt
tape all seams in barrier
protect all flashings with overlapping wrap
avoid use of caulking, concentrate on developing an effective drainage plane.
protect all penetrations with appropriate detailing
prime all surfaces of wood siding (back-priming) before applying top coats
For results of our follow-up study involving capillary suction through housewraps see Leaky Housewraps.