Published Date: February 25, 2014

University Professor Louise Cowan's contributions to Southern education were recognized earlier this month when Sewanee: The University of the South bestowed an honorary Doctor of Letters degree upon the longtime literature teacher. Cowan delivered the convocation address for the university's Easter semester on what it means today to be a university "of the South." "What comes to mind when we hear the word is a leisurely sense of life, and an emphasis on texture – on various ways of doing things that have authority through long devotion and care," said Cowan. In a talk that quoted poets Donald Davidson, Allen Tate, John Crowe Ransom and T.S. Eliot, Cowan argued that the South's view of poetry as "life heightened and made memorable" is central to the region's identity. She concluded with the thought that the South—with its eloquence, its humor, and its belief in the dignity of the individual—still has much to contribute to modern statesmanship. The entirety of Cowan's address is available here.

A noted author and education pioneer who has continued to teach into her 90s, Dr. Cowan is best known for her lectures and her influence on students. She received the Charles Frankel Prize, the nation's highest award for achievement in the humanities (later renamed the National Humanities Medal), from President George H.W. Bush, and she is one of two women on a list of the 20 most brilliant living Christian professors. Her interest in Southern literature and culture is a thread running throughout her work. In addition to her wide-ranging articles on classic literature from Aeschylus to Shakespeare to Toni Morrison, she has written extensively on Faulkner, Caroline Gordon, and the Fugitive Group of writers based at Vanderbilt in the 1920s who changed the path of American poetry and criticism.

PHOTO: Sewanee: The University of the South
19 August 2010 Portland Zine Symposium Well, the monkeys have been busy slaving away at their typewriters and so far they've come up with this little gem of literature: "She should have died hereafter; There would have been a time for such a word. To-morrow, and to-morrow, and to-morrow: It was the best of times, it was the worst of times, it was the age of wisdom, it was the age of foolishness, it was the epoch of belief, it was the epoch of incredulity, it was the season of Light, it was the season of Darkness, it was the spring of hope, it was the winter of despair, we had everything before us, we had nothing before us, we were all going direct to Heaven, we were all going direct the other way-- Gimme a fuckin' banana (and some cigarets)-- Bubbles mad! Bite face! Throw shit! Bubbles not typing this fuckin' machine anymore! Well, we're just gonna have to shoot him off into space and be done with it. He can't use that kind of fuckin' language! There might be children present. In other news, we're gearing up for the Portland Zine Symposium on Aug. 28th & 29th. We'll be there with a whole spate of new zines and books, including this little gem.
Health Planning/Policy Development

Compensation for Second Language - This legislative act, passed in 2001, allows a 10 percent pay increase for any employee whose specific job assignment requires the skill to communicate in a language other than English, including American Sign Language, where that skill is required as a secondary minimum qualification by the classification specification for the position occupied by the employee. OMHHD assisted in developing policy for the Department of Health to guide managers in applying this legislation.

Limited English Proficiency (LEP) - The LEP Program is based directly on federal guidance, which states: Individuals who do not speak English as their primary language and who have a limited ability to read, speak, write or understand English can be Limited English Proficient, or “LEP.” These individuals may be entitled to language assistance with respect to a particular type of service, benefit or encounter.

Policies - Providing meaningful access to LEP persons will ensure that ADH and LEP beneficiaries can communicate effectively and act appropriately based on that communication. Therefore ADH should take reasonable steps to:
- Ensure that LEP persons are given appropriate and adequate information,
- Ensure that LEP persons are able to understand that information, and
- Ensure that LEP persons are able to participate effectively in ADH programs and/or activities, where appropriate.

To assist in this process, OMHHD has provided each Local Health Unit with two sets of “I Speak” cards. These identification cards allow LEP beneficiaries to identify their language needs to staff, and staff to identify the language needs of clients; each card invites the LEP person to identify the language he/she speaks. The policy became effective Oct. 23, 2007.

Arkansas Minority Barber & Beauty Shop Health Initiative

Hypertension, if left unchecked, can lead to heart disease and stroke. One of the primary goals of the Arkansas Department of Health is to lower the risk of heart disease and stroke by promoting better management of hypertension.

Mission and Goal: The mission of the Arkansas Minority Barber & Beauty Shop Health Initiative is to increase public awareness about heart disease and stroke. The goal is to empower minorities to better understand hypertension prevention and management. The initiative will carry out its mission by incorporating Million Hearts™, a national effort to coordinate and enhance cardiovascular disease prevention activities in order to prevent 1 million heart attacks and strokes by 2017. Million Hearts promotes the "ABCS" of clinical prevention (Appropriate aspirin therapy, Blood pressure control, Cholesterol management, and Smoking cessation) as well as healthier lifestyles.

The primary objectives of the Arkansas Minority Barber & Beauty Shop Health Initiative are three-fold:
- Screen: Provide diabetes, hypertension, and cholesterol screening.
- Educate: Teach communities about the importance of proper diet and physical exercise, and about recognizing the signs and symptoms of chronic diseases.
- Refer: Refer high-risk individuals identified through screening to local health units.

- Almost half of all Arkansas adults have high blood pressure (2007 ARCHES).
- Heart disease and stroke are the first and fourth leading causes of death in Arkansas.
- Arkansas has the highest stroke death rate and the 6th highest cardiovascular disease death rate in the nation.
- In 2010, 9,015 deaths occurred due to heart disease and stroke in Arkansas; almost 2,000 of those deaths were among adults younger than age 65 (CDC Wonder).

Heart Disease and Stroke by Race:
- African Americans are the most affected by heart disease and stroke, contributing to their lower life expectancy.
- In 2010 in Arkansas, the heart disease and stroke death rate among African Americans was 22% higher than among Whites and 71% higher than among Latinos (CDC Wonder).
- As of 2007, in the U.S., African American adults of both genders were 40% more likely to have high blood pressure and 10% less likely than their white counterparts to have their blood pressure under control.

High Blood Pressure and Age:
- Hypertension diagnosis occurs at earlier ages for African Americans compared to their white counterparts.
- Approximately 75% of African Americans were diagnosed with hypertension at < 45 years of age, compared to 58% of their white counterparts (2009 CDC BRFSS WEAT).

Heart Disease and Gender:
- Heart disease is the leading cause of death for American women, killing nearly 422,000 each year in the U.S. In Arkansas, 3,448 (47%) women died of heart disease in 2010 (CDC Wonder). Following a heart attack, approximately 1 in 4 women will die within the first year, compared to 1 in 5 men.

Tobacco Use in Arkansas:
- Smoking alone kills more people each year than alcohol, AIDS, car crashes, illegal drugs, murders, and suicides combined. For every person in Arkansas who dies from smoking, approximately 20 more residents suffer from serious smoking-related disease. Compared to non-smokers, smokers are estimated to have a 2-4 times higher risk of coronary heart disease and a 2-4 times higher risk of stroke (CDC Tobacco Data-Statistics). For these reasons it is critical for the Tobacco Prevention & Cessation Program to join with the Arkansas Minority Barber & Beauty Shop Health Initiative.

The Arkansas Department of Health, community partners and numerous volunteers will host the Arkansas Minority Barber & Beauty Shop Health Initiative on Saturday, July 27, 2013, from 9 a.m. to 2 p.m. at locally owned minority barber shops, beauty shops, colleges and salons in Pulaski County. These hair establishments represent cultural institutions of familiarity and trust, so the initiative will provide health information traditionally offered in a clinical setting in an environment that is more easily accessible. The initiative will use Million Hearts to address hypertension and cardiovascular disease in African American and Latino populations. During the event, FREE blood pressure, blood glucose, body mass index and cholesterol screenings, along with tobacco cessation information, will be provided to salon patrons and employees. General health literature and educational materials on chronic disease, tobacco prevention, physical activity and nutrition will also be provided.

Participating locations include:
- Moore Than Enough, 9862 Hwy 107
- Lois & Ray’s Salon, 10301 No. Rodney Parham, Ste. C15
- New Image Salon, Spa & Barber, 4501 JFK Blvd., North Little Rock
- New Tyler Barber College, 1221 Bishop Lindsey Ave., North Little Rock
- Velvatex College of Beauty Culture, 1520 MLK Dr.
- Dazzling Creations Salon, 4310 John Barrow Rd.
- Salón de Belleza Patricia, 5319 W 65th St.
- Panache’ Beauty & Barber Salon, 2525 Willow St., North Little Rock
- Washington Barber College, 5300 W 65th St.
- 1200 John Barrow Rd., Ste. 302
Community Sponsors and Partners include the Arkansas Minority Health Commission; the Arkansas Medical, Dental and Pharmaceutical Association; the Arkansas State Board of Barber Examiners; the Arkansas Foundation for Medical Care; Baptist Health; and Hola! Media Group.

Technical Assistance and Training

Cultural Competency Training - OMHHD developed a cultural competency training curriculum in conjunction with the University of Arkansas Medical Sciences Campus University Affiliate Program. The curriculum consists of three modules: two address diversity within the individual, and the third addresses problem-solving with an emphasis on poverty issues. The training is now incorporated into the orientation of all new employees.

Data Profile Book - The Arkansas Minority Health data profile report on people of color, covering 1993-1997, provides information concerning the health status of minority populations in Arkansas. Such information is needed to address the health needs and concerns of Arkansas residents at the local, state and regional levels. The report contains data for many indicators used in other studies; however, the indicators presented are by no means exhaustive. The report is designed to make these indicators easily accessible while allowing users flexibility in their selection. To accomplish this, the frequencies of particular events or conditions, along with rates or percentages, are presented in concise tables and graphs. The Healthy People 2010 report will replace this profile booklet and will be available for distribution in April 2008.

Navigational Resource Guide - This guide was developed to assist the Department in providing services to evacuees of Hurricane Katrina and others requesting services. Copies of this guide are available by contacting OMHHD.

Hispanic Risk Study - This study was designed to determine the factors that affect Hispanic utilization of public health services. Based on the opinions of an expert panel comprising members of the Hispanic community and health care professionals conversant with the issues surrounding Hispanic health care, as well as interviews with Arkansas Department of Health professionals who serve the Hispanic community, the study examines the factors that affect Hispanic access to public health care in Arkansas. The term “access” is taken to mean participation in and receipt of quality public health care.

Marshall Island Assessment - A community assessment conducted to ascertain the health concerns of the Marshall Islander population. Since this initial assessment in 2000, the Marshall Islander population in Northwest Arkansas has continued to grow. In 2004, ADH conducted a four-county health needs assessment focus group with Marshallese women. The group was facilitated by a Marshallese speaker who works at the Northwest Arkansas Multicultural Center and was conducted in Marshallese, with two staff members from the University of Arkansas Social Work Research Center facilitating and/or observing. The major themes were: (1) Northwest Arkansas has the highest concentration of Marshall Islanders outside their homeland, the Republic of the Marshall Islands; (2) a majority of the focus group participants identified accessing needed health care services as a problematic concern; (3) language and cultural barriers and a lack of qualified Marshallese translators; and (4) a lack of awareness of available health care clinics and services.
These were the reasons most frequently cited by members of the focus group for the difficulties that many Marshall Islanders experience in attempting to access needed health care services. As a result of this study, the Centers for Disease Control and Prevention (CDC) came to Arkansas and conducted an Epi-Aid investigation in 2003-2005. The resulting report is a compilation of work by CDC identifying the current public health burden of select reportable diseases among the Marshallese residing in Northwest Arkansas. On May 2, 2007, an all-day forum was held entitled “Assessing Public Health Strategies Improving Health: Marshall Islanders.” The goal of this forum was to facilitate a dialog so that participants could reach a common understanding of issues, resources, and gaps in order to develop a strategic model for improving health care and quality of life among the Marshallese in Northwest Arkansas. Information obtained from the meeting will be used to increase awareness of health issues and to seek resources to meet needs. The Marshall Islands Health Minister attended the forum to provide current health information, and several staff from the CDC Epi-Aid team attended as well.

Health Fairs - OMHHD assists organizations and communities in setting up health fairs and provides health information and promotional aids. Specifically, OMHHD has developed a Health Fair/Health Fair Event Guidance Manual to help community organizations prepare for their health fairs. The manual provides the requestor with information about available promotional materials, contacts for various programs, a request form for internal and external partners to obtain materials and speakers, and an evaluation procedure. The manual can be obtained by contacting OMHHD.

Arkansas Cancer Coalition (ARCC) - ARCC is a statewide network of organizations and individuals. The ARCC mission is to reduce the human suffering and economic burden from cancer for all Arkansas citizens. Activities include awarding competitive mini-grants for innovative community-based projects, assisting ARCC partners with outreach activities, supporting professional and public education on various cancer topics, and planning breast and prostate cancer conferences.

Arkansas Legislative Black Caucus - The Caucus holds yearly conferences to address problems endemic to minority Arkansans, to resolve issues in planning for the next legislative session and to produce a legislative agenda for the upcoming session. The Caucus’ mission is to foster economic growth throughout Arkansas and to cultivate opportunities for wealth and a higher standard of living for minority and low-income Arkansans. During the 2007 legislative session, the Caucus convened a public roundtable each Wednesday to provide a forum for minority groups to present their programs and/or concerns. The Caucus also convened a monthly “think tank” to assist constituents in developing and presenting legislation they wanted enacted.

Arkansas Minority Health Consortium - Formed at the recommendation of Senator Tracy Steele, the Consortium focuses on identifying, reviewing and discussing issues related to the delivery of and access to health care services, as well as identifying gaps in health services delivery systems. The Consortium makes policy and procedural recommendations regarding the availability of services for minority populations.
This organization provides a forum for partner updates, information dissemination and legislative recommendations.

Child and Adolescent Service System Program (CASSP) Coordinating Council - The purpose of the CASSP is to develop and monitor a statewide plan for treating children with emotional disturbances. The council focuses on mandating services that are child- and family-centered, and its priority is to keep children with their families. Services are community-based, with decision-making responsibility and management at the local and regional level, and are culturally and ethnically sensitive to the needs of the clients served.

Cultural Awareness Training Seminars - These seminars are conducted annually to familiarize ADH employees with how cultural practices shape the way many clients use health services. Each year has a different focus, such as HIV/AIDS or chronic diseases. The seminar was first held in 2006, with a focus on minority groups in Arkansas and their cultural practices related to health care, and is held each April as part of “Minority Health Month” activities.

Heart Disease and Stroke Prevention Taskforce - The Heart Disease and Stroke Prevention Taskforce consists of 70 members from public and private health organizations. The Taskforce meets twice a year to review the interventions established in the comprehensive heart disease and stroke state plan. The Heart Disease and Stroke Workgroup is a subcommittee of the Taskforce, formed to implement, monitor, and support the Taskforce's work. The goal of the state plan is to improve knowledge of the symptoms of heart attack and stroke among Arkansas residents, and to increase public awareness of the necessity of, and options for, rapid response in the case of heart attack or stroke.

Injury Prevention Coalition - The Coalition’s mission is to sustain, enhance and promote the ability of state, territorial and local public health departments to reduce death and disability associated with injuries. The goals of the coalition are to: (1) expand the ability of public health agencies to develop policy; (2) conduct research; (3) design, implement and evaluate interventions; and (4) provide training and education.

Minority Health Month (April) - The first week of Minority Health Month activities is coordinated with Public Health Week. A joint press conference kicks off the weeklong activities, and events are planned across the state for each day of Public Health Week (e.g., a press conference, healthy exercise, cooking demonstrations and an agency-wide walk). The week ends with A Taste of the World, for which each employee is asked to bring a food dish representing his or her culture to share. It has become the highlight of Public Health Week thanks to the time set aside to appreciate each other’s culture.

Sickle Cell Disease Foundation (SCDF) - This non-profit organization provides follow-up counseling and support to families with children who have the sickle cell trait and/or disease. The SCDF is working toward obtaining funding for a health care facility to manage sickle cell disease in adults.

Tobacco Prevention Control Program - The Department of Health received funding in 2003 from the CDC to develop a strategic plan for identifying and eliminating disparities related to tobacco use among special populations. A workgroup was established to assist in developing the strategic plan.
OMHHD assists in coordinating the activities of this workgroup. The strategic plan has been completed, and will be printed and distributed to all participants. This plan will be available through the Tobacco Program.
Jon Hassler - reviews and resources:
- Curled Up with a Good Book: The Dean's List - Unsigned, undated book review.
- Graves of Academe - "Forget Garrison Keillor and the Coen brothers. Jon Hassler is Minnesota's most engaging cultural export." Review of "The Dean's List," in the New York Times.
- Hope on Ice: The Felicitous Fiction of Jon Hassler - Appreciative essay by Charlotte Hays on the Catholic novelist. Draws the inevitable comparison to J.F. Powers.
- Jon Hassler's Writing Room - Authorized web site about this Minnesota author and his novels.
- The Loves of His Life - In-depth review of Jon Hassler's book "North of Hope." By Richard Russo, writing in the New York Times. Requires free registration.
- A close look at "Dear James." Book review by Philip Zaleski.
- The Wit, Wisdom and Wonder of Writer Jon Hassler - Feature article built on interviews with Hassler and acquaintances. Includes some biographical information and examples of Hassler's descriptions of characters.
Lawrence Haddad - IDS Director

Lawrence Haddad has left IDS. Previously, Lawrence was the Director of the Institute of Development Studies, Sussex. He is an economist, and his main research interests are at the intersection of poverty, food insecurity and malnutrition. He was formerly Director of the International Food Policy Research Institute's Food Consumption and Nutrition Division and Lecturer in Development Economics at the University of Warwick. His field research has been in the Philippines, India and South Africa. He has a PhD from Stanford University and was selected for the latest Who's Who in Economics (Elgar).

Selected publications and projects:
- Impact Evaluation of DFID Programme to Accelerate Improved Nutrition for the Extreme Poor in Bangladesh
- Seizing the Opportunity to Sustain Economic Growth by Investing in Nutrition in Zambia - It is to the credit of the Zambian leadership and the development community that a great deal of momentum for nutrition has been built in the past few years.
- Turning Rapid Growth into Meaningful Growth: Sustaining the Commitment to Nutrition in Zambia - The articles in this IDS Special Collection show how the commitment to nutrition has been built in Zambia, and provide some pointers and guides to the ways in which that increased commitment could be leveraged to raise resources and how to allocate them.
- Maharashtra’s Child Stunting Declines: What is Driving Them? Findings of a Multidisciplinary Analysis - Between 2006 and 2012, Maharashtra’s stunting rate among children under two years of age was reported to decline by 15 percentage points – one of the fastest declines in stunting seen anywhere at any time.
- What are the Factors Enabling and Constraining Effective Leaders in Nutrition? A Four Country Study - Leadership has been identified as a key factor in supporting action on nutrition in countries experiencing a high burden of childhood undernutrition. This study of individuals identified as influential within nutrition in Bangladesh, Ethiopia, Kenya and India examines why individuals champion nutrition policy in their countries.
- The Hunger And Nutrition Commitment Index (HANCI 2013): Measuring the Political Commitment to Reduce Hunger and Undernutrition in Developing Countries - This report presents the Hunger And Nutrition Commitment Index (HANCI) 2013.
- Why Worry about the Politics of Child Undernutrition? - Undernutrition affects over 2 billion people, but most of the global policy focus has been on technical solutions rather than an understanding of nutrition politics. This paper reviews the existing literature on nutrition politics and policy.
- Reducing Child Undernutrition: Past Drivers and Priorities for the Post-MDG Era - As the post-MDG era approaches in 2016, reducing child undernutrition is gaining high priority on the international development agenda, both as a maker and marker of development.
- A State of the Art Review of Agriculture-Nutrition Linkages - This paper explores the latest evidence on the relationships between agriculture and nutrition in food-insecure regions.
- The HANCI Donor Index 2012: Measuring Donors' Political Commitment to Reduce Hunger and Undernutrition in Developing Countries - This second phase of the Hunger and Nutrition Commitment Index (HANCI) scrutinises donor government commitment to reducing hunger and undernutrition in developing countries.
- The Hunger And Nutrition Commitment Index (HANCI 2012): Measuring Political Commitment to Reduce Hunger and Undernutrition - Measures the political commitment to reduce hunger and undernutrition in developing countries. Supported by Irish Aid, DFID Accountable Grant and Transform Nutrition.
- Striving Towards 2015 - IDS Annual Report - 2012-13 marks the halfway point of our current five-year strategy. In this annual report, we present our progress over the past year towards each of our four strategic aims, highlighting the impact and achievements of IDS, our partners and our alumni.
- IDS on MDGs - This Virtual Bulletin focuses on past articles that inform the debate about the MDGs from an historical perspective.
- Whose Goals Count? Lessons for Setting the Next Development Goals - This IDS Bulletin compiles reflections from various actors on the core elements of the MDGs and also on topics not explicitly covered in them, such as governance, participation and infrastructure.
- The Politics of Reducing Malnutrition: Building Commitment and Accelerating Progress - In the past 5 years, political discourse about the challenge of undernutrition has increased substantially at national and international levels and has led to stated commitments from many national governments, international organisations, and donors.
- Seeing the Unseen: Breaking the Logjam of Undernutrition in Pakistan - After a lost decade, there is clearly a groundswell of momentum for nutrition in Pakistan, driven by a confluence of policy, evidence and events. This momentum needs to be sustained at the national level, reinforced at the provincial and sub-provincial levels, and converted into action.
- Embedding Nutrition in a Post-2015 Development Framework - Putting an end to the current nutrition crisis by 2030 is possible, but only if nutrition is embedded within a post-2015 development framework.
- Ending Hunger and Malnutrition - The first article in this virtual IDS Bulletin is by Michael Lipton and dates from 1982. In that year the WHO stunting rate for children of preschool age in sub-Saharan Africa was 39 per cent. In 2012 the rate is still 39 per cent.
- Standing on the Threshold: Food Justice in India - India stands on the threshold of potentially the largest step toward food justice the world has ever seen, as the National Food Security Bill, covering about 70 per cent of households, works its way through parliament with a view to being passed during its current term.
- Accelerating Reductions in Undernutrition: What can nutrition governance tell us? - In order to accelerate progress on undernutrition reduction we need to understand how the governance of nutrition programmes leads to successful outcomes.
- Time to Reimagine Development - The major global crises of the past four years have collectively had a dramatic impact on people's lives and livelihoods - but have they also had a large impact on core ideas underlying mainstream development?

Related Content - News & Blogs:
- Belated Children's Day Thoughts, by Hamsini Ravi
- Fieldwork reflections from the Life in a Time of Food Price Volatility research project, Parts I and II, by Alexandra Wanjiku Kelbert
- Food riots and food rights - reflections from researchers on the front line (various researchers contributed to this post)
ProQuest Research - Via ProQuest - Dates: Varies. Identifies articles on all topics, many with links to full text. Includes articles in scholarly, peer-reviewed journals. For alumni access, see also Alumni Library Gateway; for alumni, off-campus access covers a subset of content.

ProQuest Research, Newspapers & Business - Dates: Varies. Identifies articles in journals, magazines and newspapers, many full text. Includes ABI/Inform, Ethnic NewsWatch, GenderWatch and Alt-Press Watch.

Wilson OmniFile - Via EBSCO - Dates: 1982 - current. Identifies articles on many subjects with some full text. Includes full text from these databases, when available: General Science Abstracts, Humanities Abstracts, Index to Legal Periodicals & Books, Library Literature & Information Science Index, Readers' Guide Abstracts, Social Sciences Abstracts, and Wilson Business Abstracts.

JSTOR - Dates: Varies by title from 1800s - 2000s. Full text articles in many disciplines. Subject areas include African-American studies, anthropology, Asian studies, business, ecology, economics, education, finance, history, literature, mathematics, philosophy, political science, population studies, sociology and statistics. The University of Rochester Libraries currently subscribes to the JSTOR Arts and Sciences I through XII collections. JSTOR also packages its content in disciplinary collections; however, the only ones of these that we have licensed are the Biological Sciences segment and the first of the Business collections. For alumni access, see also Alumni Library Gateway.

WorldCat - Via FirstSearch - Dates: Updated daily. Identifies and locates books and other materials owned by libraries in the US and internationally. Use WorldCat to locate materials outside of UR, in all disciplines and formats, or to locate items in specific libraries and research collections.

CREDO Reference - Dates: 1911 - present. A customizable general reference solution for librarians and learners offering more than 600 titles from over 80 publishers.

ebrary - Dates: Varies. Selected full text of books in all disciplines. Installing the ebrary Reader plugin depends on your browser. In Firefox, if you've arrived at ebrary from the library catalog:
1. Click on the puzzle piece that says, "Click here to download plugin."
2. Click on the "Manual Install" button. This will open an ebrary download page in a new window.
3. Click the "Download ebrary Reader for Windows Now" link. Depending on your security settings, Firefox may display a plugin security warning at the top of the page and refuse to download the plugin. If this happens, you must:
   i. Click the "Edit Options..." button.
   ii. Click "Allow" to add ebrary to the list of trusted sites.
   iii. Click "OK" to update your options.
   iv. Click the "Download ebrary Reader for Windows Now" link again to download the Reader.
   v. Once the plugin downloads, click the "Install Now" button to complete installation.

ArticleFirst - Via FirstSearch - Dates: 1990 - current. Tables of contents of journals covering all disciplines and subjects, with information about each article.

Choice Reviews - Dates: 1988 - current. Brief reviews of academic books in all disciplines.

Google Scholar - Dates: Varies. For off-campus access to full text: click Scholar Preferences and add Rochester as your Library Link. Be sure to Save Preferences.
LexisNexis Academic - Via LexisNexis - Dates: Varies. Full text articles from newspapers, magazines, selected journals, and reference books. Search in five major content areas: news, business, legal research, medical and reference. The news section includes newspapers and journals; the business section has full text company and industry news, including accounting literature and corporate financial information such as 10-Ks and annual reports. The legal section includes articles, case law, codes and regulations, international case law, patents, and directories of lawyers and law schools.

Sage Journals - Dates: Varies. Journals from Sage Publications. Browse or search tables of contents, abstracts and full text articles in all disciplines, including the Business, Humanities, Social Sciences, and Science, Technology and Medicine collections. Full text available as indicated. For alumni access, see also Alumni Library Gateway.

ScienceDirect - Dates: Varies. Journals from Elsevier Science. Browse or search tables of contents, abstracts and full text of articles in all disciplines, including the sciences, engineering, humanities, social sciences, and clinical medicine. Full text from 1995 on, except as indicated.

Web of Science - Via ISI Web of Science - Dates: 1898 - current. Identifies journal articles and cited references to research publications from all disciplines, some with links to full text. Comprised of three databases that can be searched separately or in combination: Arts & Humanities Citation Index (1975 - current), Science Citation Index Expanded (1973 - current), and Social Sciences Citation Index (1973 - current).
Alfred Lambourne was born in Weymouth, England in 1850. He was a romantic realist landscape painter of the Rocky Mountain School who painted panoramic pictures of natural scenery in the western United States. He died in Salt Lake City in 1926. The Lambourne family converted to The Church of Jesus Christ of Latter-day Saints and immigrated to Salt Lake City in 1866. Alfred Lambourne began painting scenery for the Salt Lake Theater soon after his arrival. Although he had had some informal instruction, he was primarily self-taught. In 1883 he painted Great Salt Cliffs at Promontory Lake. Other paintings include Black Rock Great Salt Lake (1890), Summer (1921), and Winter (1924). Two of his easel paintings, Hill Cumorah and Adam-ondi-Ahman, are in the Salt Lake Temple of The Church of Jesus Christ of Latter-day Saints. Biography adapted from Springville Museum of Art.

Alfred Edward Lambourne was born in England on the River Lambourne on February 2, 1850. His parents encouraged his artistic talents while he was young, and when the family converted to the L.D.S. faith and moved to the United States, Lambourne's career as a romantic realist painter of the western landscape began. As a youth, Lambourne lived in St. Louis until his family migrated to Salt Lake City in 1866 (Haseltine, 42). During the trek west, Lambourne kept a sketchbook of the scenery along the way. After the family's arrival in Utah, Alfred began painting scenes for the Salt Lake Theater Company; he was just 16. Although he had some instruction from J. Guido Methua, George Tirrell, and Henry D. Tryon, Lambourne was primarily self-taught. He had an original approach to landscape painting and was capable of depicting moonlight and sunset scenes with an air of mysticism. Lambourne's content and painting style were those of the "Rocky Mountain School," which was similar in style and philosophy to the Hudson River School of the East. Like Thomas Moran, Albert Bierstadt, and Thomas Cole, Lambourne was a painter of new and often unexplored territories. "It is said that explorers of the time did not claim discoveries in an area until they ascertained whether Lambourne had already painted there" (Dictionary of Utah Artists, 277). In 1871, Lambourne went on an expedition to Zion Canyon with Brigham Young and produced the first sketches of the area. Also during the 1870s, Lambourne traveled with Charles R. Savage, the famous nineteenth-century photographer. Later that same decade, he traveled and painted with his friend Reuben Kirkham. Together they produced a landscape panorama similar to those of C. C. A. Christensen and Samuel Jepperson. The idea of a panoramic show was popular and appealing to Utahns of the nineteenth century, and the large size of Lambourne's paintings reflected the grandeur of the United States. Kirkham and Lambourne traveled with 60 panels entitled Across the Continent (1876), including a 25-foot-long view of the Salt Lake Valley (Utah Art, 25). By the 1880s, Lambourne was well known for his painting abilities. He was one of the first artists to visit and paint Yellowstone, the Grand Canyon, and Yosemite and was quickly gaining commissions. In 1880, collector J.R. Walker had Lambourne paint Moonlight, Silver Lake, Cottonwood Canyon (1880). The picture depicts two figures in a boat and features dramatic changes in value and a heavy atmosphere. Also during the 1880s, Lambourne began writing poetry to express his feelings on nature and Utah.
Lambourne reported that the "lonely solemnity of Utah's scenery moved him" (Utah Art, 36). In January 1884, he wrote an essay contrasting the visual arts and literature, and by the 1890s, Lambourne almost exclusively preferred the pen to the paintbrush. He wrote a total of 14 books, some of which he illustrated with black and white tempera paintings. Of the few easel paintings he did produce, several were hung in the Salt Lake Temple, including one of the Hill Cumorah and one of Adam-Ondi-Ahman. Lambourne died June 6, 1926, in Salt Lake City. He had been a painter, writer, explorer, and lover of nature. He was described as a man "of ability if not genius" (Utah Art, 20). Biography courtesy Springville Museum of Art.

Lambourne, Alfred. A Book Of Verse. Salt Lake City, UT: Lambourne, 1907.
Lambourne, Alfred. A Farewell, To My Friend John Tullidge. Salt Lake City, UT: Lambourne, 1900-1926.
Lambourne, Alfred. A Glimpse Of Great Salt Lake, Utah: Reached Via The Union Pacific Railroad Co. Omaha, NE: Passenger Department, Union Pacific Railroad, 1898.
Lambourne, Alfred. A Lover's Book Of Sonnets. Salt Lake City, UT: The Deseret News, 1917.
Lambourne, Alfred. A Memory of Dr. John R. Park. Salt Lake City, UT: The author, 1900.
Lambourne, Alfred. A Trio Of Sketches: Being Reminiscences Of The Theater Green Room And The Scene Painter's Gallery From Suggestions In "A Play-House." Salt Lake City, UT: Lambourne, 1917.
Lambourne, Alfred. An Old Sketch Book. Boston, MA: S.E. Cassino, 1892.
Lambourne, Alfred. Bits Of Descriptive Prose. Chicago, IL: Belford-Clarke Co., 1891.
Lambourne, Alfred. Columbia River Scenery. Boston, MA: S.E. Cassino, 1889.
Lambourne, Alfred. From Mount Hood to Monterey: Pen And Pencil. Boston, MA: Samuel E. Cassino, 1890s.
Lambourne, Alfred. Jo: A Christmas Tale Of The Wasatch. Chicago, IL: Belford-Clarke, 1891.
Lambourne, Alfred. Memorabilia. [S.l.: s.n.], 1929.
Lambourne, Alfred. Metta: A Sierra Love Tale. Salt Lake City, UT: The Deseret News Publishers, 1912.
Lambourne, Alfred. Our Inland Sea: The Story Of A Homestead. Salt Lake City, UT: The Deseret News Press, 1909.
Lambourne, Alfred. Peace, The Country Cross-Roads: A Prose Idyll. Salt Lake City, UT: Lambourne, 1900-1926.
Lambourne, Alfred. Pictures Of An Inland Sea. Salt Lake City, UT: The Deseret News, 1902.
Lambourne, Alfred. Pine Branches And Sea Weeds. Salt Lake City, UT: A. Lambourne, 1889.
Lambourne, Alfred. Plet: A Christmas Tale Of The Wasatch. Salt Lake City, UT: The Deseret News, 1909.
Lambourne, Alfred. Scenic Utah: Pen And Pencil. New York, NY: Dewing Pub. Co., 1891.
Lambourne, Alfred. The Cross, Holly And Easter Lilies. Salt Lake City, UT: Lambourne, 1917.
Lambourne, Alfred. The Old Journey: Reminiscences Of Pioneer Days. Salt Lake City, UT: George Q. Cannon, 1897.
Lambourne, Alfred. The Pioneer Trail. Salt Lake City, UT: The Deseret News, 1913.
Lambourne, Alfred. Three Season's Flowers. [S.l.: s.n.], 1902.
Olpin, Robert S., William C. Seifrit, and Vern G. Swanson. Artists Of Utah. Salt Lake City, UT: Gibbs Smith Publisher, 1999.
Snow, Eliza R., and Eva W. Wangsgaard. Pioneer Poets and Poetry. Salt Lake City, UT: Daughters of Utah Pioneers, 1993.
Swanson, Vern G., Robert S. Olpin, and William C. Seifrit. Utah Painting And Sculpture. Layton, UT: Gibbs Smith Publisher, 1991.
Swanson, Vern G., Robert S. Olpin, Donna Poulton, and Janie Rogers. 150 Years Survey: Utah Art And Utah Artists. Layton, UT: Gibbs Smith Publisher, 2001.
Swanson, Vern G., Robert S. Olpin, and William C. Seifrit. Utah Art.
Layton, UT: Peregrine Smith Books, 1991.
Lambourne, Alfred. "Art Sketches Of the Yellowstone." Contributor 10 (November 1888): 22-27.
Case Reports in Medicine, Volume 2012 (2012), Article ID 387140, 3 pages

Vitiligo in a Patient Treated with Interferon Alpha-2a for Behçet’s Disease

Ophthalmology Clinic, Umraniye Research and Education Hospital, Adem Yavuz Caddesi, Ümraniye, İstanbul, Turkey

Received 8 April 2012; Accepted 25 July 2012. Academic Editor: Jeffrey M. Weinberg

Copyright © 2012 Esra Guney et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Behçet’s disease (BD) and vitiligo are diseases of unknown etiology. Interferon (IFN) alpha therapy is commonly used in Behçet uveitis. Because of its immunomodulatory activity, interferon treatment in various diseases has also been observed to cause certain autoimmune diseases, such as vitiligo. The association between IFN therapy and vitiligo has been reported in the literature. We report a 21-year-old man with BD in whom vitiligo occurred during IFN treatment. To the best of our knowledge, this is the first reported case of such an association.

1. Introduction

Behçet’s disease (BD) is a chronic, relapsing, vascular inflammatory disease of unknown etiology affecting arteries and veins of all sizes. Ocular involvement in these patients is common, and the typical form of involvement is a bilateral nongranulomatous panuveitis and retinal vasculitis. The exact cause of BD still remains unknown; however, current evidence suggests that a complex interplay of genetic and environmental factors triggers an autoimmune process in genetically predisposed individuals. Interferons (IFN) are part of the nonspecific immune system and are induced at an early stage by pathogens such as viruses, bacteria, parasites, or tumor cells. They have antiviral, antimicrobial, antitumor, and immunomodulatory actions. Due to these effects, IFN therapy has gained popularity in the treatment of Behçet’s disease in the last decade, and several studies have shown consistent benefit with the use of IFN in the treatment of Behçet’s uveitis [3–5]. A wide array of adverse effects of IFN alpha therapy has been described. The major side effects of IFN alpha therapy in Behçet’s disease are flu-like symptoms, skin lesions such as psoriasis, development of autoantibodies, thyroid hormone imbalance, severe depression, and leukopenia [3–6]. Furthermore, necrosis and vasculitis at the injection site, xerosis, pruritus, urticaria, alopecia, psoriasis, pityriasis rosea, lichen planus, eosinophilic fasciitis, and vitiligo are the most common adverse skin reactions due to IFN treatment [2, 7, 8]. Vitiligo is a common, idiopathic, progressive, acquired depigmenting skin disorder in which some or all of the melanocytes are selectively destroyed in the hypomelanotic areas [9, 10]. The destruction is thought to be due to an autoimmune process; however, multiple immunological, neurological, and genetic components have been considered in the pathogenesis of the disease.
We report a patient in whom vitiligo occurred during IFN alpha-2a therapy for Behçet panuveitis. To the best of our knowledge, this is the first report of such an association.

2. Case Report

We administered IFN alpha-2a to a 21-year-old man with bilateral Behçet panuveitis. He fulfilled the classification criteria of the International Study Group for Behçet’s disease. In addition to his eye lesions, he had recurrent oral and genital ulcerations. He had previously received high-dose corticosteroids and conventional immunosuppressive treatment with azathioprine and cyclosporine. IFN alpha-2a therapy was initiated during his remission period to prevent the side effects of high-dose corticosteroids. Immunosuppressive agents and oral corticosteroids were discontinued before initiation of IFN alpha-2a therapy. The initial dose of IFN alpha-2a was 6 million units per day (MU/day), given subcutaneously, which was tapered to 3 MU/day after 15 days. The patient tolerated the therapy well, and no further uveitis attacks were observed during the treatment. However, after 2 months of therapy, he developed small, round, depigmented areas bilaterally on the upper arms, around the IFN alpha-2a injection sites (Figure 1). The diagnosis of vitiligo was confirmed by dermatology consultation. The patient had no family history of vitiligo. Topical corticosteroid treatment was initiated for his vitiligo, and the maintenance dose of IFN alpha-2a was adjusted to 3 MU three times weekly. His condition did not change in the following 2 months.

3. Discussion

Many cases of vitiligo associated with IFN treatment have been reported [13–15]. The exact relationship between the treatment and this autoimmune phenomenon is still obscure. It can be hypothesized that stimulation of autoantibodies against melanocytes and/or cytotoxic T-cell activation by IFN therapy can lead to vitiligo development. It has also been shown that the presence of autoantibodies prior to IFN therapy increases the risk of developing autoimmune disorders once IFN is initiated. Interferon has many effects on the immune system, including modulation of immunoglobulin production; inhibition of T suppressor cell function; and stimulation of T-cell cytotoxicity and of monocyte, macrophage, and natural killer cell activity [13, 18]. It is well known that vitiligo is associated with various autoimmune disorders, and autoimmunity is supposed to play a role in its pathogenesis. The frequency of autoimmune thyroid disease is increased among vitiligo patients and their first-degree relatives, and the frequency of vitiligo is higher among patients with Hashimoto’s thyroiditis and Graves’ disease. Oran et al. reported that the frequency of vitiligo was not increased among patients with BD. They claimed that traditional autoimmune mechanisms might not be operative in BD. However, Borlu et al. reported two brothers who exhibited coexisting BD and vitiligo. They attributed this association to some common features of both diseases, including T-cell activation and higher levels of autoantibodies. Nevertheless, they did not deny the possibility of a coincidence in their case. There have not been many reports of an association between BD and vitiligo. However, several case studies have been published about vitiligo occurring during IFN treatment [13–15]. Taken together, the occurrence of vitiligo during the course of BD in our case suggests that IFN therapy, rather than BD, may have played a role in the development of vitiligo. Anbar et al.
reported a case of vitiligo occurring at the site of IFN injection. They claimed that the vitiligo occurred as a result of a local immune response against melanocytes at the sites of IFN injection. In our case, that kind of local immune reaction may be responsible for the vitiligo formation at the IFN injection sites. Many dermatological side effects have been reported with IFN. Some, but not all, of these side effects are immune-mediated; psoriasis, pemphigus, vitiligo, and alopecia are among the immune-mediated side effects. Most of the side effects associated with IFN are considered to be dose-dependent [2, 7]. In a previous study, the occurrence of vitiligo with IFN therapy for viral hepatitis was reported in a series of eight cases. However, it is not clear whether the IFN dose is correlated with the development or exacerbation of vitiligo. In our case, the skin lesions of the patient did not improve after tapering the dose of IFN alpha-2a. Occurrence of vitiligo has previously been reported in patients treated with IFN for hepatitis C, hepatitis B, and melanoma [13, 14, 24]. To the best of our knowledge, this is the first reported case of vitiligo in a BD patient treated with IFN. This case is an important reminder of the potential side effects of IFN and of the need to warn patients about them before initiating IFN therapy.

Conflict of Interests

The authors declare that they have no conflict of interests.

References

- D. Mendes, M. Correia, M. Barbedo et al., “Behçet's disease—a contemporary review,” Journal of Autoimmunity, vol. 32, no. 3-4, pp. 178–188, 2009.
- R. Stadler, A. Mayer-da-Silva, B. Bratzke, C. Garbe, and C. Orfanos, “Interferons in dermatology,” Journal of the American Academy of Dermatology, vol. 20, no. 4, pp. 650–656, 1989.
- S. Onal, H. Kazokoglu, A. Koc et al., “Long-term efficacy and safety of low-dose and dose-escalating interferon alfa-2a therapy in refractory Behçet uveitis,” Archives of Ophthalmology, vol. 129, no. 3, pp. 288–294, 2011.
- I. Kötter, M. Zierhut, A. K. Eckstein et al., “Human recombinant interferon alfa-2a for the treatment of Behçet's disease with sight threatening posterior or panuveitis,” British Journal of Ophthalmology, vol. 87, no. 4, pp. 423–431, 2003.
- I. Tugal-Tutkun, E. Güney-Tefekli, and M. Urgancioglu, “Results of interferon-alfa therapy in patients with Behçet uveitis,” Graefe's Archive for Clinical and Experimental Ophthalmology, vol. 244, no. 12, pp. 1692–1695, 2006.
- G. Sobaci, U. Erdem, A. H. Durukan et al., “Safety and effectiveness of interferon alpha-2a in treatment of patients with Behçet's uveitis refractory to conventional treatments,” Ophthalmology, vol. 117, no. 7, pp. 1430–1435, 2010.
- L. A. Asnis and A. A. Gaspari, “Cutaneous reactions to recombinant cytokine therapy,” Journal of the American Academy of Dermatology, vol. 33, no. 3, pp. 393–410, 1995.
- E. Alpsoy, C. Durusoy, E. Yilmaz et al., “Interferon alfa-2a in the treatment of Behçet disease: a randomized placebo-controlled and double-blind study,” Archives of Dermatology, vol. 138, no. 4, pp. 467–471, 2002.
- C. L. Huang, J. J. Nordlund, and R. Boissy, “Vitiligo: a manifestation of apoptosis?” American Journal of Clinical Dermatology, vol. 3, no. 5, pp. 301–308, 2002.
- P. Shameer, P. V. S. Prasad, and P. K. Kaviarasan, “Serum zinc level in vitiligo: a case control study,” Indian Journal of Dermatology, Venereology and Leprology, vol. 71, no. 3, pp. 206–207, 2005.
- M. E. Whitton, D. M. Ashcroft, and U.
González, “Therapeutic interventions for vitiligo,” Journal of the American Academy of Dermatology, vol. 59, no. 4, pp. 713–717, 2008. - International Study Group for Behcet’s Disease, “Criteria for diagnosis of Behcet’s disease,” The Lancet, vol. 335, no. 8697, pp. 1078–1080, 1990. - D. Seçkin, C. Durusoy, and S. Sahin, “Concomitant vitiligo and psoriasis in a patient treated with interferon alfa-2a for chronic hepatitis B infection,” Pediatric Dermatology, vol. 21, no. 5, pp. 577–579, 2004. - I. Hamadah, Y. Binamer, F. M. Sanai, A. A. Abdo, and A. Alajlan, “Interferon-induced vitiligo in hepatitis C patients: a case series,” International Journal of Dermatology, vol. 49, no. 7, pp. 829–833, 2010. - V. Arya, M. Bansal, L. Girard, S. Arya, and A. Valluri, “Vitiligo at injection site of PEG-IFN-α 2a in two patients with chronic hepatitis C: case report and literature review,” Case Reports in Dermatology, vol. 2, no. 2, pp. 156–164, 2010. - H. Simsek, C. Savas, H. Akkiz, and H. Telatar, “Interferon-induced vitiligo in a patient with chronic viral hepatitis C infection,” Dermatology, vol. 193, no. 1, pp. 65–66, 1996. - P. Raanani and I. Ben-Bassat, “Immune-mediated complications during interferon therapy in hematological patients,” Acta Haematologica, vol. 107, no. 3, pp. 133–144, 2002. - R. M. Schultz, J. D. Papamatheakis, and M. A. Chirigos, “Interferon: an inducer of macrophage activation by polyanions,” Science, vol. 197, no. 4304, pp. 674–676, 1977. - V. N. Sehgal and G. Srivastava, “Vitiligo: auto-immunity and immune responses,” International Journal of Dermatology, vol. 45, no. 5, pp. 583–590, 2006. - A. Alkhateeb, P. R. Fain, A. Thody, D. C. Bennett, and R. A. Spritz, “Epidemiology of vitiligo and associated autoimmune diseases in Caucasian probands and their families,” Pigment Cell Research, vol. 16, no. 3, pp. 208–214, 2003. - Y. K. Shong and J. A. Kim, “Vitiligo in autoimmune thyroid disease,” Thyroidology/A.P.R.I.M, vol. 3, no. 2, pp. 89–91, 1991. - M. Oran, G. Hatemi, L. Tasli et al., “Behçet's syndrome is not associated with vitiligo,” Clinical and Experimental Rheumatology, vol. 26, no. 4, pp. S107–S109, 2008. - M. Borlu, E. Çölgeçen, and C. Evereklioglu, “Behçet's disease and vitiligo in two brothers: coincidence or association?” Clinical and Experimental Dermatology, vol. 34, no. 8, pp. e653–e655, 2009. - T. S. Anbar, A. T. Abdel-Rahman, and H. M. Ahmad, “Vitiligo occurring at site of interferon-α 2b injection in a patient with chronic viral hepatitis C: a case report,” Clinical and Experimental Dermatology, vol. 33, no. 4, p. 503, 2008.
The opening talk, an hour-long joint meeting of the Quality Assurance Workshop and EFfCI Workshop, was standing room only, as experts covered issues related to Good Manufacturing Practices (GMP) and audit schemes for the safety of ingredients and finished products in the cosmetic industry. The panelists included Steve Greer of Procter & Gamble, Harry Bennett of Rutgers University and Iain Moore of Croda UK, who presented “Cosmetic cGMP for suppliers,” “PCMAP (Personal Care Manufacturing Assessment Program) and the role of GMP in the supply chain,” and “EFfCI (European Federation for Cosmetic Ingredients) GMP guidelines for cosmetic ingredients,” respectively.

Quality Assurance Workshop

The Quality Assurance Workshop continued with Karyn M. Campbell, director, Investigations Branch, US Food and Drug Administration’s Philadelphia District Office, who presented an update on FDA’s activities, including an overview of the agency’s reorganization—some of which had been implemented just days before. Campbell then provided information on recent warning letters, including the cyber letter issued to L’Oréal’s Lancôme brand. “This is called a cyber letter because there is no inspection,” Campbell said, noting that FDA investigators do routine web searches for claims that cross over into the drug area. Another warning letter presented by Campbell involved a foot scrub. This one went beyond claims issues; the products were adulterated and posed health issues for consumers, the document said. According to the warning letter, an FDA sample of the foot scrub was found to have excessive microbiological levels. The letter, issued in 2012, was written following an inspection of a US facility during which the agency found that the product was prepared, packed and held under unsanitary conditions. “There are no GMPs for cosmetics,” Campbell told attendees. “When we don’t have GMP regulations established, we always have the Act,” she continued, referring to the Federal Food, Drug, and Cosmetic Act. In its warning letter, FDA staff said it was compelled to state that the company did not follow GMPs, noted Campbell. Agency inspectors found filth and dust on manufacturing equipment, and said that raw materials were not routinely evaluated from a microbiological safety standpoint and that finished products were not being routinely tested either. According to Campbell, FDA staff was confident that if the foot scrub company challenged the allegations in court, the issues related to GMP would stand. “We are a very conservative agency, so we felt that these were very egregious,” she said.

A Serious Discussion About B. cereus

Another hot topic on the docket was Bacillus cereus (B. cereus), which was addressed in the Cosmetic Microbiological Safety session. Dr. Michelle Callegan of the University of Oklahoma presented “Ocular Infections by Bacillus cereus and Other Organisms.” Dr. Callegan told attendees that B. cereus is ubiquitous in nature. “You can find it anywhere,” she told the audience, noting that it turns up even in such mundane places as black pepper. According to Dr. Callegan, the most common B. cereus infections are gastrointestinal; however, there are non-GI infections as well, such as wounds or burns, respiratory (pneumonia) and ocular (endophthalmitis). “Are non-GI B. cereus infections on the increase?” Dr. Callegan asked in her presentation.
While only serious infections have been published in the literature, and the majority of reported serious cases occurred in neonates, IV drug abusers and immunocompromised patients, a January 2011 alcohol swab recall by The Triad Group has led to increased concern from FDA—including concern about B. cereus in cosmetics that are to be used around the eye area. According to Dr. Callegan, infection with B. cereus usually results in blindness or loss of the eye itself within 24-48 hours; despite treatment, 60-85% of all B. cereus endophthalmitis cases result in significant vision loss, and 48-70% result in enucleation or evisceration of the eye. Yet, according to Dr. Callegan, there has not been a lot of literature on B. cereus infections in the eye area, and the reports that have been published involved contact lens wear or eye injuries—such as using a pin to separate eyelashes after application of mascara, for example. She said that it comes down to personal responsibility with products: when consumers are instructed to use a product in a particular manner, they need to use it in that manner. "You can do what you can, put inserts, little fine print—but when it gets to their home, it is out of your control." Dr. Callegan told the audience, "I don't think you need to worry about B. cereus in cosmetics, and that is just my humble opinion." The B. cereus conversation continued in the afternoon with Steve Schnittger, vice president of global microbiology at Estée Lauder. His presentation was entitled "Microbiological Risk Assessment Review of B. Cereus." While FDA did not consider B. cereus a hazard five years ago, the agency appears to have changed that stance. "We are here today to further support that B. cereus in our products is not a concern," Schnittger noted. In his talk, Schnittger presented a trio of case studies related to Bacillus species. His goals were to show why, in a preserved cosmetic matrix, these organisms should not be viewed as objectionable, and to focus on the importance of a risk assessment when determining the potential risk of a cosmetic product. The first study involved UK grain workers, who were frequently found to be exposed to more than 1 million bacteria and fungi per m³ of air. Levels of airborne endotoxin of over 10,000 EU/m³ were recorded at all but one workplace visited, and personal exposures reached over 600 EU/m³ at every workplace. In the study, there were no reported cases of traumatic eye infections even though the bacterial and fungal counts exceeded 1 × 10⁹ cfu per gram; long-term exposure to high endotoxin, bacterial and particle levels did not have an effect on lung function and did not appear to cause chronic lung damage. The second case involved B. cereus in infant formula and came via the "Final Assessment Report: Bacillus cereus Limits in Infant Formula" of Food Standards Australia New Zealand (FSANZ). As infant formula can be the sole source of nutrition for infants and frequency of consumption is very high, an infectious dose of B. cereus is of concern for infants because their immune systems are not fully developed, leaving them susceptible to bacterial infections. The New Zealand risk assessment study concluded that levels between 10 cfu and 100 cfu of Bacillus species were not a risk to infants when formula is prepared and stored properly. The Schnittger presentation also included a survey of B. cereus contamination in foods:
- 52% of 1,546 food ingredients
- 44% of 1,911 creams and desserts
- 52% of 431 meat and vegetable products
- up to 48% of dairy products
- 50% of UHT milk

While Bacillus species are widespread in food products, according to Schnittger's presentation it is estimated that the rate of illness due to B. cereus in the USA is 0.1 cases per 10,000,000 people per year. The final case study presented by Schnittger was a bit closer to home, as it involved imported cosmetics. A lipstick or an eye shadow containing <10 cfu per gram of B. cereus would be delivering 0.01–0.05 grams of product per application. It would be delivering on the order of 10⁻² cfu of bacteria per application of a preserved anhydrous product—the equivalent of 0.001 cfu per gram of product—to an area of the body (the lips) which already carries a very high bacterial load. According to Schnittger, in Case 1 there was a high bacterial load in the eye area with no adverse reactions reported and no long-term adverse reactions observed. In Case 2, it was found to be acceptable that a product being given to an immune-compromised population contained levels of bacteria ranging from <10² to <10⁵ cfu per gram. And in the third case, a preserved or anhydrous hot-pour product, which is hostile to microbial growth, delivered levels of <10 cfu per gram of an organism to an area of the body that may carry bacterial levels 10–1,000 times higher than what is being delivered by the product. In conclusion, Schnittger said the data showed that in non-cosmetic applications, B. cereus is not harmful even at levels 100 times higher than the limits used for cosmetic products. According to Schnittger, industry needs to be persistent with FDA about B. cereus, and he suggested that the industry should be willing to take legal steps to defend the safety of its products. Noting that today's personal care products aren't just "water in oil emulsions being put on hair," Schnittger said, "Products are more dynamic and the packaging is more dynamic. Risk assessment will help us determine if products are safe, what is the potential risk. And I think that I have shown today that the risks are minimal." During the Q&A portion of Schnittger's presentation, Scott Sutton, a consultant with Microbiology Network (and an earlier speaker at the symposium), supported Schnittger's stance by telling the audience, "If you are not willing to defend the safety of your products, you get what you get."
Adam T Hirsh
Affiliation: University of Washington

- Pain assessment and treatment disparities: a virtual human technology investigation. Adam T Hirsh, Department of Rehabilitation Medicine, University of Washington School of Medicine, Box 356490, Seattle, WA 98195-6490, USA. Pain 143:106-13, 2009. These data contribute to the existing literature on disparities in pain practice and highlight the potential of a novel approach that may serve as a model for future investigation of these critical issues.
- Cognitive and behavioral treatments for anxiety and depression in a patient with an implantable cardioverter defibrillator (ICD): a case report and clinical discussion. Adam T Hirsh, Department of Rehabilitation Medicine, University of Washington, Seattle, WA, USA. J Clin Psychol Med Settings 16:270-9, 2009. Improvements in marital relations were also achieved. These treatment effects were maintained at follow-up and in the context of acute, medical stressors. Future clinical and research directions are also discussed.
- Sex differences in pain and psychological functioning in persons with limb loss. Adam T Hirsh, Department of Rehabilitation Medicine, University of Washington School of Medicine, Seattle, WA 98195-6490, USA. J Pain 11:79-86, 2010. This study did not find prominent sex differences in pain specific to limb loss. However, several sex differences in the overall biopsychosocial experience of pain did emerge that are consistent with the broader literature.
- Evaluation of nurses' self-insight into their pain assessment and treatment decisions. Adam T Hirsh, Department of Rehabilitation Medicine, University of Washington School of Medicine, Seattle, Washington, USA. J Pain 11:454-61, 2010. These data suggest that biases may be prominent in practitioner decision-making about pain, but that providers have minimal awareness of and/or a lack of willingness to acknowledge this bias.
- Effects of self-hypnosis training and cognitive restructuring on daily pain intensity and catastrophizing in individuals with multiple sclerosis and chronic pain. Mark P Jensen, University of Washington, Seattle, USA. Int J Clin Exp Hypn 59:45-63, 2011. The CR-HYP treatment appeared to have beneficial effects greater than the effects of CR and HYP alone. Future research examining the efficacy of an intervention that combines CR and HYP is warranted.
- Psychosocial factors and adjustment to pain in individuals with postpolio syndrome. Adam T Hirsh, Department of Rehabilitation Medicine, University of Washington School of Medicine, Seattle, Washington 98104, USA. Am J Phys Med Rehabil 89:213-24, 2010. The purpose of the current study was to examine the associations among measures of psychosocial factors, pain, and adjustment in persons with postpoliomyelitis syndrome.
- Symptom burden in individuals with cerebral palsy. Adam T Hirsh, Department of Rehabilitation Medicine, University of Washington School of Medicine, Seattle, WA, USA. J Rehabil Res Dev 47:863-76, 2010. Additional research is needed to identify the most effective treatments for those symptoms that affect community integration and psychological functioning as a way to improve the quality of life of individuals with CP.
- Prevalence and impact of pain in multiple sclerosis: physical and psychologic contributors. Adam T Hirsh, Veterans Affairs Puget Sound Health Care System, University of Washington, Seattle, WA, USA. Arch Phys Med Rehabil 90:646-51, 2009. To characterize the prevalence and impact of pain in veterans with multiple sclerosis (MS) and to assess their association with demographic, biologic, and psychologic variables.
- Sleep problems in individuals with spinal cord injury: frequency and age effects. Mark P Jensen, University of Washington School of Medicine, Seattle, WA 98195-6490, USA. Rehabil Psychol 54:323-31, 2009.
The rapid pace at which we produce new technology means that new industries are created every day, providing ample opportunity for young entrepreneurs to get in on the ground floor. But the paradox of choice can be a problem, as the question becomes where to turn and how you should begin. To simplify your research, here are five areas gathering steam:

1. 3D printing
With the 3D printing market projected to hit $4.5 billion by 2018, the industry is just ramping up. London-based Makielab developed an innovative system that combines 3D modeling, smartphones and 3D printers to enable kids to build their own toys. MakerBot Industries recently opened a 50,000-square-foot factory in Brooklyn and was just acquired by competitor Stratasys for $403 million. NASA is sending a 3D printer to space. And then there is Modern Meadow, the company behind 3D meat printing. With a machine that builds any shape imaginable, 3D printing is screaming endless possibilities.

2. Alternate reality games
Alternate reality games are real-life games built around a digital framework where players use their physical space as a digital playground. Apps like Facebook's Check In and Foursquare encourage users to announce their arrival at various hotspots by gamifying their daily routine. A narrative-driven exploration app, Ingress, turns urban exploration into a spy-fiction cold war between its players. The current gamification market stands at $421 million and is expected to jump to $5.5 billion by 2018, according to a report by Markets and Markets. And with smartphones connected to everybody's hip, they are the perfect tool to gamify everyday tasks.

3. Storytelling
Narratives are different now. With social media becoming a more viable way of gathering information, sites like Branch and Storify are turning posts, tweets and pics into their own story, creating richer and shareable content. Video is also evolving. Netflix decided that its original programming no longer needs the serial format and implemented a nonlinear layout for the latest season of Arrested Development. And sites like YouTube and Vimeo are making it easy for anyone to upload their video narratives. Then, there's literature. Recently, Amazon announced it will be launching Kindle Worlds, a service that allows writers to publish fiction inspired by well-known books and television series, like Gossip Girl. If storytelling is your strong suit, now is the time to act on it.

4. Niche social media
Facebook remains the powerhouse, but more-niche social media sites like Pack and Foodie are quickly catching up. With people sharing everything online, connecting to others across social networks and privacy becoming a gray area, data is becoming a huge market -- especially for advertisers. Innovation needs to tap into social media's multi-billion-dollar empire and provide tools to connect user information with marketers.

5. Wearable technology
Google Glass and smartwatches are at the forefront of wearable technology, spurring products to supplement these devices and next-generation innovation. Products like Basis track your heart rate and provide real-time suggestions for making your lifestyle just a little healthier. Human Media Lab recently developed a prototype for a shape-changing smartphone by using a thin, flexible material that houses the phone's guts.

What do you think the biggest game-changing gadget will be in the next 10 years? Let us know with a comment.
Most people know when to be afraid and when it's OK to calm down. But new research on autism shows that children with the diagnosis struggle to let go of old, outdated fears. Even more significantly, the Brigham Young University study found that this rigid fearfulness is linked to the severity of classic symptoms of autism, such as repeated movements and resistance to change. For parents and others who work with children diagnosed with autism, the new research highlights the need to help children make emotional transitions – particularly when dealing with their fears. "People with autism likely don't experience or understand their world in the same way we do," said Mikle South, a psychology professor at BYU and lead author of the study. "Since they can't change the rules in their brain, and often don't know what to expect from their environment, we need to help them plan ahead for what to expect." In their study, South and two of his undergraduate neuroscience students – Tiffani Newton and Paul Chamberlain – recruited 30 children diagnosed with autism and 29 without to participate in an experiment. After seeing a visual cue like a yellow card, the participants would feel a harmless but surprising puff of air under their chins. Part-way through the experiment, the conditions changed so that a different color preceded the puff of air. The researchers measured participants' skin response to see if their nervous system noticed the switch and knew what was coming. "Typical kids learn quickly to anticipate based on the new color instead of the old one," South said. "It takes a lot longer for children with autism to learn to make the change." The amount of time it took to extinguish the original fear correlated with the severity of hallmark symptoms of autism. "We see a strong connection between anxiety and the repetitive behaviors," South said. "We're linking symptoms used to diagnose autism with emotion difficulties not usually considered a classic symptom of autism." The persistence of needless fears is detrimental to physical health: the elevated hormone levels that aid us in an actual fight-or-flight scenario will damage the brain and the body if sustained over time. And the families who participate in social skills groups organized by South and his students can relate to the new findings. "In talking to parents, we hear that living with classic symptoms of autism is one thing, but dealing with their children's worries all the time is the greater challenge," South said. "It may not be an entirely separate direction to study their anxiety because it now appears to be related." Brigham Young University: http://www.byu.edu
Surf the web and you will see that the subject of writing is well-charted territory. No matter what your goal, a how-to manual is there to support it. Need to write grant proposals, company newsletters, technical manuals, instructional design or academic materials? Industry experts abound to provide a sea of knowledge about any aspect of writing imaginable. For advice on how to create fiction, it seems logical to consult some of the successful authors and writing giants among us. As I began researching books on writing by authors, Stephen King's On Writing: A Memoir of the Craft kept appearing on the horizon. I extrapolated all that I could from that book and have started recommending it to other writers. Some of his tips include writing the first draft of a manuscript with the door closed, consulting an 'ideal reader' who represents the audience, writing consistently each day (1,000 words or more), and writing about what the writer really knows, because that is what makes a writer unique. I've been applying King's techniques to my writing regimen whenever possible. With over fifty worldwide bestsellers in his wake, clearly he knows what he's doing. Another writing giant willing to share his techniques is Ray Bradbury, who still cuts quite a swath. Fahrenheit 451, Dandelion Wine and his other stories will forever swim in the waters of literature. Bradbury's book for aspiring writers, Zen in the Art of Writing, is full of sage advice. He suggests that people write about what they love or what they hate, because that conviction and passion is crucial to the story. He advises authors to run after life with fervent gusto, to pursue their interests, and to write about the things that make them happy. Starting out, even surfing small literary waves can feel like riding giants. I'm getting more comfortable with what lies beneath (although it's harder than it looks). King and Bradbury cared enough to show the rest of us that it's possible to conquer the sea, and when you do, an ocean of opportunity awaits. Besides, what one person can do, another can do. Are you ready to paddle out?
GODDESSES 'R Us - OLD PRODUCT; NEW PACKAGE, PART 1 of 4
By Debra Rae
June 10, 2007

Goddess Worship: Its Historical Threads ("Living Goddess" Cults)

Every culture throughout the course of recorded history has glommed on to some sort of goddess figure—Venus and Isis (fertility goddesses) and Morrigan (goddess of war), to name but three. The most prolific goddess worshippers are spawned out of Hinduism and the beliefs of indigenous peoples, but even early Christian sects purportedly venerated the Virgin Mary as a goddess; moreover, contemporary mystics are petitioning the Pope to include Mary in the godhead. Whereas some worship "God the Mother" as supreme and sole Deity, followers of Hinduism honor a plethora of goddesses. Practitioners can be conservative (in support of male dominance, state control, and colonialism) or radical, as acted out by bra-burning, perpetually offended militant feminists best characterized by their mantra, "I am woman; hear me roar!" Throughout the centuries, "living" goddess cults have venerated their fellows as deities. In ancient Egypt, for example, stateswomen such as Hatshepsut and Cleopatra VII wielded total power as living goddesses. The same concept has been promulgated by the imperial families of China, Rome and Japan. In Nepal even today, young girls are selected as living icons. Since the mid-19th century, goddess worship in Western society has developed into a distinct culture. Rather than speak of worshipping some distant deity, devotees often prefer terms such as "spirituality" or "veneration" over goddess "worship." That being the case, "living goddess" cult followings have not escaped the West. Indeed, England's monarchs (Elizabeth I, for one) drew on the iconic powers of a living goddess; and, on the other side of the pond, a devoted, sometimes ecstatic fan base of the Oprah Show gives shape to America's "living" goddess cult.

Goddess Worship: Its Bogus Wisdom (A-Traditional Primordial Wisdom)

For its successes in gaining needed social, political and economic equality for women, today's Women's Movement has been broadly acclaimed since World War II—in many respects, for good reason. Trouble is, extremists rev up their message a notch by advancing the nefarious notion of female superiority—sometimes to the point of deification. Take, for example, Caroline Myss, Ph.D. So compelling is her message in the field of energy medicine and human consciousness and potential that, for one entire year (2003), Oprah Winfrey gave Caroline her own television program with the Oxygen network, targeted to women. A former consultant to our Defense Department and 1984 Democratic vice presidential candidate, Barbara Marx Hubbard applauds such women of vision who, in turn, honor a-traditional "primordial wisdom" as a resource of the spirit in their ascension process—whatever that means. Arcane? You bet. Their feminist philosophy is decidedly esoteric, the Greek root for which means "private" or "confidential." You see, goddess wisdom (or spirituality, as the case may be) is exclusive truth reserved for an enlightened "inner circle" of initiates. Their claim to wisdom mirrors the mystery religions of ancient Egypt, Babylon, Assyria, Phoenicia, Greece and Rome. Both Caroline and Barbara join the faculty of the Wisdom University as purportedly enlightened educators for "cosmic order" and spiritual transformation. A favorite product available through the university's bookstore is Hallie Iglehart Austen's paperback, The Heart of the Goddess: Art, Myth and Meditations of the World's Sacred Feminine.
That women are worthy I'll not debate; moreover, for apparent reason, their intuitive prowess is legendary. Nevertheless, earth's first lady, Eve, learned the hard way that dabbling with God's exclusive knowledge of good and evil rendered no service to her relationship with Him, her family and humanity at large. While a good woman is godly, even godlike by design, she is not a goddess, nor will she ever be one. Still, godly women are forces to be reckoned with—even worthy of praise (Proverbs 31:30)—but never to the point of usurping God's glory.

Goddess Worship: Its New Age Expression (Earned Egoic Advancement)

Coveting divinity was and is the Achilles heel of Lucifer, chief of fallen angels. For envying the exalted status of humans, all the while craving for himself God's exclusive right to omnipotence, Lucifer was cast down from Heaven. In search of mystical union with a personal deity, Lucifer's 21st-century protégées follow suit. To discover the goddess within, a woman first must achieve elevated "cosmic consciousness"; and yoga is presumed to accomplish that purpose dandily. Its promise of yoking with the divine spirit of the universe has become all the rage—so much so that tens of thousands of copies of a video tutorial created by Marsha Wenig of Michigan City, Indiana, have circulated. Techniques within her Yoga Kids video and adult certification program (to teach yoga to children) have captivated young moms everywhere. Many rush to their local bookstores to snatch up Yoga Baby and I Can't Believe It's Yoga for Kids, two among many trendy publications of this ilk. Mother-daughter yoga may well ensure bonding—but not of the filial kind. The goal of yoga is samadhi, or occult enlightenment, giving way to one's true divine nature. This is accomplished by controlling vital energy (prana) in the act of breathing. Some may be surprised to learn that virtually all standard yoga texts link psychic powers and other occult abilities with yoga practice. All too often, gullible women in search of "egoic advancement" gobble up the self-help literature that a fallen world has to offer, but manipulating cosmic energies simply doesn't cut it! Better to take to heart the sobering upshot of Lucifer's folly than to pursue an elusive dream of so-called earned egoic advancement, otherwise known as achieving Christhood. For good reason the Bible warns us to "let God be true, and [let] every man [or woman who makes claims to the contrary] [be exposed as] a liar" (Ro. 3:4).

Goddess Worship: Its Sexual Expression (Tantra and the Great Rite)

The term "sexual revolution" is not new; it was coined by anarchist Freudian scholar Wilhelm Reich. In the 1920s and 1930s, Otto Gross and he developed a "sociology of sex" further expounded upon by the renowned but controversial anthropologist Margaret Mead, author of Coming of Age in Samoa (1928). By the way, this is the same Margaret Mead who was keynote speaker at a UN Spiritual Summit Conference at which the UN's resident guru led a diverse group in Eastern meditation. Historian David Allyn characterized it as a time of "coming-out" when, in the 1960s, Eastern mysticism linked with America's sexual revolution. Indeed, sexual behavior and religious affiliation changed radically for the vast majority of "enlightened," thoroughly modern Millies who readily "made love, not war." Once freed from Sunday school morality, women were eager to explore "free love" inclusive of premarital sex, masturbation, erotic fantasies, pornography and lesbianism.
Add to this list "tantric sex," a concept featured not long ago on an Oprah show I happened onto. Simply put, tantric sex is meditative lovemaking through which partners learn to channel potent orgasmic energies. The idea is to raise one's level of consciousness from the plane of doing to the place of being. Tantra teaches a woman to transform the act of sex into a sacrament, merging the dual nature of sexuality into ecstatic union. Once having harmonized internal masculine and feminine polarities, one allegedly realizes the blissful nature of "the Self" (capital "S" intended). Oprah enthusiasts would do well to consider the dark side of this so-called sacrament of love. In "Christian" America alone, Wiccans number an astonishing quarter-of-a-million; and a necessary part of their Third-Degree elevation ritual, the Great Rite, celebrates "sacred marriage" through sex (not necessarily with one's "significant other"). Wiccan sex partners invoke specific gods and goddesses into one another's bodies—the dynamic polarity of which is reminiscent of Tantra. True, we've come a long way, baby, since 1962, when Helen Gurley Brown published Sex and the Single Girl and then went on to transform Cosmopolitan magazine into a life manual for young career women. But, in many ways, women are none the better for it. Realizing one's "blissful Self" is sullied by an ever-increasing smorgasbord of STDs, half of which are incurable. Add to that spiritual darkness, and we have a formula for disaster—physically, culturally, and spiritually.

Goddess Worship: Its Kinseyan Fraud and Freudian Foibles

Sigmund Freud (1856-1939) was an Austrian physician who pioneered study of the subconscious and unconscious mind. He developed psychoanalysis and formulated the concepts of the pleasure-seeking id, the "conscious self" ego, and the conscience, or superego. A confessed atheist at war with religious mores, Freud nonetheless worshipped the god/goddess of sexuality. Furthermore, he used cocaine and championed hypnotism—both consistent with the "altered consciousness" heralded by New Age feminists. While much of Freud's research is widely discredited, no one can dispute its cultural (even spiritual) impact; indeed, his work laid the foundation for a groundbreaking study called Human Sexual Response (Masters and Johnson, 1966), which unveiled the nature and scope of sex practices engaged in by young Americans at the time. Earlier on, in the late 1940s and early 1950s, zoologist Alfred C. Kinsey published two similarly scandalous surveys of modern sexual behavior. In Kinsey, Sex and Fraud, co-author Judith Reisman exposes Kinsey's illegal sexual experimentation on virtually hundreds of babies and children (for example, Table 34 tallies infant orgasms). Even so, Kinseyan sexology remains the learning base for sex education in America's public school system. In fact, the propagandist arm of the Kinsey Institute (Indiana University), the Sexuality Information and Education Council of the United States (SIECUS), fundamentally shapes that curriculum; furthermore, SIECUS receives funding from (gulp!) the Playboy Foundation, no doubt influenced by Freud, Kinsey, Masters and Johnson, and modern goddess veneration, if not worship. The latter is epitomized in "playmates" of Hugh Hefner's making. For decades, generously endowed models have posed nude in Playboy magazine centerfolds only to be ogled by male worshippers in awe of their meticulously airbrushed curves.
Similarly, the famed Sports Illustrated swimsuit issue, inaugurated in 1964, left little to the male imagination, but it has nonetheless served to launch the modeling and acting careers of the world's most beautiful icons of goddess-like sexuality. Not exactly what bra-burning militant feminists of the 1960s had in mind, but then our century's leading sexperts might have taken a bow were they alive today. For part 2 click below.

© 2007 Debra Rae - All Rights Reserved

Daughter of an Army Colonel, Debra graduated with distinction from the University of Iowa. She then completed a Master of Education degree from the University of Washington. These were followed by Bachelor of Theology and Master of Ministries degrees, both from Pacific School of Theology. While a teacher in Kuwait, Debra undertook a three-month journey from the Persian Gulf to London by means of a VW "bug"! One summer, she tutored the daughter of Kuwait's Head of Parliament while serving as superintendent of Kuwait's first Vacation Bible School. Having authored the ABCs of Globalism and ABCs of Cultural -Isms, Debra speaks to Christian and secular groups alike. Her radio spots air globally. Presently, Debra co-hosts WOMANTalk radio with Sharon Hughes and Friends, and she contributes monthly commentaries to Changing Worldviews and NewsWithViews.com. Debra calls the Pacific Northwest home. Web Site: www.debraraebooks.com
Mathematical Problems in Engineering
Volume 2012 (2012), Article ID 582126, 11 pages

The Johnson Noise in Biological Matter

1Department of Mathematics, Istituto "G. Castelnuovo", University of Rome "La Sapienza", Piazzale Aldo Moro 5, 00185 Rome, Italy
2Technical Institute "R. Rossellini", Via della Vasca Navale 58, 00146 Rome, Italy
3Naval Technical Institute "M. Colonna", Via Salvatore Pincherle 201, 00146 Rome, Italy

Received 28 September 2012; Accepted 6 October 2012

Academic Editor: Carlo Cattani

Copyright © 2012 Massimo Scalia et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Can a very low intensity signal overcome a disturbance whose power density is much higher than that of the signal, and yield some observable effects? The Johnson noise seems to be a disturbance so high as to force a negative answer to that question when one studies effects at the cell level due to the external ELF fields generated by electric power lines (Adair, 1990, 1991). On this subject, we show that the masking effect due to the Johnson noise, known as "Adair's constraint" and still present in the scientific debate, can be significantly weakened. The values provided by the Johnson noise formula, which is an approximate expression, can be affected by a significant deviation with respect to the correct ones, depending on the frequency and on the kind of cells, human or not, that one is dealing with. We will give some examples. Eventually, we remark that the so-called Zhadin effect, although born and studied in a different context, can be viewed as an experimental test that gives an affirmative answer to the initial question, when the signal is an extremely weak electromagnetic field and the disturbance is a Johnson noise.

1. Introduction

Much attention has been devoted over many decades to a problem that arose in the electrocommunications field [1–3], but whose consequences extend beyond telecommunications, electroengineering, or other specific problems to a general question, which we summarize in this way: can a very low intensity signal overcome a disturbance whose power density is much higher than that of the signal, and yield some observable effects? A problem of this kind has become a matter of lively discussion inside the scientific community in the case of the biological or health effects due to electromagnetic fields of low intensity in the whole region of nonionizing radiations (NIR). In this context a relevant influence has been exerted over the years, in the scientific debate, by the statement of the Council of the APS (American Physical Society), which excluded the promotion of cancers by power line fields. Actually, the APS statement set aside not only cancer risks but almost all risks associated with exposure to the fields generated by power lines.
That statement has been reaffirmed by the Council of the APS in a more recent brief note, with a further consideration: "… In addition no biophysical mechanism for the initiation or promotion of cancer by electric or magnetic fields from power lines have been identified". The aim of the present paper is not to enter this debate. But, among the scientific bases of the APS position paper, we look at the one that has played such a relevant role up to today in the debate among physicists and all researchers in bioelectromagnetism as to merit the name of "Adair's constraint". Really, Adair proposed the concept of "thermal noise electric field" in his papers [7, 8], and his "constraint" extends far beyond the possible cancer effects: "… any effects on the cell level of fields in the body generated by weak external ELF fields will be masked by thermal noise effects and, hence, such fields cannot be expected to have any significant effect on the biological activities of the cells". This statement was very strong, despite important contemporary review papers about the interaction of ELF fields with humans, in which hundreds of references had described laboratory and clinical studies of effects at the cellular level of an exposure to 50–60 Hz electric and magnetic fields: "a substantial amount of experimental evidence obtained with in vitro cell and organ cultures indicates that pericellular currents produced by ELF (extremely low frequency) fields lead to structural and functional alterations in components of the cell membrane". Since the "Adair's constraint" is still at work in the scientific discussion, we mean to show that it can be significantly weakened and, at the same time, to give a contribution toward averting a possible generalization of the masking effect to the whole NIR spectrum. In the current literature many articles about thermal noise in biological cells resort to the approximate expression of the Johnson noise, as Adair's papers do. We note that a bit of caution is necessary when dealing with that formula, in order to avoid that the mean square values of the noise tension differ significantly from the correct ones. This is the case for the estimates of exposure of a human cell membrane to a VLF antenna that radiates in the range 14–30 kHz. On the contrary, Johnson's formula works well when applied to the irradiation of eggs of Salmo lacustris by an oscillating magnetic field in the range 450–1350 kHz, a historical experiment performed by Italian researchers in the Twenties. Eventually, we remark that the so-called Zhadin effect, although born and studied in a different context, can be viewed as an experimental test proving that an affirmative answer to the initial question can be given, in the particular case when the very low signal is an extremely weak electromagnetic field and the disturbance has just the character of a Johnson noise.

2. The Johnson Noise

Much of the reasoning and many of the models applied to describe interactions between electromagnetic fields and biological matter rely on the phenomenon of Johnson noise, when thermal noise has to be taken into account. On this case we will focus, even though there are other kinds of noise, in order to compare the results of this paper with those presented in [7, 8]. Thus, it is necessary to recall, shortly, the experiment performed in electronics by Johnson and the corresponding formula.
Let us consider a conductor, in the interior of which there is a very large number of electrons free to move. At a fixed temperature $T$ (°K), the thermal agitation of the electrons implies that the stochastic motion of the charged particles and their collisions produce at some time an accumulation of charge at one end of the conductor, while at a successive instant there will be an excess of electrons at the other end. Therefore, we can find a tension between the two terminals that is a stochastic variable of time with a well-defined mean square value: the thermal noise tension. This tension does not depend on a possible continuous electric current flowing in the conductor, since the thermal velocity of electrons is much higher (~10³ times) than the drift velocity. Johnson was the first to observe the fluctuations of tension at the ends of a conductor with a resistance $R$, realizing that the mean square value of the instantaneous noise tension was proportional to $R$ and to the absolute temperature $T$. Besides, he found that the ratio $\overline{V^2}/(RT)$ does not depend on the nature or the shape of conductors, and he assumed that this tension was due to the thermal agitation of the electrons inside the material: the "Johnson noise" tension. Testing different kinds of conductors, he also obtained an estimate of the Boltzmann constant $k$ (in J/°K), in good agreement—within 8%—with the experimental data. Harry Nyquist was immediately asked by Johnson for an explanation of the results of his experiment—they were at that time colleagues at the Bell Telephone Laboratories—and answered with an easy and fine conceptual experiment, a fundamental element of which is a "bipole". A bipole is any electric component with two terminals, characterized by a complex impedance
$$Z(f) = R(f) + i\,X(f),$$
where $f$ is the frequency (of the signal), $i$ is the imaginary unit, and the functions of frequency $R(f)$ and $X(f)$ depend on the capacity and the inductance of the bipole. Actually, since a pure resistor does not exist as a physical object, we are obliged to schematize a conductor as a bipole, also in a conceptual experiment. For $T$ different from zero, the instantaneous noise tension fluctuates, also in the absence of an external electric field; but, for a pure resistor, its mean value cannot be other than 0:
$$\overline{V_N} = 0,$$
where $N$ stands for noise. Thus, over an interval of time we cannot have any electric field nor any electric current inside a pure resistor; only the instantaneous ones, led by the fluctuating tension, are permitted, otherwise we should have created a perpetual motion. And it is not by chance that in his experiment Johnson resorted, substantially, to the measure of the effective value of the noise tension as the available observable, that is, the square root of the mean square value $\overline{V_N^2}$. The mean square value $\overline{V_N^2}$ at the ends of a bipole is given by (see, e.g., [2, 3])
$$\overline{V_N^2} = 4kT \int_0^{\infty} \mathrm{Re}[Z(f)]\, df, \qquad (2.3)$$
where $f$ is the frequency of the noise spectrum. The measuring devices do not allow one to perform measures over the range of all frequencies as requested in (2.3); thus, more frequently used is the expression of the noise in a definite frequency band $\Delta f = f_2 - f_1$:
$$\overline{V_N^2} = 4kT \int_{f_1}^{f_2} \mathrm{Re}[Z(f)]\, df = 4kT \int_{f_1}^{f_2} \frac{R}{1 + (2\pi f R C)^2}\, df, \qquad (2.4)$$
where the last equality holds only if the impedance reduces to that of an $RC$ conductor. When $2\pi f R C \ll 1$, one has
$$\overline{V_N^2} = 4kTR\,\Delta f, \qquad (2.5)$$
with $\Delta f = f_2 - f_1$. Expression (2.5) is just the Johnson noise formula: it holds when in the last term of (2.4) the condition $2\pi f R C \ll 1$ is satisfied (and the integration domain is limited).
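As a numerical illustration of the validity condition just stated, the following minimal Python sketch (not from the paper; all parameter values are assumptions chosen only for illustration) compares the band-limited integral (2.4), which has a closed form for a parallel RC bipole, with the Johnson approximation (2.5):

```python
import math

# Minimal sketch (illustrative values, not the paper's): compare the exact
# band-limited RC noise integral (2.4) with the Johnson approximation (2.5).
k = 1.380649e-23       # Boltzmann constant, J/K
T = 300.0              # absolute temperature, K
R = 1e8                # membrane-like resistance, ohm (assumed)
C = 1e-12              # membrane-like capacitance, F (assumed)
f1, f2 = 14e3, 30e3    # the VLF band discussed later in the text, Hz

def vn2_exact(f1, f2):
    # 4kT * Integral_{f1}^{f2} R / (1 + (2 pi f R C)^2) df
    #   = (4kT / (2 pi C)) * [arctan(2 pi f R C)]_{f1}^{f2}
    return (4 * k * T / (2 * math.pi * C)) * (
        math.atan(2 * math.pi * f2 * R * C) - math.atan(2 * math.pi * f1 * R * C))

def vn2_johnson(f1, f2):
    # Johnson formula (2.5); valid only when 2 pi f R C << 1
    return 4 * k * T * R * (f2 - f1)

print(f"2*pi*f2*R*C = {2 * math.pi * f2 * R * C:.3g}")   # ~19 here: condition violated
print(f"exact   <V_N^2> = {vn2_exact(f1, f2):.3e} V^2")
print(f"Johnson <V_N^2> = {vn2_johnson(f1, f2):.3e} V^2")
```

With these assumed values the product $2\pi f R C$ is well above unity at the band edges, so (2.5) overestimates the mean square noise tension by a large factor: the kind of deviation the paper discusses for human cell membranes.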
Thus, the limits of applicability of (2.5) are clear, and it is useful to emphasize that the thermal noise generated in a conductor in a given frequency band can be analyzed in terms of a pure resistor only if that condition is fulfilled.

3. The "Thermal Noise Electric Field" and the "Adair's Constraint"

Then, let us consider an $RLC$ parallel circuit, which is often used to model biological tissues at thermal equilibrium. Really, in most of the models the impedance circuit reduces to an $RC$ one, because the contribution of the inductance is, in general, experimentally negligible. In this case $C$ is the capacity of the model circuit, and the fluctuation of the tension at the ends of the bipole is given by (2.4). Adair introduced the concept of "electric noise field" in a paper [7], and confirmed it in a subsequent work [8], in order to describe and quantify the observed phenomena. The aim was to take into account that "in any material the charge density fluctuates thermally according to thermodynamics imperatives generating fluctuating electric fields". Having denoted by $\overline{V_N^2}$ the mean square value of the noise tension, the main assumption which he applies to the model is that this mean square value is given, also for biological tissues or the cellular membrane, by expression (2.5). Therefore, in the case "of a hypothetical measurement of the voltage across the plates of a parallel plate capacitor where a cube of tissue of length d on a side is held between the plates", the time-average noise voltage is expressed through (2.5): with the resistance of the cube $R = \rho/d$ ($\rho$ the resistivity), one has $V_n = (4kT\rho\,\Delta f/d)^{1/2}$, and correspondingly a noise field $E_n = V_n/d$. Assuming the values adopted by Adair for the cell size (μm), the temperature (°K), the resistivity, and the frequency span (Hz), one gets a thermal noise field, call it (3.1), "which is about 3,000 times larger than the field induced by a 300 V/m external field". For a cubic section (this choice has drawn criticism) of a cellular membrane at the same temperature and with the corresponding values of the parameters, "… the thermal noise electric field is then […], which is about […] times that from a 300 V/m external field". For the sake of convenience we refer to the latter figure as (3.2), and we observe that, following these theoretical estimates, mainly (3.2), the only conclusion to be drawn is the one we have already quoted in our introduction about the masking of all biological effects on the cell level by thermal noise effects (consequently, the weak ELF fields will not have observable effects on the cells). Even if one could agree with a model that represents a biological object as a bipole of electronics, one can obtain (3.1) and (3.2) only if $2\pi f R C \ll 1$, that is, when a tissue of the human body or the cell membrane can be considered as a pure resistor. But there is no reason why this circumstance, denied to the materials of electronics to the point that Johnson and Nyquist were obliged to make use of a bipole (i.e., an impedance) for their experiments, even the conceptual ones, should take place in biology. Thus, since the literature does not provide any reason why a simple electronics model should behave differently when it deals with biology, it remains true that "the time-average noise voltage" is 0 for a pure resistor (as we noticed at the beginning of this paper). Consequently, the electric field, that is, the mean tension divided by the length of the conductor, does the same thing.
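To make the cube estimate concrete, here is a small sketch of the formula $E_n = (4kT\rho\,\Delta f/d)^{1/2}/d$ quoted above. The numerical inputs are illustrative assumptions, not the (elided) values of the original paper, so the output should not be read as reproducing Adair's figure:

```python
import math

# Sketch of the Adair-style cube estimate E_n = sqrt(4*k*T*rho*df/d) / d.
# All parameter values below are assumptions chosen for illustration.
k = 1.380649e-23   # Boltzmann constant, J/K
T = 310.0          # body temperature, K (assumption)
rho = 2.0          # tissue resistivity, ohm*m (assumption)
d = 20e-6          # side of the tissue cube, m (cell-sized, assumption)
df = 100.0         # frequency span, Hz (assumption)

R = rho / d                            # resistance of a cube: rho * d / d**2
V_n = math.sqrt(4 * k * T * R * df)    # rms ("time-average") noise voltage
E_n = V_n / d                          # corresponding thermal noise field
print(f"R = {R:.3g} ohm, V_n = {V_n:.3g} V, E_n = {E_n:.3g} V/m")
```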
The Adair's theoretical estimates were incorporated in the background paper; further, Adair himself has repeated his arguments and kept his "constraint" in more recent papers [13–15]; and those estimates have been taken seriously into account for many years, standing as "Adair's theoretical exposure limits" also in recent papers.

4. A Result and Some Observations

Replying to an opponent of his theses, Adair says: "my discussion was not original but taken largely from the paper of Weaver and Astumian", referring to their 1990 paper in Science. In that work, Weaver and Astumian deal, among other things, with the problem of calculating the mean square of the noise tension of a cell membrane and the corresponding electric field, using the same data underlying the expression (3.2), but obtaining different figures. Let us try to give an answer, accepting to represent a cell membrane as an $RC$ circuit. The relationship between the tension $V$ at the terminals of the capacitor and its stored energy $E$ is given by
$$E = \tfrac{1}{2}\, C V^2. \qquad (4.1)$$
By the equipartition theorem, the probability of finding the system in the voltage interval $(V, V + dV)$ is proportional to $e^{-E/kT}$:
$$p(V)\,dV = A\, e^{-CV^2/2kT}\, dV. \qquad (4.2)$$
After normalizing to fix the value of the constant $A$, one gets
$$p(V) = \sqrt{\frac{C}{2\pi kT}}\; e^{-CV^2/2kT}, \qquad (4.3)$$
and performing the integration on $V$ one obtains
$$\overline{V^2} = \int_{-\infty}^{+\infty} V^2\, p(V)\, dV = \frac{kT}{C}. \qquad (4.4)$$
At the balance, the expression (4.4) leads to a value equal to that given by (2.3). In order to obtain the value of $C$ for a cellular membrane of thickness $\delta$, it is almost natural to think of a capacitor made of two concentric spheres with radii $r$ and $r + \delta$, respectively, instead of referring to the situation of two parallel planar plates (in this case, in fact, the capacity could depend on the number of squares into which the plate can be divided to perform the calculation, and the corresponding expression of $\overline{V^2}$ would depend on that number). Then, the value of $C$ is
$$C = 4\pi \varepsilon_0 \varepsilon_r\, \frac{r\,(r + \delta)}{\delta}, \qquad (4.5)$$
where $\varepsilon_0$ and $\varepsilon_r$ are, respectively, the values of the dielectric constant in vacuum and in the specified matter. From (4.4) it follows for the electric field
$$E_N = \frac{1}{\delta}\sqrt{\frac{kT}{C}}. \qquad (4.6)$$
If the same values as in the expression (3.2) are assumed for the parameters (the relative dielectric constant drawn from the literature, and the values of $r$ in m, $\delta$ in m and $T$ in °K used there), one obtains the estimate (4.7). The value given in (4.7) implicitly refers to the whole spectrum of frequencies; if only ELF effects are investigated, then one has to calculate the finite integral of the expression (2.4) on the ELF interval. Performing the integration on that interval one obtains the band-limited expression (4.8). If we take for the resistivity the same value (in Ω·m) as in [8, 16], in the assumed frequency band the expression (4.8) gives a value (4.9) for the electric noise field many times smaller than (3.2), and smaller than the cell membrane electric field itself, as it is reasonable to expect. In conclusion, it is not arbitrary to argue that if a cell membrane is represented not as a pure resistor but as a conductor, then the value of the noise electric field is much lower than that given by (3.2); correspondingly, the "Adair's constraint" is significantly weakened. Furthermore, one could reasonably expect that the mean value of the noise tension at the ends of a "bipole" is zero, just as that at the ends of a resistor. Thus, the figures (4.7) and (4.9) would constitute an estimate of possible peak values rather than theoretical data to compare with the experimental ones.
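The consistency claim above, that "the expression (4.4) leads to a value equal to that given by (2.3)", can be checked directly for the parallel RC form of Re[Z] used in (2.4). The elementary integral below is a worked step added here for completeness, using only quantities already defined:

```latex
\overline{V_N^2}
  = 4kT \int_{0}^{\infty} \frac{R}{1+(2\pi f R C)^{2}}\, df
  = \frac{4kT}{2\pi C}\,\Bigl[\arctan\!\bigl(2\pi f R C\bigr)\Bigr]_{0}^{\infty}
  = \frac{4kT}{2\pi C}\cdot\frac{\pi}{2}
  = \frac{kT}{C}.
```

So the full-spectrum Nyquist integral for the RC bipole and the equipartition result coincide, independently of the value of R.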
5. Some Examples about the Johnson's Formula in Biology

We have just observed in the previous formulae that it is crucial to use a correct value of the mean square noise tension; thus one could think that, if this requirement is satisfied, no relevant gap between the estimates from (2.5) and (4.8) can occur, not only in the ELF region but also in other bands of the spectrum of frequencies. That is, our criticism about the representation of a conductor as a pure resistor could be right in principle but poor of effects on the results of experiments. On the contrary, this is not true when one deals with human cell membranes, the parameters of which have values around those used by Adair and Weaver, as it is immediate to check. In fact, the recalled condition $2\pi f R C \ll 1$ is not well fulfilled already in the region of tens of kHz, where Johnson's formula (2.5) gives a significant deviation from the correct value of the mean square tension as given by (4.8), as we will see shortly; and for larger frequencies, obviously, only the formula (4.8) can be applied. Let us consider a high power transmitter operating in the band 14–30 kHz, a VLF antenna such that it "…induces currents and fields in people living in the urban area within 2 km of the antenna that are greater than those in people living very close to high voltage power lines…". It follows that an intensity of electromagnetic field equal to that of power lines is reached at a much shorter distance, but the effects will be the same in correspondence to the same value of the irradiated field. In any case, these different behaviours, significant for the health impact, do not affect the calculus of the noise tension: it depends only on the electrical properties of the cellular membrane (we keep the values of the parameters of the previous example) and on the frequency of the emitted signal. We take $\Delta f$ equal, as usual, to one tenth of the operating band. Thus, just because the condition is not well satisfied (in fact the product $2\pi f R C$ is no longer small), from (2.5) one can obtain only a hypothetical estimate (5.1), while the calculation performed by (4.8) gives (5.2). Consequently, the corresponding values of the electric noise field differ by about 130%, obviously only the smaller one being correct. It is easy to verify that for such a cell membrane the electric noise field decreases, as one could expect, when frequency grows: thus it takes progressively smaller values in the MHz region frequently used in laboratory experiments, in the region of hundreds of MHz where a lot of broadcasting and telecasting devices operate, and at the GHz frequencies of mobile phones or radar bridges. For nonhuman cell membranes it can happen that, also at high frequencies, the parameters satisfy $2\pi f R C \ll 1$, so that the deviation is negligible and it is easier to compute the mean square tension by Johnson's formula. In an experiment realized many years ago to test the effects of electromagnetic fields on the embryonic development of eggs of Salmo lacustris, the authors formulated the hypothesis that the deformations revealed in the irradiated eggs and in the embryonic development of Salmo lacustris, with respect to those not exposed, should be caused by the magnetic field. In fact, they took care to measure a constant temperature during the time of exposition of the eggs, so that an alteration of the embryonic development due to the heat released by the irradiating field could be excluded. Let us look at the figures that one could have obtained for the thermal noise tension in that experiment.
The eggs of Salmo lacustris were exposed to an oscillating magnetic field in the range 450–1350 kHz, and the values reported in the literature for those eggs give the radius (m) [20, 21], the membrane thickness (m) [22, 23] and the resistivity. Now, the condition $2\pi f R C \ll 1$ is fulfilled over the whole range of exposure frequencies (kHz); thus the approximated formula (2.5) can be applied, obtaining the values (5.3); out of curiosity, the values given by the formula (4.8) are, respectively, those reported in (5.4). This occurrence is not surprising, since the variations of resistance and capacity of cell membranes as functions of frequency have been investigated for a long time, and it is well known that for the cell membrane of an egg of Salmo lacustris the resistance is much lower than that of a spherical human cell, and its conductance increases rapidly at high frequencies. Therefore, for a nonhuman cell this example shows that, when it is exposed to high frequencies, the electric noise field, if it does exist, is not a barrier able to mask the effects of an external field at the cell level; and, as a matter of fact, the alterations of the eggs and of their embryonic development were experimentally revealed and were not caused by the release of thermal energy.

6. Concluding Remarks

One of the key problems in bioelectromagnetism is to explain the mechanism of the influence of weak electromagnetic fields on biological objects; it remains unclear, in spite of numerous experimental data. In particular, it is not clear how low frequency or static fields, magnetic or electric, can lead to the "resonance" of biochemical reactions, even when the energy of such fields is very small in comparison with the energy of the process. The lack of a theoretical explanation that is satisfying or shared among researchers is now called the "kT problem" or "kT paradox". On this subject, after a very long refereeing process, Michail Zhadin reported the alteration of the electric properties of a nonbiological system, made up of an aqueous diluted solution of amino acids (glutamic acid), in correspondence with the frequencies of an ion cyclotron resonance. A direct-current voltage of 80 mV, near the value of the cell membrane potential, was applied to the solution contained in an electrolytic cell. The solution was exposed to the combined action of two parallel magnetic fields, one weak and static, the other extremely weak and alternating, both applied orthogonally to the direction of the electrolytic current. A very narrow intensity peak in the electric current can be measured when the frequency of the alternating magnetic field matches the ion cyclotron resonance of the ionized solution; this frequency is given by the well-known formula
$$f_c = \frac{qB}{2\pi m},$$
where $m$ and $q$ are, respectively, the mass and the charge of the electrolytic ion (a numerical sketch follows below). The frequency windows found by Zhadin were at 4 Hz, for alternating amplitudes of 20 and 30 nT, and in a nearby interval of frequencies, spaced by 0.5 Hz, when the static field took the values 25, 30 and 40 μT with comparable alternating amplitudes. Many authors refer to this result as the Zhadin effect, which has been successfully replicated in Italy [27–29] and in Germany. Several attempts to give a theoretical explanation of the physical mechanism underlying that effect have been made [27, 31] in the framework of the quantum electrodynamics of condensed matter proposed by Preparata, also by Zhadin himself, after a previous analysis performed in terms of the semiclassical resonance theory.
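As a quick numerical check of the cyclotron formula above (a minimal sketch; the ion mass and the static field value are assumptions chosen for illustration, not data taken from the paper):

```python
import math

# Ion cyclotron resonance frequency f_c = q*B / (2*pi*m).
q = 1.602176634e-19        # elementary charge, C
m = 147 * 1.66053907e-27   # ~147 Da, roughly a glutamic-acid ion (assumption), kg
B = 40e-6                  # static field of a few tens of microtesla (assumption), T

f_c = q * B / (2 * math.pi * m)
print(f"f_c = {f_c:.2f} Hz")   # a few Hz, the window region discussed above
```

With these assumed values the resonance indeed falls at a few Hz, the order of magnitude of the frequency windows reported for the Zhadin effect.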
The Zhadin effect, born and actually studied in another context, is, within the limits of that experiment, a positive answer to our opening question: an extremely weak magnetic field—the very low intensity signal—can overcome, in correspondence with a frequency window, the noise tension—the disturbance—whose energy density is very much larger than that of the field. This eventuality suggests that a similar effect could take place also when biological cells are irradiated by very weak electromagnetic fields; that is, the masking effect at the cell level by thermal noise could break down. This suggestion could be the reason why Adair has criticized the previously quoted paper, going on with his argument that the equivalent electric field acting on the ion thermal motion is many times smaller than the electric noise field. But in his indications the estimate (3.2) does not appear. An analogous reasoning has also been developed elsewhere, with a criticism of the theoretical analysis that is focused on the energy of a particle moving at the cyclotron resonance frequency, directly compared with the thermal agitation energy kT. Thus, the problem leads back to the appropriateness of the models that try to explain the Zhadin effect. In fact, Zhadin asserts, about the attempts to interpret his experiment in terms of resonance: "… Unfortunately, for free ions such sort of effects are absolutely impossible because dimensions of an ion rotation radius should be measured by meters at room temperature and at very low static magnetic fields used in all the before experiments. Even for bound ions these effects should be absolutely impossible for the positions of classic physics because of rather high viscosity of biological liquid media…". But, on another side, the recalled attempts to bring that experiment within the conceptual framework of the theory formulated by Preparata have not yet met general agreement among the insiders. In a few words, a very complex and still open problem.

References

- J. B. Johnson, "Thermal agitation of electricity in conductors," Nature, vol. 119, no. 2984, pp. 50–51, 1927.
- J. B. Johnson, "Thermal agitation of electricity in conductors," Physical Review, vol. 32, no. 1, pp. 97–109, 1928.
- H. Nyquist, "Thermal agitation of electric charge in conductors," Physical Review, vol. 32, no. 1, pp. 110–113, 1928.
- APS, "Power-Line Fields and Public Health. Statement of the Council of the American Physical Society," 1995, http://www.aip.org/fyi/1995/fyi95.069.htm.
- APS, "National Policy. Electric and Magnetic Fields and Public Health," 2005, http://www.aps.org/policy/statements/05_3.cfm.
- D. Hafemeister, "Background Paper on 'Power-Line Fields and Public Health'," 1996, http://www.calpoly.edu/~dhafemei/background2.html.
- R. K. Adair, "Are biological effects of weak ELF fields possible?" in Proceedings of the 12th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, vol. 12, pp. 1559–1561, November 1990.
- R. K. Adair, "Constraints on biological effects of weak extremely-low-frequency electromagnetic fields," Physical Review A, vol. 43, no. 2, pp. 1039–1048, 1991.
- T. S. Tenforde and W. T. Kaune, "Interaction of extremely low frequency electric and magnetic fields with humans," Health Physics, vol. 53, no. 6, pp. 585–606, 1987.
- T. S. Tenforde, "Biological interactions of extremely-low-frequency electric and magnetic fields," Bioelectrochemistry and Bioenergetics, vol. 25, no. 1, pp. 1–17, 1991.
- C. D. Abeyrathne, P. M. Farrell, and M. N. Halgamuge, "Analysis of biological effects and limits of exposure to weak magnetic fields," in Proceedings of the 5th International Conference on Information and Automation for Sustainability (ICIAfS '10), pp. 415–420, Colombo, Sri Lanka, December 2010.
Halgamuge, "Analysis of biological effects and limits of exposure to weak magnetic fields," in Proceedings of the 5th International Conference on Information and Automation for Sustainability (ICIAfS '10), pp. 415–420, Colombo, Sri Lanka, December 2010.
- R. W. P. King, "The interaction of power-line electromagnetic fields with the human body," IEEE Engineering in Medicine and Biology Magazine, vol. 17, no. 6, pp. 67–73, 1998.
- R. K. Adair, "Comments regarding the article of R. W. P. King," IEEE Engineering in Medicine and Biology, vol. 17, pp. 73–75, 1998.
- R. K. Adair, "Static and low-frequency magnetic field effects: health risks and therapies," Reports on Progress in Physics, vol. 63, no. 3, pp. 415–454, 2000.
- R. K. Adair, "Noise and stochastic resonance in voltage-gated ion channels," Proceedings of the National Academy of Sciences of the United States of America, vol. 100, no. 21, pp. 12099–12104, 2003.
- J. C. Weaver and R. D. Astumian, "The response of living cells to very weak electric fields: the thermal noise limit," Science, vol. 247, no. 4941, pp. 459–462, 1990.
- R. W. P. King, "King's rebuttal to Adair's comments," IEEE Engineering in Medicine and Biology, vol. 17, pp. 76–78, 1998.
- R. W. P. King and C. W. Harrison, "Electromagnetic field in human body due to VLF transmitter," in Proceedings of the IEEE 21st Annual Northeast Bioengineering Conference, pp. 121–123, May 1995.
- L. Gianferrari and E. Pugno Vanoni, "Sull'azione di campi elettrici ad alta frequenza sullo sviluppo embrionale: I. Esperienze su Salmo lacustris," Rendiconti R. Accademia Lincei, Classe Scienze Fisiche Matematiche e Naturali, serie VIII (S.5, I Sem.), pp. 576–578, 1923.
- R. Bartel, B. Falowska, K. Bieniarz, and P. Epler, "Dependence of egg diameter on the size and age of cultivated female lake trout," Archives of Polish Fisheries, vol. 13, no. 1, pp. 121–126, 2005.
- L. Rothschild, "The theory of alternating current measurements in biology and its application to the investigation of the biophysical properties of the trout egg," Journal of Experimental Biology, vol. 23, pp. 77–99, 1946.
- A. I. Zotin, "The mechanism of hardening of the salmonid egg membrane after fertilization or spontaneous activation," Journal of Embryology and Experimental Morphology, vol. 6, no. 4, pp. 546–568, 1958.
- C. M. Stehr and J. W. Hawkes, "The comparative ultrastructure of the egg membrane and associated pore structures in the Starry Flounder, Platichthys stellatus (Pallas), and pink salmon, Oncorhynchus gorbuscha (Walbaum)," Cell and Tissue Research, vol. 202, no. 3, pp. 347–356, 1979.
- K. S. Cole, "Electric phase angle of cell membranes," The Journal of General Physiology, vol. 15, no. 6, pp. 641–649, 1932.
- V. N. Binhi and A. B. Rubin, "Magnetobiology: the kT paradox and possible solutions," Electromagnetic Biology and Medicine, vol. 26, no. 1, pp. 45–62, 2007.
- M. N. Zhadin, V. V. Novikov, F. S. Barnes, and N. F. Pergola, "Combined action of static and alternating magnetic fields on ionic current in aqueous glutamic acid solution," Bioelectromagnetics, vol. 19, no. 1, pp. 41–45, 1998.
- E. Del Giudice, M. Fleischmann, G. Preparata, and G. Talpo, "On the 'unreasonable' effects of ELF magnetic fields upon a system of ions," Bioelectromagnetics, vol. 23, no. 7, pp. 522–530, 2002.
- N. Comisso, E. Del Giudice, A. De Ninno et al., "Dynamics of the ion cyclotron resonance effect on amino acids adsorbed at the interfaces," Bioelectromagnetics, vol. 27, no. 1, pp. 16–25, 2006.
- L. Giuliani, S. Grimaldi, A. Lisi, E.
D'Emilia, N. Bobkova, and M. Zhadin, "Action of combined magnetic fields on aqueous solution of glutamic acid: the further development of investigations," BioMagnetic Research and Technology, vol. 6, article 1, 2008.
- A. Pazur, "Characterisation of weak magnetic field effects in an aqueous glutamic acid solution by nonlinear dielectric spectroscopy and voltammetry," BioMagnetic Research and Technology, vol. 2, article 8, 2004.
- E. Del Giudice, G. Preparata, and M. Fleischmann, "QED coherence and electrolyte solutions," Journal of Electroanalytical Chemistry, vol. 482, no. 2, pp. 110–116, 2000.
- G. Preparata, QED Coherence in Matter, World Scientific, 1995.
- M. N. Zhadin, "On mechanism of combined extremely weak magnetic field action on aqueous solution of amino acid," in Non-Thermal Effects and Mechanisms between Electromagnetic Fields and Living Matter, L. Giuliani and M. Soffritti, Eds., vol. 5 of European Journal of Oncology, ICEMS Monograph, 2010.
- M. Zhadin and F. Barnes, "Frequency and amplitude windows in the combined action of DC and low frequency AC magnetic fields on ion thermal motion in a macromolecule: theoretical analysis," Bioelectromagnetics, vol. 26, no. 4, pp. 323–330, 2005.
- R. K. Adair, "Comment: analyses of models of ion actions under the combined action of AC and DC magnetic fields," Bioelectromagnetics, vol. 27, no. 4, pp. 332–334, 2006.
Wojciech Z. Misiolek
Director: Institute for Metal Forming
Phone: (610) 758-4252
Loewy Chair in Materials Forming and Processing
Director, Institute for Metal Forming
Joint academic appointment with Mechanical Engineering and Mechanics
Wojciech Z. Misiolek, M.S. and Sc.D., AGH University of Science and Technology, Krakow, Poland. After completing his doctoral work in metallurgy, Dr. Misiolek became a junior faculty member at his alma mater in 1985. He spent the 1987/1988 academic year as a Kosciuszko Fellow at Lehigh University. For the next nine years, Dr. Misiolek was with the Materials Engineering Department at Rensselaer Polytechnic Institute (RPI) in Troy, New York. Dr. Misiolek held various research and teaching positions there and was ultimately promoted to Associate Professor. From 1992 to 1997 he co-directed the RPI Aluminum Processing Program, an international industrial consortium performing pre-competitive interdisciplinary research, focused mainly on the aluminum extrusion process. Dr. Misiolek conducts interdisciplinary research in materials processing and process engineering. His research and teaching interests have focused on deformation, powder, and machining processes, along with applications for structural and bio-materials. The common theme of these studies is to understand and develop characterization techniques for microstructure evolution in different materials during forming and processing. These scientific challenges are addressed through various physical and numerical modeling procedures in conjunction with state-of-the-art materials characterization techniques. The Institute for Metal Forming collaborates with several research institutions in North and South America, Europe, Asia, and Oceania. Dr. Misiolek has contributed over 160 publications to the research literature, holds a patent, and has been recognized with several awards from technical and academic organizations, including ASM Fellow. Usually you can find him working in Whitaker Laboratory; in his spare time he enjoys photography and outdoor activities such as skiing, biking, tennis, and kayaking.
Portia Lake joined the 13WMAZ family in March 2014. Currently, she anchors the 6 p.m., 10 p.m. and 11 p.m. newscasts with Frank Malloy. Portia previously anchored and reported in Macon for 10 years before returning home to Central Georgia. Prior to arriving in Macon in 2003, she worked at television and radio stations in New Orleans, Memphis and Greenville, Mississippi. Portia is a member of the Atlanta Association of Black Journalists and the National Association of Black Journalists. In her free time, she volunteers with charities and groups devoted to promoting literacy and inspiring teens to succeed. Portia is a native New Orleanian who proudly calls Central Georgia home. She is a graduate of Louisiana State University. Portia lives in Bibb County with her husband and a spoiled Schnauzer named Blakemore. During football season you will find Portia out cheering on the LSU and University of Georgia football teams. In her spare time, she cherishes precious family time, traveling, live music and reading inspirational literature.
1. What kinds of writing assignments can I expect in Biology classes?
Several teachers use short essay questions on their exams. Most out-of-class writing assignments fall into the following categories:
- Researched Essays (about 10 pages): In-depth evaluations of recent information on a selected topic. You will be expected to do a literature search on the topic, read primary research reports and reviews, and present the information in the form of a mini-review.
- Short Essays (2 pages): Short papers that connect a scientific topic discussed in current national media to concepts presented in the course.
- Short Reports (3-5 pages): Brief reviews of literature on a specific topic or reports of computer simulations of selected problems.
- Laboratory reports (variable in length and format): These are required in all laboratory courses. Most teachers expect a conventional report format consisting of these sections:
- Introduction, which states the goal of the experiment and places it in perspective
- Methods, which describes the materials and procedures used in the experiment
- Results, which compiles and presents the data, often in tabular and/or graphical form
- Discussion, which states the conclusions drawn from the data and explains the rationale for these deductions
2. What qualities of writing are especially valued in Biology classes?
In addition to the paper being free of spelling and grammatical errors, the most valuable attribute of a Biology paper is concise and clear presentation of the subject matter. A clear statement of the specific question, a concise survey of the literature, a complete description of the methods of study and the results, and an incisive discussion to place the results in perspective are valued traits of a scientific paper. Note that scientific writing is concise, direct, and devoid of flowery metaphors. If your paper contains more than a few pieces of numerical data, you will find excellent advice in The Chicago Guide to Writing About Numbers by Jane E. Miller.
3. What citation conventions will I be expected to use in Biology papers?
Citations to the literature that you have consulted in preparing a scientific paper are always required, but the format for citation of literature varies according to the course and teacher. Check with your instructor about the required format, and consult a handbook about the details of punctuation and the order in which to present information. The Citation Formats section of this Web site provides information about the two common formats for internal citations and reference lists in Biology papers: (1) citation-sequence format with numbered references and (2) author-date (name-year) format for internal citations. The formatting is also explained and illustrated in the handbook for First-Year English classes and in the manual published by the Council of Science Editors (formerly the Council of Biology Editors): Scientific Style and Format: The CBE Manual for Authors, Editors, and Publishers, Sixth Edition (1994), available in the reference area of Raynor Library for library use only. (A seventh edition is in preparation.) For guidance about using this format, click here. The citation-sequence and author-date format systems both require a list of full citations at the end of the paper.
These citations must include the following: the name(s) of the author(s), year of publication, full title of the article (or book), the journal in which an article was published, volume number of the journal, and the first and last page numbers of the article (or, for a book, the name of the publisher and the city of publication). 4. A note about sentence structure in scientific writing The following structural principles are generally recommended in scientific writing: - Follow a grammatical subject with a verb as soon as possible. - Choose the verb in each clause or sentence to articulate the action. - Provide context before giving new information of any type. - Place new information at the end of the sentence in the stress position, and the name of the person or the thing described at the beginning of the sentence in the topic position. - Place information that links backwards and contextualizes forward early in the sentence, in the topic position. For additional guidance, see Gopen, George D., and Judith A. Swan. (1990). The science of scientific writing. American Scientist 78: 550-558.
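To see these principles in action, consider a hypothetical before-and-after pair (both sentences are invented for illustration). An original such as "When the samples that had been treated with the inhibitor were heated, an increase in the activity of the enzyme was observed" could be revised to "In the inhibitor-treated samples, heating increased enzyme activity." The revision opens with linking context in the topic position, follows the grammatical subject immediately with a verb that articulates the action, and places the new information (the increase in activity) at the end, in the stress position.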
Secondly-I think this is a fair criticism-it was said that in some ways we have not placed families and carers at the heart of a new system for care in the future. I can see the reasons for that criticism of the Green Paper, but I want to say today that we do see families and carers as being at the very heart of the system. The purpose of a care service has to be to provide the support, choice and services that individuals need and that they can access in order to live their lives as individuals, with somebody caring for them, with their families and relatives. Some people, of course, do not have carers, families and relatives around them. The care service needs to meet their needs, too, but we need to ensure that, as the hon. Member for Leeds, North-West (Greg Mulholland) said, the care service gives support to carers and families, who are at the heart of what a care system should be about. On benefits and attendance allowance, which were mentioned by many hon. Members, let me make it clear that we are still keen to receive views on the case that was made by the King's Fund, Wanless and others to bring together some elements of disability benefits, such as attendance allowance, with the funding of care to provide a better, targeted system of care and support that will benefit everyone in need of care. I recognise some of the detailed points about the fact that attendance allowance and the DLA can be gateway benefits to other benefits, and that any change to the system will therefore need to take that into account. I am keen to get all of that right. I was pleased that the hon. Member for Forest of Dean described attendance allowance and DLA as being folded into a care system that gets reshaped and redistributed, rather than as being cut, which is how others have been portraying it. It quite clearly is not a cut. Whatever the outcome of the consultation, the principle will continue to be that people receiving any of the relevant benefits at the time of reform will receive an equivalent level of support and protection under a new and better care and support system. We are going to make changes to disability benefits only if we are certain that by doing so we can better support the care needs of older and disabled people. To pick up on the point about the cash that one gets as an attendance allowance, our intention is that the care and support that people will get in the future will be in the form of a personal budget that can be provided in cash for anybody who wants it that way. We want to see that feature driving forward as we develop our national care service. Although I am sure that that will not have assuaged all the concerns that have been raised today, I hope that those reassurances will help individual concerns. I will not take the time of the House by reflecting for too long on the Government's concerns about the views expressed by the Conservatives and its way forward. My view is that the Opposition simply do not get it. A few weeks ago, they announced their scheme for a voluntary payment of £8,000 to provide residential care for those who choose it. We now understand that that scheme will exclude people with any pre-existing health conditions, so the number of those who would enter the scheme-let alone the number who would voluntarily choose to do so-is so small that it becomes a scheme for very few people, which completely ignores the needs of the many.
Indeed, nine out of 10 people are not cared for in residential care-they are cared for in their own homes. That is why we have gone down the route of the free personal care scheme as a major building block for a new national care service. Although I do not know the details, last week the hon. Member for South Cambridgeshire (Mr. Lansley), who speaks for the Opposition on these matters, announced the creation of what appeared to be 153 new local quangos called public health boards, which would just be another layer of bureaucracy and complexity of funding, which already bedevil the system. I fail to see how a party that talks publicly about cutting bureaucracy and waste can immediately create a massive new wasteful bureaucracy, but that is for it to defend. In drawing this debate to a close, I am delighted to say that the careandsupport.direct.gov.uk website has had 90,000 hits. Will hon. Members please encourage all their constituents to take part-if Members have their own website, I ask them please to add a link to the site from theirs-and to get engaged in the debate? Let us learn from the inspirational example of my hon. Friend the Member for Crawley who held a debate in her constituency. Let us meet local organisations. The job is clear. I want to build an unstoppable momentum for reform that will represent the biggest step forward for social justice in decades and improve millions of lives. Let us together create a national care service of which we can all be proud.
That this House has considered the matter of the Social Care Green Paper.
That Dr Richard Taylor be a member of the West Midlands Regional Select Committee.-(Mr. Blizzard.)
That Mary Creagh be discharged from the Yorkshire and the Humber Regional Select Committee and Mr Austin Mitchell be added.-(Mr. Blizzard.)
Mr. John Randall (Uxbridge) (Con): I should like to discuss motion 5 on the Order Paper. I rather missed the opportunity on motion 4, I am afraid: I was sitting back and when motion 3 was not moved, I thought that the same thing would happen to motion 4. I was going to ask a question of a Minister, but I cannot see the Leader of the House or the Deputy Leader of the House on the Government Front Bench. However, the Comptroller of Her Majesty's Household is there-it is like looking into a mirror-and I am sure that he will be very capable of answering some questions. As you may know, Mr. Deputy Speaker, my party and the Liberal Democrats are not terribly enthused, to say the least, by the Regional Select Committees. In fact, we have decided not to take part in them. However, on the question of motion 5, why at this stage-the Committees were set up only recently-do people need to be taken off them and then put back on? It is always interesting. Was the hon. Member for Great Grimsby (Mr. Mitchell) not happy with the set-up of the Committee, or did he in some way displease the Whips? Perhaps he pleased them so much that they took him off the Committee.
Greg Mulholland (Leeds, North-West) (LD): Does the hon. Gentleman agree that if such a change of membership is put before the House, we should always have an explanation of why one Member is standing down and another is being put forward? Currently, these motions are put forward with no discussion and, given that the Committees are supposed to provide for more local accountability and interest, it is unacceptable not to know why one Member in the area is leaving and one is coming forward.
Mr.
Randall: I am not sure that I would want an explanation on every occasion, but, as this opportunity arises from time to time, it is useful just to test whether the Government can tell the House why a decision has been made. I will not labour the point, because, speaking as a furniture retailer, I am very interested in the Adjournment debate that is coming next. I shall be interested to note what the hon. Member for North-West Leicestershire (David Taylor) says when I read it later in Hansard.
Mr. Paul Burstow (Sutton and Cheam) (LD): In the hon. Gentleman's earlier remarks, he suggested that the hon. Member for Great Grimsby (Mr. Mitchell) was being removed from the Committee. In fact, it is the other way around, and I should hope that he would not want to malign the hon. Gentleman unfairly.
Mr. Randall: I should like to apologise to the House for that mistake. I in no way wanted to mislead Members. The hon. Member for Great Grimsby has obviously pleased the Whips. I know that he hopes for a career moving forward from the Back Benches, and, at this last moment, perhaps this is still an attempt to do so. I shall now sit down, so we might hear from the Government Front Bencher before we move to the next motion.
The Comptroller of Her Majesty's Household (Mr. John Spellar): It was very interesting to see the double act between the official Opposition and their errand boys on the Liberal Democrat Bench. Increasingly, they see their role as that of principal understudy to the Conservative Opposition, and this debate is yet another example of that. Let us be quite clear what the debate is about: the Conservative Opposition do not like Regional Select Committees. They have been quite open about that, to be fair, but the House has decided. There is also a well-established procedure, which the hon. Gentleman has reflected-along with remarks about my appearance-for changing the membership of all Committees. Hon. Members are just trying to score political points, and not doing it very well.
That Linda Gilroy be discharged from the South West Regional Select Committee and Roger Berry be added.-(Mr. Blizzard.)
David Taylor (North-West Leicestershire) (Lab/Co-op): According to figures from the British Furniture Confederation, the UK furniture industry is currently worth nearly £10 billion at retail prices. It directly employs about 131,000 people in 7,500 enterprises and makes a significant, but often overlooked, contribution to the manufacturing economy. Indeed, furniture makers, large and small, represent 5 per cent. of the UK manufacturing base. In addition, the industry supports, and is supported by, a large supply chain of materials suppliers, designers, component manufacturers, distributors, contractors and retailers. Although there is no traditional geographical base for the furniture industry, there are clusters of companies in south Wales, Buckinghamshire and the east midlands, including Art Forma in my constituency and a number of furniture makers in Long Eaton, in the constituency of my hon. Friend the Member for Erewash (Liz Blackman). This is consistent with the fact that our region remains the area with the highest proportionate number of manufacturing workers. The industry is divided between small companies and relatively large concerns. The BFC estimates that 67 per cent. of all furniture manufacturing companies employ fewer than nine people, but also that the largest 300 companies account for 45 per cent. of the total employment. Fifty-eight per cent.
of output is for the domestic market, 13 per cent. for the office market, and 29 per cent. for the contract market, which includes Government. The purpose of my debate is to explore the industrial and consumer implications of the sales and promotion tactics far too frequently used by the large furniture companies in selling to the domestic market. It is all about the myth of the half-price, time-limited sale. Anyone who reads a Sunday newspaper or magazine will be familiar with the resounding slap on the kitchen floor of inserted sales and promotional fliers. Invariably, one of these will be from a large furniture company, offering huge reductions on sofas and other furniture as part of a sale that will "end soon". Typically, the sales periods will be extended and 50 per cent. discounts, even double mark-downs, will be offered to the public for an even longer period still. One of my contentions is that, despite some caveat in tiny print undecipherable to the naked eye, the discounted products are virtually impossible to find retailing at the full price, so there is no way of knowing whether the undiscounted price represents a real saving for the potential consumer or whether it is just a cynical, deceitful "come on". The current law regarding sales periods only requires the pre-discount price to have been available for a minimum of 28 consecutive days in the preceding six months. Yet advertising literature from DFS and other companies scrupulously avoids quoting the actual dates for these sales periods. Instead, they have cleverly adopted a policy of "rolling" sales, where the sales offers are switched between product lines. One difficulty that trading standards officials face in effectively tackling this sort of dishonest, manipulative practice is that the pricing practices guide issued to traders is not mandatory. Indeed, the introduction to the guidance, issued by the short-lived, now-departed, pre-Mandelsonian Department for Business, Enterprise and Regulatory Reform, is a master-class exercise in lowering regulatory expectations: "This Guide recommends to traders a set of good practices in giving the consumer information about prices in various situations. It has of itself no mandatory force: traders are not under any legal obligation to follow the practices recommended." No doubt somewhere, in a remote retail outlet during the quietest period of the year, there will be sofas and other furniture gathering dust at enormous prices that will then be used as a base for sales elsewhere in the country. However, there is legislation designed to tackle misleading business-to-consumer marketing and sales practices. The Consumer Protection from Unfair Trading Regulations 2008-CPRs-were implemented to bring the UK in line with the provisions of the European Union's unfair commercial practices directive. The CPRs prohibit 31 commercial practices and specifically cover misleading or aggressive promotions. These practices are those deemed to be unfair in all cases, regardless of whether they would have induced the average consumer to make a purchase. I believe that the CPRs have particular relevance to the sort of "perma-sale" tactic used by DFS and others.
Schedule 1 to the CPRs, covering banned practices, contains the following prohibition: "Falsely stating that a product will only be available for a very limited time, or that it will only be available on particular terms for a very limited time, in order to elicit an immediate decision and deprive consumers of sufficient opportunity or time to make an informed choice." To illustrate the type of practice referred to, the Office of Fair Trading's guidance on the CPRs cites the example of a trader who falsely tells a consumer that the price of homes will rise in seven days' time. That prohibition should apply to the "specially extended" sales practice used in adverts such as the one placed by DFS in The Times on 4 September, which stated: "Final days to save. There's only a few days left to enjoy half price savings on many great designs and final reductions ...there's so much choice and so little time." The consumer has no idea whether that is accurate but is clearly being pressurised into making a snap decision to purchase rather than an informed choice. Consumer Focus and others have pointed out that the CPRs do not offer direct redress to the consumer. Even if they have lost money, a consumer cannot take traders to court or get direct compensation under the CPRs. Trading standards offices locally and the OFT nationally are charged with making businesses comply with the regulations, and only they can bring action against traders. Yet to date, the CPRs have been used to prosecute only once. In a country of 60 million people, that is astonishing. It seems pretty clear that the Government need to put the pricing practices guide on a statutory footing, or at least knit together more effectively the PPG and the CPRs. At present, they are cumbersome and difficult for trading standards to apply successfully to the widespread use of the dubious practices that I have mentioned. Trading standards departments may also need more resources to tackle such practices, but I surmise that in the current economic climate that is not likely to happen any time soon. That aside, I am pleased to tell my hon. Friend the Minister that I take encouragement from the proposals contained in July's White Paper "A Better Deal for Consumers", which offers a longer-term prospect of action to modernise this aspect of consumer law. Encouragingly, it commits to a number of actions, such as modernising trading standards powers and "a new Consumer Rights Bill which will implement the proposed EU Consumer Rights Directive and modernise and simplify UK consumer sales law." Keen calendar watchers will note that we are heading towards Christmas, when the sales and marketing efforts of most industries go into overdrive. The big furniture companies are no exception, and the latest flurry of fliers is accompanied by increased broadcast and internet activity. There are just eight weeks to Christmas Eve, and a visitor to the DFS website sees the prominent words "Half Price" and "Christmas" on the home page. Again, we see the familiar pattern of a high price discounted, often by 50 per cent. There is no way that consumers can check the accuracy of the amount they are being offered as a discount. So far, so bad. The internet clearly offers exciting opportunities for retailers and consumers, but we must have the regulatory framework in place to protect and promote consumers' rights in the virtual marketplace. As I have said, I am encouraged by the content of the White Paper.
Its focus on the increasing commercial importance of the internet will be much needed, and I am sure that it will offer yet another important incentive to vote Labour at the general election in the spring. To return to more traditional media, I was interested to note that the Advertising Standards Authority, hardly the most tenacious of regulatory bodies, has twice upheld complaints from the public in the last year about half-price claims made in DFS's TV adverts, ruling that they were misleading. Although neither advert was banned by the ASA for using unverifiable price reductions, they point to advertising practices that sail close to the wind, to put it kindly. DFS has a dubious record in its advertising content, particularly on television. Members might remember its advertising campaign from 2005, which featured tatty old sofas dumped in canals. It rightly drew criticisms of environmental irresponsibility, although the ASA chose not to uphold any of the 70 complaints that it attracted from the public. One is tempted to ask why the ASA exists.
This Book Is Overdue: How Librarians and Cybrarians Can Save Us All
By Marilyn Johnson
Hardcover, 272 pages
List price: $24.99
In tough times, a librarian is a terrible thing to waste. Down the street from the library in Deadwood, South Dakota, the peace is shattered several times a day by the noise of gunfire — just noise. The guns shoot blanks, part of an historic re-creation to entertain the tourists. Deadwood is a far tamer town than it used to be, and it has been for a good long while. Its library, that emblem of civilization, is already more than a hundred years old, a Carnegie brick structure, small and dignified, with pillars outside and neat wainscoting in. The library director is Jeanette Moodie, a brisk mom in her early forties who earned her professional degree online. She's gathering stray wineglasses from the previous night's reception for readers and authors, in town for the South Dakota Festival of the Book. Moodie points out the portraits of her predecessors that hang in the front room. The first director started this library for her literary ladies' club in 1895, not long after the period that gives the modern town its flavor; she looks like a proper lady, hair piled on her head, tight bodice, a choker around her neck. Moodie is a relative blur. She runs the library and its website, purchases and catalogs the items in its collections, keeps the doors open more than forty hours a week, and hosts programs like the party, all with only part-time help. When she retires, she'll put on one of her neat suits, gold earrings, and rectangular glasses and sit still long enough to be captured for a portrait of her own. Moodie is also the guardian of a goldmine, the history of a town that relies on history for its identity. She oversees an archive of rare books and genealogical records, which, when they're not being read under her supervision, are kept locked up in the South Dakota Room of the library. Stored in a vault off the children's reading room downstairs are complete sets of local newspapers dating back to 1876 that document Deadwood's colorful past in real time. A warning on the library website puts their contents in a modern context: "remember that political correctness did not exist in 19th-century Deadwood — many terms used ['negro minstrelcy,' for instance, and 'good injun'] are now considered derogatory or slanderous, but are a true reflection of our history." If you want a gauge of how important this archive is to Deadwood, Moodie will take you into the vault, a virtually impregnable room lined with concrete and secured by a heavy steel door. No fire or earthquake or thief is going to get at the good stuff inside this place. A dehumidifier hums by the door. Newsprint and sepia photos, stored in acid-free, carefully labeled archival boxes, are stacked neatly on shelves around a big worktable. In her spare time, the librarian comes down here to browse the old articles that a consultant has been indexing, systematically listing the subjects and titles of each story for the library's electronic catalog. The town's past lives on in this catalog, linked with all the other libraries in South Dakota. Anyone can log on as a guest, consult the library's index online, and learn that the Black Hills Daily Times published a story in 1882 called "Why Do We Not Have Library & Reading Rooms?" and three years later, "Reading Room and Library Almost Complete," alongside stories like "Accidental Shooting Part of a Free for All" and "Cowboys Shoot Up Resort."
Moodie, like her predecessor a century ago, is essentially organizing the past and making it available to the citizenry, but she's doing so in ways that the librarian of the late 1800s could never have imagined, preserving images of one frontier with the tools of another. What would the proper lady in the portrait make of the current librarian's tasks, the maintenance of the website, for instance, with its ghostly and omniscient reach? There's another Deadwood library on the digital frontier. This one doesn't resemble the elegant Carnegie building in the real town in South Dakota — it looks instead like a crude wooden storefront — but it, too, evokes the period that characterizes Deadwood, the late 1800s, the gold rush, and the Wild West. The difference is that this library exists solely on the Internet in the virtual world known as Second Life. People at computers around the globe, taking the form of avatars dressed in chaps and boots or long prairie dresses and playing the roles of prospectors, saloon keepers, and ordinary citizens, can visit the library in an historic reenactment of Deadwood in Second Life. They can enter this ramshackle building and, by typing questions in a chat box, ask the librarian what sort of outfit a prostitute would have worn, or where to find information on panning for gold. Or they can browse the collection the librarian has gathered in the form of links to dime novels and other old-time books, available in digital form from sites like Project Gutenberg and the Internet Archive. The librarian, Lena Kjellar, shows up onscreen as a cartoon woman in a bustle skirt. The person behind this avatar was trained to provide Second Life reference services by a real-life reference librarian and is part of an information network anchored by hundreds of professional librarians who flock to this interactive site for fun and stay to volunteer their skills — they figure everyone should be able to use library services, even avatars. In fact, "Lena Kjellar" is a retired electrical engineer and locomotive buff from Illinois named Dave Mewhinney; he feels that taking on a woman's shape in Second Life makes him more approachable. Somewhere between Jeanette Moodie's frontiers and Lena Kjellar's is the story of a profession in the midst of an occasionally mind-blowing transition. A library is a place to go for a reality check, a bracing dose of literature, or a "true reflection of our history," whether it's a brick-and-mortar building constructed a century ago or a fanciful arrangement of computer codes. The librarian is the organizer, the animating spirit behind it, and the navigator. Her job is to create order out of the confusion of the past, even as she enables us to blast into the future. From This Book Is Overdue: How Librarians and Cybrarians Can Save Us All by Marilyn Johnson. Copyright 2010 by Marilyn Johnson. Reprinted by permission of HarperCollins Publishers, USA. All rights reserved.
Re: My Congratulations!!
Thanks! It's great to have you on board. I'm more of a jazz musician, but I enjoy playing classical literature too. My wife is a serious classical player and it's always great fun to play in smaller chamber ensemble situations with her. We've played in several different sax quartets over the years. Probably one of my favorite pieces we've performed together on was Russell Peck's "Drastic Measures". It's not a hard piece, but it's really a wonderful composition when played well. We also played a Piazzolla saxophone quartet that was great, but I can't remember what it was called off the top of my head. I'd love to hear you play. If you have some audio or video, you can post it in the audio/video section of this site; I'd love to check out your stuff. You had asked about the Buescher film when you rated it and about identifying people in it. We have identified several people. August Buescher is the person seated behind the desk at the beginning of the film. He's also the older gentleman who shakes hands with Paul Whiteman close to the end of the film. On this web site, in the museum factory tours section, you will find a set of photos from 1928 that shows all of the factory workers in their respective departments. I was fortunate enough to acquire these photos from the niece of Irene Weaver (Dolly Olson, who was incredibly generous with her gift of these photos). Irene Weaver was secretary to O.E. Beers (president of Buescher in the 1940s and '50s). She had identified everyone in the photos by name. Several of the people in the 1924 film can thus be identified through these photos. For example, the first man shown soldering the bow to the bell and body tube is Jerry Dusech (as seen in photo 3 of the August 1928 Buescher photo set). I'll be posting up a photo set from 1936 and from the 1940s soon too.
During the fifth annual Shakesperience at Rider University, more than 130 middle and high school students from nine schools across the state and region proved that the Bard's 400-year-old work still resonates in their lives today. The two-day festival took place on May 23 and May 24 in the Yvonne Theater on Rider's Lawrenceville campus. In the morning, the students participated in workshops geared toward Combat Choreography, Shakespeare Aloud and Finding Your Voice in Shakespeare. In the afternoon, the students performed scenes and montages from various William Shakespeare plays. Dr. Kathleen Pierce, festival coordinator and chair of the Department of Graduate Education, Leadership, and Counseling, said the festival promotes the relevance of understanding Shakespeare and enables students to meet with their peers and with Rider faculty, staff and students who are inspired by the work of the iconic playwright and poet. "The workshops are terrific because they bring theater educators together with student participants," Pierce explained. "There's something about understanding Shakespeare that makes us all feel smarter and wiser. That's why the performance session is really powerful. Once the students come, they want to come back." In fact, four of the nine schools have participated since the program's inception at Rider. "Clearview Regional, Cumberland Regional, Robbinsville, and Willfully Yours have been with us for all five years of the festival. Willfully Yours started out with their eighth-grade teacher at Rosa International School in Cherry Hill," Pierce said of the South Jersey Shakespearean players group. Now seniors at Cherry Hill East and West, and Haddonfield High School, the members of Willfully Yours presented their final Shakesperience performance while Lillian Halden, who brought them to each festival, watched. "I have incredible pride in their commitment, resilience, dedication and sense of responsibility," said Halden, who also expressed gratitude to her principal, Ed Canzanese, for his support. "In a way, they know at some level, they are keeping Shakespeare alive. They love his words and messages. They amaze me and I adore them." During the afternoon performances, some of the student groups performed the original plays with traditional garb, while others added a modern twist to the classics. For example, Danielle Bergmann wrote Shakespeare's Beautiful Mind: A Psychological Look at the Bard's Biggest Basket Cases for the Willfully Yours performance. Bergmann played the doctor who gives couples counseling to Hamlet and Ophelia, and Othello and Desdemona. "If he talked to anyone, it would have ended differently," said Bergmann, referring to Hamlet. "It's a short script that is funny yet serious. I thought about the idea while I was reading the play in Advanced Placement Literature this year." Meanwhile, Clearview Regional High School performed Toxicity, which featured "toxic family relations" in scenes from Macbeth and Hamlet. "Bullying is a big topic in public schools. Macbeth and Hamlet are sort of bullied in the madness," explained teacher adviser Susan Barry. "The students arrived at the idea that Hamlet and Macbeth are caught in forces greater than themselves." Barry, who teaches Shakesperience I and II at the high school, said she tries to encourage her students to connect Shakespeare's themes to contemporary issues. "This is 400-year-old text. It has to be fresh and has to live. If I'm going to sell this to kids of all ability levels, it has to be relevant," Barry said.
“Kids understand truth and despair a lot more than we think.” Chris Congdon, a junior at Clearview Regional, said he enjoyed how the festival featured workshops and brought together like-minded people who share an interest in performing Shakespeare. “Shakespeare and Shakesperience have left an impression on me,” Congdon said. “I want to study this in college.” Participating student groups include Camden High School for the Performing Arts; Clearview Regional High School of Mullica Hill, N.J.; Cumberland Regional High School of Seabrook, N.J.; Kinnelon High School of Kinnelon, N.J.; The Pennington School of Pennington, N.J.; Robbinsville High School of Robbinsville, N.J.; Teen ShakesPEER of Wayne, Pa.; Willfully Yours of Cherry Hill and Haddonfield, N.J.; and Woodglen Middle School of Califon, N.J.
U.S. National Library of Medicine
PubMed Central is a free full-text archive of biomedical and life sciences journal literature at the U.S. National Institutes of Health's National Library of Medicine (NIH/NLM). As well as current material, some digitised content dates back to the mid 1800s or early 1900s. Also provides a link to PubMed, a database of citations and abstracts for millions of articles from thousands of journals. PubMed includes links to full-text articles at several thousand journal web sites as well as to most of the articles in PubMed Central. Also available on your mobile at http://pubmedhh.nlm.nih.gov/
The classification was originally developed by Herbert Putnam in 1897, just before he assumed the librarianship of Congress. Developed with advice from Charles Ammi Cutter, it was influenced by Cutter's Expansive Classification and by the Dewey Decimal Classification (DDC), and was designed specifically for the purposes of the Library of Congress. The new system replaced a fixed-location system developed by Thomas Jefferson. By the time of Putnam's departure from his post in 1939, all the classes except K (Law) and parts of B (Philosophy and Religion) were well developed. The system has been criticized as lacking a sound theoretical basis; many of the classification decisions were driven by the particular practical needs of that library rather than by considerations of epistemological elegance. Although it divides subjects into broad categories, it is essentially enumerative in nature: it provides a guide to the books actually in the library, not a classification of the world. The National Library of Medicine classification system (NLM) uses the otherwise unused initial letters W and QS–QZ. Some libraries use NLM in conjunction with LCC, eschewing LCC's R (Medicine). The top-level classes are:
- A: General Works
- B: Philosophy, Psychology, Religion
- C: Auxiliary Sciences of History
- D: History, General and Old World
- E: History of America
- F: Local History of the United States and British, Dutch, French, and Latin America
- G: Geography. Anthropology. Recreation
- H: Social sciences
- J: Political science
- K: Law
- L: Education
- M: Music
- N: Fine Arts
- P: Language and Literature
- Q: Science
- R: Medicine
- S: Agriculture
- T: Technology
- U: Military Science
- V: Naval Science
- Z: Bibliography. Library Science. Information resources
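Because the scheme is enumerative rather than computed, software that works with LCC call numbers usually just carries the class schedule as a lookup table. The sketch below is a minimal illustration of that idea; the names `LCC_TOP_CLASSES` and `lcc_top_class` are invented for this example and are not part of any standard library.

```python
# Minimal sketch: resolving the top-level LCC class of a call number.
# The dictionary mirrors the schedule above; most entries are omitted for brevity.
LCC_TOP_CLASSES = {
    "A": "General Works",
    "B": "Philosophy, Psychology, Religion",
    "P": "Language and Literature",
    "Q": "Science",
    "R": "Medicine",
    "Z": "Bibliography. Library Science. Information resources",
    # ... remaining classes follow the table above
}

def lcc_top_class(call_number: str) -> str:
    """Return the top-level subject for an LCC call number, e.g. 'QA76.73' -> 'Science'."""
    letter = call_number.strip()[:1].upper()
    return LCC_TOP_CLASSES.get(letter, "Unknown class")

print(lcc_top_class("QA76.73"))  # Science
print(lcc_top_class("Z678.9"))   # Bibliography. Library Science. Information resources
```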
CMO Council Study Finds Blind Spots in Delivery of Go-to-Market Materials
November 29, 2010
PALO ALTO, CA—Nov. 29, 2010—A new study by the Chief Marketing Officer (CMO) Council reveals significant blind spots in the go-to-market process as marketers focus on strategy, creative development and campaign execution to the detriment of effective marketing materials distribution. The latter includes the efficient and timely delivery of marketing and merchandising materials to dealer, agent, franchise, retail and brand office locations, as well as the processing of customer requests for sales literature and samples through Web, call center and e-mail channels. While 56 percent of marketers are focused on campaign design, development and execution, only 16 percent are looking to production, warehousing, inventory management or delivery as critical elements in an effective demand chain. In addition, just 2 percent are looking to optimize the actual delivery, fulfillment or distribution of their critical marketing materials. According to the report—entitled "Competitive Gain in the Demand Chain"—many marketing executives admit they have never assessed demand chain performance, nor given it high priority within the marketing operational mix. This may be contributing to the belief, expressed by 80 percent of respondents, that their organization is not efficient or effective enough in provisioning all of the demand chain. The study, sponsored by Archway Marketing Services, is part of ongoing research by the CMO Council's Marketing Supply Chain Institute (www.marketingsupplychain.org/) into ways to improve frontline performance through better go-to-market process innovation, supply chain optimization and marketing ecosystem management. Marketers agree that demand chain provisioning is critical to business competitiveness and performance (38 percent of respondents), while an additional 31 percent believe it is important to sustaining sales and channel operations. Yet, only 25 percent of respondents are ensuring sales support materials and resources are delivered on-demand, which would improve sell-through and customer conversion. Only 15 percent are taking steps to audit and assess marketing materials supply chain effectiveness, indicating that there is little to no visibility into the demand chain provisioning process to truly gauge content, material or operational impact and performance. "Marketing tends to be preoccupied with staying on track with individual tactical executions or traditional marketing fundamentals like lead generation, campaign execution and content or creative development," said Donovan Neale-May, executive director of the CMO Council. "However, today's demand chain requires a new mix of digital, direct and retail distribution, fulfillment, measurement and tracking capabilities to maximize customer contact, conversion and interaction." One area that potentially holds an immediate opportunity for improvement and value creation is specific to vendor selection or management. Nearly half of respondents view demand chain procurement and fulfillment as a compilation of individual vendors, asking each vendor to bid on individual elements of the demand chain. Only 7 percent of marketers view the demand chain as an area for consolidation and rationalization to gain more control and efficiency. "Overlooking the final 'field and prospect delivery' elements of the demand chain can be costly, and operationally disruptive," said Mike Moroz, President of Archway Marketing Services.
"Far too often, we see amazingly planned and executed creative fall desperately short due to a poor integration between these two segments of the demand chain—the creative strategy and customer engagement through fulfillment." The 50-page "Competitive Gain in the Demand Chain" report includes detailed findings from over 260 marketing executives and in-depth qualitative interviews with over 267 executives from brands including Advance Auto Parts, Allergan, Hershey, MGM Resorts, Oracle, Subway, and T. Rowe Price. The online quantitative audit was conducted in the third quarter of 2010, with findings collected through November 2010. To download the full report or complimentary executive summary, visit www.cmocouncil.org/resources/form-competitive-gain.asp.
About the CMO Council
The Chief Marketing Officer (CMO) Council is dedicated to high-level knowledge exchange, thought leadership and personal relationship building among senior corporate marketing leaders and brand decision-makers across a wide range of global industries. The CMO Council's 6,000 members control more than $200 billion in aggregated annual marketing expenditures and run complex, distributed marketing and sales operations worldwide. In total, the CMO Council and its strategic interest communities include over 12,000 global executives across 100 countries in multiple industries, segments and markets. Regional chapters and advisory boards are active in the Americas, Europe, Asia Pacific, Middle East and Africa. The Council's strategic interest groups include the Coalition to Leverage and Optimize Sales Effectiveness (CLOSE), Marketing Supply Chain Institute, Customer Experience Board, LoyaltyLeaders.org, Online Marketing Performance Institute, and the Forum to Advance the Mobile Experience (FAME). www.cmocouncil.org
About Archway Marketing Services
Archway is a leader in marketing operations management, providing marketing execution solutions and complementary business process outsourcing. These solutions include fulfillment services, consumer promotions, rebates, print management, facility management, business intelligence and decision support tools. Each solution is designed to reduce marketing operations costs, improve marketing execution processes, leverage emerging technologies and, most importantly, help improve the brand experience at every touch point of their clients' marketing campaigns. Archway is ranked on the Inc. 5000 list of fastest-growing private companies and has 1,500 employees and 3.5 million sq. ft. of distribution space in 12 major metropolitan areas in North America. For more information, visit www.archway.com
Source: Press release.
"What's up dude iguana," my two-year-old cheekily said to one of the many iguanas roaming the ancient Mayan ruins of Chichen Itza on a visit to Mexico's Yucatan Peninsula a few years ago. The archaeological site is one of the new Seven Wonders of the World and a UNESCO World Heritage site. We were shocked at how few restrictions there were at the time, and I cringed when my toddler climbed all over the ancient structures. We welcomed the freedom, and yet it was disturbing to witness visitors literally loving the site to death. Climbing to the top of the central pyramid with our son in a backpack was one of those peak travel moments, part Rocky, part Raiders of the Lost Ark. Negotiating the narrow steps, worn from centuries of foot traffic, exacerbated my festering fear of heights.
One night some years ago I arrived in Guanajuato, Mexico for the first time, knowing little about the place beyond its being yet another Spanish colonial city. When the bus couldn't get anywhere near my hotel on Jardin de la Union because the streets were jammed with revelers, I got out, shouldered my bags, and plunged into the crowd. Maybe it was the long bus ride that had warped my ability to make sense of my surroundings, or it could have been the diet of magic realism literature I was on at the time, but the scene I wound through that night presented the kind of phantasmagoria that can induce hallucinations. Was everyone in costume? Was it a warmup for Dia de los Muertos, the Day of the Dead? Colors flashed by, shouts and laughter and the melodious rhythms of Spanish ricocheted off balconied buildings. Smoke from street stalls carried the scent of grilled meat. And I continued to push my way, gently because this was a happy throng, across the plaza to the hotel.
I am sort of an Olympics geek. I love the games, both the summer and the winter. My mom actually took my sister and me and two friends to the 1980 Lake Placid Winter Games. We had tickets for the Women's Downhill Skiing event, but if you remember, the Games were a bit of a mess and transportation to the venues was a fiasco. We never made it to the mountain and got Compulsory Ice Dancing tickets as compensation; still, it was an amazing experience. Tomorrow the host city of the 2016 Summer Games will be announced in Copenhagen, Denmark. The front-running candidates are Chicago, Rio de Janeiro, Tokyo and Madrid. President Obama and Michelle Obama will be there to promote the Chicago bid, which, because of their star power, is leading Rio as the top pick.
It's almost sundown on the eve of the holiest day in the Jewish calendar: Yom Kippur, the Day of Atonement. I was thinking about years past and how I've spent the day. In NYC, schools are often closed. Mine was never closed because it was an international school, and if they took off one holiday they would have to take off everything: the Swedish King's birthday, Diwali, Chinese New Year. I am not religious, and my husband likes to say I am Jew-ISH, which suits me fine, but I do feel connected to the heritage on my dad's side. I have never been to Israel, but would love to go some day. The Israeli city of Tel Aviv would be my first stop. Tel Aviv sounds like such a vibrant city, and since so often there is bad news coming out of the Middle East, I thought it was a good time to bring up the 100th birthday of this bustling metropolis.
This pulsing city of more than 1.5 million is the most liberal in Israel, full of artists, gay bars, high-tech companies and Bauhaus architecture. Tel Aviv is called the Barcelona of the Middle East, a hip city with trendy restaurants and nightlife that, despite the ongoing political conflict that is never far away, has a lot to offer visitors. Upcoming anniversary events include:
* International Art Biennale (ARTLV) (9 September – 9 October), showcasing contemporary works in dozens of exhibitions.
* The Green Festival (17 October), dedication of the Green Route along the Yarkon River and a centennial bike ride.
* Fashion Week in Tel Aviv Port (19-22 October).
It was my son's 10th birthday and we always try to celebrate with a super summery adventure. One year we went to Disneyland. Last year, it was The Police reunion concert (my choice), and this year I deftly averted a trip to Vacaville, CA, and Scandia, a nightmarish Scandinavian-themed mini-golf/arcade experience, in 100-plus-degree heat (thanks to a colleague who did a weather check for me). We settled on Santa Cruz Beach Boardwalk, a West Coast Coney Island if you will, kinda lost in time; totally manageable. So off we went ready for rides, perhaps swimming, lots of sugary treats and maybe a skeeball game or two (my grandma Viola was the queen of skeeball in Hollywood, Florida and I have taken up the passion). We had a blast on the flume, roller coasters and my favorite, a hang-gliding twirly thing, although the chin rest smelled super funky sweaty. I even did some body surfing and a sea lion joined me about 30 yards away. But the highlight of the day was the FREE circus performance on the beach. CIRQUE MAGNIFIQUE has performances July 12th to August 20th. There are two free shows daily: Monday–Thursday at noon & 3:00, and Sunday at noon & 6:00. It is no Cirque du Soleil, but we knew immediately the performers were Quebecois (what's in the water there?). It was so adorable, I would venture… enchanting. It wasn't jaw-dropping feats but completely entertaining and so lovely to enjoy in the sun, rubbing your feet in the sand; don't miss it!
Bastille Day is next week. This is a special day for me, not because I passionately studied French history or married a Frog in a previous life, or even because I count being at the Bi-Centennial Celebration in Paris in 1989 as a peak life moment, but because my eldest son was ironically born on July 14th, 1999. I have so much baggage and history with France and French culture. The love/hate relationship still teeters more towards love, but I can't deny I get a bit gleeful when there is bad press, when the French are exposed as hypocritical, or when some aspect of the coveted culture gets demythologized. I get a lot of mileage out of my stories of living in France; much like the New Yorker's Adam Gopnik, I always found humor in the little things. The hilarious scene at the Disneyland Paris buffet where diners swarmed a waiter delivering a bowl of bread to the buffet before he could even reach it. The fact that my friend was served mussels and spicy merguez sausage as the first post-operative meal in the hospital, or the fact that before my marriage I had to get a 'Carte de Concubinage': a card stating that I was his concubine… I could go on. So today I open up the Yahoo page with the lead story: "French Tourists Seen as World's Worst: Survey".
Apparently, according to this survey, done by Expedia, the French, despite their rumored savoir faire, were declared the most arrogant, the cheapest, and the worst at foreign languages of all global travelers.

I was driving to work yesterday and heard a compelling report on NPR about the R2I phenomenon. R2I is short for “Return to India,” the story of the many who have studied and lived in the U.S. for years and have now decided to return home. For many, it is the pull of aging parents or maybe the desire to bring their knowledge and expertise to their homeland. There is no better time, as the U.S. economy declines and the Indian economy continues to be robust. With recent elections and the distractions arch-enemy Pakistan is facing, many Indian expats are packing up their Silicon Valley, New Jersey or Dallas digs and heading home. According to Sandip Roy’s NPR report, web sites offer advice on everything from who’s hiring in Bangalore to how much gold you can bring home. Dubbed “a brain drain in reverse,” many of these folks jumping on the R2I train are in their mid-thirties, with families and higher degrees. When they return, despite their heritage, many experience culture shock.

Triporati producer Gwynn Gacosta just returned from a remarkable trip to the Philippines to fulfill her mother’s final wishes. Her mother had asked that her ashes be scattered in the river where she swam as a child. The funny, challenging and poignant journey is captured in her own words—a blog post we wanted to share with you:

Final Resting Place

I planned my funeral once, when I was ten years old. I decided that I would be cremated, and my ashes sprinkled in all five oceans. (Not only was I morbid, but I was also grandiose.) My future husband would travel around the world, leaving bits of me wherever he went. My mother died five years ago of heart failure, and she told me that she, too, wished to be cremated. She also wanted her ashes taken to her hometown of Bulusan, Philippines, then scattered in the river where she used to swim as a child. Immediately after she died, I started the process of making that wish a reality. The funeral home placed her remains in a plastic box wrapped in a silk sheath so that it would go through airport security without hassle. I wrote to my relatives in Bulusan and told them the plan. To my surprise, I was met with protests from my family, led by the parish priest, who insisted that she would never be at rest unless she was buried somewhere people could actually visit and reflect. Since no one in the family had the money to go anyway, we put her final wish on hold and her urn on the mantel. But I always knew that one day I would take her. She had counted on me.

Have you ever had Mandarin Islamic Chinese food? Did you know that an estimated 20 million Muslims live in China? These questions percolated as my taste buds marveled at the unusual combinations of lamb, cumin and other spice mixtures that seemed so new to me. I was first taken to Old Mandarin Islamic by a mom on my son’s soccer team. It was a rainy fall day, and the boys and spectators were soaked and chilled. The hot pot beckoned, and I was up for an adventure. Way out in the Sunset District of San Francisco near the beach, this small hole-in-the-wall offers not only a unique culinary experience but a geography and culture lesson in Chinese history. I returned this Sunday to pick up takeout, and once again I was blown away.
Signs in Arabic welcome diners, as does the Chinese Sábado Gigante-esque, quasi-American Idol show playing in the corner on the big-screen TV. Familiar were the standard Chinese restaurant decorations; unusual were the plaques with sayings from the Koran (I assume). Of course there is no pork on the menu, and the lamb is halal. It seems like the whole family is cooking in the back kitchen, and you can see them in action as you traipse through on the way to the restroom. The hot pot is a fun diner-participation dish, much like fondue or Korean BBQ.

If you talk to a French person and say you lived in Lille, most will say, “I’m sorry.” That was the reputation this gritty northern manufacturing city had years ago. It is the fourth-largest metropolis in France and sits at the crossroads between Belgium, Britain and France. My ex-husband was from a small town outside the city, and we lived there for a few years while I taught English (or American) to top execs from Renault, Auchan, Peugeot and various other big French companies. He had to work through his military service scenario, and I thought, why not—I spoke French, loved the culture and was ready for an adventure. There was tremendous charm to Lille, a great mix of Flemish and French culture. We often went to Bruges and Brussels, the North Sea and England. I was in love and didn’t realize how provincial France, outside of Paris, could be.
Parallel Algorithms for Regular Architectures is the first book to concentrate exclusively on algorithms and paradigms for programming parallel computers such as the hypercube, mesh, pyramid, and mesh-of-trees. Algorithms are given to solve fundamental tasks such as sorting and matrix operations, as well as problems in the fields of image processing, graph theory, and computational geometry. The first chapter defines the computer models, the problems to be solved, and the notation that will be used throughout the book. It also describes fundamental abstract data movement operations that serve as the foundation for many of the algorithms presented in the book. The remaining chapters describe efficient implementations of these operations for specific models of computation and present algorithms (with asymptotic analyses) that are often based on these operations. The algorithms presented are the most efficient known, including a number of new algorithms for the hypercube and mesh-of-trees that are better than those that have previously appeared in the literature. The chapters may be read independently, allowing anyone interested in a specific model to read the introduction and then move directly to the chapter(s) devoted to the particular model of interest.

Russ Miller is Assistant Professor in the Department of Computer Science, State University of New York at Buffalo. Quentin F. Stout is Associate Professor in the Department of Electrical Engineering and Computer Science at the University of Michigan. Parallel Algorithms for Regular Architectures is included in the Scientific Computation series, edited by Dennis Gannon.
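To give a flavor of the kind of data movement primitive the book builds on, here is a minimal sketch (ours, not taken from the book) of a one-to-all broadcast on a hypercube, written in Python purely for illustration; the book presents such operations abstractly rather than in any particular language. Node labels are d-bit integers, and in step k every node that already holds the value forwards it to the neighbor whose label differs in bit k, so all 2^d nodes are reached in d steps:

```python
# Illustrative sketch only -- not code from the book.
# One-to-all broadcast on a d-dimensional hypercube, simulated serially.
# In step k, every node holding the value sends it to the neighbor whose
# label differs in bit k; after d steps all 2**d nodes have the value.

def hypercube_broadcast(d, source, value):
    """Simulate the broadcast; returns a dict mapping node -> value."""
    data = {source: value}                # only the source starts with it
    for k in range(d):                    # one communication step per dimension
        for node in list(data):           # nodes reached in earlier steps
            neighbor = node ^ (1 << k)    # flip bit k to find the partner
            data.setdefault(neighbor, data[node])
    return data

if __name__ == "__main__":
    result = hypercube_broadcast(d=3, source=5, value="msg")
    assert len(result) == 2 ** 3          # all 8 nodes reached in 3 steps
    print(sorted(result))
```

The same dimension-by-dimension pattern recurs in many hypercube algorithms, which is presumably why such data movement operations are introduced before the algorithms themselves.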
TREATMENT OF LEUKEMIA USING INTEGRATED CHINESE AND WESTERN MEDICINE

Leukemia is the term applied to cancers in which one line of the bone marrow stem cells that produce white blood cells (leukocytes, the immune system cells) undergoes neoplasia. The transformed cells uncontrollably yield an abnormally large amount of the corresponding white blood cells, mainly in an immature form. Also, by proliferating in numbers, the neoplastic stem cells crowd out the other stem cells in the marrow, so that fewer red cells, platelets, and other lines of white blood cells can be produced. The destructive aspect of the disease is often manifest first by the lack of these other cell types, producing anemia and spontaneous hemorrhage. Eventually, the internal organs (initially the liver and spleen) swell up with the excess white cells, causing further problems. In addition, the leukemic cells usually have substantially reduced immune functions, leading to a high incidence of infections (1).

Different types of leukemia are identified by the stem cell line involved. There are two major categories: lymphocytic and myelogenous; the latter is sometimes divided into myelocytic, monocytic, and granulocytic leukemias, depending on the analysis used. Two somewhat similar bone marrow diseases are polycythemia (with excessive production of red blood cells) and thrombocythemia (with excessive production of platelets). Depending on the severity with which the leukemia manifests, it may be classified as acute or chronic; despite these names, acute leukemia is not a self-limiting temporary disease: without treatment it can be fatal within 2-4 months. Generally, acute leukemia is characterized by very large numbers of immature lymphocytes (known as lymphoblasts) or large numbers of myeloblasts (the "blasts" are immature cells; this leukemic condition is sometimes called myeloblastic leukemia). Chronic leukemias can flare up with an acute phase, producing a huge number of blast cells and yielding a "blast crisis" that is often the fatal phase of the disease. It is common practice to label the leukemias by initials for easy reference: ALL (acute lymphocytic leukemia), CLL (chronic lymphocytic leukemia), AML (acute myelocytic or myeloblastic leukemia), and CML (chronic myelocytic leukemia).

The leukemias are further subdivided according to the patterns of cells seen by microscopic examination of the blood, revealing different cell sizes, degrees of maturation of the cells, cell surface markers (especially immunological components), and proportions of different cell types. These divisions may be of importance, in that treatment strategies can vary according to the observed condition; the results of these tests can also help the oncologist advise the patient regarding prognosis.

The possible causes of leukemia include exposure to ionizing radiation, exposure to some chemicals (benzene is an established example), and the action of viruses (e.g., HTLV, human T-cell leukemia virus, a type of retrovirus). Genetic factors play a role in susceptibility, especially for the childhood leukemias. Except for the possibility of attempting an antiviral treatment for virus-induced leukemias, the causes of the disease do not shed much light on the treatment to be given. Even in the case of a viral causation, if suppression or elimination of the virus could be accomplished, that might not cure the disease; the cellular transformation that occurred may no longer be reversible.
Some cases of chronic leukemia, notably CLL, are sufficiently innocuous that they are left untreated for a time, because the degree of imbalance in blood cell production is within tolerable limits; in most other cases, anticancer drugs are used in an effort to slow the disease's progress. Some new techniques of treating leukemia have been developed, including destroying the cancerous bone marrow (thus eliminating the neoplastic cells) and transplanting healthy bone marrow (grown in the laboratory from donated marrow cells). Substantial success has been attained in treating acute childhood leukemia, especially ALL, which was, until recently, a major cause of childhood deaths (with peak incidence around 4 years of age). This positive outcome may be the result of two factors: in acute leukemia, the chemotherapeutic drugs have a much stronger effect on the highly active abnormal cells than on the normal cells; and in children the ability to recover normality is better than in older persons. Up to 90% of cases of childhood acute lymphocytic leukemia go into remission with treatment, with about 70% of treated cases gaining long-term survival (2). AML tends to strike in the age range of 15-39 years and has a moderately good response to treatment by bone marrow transplant, though long-term outcomes are not yet known.

Cases of chronic leukemia are poorly managed by modern chemotherapy: the disabling effects of the therapy on overall physiological function are often as strong as its effects on the cancerous cells. Often, the individual must undergo multiple types of interventions, including blood transfusions, antibiotics, and other "supportive therapies." Progress towards a cure has been difficult. The age of diagnosis for chronic leukemia is typically between 40 and 60 years. Without treatment, chronic leukemia patients are expected to live for 2-6 years from onset of the disease; CML tends to progress more rapidly than CLL. However, about half of the patients with chronic leukemia die within two years of their diagnosis even with treatment. Slightly increased survival time after diagnosis in recent years may be the result of earlier detection more than of successful treatment. Busulfan (a.k.a. Myleran), a drug that has been frequently used for one type of chronic leukemia (granulocytic), commonly produces mean survival times of about 3-4 years.

For leukemia in adults, it is reasonable to pursue Chinese medical therapies in an attempt to improve the outcomes. The ability to diagnose leukemia, which requires assay of blood cells, came to China with Western medicine and was attained there about 60 years ago. Prior to that, leukemia patients must have presented a difficult challenge to traditional practitioners. A patient might show severe anemia and a tendency to bleed easily, the results of insufficient production of red blood cells and platelets. The high metabolism of the leukemic cells caused fever and fatigue. The patient might show swelling of internal organs as they were impacted by the large numbers of circulating white blood cells. The disorder would thus appear as a complex combination of deficiency and excess. The primary disease, buried in the bone marrow, might not have been easy to trace.
Along with Western diagnostics came the limited methods of therapy then available in the West; these included toxic treatments with arsenic compounds and, later, the introduction of some chemical agents, such as busulfan, which was, until recently, the standard treatment for myelogenous leukemia (it is of no value in other leukemias). The Western drugs that have been used for leukemia (including hydroxyurea, chlorambucil, and prednisone) appear in the modern Chinese medical literature as treatments given to control groups when evaluating Chinese therapies, or used in integrated therapies.

Once the efforts of traditional Chinese doctors were turned to the problem of treating leukemia, it did not take long for certain antileukemic remedies to arise. According to Chang Zhinan of the Hematology Division of Capital Hospital in Beijing (3), leukemia (of modern diagnosis) has been subjected to various attempts at using Chinese materia medica items since about 1953. He reported that herbs have been applied for four purposes: reducing complications of leukemia, such as bleeding and infection; reducing adverse reactions to chemotherapy or increasing the body's resistance to the adverse impact of leukemia; promoting the body's natural healing ability to reduce the impact and spread of the neoplastic process; and eradicating the leukemia cells. Today, there are basically three treatment methods: inhibit leukemia cells; promote the body's function and protect it from leukemia and from the side effects of toxic treatments; and treat specific symptoms. These three will be analyzed below.

The primary traditional materia medica items for inhibiting leukemic cells have been indigo (qingdai) and realgar (xionghuang). Indigo is the dye obtained from certain plants, mainly Isatis tinctoria and Baphicacanthus cusia, but also from other Isatis species and from certain species of Clerodendron and Polygonum. This variability in source means that the market products can also have variable constituents and effects. The principal active constituent of indigo, in relation to leukemia, is indirubin, which was isolated and experimented with first by Western scientists and later investigated by Chinese researchers. Indirubin is a substance found in humans as well as in plants, possibly a metabolite of tryptophan. Indirubin levels in the urine are found to be increased in persons with various pathological conditions, including sprue, disturbed protein metabolism, renal diseases, myelocytic leukemia and other neoplasms (4).

Isolated indirubin is a drug product used in China. It is given in tablet form in dosages of just 150-200 mg each time (some patients receive a double dose), three times daily, for treatment of chronic granulocytic leukemia. It is reported that indirubin has better therapeutic effects, faster onset of effects, lower dosage requirements, and milder side effects than indigo. Side effects of indigo include abdominal pain, diarrhea, nausea, vomiting, and mushy stool. Almost all of these effects appear to be the consequence of an irritant action of the crude material and are dose dependent, though sensitivity to the irritant action varies widely among individuals. While indirubin has a good effect on chronic granulocytic leukemia (the same type of leukemia that busulfan treats), it has a lesser effect on non-granulocytic myelogenous leukemia and is not known to be effective for lymphocytic leukemia.
Like other chemotherapeutic agents, indirubin has a general bone-marrow suppressing action, but it is weaker than that of some antileukemia drugs, including busulfan. Indirubin is not available in the West. However, Chinese immigrants to the U.S. have prescribed indigo (qingdai) in capsules to Western patients with leukemia who visit them. This herbal extract is not especially toxic, but, as mentioned above, it easily produces gastric irritation that can limit the dosage to ineffective levels in some individuals. According to the author of Anticancer Medicinal Herbs (5), "Indigo really has an anticancer effect. The daily dose is 2-4 grams usually, but sometimes can be increased to 6-10 grams."

Realgar is a mineral compound that is mainly arsenic sulfide with some impurities. Arsenic inhibits leukemic cells. The use of arsenic in medicine was not limited to the Chinese, who have used it since ancient times to "cure debility and impotence and disperse accumulations." Salvarsan, dubbed "the first wonder drug of the twentieth century" (6), was an arsenic compound found effective for treating syphilis (a disease previously treated by mercury compounds). Arsenic compounds are also used in Ayurvedic medicine. Realgar is listed in the modern Chinese Pharmacopoeia and is suggested to be used by making pills or powders, taken at a dosage of 150-300 mg each time, avoiding long-term administration. Realgar is sufficiently toxic that its use by Western practitioners is all but precluded; in fact, concerns about heavy metal contamination of Chinese herbs, especially with arsenic and mercury (mainly from realgar and cinnabar, included in many formulas), are so great that knowingly prescribing it as a medicinal agent would probably cause significant legal problems.

Other anticancer materials that the Chinese have used for leukemia include strychnos, camptotheca, cephalotaxus, celastrus, catharanthus, and toad secretion. These are all somewhat toxic and not used by Western practitioners. Cephalotaxus is related to the yew tree that yields the modern anticancer drug taxol; cephalotaxus has yielded the antileukemia drug harringtonine, which is extensively used in China, but not in the U.S. It is a treatment for acute monocytic leukemia (a type of AML). Camptotheca has been intensively studied in both the U.S. and China and yields the drug hydroxycamptothecin (used for acute myelocytic and lymphocytic leukemias), also not approved in the U.S. Strychnos, which contains strychnine, has anticancer properties but is not licensed for use here (it is an ingredient of the Chinese formula Ping Xiao Dan, used for many types of cancer). Celastrus yields dulcitol, from which the drug dibromodulcitol is derived; dibromodulcitol is used to treat chronic granulocytic leukemia (the same application as indigo). Catharanthus is the source of the standard chemotherapeutic agents vinblastine and leurocristine (vincristine); these are used in treatment of ALL and AML. Toad secretion contains bufotoxin, which is highly irritating and not permitted for use in the U.S. (one case of fatal bufotoxin poisoning was recorded in the U.S. recently when an herbalist accidentally filled a prescription incorrectly and substituted this item; it was present at a dosage much higher than would normally be used). Toad secretion also contains bufotenine, a compound similar in structure to indirubin. A well-known, but somewhat toxic, patent medicine, Liu Shen Wan, is sometimes recommended by Chinese doctors for leukemia: it includes toad secretion and realgar.
A toad secretion prescription, originally applied topically for treatment of lip cancer, was later used to treat ALL, with some success. It also includes realgar, as well as other toxic materials, such as cinnabar and calomel (both of which contain mercury). These Chinese leukemia remedies have the same function as the modern pharmacological interventions that we call chemotherapy. In fact, Chinese chemotherapy is sometimes simply derived from herbal active constituents, including harringtonine, indirubin, and hydroxycamptothecin. The purpose is to inhibit the abnormal bone marrow cells, so as to permit the growth and function of the normal cells. In China, more of these drugs are available than in the U.S., as a result of fewer restrictions on drug licensing.

The discovery that indigo was clinically effective for leukemia was made in the early 1960s, when researchers at the Institute of Hematology, Chinese Academy of Sciences (Beijing) noted that an herbal formula prescribed by Chinese doctors appeared to be producing good results in many patients. The formula, Danggui Luhui Wan, included indigo as an ingredient and had prominent action in cases of chronic myelocytic leukemia (CML). The formula also contains tang-kuei, gentiana, coptis, phellodendron, scute, rhubarb, aloe, saussurea, and musk, and was traditionally used for reducing fever, purging intense heat, and removing toxin. A modified version of the traditional formula, which includes both indigo and realgar, was developed at this Institute and reported to be even more effective than the original: it was called the anti-CML pill. It contains indigo, realgar, ranunculus, sophora, scute, phellodendron, tang-kuei, terminalia, leech, and eupolyphaga.

The basis for adding realgar to the indigo was that potassium arsenite, a related compound, had previously been used for treating CML by Western-style practitioners (in the 1940s), but had been abandoned due to toxicity problems. The lower toxicity of realgar may be due solely to differences in rates of absorption from the intestinal tract. Realgar was substituted for potassium arsenite and reported effective in 1960 by a hospital in Shanghai. One small study in 1970 found that complete remission could be attained in some leukemia patients by administering 9-18 grams of realgar per day (by decocting it, in which case only a fraction is solubilized; usual dosages of orally ingested powder are less than 2 grams per day). The adverse effects were less than those of potassium arsenite. Another toxic arsenic compound formula, made with arsenic oxide, was reported by a group in Harbin to be reasonably effective for acute myelocytic leukemia. According to the author of Anticancer Medicinal Plants, realgar is also used as a single remedy for chronic granulocytic leukemia, taken at a dose of 0.3-0.9 grams each time, one or two times daily.

Indigo had been tested as a single-herb remedy for CML in the 1970s. A dose of 6-12 grams per day was reported to achieve complete or partial remission. The remission rate increased when realgar was added as 11% of the formula (for example, 1 gram of realgar mixed with 8 grams of indigo). Recent treatments in China continued to rely on the crude material for some time, even though indirubin is available.
As an example, Qinghuang Powder, prescribed at the Xiyuan Hospital for treating chronic granulocytic leukemia, is made of a 9:1 ratio of indigo and realgar, given in capsule form, with a daily dosage of 6-14 grams, divided into three doses; a maintenance dosage, after improvement is attained, is 3-6 grams per day. The anti-CML pill is reported, by the group that worked with it, to be more effective than indigo-realgar treatment alone. The reason for this is not yet established. Other ingredients in the pill include sophora root (which contains the anticancer ingredient matrine) and tang-kuei, which may serve in a protective role. Arsenic levels of patients taking this pill were monitored by urine analysis. Chronic arsenic intoxication produces changes in the skin, with pruritus, skin pigmentation, and keratodermia, and it may cause mild peripheral neuritis. If such symptoms occurred, arsenic was cleared from the system using sodium dimercaptopropane sulfonate or sodium dimercaptosuccinate. By simply stopping the use of the anti-CML pill, urinary excretion of arsenic would decline over a period of 1-6 months. The researchers working with the anti-CML pill felt that the problem of arsenic toxicity was manageable.

In the U.S., the only one of the specific anti-leukemic substances that can reasonably be prescribed is indigo (qingdai). When Chinese immigrant doctors have prescribed encapsulated indigo to their patients, it is usually administered at doses far lower than the 6-12 grams reported above (typical amounts are 1-3 grams).

Remission rates described in the clinical reports from China may easily be misinterpreted. A remission in the case of leukemia means that there is a dramatic reduction in the levels of the affected white cell line, accompanied by improvements in the impaired cell lines and in symptoms. Complete remission means normalization of the blood picture, while partial remission means improvement without attaining a normal condition of blood cells. However, some time after this remission has been observed, the disease usually recurs. For example, in a study of bufotoxin treatment of acute leukemia, it was reported that partial remission was attained in 50% of patients and full remission in 25% of patients. The longest remission period was 6 years. In a study of 6 patients treated with realgar plus indigo, it was reported that complete remissions occurred in 3 patients, and 2 of those patients lived more than four years. Remission usually results in a period of a few months or years free of the cancer; when the leukemic condition returns, one must start treatment again, usually in the midst of a blast crisis. Producing a remission and then extending the remission period is the goal of current treatments.

In one study comparing busulfan with alternating busulfan and the anti-CML pill, mean survival time was reportedly increased from 40 months to 61 months. The same results were reported in another comparative trial of busulfan alone versus alternating treatment with busulfan and an herbal pill similar to the anti-CML pill, called Manli Wan (composed of ranunculus, sophora, scute, phellodendron, tang-kuei, terminalia, indigo, eupolyphaga, and leech). These alternating strategies are deemed interesting by Western-trained Chinese doctors because busulfan is so toxic.
In a long-term study of indirubin treatment for CML, it was reported that "maintenance therapy [with indirubin] was necessary for CML patients after achieving complete remission and there was no obvious side effects over long-term administration of the drug. Unfortunately, indirubin could not suspend or postpone development of blastic crisis (7)." Median survival time in this study of 57 cases of CML treated with the simple protocol of indirubin administration was 31.5 months. Put simply, remission of leukemia is usually a temporary condition. Improvement in patients' quality of life is also a major consideration, apart from any changes in blood picture or duration of life. The proponents of Chinese medicine suggest that rates of complete and partial remission and quality of life are improved by using Chinese medicine, but the duration of survival may increase only slightly (by a few months), if at all. About 20% of chronic leukemia patients can attain survival times of 10 years or more.

Chinese doctors have worked out strategies for complementing the basic anti-leukemic approach, whether modern chemotherapy or the traditional style (e.g., realgar and indigo) is used. These complementary approaches often involve non-toxic herbs that could be prescribed by practitioners in the West, but toxic substances are also mentioned in the literature for this purpose. For example, in the book Anticancer Medicinal Herbs, it is reported that the toxic insect mylabris, when accompanied by chemotherapy, can control leukopenia in patients with leukemia. Since a major problem with mylabris is its irritant effect, the Chinese pharmaceutical factories have added mung bean flour to the preparation of tablets, which protects the gastro-intestinal tract from irritation.

In the book Treatment of Cancer with Fu Zheng Pei Ben Principle (8), the following is said:

Leukemia is a disease of constitutional dimensions and, as such, its treatment should be comprehensive: eliminate and inhibit the hyperplastic leukemia cells, and at the same time protect the integrity of the normal tissues. The crucial point when treating this disease is a comprehensive therapeutic approach which can deal properly with the delicate relationship between dispelling pathogenic factors, while restoring normal functions of the body....A very large number of clinic experiences and recordings show conclusively that a course of treatment combining the principles of TCM and western medicine is always better than either one applied alone. Effectiveness of chemotherapy can be greatly enhanced when supplemented by TCM based on principles of syndrome differentiation. The regulatory and stabilizing effect of TCM administered during respite between chemotherapy cycles helps maintain the efficacy of the original treatment and prolongs the period of remission.

A sample treatment for acute leukemia (AML) presented in this book is a modified Rhino and Rehmannia Decoction (Xijiao Dihuang Tang), which includes raw rehmannia, moutan, rhino horn (substituted by water buffalo horn), lithospermum, lonicera, isatis leaf, indigo (12 grams), scrophularia, gypsum, and lycium bark to clear heat and toxin, with turtle shell, tortoise shell, and pseudostellaria to nourish yin. It also contains some blood-vitalizing herbs (red peony, agrimony, and carthamus). This treatment is to be applied to cases of high fever and sweating, spontaneous bleeding, and other symptoms that are characteristic of uncontrolled leukemia.
The ingredients can be understood in terms of presenting symptoms: lithospermum, raw rehmannia, and rhino horn, for example, are used for high fever and spontaneous bleeding; turtle shell and tortoise shell not only nourish the blood and yin, but also help prevent hemorrhage. The anti-leukemic substance, indigo, is provided in the highest quantity usually recommended. The difficulty with this approach is that the very high dosage decoction (over 200 grams per day) would have an extremely bitter taste and would likely have an irritating and even an inhibiting effect on the digestive system (which is probably already weakened by the disease and prior treatments). Therefore, one would hope to use a different prescription, such as the others offered in that book and described below.

A common complaint of leukemia patients is chronic low-grade fever and fatigue. This is the combined result of deficient blood status, side effects of cancer therapies, and secondary effects of continuing leukemia (especially the high metabolic activity of the white blood cells). A recommended treatment is to use 24 grams of salvia, 20 grams of millettia, 20 grams of agrimony, and 6-15 grams each of the following: ginseng, red peony, tang-kuei, cnidium, persica, astragalus, hoelen, atractylodes, licorice, and pseudostellaria. This formula nourishes qi and blood and vitalizes blood circulation. It is to be used in conjunction with chemotherapy to enhance its effects and alleviate the characteristic symptoms the patient faces. This formulation, although high in dosage, has a tolerable taste and is unlikely to cause gastro-intestinal irritation; on the contrary, it contains herbs that may improve the condition of the gastro-intestinal system.

Other complementary formulas for leukemia recommended in this book focus more on tonic actions, combining yin tonics (for those showing more evident signs of yin deficiency) or tonics for the qi and essence (for those showing an overall deficiency syndrome). The formulas for leukemia patients do not differ significantly from those that might be used in various types of cancer in which the patient is receiving chemotherapy. That is, the fact that leukemia is being treated does not strongly influence the selection of herbs or formulations. The patient ought to be evaluated for evidence of the common problems of blood stasis, yin deficiency, qi deficiency, essence deficiency, organ swelling, etc., and treated accordingly. In each case, a total daily dosage of about 200 grams is recommended in this book. These dosages are impractical for American patients, who are not used to taking such decoctions. Even when using dried decoctions, a total daily dosage of 30 grams of the powders (corresponding to about 150 grams of crude herbs in decoction) is usually deemed too much. However, a lower total dosage might be used successfully if a smaller number of herbs are selected and if they closely match the Oriental diagnosis of the patient's requirements.

In some of the Chinese reports on treating leukemia patients, general anticancer herbs that are non-toxic or of low toxicity, such as scutellaria, oldenlandia, solanum (lyratum or nigrum), and paris, are included in the formulas. It is not known at this time whether these have a specific antileukemic effect. Chinese physicians apply herbs according to traditional principles to treat certain characteristic symptoms of leukemia.
These include a range of symptom-specific agents (9). In each case, these agents are selected according to standard principles of Oriental diagnostics and prescribing, with no special reference to leukemia as the source of the symptoms. One can see from untreated cases that this disease would be diagnosed as a syndrome involving pathogenic heat in the blood. Such a condition would help to explain several symptoms, such as fevers, bleeding (and the appearance of purple maculae), skin eruptions, and ulceration and swelling in the oral cavity. Not surprisingly, the traditional remedies are often based on formulas for such symptoms as they might arise from other causes as well. The Rhino and Rehmannia Combination is one such formula. The traditional formulas can be modified by adding one or more blood tonics to address the anemia, and one or more herbs for dispersing accumulations to treat the organ swelling and accompanying aching. The Danggui Luhui Wan formula, which includes indigo and was the basis of the anti-CML pill, is an example; it contains saussurea, rhubarb, aloe, and musk to help get rid of accumulations.

In a recently published clinical trial (13), patients with chronic granulocytic leukemia (a type of CML) were treated with herbs according to syndrome differentiation, mainly in the categories of qi/blood deficiency, liver/kidney deficiency, or blood stasis. The details given were sketchy, but all patients received the anticancer herbs scutellaria, oldenlandia, and lasiosphera (puff-ball mushroom, most often used to treat lung ailments), and the tonic herbs codonopsis and peony, along with herbs that were specific for the syndrome (examples: tang-kuei and astragalus for qi/blood deficiency; rehmannia and ophiopogon for liver/kidney deficiency; sparganium and zedoaria for blood stasis). If the pathological condition became serious during the treatment period (or was quite serious at the outset), chemotherapy was also used. Either busulfan or hydroxyurea was given for this purpose. Other herbs or drugs could be given to treat specific symptoms, such as infections, bleeding, or severe anemia. The short-term results indicated that after six months of treatment, 69% of patients had complete remission, 25% had partial remission, and only 6% did not respond. Treatment continued beyond six months. The ultimate results of therapy were measured in terms of survival time from initial diagnosis. Of 80 patients, 37 lived less than 3 years, 16 lived 3-4 years, and the remaining 27 patients lived for 5 or more years. The median survival time was 3.8 years. While the remission rates reported in this article are quite good, the survival rates are not significantly better than those reported in the U.S., where indirubin and complementary Chinese herb therapies have not been used. However, it is possible that the clinic was presented with more severe cases of leukemia to treat. In some earlier studies, median survival times of 5 years were reported.

In another trial of indirubin plus herbs for chronic granulocytic leukemia (14), patients were treated with indirubin, 50 mg each time, 3 times daily, with herbs given according to syndrome differentiation. Complete remission was attained in 40% of the patients and partial remission in 50%, with no benefit in 10%. About 15 days of treatment were required to reduce spleen enlargement, with normalization after about 40 days. Leukocyte levels began to decline after about 10 days, with 60 days of treatment required to reach the normal range in responsive patients.
Long-term results were not reported. This study reveals that one can monitor the effectiveness of the treatment by checking leukocyte levels over a two-month period.

Patients with leukemia, especially chronic leukemia (the form most likely to be presented to practitioners of Chinese medicine), are often depressed and debilitated when they present themselves for treatment. The depression may result from having a diagnosis of cancer, especially one with a poor prognosis, and from any failures up to this point in gaining an adequate resolution. The debility may result from the effects of the disease and/or the treatments that have been tried thus far; also, chronic leukemia is usually seen in older individuals, who may experience debility from poor nutritional habits, lack of exercise, and other characteristics common to elderly persons with chronic disease. Leukemia may arise at this time of life from insufficient immune functions, lack of antioxidant activity, and stresses that permit chronic viruses to activate.

The first thing a practitioner can do is assure the patient that the poor prognosis of the disease may be improved by nutrition and herbal therapy. Although there are a number of clinical reports from China that describe disease remission (and improved rates of remission compared to using chemotherapy alone), one should be cautious about promising too much from the treatments to be offered. There is reasonable evidence from China that leukemia patients can get symptomatic relief and may have a prolonged life span (perhaps 50% longer than that normally expected with standard chemotherapy) through the use of herbs and improved nutrition. Patients should be directed to consult with their oncologist(s) to get a clear picture of the prognosis that is expected with a chemotherapeutic or other intervention that is currently being undertaken or might be pursued. Both the risks and benefits of the chemotherapy course should be weighed. If the prognosis is not very good and the adverse effects are troubling, then one might pursue a course of natural therapies only, with the knowledge that the prognosis (in terms of survival time) is not necessarily improved but the adverse effects of chemotherapy might be avoided. If the prognosis is good, then a combined therapy, using Chinese herbs and nutrition in a supportive role, might be a valuable course of action. Some oncologists might even be willing to entertain a program of alternating chemotherapy and herbal therapy for patients who are not open to using the standard chemotherapy.

When there has been chemotherapy failure or when there is refusal to use chemotherapy, a treatment similar to the anti-CML pill may be of value for myelogenous leukemias. This approach usually relies on the use of indigo as the primary therapy, with various supporting herbs included in the treatment, probably in the form of dried decoctions (a formula designed by the practitioner), or, if the patient is unwilling to use that method, tableted formulas (in large quantities). A monthly blood draw can be used to monitor the progress of the treatment, along with the standard examination of symptom changes. Indigo can be provided in capsules, starting at a dose of about 1 gram per day and working up to 6 grams per day, or up to tolerance levels. Once the maximum level has been determined, herbal combinations should likewise be administered starting at a modest dosage and working up to the maximum possible dosage.
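Purely as a bookkeeping illustration of this kind of stepwise escalation (hypothetical, not a clinical protocol from the source), the following minimal sketch computes a dose schedule that rises from a starting dose toward a target but never exceeds the patient's tolerance level; the step size and the sample tolerance value are illustrative assumptions only:

```python
# Hypothetical illustration only -- not a clinical protocol from the source.
# Build a stepwise dose-escalation schedule: start low, raise the dose by a
# fixed step, and stop at the target or the tolerance ceiling, whichever
# comes first.

def escalation_schedule(start_g, target_g, step_g, tolerance_g):
    """Return the planned daily doses in grams, one entry per step period."""
    ceiling = min(target_g, tolerance_g)   # never plan beyond tolerance
    dose, schedule = start_g, []
    while dose < ceiling:
        schedule.append(dose)
        dose = min(dose + step_g, ceiling)
    schedule.append(ceiling)               # hold at the maximum reached
    return schedule

# Indigo capsules from 1 g/day toward 6 g/day in 1 g steps, assuming this
# particular patient's tolerance turns out to be 4 g/day:
print(escalation_schedule(1, 6, 1, 4))     # -> [1, 2, 3, 4]
```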
For example: using decoctions, begin at 30 grams per day; using dried decoctions, start at 6 grams per day; using tableted formulas, start at 9 tablets per day. Let the patient know in advance that this approach involves large doses of natural materials. The ability to ingest and tolerate large amounts of herbs (and, possibly, nutritional supplements) may be a key to success with this natural approach.

It has been reported that indirubin used for an extended period of time may cause pulmonary hypertension and cardiac insufficiency in some patients (10). This effect, which occurred in persons treated for 9 months to 3 years, was slowly reversed when the indirubin was withdrawn. Presumably, this adverse reaction could occur with administration of the higher doses of natural indigo, as it contains indirubin. Therefore, any patients who are to be treated for 9 months or longer should have their cardiac function monitored. While specific nutritional approaches have not been developed for leukemia, certain general methods can be applied.

The relative scarcity of reports in the Chinese literature regarding treatment of chronic lymphocytic leukemia (CLL) with herbs is mainly due to the fact that CLL is very rare in the Orient, for genetic reasons. Although a few recipes for treatment (based on traditional principles for treating the symptom presentation) have been published, evidence for their effectiveness is lacking because of insufficient case studies. The approach to be taken would be to treat according to the traditional syndrome differentiation, in combination with standard chemotherapy when possible. Chronic granulocytic leukemia appears to be the type of leukemia most intensively investigated, no doubt as a response to early findings of benefit from using traditional Chinese herbal materials, coupled with a relatively high frequency of incidence in China.

Acute leukemia is always treated with chemotherapy, with reported improvements attained by adding Chinese herbal therapies. In the book An Illustrated Guide to Antineoplastic Chinese Herbal Medicine (18), the following formulas for acute leukemias are relayed from the medical literature of the period 1981-1985:

Toad Skin Wine: 1.8 kg toad, with viscera discarded, in 1.5 liters of wine; dosage of 15-30 ml three times per day after meals, for various kinds of leukemia, especially ALL.

An Lu San: made of centipede, scorpion, silkworm, and eupolyphaga (in equal amounts, ground to powder; administer 0.3-1.0 grams each time, three times daily) for all acute leukemias.

Anti-leukemic mixture: with lonicera, rhaponticum, scutellaria, dandelion, viola, millettia, cuscuta, salvia, epimedium, and coptis, prepared as a decoction with 6-10 grams of each herb (except coptis, 3 grams); administer 25 ml each time, twice daily. When the acute crisis is relieved, make a pill using deer antler as the main ingredient, plus ginseng, peony, rehmannia, ho-shou-wu, lycium fruit, zizyphus, salvia, epimedium, schizandra, tang-kuei, astragalus, sesame oil, carthamus, cnidium, and a small amount of realgar (1.5%). Administer the pill (with about 130 mg of herb ingredients) twice daily for maintenance. This treatment is indicated for acute leukemia in children (probably for ALL).

Oldenlandia Decoction: with oldenlandia, isatis root, solanum lyratum, trichosanthes fruit, paris, lithospermum, and belamcanda. Each ingredient is 15-30 grams, except belamcanda, 9 grams. Used for acute leukemias.
Acute Leukemia Decoction: with rehmannia, hoelen, astragalus, oldenlandia, solanum nigrum, sophora subprostrata, lithospermum, dioscorea, cornus, cistanche, morinda, psoralea, ginseng, ophiopogon, schizandra, and tang-kuei (each ingredient 10-30 grams, except tang-kuei, 6 grams); indicated for non-lymphocytic leukemia, as an adjunct to chemotherapy.

Anti-Cancer Formula 7: with oldenlandia (75 grams), solanum nigrum (60 grams), coix (60 grams), san-chi (9 grams), dioscorea bulbifera (6 grams), and mume (6 grams); indicated for acute granulocytic leukemia.

Toad Skin and Scutellaria Decoction: with toad skin, scutellaria, isatis root, rhubarb, solanum lyratum, paris, lithospermum, and belamcanda; all ingredients 15-30 grams, except toad skin (9-12 grams) and belamcanda (9 grams). Indicated for acute granulocytic leukemia as a supplement to chemotherapy.

In a study of acute leukemia patients (15), 18 cases were treated with Chinese herbs plus chemotherapy and 21 cases were treated with chemotherapy alone. The herbal therapy, in the form of a decoction, was comprised mainly of tang-kuei, cnidium, millettia, red peony, carthamus, and san-chi. The treatment group had a higher percentage of cases gaining remission (89% vs. 57%) and a longer mean survival time (13 months vs. 7 months).

Another study (16) involved 70 acute leukemia patients divided randomly into two groups, one receiving chemotherapy alone and the other receiving chemotherapy plus herbs in decoction. Those receiving herbs were divided into three groups according to syndrome. The Western medical treatments were also differentiated according to whether the disease was the acute lymphocytic type, acute non-lymphocytic type, or acute granulocytic type. Of the 35 patients in the integrated therapy group, 69% had complete remission and 20% had partial remission, with 11% not improved; of the 35 patients in the Western medicine group, 43% had complete remission and 20% had partial remission, with 37% not improved. Survival time was reported to be longer in the integrated group than in the partial remission group, but the details in the report were unclear.

An article (17) describing treatment of a small number of patients with refractory recurrent acute leukemia indicated that those who failed to attain remission by chemotherapy alone could sometimes gain benefit from combined therapy with Chinese herbs. Sixteen patients were treated according to syndrome differentiation, all having internal pathogenic heat, with four subtypes: qi and yin deficiency; damp-heat plus blood stasis; phlegm nodules; and blood stasis with movable mass. In each case, a decoction was given to address the syndrome. It was reported that 10 of the patients had complete remission and 2 patients had partial remission as a result of 1-4 months of therapy, an average of 3 months.

Cheung CS, et al., translators, "Leukemia: understanding and treatments in traditional Chinese medicine," Journal of the American College of Traditional Chinese Medicine 1982; (1): 73-85. [translation of a Chinese report by the Hematology Group of Xiyuan Hospital in Beijing, originally published in 1976, just prior to the introduction of indigo-based treatments for leukemia]

Jia Kun, Prevention and Treatment of Carcinoma with Traditional Chinese Medicine, 1985, Commercial Press, Hong Kong. [includes a chapter about using Ping Xiao Dan plus other therapies to treat leukemia]
Shi Lanling and Shi Peiquan, Experience in Treating Carcinomas with Traditional Chinese Medicine, 1992, Shandong Science and Technology Press, Shandong. [this book has a section on leukemia that simply lists several "proved recipes," that is, formulas that appeared in journal articles or books with case presentations illustrating positive results]
Polemicist and journalist Christopher Hitchens, who died in December at 62 after a battle with esophageal cancer, was celebrated Friday as an incorrigible contrarian, dazzling public intellectual, obdurate justice seeker, and passionate bon vivant in a star-studded memorial service at New York’s Cooper Union. Yet “service,” as in pious activity, is probably the wrong word—for Hitchens was famously an adamant atheist, and his 2007 faith-debunking bestseller God Is Not Great: How Religion Poisons Everything was the most successful of his 12 books and five essay collections.

“Shortly after his death, I was interviewed by an annoying interviewer on CNN,” theoretical physicist Lawrence Krauss told the capacity crowd of around 800, which included many of the leading figures in literature, journalism, science, and entertainment that Hitchens counted as friends, notably Hollywood actors Sean Penn and Olivia Wilde (who confided that Hitchens, a close pal of her parents, “was a wonderful babysitter”). Krauss went on to say that the unnamed CNN personality introduced the Hitchens segment thusly: “On the one hand, he inspired the ideals of skepticism, free inquiry, and rational thought, but at the same time has been called a bullying, lying, opportunistic, cynical contrarian. She said that as if it were a bad thing.” Big laugh from the audience—one of many moments of hilarity throughout the two hours of remembrances by friends and family and readings from Hitchens’s prolific body of work.

His writings—often dashed off while he sat on a barstool yet informed by amazing erudition—appeared everywhere from The Nation to Newsweek to Vanity Fair, where he spent more than a decade as a marquee columnist. Vanity Fair editor Graydon Carter, who organized the event, called Hitchens “a man of ferocious appetites—for Scotch, for cigarettes, and for talk. That he had the output to equal what he consumed was the true miracle of the man.” Carter added, “He wrote fast, frequently without benefit of a second draft or even corrections.” He was “an editor’s dream and he was a reader’s dream,” Carter continued, noting that Hitchens possessed “a legendary memory that held up even under the most liquid of late-night conditions.”

Hitchens’s prodigious drinking and smoking—documented by numerous photographs and a tailor-made documentary projected behind the stage—was a leitmotif of the memorial, as was his insistence on leaving “the cozy cocoon of conventional liberal wisdom,” as Carter put it, to back George W. Bush’s war in Iraq, savage the sainted Mother Teresa as a fraud and hypocrite, and pursue Henry Kissinger as an evil war criminal. Richard Nixon’s former national security adviser and secretary of state, generally one of the more sought-after eulogists whenever a VIP passes away, was understandably not in attendance.

Sitting in the invited audience, however, were media mogul Tom Freston, writer and director Nora Ephron, 60 Minutes correspondent Steve Kroft, New Yorker editor David Remnick, Newsweek and The Daily Beast editor Tina Brown, and London lawyer Eleni Meleagrou, Hitchens’s ex-wife and the mother of two of his three children, Alexander and Sophia. Carol Blue, his widow and the mother of his daughter Antonia, joined his son Alexander Meleagrou-Hitchens in reading excerpts from his writing. Among others who read from Hitchens’s work were playwright Tom Stoppard, novelist Salman Rushdie, and satirist and novelist Christopher Buckley, along with Penn and Wilde.
Geneticist Francis Collins, director of the National Institutes of Health, who helped guide Hitchens’s cancer treatment, played a piano piece that he composed in honor of the writer, after noting that they became warm friends even though “I am a follower of Jesus Christ.”

Eulogist Martin Amis, the famed novelist and Hitchens’s close friend since their Oxford days, cheekily recalled that his pal was a “self-mythologizer” who “often referred to himself in the third person,” as in “The Hitch.” Whenever an injustice occurred, Hitchens would declare, “The pen of the Hitch will flash from its scabbard.” Once, when they were strolling toward a movie theater in Southampton, N.Y., Amis recounted, he teased his friend that “no one has recognized The Hitch for at least 10 minutes.” “And he said, ‘Longer. It’s been at least 15 minutes.’”

British actor and playwright Stephen Fry, memorializing Hitch the hedonist, recalled that he maintained that “the most overrated things in life were champagne, lobster, anal sex, and picnics.” Fry, who is out and proud, waited a beat before adding, “Well, three out of four!”

It would have tickled Hitchens that his memorial started and ended with a rousing recording of “The Internationale,” and it probably wouldn't have bothered him excessively that afterward, once the mourners were outside on the sidewalk, clouds of cigarette smoke wafted over their heads.
Posted on Wednesday, January 5th, 2011 by Russ Fischer

Late last year, during his battle with the MPAA, Blue Valentine director Derek Cianfrance said that he was developing an HBO series that would hopefully “give new meaning to the word ‘character development’.” Given the intensely emotional angle of Blue Valentine, that cryptic statement was taken in exactly the misdirecting manner I suspect the director intended. Now we know a little more. His HBO series is based upon the Sam Fussell memoir Muscle: Confessions of an Unlikely Bodybuilder and will be a single-camera comedy. Deadline says that Derek Cianfrance will co-write the script with the original author, and that Mr. Cianfrance is attached to direct the first episode. Both men will produce.

A review printed by Amazon sums up the book:

Teenage boys who a generation ago would have answered Charles Atlas ads will be attracted to this book about Fussell’s own immersion program in bodybuilding. He is an Oxford honors graduate in English language and literature and writes engagingly about what drew him into the subculture of gym life. He includes the reaction of his bewildered parents and describes the assortment of gym habitués who befriended him. This is no George Plimpton inside glimpse–the author lived the bodybuilding life full-time for four years, and he shares with his readers that life of mind-numbing exercises, fistfuls of vitamins, and steroid injections. This is destined to be a cult book that will survive because of its humor, its truth, and its fine writing.

Look also to a 1991 EW profile of the author to get some insight on how this might work as a show:

In Muscle, Fussell describes his journey into the inner circles of the gym world — the barf-fests, starvation diets, and grueling workouts — in terms worthy of Dante’s Inferno. He also touches on some risky subjects in American sports, including homosexuality and steroid use. But most of all Fussell, who used the chemicals himself in his obsessive quest to become ”the best,” attempts to describe the bodybuilder’s warped outlook as a metaphor for ”the hidden motivations that we all have. Life is a matter of theater and presentation and how much you choose to expose to the world of yourself,” he says.

Taking all this in, I automatically think of the good and underexposed documentary Bigger Stronger Faster, in addition to the obvious go-to films like Pumping Iron. There’s potential here, definitely, and I’ll be curious to see how the show manages to portray the main character’s physical changes, assuming it gets that far.
Challenging stereotypes about sport, physical activity and fitness

This past September, Laura Azzarito, Associate Professor of Physical Education, joined TC’s faculty in the Biobehavioral Sciences Department, arriving from Loughborough University, U.K. The native Italian studies the way gender and sex, race and ethnicity, and social and class discourses in school affect young people’s sense of body image, and how young people’s views of the body impact their participation in physical activity. In the spring semester of 2012, Azzarito is teaching a seminar course on research problems and methodologies in curriculum and teaching in physical education.

Q. Your research is directed at socio-cultural influences on young people’s embodiment in physical activity.

A. Yes. I’m interested in understanding how the ways we think about sport and physical activity in society inform young people’s views of their bodies and their (dis)engagement. And among other things, I’m committed to understanding how education can address young people’s issues of embodiment. I’m trying to develop and implement curricula in physical education that move beyond traditional physical education, which is very constraining – just kicking the ball, for example – to curricula that can promote meaningful engagement in physical activity for all young people.

Q. What are some of the obstacles to promoting physical activity?

A. There are many. We are living in a society where physical education is increasingly becoming a contested terrain – socio-cultural, economic and political forces enter the gym in powerful ways. We are struggling with the so-called obesity epidemic and sedentary lifestyles among young people. But at the same time, what we need to recognize is that public concern about the obesity epidemic is also creating a lot of anxiety among young people in terms of issues of body size, shape and muscularity. This is where my research focuses. This anxiety may really impact, in a negative way, young people’s engagement in physical activity.

Let’s say I’m a young person who has a little bit larger body size, and in P.E., the teacher measures my B.M.I. [body mass index]. I’m active, but my B.M.I. is high compared to my peers’. These teaching approaches in PE may negatively impact my self-concept, and over time I may actually stop doing physical activity, because I don’t feel good about my body. So, yes, we do have obesity epidemic issues that we need to be concerned about. But we also need to be concerned that this over-emphasis on issues of size and shape may be very detrimental to young people’s views of their bodies.

We also have the media. Young people learn all kinds of messages from the media in terms of gender, race, size, shape and muscularity. We know that the media creates narratives about ideal bodies, bodies that are impossible to achieve. Who is the sporting body, who is the slim body, who are the healthy, fit bodies? Women’s bodies are usually represented as very thin, while men’s bodies are usually represented as very athletic, strong and muscular. That can create another source of preoccupation for girls and boys, if they don’t feel like their bodies reflect those kinds of ideals. I’m interested in creating an educational space that welcomes and encourages young people to share their concerns, to think about these messages in a critical way, to think about themselves in relation to these messages.
We also have the media. Young people learn all kinds of messages from the media in terms of gender, race, size, shape and muscularity. We know that the media creates narratives about ideal bodies, bodies that are impossible to achieve. Who is the sporting body, who is the slim body, who are the healthy, fit bodies? Women's bodies are usually represented as very thin, while men's bodies are usually represented as very athletic, strong and muscular. That can create another source of preoccupation for girls and boys, if they don't feel like their bodies reflect those kinds of ideals. I'm interested in creating an educational space that welcomes and encourages young people to share their concerns, to think about these messages in a critical way, to think about themselves in relation to these messages.
Ultimately, I'm thinking about how physical education can promote "body talk" – how to create an educational space where young people become active agents in challenging and negotiating these taken-for-granted assumptions about the body. I'm hoping physical education can empower young people to learn how to challenge media narratives and, at the same time, engage in physical activity practices that allow them to become who they want to be.
Q. What are the connections between physical activity and gender, race, social class, and sexuality?
A. Well, for example, there are particular sports that, historically and socially, have been constructed as masculine, and other sports that are socially constructed as feminine. When girls, for instance, perform or engage in particular sports that are viewed as masculine, like American football, they are in some way seen as butch, lesbian, or like men. They occupy spaces that are traditionally occupied by men. The same goes for boys who don't like to play football, who are not aggressive, who don't display a particular behavior that is viewed as masculine – being aggressive, being forceful, being very muscular or big. They are seen as sissies, as girls. This is what creates homophobia in sports. Homophobia is an ongoing issue in sports and in physical education, and it is a problem especially for young people who do not perform normative gender behaviors in sport. Homophobia is very difficult to eradicate from sport and P.E. settings. In terms of race and physical activity, we can also think about how this plays out in certain sports. For example, the overrepresentation of African Americans in, say, basketball, track and field, or football is often explained through stereotypes of black physical superiority and intellectual inferiority. As a result, young black people are often channeled into these sports rather than academics, and at the same time, white young people are steered away from pursuing these sports. This is also a social class issue. Research shows that some black young people, particularly boys, embody "hoop dreams," the false idea that they can become successful in society through sport rather than through academics.
Q. How does participation in a certain physical activity or sport empower people?
A. Young people are empowered when we offer them a wide range of physical activities where they can explore themselves in a safe place and find something that they are interested in and want to pursue for the rest of their life. They are also empowered when we enable them to think about gender issues, or homophobia, or race in relation to their bodies and physical activity – because if they don't think about who they are, and if we keep teaching just skills-based physical education, then it's all simply about whether they perform skills or fail. We have 40 years of research showing that the traditional, multi-activity, sport-based curriculum has failed to engage young people in physical education. P.E. curricula are empowering for young people when they engage them in learning about the ways knowledge in physical activity, sport and fitness is constructed in society, and how this knowledge is relevant to their daily lives and to who they are.
Q. How has obesity played out as a racial and class issue?
A. There is now a substantial amount of data that says ethnic minority young people and young people from low socio-economic backgrounds are more likely to be and become obese and inactive.
But we also know that as a result of poverty, many young people might not have opportunities or space for physical activity in their community, or access to good food. They are not the ones who are able to buy organic food, to buy healthy food. There are some young kids who after school need to worry about working, about making money, rather than doing physical activity. They don't really have the choices, opportunities, or access to sport clubs, which are very expensive.
Q. Writers like Diane Ravitch have suggested there is a trend toward privatizing public education. Is that true, and is it part of why physical activity has become prohibitively expensive for some kids?
A. There is very much a trend toward privatizing physical education. It's a product of globalization. Fitness and health corporations say, "We'll give you all this equipment if you have all the kids play our games or use our products." They may also take over physical education if we are not careful. They are really impacting physical education in a negative way. Privatization reclaims traditionally low-status school P.E. as a corporate vehicle for simply managing young people's body weight. Privatization produces top-down health and fitness curricula, which do not promote authentic learning but rather simply aim to discipline, regulate and control young people's bodies. Teachers and researchers need to take a critical stand against this.
Q. How has the field of physical education changed in ways that relate to your work?
A. There's been a shift from the behavioristic approach that dominated the teaching of physical education 10 or 20 years ago to thinking about physical education from a more constructivist perspective.
Q. I hear the word "constructivist" and I think, John Dewey.
A. Yes, that's right. This is what Dewey always said: learning occurs through experience and reflection on the experience. Reflection is crucial for young people to become critical agents. What does this mean in physical education? For students in my classes, what I'm hoping to do is engage them in critically thinking about the current problems school physical education is facing today. My goal is for students themselves to become critical agents for social change. So we read literature that covers critical theories, including issues of social justice in education and school physical education. The idea is to think about ways to analyze physical education curricula in a real school context and try to identify problematic issues, based on the literature they read.
Q. And you also teach them to look critically at media?
A. Yes, absolutely. It's called critical media pedagogy. We look at the ways the media delivers particular messages about the body, and for many students, it's eye-opening. They'll say, "Oh, I remember when I was young, and my peers didn't want me to be on the cheerleading team, because cheerleading is traditionally white." Another student, a boy, talked about his experience in wrestling and how much pressure he always felt to lose weight. Critical media pedagogy helps them connect the ways they saw and experienced their bodies to messages consciously or unconsciously learned in society.
Q. You use photography in your research.
A. Yes. In a recent research project, I asked my student participants to create visual diaries to represent their own experiences in physical activity.
It was very engaging for them to do a project that was creative, where they became empowered through photography. Photography enables young people to speak about their daily lives and identities. Another aim of this research was to employ a more "collaborative," "power-leveling" methodology that decreased power differences between the researchers and the young people who participated. The young people's photography was exhibited at museums, community art centers, schools and academic conferences.
Q. Why did you come to Teachers College? Why did you think you could do this work here?
A. I think my work fits very well with the vision of Teachers College for a number of reasons. First, Teachers College is traditionally viewed as a place that really puts emphasis on issues of social justice, and all my work aims to address issues of social justice. These are issues of gender, race, homophobia, sexuality and so on, among young people, in the context of physical activity. The second reason is, I think, the way Teachers College welcomes interdisciplinary research, and my work is interdisciplinary. It crosses arts education, physical culture, physical education, and pedagogy in education. Third, I am thinking about new ways to create physical education contexts that are engaging and meaningful for young people in school. This is what I'm hoping to do. I feel so much of the work at Teachers College has been influenced by philosophers like John Dewey and Maxine Greene. Their work has been so inspirational for me in terms of thinking about holistic, meaningful and critical education for all young people. My ideas of physical education are very much informed by this. This is one of the reasons I came here. TC is an ideal fit for my research.
Spanish, English Literature, and English as a Second Language
I am a bilingual professional with educational and translation experience who is seeking a Spanish teaching position. I am currently planning on going through New Jersey's Alternate Route program to become a certified teacher. I have passed...
Words are my friends. Let's make them yours – pictures too. You can expect patience and kind attention to whatever you would like to learn. When I don't have the answer off the top of my head, I make finding it out an amiable and interesting experience. My bachelor's degree...
"Senior Software Engineer" available for Computer & Math Tutoring
I'm a current Senior Software Engineer and former High School Mathematics teacher. Since I've often been approached for tutoring assistance, I figured I may as well make it official! Any correspondence would be most appreciated...
I have two years of experience as a high school science teacher - biology and physical sciences. I student taught middle school science - chemistry, biology, physical science. I also have over 3,500 hours of experience as a swim coach...
I am a Rutgers graduate with a Master's degree in behavioral neuroscience. I am proficient in mathematics, statistics and programming, with solid experience teaching, mentoring and tutoring. I earned a B.S. degree in Psychology and a Master's...
Are You a Born Consultant?
Published: December 17, 2012
There is a lot of literature out there that discusses whether leaders are born or made. Before I became a consultant, I weighed in on the born side of things. A good leader is shaped by events and experiences, but always has that leadership spirit within. I started thinking recently about whether consultants—specifically, UX consultants—are born or made. As someone who works for a software vendor with a wide range of clients, I've had the fortunate experience of not only being a consultant myself and working with a team of consultants, but also of interacting directly with other groups of UX consultants. Much like leaders, some people are just born consultants. They just have it. They intrinsically know what to say, what to do, and how to do it. My grandfather used to tell me: "There are two real reasons to work: for enjoyment and for money. And unless you have the latter, the former will be hard to come by." My grandfather was right about a lot of things, but on this point, I think he was a little off. Really good consultants make money because they enjoy what they do. Because they enjoy it, they are good at it. To get to that point, either you were born to this line of work or you'll have to work harder than you ever thought possible to become a great consultant.
The UX Flavor of Consulting
General books, seminars, lectures, and classes on consulting are out there in force. When it comes to user experience, though, I find that the general information out there is not all that helpful to someone trying to learn how to become a really proficient UX consultant. This may be because the field of user experience is so vast and diverse that it is hard to find what really applies to you without a lot of searching. Moreover, as UX consultants, we may either do many different things or be very specialized. As 2012 comes to a close, I have been spending a lot of time looking at the various types of engagements in which my team and I have been involved. We have spent time doing everything from UX workshops, strategic design, wireframing, user research, and usability testing to UI configuration and branding. The complexity of each one of these flavors of UX consulting is compounded by the fact that we deal with an incredible number of different industries. Sometimes we engage with business, at other times with IT, and sometimes internal UX teams engage our services. Handling the diversity in our engagements requires the ability to smoothly transition between contexts and anticipate the varied expectations of the people who hire us.
Being a Chameleon
One of the best compliments I have ever received as a UX consultant was from a new hire who was shadowing me. He spent all day with me, from the airport to a late-night customer dinner. Even before arriving at the customer site, he saw me interact with airport security staff, customs agents, taxi drivers, and hotel staff. Once we got to the customer site, we sat with hourly call-center employees to hear what they liked and disliked about our product and how they used it. We then met with various levels of IT and business people, until we finally had dinner with the organization's CIO and various VP-level business people. At the end of the day, I asked our new hire how he enjoyed himself. While he did not have a consulting background, he did have very deep UX expertise, and I suspected that he just had it when it came to consulting.
He turned to me and said that he had not realized the level to which a UX consultant has to recognize and care about all the various levels of people he interacts with. He said that, as a UX professional, all he wants to do is design and create better user experiences. He then proceeded to tell me that he felt I was like a chameleon that could change its appearance depending on its environment. I responded that, although chameleons change their colors depending on their environment, underneath it all they are still chameleons. You cannot be a great UX consultant and create stellar experiences if you don't love interacting with and learning about people and what makes them tick. Don't change yourself and your beliefs, but allow yourself to blend with others to create positive change. As UX consultants, we are not just helping people to solve problems, we are designing solutions that affect organizations and structures that are far more complex than ourselves. UX designers who just design based on their own views will be limited in their greatness.
Smashing a Square Peg into a Round Hole
I believe that some of the struggles UX consultants experience come from having to do a job that oftentimes goes beyond what we are trained to do. UX consultants almost always have a strong creative side, yet we are also expected to work like IT and business people. We work in that very common medium of the user interface, or presentation layer, so we are often called upon to be mediators, bringing into the same room different groups of people who would otherwise never meet, to discuss important topics. While we may design user experiences for our client's customers, they also expect us to craft experiences that disparate groups of internal customers can agree on and get behind. That's tough work for any level of consultant.
Back to the Question
This column's title asks whether UX consultants are born or made. I personally feel that I was born to this line of work. Many aspects of consulting come easily to me, and I truly enjoy it. Yet I've also found that great UX consultants can be made. That new hire I mentioned earlier? I turned out to be wrong about him. He struggled greatly with many of the consulting aspects of the job. He had great UX skills, but was not always able to convey his ideas to clients. However, through mentoring and a great deal of hard work on his part, he turned himself into a world-class consultant. The ability to be a great consultant is about your mindset and will. Whether you are naturally suited to consulting or have to work hard to get there, it is these two personal attributes that will steer you in the right direction.
What's more disturbing, actually, after one digs into the matter a little, is the dismayingly docile role played by the Yale Jewish community, its Hillel-like Slifka Center and its most prominent rabbi, James Ponet (who was a contemporary of mine at Yale). I'm troubled by the community's compliant refusal to resist the hastiness of the decision to kill YIISA, and by its inability to foster some discussion of what the hastily cobbled-together new acronym institution will be doing. The professor named to head it, Maurice Samuels, is well-liked (and in an email to me, he assured me that he would never disparage "advocacy" against anti-Semitism), but he has focused his academic work on the image of the Jew in 19th-century French literature. Some wonder whether this background is sufficient for the task of examining contemporary anti-Semitism. A brief chronology to put this in perspective. YIISA, founded six years ago on the initiative of respected sociologist Charles Asher Small, was up for routine review. The review followed the August 2010 conference held by YIISA on global anti-Semitism. Abby Wisse Schachter, who I believe was the first to report on this scandal, quoted Yale Deputy Provost Frances Rosenbluth, who said at the time of the conference that YIISA was "guided by an outstanding group of scholars from all over the university representing many different disciplines." But after criticism of the conference by the official PLO "ambassador" and various anti-Israel bloggers, on the grounds that the study of Islamic anti-Semitism is prima facie "Islamophobia," the conference on worldwide anti-Semitism seemed to prompt a curious turnaround at Yale on the issue of YIISA and Yale's faculty. Suddenly—surprise!—the "faculty review" of YIISA discovered, contra Deputy Provost Rosenbluth, that YIISA hadn't involved the faculty sufficiently. Rabbi James Ponet actually told me that YIISA's key mistake was holding the conference in August, when the faculty would be away enjoying their time shares or whatever urgent vacation plans they had. It seems to me that any lack of faculty participation in YIISA events throughout the years should be laid at the door of the Yale faculty, which did not give the danger of worldwide anti-Semitism a high priority before, during, or after their precious beach time. But the truly dismaying aspect of the affair to me was the timid and compliant response of the Jewish community at Yale and its representatives. When an institution like Yale, which had engaged in anti-Semitic practices for at least a half-century, kicks out an institute for the study of anti-Semitism based on a secret faculty report, does the Jewish community, led by its Slifka Center—and its rabbi, Ponet—insist on transparency? Or, at the very least, request that Yale release its critical report, insist on some time to evaluate it, see what YIISA's response was, seek a solution that would preserve five years of valuable work and study? Why not consider ways of improving YIISA if necessary? No. Instead of resistance, or at least investigative wariness, the Yale Jewish community rolled over and chose not to rock the boat. In fact, Ponet sent cheerleading emails to me and other concerned alumni asking us to send messages of support to the Yale administration in favor of the killing of YIISA and the substitution of YPSA.
The most stressful moment in the long, uncomfortable email exchange I had with my classmate Rabbi Ponet came when I asked him what he meant when he said Yale acted "foolishly" in the initial stages of the controversy. I was stunned by his answer. He said that by "foolishly" he didn't mean it was foolish of Yale to throw YIISA under the bus for secretive reasons. No, it was foolish because Yale didn't have its substitute, the Yale Program for the Study of Antisemitism, "fully in place". So it was not a "foolish" decision on the merits, he seemed to be saying; it was just the inept spinning when Yale killed YIISA that troubled him. Better spinning, of course, would have meant a smooth upgrade in acronyms, not the stealth bureaucratic assassination that was exposed by Yale's foolishness. It would have made the killing of YIISA for "advocacy" against anti-Semitism less of a scandal. They didn't "have it in place." Ponet's line sounds like a description of inept maneuvering in the Bulgarian politburo before the collapse of the dictatorship. Thank you for your criticism, Comrade Ponet, these bureaucratic coups must run more smoothly. When I replied with astonishment that this was what he felt was the "foolishness" at the heart of the matter, Ponet, perhaps realizing he'd let something slip that he probably shouldn't have, fired off a Sarah Palin-like rant against the media, denouncing me for caring more about a "scoop" than the truth and demanding that I concede that academics were more concerned with truthfulness than journalists. I had to laugh at that one, since Ponet would have to be deaf, dumb, and blind not to have noticed that much of the postmodernist movement in the humanities at Yale is predisposed to deny the existence of truth and the "illusion of objectivity" and to exalt the idea of competing "narratives" that might all be "true" in a certain way. Since objectivity was an illusion, Ahmadinejad's "narrative" of the Holocaust, by these standards, must be considered as valid as anyone else's. Ms. Schachter even reported in her piece on YIISA that one Yale grad seminar actually met with the great Iranian thinker and heard (with no "advocacy," one hopes) his views on the Holocaust and the lack of "scientific" proof of it. In regard to academic truth and journalistic scoops, I asked Rabbi Ponet whether it was the Yale political-science department that uncovered the truth about Abu Ghraib, or the lowly reporters he sneered at, who risked their lives (not their time shares) to get the truth. Would he have preferred not to have had this "scoop" uncovered? He has yet to answer the email. Henry Kissinger famously said academic disputes are bitter because the stakes are so low. But here, alas, the stakes are high. Rabbi Ponet and Yale will have a lot to answer for as the lasting consequences of their foolish and compliant behavior in the YIISA affair become more apparent and frank discussion of anti-Semitism becomes verboten on American campuses.
So I got nominated for the Liebster Award and I went to look it up, since I've never gotten any awards and wouldn't know what one looked like. From what I've gathered, it's like a big game of tag played among bloggers, with the intention of getting blogs with fewer than 200 readers some more readership. There is no actual award being given out, but that's ok because I bought myself a crown and I'm mighty pleased. Also I get to put this badge on my website, which is pretty cool. I'm really grateful to Mike over at Fugitive Fragments for nominating me. Not only is he a wonderful reader with insightful comments, he also has a great blog with some lovely poetry and articles. So do check it out. Since heavy is the head that wears the crown, I must fulfill certain duties or tasks. They are as follows:
1. Share eleven facts about yourself with your fellow bloggers.
2. Make sure to answer the eleven questions posed.
3. Ask eleven questions of your own.
4. Nominate eleven bloggers for this award.
5. Notify the people you have tagged.
Task 1 – Eleven facts about myself
1. I hate feet.
2. When I was little, I used to think "Tom & Jerry" was one person and his name was "Tom B. Jerry."
3. I absolutely love all sports and watch and play as many as I can. I'm currently seeking funding for the FIFA World Cup in Brazil next year. (I love you mom and dad. And you, dear reader, look remarkable today.)
4. I was not particularly a dog person. Then this guy walked into my life. Marshall says "Hi!"
5. I love films, but I love television shows even more and think they are a completely underrated form of visual storytelling. Although the times, they are a-changing.
6. I love to drive. This does not mean I want to have endless conversations about torque and engine sizes and which speedometer shape is most effective, because I don't know the answer to any of them. (I'm not sure they're legit questions either.)
7. I've always wanted to be in a chase sequence – real or enacted; on foot or in a car.
8. I've been told by friends and family that I'm very stubborn. This is probably true but I'd never admit it to them. Ever.
9. I can't stress enough how little my tolerance is towards spicy food. My eyes immediately water and my nose turns red, which is quite an achievement for a dark-skinned person like me.
10. I'm dreadful with directions. No amount of instructions can help. I've actually gone around in a circle once. I'm convinced this is some sort of mental disorder.
11. I'd rather starve than eat something I've cooked up myself. So would you, believe me.
Task 2 – Now for the questions posed by Mike:
1. How old would you be if you didn't know how old you were? – 7 years old
2. If you were awarded the Nobel Prize in literature would you look back on that as the best moment in your life? – Probably not
3. If you could meet any public figure (past or present) who would it be? Why? – Chris Nolan, Tina Fey, JK Rowling – all three are writers and all three have played a pivotal role in my life, unknowingly. And yeah I cheated on this one. Boo.
4. Best British actor? – Hugh Laurie
5. World's best painting? – I don't understand art. At all. But I went to the Tate Modern in London once and saw this and liked it very much.
6. Religion – relevant or dated? – Irrelevant to me
7. Vegas or Venice? – Vegas
8. Would you accept a free holiday in North Korea? – No
9. Why are there interstate highways in Hawaii? – I think "Interstate" refers to the federal funding system rather than to actually crossing state lines, so highways funded that way keep the name even in Hawaii. Yes?
Hope so. I wasn't expecting trivia, okay!
10. What educational qualification does the President of Syria hold? – Unless I looked up the wrong guy on Wikipedia, he specialized in ophthalmology. Thanks to Mike I now know this.
11. Favourite French dish? (NOT a person) – I've never been to France, but I did go to a French restaurant once and my friend ordered me something. I've since forgotten what it was, but it was delicious. Yeah, I'm not earning any "classy" points, am I?
Task 3 – My nominees:
1. Muse Writer
Task 4 – the questions they have to answer:
1. What is your favourite element?
2. Which is your most favourite swear word? WHY?
3. If you could punch a famous person, who would it be?
4. If your food fell to the floor, how long before you consider it inappropriate to pick it back up and eat it?
5. Is "mint" a legit ice-cream flavour? Do you like it?
6. How are you feeling today?
7. Do you think we will blow ourselves up? Or do you still have faith in mankind?
8. Do you feel bad for Pluto (the planet)?
9. Favourite director/author/screenwriter?
10. What's love got to do with it?
11. Let's say you got famous for something, like writing a book, or curing a disease, or punching that celebrity in Q3. Which talk show host would you like to interview you about it?
Again, thanks Mike! And thanks to you for sticking with me long enough to read this. I hope you come back soon and often, and we can become total besties!
Also known as: Silambattam, Chilambam
Country of origin: India
Silambam is a weapon-based Indian martial art from Tamil Nadu, also traditionally practised by the Tamil communities of Sri Lanka and Malaysia. It is closely related to Keralan kalaripayat and Sri Lankan angampora. The term silambam derives from the Tamil word silam, meaning "hill," and the Kannada word bambu, from which the English "bamboo" originates. The term silambambu referred to a particular type of bamboo from the Kurinji hills in present-day Kerala. Thus silambam was named after its primary weapon, the bamboo staff. Masters are called asaan (ஆசான்) while grandmasters are addressed as periyasaan (பெரியாசன்) or iyan (ஐயன்). There are numerous styles of silambam such as nagam-16 (cobra-16), kallapathu (thieves' ten), kidamuttu (goat head-butting), kuravanchi, kalyanavarisai, thulukkanam, etc. The nillaikalakki discipline (from nillai meaning posture and kalakki meaning to disturb or shuffle) is the most widespread style outside India, and is best known in Malaysia. The styles differ from one another in grip, posture, footwork, length of the stick, etc.[1] Silambam may be practiced either for combat (போர்ச் சிலம்பம் por silambam) or purely for demonstration (அலங்காரச் சிலம்பம் azhangara silambam).
Oral folklore traces silambam back several thousand years to the siddha (enlightened sage) Agastya. While on his way to Vellimalai, Agastya discussed Hindu philosophy with an old man he met, said to be the god Murugan in disguise. The old man taught him kundalini yoga and how to focus prana through the body's nadi (channels). Agastya practiced this method of meditation and eventually compiled three texts on palm leaves based on the god's teachings. One of these texts was the Kampu Sutra (Staff Classic), which was said to record advanced fighting theories in verse. These poems and the art they described were allegedly passed on to other siddha of the Agastmuni akhara (Agastya school of fighting) and eventually formed the basis of both silambam and the southern style of kalaripayat.
References in the Silappadikkaram and other works of Sangam literature show that silambam has been practiced as far back as the 2nd century BC. The bamboo staff - along with swords, pearls and armor - was in great demand with foreign traders,[2][3] particularly those from Southeast Asia, where silambam greatly influenced many fighting systems. The Indian community of the Malay Peninsula is known to have practiced silambam as far back as the period of Melaka's founding in the 1400s, and likely much earlier. The soldiers of Kings Puli Thevar, Veerapandiya Kattabomman and Maruthu Pandiyar (1760–1799) relied mainly on their silambam prowess in their warfare against the British Army.[2]
Indian martial arts suffered a decline after the British colonists banned silambam along with various other systems. They also introduced modern Western military training, which favoured firearms over traditional weaponry. During this time, silambam became more common in Southeast Asia than in its native India, where it was banned by the British rulers.[4] The ban was lifted after India achieved independence. Today, silambam is the most famous and widely practiced Indian martial art in Malaysia, where demonstrations are held for cultural shows.
Silambam's main focus is on the bamboo staff. The length of the staff depends on the height of the practitioner. Ideally it should just touch the forehead, about three fingers below the top of the head, typically measuring around 1.68 metres (five and a half feet). Different lengths may be used depending on the situation; for instance, the sedikuchi or 3-foot stick can be easily concealed. Separate practice is needed for staffs of different lengths.
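The sizing rule above is simple enough to express as arithmetic. A minimal sketch, assuming roughly 0.05 m for "about three fingers" (the source gives no numeric value); for a practitioner of about 1.73 m this happens to reproduce the typical 1.68 m length quoted above:

    def staff_length_m(practitioner_height_m: float,
                       three_fingers_m: float = 0.05) -> float:
        """Rule of thumb: the staff reaches the forehead, about three
        finger-widths below the top of the head. The 0.05 m default for
        'three fingers' is an assumption, not stated in the source."""
        return practitioner_height_m - three_fingers_m

    print(round(staff_length_m(1.73), 2))  # 1.68 metres, roughly five and a half feet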
Listed below are some of the weapons used in silambam.
- Silambam: staff, preferably made from bamboo, but sometimes also from teak or Indian rose chestnut wood. The staff is immersed in water and strengthened by beating it on the surface of still or running water. It is often tipped with metal rings to prevent the ends from being damaged.
- Maru: a thrusting weapon made from deer horns
- Aruval: sickle, often paired
- Panthukol: staff with balls of fire or weighted chains on each end
- Savuku: whip
- Vaal: sword, generally curved
- Kuttu katai: spiked knuckleduster
- Katti: knife
- Kattari: native push-dagger with an H-shaped handle. Some are capable of piercing armor. The blade may be straight or wavy.
- Surul pattai: flexible sword
- Muchan / Sedikuchi: cudgel or short stick, often wielded as a pair
Kai silambam (lit. hand silambam)[5] is the unarmed set of techniques in silambam, also referred to by its main component kuttu varisai (குத்துவரிசை). First attested in Sangam literature of the 2nd-1st centuries BC, the term translates as "punching sequence," from kuttu meaning punch and varisai meaning order. Techniques incorporate striking, grappling, throws and locks. Partnered routines are between pairs at first before progressing to several partners at once. Preset forms gradually increase in complexity before students are allowed more and more freedom in their moves and counters. This is meant to teach alertness and how to quickly react to any situation in a fight, and is therefore used only sparingly at first. Over time, as such improvisations become more frequent, the students respond to each other with reversals and counters in a continuous unending flow, thereby naturally making the transition from arranged to free sparring. Like many other Asian martial arts, patterns in kai silambam make use of animal-based sets including the tiger, snake, elephant, eagle and monkey forms. Advanced students are taught varma ati, or the art of attacking pressure points.[6] Exercises in kai silambam include the following.
- Thattu Padom: sequences that can be practiced alone or with partners
- Adi-varisai: solo routines
- Kuttu-varisai: the main component, progressing from preset partnered forms to free sparring
- Pede-varisai: locking, tearing and breaking techniques, targeted at the joints, muscles and nerves
- Nelaygal: holding a stance for long periods, even several hours at a time; this exercise is commonly compared to an idol or statue
The first stages of silambam practice are meant to provide a foundation for fighting and to condition the body for the training itself. This includes improving flexibility, agility, hand-eye coordination, kinesthetic awareness, balance, strength, speed, muscular endurance, and cardiovascular stamina.[1] Beginners are first taught footwork (kaaladi), which they must master before learning spinning techniques and patterns, and methods to change the spins without stopping the motion of the stick. There are sixteen footwork patterns, of which four are especially important. Footwork patterns are the key aspects of silambam.
Traditionally, the masters first teach kaaladi for a long time before proceeding to kuttu varisai, or unarmed combat. Training in kuttu varisai allows the practitioner to get a feel for silambam stick movements using their bare hands; that is, fighters get preliminary training with bare hands before going to the stick. Gradually, fighters study footwork to move precisely in conjunction with the stick movements. In silambam, kaaladi is the key to deriving power for the blows. It teaches how to advance and retreat, how to get in range of the opponent without lowering one's defence, aids in hitting and blocking, and strengthens the body immensely, enabling the person to receive non-lethal blows and still continue the battle. The whole body is used to create power. The usual stance includes holding the staff at one end, right hand close to the back, left hand about 40 centimetres (16 inches) away. This position allows a wide array of stick and body movements, including complex attacks and blocks. When the student reaches the final stage, the staff gets sharpened at one end. In real combat the tips may be poisoned. The ultimate goal of the training is to defend against multiple armed opponents.
Silambam prefers the hammer grip, with the main hand facing down behind the weak hand, which faces up. The strong hand grips the stick about a hand's width plus a thumb's length from the end of the stick, and the weak hand is a thumb's length away from the strong hand (a short calculation sketch follows at the end of this section). The weak hand only touches the stick, to guide its movement. Silambam stresses ambidexterity, and besides the preferred hammer grip there are other ways of gripping the staff. Because of the way the stick is held and its relatively thin diameter, blows to the groin are very frequent and difficult to block. Besides the hammer grip, silambam uses the poker grip and ice-pick grip as well. Some blocks and hits are performed using the poker grip. The ice-pick grip is used in single-hand attacks. The staff is held like a walking stick, and the hand is simply inverted using the wrist.
In battle, a fighter holds the stick in front of their body, stretching the arms three-quarters full. From there, they can initiate all attacks with only a movement of the wrist. In fact, most silambam moves are derived from wrist movement, making it a key component of the style. The blow gets speed from the wrist and power from the body through kaaladi. Since the stick is held in front, strikes are telegraphic; that is, the fighter does not hide their intentions from the opponent. They attack with sheer speed, overwhelming the adversary with a continuous, non-stop rain of blows. In silambam, one blow leads to and aids another. Bluffs may also be used, disguising one attack as another.
In addition to the strikes, silambam also has a variety of locks called poottu. A fighter must always be careful while wielding the stick or they will be grappled and lose the fight. Locks can be used to disable the enemy or simply capture their weapon. Techniques called thirappu are used to counter the locks, but these must be executed before being caught in a lock. Silambam also has many different ways of avoiding an attack: blocking, parrying, enduring, rotary parrying, hammering, kolluvuthal (attacking and blocking simultaneously) and evasive moves such as sitting or kneeling, moving out, jumping high, etc.
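The hammer-grip measurements can likewise be stated as a short calculation. A sketch only: the source gives the rule in body units, so the hand-width and thumb-length defaults below are assumed values for illustration:

    def grip_positions_m(hand_width_m: float = 0.10,
                         thumb_length_m: float = 0.07) -> tuple:
        """Hand positions measured from the butt end of the staff.
        Strong hand: a hand's width plus a thumb's length from the end.
        Weak hand: a further thumb's length along the staff.
        Both defaults are illustrative assumptions, not source values."""
        strong_hand = hand_width_m + thumb_length_m
        weak_hand = strong_hand + thumb_length_m
        return strong_hand, weak_hand

    strong, weak = grip_positions_m()
    print(f"strong hand at {strong:.2f} m, weak hand at {weak:.2f} m from the end")

Stating the rule in the practitioner's own body units means the grip scales with the practitioner, just as the staff length does.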
Against multiple attackers, silambam exponents do not hold out their sticks as they do in single combat. Instead, they assume one of the numerous animal stances, which makes it difficult for opponents to predict the next attack. An expert of silambam will be familiar with varma adi or marma adi (pressure points) and know where to strike anywhere in the body to produce fatal or crippling effects with the least use of power. In one-on-one combat an expert may slide the stick to the opponent's wrist many times during combat. The opponent may not notice this in the heat of battle until they feel a sudden pain in the wrist and drop the stick without knowing what hit them. When two experts face each other, one may challenge the other by declaring that he will hit his big toe. Hitting the big toe can produce crippling effects on the fighter, making them abandon the fight. This is called solli adithal, which means "challenging and successfully hitting."
Traditional masters still encourage students to live a "pure" life through daily meditation and abstaining from drinking, smoking, and meat consumption. Students who have completed the training syllabus by learning every form are considered qualified to teach. The time this takes differs from one style to another. For example, the nillaikalakki style requires around seven years of training, while other styles may have no articulated syllabus.
- The Tamil actors M.G. Ramachandran (MGR), M.N. Nambiar and Jai Shankar learned silambam during the training phase of their careers in the 1930s. They incorporated the art into some of their fight scenes during the 1950s and 60s.
- Silambam has generally not been popular among Tamil film-makers. Even when it is shown, not much effort is put into the choreography, and martial artists are never cast in the major roles. Nevertheless, both armed and unarmed silambam have featured in Tamil films such as Silambattam, Thevar Magan, Kovil, Padagotti and Rickshawkaran.[1]
- Silambam is often showcased on Malaysian TV series about martial arts like Gelanggang and Gerak Tangkas.
- The historical film Kochadaiiyaan is the first CG animation to feature silambam.
References:
- Master Murugan, Chillayah (20 October 2012). "Silambam Fencing and play variation". Silambam. Retrieved 31 May 2013.
- Raj, J. David Manuel (1977). The Origin and the Historical Development of Silambam Fencing: An Ancient Self-Defence Sport of India. Oregon: College of Health, Physical Education and Recreation, Univ. of Oregon. pp. 44, 50, & 83.
- Sports Authority of India (1987). Indigenous Games and Martial Arts of India. New Delhi: Sports Authority of India. pp. 91 & 94.
- Crego, Robert (2003). Sports and Games of the 18th and 19th Centuries, p. 32. Greenwood Press.
Born: 13 April 1901. Died: 9 September 1981. Notable ideas: mirror phase, graph of desire.
Jacques Marie Émile Lacan (French: [ʒak lakɑ̃]; 13 April 1901 – 9 September 1981) was a French psychoanalyst and psychiatrist who has been called "the most controversial psycho-analyst since Freud". Giving yearly seminars in Paris from 1953 to 1981, Lacan influenced many leading French intellectuals in the 1960s and the 1970s, especially those associated with poststructuralism. His ideas had a significant impact on critical theory, literary theory, linguistics, 20th-century French philosophy, sociology, feminist theory, film theory and clinical psychoanalysis.
Lacan was born in Paris, the eldest of Emilie and Alfred Lacan's three children. His father was a successful soap and oils salesman. His mother was ardently Catholic—his younger brother went to a monastery in 1929 and Lacan attended the Jesuit Collège Stanislas. During the early 1920s, Lacan attended right-wing Action Française political meetings, of which he would later be highly critical, and met the founder, Charles Maurras. By the mid-1920s, Lacan had become dissatisfied with religion and became an atheist; he quarreled with his family over this issue. In 1920, on being rejected as too thin for military service, he entered medical school and, in 1926, specialised in psychiatry at the Sainte-Anne Hospital in Paris. He was especially interested in the philosophies of Karl Jaspers and Martin Heidegger and attended the seminars on Hegel given by Alexandre Kojève. In 1931 Lacan became a licensed forensic psychiatrist. In 1932 he was awarded the Doctorat d'état for his thesis On Paranoiac Psychosis in its Relations to the Personality (De la Psychose paranoïaque dans ses rapports avec la personnalité, suivi de Premiers écrits sur la paranoïa. Paris: Éditions du Seuil, 1975). The thesis had a limited reception in the 1930s because it was not published until four decades later, but it did find acclaim, especially among surrealist artists. Also in 1932, Lacan translated Freud's 1922 text, "Über einige neurotische Mechanismen bei Eifersucht, Paranoia und Homosexualität" ("Some Neurotic Mechanisms in Jealousy, Paranoia and Homosexuality"), as "De quelques mécanismes névrotiques dans la jalousie, la paranoïa et l'homosexualité". It was published in the Revue française de psychanalyse. In the autumn of that same year, Lacan began his training analysis with Rudolph Loewenstein, which was to last until 1938. Two years later, Lacan was elected to the Société Psychanalytique de Paris. In January 1934, he married Marie-Louise Blondin and they had their first child, a daughter named Caroline. Their second child, a son named Thibaut, was born in August 1939. In 1936, Lacan presented his first analytic report, on the "mirror phase," at the Congress of the International Psychoanalytical Association in Marienbad. The congress chairman, Ernest Jones, terminated the lecture before its conclusion, since he was unwilling to extend Lacan's stated presentation time. Insulted, Lacan left the congress to watch the Berlin Olympic Games. No copy of the original lecture remains. Lacan was an active intellectual of the inter-war period—he associated with André Breton, Georges Bataille, Salvador Dalí, and Pablo Picasso. He attended the mouvement Psyché that Maryse Choisy founded.
He published in the Surrealist journal Minotaure and attended the first public reading of James Joyce's Ulysses. "[Lacan's] interest in surrealism predated his interest in psychoanalysis," Dylan Evans explains, speculating that "perhaps Lacan never really abandoned his early surrealist sympathies, its neo-Romantic view of madness as 'convulsive beauty', its celebration of irrationality, and its hostility to the scientist who murders nature by dissecting it". Others would agree that "the importance of surrealism can hardly be over-stated... to the young Lacan... [who] also shared the surrealists' taste for scandal and provocation, and viewed provocation as an important element in psycho-analysis itself".

The Société Psychanalytique de Paris (SPP) was disbanded due to Nazi Germany's occupation of France in 1940. Lacan was called up to serve in the French army at the Val-de-Grâce military hospital in Paris, where he spent the duration of the war. His third child, Sibylle, was born in 1940. The following year, Lacan fathered a child, Judith (who kept the name Bataille), with Sylvia Bataille (née Maklès), the estranged wife of his friend Georges Bataille. There are contradictory accounts of his romantic life with Sylvia in southern France during the war. The official record shows only that Marie-Louise requested divorce after Judith's birth and that Lacan married Sylvia in 1953.

After the war, the SPP recommenced its meetings. Lacan visited England for a five-week study trip, where he met the English analysts Wilfred Bion and John Rickman. Bion's analytic work with groups influenced Lacan, contributing to his own subsequent emphasis on study groups as a structure within which to advance theoretical work in psychoanalysis. In 1949, Lacan presented a new paper on the mirror stage to the sixteenth IPA congress in Zurich.

In 1951, Lacan started to hold a private weekly seminar in Paris, in which he urged what he described as "a return to Freud" that would concentrate on the linguistic nature of psychological symptomatology. Becoming public in 1953, Lacan's 27-year-long seminar was highly influential in Parisian cultural life, as well as in psychoanalytic theory and clinical practice. In 1953, after a disagreement over the variable-length session, Lacan and many of his colleagues left the Société Psychanalytique de Paris to form a new group, the Société Française de Psychanalyse (SFP). One consequence of this was to deprive the new group of membership within the International Psychoanalytical Association.

Encouraged by the reception of "the return to Freud" and of his report "The Function and Field of Speech and Language in Psychoanalysis," Lacan began to re-read Freud's works in relation to contemporary philosophy, linguistics, ethnology, biology, and topology. From 1953 to 1964, at the Sainte-Anne Hospital, he held his Seminars and presented case histories of patients. During this period he wrote the texts that are found in the collection Écrits, which was first published in 1966. In his seventh Seminar, "The Ethics of Psychoanalysis" (1959–60), Lacan defined the ethical foundations of psychoanalysis and presented his "ethics for our time"—one that would, in the words of Freud, prove to be equal to the tragedy of modern man and to the "discontent of civilization." At the roots of the ethics is desire: the only promise of analysis is austere—it is the entrance-into-the-I (in French a play on words between l'entrée en je and l'entrée en jeu).
"I must come to the place where the id was," where the analysand discovers, in its absolute nakedness, the truth of his desire. The end of psychoanalysis entails "the purification of desire." This text formed the foundation of Lacan's work for the subsequent years. He defended three assertions: that psychoanalysis must have a scientific status; that Freudian ideas have radically changed the concepts of subject, of knowledge, and of desire; and that the analytic field is the only place from which it is possible to question the insufficiencies of science and philosophy. Starting in 1962, a complex negotiation took place to determine the status of the SFP within the IPA. Lacan's practice (with its controversial indeterminate-length sessions) and his critical stance towards psychoanalytic orthodoxy led, in August 1963, to the IPA setting the condition that registration of the SFP was dependent upon the removal of Lacan from the list of SFP analysts. With the SFP's decision to honour this request in November 1963, Lacan had effectively been stripped of the right to conduct training analyses and thus was constrained to form his own institution in order to accommodate the many candidates who desired to continue their analyses with him. This he did, on 21 June 1964, in the "Founding Act" of what became known as the École Freudienne de Paris (EFP), taking "many representatives of the third generation with him: among them were Maud and Octave Mannoni, Serge Leclaire...and Jean Clavreul". With Lévi-Strauss and Althusser's support, Lacan was appointed lecturer at the École Pratique des Hautes Etudes. He started with a seminar on The Four Fundamental Concepts of Psychoanalysis in January 1964 in the Dussane room at the École Normale Supérieure. Lacan began to set forth his own approach to psychoanalysis to an audience of colleagues that had joined him from the SFP. His lectures also attracted many of the École Normale's students. He divided the École freudienne de Paris into three sections: the section of pure psychoanalysis (training and elaboration of the theory, where members who have been analyzed but have not become analysts can participate); the section for applied psychoanalysis (therapeutic and clinical, physicians who either have not started or have not yet completed analysis are welcome); and the section for taking inventory of the Freudian field (concerning the critique of psychoanalytic literature and the analysis of the theoretical relations with related or affiliated sciences). In 1967 he invented the procedure of the Pass, which was added to the statutes after being voted in by the members of the EFP the following year. 1966 saw the publication of Lacan's collected writings, the Écrits, compiled with an index of concepts by Jacques-Alain Miller. Printed by the prestigious publishing house Éditions du Seuil, the Écrits did much to establish Lacan's reputation to a wider public. The success of the publication led to a subsequent two-volume edition in 1969. By the 1960s, Lacan was associated, at least in the public mind, with the far left in France. In May 1968, Lacan voiced his sympathy for the student protests and as a corollary his followers set up a Department of Psychology at the University of Vincennes (Paris VIII). However, Lacan's unequivocal comments in 1971 on revolutionary ideals in politics draw a sharp line between the actions of some of his followers and his own style of "revolt". 
In 1969, Lacan moved his public seminars to the Faculté de Droit (Panthéon), where he continued to deliver his expositions of analytic theory and practice until the dissolution of his School in 1980.

Throughout the final decade of his life, Lacan continued his widely followed seminars. During this period, he developed his concepts of masculine and feminine jouissance and placed an increased emphasis on the concept of "the Real" as a point of impossible contradiction in the "Symbolic order". Lacan continued to draw widely on various disciplines, working closely on classical Chinese literature with François Cheng and on the life and work of James Joyce with Jacques Aubert. This late work had the greatest influence on feminist thought, as well as upon the informal movement known as postmodernism that arose in the 1970s and 1980s.

The growing success of the Écrits, which was translated (in abridged form) into German and English, led to invitations to lecture in Italy, Japan and the United States. He gave lectures in 1975 at Yale, Columbia and MIT. Lacan's failing health made it difficult for him to meet the demands of the year-long Seminars he had been delivering since the fifties, but his teaching continued into the first year of the eighties. After dissolving his School, the EFP, in January 1980, Lacan travelled to Caracas to found the Freudian Field Institute on 12 July. The Overture to the Caracas Encounter was to be Lacan's final public address. His last texts, from the spring of 1981, are brief institutional documents pertaining to the newly formed Freudian Field Institute. Lacan died on 9 September 1981.

Return to Freud

Lacan's "return to Freud" emphasized a renewed attention to the original texts of Freud and included a radical critique of Ego psychology, whereas "Lacan's quarrel with Object Relations psychoanalysis" was a more muted affair. Here he attempted "to restore to the notion of the Object Relation... the capital of experience that legitimately belongs to it", building upon what he termed "the hesitant, but controlled work of Melanie Klein... Through her we know the function of the imaginary primordial enclosure formed by the imago of the mother's body", as well as upon "the notion of the transitional object, introduced by D. W. Winnicott... a key-point for the explanation of the genesis of fetishism". Nevertheless, "Lacan systematically questioned those psychoanalytic developments from the 1930s to the 1970s, which were increasingly and almost exclusively focused on the child's early relations with the mother... the pre-Oedipal or Kleinian mother"; and Lacan's rereading of Freud—"characteristically, Lacan insists that his return to Freud supplies the only valid model"—formed a basic conceptual starting-point in that oppositional strategy.

Lacan thought that Freud's ideas of "slips of the tongue," jokes, and the interpretation of dreams all emphasized the agency of language in subjective constitution. In "The Agency of the Letter in the Unconscious, or Reason Since Freud," he proposes that "the unconscious is structured like a language." The unconscious is not a primitive or archetypal part of the mind separate from the conscious, linguistic ego, he explained, but rather a formation as complex and structurally sophisticated as consciousness itself. One consequence of the unconscious being structured like a language is that the self is denied any point of reference to which to be "restored" following trauma or a crisis of identity.
André Green objected that "when you read Freud, it is obvious that this proposition doesn't work for a minute. Freud very clearly opposes the unconscious (which he says is constituted by thing-presentations and nothing else) to the pre-conscious. What is related to language can only belong to the pre-conscious". Freud certainly contrasted "the presentation of the word and the presentation of the thing... the unconscious presentation is the presentation of the thing alone" in his metapsychology. However, "Dylan Evans, in his Dictionary of Lacanian Psychoanalysis... takes issue with those who, like André Green, question the linguistic aspect of the unconscious, emphasizing Lacan's distinction between das Ding and die Sache in Freud's account of thing-presentation". Green's criticism of Lacan also included accusations of intellectual dishonesty; he said, "[He] cheated everybody… the return to Freud was an excuse, it just meant going to Lacan."

Lacan's first official contribution to psychoanalysis was the mirror stage, which he described as "formative of the function of the I as revealed in psychoanalytic experience." By the early 1950s, he came to regard the mirror stage as more than a moment in the life of the infant; instead, it formed part of the permanent structure of subjectivity. In the Imaginary order, the subject is permanently caught and captivated by his or her own image. Lacan explains that "the mirror stage is a phenomenon to which I assign a twofold value. In the first place, it has historical value as it marks a decisive turning-point in the mental development of the child. In the second place, it typifies an essential libidinal relationship with the body-image". As this concept developed further, the stress fell less on its historical value and more on its structural value. In his fourth Seminar, "La relation d'objet," Lacan states that "the mirror stage is far from a mere phenomenon which occurs in the development of the child. It illustrates the conflictual nature of the dual relationship."

The mirror stage describes the formation of the Ego via the process of objectification, the Ego being the result of a conflict between one's perceived visual appearance and one's emotional experience. This identification is what Lacan called alienation. At six months, the baby still lacks physical co-ordination. The child is able to recognize itself in a mirror prior to the attainment of control over its bodily movements. The child sees its image as a whole, and the synthesis of this image produces a sense of contrast with the lack of co-ordination of the body, which is perceived as a fragmented body. The child experiences this contrast initially as a rivalry with its image, because the wholeness of the image threatens the child with fragmentation—thus the mirror stage gives rise to an aggressive tension between the subject and the image. To resolve this aggressive tension, the child identifies with the image: this primary identification with the counterpart forms the Ego. Lacan understands this moment of identification as a moment of jubilation, since it leads to an imaginary sense of mastery; yet when the child compares its own precarious sense of mastery with the omnipotence of the mother, a depressive reaction may accompany the jubilation. Lacan calls the specular image "orthopaedic," since it leads the child to anticipate the overcoming of its "real specific prematurity of birth."
The vision of the body as integrated and contained, in opposition to the child's actual experience of motor incapacity and the sense of his or her body as fragmented, induces a movement from "insufficiency to anticipation." In other words, the mirror image initiates and then aids, like a crutch, the process of the formation of an integrated sense of self. In the mirror stage a "misrecognition" (méconnaissance) constitutes the Ego—the "me" (moi) becomes alienated from itself through the introduction of an imaginary dimension to the subject. The mirror stage also has a significant symbolic dimension, due to the presence of the figure of the adult who carries the infant. Having jubilantly assumed the image as its own, the child turns its head towards this adult, who represents the big Other, as if to call on the adult to ratify this image.

Lacan often used an algebraic symbology for his concepts: the big Other is designated A (for French Autre) and the little other is designated a (italicized French autre). He asserts that an awareness of this distinction is fundamental to analytic practice: "the analyst must be imbued with the difference between A and a, so he can situate himself in the place of Other, and not the other." Dylan Evans explains that:

- The little other is the other who is not really other, but a reflection and projection of the Ego. Evans adds that for this reason the symbol a can represent both objet a and the ego in the Schema L. It is simultaneously the counterpart and the specular image. The little other is thus entirely inscribed in the Imaginary order.
- The big Other designates radical alterity, an other-ness which transcends the illusory otherness of the Imaginary because it cannot be assimilated through identification. Lacan equates this radical alterity with language and the law, and hence the big Other is inscribed in the order of the Symbolic. Indeed, the big Other is the Symbolic insofar as it is particularized for each subject. The Other is thus both another subject, in his radical alterity and unassimilable uniqueness, and also the symbolic order which mediates the relationship with that other subject.

For Lacan "the Other must first of all be considered a locus in which speech is constituted," so that the Other as another subject is secondary to the Other as symbolic order. We can speak of the Other as a subject in a secondary sense only when a subject occupies this position and thereby embodies the Other for another subject. In arguing that speech originates neither in the Ego nor in the subject but rather in the Other, Lacan stresses that speech and language are beyond the subject's conscious control. They come from another place, outside of consciousness—"the unconscious is the discourse of the Other." When conceiving the Other as a place, Lacan refers to Freud's concept of psychical locality, in which the unconscious is described as "the other scene".

"It is the mother who first occupies the position of the big Other for the child," Dylan Evans explains; "it is she who receives the child's primitive cries and retroactively sanctions them as a particular message". The castration complex is formed when the child discovers that this Other is not complete, because there is a "Lack (manque)" in the Other. This means that there is always a signifier missing from the trove of signifiers constituted by the Other.
Lacan illustrates this incomplete Other graphically by striking a bar through the symbol A; hence another name for the castrated, incomplete Other is the "barred Other."

Feminist thinkers have both utilised and criticised Lacan's concepts of castration and the Phallus. Some feminists have argued that Lacan's phallocentric analysis provides a useful means of understanding gender biases and imposed roles, while other feminist critics, most notably Luce Irigaray, accuse Lacan of maintaining the sexist tradition in psychoanalysis. For Irigaray, the Phallus does not define a single axis of gender by its presence/absence; instead, gender has two positive poles. Like Irigaray, Jacques Derrida, in criticizing Lacan's concept of castration, discusses the phallus in a chiasmus with the hymen, as both one and other. Other feminists, such as Judith Butler, Avital Ronell, Jane Gallop, and Elizabeth Grosz, have interpreted Lacan's work as opening up new possibilities for feminist theory.

The three orders

The Imaginary is the field of images, imagination, and deception. The main illusions of this order are synthesis, autonomy, duality, and similarity. Lacan thought that the relationship created within the mirror stage between the Ego and the reflected image means that the Ego and the Imaginary order itself are places of radical alienation: "alienation is constitutive of the Imaginary order." This relationship is also narcissistic. In The Four Fundamental Concepts of Psychoanalysis, Lacan argues that the Symbolic order structures the visual field of the Imaginary, which means that it involves a linguistic dimension. If the signifier is the foundation of the Symbolic, the signified and signification are part of the Imaginary order. Language has Symbolic and Imaginary connotations—in its Imaginary aspect, language is the "wall of language" that inverts and distorts the discourse of the Other. On the other hand, the Imaginary is rooted in the subject's relationship with his or her own body (the image of the body). In Fetishism: the Symbolic, the Imaginary and the Real, Lacan argues that in the sexual plane the Imaginary appears as sexual display and courtship love.

Lacan accused the major psychoanalytic schools of making identification with the analyst the objective of analysis, thereby reducing the practice of psychoanalysis to the Imaginary order. Instead, Lacan proposes the use of the Symbolic to dislodge the disabling fixations of the Imaginary—the analyst transforms the images into words. "The use of the Symbolic," he argued, "is the only way for the analytic process to cross the plane of identification."

In his Seminar IV, "La relation d'objet," Lacan argues that the concepts of "Law" and "Structure" are unthinkable without language—thus the Symbolic is a linguistic dimension. This order is not equivalent to language, however, since language involves the Imaginary and the Real as well. The dimension proper to language in the Symbolic is that of the signifier—that is, a dimension in which elements have no positive existence but are constituted by virtue of their mutual differences. The Symbolic is also the field of radical alterity—that is, the Other; the unconscious is the discourse of this Other. It is the realm of the Law that regulates desire in the Oedipus complex. The Symbolic is the domain of culture, as opposed to the Imaginary order of nature.
As important elements in the Symbolic, the concepts of death and lack (manque) connive to make of the pleasure principle the regulator of the distance from the Thing (das Ding an sich) and the death drive that goes "beyond the pleasure principle by means of repetition"—"the death drive is only a mask of the Symbolic order." By working in the Symbolic order, the analyst is able to produce changes in the subjective position of the analysand. These changes will produce imaginary effects, because the Imaginary is structured by the Symbolic.

Lacan's concept of the Real dates back to 1936 and his doctoral thesis on psychosis. It was a term popular at the time, particularly with Émile Meyerson, who referred to it as "an ontological absolute, a true being-in-itself". Lacan returned to the theme of the Real in 1953 and continued to develop it until his death. The Real, for Lacan, is not synonymous with reality. Not only opposed to the Imaginary, the Real is also exterior to the Symbolic. Unlike the latter, which is constituted in terms of oppositions (i.e., presence/absence), "there is no absence in the Real." Whereas the Symbolic opposition "presence/absence" implies the possibility that something may be missing from the Symbolic, "the Real is always in its place." If the Symbolic is a set of differentiated elements (signifiers), the Real in itself is undifferentiated—it bears no fissure. The Symbolic introduces "a cut in the real" in the process of signification: "it is the world of words that creates the world of things—things originally confused in the 'here and now' of the all in the process of coming into being." The Real is that which is outside language and that resists symbolization absolutely. In Seminar XI Lacan defines the Real as "the impossible" because it is impossible to imagine, impossible to integrate into the Symbolic, and impossible to attain. It is this resistance to symbolization that lends the Real its traumatic quality. Finally, the Real is the object of anxiety, insofar as it lacks any possible mediation and is "the essential object which is not an object any longer, but this something faced with which all words cease and all categories fail, the object of anxiety par excellence."

Lacan's concept of desire is related to Hegel's Begierde, a term that implies a continuous force, and therefore somehow differs from Freud's concept of Wunsch. Lacan's desire always refers to unconscious desire, because it is unconscious desire that forms the central concern of psychoanalysis. The aim of psychoanalysis is to lead the analysand to recognize his or her desire and, by doing so, to uncover the truth about that desire. However, this is possible only if desire is articulated in speech: "It is only once it is formulated, named in the presence of the other, that desire appears in the full sense of the term." And again in The Ego in Freud's Theory and in the Technique of Psychoanalysis: "...what is important is to teach the subject to name, to articulate, to bring desire into existence. The subject should come to recognize and to name his/her desire. But it isn't a question of recognizing something that could be entirely given. In naming it, the subject creates, brings forth, a new presence in the world." The truth about desire is somehow present in discourse, although discourse is never able to articulate the entire truth about desire: whenever discourse attempts to articulate desire, there is always a leftover, a surplus. Lacan distinguishes desire from need and from demand.
Need is a biological instinct in which the subject depends on the Other to satisfy its own needs: in order to get the Other's help, "need" must be articulated in "demand." But the presence of the Other not only ensures the satisfaction of the "need"; it also represents the Other's love. Consequently, "demand" acquires a double function: on the one hand, it articulates "need", and on the other, it acts as a "demand for love." Even after the "need" articulated in demand is satisfied, the "demand for love" remains unsatisfied, since the Other cannot provide the unconditional love that the subject seeks. "Desire is neither the appetite for satisfaction, nor the demand for love, but the difference that results from the subtraction of the first from the second." Desire is thus a surplus, a leftover, produced by the articulation of need in demand: "desire begins to take shape in the margin in which demand becomes separated from need." Unlike need, which can be satisfied, desire can never be satisfied: it is constant in its pressure and eternal. The attainment of desire does not consist in its being fulfilled but in its reproduction as such. As Slavoj Žižek puts it, "desire's raison d'être is not to realize its goal, to find full satisfaction, but to reproduce itself as desire."

Lacan also distinguishes between desire and the drives: desire is one and drives are many. The drives are the partial manifestations of a single force called desire. Lacan's concept of "objet petit a" is the object of desire, although this object is not that towards which desire tends but rather the cause of desire. Desire is not a relation to an object but a relation to a lack (manque).

In The Four Fundamental Concepts of Psychoanalysis Lacan argues that "man's desire is the desire of the Other." This entails the following:

- Desire is the desire of the Other's desire, meaning that desire is the object of another's desire and that desire is also desire for recognition. Here Lacan follows Alexandre Kojève, who follows Hegel: for Kojève, "the subject must risk his own life if he wants to achieve the desired prestige." This desire to be the object of another's desire is best exemplified in the Oedipus complex, when the subject desires to be the phallus of the mother.
- In "The Subversion of the Subject and the Dialectic of Desire in the Freudian Unconscious," Lacan contends that the subject desires from the point of view of another, whereby the object of someone's desire is an object desired by another: what makes the object desirable is precisely that it is desired by someone else. Again Lacan follows Kojève, who follows Hegel. This aspect of desire is present in hysteria, for the hysteric is someone who converts another's desire into his or her own (see Sigmund Freud's "Fragment of an Analysis of a Case of Hysteria" in SE VII, where Dora desires Frau K because she identifies with Herr K). What matters, then, in the analysis of a hysteric is not to find out the object of her desire but to discover the subject with whom she identifies.
- Désir de l'Autre, which is translated as "desire for the Other" (though it could also be "desire of the Other"). The fundamental desire is the incestuous desire for the mother, the primordial Other.
- Desire is "the desire for something else," since it is impossible to desire what one already has. The object of desire is continually deferred, which is why desire is a metonymy.
- Desire appears in the field of the Other, that is, in the unconscious.
Last but not least, for Lacan the first person to occupy the place of the Other is the mother, and at first the child is at her mercy. Only when the father articulates desire with the law by castrating the mother is the subject liberated from the mother's desire.

Lacan maintains Freud's distinction between drive (Trieb) and instinct (Instinkt). Drives differ from biological needs because they can never be satisfied and do not aim at an object but rather circle perpetually around it. He argues that the aim of the drive (Triebziel) is not to reach a goal but to follow its aim—"the way itself" rather than "the final destination"—that is, to circle around the object. The purpose of the drive is to return to its circular path, and the true source of jouissance is the repetitive movement of this closed circuit. Lacan posits the drives as both cultural and symbolic constructs—to him, "the drive is not a given, something archaic, primordial." He incorporates the four elements of the drives as defined by Freud (the pressure, the end, the object and the source) into his theory of the drive's circuit: the drive originates in the erogenous zone, circles round the object, and returns to the erogenous zone. Three grammatical voices structure this circuit:

- the active voice (to see)
- the reflexive voice (to see oneself)
- the passive voice (to be seen)

The active and reflexive voices are autoerotic—they lack a subject. It is only when the drive completes its circuit with the passive voice that a new subject appears, implying that prior to that instance there was no subject. Despite being the "passive" voice, the drive is essentially active: "to make oneself be seen" rather than "to be seen." The circuit of the drive is the only way for the subject to transgress the pleasure principle.

For Freud, sexuality is composed of partial drives (i.e., the oral or the anal drives), each specified by a different erotogenic zone. At first these partial drives function independently (i.e., the polymorphous perversity of children); it is only in puberty that they become organized under the aegis of the genital organs. Lacan accepts the partial nature of drives, but 1) rejects the notion that partial drives can ever attain any complete organization: the primacy of the genital zone, if achieved, is always precarious; and 2) argues that drives are partial in that they represent sexuality only partially, not in the sense that they are parts of a whole. Drives do not represent the reproductive function of sexuality but only the dimension of jouissance.

The notion of dualism is maintained throughout Freud's various reformulations of the drive-theory, from the initial opposition between sexual drives and ego-drives (self-preservation) to the final one between the life drives (Lebenstriebe) and the death drives (Todestriebe). Lacan retains Freud's dualism, but in terms of an opposition between the Symbolic and the Imaginary rather than between different kinds of drives. For Lacan, all drives are sexual drives, and every drive is a death drive (pulsion de mort), since every drive is excessive, repetitive, and destructive.
The drives are closely related to desire, since both originate in the field of the subject. But they are not to be confused: drives are the partial aspects in which desire is realized—desire is one and undivided, whereas the drives are its partial manifestations. A drive is a demand that is not caught up in the dialectical mediation of desire; it is a "mechanical" insistence that is not ensnared in the dialectical mediation of demand.

Lacan on error and knowledge

Building on Freud's The Psychopathology of Everyday Life, Lacan long argued that "every unsuccessful act is a successful, not to say 'well-turned', discourse", highlighting as well "sudden transformations of errors into truths, which seemed to be due to nothing more than perseverance". In a late seminar, he generalised more fully the psychoanalytic discovery of "truth—arising from misunderstanding", so as to maintain that "the subject is naturally erring... discourse structures alone give him his moorings and reference points, signs identify and orient him; if he neglects, forgets, or loses them, he is condemned to err anew". Because of "the alienation to which speaking beings are subjected due to their being in language", to survive "one must let oneself be taken in by signs and become the dupe of a discourse... [of] fictions organized into a discourse". For Lacan, with "masculine knowledge irredeemably an erring", the individual "must thus allow himself to be fooled by these signs to have a chance of getting his bearings amidst them; he must place and maintain himself in the wake of a discourse... become the dupe of a discourse... les non-dupes errent". Lacan comes close here to one of the points where "very occasionally he sounds like Thomas Kuhn (whom he never mentions)", with Lacan's "discourse" resembling Kuhn's "paradigm" seen as "the entire constellation of beliefs, values, techniques, and so on shared by the members of a given community".

The "variable-length psychoanalytic session" was one of Lacan's crucial clinical innovations, and a key element in his conflicts with the IPA, to whom his "innovation of reducing the fifty-minute analytic hour to a Delphic seven or eight minutes (or sometimes even to a single oracular parole murmured in the waiting-room)" was unacceptable. Lacan's variable-length sessions lasted anywhere from a few minutes (or even, if deemed appropriate by the analyst, a few seconds) to several hours. This practice replaced the classical Freudian "fifty-minute hour". With respect to what he called "the cutting up of the 'timing'", Lacan asked the question, "Why make an intervention impossible at this point, which is consequently privileged in this way?" By allowing the analyst's intervention on timing, the variable-length session removed the patient's—or, technically, "the analysand's"—former certainty as to the length of time that they would be on the couch. When Lacan adopted the practice, "the psychoanalytic establishment were scandalized"—and, given that "between 1979 and 1980 he saw an average of ten patients an hour", it is perhaps not hard to see why: "psychoanalysis reduced to zero", if no less lucrative.
At the time of his original innovation, Lacan described the issue as concerning "the systematic use of shorter sessions in certain analyses, and in particular in training analyses"; in practice it was certainly a shortening of the session around the so-called "critical moment" which took place, so that critics wrote that "everyone is well aware what is meant by the deceptive phrase 'variable length'... sessions systematically reduced to just a few minutes". Irrespective of the theoretical merits of breaking up patients' expectations, it was clear that "the Lacanian analyst never wants to 'shake up' the routine by keeping them for more rather than less time". "Whatever the justification, the practical effects were startling. It does not take a cynic to point out that Lacan was able to take on many more analysands than anyone using classical Freudian techniques... [and] as the technique was adopted by his pupils and followers an almost exponential rate of growth became possible".

Accepting the importance of "the critical moment when insight arises", object relations theory would nonetheless quietly suggest that "if the analyst does not provide the patient with space in which nothing needs to happen there is no space in which something can happen". Julia Kristeva, if in very different language, would concur that "Lacan, alert to the scandal of the timeless intrinsic to the analytic experience, was mistaken in wanting to ritualize it as a technique of scansion (short sessions)".

Writings and writing style

Most of Lacan's psychoanalytic writings from the forties through to the early sixties were compiled with an index of concepts by Jacques-Alain Miller in the 1966 collection, titled simply Écrits. Published in French by Éditions du Seuil, they were later issued as a two-volume set (1970–71) with a new "Preface". A selection of the writings (chosen by Lacan himself) was translated by Alan Sheridan and published by Tavistock Press in 1977. The full 35-text volume appeared for the first time in English in Bruce Fink's translation, published by Norton & Co. (2006). The Écrits were included on the list of the 100 most influential books of the 20th century compiled by the French broadsheet Le Monde on the basis of a readers' poll. Lacan's writings from the late sixties and seventies (thus subsequent to the 1966 collection) were collected posthumously, along with some early texts from the 1930s, in the Éditions du Seuil volume Autres écrits (2001). Although most of the texts in Écrits and Autres écrits are closely related to Lacan's lectures or lessons from his Seminar, more often than not the style is denser than Lacan's oral delivery, and a clear distinction between the writings and the transcriptions of the oral teaching is evident to the reader.

Jacques-Alain Miller is the sole editor of Lacan's seminars, which contain the majority of his life's work. "There has been considerable controversy over the accuracy or otherwise of the transcription and editing", as well as over "Miller's refusal to allow any critical or annotated edition to be published". Despite Lacan's status as a major figure in the history of psychoanalysis, some of his seminars remain unpublished. Since 1984, Miller has been regularly conducting a series of lectures, "L'orientation lacanienne." Miller's teachings have been published in the US by the journal Lacanian Ink.
Lacan's writing is notoriously difficult, due in part to the repeated Hegelian/Kojèvean allusions, wide theoretical divergences from other psychoanalytic and philosophical theory, and an obscure prose style. For some, "the impenetrability of Lacan's prose... [is] too often regarded as profundity precisely because it cannot be understood". Arguably at least, "the imitation of his style by other 'Lacanian' commentators" has resulted in "an obscurantist antisystematic tradition in Lacanian literature".

The broader psychotherapeutic literature has little or nothing to say about the effectiveness of Lacanian psychoanalysis. Though Lacan was a major influence on psychoanalysis in France and parts of Latin America, his influence on clinical psychology in the English-speaking world is negligible; there his ideas are best known in the arts and humanities. A notable exception is the work of Annie G. Rogers (A Shining Affliction; The Unsayable: The Hidden Language of Trauma), which credits Lacanian theory for many therapeutic insights in successfully treating sexually abused young women.

Alan Sokal and Jean Bricmont have criticised Lacan's use of terms from mathematical fields such as topology, accusing him of "superficial erudition" and of abusing scientific concepts that he does not understand. However, they note that they do not want to enter into the debate over the purely psychoanalytic part of Lacan's work. Other critics have dismissed Lacan's work wholesale. François Roustang called it an "incoherent system of pseudo-scientific gibberish", and quoted linguist Noam Chomsky's opinion that Lacan was an "amusing and perfectly self-conscious charlatan". The former Lacanian analyst Dylan Evans eventually dismissed Lacanianism as lacking a sound scientific basis and as harming rather than helping patients, and has criticized Lacan's followers for treating his writings as "holy writ". Richard Webster has decried what he sees as Lacan's obscurity, arrogance, and the resultant "Cult of Lacan". Others have been more forceful still, describing him as "The Shrink from Hell" and listing the many associates—from lovers and family to colleagues, patients, and editors—left damaged in his wake. His type of charismatic authority has been linked to the many conflicts among his followers and in the analytic schools he was involved with.

His intellectual style has also come in for much criticism. Eclectic in his use of sources, Lacan has been seen as concealing his own thought behind the apparent explication of that of others. Thus his "return to Freud" was called by Malcolm Bowie "a complete pattern of dissenting assent to the ideas of Freud... Lacan's argument is conducted on Freud's behalf and, at the same time, against him". Bowie has also suggested that Lacan suffered from both a love of system and a deep-seated opposition to all forms of system. Lacan has similarly been seen as trapped in the very phallocentric mastery his language ostensibly sought to undermine. The result—Castoriadis would maintain—was to make all thought depend upon himself, and thus to stifle the capacity for independent thought among all those around him. Their difficulties were only reinforced by what Didier Anzieu described as a kind of teasing lure in Lacan's discourse: "fundamental truths to be revealed... but always at some further point". This was perhaps an aspect of the sadistic narcissism that feminists especially detected in his nature.
But though to many he was a narcissist, indulging omnipotent fantasies through his systems of thought, Lacan can be seen as an example of what Michael Maccoby has called a "productive narcissist": one of those who, through their power to draw others into their visions, have eventually changed the very parameters of our cultural world.

Selected works published in English are listed below. More complete listings can be found at Lacan Dot Com.

- "Lacan". Random House Webster's Unabridged Dictionary.
- David Macey, "Introduction", Jacques Lacan, The Four Fundamental Concepts of Psycho-Analysis (London 1994) p. xiv
- Jan Marta, "Lacan and Post-Structuralism", The American Journal of Psychoanalysis, vol. 47, no. 1 (Spring 1987), pp. 51–57. ISSN 0002-9548
- Roudinesco, Elisabeth, Jacques Lacan & Co.: A History of Psychoanalysis in France, 1925–1985, 1990, Chicago University Press
- Perry Meisel (April 13, 1997). "The Unanalyzable". New York Times.
- Michael Martin (2007). The Cambridge Companion to Atheism. Cambridge University Press. p. 310. ISBN 9780521842709. "Among celebrity atheists with much biographical data, we find leading psychologists and psychoanalysts. We could provide a long list, including...Jacques Lacan..."
- Laurent, É., "Lacan, Analysand" in Hurly-Burly, Issue 3.
- Roudinesco, Elisabeth. "The mirror stage: an obliterated archive", in The Cambridge Companion to Lacan, ed. Jean-Michel Rabaté. Cambridge: CUP, 2003
- Evans, Dylan, "From Lacan to Darwin", in The Literary Animal: Evolution and the Nature of Narrative, eds. Jonathan Gottschall and David Sloan Wilson, Evanston: Northwestern University Press, 2005
- David Macey, "Introduction", Jacques Lacan, The Four Fundamental Concepts of Psycho-Analysis (London 1994) pp. xv–xvi
- Le séminaire, Livre VIII: Le transfert, Paris: Seuil, 1991.
- "Minutes of the IPA: The SFP Study Group" in Television/A Challenge to the Psychoanalytic Establishment, pp. 79–80.
- Lacan, J., "Founding Act" in Television/A Challenge to the Psychoanalytic Establishment, pp. 97–106.
- Elisabeth Roudinesco, Jacques Lacan (Cambridge 1997) p. 293
- Proposition du 9 octobre 1967 sur le psychanalyste à l'École.
- French Communist Party "official philosopher" Louis Althusser did much to advance this association in the 1960s. Zoltán Tar and Judith Marcus, in Frankfurt School of Sociology, ISBN 0-87855-963-9 (p. 276), write: "Althusser's call to Marxists that the Lacanian enterprise might [...] help further revolutionary ends, endorsed Lacan's work even further." Elizabeth A. Grosz writes in her Jacques Lacan: A Feminist Introduction that: "Shortly after the tumultuous events of May 1968, Lacan was accused by the authorities of being a subversive, and directly influencing the events that transpired."
- Regnault, F., "I Was Struck by What You Said..." Hurly-Burly, 6, 23–28.
- Price, A., "Lacan's Remarks on Chinese Poetry". Hurly-Burly 2 (2009)
- Lacan, J., Le séminaire, livre XXIII, Le sinthome
- Lacan, J., "Conférences et entretiens dans les universités nord-américaines". Scilicet, 6/7 (1976)
- Lacan, J., "Letter of Dissolution". Television/A Challenge to the Psychoanalytic Establishment, 129–131.
- Lacan, J., "Overture to the 1st International Encounter of the Freudian Field", Hurly-Burly 6.
- Mary Jacobus, The Poetics of Psychoanalysis: In the Wake of Klein (Oxford 2005) p. 25
- Jacques Lacan, Ecrits: A Selection (London 1997) p. 197
- Lacan, Ecrits p. 197 and p. 20
- Lacan, Ecrits p. 250
- Lisa Appignanesi/John Forrester, Freud's Women (London 2005) p. 462
- David Macey, "Introduction", Jacques Lacan, The Four Fundamental Concepts of Psycho-Analysis (London 1994) p. xxii
- Mary Jacobus, The Poetics of Psychoanalysis: In the Wake of Klein (Oxford 2005) p. 5n
- Sigmund Freud, On Metapsychology (Penguin 1984) p. 207
- Mary Jacobus, The Poetics of Psychoanalysis: In the Wake of Klein (Oxford 2005) p. 7n
- "The Dead Mother: The Work of André Green (Book Review)"
- Lacan, J., "Some Reflections on the Ego" in Écrits
- Lacan, J., "La relation d'objet" in Écrits.
- Lacan, J., "The Mirror Stage as Formative of the Function of the I", in Écrits: A Selection, London, Routledge Classics, 2001; p. 5
- Lacan, Tenth Seminar, "L'angoisse," 1962–1963
- Lacan, J., The Seminar of Jacques Lacan: Book II: The Ego in Freud's Theory and in the Technique of Psychoanalysis 1954–1955 (W. W. Norton & Company, 1991), ISBN 978-0-393-30709-2
- Lacan, J., "The Freudian Thing" and "Psychoanalysis and its Teaching" in Écrits.
- Schema L in The Seminar. Book II. The Ego in Freud's Theory and in the Technique of Psychoanalysis.
- Dylan Evans, An Introductory Dictionary of Lacanian Psychoanalysis (London: Routledge, 1996), p. 133.
- Lacan, J., The Seminar. Book III. The Psychoses, 1955–1956, translated by Russell Grigg (New York: W. W. Norton & Company, 1997)
- Lacan, J., Le séminaire. Livre VIII: Le transfert, 1960–1961, ed. Jacques-Alain Miller (Paris: Seuil, 1994).
- Lacan, J., "Seminar on 'The Purloined Letter'" in Écrits.
- Lacan, J., "The Agency of the Letter in the Unconscious" in Écrits and Seminar V: Les formations de l'inconscient
- Irigaray, Luce, This Sex Which Is Not One, 1977 (Eng. trans. 1985)
- Derrida, Jacques, Dissemination (1983).
- Butler, Judith, Bodies That Matter: On the Discursive Limits of "Sex" (1993); Gallop, Jane, Reading Lacan. Ithaca: Cornell University Press, 1985
- Elizabeth A. Grosz, Jacques Lacan: A Feminist Introduction
- Lacan, Seminar III: The Psychoses.
- Écrits, "The Direction of the Treatment."
- Lacan, J., Seminar XI: The Four Fundamental Concepts of Psychoanalysis.
- Evans, Dylan, An Introductory Dictionary of Lacanian Psychoanalysis, p. 162.
- Lacan, J., "The Function and Field of Speech and Language in Psychoanalysis" in Écrits.
- Macey, David, "On the subject of Lacan" in Psychoanalysis in Contexts: Paths between Theory and Modern Culture (London: Routledge 1995).
- Fink, Bruce, The Lacanian Subject: Between Language and Jouissance (Princeton University Press, 1996), ISBN 978-0-691-01589-7
- Lacan, J., The Seminar of Jacques Lacan: Book I: Freud's Papers on Technique 1953–1954 (W. W. Norton & Company, 1988), ISBN 978-0-393-30697-2
- Lacan, J., The Seminar of Jacques Lacan: Book II: The Ego in Freud's Theory and in the Technique of Psychoanalysis 1954–1955 (W. W. Norton & Company, 1988), ISBN 978-0-393-30709-2
- Lacan, J., "The Direction of the Treatment and the Principles of Its Powers" in Écrits: A Selection, translated by Bruce Fink (W. W. Norton & Company, 2004), ISBN 978-0393325287
- Lacan, J., "The Signification of the Phallus" in Écrits
- Žižek, Slavoj, The Plague of Fantasies (London: Verso 1997), p. 39.
- Lacan, J., The Seminar: Book XI. The Four Fundamental Concepts of Psychoanalysis, 1964 (W. W. Norton & Company, 1998), ISBN 978-0393317756
- Kojève, Alexandre, Introduction to the Reading of Hegel, translated by James H. Nichols Jr. (New York: Basic Books 1969), p. 39.
- Lacan, J., Écrits: A Selection, translated by Bruce Fink (W. W. Norton & Company, 2004), ISBN 978-0393325287
- Lacan, J., The Seminar: Book VII. The Ethics of Psychoanalysis, 1959–1960 (W. W. Norton & Company, 1997), ISBN 978-0393316131
- Lacan, J., "The Instance of the Letter in the Unconscious, or Reason since Freud" in Écrits: A Selection, translated by Bruce Fink (W. W. Norton & Company, 2004), ISBN 978-0393325287
- Lacan, J., Le Séminaire: Livre IV. La relation d'objet, 1956–1957, ed. Jacques-Alain Miller (Paris: Seuil, 1994)
- The Seminar, Book XI. The Four Fundamental Concepts of Psychoanalysis
- Freud, Three Essays on the Theory of Sexuality, S.E. VII
- Freud, Beyond the Pleasure Principle, S.E. XVIII
- Position of the Unconscious, Écrits
- Slavoj Žižek, Looking Awry: An Introduction to Jacques Lacan Through Popular Culture
- Jacques Lacan, Ecrits: A Selection (London 1997) p. 58 and p. 121
- Jacques-Alain Miller, "Microscopia", in Jacques Lacan, Television (London 1990) p. xxvii
- Bruce Fink, The Lacanian Subject (Princeton 1997) p. 173
- Miller, p. xxvii
- Seminar XXI, quoted in Juliet Mitchell and Jacqueline Rose, eds., Feminine Sexuality (New York 1982) p. 51
- Oliver Feltham, "Enjoy your Stay", in Justin Clemens/Russell Grigg, Jacques Lacan and the Other Side of Psychoanalysis (2006) p. 180
- Thomas Kuhn, The Structure of Scientific Revolutions (London 1970) p. 175
- John Forrester, "Dead on Time: Lacan's Theory of Temporality", in Forrester, The Seductions of Psychoanalysis: Freud, Lacan and Derrida (Cambridge: C.U.P.), pp. 169–218, 352–370
- Janet Malcolm, Psychoanalysis: The Impossible Profession (London 1988) p. 4
- Jacques Lacan, Écrits: A Selection (London 1996) p. 99
- Bruce Fink, A Clinical Introduction to Lacanian Psychoanalysis: Theory and Technique (Cambridge, MA: Harvard University Press, 1997), p. 18. Snippet view available on Google Books.
- Bruce Fink, A Clinical Introduction to Lacanian Psychoanalysis: Theory and Technique (Cambridge, MA: Harvard University Press, 1997), p. 17. Snippet view available on Google Books.
- de Mijolla, Alain. "La scission de la Société Psychanalytique de Paris en 1953, quelques notes pour un rappel historique". Société Psychanalytique de Paris. Retrieved 2010-04-08.
- Elisabeth Roudinesco, Jacques Lacan (Cambridge 1997) p. 397
- Lacan, Jacques (4 July 1953). "Letter to Rudolph Loewenstein". October 40: 65. ISBN 0-262-75188-7.
- Mikkel Borch-Jacobsen, Lacan: The Absolute Master (1991) p. 120
- Cornélius Castoriadis, in Roudinesco (1997) p. 386
- Sherry Turkle, Psychoanalytic Politics: Freud's French Revolution (London 1978) p. 204
- David Macey, "Introduction", Jacques Lacan, The Four Fundamental Concepts of Psycho-Analysis (London 1994) pp. xiv and xxxv
- R. Horacio Etchegoyen, The Fundamentals of Psychoanalytic Technique (London 2005) p. 677
- Michael Parsons, The Dove that Returns, the Dove that Vanishes (London 2000) pp. 16–17
- Julia Kristeva, Intimate Revolt (New York 2002) p. 42
- David Macey, "Introduction", Jacques Lacan, The Four Fundamental Concepts of Psycho-analysis (London 1994) p. x
- Richard Stevens, Sigmund Freud: Examining the Essence of his Contribution (Basingstoke 2008) p. 191n
- Yannis Stavrakakis, Lacan and the Political (London: Routledge, 1999) pp. 5–6
- "There doesn't seem to be any data on the therapeutic effectiveness of Lacanian psychoanalysis in particular": Roustang, "The Lacanian Delusion"
- e.g. A Shining Affliction, ISBN 978-0-14-024012-2
- Sokal, Alan D. and Jean Bricmont. 1998. Fashionable Nonsense: Postmodern Intellectuals' Abuse of Science. Macmillan, pp. 19, 24
- Roustang, François, The Lacanian Delusion
- "The Cult of Lacan". Richardwebster.net. Retrieved 2011-06-18.
- The Shrink from Hell
- Yannis Stavrakakis, Lacan and the Political (London: Routledge, 1999) p. 142n
- Jacqueline Rose, On Not Being Able To Sleep: Psychoanalysis and the Modern World (London 2003) p. 176
- Philip Hill, Lacan for Beginners (London 1997) p. 8
- Elisabeth Roudinesco, Jacques Lacan (Cambridge 1997) p. 46
- Malcolm Bowie, Lacan (London 1991) pp. 6–7
- Adam Phillips, On Flirtation (London 1994) pp. 161–2
- Jacqueline Rose, "Introduction – II", in Juliet Mitchell and Jacqueline Rose, Feminine Sexuality (New York 1982) p. 56
- Elisabeth Roudinesco, Jacques Lacan (Cambridge 1997) p. 386
- Didier Anzieu, in Sherry Turkle, Psychoanalytic Politics: Freud's French Revolution (London 1978) p. 131
- Jane Gallop, Feminism and Psychoanalysis: The Daughter's Seduction (London 1982) p. 120 and p. 37
- Rosalind Minsky, Psychoanalysis and Gender (London 1996) pp. 175–6
- Simon Crompton, All about Me: Loving a Narcissist (London 1997) p. 157
- École de la Cause freudienne
- World Association of Psychoanalysis
- CFAR – The Centre for Freudian Analysis and Research. London-based Lacanian psychoanalytic training agency
- Homepage of the Lacanian School of Psychoanalysis and the San Francisco Society for Lacanian Studies
- The London Society of the New Lacanian School. Site includes online library of clinical & theoretical texts
- The Freudian School of Melbourne, School of Lacanian Psychoanalysis – Clinical and theoretical teaching and training of psychoanalysts
- Lacan Dot Com
- Links about Jacques Lacan at Lacan.com
- "How to Read Lacan" by Slavoj Žižek – full version
- Jacques Lacan at The Internet Encyclopedia of Philosophy
Dermatology – PowerPoint Presentations & Lectures
You can use these slides as they are, without any modification. Please give the authors the credit they deserve and do not change the author's name. Compare your own slides with the slides given here before you present a seminar. To download these pdf/ppt files, right-click on the link and select "save target as" or "save link as"; in most browsers, a single left click will start the download automatically.
An appeal: If any of you have PowerPoint presentations, please mail them to firstname.lastname@example.org. Please contribute to improving the quality of medical education. Topics: anything related to the medical field, homeopathy, teaching, students, fun or inspiration. Broken links: please report any broken links.
Psoriasis homoeopathic literature view – Dr Jayadeep BP
Systemic Lupus Erythematosus – Steve Beesley
Psoriasis – Dr N C Dhole
Psoriasis – Homeopathic approach – Dr K Shivakumar
Chicken Pox – management revisited – Dr Rosy Phillips
Eczema and Homoeopathy – Dr Sr Binymol Aney Kurian
Eczema and its therapeutics – Dr Vivek N Patil
Craig, Ian P. (2005) Loss of storage water due to evaporation – a literature review. Technical Report. University of Southern Queensland, National Centre for Engineering in Agriculture, Toowoomba, Australia.
[Introduction]: With increasing environmental concern and concentration upon irrigation water use efficiency, there is now considerable pressure upon us all to optimise as far as possible the use of our most precious resource – water. The rate of evaporation is in excess of 2 m per year over most of Australia's landmass, and mean rainfall in Australia is less than 500 mm per year and falling. On such a hot, dry continent, it has been estimated that up to 95% of the rain which falls in Australia is re-evaporated and does not contribute to runoff. Harvested water is commonly stored in small storages and dams, but it is estimated that up to half of this may be lost to evaporation. This represents a huge waste of the resource. The price and value of water are increasing dramatically, and the scarcity of water is the main limiting factor working against agricultural production in Australia.
Item Type: Report (Technical Report)
Item Status: Live Archive
Additional Information: NCEA internal report.
Depositing User: Dr Ian Craig
Faculty / Department / School: Historic – Faculty of Engineering and Surveying – Department of Agricultural, Civil and Environmental Engineering
Date Deposited: 11 Oct 2007 01:12
Last Modified: 02 Jul 2013 22:46
Uncontrolled Keywords: evaporation, farm dam, control of
Fields of Research (FOR2008): 09 Engineering > 0914 Resources Engineering and Extractive Metallurgy > 091499 Resources Engineering and Extractive Metallurgy not elsewhere classified; 09 Engineering > 0905 Civil Engineering > 090509 Water Resources Engineering
Identification Number or DOI: 1000580/0
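The scale of such losses is easy to check with a back-of-the-envelope calculation. The sketch below is illustrative only: the 2 m per year open-water evaporation rate is taken from the abstract above, while the 4 ha surface area and 3 m average depth describe a hypothetical farm dam, not any storage discussed in the report.

```python
# Back-of-the-envelope estimate of annual evaporative loss from a farm dam.
# Assumed figures: a hypothetical 4 ha storage of 3 m average depth; the
# 2 m/yr evaporation rate comes from the abstract above.

surface_area_m2 = 4 * 10_000    # 4 ha expressed in square metres
average_depth_m = 3.0           # assumed average depth
evaporation_m_per_year = 2.0    # open-water evaporation cited above

stored_volume_m3 = surface_area_m2 * average_depth_m
evaporated_m3 = surface_area_m2 * evaporation_m_per_year

print(f"Stored volume:    {stored_volume_m3 / 1000:.0f} ML")
print(f"Annual evap loss: {evaporated_m3 / 1000:.0f} ML "
      f"({evaporated_m3 / stored_volume_m3:.0%} of storage)")
# -> 120 ML stored, 80 ML evaporated per year (67% of storage)
```

This crude model ignores inflows, seepage, and the shrinking water surface as the level drops, so it overstates the fraction actually lost; even so, it makes clear why figures of "up to half" are plausible and why evaporation mitigation attracts so much attention.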
Author Affiliations: Division of Facial Plastic and Reconstructive Surgery, Yeh Facial Plastic Surgery, Laguna Woods, California (Dr Yeh); and Division of Facial Plastic and Reconstructive Surgery, Williams Center Plastic Surgery Specialists, Latham (Dr Williams), and Facial Plastic and Reconstructive Surgery, Division of Otolaryngology–Head and Neck Surgery, Department of Surgery, Albany Medical Center, Albany (Dr Williams), New York.
Objective To evaluate the long-term aesthetic results in patients treated with autologous periorbital lipotransfer.
Methods A retrospective review of 114 consecutive patients who underwent autologous periorbital lipotransfer over a 4-year period. Of these patients, 99 were identified who had complete photographic and medical records and were therefore included in the study. Patients were placed into 5 groups based on their total length of postoperative follow-up. Periorbital volume augmentation was assessed by 3 independent masked evaluators using a standard aesthetic scale from 0 to 2 (with 0 indicating no improvement; 1, mild improvement; and 2, marked improvement). Interobserver agreement was determined by κ correlation, and Mann-Whitney tests were used to assess for statistical significance when comparing the same patients across groups.
Results Scores from the 3 independent evaluators correlated well (κ = 0.316); aesthetic improvement was seen in almost all patients (86.4%) during the first 3 years of follow-up. The degree of improvement decreased each year, and only mild improvement was retained in most patients (68.2%) by the 3-year follow-up point (P = .049).
Conclusions Results from most patients who underwent autologous periorbital lipotransfer demonstrated improvement that lasted as long as 3 years. Autologous periorbital lipotransfer remains a valid and effective technique for periorbital rejuvenation and demonstrates potential long-term effectiveness.
Rejuvenation of the periorbital area continues to be a topic of great interest to surgeons and patients. Aging around the eyes involves a complex series of anatomical and physiologic changes that can be seen in patients as young as their mid-30s and is accentuated in subsequent decades. Multiple techniques have been developed that aim to restore a youthful appearance to the eyes, including novel surgical approaches and several nonsurgical modalities. One of the most popular techniques for periorbital rejuvenation is autologous lipotransfer.
The periorbital area is defined as the skin, soft tissue, and bony structures that surround the eye and include the upper and lower eyelids, the eye-cheek junction extending to the midface, and the infraorbital and supraorbital rims. Although previous reports have described rejuvenation techniques for the upper zone of the periorbital area, this article focuses on the lower eyelid and midface complex. Two of the most defining features of a youthful lower eyelid are shortness and fullness.
This starkly contrasts with a senile eyelid, which clinically appears longer and deflated and demonstrates anatomical features such as laxity, pseudoherniation of orbital fat through a weakened orbital septum, and loss of volume.1 A progressive weakening of the orbital-retaining ligaments and a downward displacement of the zygomatico-cutaneous ligament result in laxity of the lower eyelid over time.2 This physiologic change, combined with the effect of gravity on soft-tissue components, accounts for the appearance of a longer eyelid in the vertical plane (Figure 1). Another prominent anatomical change seen in the senile eyelid is the pseudoherniation of orbital fat as a result of a compromised orbital septum. This frequently results in a significant change in contour over the infraorbital rim, which manifests as an unnatural soft-tissue bulge or convexity. The most recent advancement in our understanding of the aging lower eyelid is the concept of soft-tissue volume loss.

Figure 1. Aging changes of the lower eyelid include weakening of the zygomatico-cutaneous ligament, descent of the suborbicularis oculi and malar fat pads, and loss of periocular volume.

Similar to other areas of the face and body, a loss of volume in the periorbital area results from a reduction in subcutaneous fat, muscle atrophy, and changes in the skeletal framework. In the periorbital area, this process is thought to be secondary to gravitational descent of the suborbicularis oculi and malar fat pads, as well as atrophy of the subcutaneous tissue of the lower eyelid. In approaching the periorbital complex, many facial plastic surgeons have modified traditional blepharoplasty techniques in an effort to restore this lost volume. These techniques have included the subperiosteal midface lift, the suborbicularis oculi fat pad lift, and transposition of orbital fat.3,4 Others have sought to replace this lost volume with various injectable agents that have become increasingly available in the past few years. In the search for the ideal soft-tissue filler, our group turned to periorbital lipotransfer as a primary means of addressing lower eyelid volume loss.

Autologous lipotransfer is a technique by which autogenous fat is harvested from a donor site, typically the abdomen or thighs, with a low-pressure cannula and is specially prepared for reinjection into areas of facial volume loss. In the periorbital area, the fat is carefully delivered to the lower eyelid and midface complex by a specialized technique that allows the deposition of small amounts of fat into tissue planes. Our group has performed hundreds of periorbital lipotransfer procedures during the past few years and has obtained long-term data from this experience. We have performed a comprehensive review of our experience and summarized our long-term data in this article.

One of us (E.F.W.) performed a retrospective review of all patients who underwent autologous periorbital lipotransfer from January 1, 2004, to December 31, 2008. A total of 114 patients were identified as having given informed consent after exclusion of patients who had undergone simultaneous upper facial rejuvenation procedures, including upper blepharoplasty, lower blepharoplasty, browlift, subperiosteal midface lift, chemical peel, or laser resurfacing. Patients also were excluded from the study if they had undergone any subsequent periorbital injections with synthetic products or autologous fat after the initial periorbital lipotransfer procedure.
Of the 114 patients, 99 were identified who had complete photographic and medical records and therefore were included in the study. All preprocedure and postprocedure photographs were standardized for orientation and color (Adobe Photoshop, version CS4; Adobe Systems Inc, San Jose, California), and postprocedure photographs were included at a minimum of 6-month intervals, extending to as long as 4 years of follow-up at the time of analysis. Patients were divided into groups based on their length of total postoperative follow-up: group 1, from 0 months to 1 year; group 2, from 1 to 2 years; group 3, from 2 to 3 years; group 4, from 3 to 4 years; and group 5, from 4 to 5 years. The degree of aesthetic improvement of the periorbital area, which in this study included the lower eyelid and midface complex extending to the malar eminence, was assessed by 3 independent masked evaluators. The periorbital area was given a standard aesthetic rating using previously validated methods from 0 to 2 (with 0 indicating no improvement; 1, mild improvement; and 2, marked improvement) (Figure 2).5,6

Figure 2. Patients before (left) and after (right) periocular autologous lipotransfer. Aesthetic scores of 0 (no improvement) (A and B), 1 (mild improvement) (C and D), and 2 (marked improvement) (E and F).

The procedure is performed with the patient receiving monitored anesthesia via intravenous sedation or receiving local anesthesia and oral anxiolytics. Before the procedure, preoperative photographs are reviewed, and the amount of fat needed is determined. Typically, 6 to 10 mL of processed fat is sufficient. The areas of planned injection and the donor sites are delineated with a surgical marking pen. The 2 most common donor sites are the abdomen and the lateral aspect of the thighs, although other easily accessible areas include the medial aspect of the thighs, the flanks, and the lateral aspect of the buttocks.

A single stab incision is made with a No. 11 blade, and a 15-cm liposuction aspiration cannula with a single port is placed through the incision and directed outward in a fanlike pattern. A tumescent solution containing 1 mL of 1% lidocaine with 1:100 000 epinephrine, 4 mL of 1% plain lidocaine, and 15 mL of isotonic sodium chloride is slowly delivered in a second pass. Typically, two 20-mL syringes of tumescent solution are used for each donor site. After 15 minutes is allowed for maximal vasoconstrictive effect, the same liposuction aspiration cannula is affixed to a 10-mL Luer-Lok syringe (BD, Franklin Lakes, New Jersey). Manual aspiration of fat is then performed with hand suction only, using repetitive forward and backward movements of the cannula, with the nondominant hand directing and maintaining the cannula in the proper subcutaneous tissue plane. After aspiration, the stab incision is closed with a 5-0 fast-absorbing gut suture, and the procedure is repeated on the contralateral side. At the conclusion of fat retrieval, a compression dressing is wrapped around the donor site to reduce postprocedure swelling.5

The plungers of the 10-mL syringes filled with aspirated fat are removed, and a metal stopper is placed on the ends of the syringes. These syringes are then centrifuged for 13 minutes at 3500 rpm. After centrifugation, the stoppers are removed to allow drainage of the serous fluid (which contains tumescent solution and blood), and the superficial oil layer is removed by wicking with a semimoist 10.16 × 10.16-cm gauze sponge.
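As a cross-check on the dilution arithmetic implied by the tumescent recipe above (a minimal sketch of our own, not part of the published protocol):

```python
# Final concentrations of the tumescent mix described above:
# 1 mL of 1% lidocaine with 1:100,000 epinephrine, 4 mL of 1% plain
# lidocaine, and 15 mL of isotonic sodium chloride (20 mL total).
lidocaine_mg = (1 + 4) * 10.0   # 1% lidocaine = 10 mg/mL
epinephrine_ug = 1 * 10.0       # 1:100,000 epinephrine = 10 micrograms/mL
total_ml = 1 + 4 + 15

print(f"Lidocaine: {lidocaine_mg / total_ml:.2f} mg/mL (a 0.25% solution)")
print(f"Epinephrine: 1:{int(1_000_000 * total_ml / epinephrine_ug):,}")
# -> Lidocaine: 2.50 mg/mL (a 0.25% solution); Epinephrine: 1:2,000,000
```

In other words, the mix works out to roughly quarter-strength lidocaine with a 1:2,000,000 epinephrine dilution, on our reading of the quantities given.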
The usable fat is transferred from the 10-mL syringe into individual 1-mL Luer-Lok syringes using a Luer-Lok transfer device. Nerve blocks of the infraorbital and zygomaticofacial nerves are performed, in addition to anesthetizing the stab incision site, using 1% lidocaine with 1:100 000 epinephrine. An 18-gauge NoKor needle (BD) is used to create small stab incisions at the sites of entry, typically at the malar eminence and inferolateral to the lateral canthus. A combination of 0.9-mm and 1.2-mm blunt-tipped fat injection cannulas (Tulip Medical Products, San Diego, California) is then attached to the syringe in preparation for fat transfer. The fat is injected at many different angles in a fan technique with multiple small back-and-forth passes. This allows for the initial deposition of a minimal amount of fat (ie, 0.03-0.05 mL per pass) in the deep plane just superior to the periosteum of the infraorbital rim. Additional fat is injected into a more superficial plane within the subcutaneous tissue of the lower eyelid, with careful attention to avoid bolus deposition and an overly superficial injection directly under the lower eyelid skin. Next, attention is turned to the malar eminence, and fat is injected into and along the superior borders of the zygomaticus major, zygomaticus minor, and levator labii superioris muscles and the malar fat pad from an entry site at the inferior region of the muscles. A total of 3 to 5 mL of fat is typically injected into each side. At the conclusion of fat injection, the face is cleansed with isotonic sodium chloride solution, and a small amount of bacitracin ointment is placed on each of the stab incisions. The injected areas are aggressively iced for the first 48 hours to decrease edema and ecchymoses.

Median scores of the independent evaluators' aesthetic ratings were calculated for each group, and interobserver agreement was assessed by κ analysis. Periorbital aesthetic rating scores were then subjected to Mann-Whitney tests by comparing the same patients in each group over time.

The most common complication from periorbital autologous lipotransfer was prolonged postoperative edema that lasted longer than 2 weeks. Other complications included ecchymosis from the stab incisions and the skin overlying the subcutaneous injection tunnels, undercorrection, and minor tissue irregularities and asymmetries.

The study group consisted of 94 women and 5 men, with postoperative follow-up of 6 months to 4 years (mean, 19 months). Patient age ranged from 35 to 71 years (mean, 51 years). The amount of fat injected into the periorbital area for each side ranged from 3 to 5 mL (mean, 4.1 mL). On the basis of total length of postoperative follow-up, 99 patients were in group 1 (ie, 0-1 years of follow-up), 46 patients in group 2 (ie, 1-2 years of follow-up), 22 patients in group 3 (ie, 2-3 years of follow-up), 7 patients in group 4 (ie, 3-4 years of follow-up), and 2 patients in group 5 (ie, 4-5 years of follow-up).

Data from the 3 independent evaluators correlated well, with a κ value of 0.316. Surgical results from almost all patients (94 [94.9%]) in group 1 demonstrated positive aesthetic ratings (median score, 1) that were closely divided between mild and marked improvement: 5 (5.1%) received a no-improvement rating, 50 (50.5%) received a mild-improvement rating, and 44 (44.4%) received a marked-improvement rating.
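For readers who want to reproduce this style of analysis, the sketch below shows how the two statistics reported here (interobserver κ agreement and the Mann-Whitney group comparison) are commonly computed with statsmodels and SciPy. The rating arrays are invented placeholders, not the study data.

```python
# Minimal sketch of the reported analyses; the ratings below are invented
# placeholders, not the study's data.
import numpy as np
from scipy.stats import mannwhitneyu
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# Rows = patients, columns = the 3 masked evaluators, on the 0-2 scale.
ratings = np.array([
    [2, 1, 2],
    [1, 1, 1],
    [0, 1, 0],
    [2, 2, 1],
    [1, 2, 1],
])

# Fleiss' kappa generalizes Cohen's kappa to more than 2 raters.
table, _ = aggregate_raters(ratings)
print(f"kappa = {fleiss_kappa(table):.3f}")

# Compare aesthetic scores for the same patients at two follow-up points.
scores_group1 = [2, 1, 2, 2, 1]  # e.g., ratings at up to 1 year
scores_group3 = [1, 1, 0, 2, 1]  # e.g., ratings at 2 to 3 years
stat, p = mannwhitneyu(scores_group1, scores_group3)
print(f"Mann-Whitney U = {stat}, P = {p:.3f}")
```

Strictly speaking, repeated ratings of the same patients are paired data, for which a Wilcoxon signed-rank test is often preferred; the sketch simply mirrors the Mann-Whitney approach the authors report.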
In group 2, results from 38 of 46 patients (82.6%) demonstrated a positive aesthetic rating (median score, 1): those from 8 patients (17.4%) received a no-improvement rating, those from 24 patients (52.2%) received a mild-improvement rating, and those from 14 patients (30.4%) received a marked-improvement rating (Figure 3). A Mann-Whitney test comparing the aesthetic scores of the same 46 patients in groups 1 and 2 showed no significant decrease in improvement over time (P = .20).

Figure 3. Aesthetic scores for results from patients in each group. A, Group 1, up to 1-year follow-up; B, group 2, 1-year to 2-year follow-up; C, group 3, 2-year to 3-year follow-up; D, group 4, 3-year to 4-year follow-up; and E, group 5, 4-year to 5-year follow-up.

Similar to group 2, a total of 19 of 22 patients (86.4%) in group 3 demonstrated a positive aesthetic rating (median score, 1). A trend away from marked improvement was observed in this group: results from 3 patients (13.6%) received a no-improvement rating, those from 15 (68.2%) received a mild-improvement rating, and those from 4 (18.2%) received a marked-improvement rating. The results of Mann-Whitney tests comparing the aesthetic scores of the same 22 patients in groups 1 and 2, as well as groups 2 and 3, showed no significant change (P = .20). However, a comparison between groups 1 and 3 reached statistical significance (P = .049), indicating that the degree of improvement seen in the first year had decreased significantly by the 2-year to 3-year follow-up point.

The aesthetic ratings from groups 4 and 5 are shown in Figure 3, but comparisons with other groups could not be made because of the small number of study participants (n = 7 and n = 2, respectively).

The periorbital area is one of the first facial regions to show signs of aging. Traditional techniques, such as lower eyelid blepharoplasty, result in aesthetic improvement but frequently do not sufficiently address the component of periorbital volume loss. Furthermore, a blepharoplasty can result in a hollowed or skeletonized appearance of the infraorbital rim, which may become more pronounced over time if aggressive resection of the orbital fat pads and orbicularis muscle is performed. Complementary procedures, such as the suborbicularis oculi fat pad lift, lower eyelid fat transposition, and the subperiosteal midface lift, can produce some improvement in lower eyelid contour, but we have found that direct volume replacement with autologous lipotransfer produces the most aesthetically pleasing results.2 Although surgeons have used autologous lipotransfer for many reconstructive efforts and in different areas of the face, our study was specifically designed to examine the aesthetic improvement from autologous lipotransfer solely in the periocular area.

Neuber7 pioneered the lipotransfer procedure in 1893 when he used fat to fill facial defects formed by tuberculosis. Miller8 described the infiltration of fat using cannulas in 1926, but the further development of lipotransfer did not occur until the 1970s and 1980s, with the advent of liposuction procedures. Since then, several authors9,10 have reported on the use of autologous fat as a soft-tissue filler for reconstructive purposes, including treatment of hemifacial atrophy, congenital anomalies, and acquired defects.
In the 1990s, our understanding of lipotransfer was significantly advanced by Coleman11,12 with the development of modern techniques, including gentle removal and handling of aspirated fat and injection techniques using small volumes of fat and multiple passes. Fournier,13 Donofrio,14 and others have contributed to our collective understanding of periorbital aging, appropriately emphasized the importance of volume preservation in surgical technique, and illustrated the success of volume restoration using fat.

Although autologous lipotransfer has been widely accepted as a successful technique to address facial volume loss, a frequent criticism of this procedure is the uncertainty regarding the longevity of its results. This viewpoint is reasonable given the many reports that show only impressive short-term results (ie, <6 months) or claim long-lasting improvement without corroborating data. Furthermore, disparity exists in the literature regarding long-term outcomes, with reported survival rates of transplanted fat varying from 10% to 90%.15,16 In fact, the report by Ersek15 of 10% survival of fat grafts at 3 years after surgery has been disputed by Coleman16 and others in the literature. A study by Pinski and Roenigk17 reported that fat surviving at 6 months will continue to endure, while a pivotal study by Guerrerosantos18 demonstrated 3-year to 5-year survival of fat. Anderson et al19 studied periocular fat graft survival in patients undergoing reconstructive surgery in which soft-tissue volume augmentation was needed in postenucleation socket syndrome. Their findings included evidence of healthy adipose cells, with some chronic inflammation and fibrous septa, in patients longer than 3 years after surgery. Also, survival of fat grafts for as long as 5 years has been documented in an animal study20 with histologic analysis of transplanted adipocytes.

Although fat harvesting and processing techniques vary widely in the literature, no studies exist, to our knowledge, that directly compare a purported technique and its direct effect on fat longevity in the periocular region. However, most recent studies, such as those by Anderson et al19 and Meier et al,21 as well as our own, use an autologous periorbital lipotransfer technique based on 3 consistent technical considerations. First, the use of a small blunt-tipped harvesting cannula, such as the Coleman cannula, has allowed for more reliable harvesting of viable fat particles using gentle negative manual pressure with the cannula attached to a Luer-Lok syringe. Second, the processing of the harvested fat with the centrifugation technique has allowed for separation of the fat aspirate into its various components: enriched fat, blood particles with tumescent solution, and the supernatant oil layer. With this separation into distinct layers, injection of only the enriched fat into the periorbital area allows for the elimination of the other components and a theoretical reduction of inflammatory mediators. The third technical consideration is the use of fine-tipped reinjection cannulas, such as the 0.9-mm cannula (Tulip Medical Products). In our experience, this has allowed for precise placement of small amounts of autologous fat via multiple passes, which has directly resulted in fewer complications, such as the formation of nodules, granulomas, and larger fat boluses.
The major objectives of our study were to measure the degree of aesthetic improvement from periocular lipotransfer in patients and to see whether the results changed over time. We found that results from almost all patients (94.9%) improved with periocular fat transfer at 1 year relative to their preoperative photographs. Within this first year of follow-up, results from 44.4% of patients showed marked improvement and those from 50.5% showed mild improvement. When we examined how these results changed over time by comparing them with those from the same 46 patients who had as long as 2 years of follow-up, we found that results from 82.6% of patients continued to show improvement (ie, those from 30.4% showed marked improvement and those from 52.2% showed mild improvement). Statistical comparisons confirmed that the aesthetic improvement persisted into the second year of follow-up by demonstrating no significant difference in aesthetic scores for the same patients between groups 1 and 2.

By the time patients had reached as long as 3 years of follow-up, our data begin to reflect some degree of fat loss in the periocular area. Although results from 86.4% of patients showed improvement, a trend away from marked improvement was observed (ie, in 18.2% of patients), and results from most patients demonstrated only mild improvement (ie, in 68.2% of patients; P = .049). The results from our study, therefore, confirm that autologous lipotransfer can create improvement in the periocular area that can persist for years, but the degree of improvement tends to decrease by a significant amount by the third postoperative year.

Limitations of the present study include those inherent to retrospective reviews in which patient follow-up is limited over time. Selection bias may be present, particularly in the longer follow-up groups, because our data exist only for those patients who came back year after year for postoperative follow-up and photography. It is possible that we were able to follow up a percentage of those patients because of the positive experience they had with their periorbital lipotransfer results, and our data may not reflect the true rate of suboptimal results among patients who could not be followed up. These limitations could be minimized in the future by a prospectively designed study with a predetermined follow-up protocol.

Another potential limitation is that the data are derived from a subjective aesthetic analysis using 2-dimensional photography. Although we have attempted to minimize this limitation through analysis by 3 independent masked evaluators and use of consistent photography, recently developed technology may prove useful. In a recent study, Meier et al21 attempted to quantitate the longevity of autologous fat grafting in the midface by using 3-dimensional imaging software. In that study, the authors followed up 33 patients for a mean time of 16 months, with an average of 10.1 mL of autologous fat injected into each midface region. The authors found an average of 31.8% residual volume augmentation at their patients' last postoperative visit. The results of the Meier et al21 study cannot be directly compared with those of our study because patients undergoing complementary procedures, such as rhytidectomy or blepharoplasty, were not excluded, and the authors specifically studied fat grafting in the midface. However, their approach, quantitating the amount of fat survival through image analysis, is unique.
We believe that the data from our study are important and beneficial to patients and surgeons because our analysis of the degree of improvement over time can be used in consultations with prospective patients who are considering periocular lipotransfer. The challenge with the data from the study by Meier et al21 and others22 who used alternate imaging techniques, such as magnetic resonance imaging to measure volume retention, is to relate the percentage of fat survival to the aesthetic outcome of an individual patient. In other words, it may be difficult for a surgeon or a patient to translate a percentage of fat survival into an informed opinion regarding whether the procedure should be considered successful and whether the aesthetic results have maintained an acceptable level of improvement over time.

Anecdotally, we and other surgeons believe that risk factors for premature absorption of the transplanted fat include smoking, excessive exercise, and the extremes of age (ie, <30 and >70 years). Although sufficient data are not available to substantiate this claim, a scientific basis is plausible given the severe vasoconstrictive effect of smoking and the altered metabolism seen in young and elderly patients. One could postulate that the effect is then magnified by the fragility of transplanted fat, with its limited and tenuous vascular supply.

Another point of debate is whether interval treatment with an additional transfer of fat is necessary to maintain an acceptable degree of volume replacement. Lam et al23 recently reported that in most cases, fat grafting endures after a single treatment, but an additional touch-up session is often necessary to achieve the optimal result. However, the authors caution that a touch-up procedure performed sooner than 6 months after surgery may be premature because results "continue to mature and improve over 6 months to 2 years." Although the scientific basis for this claim is unclear, proposed mechanisms include stem cell rejuvenation in neighboring tissues and positive metabolic effects on the overlying skin from estrogen-laden fat donor cells. In the study by Meier et al,21 8 of 33 patients (24.2%) required an additional touch-up procedure. In our study, the data reflect only patients who had undergone 1 periorbital lipotransfer procedure.

Complications of periorbital autologous lipotransfer are infrequent but may be technique dependent. A moderate learning curve is associated with periorbital fat transfer to avoid complications such as the formation of visible lumps or gross asymmetries as a result of bolus deposition of fat or injection into too superficial a tissue plane. These problems are best avoided by placing small amounts of fat (ie, 0.03-0.05 mL) via multiple passes and a slow delivery technique within a fat pad or just superficial to the periosteum. Novice injectors may struggle with undercorrection or overcorrection until becoming comfortable performing this procedure. Although ecchymosis and prolonged edema may be evident in some patients (typically, 15%), other potential complications of this technique, such as infection, hyperpigmentation of the overlying skin, lower eyelid deformity, or vascular embolization with visual loss, are rare.

Currently, periorbital volume restoration can be achieved with the injection of a variety of available dermal filling agents, including hyaluronic acid derivatives (Restylane [Medicis Aesthetics Inc, Scottsdale, Arizona] and Juvederm [Allergan Inc, Irvine, California]).
A calcium hydroxylapatite derivative (Radiesse; Bioform Medical Inc, San Mateo, California) can be effective when placed along the upper midface and malar eminence, but care must be taken when approaching the infraorbital rim given the thicker consistency of the product. Similarly, a poly-L-lactic acid dermal filler (Sculptra; sanofi-aventis, Bridgewater, New Jersey), which was recently approved by the Food and Drug Administration, must also be used with caution because of the potential for granuloma formation in the periocular region, although interest has surfaced given its reported potential for longer-term efficacy. Previous authors24 and our group have learned the importance of thorough dilution of poly-L-lactic acid dermal filler with water, as well as of avoiding depot injections close to the orbital rim, to prevent complications. Although the potential for limited recovery time with these nonsurgical injections is a distinct advantage, all these dermal fillers are limited by their effective longevity, which ranges from 6 months to 1 year (possibly longer with poly-L-lactic acid dermal filler).

Despite the wide availability of these alternative facial volume fillers, the advantages of autologous fat in the rejuvenation of the periorbital region are numerous: a sufficient supply of adipose tissue in most patients; lower expense compared with specially prepared synthetic dermal fillers; the biocompatibility, safety, ease of harvesting, and ready availability of fat; minimal tissue reaction caused by fat transfer; and the potentially longer-lasting volume restoration provided by fat transfer.

In summary, our experience has led us to believe that the role of volume replacement in periorbital rejuvenation is critical in patients who clearly demonstrate lower eyelid aging due to periocular volume loss. In our experience, periocular volume replacement is achieved most reliably with autologous lipotransfer. Periorbital lipotransfer appears to provide a longer-lasting result than other available dermal fillers. Not only does this benefit the patient aesthetically over time, but it also allows for a potentially lower financial cost to the patient in the long term by avoiding annual repeated injections. Furthermore, the ready availability of additional autologous fat permits the surgeon to contour the periorbital complex appropriately without concern for undercorrection resulting from a limited supply of an alternative dermal filler.

Correspondence: Cory C. Yeh, MD, Division of Facial Plastic and Reconstructive Surgery, Yeh Facial Plastic Surgery, 24331 El Toro Rd, Ste 350, Laguna Woods, CA 92637 (info@YehFacialPlasticSurgery.com).

Accepted for Publication: March 15, 2011.

Author Contributions: Study concept and design: Yeh and Williams. Acquisition of data: Yeh and Williams. Analysis and interpretation of data: Yeh and Williams. Drafting of the manuscript: Yeh and Williams. Critical revision of the manuscript for important intellectual content: Yeh and Williams. Statistical analysis: Yeh. Administrative, technical, and material support: Yeh and Williams. Study supervision: Williams.

Financial Disclosure: None reported.

Previous Presentation: This study was presented at the 2010 American Academy of Facial Plastic and Reconstructive Surgery Fall Meeting; September 24, 2010; Boston, Massachusetts.
Rather than sit on this post any longer, I figured I'd chop it into several pieces. The remaining segments (at the moment, I have outlines sufficient for two or three) will be spread over the next five to ten days, with unrelated interludes.

Conrad points us to a superb discourse by Gawain on the aesthetic play of the Heian aristocracy, part historico-literary reflection, part speculative construction of a new community. The essay grows from a kernel of Conrad's musings about the potential for a new freemasonry of intellectually kindred spirits:

I want a Republic of Letters. Not so much a movement. More a society. I guess the germ of my thought was, there are all these intellectual types all over the world–some might be writers or artists, others scholars, and others even accountants or concrete engineers–but they all like thinking and reading–maybe they all have a dry sense of humour, somewhat cynical–but they don't know too many like themselves. [ . . . ] The interpersonal connections would be not merely by chance, as is the case with most friendships, but through adherence to certain common beliefs–though unlike in a movement or artists' group, there would be no unified goals, no "head", no one purpose.

Thoth's glabrous beak! That certainly strikes a resonant chord of longing in my own crabbed and introverted heart.

When I lived in Ann Arbor, my housemate organized regular meetings of a small salon. Her personal style had the exuberant force of a deviant extrovert, and she attracted many peculiar, interesting people into her ambit. Now, I live in the isolation of suburbia, many states away. My circle of friends is slight, my circle of kindred spirits smaller yet. Over the past decade I've wondered how I might go about assembling a similar salon. As much as anything else, this led me to start blogging, to take a few uncertain steps toward thinking in public.

Umberto Eco, in his Postscript to The Name of the Rose, writes about the process of constructing his novel:

After reading the manuscript, my friends and editors suggested I abbreviate the first hundred pages, which they found very difficult and demanding. Without thinking twice, I refused, because, as I insisted, if somebody wanted to enter the abbey and live there for seven days, he had to accept the abbey's own pace. If he could not, he would never manage to read the whole book. Therefore, those first hundred pages are like a penance or an initiation, and if someone does not like them, so much the worse for him. He can stay at the foot of the hill. [. . .] What model reader did I want as I was writing? An accomplice, to be sure, one who would play my game.

With every post we take another step toward creating our own ideal readers, trying to find those who want to play the same games. The Republic of Letters (as fine a provisional name as any) is aimed at a particular disjunction of characteristics, a certain type of thinker and reader who dreams of colloquy with those of like mind. Of course, the Venn diagrams never align perfectly. Indulge in too much non-compatible bathos and you risk reducing your ideal readership to one: yourself.

Memos for the New Salon

In a collection of undelivered lectures entitled Six Memos for the Next Millennium, Italo Calvino expounded on five of the six characteristics that he most admired in writing: lightness, quickness, exactitude, visibility, multiplicity, and consistency. Perhaps a consideration of these can help us in plotting out what this nascent salon might look like.
Gawain writes of sprezzatura, of learning worn lightly, of studious unprofessionality. This points to an important aspect of the Republic: it is not a professional society. Lack of professional-level knowledge in a particular domain should not be a bar to entry. One rarely hears the term ‘intelligent layman' these days, and ‘generalist' tends to imply ‘dilettante'.

One of the fundamental skills required is appreciation. Through this, we can strive for the elevation of mind that comes from having a really good conversation: inspiration, a spur to continue one's own learning, the pleasures of philosophy and of having one's philosophy challenged.

Perhaps there's another aspect, as well. There's something to aim for beyond the mere odor of learning; the goal of this society (and its proceedings, whatever form they may take) should not be to clothe itself with Literature as decoration, as mere signifier of erudition or plumage for the dances of mating and pecking order.

Perhaps this is not a natural stopping point, but I shall publish this as it stands and pause for reflection. In the next essay of this series, I'll continue with a consideration of Calvino's five remaining characteristics, and perhaps move on to some thoughts about intellectual game-playing.
When you’re a blogger, you really notice how fast time flies. I used to post here once a day, and then once a week, and then twice a month. But the time between posts kept flying by so fast that posting once a month felt the same as posting every day. I’d have that initial feeling upon posting of, ahhh, I’ve accomplished something here, and with the blink of an eye a month had passed, and it was time to post something again.

Doesn’t it seem this way for everything, though? Friday’s here, and then before you know it, it’s Monday again, then Wednesday — almost there! — and finally, Friday again. Didn’t we just go to ballet class, like, yesterday? Karate time already?

They say time goes faster as you get older, and that it’s based on a ratio — something like your age to the time period. So the length of a day is minuscule to me at my ripe old age, but long to my six-year-old:

1 day : 30+ years vs 1 day : 6 years

Makes sense. Either that, or time is, as my daughter would say, literally speeding up.

The same thing happens between my yoga practices. (Sidenote: I’m all excited this morning because it’s Saturday, and we don’t traditionally practise on Saturdays — and after a holiday week of practising alone on my mat in the cold hall here, my body is aching and I need the break; but, Sunday fast approacheth….) Six days a week, I go to the yoga shala, sweat it out on the mat like a crazed wannabe contortionist, and it’s done, and I’m relieved and proud of myself, and before you know it, it’s 6 a.m. again, and I’m giving my friend Sergio the thumbs up before rolling out my mat again. And again. It’s sooo Groundhog Day.

For 2014, I want to slow things down a bit. As a busy, Ashtangi, working mom, it’s really tempting to just get through the week, get to Saturday — when I can rest, read novels, eat take-out, Staaaarbucks, stay in bed a bit longer. I’m not sure exactly how to slow down all the in-between, but I know it’ll involve more savouring — moments, cuddles, steps taken from the car to the schoolyard, mouthfuls, breaths.

“Betty has a yellow tooth,” R says, as he wakes poor Betty White from her slumber. Grrrr… J rubs her toes along the frame of my computer screen, watching as I type. My dad’s here. Saturday morning Power Rangers is on. “Don’t put that in your mouth, R.”

Thanks to social media, we’re all bloggers now. Many of you used to look at me like I was nuts when you learned I was a “mom blogger” back in 2006. But you’re all blogging now. It’s a bit much to sit down and craft an entire blog post; many of us old-school bloggers realize that now with the advent of microblogging, so we post way less on our blogs and blend in with the masses on Twitter and Facebook, Instagram, texting, emailing. We’re all documenting our lives now and reading others’ documentations, but is this savouring — or is it impulse? I’m not sure.

At the same time, though, weren’t our parents documenting as much as they could in their own way? Whipping out the video cam whenever the chance came? We can’t even view all those old video tapes anymore. Me in my Olive Oyl tee and pigtails hugging Minnie Mouse, waving, “Hi, Mama!”

It’s human nature to tell stories. We’ve been doing it since the daaaawn of tiiiime. The stories, and the way they’re told, are what most define civilizations. Technology (aka Apple) has tapped into that innate human drive and exposed it, exploited it, monetized it. And it’s awesome!
But, like resisting that Starbucks grande-soy-no-water-tazo-chai (see how it rolls off the tongue?), we/I/Josh-O have to exercise conscious control around it. That’s also why I don’t blog here so much. I practically live on the Internet, Twitter, Facebook all day as Writer/Editor/“Social Media Queen” at Today’s Parent. It’s my job. Storytelling, editing stories, tweeting, Facebooking stories. I need to unplug at the end of the day.

Still, as postmodern literature so expertly shows us, it’s always about the storytelling. But we need story, too. Life can’t all just be about the telling.

There are two blue jays outside. We’re wondering how they’re surviving in the 20-below weather… “Mama! This snow looks like ice cream.”

Six days a week (not including Saturdays, New Moon and Full Moon days — according to the tradition), I practise Ashtanga yoga. That’s 1.5 hours of being, barefoot on a mat: no stories; no storytelling. I do this every morning like clockwork. Among other things, it preps me to be more present and aware throughout the day, to be here now, seeing through my own eyes, not those of prospective readers. Maybe you get the same rewards jogging, meditating, drawing, playing hockey, reading poetry, birdwatching. It’s important, I think.

Don’t get me wrong, I love the storytelling. Am obsessed. But compulsive storytelling is so 2008! We need to be conscious storytellers, and above all, to live the story, too — savour the little moments… and be together without story.

Though I’m happy it’s Saturday, my Sunday practice and back-to-work Monday loom on the horizon. But my daughter wants to sit with me — just be with me — and I’m going to savour it now.

Happy, healthy, savour-y 2014!
The “do-gooders” are back. Uncritical reporters and correspondents describe them as “humanitarian-brave-heroic-peacekeeping-volunteers.” They are members of the International Solidarity Movement, praised this week by George Monbiot, a columnist in The Guardian (UK), as displaying “extraordinary courage and self-sacrifice” to protect Palestinian “civilians by making hostages of themselves.”

ISM volunteers are, in fact, a branch of a sophisticated Palestinian propaganda effort. The ISM website (http://www.palsolidarity.org/) proclaims its full name as “International Solidarity with the Palestinian People.” According to its own publicity, ISM is “a Palestinian-led coalition of Palestinians and foreign civilians.” And while ISM’s own literature and spokesmen claim that they support non-violent “resistance to Israel’s occupation,” they also openly state, “We recognize the Palestinian right to resist Israeli violence and occupation via armed struggle.”

According to The Guardian, ISM “protesters have moved into the homes of people threatened with bombardment by the Israeli army, ensuring that the soldiers cannot attack Palestinians without attacking foreigners too. They have been sitting in the ambulances taking sick or injured people to hospital, in the hope of speeding their passage through Israeli checkpoints and preventing the soldiers from beating up the occupants. They have been trying to run convoys of food and medicine into neighborhoods deprived of supplies…”

Two members of the International Solidarity Movement have received considerable press attention for their work: Adam Shapiro of New York and Tzaporah Ryter of Minnesota. The press invariably points out that the two are Jewish.

Writing in various Palestinian publications, Ryter claimed last week that she witnessed Israeli soldiers in Ramallah “shooting at the women and children… They were chasing down people, hunting them like that in the fields.” Ryter’s claims were not verified by any of the legions of reporters or international observers surrounding Ramallah. Ryter has a record of slandering Israel. Writing in a local Minnesota newspaper last June, she called Israel “racist… fascist… war criminals” and accused Israel of “ethnic cleansing.”

Adam Shapiro received international attention last week after smuggling himself into Yasir Arafat’s compound and sharing breakfast with the Palestinian leader. After interviewing Shapiro’s family, reporters wrote that Adam was influenced by the teachings of Mahatma Gandhi. Strangely, that is not what Adam Shapiro and his fiancée, Huwaida Arraf, wrote in January in a Palestine Chronicle manifesto on the use of both violence and non-violence against Israel. They wrote: “While we do not advocate adopting the methods of Gandhi or Martin Luther King, Jr., we do believe that learning from their experience… can be quite valuable and of great utility. The Palestinian resistance must take on a variety of characteristics — both nonviolent and violent. But most importantly it must develop a strategy involving both aspects. No other successful nonviolent movement was able to achieve what it did without a concurrent violent movement… in India militants attacked British outposts and interests while Gandhi conducted his campaign.”

Shapiro and Arraf, who is listed as a coordinator on the ISM website, wrote in the Palestine Chronicle: “[W]e accept that Palestinians have a right to resist with arms, as they are an occupied people upon whom force and violence is being used.
The Geneva Conventions accept that armed resistance is legitimate for an occupied people, and there is no doubt that this right cannot be denied.”

The International Solidarity Movement is online at:

HonestReporting recommends monitoring the media closely to see how they present ISM’s partisan propagandists. These are not the pacifists they pretend to be. Notice how they do not sit in Tel Aviv cafes or Jerusalem pizzerias to protect Israelis from homicide bombers. They are disciples of Arafat, who declared in the UN in 1974, “I have come bearing an olive branch and a freedom fighter’s gun.”

Read The Guardian’s paean to the International Solidarity Movement at:

HonestReporting also objects to The Guardian’s main editorial on April 9, which says: “Ignoring US, European and regional demands to desist, Israel continues to pursue a campaign of terror across the Palestinian territories… Mr Sharon must eschew the paths of terrorism and return to his senses — or stand aside.”

The Guardian’s choice of words — calling Israeli actions “terror” — is noteworthy, considering how The Guardian has gone out of its way to avoid using the word “terror” to describe the perpetrators of the 63 suicide bombings against Israeli civilians over the last six months.

====== MSNBC – AGAIN ======

MSNBC.com columnist Michael Moran is at it again, dredging up old arguments against HonestReporting. He writes that HonestReporting “falsely claimed that [Moran] responded to them with obscenities.” Yet HonestReporting has on file emails sent by Moran in which he says “f*** you” and calls HonestReporting members “abusive… idiots… irresponsible… baseless… poisoned…”

The good news is that Moran’s latest column, linked on the MSNBC.com homepage, has produced 1,000 new sign-ups to HonestReporting.
Andrew Saint's review of A Guide to the New Ruins of Great Britain for the Times Literary Supplement has some nice things to say, and many criticisms. For Saint, A Guide to the New Ruins of Great Britain is

no true guidebook at all but a ranting, panting travelogue eked out with provocatively scruffy little photographs ... [Hatherley] doesn't say much that is perceptive because he doesn't really look. He is in much too much of a hurry to place them in cultural context, say something flip, move on and weave his slashing narrative. Haste is both this book's virtue and its vice. It gives it a vitality and immediacy, but does not make for mature criticism ... its instant and local value is enormous. It destroys shibboleths, and its anger, zest and articulacy make one think.

Saint also remarks on the author's "macho façade and ... semblance of hectic movement." Saint, the general editor of the Survey of London, part of English Heritage's Research Department, then attempts "to define the shape of Hatherley's cultural baggage":

Architecture for Hatherley must be hard, sincere, obtrusive, if possible outrageous, by preference connected to the puritan heyday of the welfare state ... Just as for Betjeman the supreme experience might be evensong in a Comper church menaced by an urban motorway, so for Hatherley it is wandering through the deserted Sheffield Markets with hard-rock tracks in his ears, or talking to ex-punks who remember the last days of Hulme.

The Times Literary Supplement website is "under construction." This review appears in the edition of Friday 28 January 2011.

Benjamin Kunkel has written a lengthy article on David Harvey for the London Review of Books. Nominally a joint review of his recent books The Enigma of Capital and A Companion to Marx's Capital, it engages with Harvey's entire body of work, and especially his seminal The Limits to Capital:

Over recent decades, the landmarks of Marxian economic thinking include Ernest Mandel's Late Capitalism (1972), David Harvey's Limits to Capital (1982), Giovanni Arrighi's Long 20th Century (1994) and Robert Brenner's Economics of Global Turbulence (2006), all expressly concerned with the grinding tectonics and punctual quakes of capitalist crisis. Yet little trace of this literature, by Marx or his successors, has surfaced even among the more open-minded practitioners of what might be called the bourgeois theorisation of the current crisis.

In a critical review of John A. Hall's Ernest Gellner: An Intellectual Biography for The New Republic, John Gray opens by agreeing with Hall on one particular point—that Gellner was an exceptionally honest thinker:

John A. Hall concludes his account of Ernest Gellner by observing that his outlook on the world was austere. "But therein lies its attraction," he goes on. "Not much real comfort for our woes is on offer; the consolations peddled in the market are indeed worthless. What Gellner offered was something more mature and demanding: cold intellectual honesty." Brief personal impressions are rarely conclusive, especially when recalled after many years; but that Gellner was an exceptionally honest thinker is beyond reasonable doubt.

The Millions, one of the US's most respected literary sites, has called The Art of Asking Your Boss for a Raise one of the most anticipated books of 2011, noting that

We readers will have to deal with the fortunate burden of clearing shelf-space for another novel by Perec this spring, with the first English translation of The Art of Asking Your Boss for a Raise.
Visit The Millions to see the full list of recommended reading for 2011.
The prohibition of the use of force is one of the most crucial elements of the international legal order. Our understanding of that rule was both advanced and challenged during the period commencing with the termination of the Iran-Iraq war and the invasion of Kuwait, and concluding with the invasion and occupation of Iraq. The initial phase was characterized by hopes for a functioning collective security system administered by the United Nations as part of a New World Order. The liberation of Kuwait, in particular, was seen by some as a powerful vindication of the prohibition of the use of force and of the UN Security Council. However, the operation was not really conducted in accordance with the requirements for collective security established in the UN Charter. In a second phase, an international coalition launched a humanitarian intervention operation, first in the north of Iraq, and subsequently in the south. That episode is often seen as the fountainhead of the post-Cold War claim to a new legal justification for the use of force in circumstances of grave humanitarian emergency - a claim subsequently challenged during the armed action concerning Kosovo. There then followed repeated uses of force against Iraq in the context of the international campaign to remove its present or future weapons of mass destruction potential. Finally, the episode reached its controversial zenith with the full-scale invasion of Iraq led by the US and the UK in 2003. This book analyzes these developments, and their impact on the rule prohibiting force in international relations, in a comprehensive and accessible way. It is the first to draw upon classified materials released by the UK Chilcot inquiry, shedding light on the decision to go to war in 2003 and the role played by international law in that context.

Saturday, September 4, 2010

Friday, September 3, 2010

Part I: Topics
- Christoph Ohler, International Regulation and Supervision of Financial Markets After the Crisis
- Christoph Herrmann, Don Yuan: China’s “Selfish” Exchange Rate Policy and International Economic Law
- August Reinisch, Protection of or Protection Against Foreign Investment?: The Proposed Unbundling Rules of the EC Draft Energy Directives
- Andreas R. Ziegler, The Nascent International Law on Most-Favoured-Nation (MFN) Clauses in Bilateral Investment Treaties (BITs)
- Till Müller-Ibold, Foreign Investment in Germany: Restrictions Based on Public Security Concerns and Their Compatibility with EU Law
- Marc Bungenberg, Going Global? The EU Common Commercial Policy After Lisbon
- Markus Krajewski, Services Trade Liberalisation and Regulation: New Developments and Old Problems
- Jörg Philipp Terhechte, Applying European Competition Law to International Organizations: The Case of OPEC
- Roland Ismer, Mitigating Climate Change Through Price Instruments: An Overview of the Legal Issues in a World of Unequal Carbon Prices

Part II: Regional Integration
- Richard Senti, Regional Trade Agreements in the World Trade Order
- Marise Cremona, The European Union and Regional Trade Agreements
- Tomer Broude, Regional Economic Integration in the Middle East and North Africa: A Primer
- Jeffrey L. Dunoff, North American Regional Economic Integration: Recent Trends and Developments
- Gabriele Tondl & Timo Bass, Integration in Latin America
- Chien-Huei Wu, The ASEAN Economic Community Under the ASEAN Charter: Its External Economic Relations and Dispute Settlement Mechanisms

Part III: International Economic Institutions
- Edwini Kessie, The Doha Development Agenda at a Crossroads: What Are the Remaining Obstacles to the Conclusion of the Round?
- Wolfgang Bergthaler & Wouter Bossu, Recent Legal Developments in the International Monetary Fund
- Katharina Gnath, Developments at the G8: A Group’s Architecture in Flux

During the last twenty years we have experienced a sharp rise in the number of international courts and tribunals and a correlative expansion of their jurisdictions. This increase in power invites some difficult questions concerning the performance of international courts: Are they effective tools for international governance? Do they in fact fulfill the expectations that led to their creation? And why do some courts appear to be more effective than others?

A growing body of legal literature has turned its attention to such questions of effectiveness in recent years. This literature contains many important insights into the factors that could explain increased or decreased court effectiveness. Nevertheless, the 'Achilles heel' of most publications in the field is the crude and/or intuitive definitions of "effectiveness" that they employ, which often equate effectiveness with compliance. The lack of a clear definition of effectiveness is sometimes further compounded by general assumptions about the role of international courts in international life, which seem to transpose the role that courts play in national legal systems into the international realm.

At the same time, the social sciences literature has long afforded considerable attention to methodological issues relating to the assessment of organizational effectiveness in general, and public organizational effectiveness in particular. This literature appears to provide a number of conceptual frameworks and empirical indicators that could alternatively be applied to assessing the effectiveness of international courts and tribunals. The proposed article surveys some key notions used in the social sciences literature relating to the methodology for measuring the effectiveness of public organizations and discusses their possible application to international courts.

In Part One, I will discuss the notion of "organizational effectiveness" and explain the choice of a goal-based definition of effectiveness as the most suitable approach for evaluating the performance of international courts. I then survey a number of ways to classify organizational goals and illustrate some of the difficulties and ambiguities that measuring effectiveness on the basis of goal-attainment may nonetheless entail. In Part Two, I shall introduce some key methodological moves used in the social sciences literature to measure institutional effectiveness after the goals of the organization have been identified. Such moves include the fleshing out of different operational categories relating to the evaluated organization's structure, process and outcome.
In Part Three, I will discuss how the methods of analysis developed in the social sciences literature could be applied to study of international courts, given the unique attributes and context for their operation, and suggest some elements that should be integrated in future research projects seeking to develop a suitable research methodology. To be clear, my purpose in the article is not to offer any conclusions as to whether international courts in general, or any specific international court in particular, are "effective". My main interest is, instead, to introduce a research agenda that could advance a sophisticated and inter-disciplinary approach towards addressing the question of international court effectiveness. - October 8: Malgosia Fitzmaurice (Queen Mary, Univ. of London), Divided We Stand - The Case of the International Whaling Commission - October 15: Luca Radicati di Brozolo (Catholic Univ. of Milan), Judicial Decisions as Expropriation - The Implications of Saipem v Bangladesh - October 22: David Keane (Middlesex Univ.), Survival of the Fairest? Evolution and the Geneticization of Rights - October 29: Scott Sheeran (Univ. of Essex), Reforming the Law of UN Peacekeeping - November 5: Dan Saxon (Lauterpacht Centre), The Philosophy of International Humanitarian Law. The First Leverhulme Lecture - November 12: Tom McInerney (International Development Law Organization), Treaty Monitoring and State Fiscal Capacity - with Particular Focus on Developing Countries - November 19: Kate Miles (Univ. of Sydney), International Investment Law, Empire and the Environment - November 26: Peter FitzGerald (Stetson Univ.), Fins, Fur and Formalism - The Impact of International Economic Law upon Domestic Animal Law - December 3: Michael Wood (20 Essex Street) & James Crawford (Univ. of Cambridge), The ICJ's Kosovo Opinion - September 15: Eric Posner (Univ. of Chicago - Law), Human Rights, the Laws of War, and Reciprocity - September 22: Michael Doyle (Columbia Univ. - International and Public Affairs, Political Science, and Law), The UN Charter: A Global Constitution? - October 6: Mary Dudziak (University of Southern California - Law and History), Law, War, and the History of Time - October 13: Tim Buthe (Duke Univ. - Political Science), The Rise of Supranational Regulatory Authority: Competition Policy in the European Union - October 20: Kal Raustiala (Univ. of California, Los Angeles - Law), Information and International Institutions - October 22: Peter Katzenstein (Cornell Univ. - Government), The Transnational Spread of American Law: Legalization as Soft Power - November 10: Oona Hathaway (Yale Univ. - Law) & Scott Shapiro (Yale Univ. - Law), Outcasting: Enforcement in Domestic and International Law - November 17: Kathryn Sikkink (Univ. of Minnesota - Political Science), to be determined - December 1: Benedict Kingsbury (New York Univ. - Law), Obligations Overload for Fragile States - December 3: Beth Simmons (Harvard Univ. - Government), Subjective Frames and Rational Choice: Transnational Crime and the Case of Human Trafficking Thursday, September 2, 2010 - Anne Peters, Rechtsordnungen und Konstitutionalisierung: Zur Neubestimmung der Verhältnisse - André Nollkaemper, Rethinking the Supremacy of International Law - Erich Vranes, Völkerrechtsdogmatik als „self-contained discipline“? 
Eine kritische Analyse des ILC Report on Fragmentation of International Law - Michael Potacs, Das Verhältnis zwischen der EU und ihren Mitgliedstaaten im Lichte traditioneller Modelle - Bruno de Witte, European Union Law: How Autonomous is its Legal Order? - Jacques Ziller, Zur Europarechtsfreundlichkeit des deutschen Bundesverfassungsgerichtes. Eine ausländische Bewertung des Urteils des Bundesverfassungsgerichtes zur Ratifikation des Vertrages von Lissabon Golove & Hulsebosch: A Civilized Nation: The Early American Constitution, the Law of Nations, and the Pursuit of International Recognition This article argues, contrary to conventional accounts, that the animating purpose of the American Constitution was to facilitate the admission of the new nation into the European-centered community of “civilized states.” Achieving international recognition - which entailed legal and practical acceptance on an equal footing - was a major aspiration of the founding generation from 1776 through at least the Washington administration in the 1790s, and constitution-making was a key means of realizing that goal. Their experience under the Articles of Confederation led many Americans to conclude that adherence to treaties and the law of nations was a prerequisite to full recognition, but that popular sovereignty, at least as it had been exercised at the state level, threatened to derail the nation’s prospects. When designing the federal Constitution, the framers therefore innovated upon republicanism in a way that balanced their dual commitments to popular sovereignty and earning international respect. The result was a novel and systematic set of constitutional devices designed to ensure that the nation would comply with treaties and the law of nations. These devices, which generally sought to insulate officials responsible for ensuring compliance with the law of nations from popular politics, also signaled to foreign governments the seriousness of the nation’s commitment. At the same time, however, the framers recognized that the participation of the most popular branch in some contexts - most importantly, with respect to the question of war or peace - would be the most effective mechanism for both safeguarding the interests of the people and achieving the Enlightenment aims of the law of nations. After ratification, the founding generation continued to construct the Constitution with an eye toward earning and retaining international recognition, while avoiding the ever-present prospect of war. This anxious and cosmopolitan context is absent from modern understandings of American constitution-making. - Miguel García García-Revillo & Miguel J. Agudo Zamora, Underwater Cultural Heritage and Submerged Objects: Conceptual Problems, Regulatory Difficulties. The Case of Spain - Stephan Hobe & Jörn Griebel, New Protectionism – How Binding are International Legal Obligations During a Global Economic Crisis - Johanna Fournier, Reservations and the Effective Protection of Human Rights - Charles Majinge, The Future of Peacekeeping in Africa and the Normative Role of the African Union - Bernhard Kuschnik, Humaneness, Humankind and Crimes Against Humanity - Ioana Cismas, Secession in Theory and Practice: the Case of Kosovo and Beyond - Current Developments - Bill Bowring, The Russian Federation, Protocol No. 
14 (and 14bis), and the Battle for the Soul of the ECHR - Mindia Vashakmadze & Matthias Lippold, “Nothing but a Road Towards Secession” - The International Court of Justice’s Advisory Opinion on Accordance with International Law of the Unilateral Declaration of Independence in Respect of Kosovo? - GoJIL Focus: ICC Review Conference - Hans-Peter Kaul, Kampala June 2010 – A First Review of the ICC Review Conference - Sabine Klein, Uganda and the International Criminal Court Review Conference: Some Observations of the Conference’s Impact in the ‘Situation Country’ Uganda - Roger S. Clark, Amendments to the Rome Statute of the International Criminal Court Considered at the First Review Conference on the Court, Kampala, 31 May-11 June 2010 - Robert Heinsch, The Crime of Aggression After Kampala: Success or Burden for the Future? - Astrid Reisinger Coracini, The International Criminal Court’s Exercise of Jurisdiction Over the Crime of Aggression – at Last . . . in Reach . . . Over Some - Morten Bergsmo, Olympia Bekou & Annika Jones, Complementarity After Kampala: Capacity Building and the ICC’s Legal Tools Wednesday, September 1, 2010 - S.I. Strong, Research in International Commercial Arbitration: Special Skills, Special Sources - Steven H. Reisberg, The Rules Governing Who Decides Jurisdictional Issues: First Options v. Kaplan Revisited - Matthew T. Parish & Charles B. Rosenberg, An Introduction to the Energy Charter Treaty - Dmitry Davydenko & Eugenia Kurzynsky-Singer, Substantive Ordre Public in Russian Case Law on the Recognition, Enforcement and Setting Aside of International Arbitral Awards - Ignacio Gómez-Palacio, International Commercial Arbitration: Two Cultures in a State of Courtship and Potential Marriage of Convenience Tuesday, August 31, 2010 - Hans van Houtte, International Investment Treaties and Arbitration as Imbalanced Instruments: A Re-visit - Giuditta Cordero Moss, Revision of the UNCITRAL Arbitration Rules: Further Steps - Benedetta Coppo, Comparing Institutional Arbitration Rules: Differences and Similarities in a Developing International Practice - Lukas F. Wyss, Trends in Documentary Evidence and Consequences for Pre-arbitration Document Management - Armin von Bogdandy & Ingo Venzke, Zur Herrschaft internationaler Gerichte: Eine Untersuchung internationaler öffentlicher Gewalt und ihrer demokratischen Rechtfertigung - Ingolf Pernice, La Rete Europea di Costituzionalità – Der Europäische Verfassungsverbund und die Netzwerktheorie - Mathias Hong, Hassrede und extremistische Meinungsäußerungen in der Rechtsprechung des EGMR und nach dem Wunsiedel-Beschluss des BVerfG - Cornelia Janik, Die EMRK und internationale Organisationen – Ausdehnung und Restriktion der equivalent protection-Formel in der neuen Rechtsprechung des EGMR Monday, August 30, 2010 This Article offers a new justification for modern litigation under the Alien Tort Statute (“ATS”), a provision from the 1789 Judiciary Act that permits victims of human rights violations anywhere in the world to sue tortfeasors in U.S. courts. The ATS, moribund for nearly 200 years, has recently emerged as an important but controversial tool for the enforcement of human rights norms. “Realist” critics contend that ATS litigation exasperates U.S. allies and rivals, weakens efforts to combat terrorism, and threatens U.S. sovereignty by importing into our jurisprudence undemocratic international law norms.
Defenders of the statute, largely because they do not share the critics’ realist assumptions about international relations, have so far declined to engage with the cost-benefit critique of ATS litigation and instead justify the ATS as a key component in a global human rights regime. This Article addresses the realists’ critique on its own terms, offering the first defense of ATS litigation that is itself rooted in realism – the view that nations are unitary, rational actors pursuing their security in an anarchic world and obeying international law only when it suits their interests. In particular, this Article identifies three flaws in the current realist ATS critique: First, critics rely on speculation about catastrophic future costs without giving sufficient weight to the actual history of ATS litigation and to the prudential and substantive limits courts have already imposed on it. Second, critics’ fears about the sovereignty costs that will arise when federal courts incorporate international-law norms into domestic law are overblown because U.S. law already reflects the limited set of universal norms, such as torture and genocide, that are actionable under the ATS. Finally, this realist critique fails to overcome the incoherence created by contending that the exercise of jurisdiction by the courts may harm U.S. interests while also assuming that nations are unitary, rational actors. Moving beyond the critique, this Article offers a new, positive realist argument for ATS litigation. This Article suggests that, in practice, the U.S. government as a whole pursues its security and economic interests in ATS litigation by signaling cooperativeness through respect for human rights while also ensuring that the law is developed on U.S. terms. This realist understanding, offered here for the first time, both explains the persistence of ATS litigation and bridges the gap that has frustrated efforts to weigh the ATS’s true costs and benefits. - XVIIIème Congrès international de droit comparé, Washington, DC 2010: Rapports nationaux helléniques - Athanassios C. Papachristos & Andreas Helmis, La culture juridique et l’acculturation en droit - Christina Deliyanni-Dimitrakou, Christina M. Akrivopoulou & Yannis Naziris, The role of practice in Greek legal education - Eugenia Dacoronia, Catastrophic harms - Alexander G. Fessas, Regulation of same-sex marriage - Zoe Papassiopi-Passia, Consumer protection in Greek private international law - Evangelos Vassilakakis, Recent private international law codifications - Kalliope Makridou, Cost and fee allocation in civil procedure - Dimitrios Tsikrikas, Les actions collectives en droit grec - Konstantinos N. Kyriakakis, Corporate governance - Christos S. Chrissanthis, Legal aspects of speculative funds (hedge funds, private equity funds) - Ioannis Voulgaris, La location-financière (leasing) en Grèce - Alexandra E. Douga, Insurance law between business law and consumer law - Dionysia Kallinikou, The balance of copyright - Costas Papadimitriou, The prohibition of age discrimination in labor relations - Nikolaos Davrados, The protection of foreign investment in Greece - Angelos Yokaris, International law in domestic systems - Theodora Antoniou, Foreign voters - Julia Iliopoulos-Strangas & Stylianos-Ioannis G. Koutnatzis, Constitutional courts as ‘positive legislators’ - Athanasios D. Tsevas, Plurality of political opinions and the concentration of the media - Michail Vrontakis, Les droits de l’homme, sont-ils universels et normatifs?
- Claire Spirou & Vassiliki Koumpli, Public private partnerships under Greek law - Theodore Fortsakis, Andreas Tsourouflis & George Pitsilis, Regulation of corporate tax avoidance - Christos Mylonopoulos, Corporate criminal liability and Greek law - Georgios Triantafyllou, Truth or due process: exclusionary rules in Greek criminal procedure law - Dimitrios Kioupis, Cybercrime legislation in Greece What does “representation” mean when applied to international organizations? This paper examines representation as a fundamental, if often neglected, aspect of democratic governance which, if perceived by enough members to be deficient or unfair, can interfere with the other components of good governance, as well as with the performance of an organization’s core tasks. Using the case of the IMF, we examine how the concept can be applied to an international organization. We posit that IMF decision making comprises a two-stage process. In the first stage members are assigned a quota, which drives their respective shares of votes. Descriptive representation best fits this stage. The second stage consists of decision-making in the Fund’s Executive Board, including the formation of constituencies in the Board and the consensual mode of decision making that is employed therein. Here, some form of representation construed in principal-agent terms provides the most traction. We find that subjecting the IMF to this kind of conceptual scrutiny highlights important deficiencies in its representational practices. Call for Papers 2011 International Law Association Asia-Pacific Regional Conference Contemporary International Law Issues in the Asia Pacific: Opportunities and Challenges May 29-June 1, 2011 Taipei, Taiwan, Republic of China Chinese (Taiwan) Society of International Law - Chinese (Taiwan) Branch of the International Law Association Center for International Legal Studies, College of International Relations, National Chengchi University I. Conference Theme The Chinese (Taiwan) Society of International Law is pleased to hold the International Law Association (ILA) Asia-Pacific Regional Conference from Sunday, May 29 to Wednesday, June 1, 2011 at the Grand Formosa Regent Taipei, a Four Seasons Hotel, in Taipei, Taiwan, ROC. The theme of the conference will be Contemporary International Law Issues in the Asia Pacific: Opportunities and Challenges. This conference aims to provide a forum for international law stakeholders to explore the full range of international and transnational legal issues related to the Asia-Pacific region. The tentative schedule of the conference is the following: Sunday, May 29: Registration and Welcome Reception Monday, May 30: Opening Ceremony and Conference Sessions Tuesday, May 31: Conference Sessions and Closing Ceremony Wednesday, June 1: Optional Half-Day City Tour II. Topics for Papers and Panels Proposals from both scholars and professionals are encouraged on any topic relating to international law with a focus on the Asia Pacific.
Subject areas may include, but are not limited to, the following: - General Public International Law: - The Use of Force - Asia-Pacific Security - Territorial Disputes - Teaching and Research of International Law - The Law of the Sea - International Frameworks on Fisheries Conservation - International Criminal Law - International Protection of Human Rights - International Economic Law - The WTO, APEC and ASEAN - FTAs and the Cross-Strait ECFA - United Nations and Regional Organizations - Private International Law - Enforcement of Arbitral Awards and Court Judgments Paper and panel proposals must be submitted electronically by December 20, 2010 to firstname.lastname@example.org. A proposal of no more than 300 words should include the author’s name and full contact information. The conference committee welcomes proposal submissions and conference attendance from ILA members. The conference committee will select proposals and announce the outcome by the end of January 2011. Presenters are required to submit full, referenced papers by April 30, 2011. The Chinese (Taiwan) Yearbook of International Law and Affairs will publish the conference proceedings. III. Conference Details The conference registration fee will be waived for paper presenters and a discounted rate will be offered to ILA members. An additional fee will be charged for the Taipei city tour on June 1. The conference committee will provide additional information on registration fees and a variety of hotels and airlines, including China Airlines and Eva Air, here. Other inquiries about the conference can be directed to Professor Pasha Hsieh, Conference Co-organizer, at email@example.com. Sunday, August 29, 2010 - Lara Appicciafuoco, The Promotion of the Rule of Law in the Western Balkans: The European Union’s Role - Martina Spernbauer, EULEX Kosovo: The Difficult Deployment and Challenging Implementation of the Most Comprehensive Civilian EU Operation to Date - Dren Doli & Fisnik Korenica, Kosovar Constitutional Court’s Jurisdiction: Searching for Strengths and Weaknesses - Michael Bothe, Kosovo – So What? The Holding of the International Court of Justice is not the Last Word on Kosovo’s Independence - Robert Howse & Ruti Teitel, Delphic Dictum: How Has the ICJ Contributed to the Global Rule of Law by its Ruling on Kosovo? - Björn Arp, The ICJ Advisory Opinion on the Accordance with International Law of the Unilateral Declaration of Independence in Respect of Kosovo and the International Protection of Minorities - Robert Muharremi, A Note on the ICJ Advisory Opinion on Kosovo - Thomas Burri, The Kosovo Opinion and Secession: The Sounds of Silence and Missing Links - James E. Moliterno, What the ICJ’s Decision Means for Kosovars - Elena Cirkovic, An Analysis of the ICJ Advisory Opinion on Kosovo’s Unilateral Declaration of Independence - Hanna Jamar & Mary Katherine Vigness, Applying Kosovo: Looking to Russia, China, Spain and Beyond After the International Court of Justice Opinion on Unilateral Declarations of Independence
I have a splendid view of snowy treetops outside my living room windows. It is lovely to sit here in my nice warm apartment and watch the sweep of easy wind and downy flake falling lazily from the sky. But it is becoming harder to enjoy the view. Every time it snows I can just hear climate change skeptics licking their chops, smug in their certainty that we are dumb enough to believe the continued existence of winter refutes the overwhelming evidence that the planet is warming. Must we continue, every time it snows, to ask if there is some mistake in the consensus among 97% of the world's climate scientists? No one who has read any of the literature believes that a warming planet means the end of winter. As Thomas Friedman writes in today's NY Times, "The fact that it has snowed like crazy in Washington — while it has rained at the Winter Olympics in Canada, while Australia is having a record 13-year drought — is right in line with what every major study on climate change predicts: The weather will get weird; some areas will get more precipitation than ever; others will become drier than ever." Friedman's column, "Global Weirding Is Here," is worth reading.
The other week, while I was at the WebWorks Roundup conference in Texas as one of the featured industry speakers, I was sitting next to Anne Gentle during one of the panel sessions, and I asked her about branding. It seems that once you become branded through your blog, it's hard to reinvent yourself. I was speaking at WebWorks on blogging and web 2.0. More than anything else, my blog has branded me as a blogger. This brand has led to numerous speaking invitations at conferences and chapter meetings. The more I speak about blogging, the more I become branded as a blogging expert — it's a cycle of branding that perpetuates itself. At the conference, I learned that although some people have branded themselves online in certain ways, they can be much different in person. For example, online you know Richard Hamilton, founder of XML Press, as an entrepreneurial publisher focusing on the technical communication market. You may also see Richard as an experienced manager through his recent book Managing Writers. And you may gather that Richard is a careful, analytical thinker from his lengthy conference write-up posts. That's how Richard has branded himself — as a publisher and manager. But Richard has another side to him as well. He's a pilot and previously owned his own airplane. He loves reading literature, especially mysteries. For example, he has read Sue Grafton's mystery series (A is for Alibi, B is for …) up to G. His whole face lights up when he starts talking about mystery novels with another mystery aficionado. He boots his computer into Ubuntu and prefers to write everything in DocBook XML. He also seems to enjoy long car drives (for example, he drove from Colorado to Texas and back for the conference). More than anything, Richard is one of the most warm, friendly, and conversational people you will ever meet. Alan Porter is an even more interesting figure when it comes to branding. Online you know Alan as the head of WebWorks (or VP of Operations). His blog reads as that of an expert in the tech comm industry, especially on wikis. His forthcoming book, Wikis: Grow Your Own for Fun and Profit, will only solidify his wiki branding. He also blogs about trends in user behavior, from observing, for example, the way his teenage daughter approaches her homework. But in person, you'll find that, like Richard, Alan has another side to him entirely. A cowboy-boot-wearing Englishman, Alan is an avid comic artist. Mention conferences like Comic-Con and Dragon Con and his ears perk up. He regularly writes the stories, dialog, and scripts for the comic book CARS. In addition to his drawing talents, Alan has also written books on James Bond, Batman, Star Trek, and the Beatles. He has strong feelings about the importance of storytelling. In fact, Alan works only 30 hours a week so he can focus on his writing. Alan has written a mystery novel set in the world of NASCAR racing and another novel about Shakespeare pretending to be Christopher Marlowe, which an agent of his was shopping around Hollywood for a possible movie. Alan is also a consultant for Tedopres, a company focused on simplified technical English. He can fly out to your location and train your employees on simplified technical English techniques. Alan understands the importance of recording presentations. He records all major WebWorks conference sessions, making them available at first on a limited basis and then eventually opening them up to everyone.
He’s allergic to gluten, is married to a court reporter, and when you mention his competitors' products, such as Flare, he breathes a deep sigh. I've gotta say, Alan is one of the most interesting people to meet, because unless you know this other side of Alan, all of this comes as a complete surprise. It's a surprise mostly because Alan has chosen not to brand himself this way online. In fact, he has a policy that he will not write about either his company's products or his competitors' products on his blog. Blogs provide you with an opportunity to brand yourself with an identity you want to be known by. But you have to be careful what you blog about, because that brand then stays with you. You become known for that brand, and it can be hard to change. Reinventing yourself with a new identity isn't impossible. It just requires you to shift your focus and start writing about a new topic. I mentioned at the beginning that I'm not so eager to be branded as a blogger (and podcaster and WordPress person). Ideally, I would like to be known as a screencaster and wiki expert as well. To make that happen, I'll have to shift the focus of my blog — for about the next 200 posts. I could make the shift, but I think I prefer to let things happen in a more natural way. It's more interesting to let water flow in the direction it wants to, and then every once in a while look up to see where you are.
The Government wants Canadian-style cuts. But what does that mean? The Conservatives' plans for “Canadian style” spending cuts are splashed across the front page of the Telegraph today. But what exactly is it that the Government wants to emulate? It certainly won't be the number of goes that Canada had to have before they got their debts under control. In fact, for Canada, it was a case of “fourth time lucky”. Between 1984 and 1993, there were three unsuccessful attempts at fiscal reform in Canada. Debt levels had been spiralling upwards for decades, and by the early 1990s the public were absolutely sick to death of government overspending. So what can Britain learn from Canada? There are two main things I think the Government means by talking about a Canadian approach. Firstly, it means a more surgical approach to reducing spending, not blunt, across-the-board cuts. For example, when the Swedish government had to cut spending at the start of the 1990s they just lopped 11pc off the budget of each department. In contrast, Canada's cuts were much more differentiated. Health spending fell by just 3pc, while transport spending was slashed by 50pc. In the expert literature these two different approaches are captured by a culinary metaphor. As Jens Henriksson explains in his paper about the cuts in Sweden: “When you need to cut down on government consumption there are two different approaches. One way is to take a little bit from everything. The Swedish metaphor for this is to use the cheese slicer. To understand this outside Scandinavia you have to see our cheeses. We mostly eat hard cheese and we slice from the top. Thus by using the cheese slicer you take equally from everyone. The other way to decrease spending is to use the cake-slicer, i.e. to surgically remove selected items.” In this case we might say that Sweden used cheese slicing, while Canada used cake slicing. And there are arguments for both approaches. You might think there are roughly equal amounts of waste in every department – and that cheese slicing has the advantage of putting pressure on departments to think creatively and find saving opportunities which may not be known about by the centre. On the other hand, you might think that some types of government spending are likely to be more growth-enhancing and socially useful than others. And you might want to change the composition of government spending as well as its total size – to squeeze the worst types of spending hardest. My instinct is that it might be attractive to try to combine the advantages of the differentiated and decentralised approaches – perhaps by devolving to the majority of departments a minimum cut, targeting some departments with larger ones, and pushing certain initiatives through from the centre. The second distinctive aspect of the Canadian approach was devolving responsibility for finding savings to officials. In 1994, the incoming Liberal Party's Minister of Finance set up a government spending Programme Review (PR). It examined all areas of Government spending and applied a set of objective criteria. No area was sacrosanct. The PR sought to maximise the participation of civil servants, and the process filtered ideas from the departments up through a number of levels: first, a committee of Deputy Ministers chaired by Jocelyne Bourgon, which reviewed submissions and coordinated the process (and installed people who were sympathetic to deficit reduction).
Next, a group of Ministers chaired by Marcel Massé reviewed the recommendations and crafted final proposals, which were then endorsed by the Prime Minister and Cabinet. Bureaucrats were incentivised through a carrot-and-stick approach. Deputy Ministers were appraised annually and they received 30pc of their pay on a performance-related basis. If they failed to come up with sensible proposals, the PM threatened that a separate body would impose a top-down 10pc cut across the board. In Britain, in contrast, previous spending cuts programmes have typically been centralised. The centre has identified programmes it wanted to cut, and has not trusted officials to come up with ideas to save money. There has been a great fear that Sir Humphrey types will never come up with plans to cut their own empires down to size. In fact, during the IMF crisis of 1976 we had to come up with a centralised list of agreed cuts in order to be granted the bail-out money. So we could think of the Canadian approach as being about two “D”s – savings will be both Differentiated and Devolved. There are arguments for and against both these approaches, and if you are interested in these (and other international examples), we have written a summary of the evidence. While we can borrow ideas from abroad, our situation is a bit different. For starters, our spending cuts will have to be bigger than those in Canada. Between 1992 and 1999 Canada reduced spending by about 6pc of GDP. But our budget deficit is about 11pc of GDP. However, if you wanted to look on the bright side, you might argue that Canada's federal government spent less of Canada's income in the first place, so it was taking a bite out of a smaller pie. The UK government spends nearly half of our national income. One lesson that a number of reformers around the world have pointed out to me is the need to be extremely consistent. If you are telling the public that there is “no alternative” to major cuts, you can't “exempt” any areas from finding savings, or put money into pet projects. Marcel Massé, the Canadian official who ran the budget cuts programme, commented that "there was blood on the floor everywhere, but at least everyone could see that others were hurting too". Jens Henriksson puts it less dramatically: “When one strong interest group complains, you are in trouble. But if everybody complains, you are not… The idea is to signal that you are not partisan and that the budget deficit is a general problem that everyone should participate in solving.” That seems to be the message David Cameron wanted to convey in his speech today. He said: “We are doing this because we have to, driven by the urgent truth that unless we do, people will suffer and our national interest will suffer. But this government will not cut this deficit in a way that hurts those we most need to help, that divides the country, or that undermines the spirit and ethos of our public services.” That promise could have come straight from the mouth of Marcel Massé. The challenge for Mr Cameron will be to make sure that his actions match up to this promise.
Thanks to everyone for their well-wishes by email and comments. I appreciate them very much. My wife and I had a wonderfully relaxing, peaceful and calm time in the mountains of Virginia as we camped together. It was a blessing to see God’s good creation, sit by the fire at night and spend lots of time simply talking. Fasting from electronics has also been a blessing, though I now–somewhat reluctantly–return to the virtual world of blogging. During this season of rest I have been reading books in four major areas. I want to recommend a few from my reading list over the past months that have been particularly helpful to me. Gary Thomas, Sacred Marriage (2000). “What if God designed marriage to make us holy more than to make us happy?” What a good question! This book suggests that marriage is a spiritual discipline designed to transform us into the image of God by relating to another person in an intimate way. This book is filled with helpful insights about the nature of marriage as a holy adventure whereby we become selves-in-relation rather than selves-in-isolation. It brings together many good theological themes (relationality, community, etc.) with effective psychological insights. Tim Gardner, Sacred Sex (2002). Sex is a spiritual celebration of oneness. That may seem like a truism for many, but Gardner’s exploration of that theme is quite significant. This is not a manual about technique. Rather, it is about the spirituality of the sexual relationship itself. Sex, in this context, is a spiritual discipline by which we explore, practice and experience communion. It is an act of worship in a committed relationship. Men–despite the common mantra–do not need sex (sex is optional; we can live without it!), but couples need a oneness that sexual relations express. Sexuality is more about oneness than orgasm. I found the spiritual emphasis refreshing. David Schnarch, Passionate Marriage: Keeping Love and Intimacy Alive in Committed Relationships (1998). This is a more explicit book about the sexual relationship. It uses the sexual relationship to look at the whole nature of love and intimacy in marriage. The premise of the book is that differentiation is a key to intimacy. Rather than co-dependency or emotional fusion, couples need a sense of self in order to be in relation with their partner. Healthy partners make for a healthy relationship. When the relationship is unhealthy, both partners–not just one–are sick. Both need a sense of self. They need a sense of being “separate” in order to be “together” in a healthy way. For example, he describes a technique called “hugging to relax.” Can you hug for more than five seconds without being uncomfortable? Hugging for a sustained time, in which centered selves enjoy the togetherness of the present moment rather than escaping into the future or resenting the past, is a window into the nature of the intimacy a couple shares. I’m still in the process of reading this one, but with just a few chapters completed I can appreciate how it is already helping me. Joshua Choonmin Kang, Deep-Rooted in Christ: The Way of Transformation (2007). This book came highly recommended by Terry Smith of Woodmont Hills church in Nashville. Jennifer and I use this in our nightly devotional time. It has 52 chapters, but we are using it on a daily basis. It encourages the use of spiritual disciplines to root ourselves in Christ.
While not discounting spiritual experiences at all, he suggests that spiritual discipline (measured, consistent, deep, regular and focused) is the way of transformation. I believe I have had many spiritual experiences, but without spiritual discipline (which has sometimes–ok, often–been lacking in my life) I find my way of transformation can be shallow rather than rooted. We are enjoying discussing this book. Gary Thomas, Devotions for a Sacred Marriage (2005). Also 52 chapters; my wife and I use this in our devotions once a week. Against the background of his book, the specific devotional challenges and meditations are quite helpful as they generate discussions about our marriage between Jennifer and myself. Trauma and Recovery Tian Dayton, Trauma and Addiction: Ending the Cycle of Pain Through Emotional Literacy (2000). I so enjoyed Dayton’s Heartwounds: The Impact of Unresolved Grief on Relationships (see my post on the book) that I immediately went to this book to read in more depth about the connection between trauma and addiction. Whatever one’s addiction (alcohol, drugs, sex, shopping, gambling, frenetic activity, eating, workaholism, etc.), it is linked to trauma in one’s life (whether childhood or adult). These addictions present themselves as solutions, but they are actually symptoms of a deeper problem. Trauma–without effective coping strategies–creates emotional illiteracy. Rather than medicating the pain of the trauma through addictive substances or behaviors, emotional literacy enables people to move through their trauma. Dayton suggests that we hold on to these traumas not only psychologically but also somatically, so that when we experience renewed trauma our bodies as well as our psyches react to the new trauma with all the power of the unresolved trauma in our past. This creates a need to medicate with whatever addiction has been our coping strategy. Part of the resolution to this need is to re-experience the trauma somatically as well as psychologically through psychodrama. This was an enlightening book to me. For a long time I have been aware of 12-step programs, recommended them and even read some (but very little) of their literature. But in the last three months I have read lots of their literature and have proceeded to work the 12 steps for myself. It is quite liberating. It is a simple, focused and supported program of recovery from any addiction (from alcoholism to workaholism). No one can appreciate the depth of spiritual development that can take place through the 12 steps if they are not familiar with them or have not worked them. I believe it is a deeply spiritual process that is rooted in the principles of spiritual transformation. I recommend reading the literature on the 12 Steps (e.g., Twelve Steps and Twelve Traditions). Celebrate Recovery is a Christianized version of the 12 steps which I am also finding quite helpful. [And everyone needs recovery of some kind--we are all sinners, and we all seek transformation and recovery from sin, including pride, selfishness, etc.] Specifically on this topic, I found Steps of Transformation: An Orthodox Priest Explores the Twelve Steps by Father Meletios Webber (trained in psychology and an Orthodox priest) wonderfully refreshing. Here is a book that combines the insights of 12-step programs with biblical texts shaped by the spirituality of Orthodox theology. This is a rich combination filled with theological reflection on spiritual disciplines, spirituality and recovery.
Henry Wiencek, An Imperfect God: George Washington, His Slaves, and the Creation of America (2004). I love to read historical materials, especially biographies. This particular work is not a biography per se but rather examines Washington’s relationship to slavery. It argues that Washington was originally as morally and psychologically embedded in the slave culture of Virginia as any other gentleman planter in the eighteenth century. Washington even sponsored a Williamsburg raffle of slaves (including breaking up families) in order to secure payment for a debt owed to him in 1769. However, through relationships with mulattoes from his own family tree (e.g., his stepson fathered a child, his wife had a stepbrother who lived at Mount Vernon, etc.), his experience with African Americans during the Revolutionary War (one fourth of his army at Yorktown in 1781 was black), and ultimately his repugnance toward breaking up families through sales, Washington began to see the immorality of slavery. His Last Will and Testament freed the slaves in his possession rather than leaving them to his heirs to sell. If one is unacquainted with the development of slavery in eighteenth-century Virginia, this is an illuminating read. So, besides blogging, I’ve been spending my time immersed in these sorts of materials. My journey continues….
“Amplifying African American Voices” To continue our summer entertainment series, this week we are featuring some of the city’s most dynamic spoken word artists, including Kelli Stevens Kane, Brian Francis and more! TeeJay will provide the music, and of course Abay Restaurant will have food for purchase. FREE ADMISSION. Cash Bar. 2010-2011 PERFORMANCE SEASON CELEBRATING THE BLACK MALE The August Wilson Center’s second full season in its new home will present international, national and regional artists, living legends and local trailblazers starring in multidisciplinary events in music, dance, theater, film and literature that celebrate Black men and boys. Respond to The Black Man Is… Text in or type your answer here! Hello All! For those of you who came out to the kick off of offCenter last week, thank you for coming! It was a great turnout and a wonderful evening. I’ve got some good news for you all…there is only more great music to come. Dream Job performs at offCenter this week! Just watch the videos below, because their music speaks for itself. You won’t be able to resist the urge to come down to the August Wilson Center and find yourself a seat. Don’t forget that Abay Ethiopian Restaurant is going to be catering, and there will be a cash bar available too. In case you forgot, it starts at 5:30 pm and will continue until 10:30 pm. Come early and get your seats while they last (we most definitely had a full house last Thursday). Also keep in mind our question for the upcoming season: “The black man is…?” There will be plenty of opportunities to respond at offCenter. At a loss for what to do tomorrow? Ladies and gentlemen, have no fear: the August Wilson Center has the solution. This Thursday, July 8th, is the kick off of two new weekly programs held here at the August Wilson Center. I know…you’re so excited you can barely contain yourself! But just keep reading, there is more to come! For the leisurely lunch crowd, check out Lunch On Liberty from 11 am - 2 pm this and EVERY THURSDAY through September 2nd. Bring your friends and soak up the sun on our patio, located at 980 Liberty Avenue. Bring your own lunch or enjoy food from Cory’s Deli, who will be grilling out on our patio for the month of July!
There will be live entertainment and FREE WIFI to help enhance your lunching experience. This week Joy Ike will be playing the lunch set…so prepare yourself for some great music and good food! Now for you late night hipsters, we’ve got you covered too. offCenter also starts this Thursday with a second chance to hear Joy Ike perform from 5:30 - 8 pm. Followed by The Peace Project from 9 - 10:30 pm! There will be food provided by Abay Ethiopian Cuisine (who made a special menu for offCenter) for the month of July and a cash bar available as well. Food, drink, WIFI, great music, and a cool atmosphere. What else could a late night crowd want? There will also be opportunities to sign up for our mailing list (so you can stay in the loop) and to respond to the question for this coming season’s theme! You don’t want to miss it. The kick off starts TOMORROW with Lunch On Liberty and continues into the evening with offCenter! Don’t forget to come on down to the center and be a part of it. If by some chance you miss it (which you shouldn’t!), then remember that these programs run every Thursday through September 2nd, same place, same time! So you’re ready to go on your lunch break, brown bag in hand, the sun is shining, you have a spring in your step. Now the only thing left to decide is where to eat. Well, let us make that choice easier for you. Come down to the August Wilson Center for African American Culture and sit out on our patio for Lunch On Liberty. Come and enjoy lunch with us! When: 11 am to 2 pm EVERY THURSDAY, July 8-September 2 What: Bring your friends and soak up the sun by enjoying your lunch on the August Wilson Center’s patio, located at 980 Liberty Avenue, Downtown. Bring your own lunch or enjoy the tasty fare of Cory’s Deli, who will be grilling right on our patio each Thursday in July during lunchtime. While you are eating, relax and enjoy live entertainment each Thursday by various local artists! FREE WIFI Joy Ike - July 8th Jump-starting our Lunch On Liberty is Joy Ike–who will also be performing at offCenter later that evening. Joy’s fans have compared her vocally to Corinne Bailey Rae and Norah Jones, stylistically to Fiona Apple and Regina Spektor, and have said that her ambiance is much like that of India.Arie and Lizz Wright. “. . . a voice and talent beyond her years. The depth of subjects she tackles in her poetic lyrics are perfectly complemented by a unique blend of neo-soul, with just the right dash of pop.” - NPR
Wray Herbert is director of science communications at the APS.
Glutton. It’s not a word you hear much at all these days. In fact, when an old-timer in recovery uttered the archaic word, my mind rushed to the English literary giants of long ago—the diarist Pepys and Dr. Johnson—consuming enormous quantities of mutton and fowl, and paying with gouty, swollen toes. Those literary gluttons seem to be a thing of the past, and the word has fallen into disuse, too. You certainly don’t hear it in recovery circles, and indeed most sober alcoholics would likely reject this old-timer’s view of the disorder. You’re much more likely to hear alcoholism described as a medical disease, or a spiritual crisis. But I like the idea of alcoholic gluttony. It rang true to me back then, and it still does. It cuts through a lot of hair-splitting debate and gets right to the heart of the matter: lack of self-control. Call it what you like, but at the end of the day there’s no getting away from the behavior—the excessiveness, the lack of restraint, the—yes—gluttony. Yet labeling alcoholism as gluttony does not make it simple to understand. Indeed, alcoholic gluttony is maddeningly complex, and in a way this vice—this deadly sin—captures human nature in all its irrational nuance. Looking back now, I believe that my career as a science journalist has paralleled my drinking career; my unfolding relationship with alcoholic gluttony shaped the questions I asked, and how I asked them. My scientific interest in boozing preceded my own excesses, because my father died a full-blown alcoholic. But my memories from childhood were not of a reckless man, but rather a vibrant, engaged man—a hiker, a sailor, an educator. Then somewhere along the way things changed, for no obvious reason. There was no tragic trigger, just the usual disappointments, and he drank more and more. I recall sitting at his kitchen table late in his life, as he drank Passport Scotch disguised with OJ—his drink of choice—and thinking: He’s chosen this path freely, with full understanding of the tradeoffs. But I watched him clinically and warily, because I knew I carried some of his genes, and his transformation reflected back on me. As I watched my father’s alcoholism progress—and then my own—I began asking other questions: Do we have a brain disease? Are there particular neurotransmitters running amok? I read widely in the literature about genetics and addiction and stress, about suspect neurotransmitters, and brain anatomy related to pleasure and risk and will, and even wrote a newsweekly cover story on the interplay of genetics and misfortune. None of this got me very far. Alcoholism appears to run in families, and many experts believe there are genes—probably a handful of them—underlying the disorder. There are candidate brain chemicals and structures. So I probably inherited a propensity of some kind. But so what? As one geneticist explained to me years ago, there is no elbow-bending gene. That is, no genetic or neuroscience findings will ever alter the fact that alcoholics—at every stage of their drinking history—are making decisions. Every time we pick up a bottle or pour a finger of whiskey, it’s a choice—it’s the option we’re freely selecting, at least for that moment. So I moved on from what I now saw as a reductionist neuro-genetic view of alcoholism to an interest in cognitive psychology. Specifically, I wanted to know how we make decisions and judgments and choices, and why so many of our choices are not in our own best interest.
Ironically, my preoccupation with irrational decision making coincided with a sharp spike in my own drinking. I was increasingly isolated in my alcoholism—skipping my favorite watering holes for a bottle at home; I drank at lunch every day, and often in the morning. The “holidays” I took from booze were more and more difficult. My drinking life wasn’t feeling like a choice—but I had no other way to explain it. I couldn’t blame it on anyone else. Even self-destructive decisions are decisions, and I began devouring the scientific literature on emotions and distorted thinking, looking for an explanation for my own poor life choices. The result was my book On Second Thought. On Second Thought is about the surprisingly automated lives we live—often at the price of our happiness—and it’s also a guide of sorts to more deliberate thinking. It’s not about alcoholic gluttony, but the title could well describe my questioning of my own harmful life choices—and the change I made. My next project—in the works—is on alcoholic gluttony. In the course of researching and writing On Second Thought—I was sober by then—I kept stumbling on psychological science that illuminates the process of recovering from alcoholism. Much of it is counterintuitive—the need for powerlessness, the dangers of self-reliance, the power of moral inventory and honesty. Many recovering alcoholics see the steps of recovery as a spiritual path, with no need for scientific explanation. I don’t argue with that, but I also think there’s a breed of sober alcoholics who are curious about the workings of the mind as it chooses—first a destructive path, then a life-changing one. They are the audience for the next book. Let’s call them recovering gluttons. This post is part of the Research Digest’s Sin Week. Each day for seven days we’ll be posting a confession, a new sin and a way to be good. The festivities coincide with the publication of a feature-length article on the psychology behind the Seven Deadly Sins in this month’s Psychologist magazine.
A terrific lumps-and-all coming-of-age story, An Education is more than just a brilliant, beguiling performance from Carey Mulligan. It’s also a hopeful, honest look at an individual and a society on the edge of great, perhaps painful change—and one of the best films of 2009. Based on Lynn Barber’s 2009 autobiography, An Education follows young Jenny (Carey Mulligan) in her final year at an all-girls’ prep school in the suburbs of London. It’s 1961—the Beatles are still hidden away in Hamburg; the style icons are John and Jackie, not Jimi and Janis; and London has not yet begun to swing. The world is on the cusp of great cultural change, and so is Jenny. Preparing for Oxford and eager for a bohemian life of French singers and cigarettes, she’s smarter, more knowing than her classmates and her well-meaning, middle-class parents (who encourage her studies primarily so she’ll find a good man at university). Winning in her self-aware naiveté, the girl yearns to escape to adulthood. One day in the rain, that escape pulls up in a sharp maroon sports car. David (Peter Sarsgaard) is suave and sophisticated, older and exciting, but also charming and kind. He gives Jenny and her cello a lift, and soon she’s thoroughly seduced not just by David’s grown-up world of smoky jazz clubs and champagne trips to Paris, but also by his love of culture and learning for their own sake. Carey Mulligan was rightfully nominated for an Oscar for An Education—this is a surefooted, star-making performance graced with a seemingly effortless touch. (The Audrey Hepburn comparisons have been made before, and rightfully so. It’s no coincidence Mulligan is now the front-runner to play Eliza in the My Fair Lady remake.) Beautifully conveying intelligence and confidence as well as doubt and confusion, Mulligan continually turns over both sides of Jenny. Trying on worldliness, she slides easily from a winning smile to a pained scowl to an exasperated roll of eyes that are alternately sparkling and weary. Jenny is a sad rarity in film these days: a smart character who acts it—pondering, questioning, already aware of life’s compromises and disappointments, but also vulnerable to the mistakes even the smart make when they think they have it all figured out. But this is not a case of a film acting as an appendage to a great performance. Danish director Lone Scherfig and screenwriter Nick Hornby (author of High Fidelity and About a Boy) have crafted a warm and engaging film that smoothly sweeps you along almost without you noticing. It’s fun without resorting to inane hijinks, stylish but honest in its depiction of both the joys and heartbreaks of growing up, and moving without becoming mawkish. Like Mulligan’s portrayal of Jenny, An Education must walk a fine line between breezy promise and tough lessons without coming off jaded or cynical. It does so with a delightful spirit–there should have been two women in the Best Director category this year. That quality reverberates throughout the cast. The film wouldn’t work if we didn’t fall for Sarsgaard’s David right along with Jenny–he uses his sly squint and vaguely reptilian smile to build an earnest, seductive gentility. The fact that David is Jewish in an England that’s still quietly anti-Semitic gives him a vulnerability Jenny can’t resist, especially when mixed with just the right hint of the mysterious outsider.
In the shadow of Mulligan’s triumph, Sarsgaard doesn’t get enough credit for the nuanced balance he brings to David–a man who’s charmed even himself into believing he’s a good guy. Alfred Molina is wonderful as Jenny’s befuddled father, a man awkwardly trying to bluster his way past fears of a world too wide for his understanding. Dominic Cooper (The History Boys, Mamma Mia!) and Rosamund Pike (Surrogates) are terrific as David’s friends and accomplices—Pike especially, deftly playing the “dumb blond” who isn’t aware that her moral complacency exists, let alone that it’s an issue. As Jenny’s literature teacher, Olivia Williams carries the role of “cautionary example” (the over-educated “spinster”) with weary dignity, and Emma Thompson shows up as the film’s societal heavy—the frowning, Thatcher-esque warble of the racist, sexist Establishment. Thanks to those performances, An Education is no existential mope—it’s an entertaining and compelling romance, complete with pitfalls and regrets. What makes the film rewarding on repeat viewings are the layers of ideas about what education and knowledge are, where they come from and what purpose they serve–whether they’re taught in school books or in life’s cruel classroom. Jenny may eventually realize you’re never as smart or mature as you think you are, and that the most important lessons are learned from making stupid mistakes. But ultimately An Education is about wanting more than the narrow options you’re offered–even if the pursuit comes with a price.
Jan. 14, 2014 - SALT LAKE CITY -- Alexandria product representatives travel to select tradeshows throughout the country to be available to current customers as well as to introduce the library automation software to prospective facilities interested in a new system for their library. They interactively showcase the Alexandria product and address any questions or comments. The Alexandria tradeshow booth is designed with a large screen in the middle of its branded backdrop to allow for live demonstrations of the product for interested attendees. Product literature and reference material are available for conference attendees for future reference and to share with their colleagues. A couple of the events Alexandria will be traveling to include the Florida Educational Technology Conference in Orlando, Florida, on January 28-31, 2014, and the California Charter Schools Conference in San Jose, California, on March 3-6, 2014. The most current list of events that an Alexandria representative will be attending can be found on the company's website, www.goalexandria.com. Alexandria has a knowledgeable and efficient team to coordinate these events. A lot of design, logistics, and planning goes into making each tradeshow appearance a success. Speaking on behalf of the Alexandria sales team, Stephen Kunzler, a seasoned sales representative with 14 years of Alexandria experience, says, "We have a great time at the tradeshows that we attend. We get so much valuable feedback from our current customers; we can see and share their excitement about their favorite Alexandria features, or hear their stories of the excellent customer service they received when troubleshooting a concern with our technical support group. Oftentimes, the lively conversations we have with our current customers in a tradeshow atmosphere will draw in prospective customers wondering what this sense of community is!" Kunzler and the representative team have a busy start to the new year, with at least eight shows between January and April. Alexandria has been used in thousands of libraries worldwide since 1987, from single libraries to school districts of over 350 facilities. The company has built a happy, strong, and loyal customer base through listening, understanding, and adding functionality to fit customers' needs. It offers live technical support 24 hours a day, 7 days a week, 365 days a year for product users. You can learn more about Alexandria Library Management Software by visiting www.goalexandria.com or calling 800.347.6439 to talk with a product representative.
(l to r) Tournament Directors Ryan Boyer and Mike Woody, Lucie Hradecka, Irina Falconi and Executive Sponsor Dan Futter at the trophy ceremony following the singles final. © Robert Spears Photography
Playing doubles with Alison Riske. © Robert Spears Photography
In action against Madison Brengle in the quarterfinals. © Robert Spears Photography
Addressing the media in front of her awesome name card. © Robert Spears Photography
Irina Falconi turned professional in 2010 following a stellar sophomore campaign at Georgia Tech and, shortly thereafter, became just the 10th qualifying wild card ever to reach the US Open main draw. At Georgia Tech, Falconi earned All-America honors as a freshman and was the No. 1 player in the 2009-10 NCAA season-end Intercollegiate Tennis Association rankings as a sophomore, posting a 40-3 overall record. Following her collegiate career, Falconi moved full-time to the USTA Pro Circuit and wrapped up the 2010 campaign by advancing to the semifinals at the $75,000 event in Phoenix and the $50,000 events in Kansas City, Mo., and Troy, Ala. She also reached the final at the $25,000 event in Rock Hill, S.C., and won the $10,000 event in Atlanta to rise to No. 184 at year’s end. In doubles, she reached four finals, three at the $50,000 level or better. Falconi kicked off 2011 by qualifying into her first Australian Open and is now ranked a career-high No. 156 in the world. The 20-year-old is competing this week at the $100,000 Dow Corning Tennis Classic in Midland, Mich., and will be writing a blog from the tournament. Check back daily for updates! Do you have a question for Irina about something she wrote in her blog or about the Dow Corning Tennis Classic? Email her here, and she will respond to as many questions as possible. Please keep in mind, however, that due to her busy playing schedule she may not have time to answer all questions. Monday, Feb. 14, 2011 It's a quarter before ten am on Monday (HAPPY VALENTINE'S DAY, BY THE WAY), and I'm back in Atlanta. Wow, it feels like forever ago, that whole tennis playing thing. Well, everyone, it's been fun, it's been real, and it's been real fun. (I can't take total credit for that one.) I'm debating whether to just go into my whole day or just go through an extensive explanation as to what happened AFTER the match. Let's start from the beginning, though. So, everything in the morning is the same. You can't change the routine now, ya know? So we're having breakfast, everything's great, we head to the site to warm up with Alison before the assumed "12 pm" final. Once I get to the site, at like 10, I find out the final is at 1... so my initial reaction is, "Haha, they are totally messing with me," and after about two seconds of that, I realize that the final is actually going to be at 1, and that we would then have to play doubles. The flight that I had gotten the night before was a complete fail because we would have to drive 120 miles to get to Detroit Airport in time for a 7:35 flight... Do the math--it wasn't going to happen. Well, I strongly believe that everything happens for a reason, so let me tell you a little bit about the glamorous life of a tennis player. So, the final at 1. It took one break in each set for me to lose the match to a very good player, Lucie Hradecka. Props to you, girl, and good luck in Memphis. You had some great things to say on the podium, and I definitely will be cheering for you when I'm not playing you ;-) So after the match, I got the news that doubles was not going to happen.
The crowd was really understanding, and I told them that Mike and I would play a little mixed, but it wasn't able to go down. After the ceremony, which was, might I mention, absolutely wonderful and well run, I got bombarded with some reporters asking me all these awesome questions. Side comment: I have never felt this short in height before. Every single day, in every interview I read, something about me being 5'4" was noted. Jeff and I thought that was very funny... ha. Anyway, after figuring out my cash money baby, it was time to hit the road. I said my very sad farewells and took my giant poster with me that said, "We <3 u Irina." I absolutely fell in love with Midland and the people in it.

Alright, so here comes the really exciting part. On the way to Detroit, Jeff and I were talking about the match and about upcoming weeks, when all of a sudden I get an email from Expedia saying that my 7:35 flight was canceled. I have a mini freak out, I ask Jeff to call them, we get it sorted, and then I'm fine again. We get to the airport exactly one hour before the flight. We get to AirTran, everything's going well, and then the guy looks at me, and this is how it goes:

AirTran Worker: How many bags will you be checking?
Me: Just one.
AirTran Worker: That will be seven thirteen.
Jeff: Seven dollars and thirteen cents?
AirTran Worker: No, seven hundred and thirteen dollars... And forty cents.

Really? Can I start to freak out yet? Yes, total freak out. I kinda looked at him puzzled, explained the situation, showed him the receipt and told him that we had sorted it all out via the phone on the ride over to the airport... It's six forty-five. So my heart starts racing, and Glenda (the other really nice AirTran worker) goes: We have a reservation for you, but Expedia hasn't paid us yet. You're going to have to work this all out with Expedia... It's six fifty. Jeff calls again, and we start talking to Valerie. She's nice, patient, calm, and she speaks perfect English. What more could you ask for? Finally, Jeff gets off hold, and she tells him on the phone that she has authorization for us to go on this flight... It's six fifty. I kinda light up, and Jeff tells me we're good, and he takes my bag that needs to be checked and confidently puts it on the scale. Glenda talks to the Expedia worker on the phone, and the last thing we hear from Glenda is, "I can't do anything from here. You have to have reservations take care of this..." Glenda finally gets off the phone with Valerie, and she looks at us and says, "I can hold the flight for five more minutes before I won't have any access anymore"... It's seven oh one. I decided to make the call, look at Jeff and tell him that we're just going to have to buy the flights up front. So as we are walking up to security, we hear the PA say, "Final boarding call for Falconi and Wilson." It's seven fifteen. We start sprinting. Now people, I don't sprint in airports. I don't like to be in a rush, I don't like to be rushed, nor do I like to be late. I wasn't too thrilled about running around in Detroit with my awesome Uggs. At least my feet were warm. After about half a mile of running, I got to the gate, and we were good to go. We were able to catch our breath, and we got seated... All was good in the world. We got to Atlanta, and it was hot. I guess that's why they call it Hotlanta! Well, people, it has been an amazing week. I've enjoyed every single second of it. You can still send your questions, and I will do my best to get back to you as soon as possible.
Y'all have been great! Thanks for reading, and Stay Classy.

To Michael Andres: The transition from juniors to college was very easy, due to the fact that I didn't really play that many junior tournaments. I was homeschooled for three years when I was in high school, so I had to take care of my studies, just like I would have to in college. The practice was more organized, and there was more routine to your schedule, but other than that, you were still traveling and practicing and playing with a bunch of players. The team atmosphere was something that I wasn't used to, which made the transition even easier. From college to pros, that wasn't too bad, either. I was lucky enough to get some pro tournaments in before heading to college, so I knew the gist of what the pro life was about. These girls were playing for money, and the pressure was a little more, but once that part was taken care of and you acknowledged and accepted it, I was in like Flynn. Let me know if there are any other questions. Talk to you soon.

P.S. I'd also like to give a special thanks to Eric Smothers and Serious Tennis for their great stringing service :-0

Saturday, Feb. 12, 2011

Happy Saturday everyone!

P.S. The Picture of Dorian Gray is supposedly a good book. The best football player to ever live just recommended it to me. Check it out and let me know what you think.

Well, everyone. This is my second Saturday in Midland, Michigan. And to be completely honest, I have loved every minute of it. I'm about to get mushy with you here, so bear with me people. Actually, I'd rather save the whole mushiness for tomorrow since it'll be my last blog entry from Midland :-( Well, TILL NEXT YEAR! :-) Anyway, let me tell you a little bit about my day. This morning I was able to have breakfast with my housing family for once!! That's the beauty of the weekend. So yeah, the same routine, the same process took place yet again. Believe it or not, superstitions are everywhere amongst professional athletes. I'm going to go through a few that I've heard of, some that I have (but I won't specify which ones are mine), and some that I just can't believe.

1. The first one that comes to my head is eating the same food every night if you're doing well. I know a player who had a milkshake and a hamburger the first day she was at a tournament, and for the whole week she would eat the same thing because she just believed that she had to keep it the same.

2. Wearing the same outfit every night. Since we're not all jacked-up millionaires with dozens of the same outfits, there are players out there that will wash their clothing every single night. Even if they are in hotels and stuff, where you have to pay like a buck twenty five per load, there are girls that will commit to an outfit and just wear it every single day. It's pretty intense.

3. Stepping on the lines. There are some girls that will literally go out of their way to not step on the lines of the court. They will literally like hop and skip over them, to just not step on them. It's pretty funny when you watch them from the stands--they look pretty silly, but hey, whatever works.

4. Another one that I've heard is the number of bounces you take before you serve. There are some players who will literally bounce the ball 8 times before every serve. Trust me, if you're on the other side, it can start to get really annoying. But hey, once again, whatever works.

5.
One of the nastiest ones I've heard (if you have this one, y'all should really consider looking at numero dos for some advice) is the wearing of the same underwear for the whole week. And trust me, I know it sounds absolutely disgusting, but trust me, I've met girls that do it. It sounds really unhygienic, and in all honesty, it is, but the girl that told me this appears to be clean, so who knows...superstitions will get ya.

6. And last but not least, one of my all-time favorites, is using the same racquet throughout the whole tournament no matter what. It may be cracked, the grip might be off, the string might be dead, but there are girls who will just continue to use the same racquet, no questions asked.

Obviously these are just a few superstitions that I've heard, but that was just a gist for you to see where we get it from...you know...the little crazy in all of us? Anyway, let's go back to my Saturday. After warming up with my girl Alison, I played a Top-100 Canadian who has been having just an absolutely fantastic run in the past couple of months. I knew that she was going to be tough, and she was. I came out victorious, which was great! And just twenty minutes after that, we had the semis of doubles to play. Alison and I took on a team of girls that were playing together for the first time this week. They came out ready to ball, and we were fortunate enough to get the win in a close two-set match. After a little debriefing and some amazing chili, I headed back home to get some rest and to chill before my coach arrived. Oh yeah, that's right. My coach, Jeff Wilson, decided to hop on a flight from Atlanta to Detroit to be here for tomorrow's match. Isn't that awesome!? Jeez, what you have to do for people to come see ya, huh? Totally kidding, Jeffo. While he drove from Detroit to Midland, I was once again pleasantly surprised by my housing parents. One, from the chicken parm that they had ready for me to eat, and two, this amazing necklace they got me! I was so excited, I didn't know what to do with myself---so I did laundry. It's a great pastime. Y'all should try it. So after that amazing meal (thanks, Carl and Helen) my coach picked me up, and we headed to the club for a little scouting before game time tomorrow! Look forward to the final tomorrow! Thanks for reading, and Stay Classy Midland.

If I were you, I would just bring some gloves and a bucket---so you can go ahead and collect the snow from outside and throw it everywhere. I'm sure no one would mind :-)

The Palm Desert tourney is in the sights, but if I can get into some bigger events, I would most likely play those :-\ When will I be seeing you?

Friday, Feb. 11, 2011

"It's a wonderful day in the neighborhood," as the late Mr. Rogers used to say. Bless his heart. Greetings everyone!! You know what the coolest thing about this whole blog experience is? The luck that comes along with it. Thanks, Sally. I didn't know that was one of the perks, but hey, it works! Anyway, let's start with breakfast. One of the most exciting parts of my day. Unfortunately, I had a lonely breakfast due to the fact that my girl, CMac, left me this morning. :-( Don't worry, guys, I sent her a copy of my blog... In case you were wondering. So yes, I had my awesome signature breakfast (egg whites and brown rice) before heading over to the tennis center. Before I get ahead of myself, let me break down for ya the process of making my scrambled egg whites in the morning.

1. You have to make sure you have the right eggs.
I typically choose Eggland's Best, because of the name, but I'm also attracted to free-range eggs just because of all the hormones stuff.

2. Ok, second, you have to make sure that you have the right pan---non-stick, people.

3. I personally use Pam Olive Oil spray so the eggs don't stick to the pan. (Even though it's a non-stick pan, those things are stubborn little suckers.)

4. Once you turn the pan on to high, you spray it all up with some serious Pam before you really start to get cracking (no pun intended).

5. REMEMBER: Since you're going to be cracking these eggs and just getting the egg whites, you have to make sure that you have the container with the eggs near the stove, along with a plate where you can put the yolks and egg shells.

6. The cracking of the egg is very important. You see, you can crack the egg with one hand, two hands, or you can crack it against a surface to open it.

7. Once you crack the egg (you typically want to crack it along the middle so you can get the yolk to completely fit in one half of the egg shell), you start playing hot potato, but in this case, hot yolk, from one half of the egg shell to the other so you get every last bit of egg white onto the hot frying pan.

8. After you've gotten that last bit of egg white, you go ahead and dump the yolk on the plate. If you're like me, you just put it back in the egg container to save yourself the trouble of cleaning the yolk out of the innocent plate.

9. REMEMBER: Part of the shell can fall inside the mix, so when that happens, people, please don't use your finger. That's the stupidest idea ever because you're going to get burned. You know what they say about getting too close to the flame---YOU GONNA GET BURNED!

10. Last but not least, you start scrambling the eggs (it's plural because I eat five egg whites every morning, but hey, if you want two, three, six, that's fine by me). You can use a wooden spoon or a spatula. Whatever floats your boat. Depending on how you like your eggs (whether mushy or hard), you then empty out the eggs onto the plate.

The brown rice part is easy because that's minute rice and it only takes a minute (hence the name). You dump it all together, spice it up with whatever sauce or spice you might like, and VOILA, you have an amazing breakfast staring you in the face... I didn't drift there, did I? I can totally continue on this whole cooking thing, but I guess I should probably talk a little bit about the tennis and all that jazz.

So yes, I played a fellow American today. We went out on the court together (we're good friends) and tried not to talk about fashion on the way out there... or nail polish, for that matter. It was a battle yet again, and I was YET AGAIN fortunate enough to come out victorious. Good luck in Memphis, girl! So yes, after that...hmm...after came lunch. Lunch was another exciting part of my day today. I decided to go with the chili and salad mix for lunch today. It was approved by Kimbo, which is the most important thing. I had a nice lunch and chat with the main man of the tournament, Mike Woody. He's such an awesome cat, ain't he? For those of you who haven't met Mike, he is one of the most energetic, nice, passionate and humble people I've ever met. After our talk, it was time for doubles. Alison and I got to the court and were victorious against two tough opponents. Semis, baby! I guess I finished the day relatively early today. At around 5 o'clock? Considering the 10 o'clock finish last night, 5 o'clock was just a fantastic time to end the day.
But wait, my day isn't over! When I got home, my amazing housing family, Helen and Carl, made homemade Chicken Pot Pie! Now people, y'all read about the spaghetti---that was good. When I tell you this was really good, I mean, it was REALLY GOOD! I even took a picture of it and sent it to a couple of chicken pot pie lovers to brag a little bit about how absolutely awesome my housing is. Alright, hope you enjoyed reading...I enjoyed writing it, as usual. And P.S., if you have any questions, feel free to click here. Ask away, my good people! I'll have answers! And P.P.S. I miss my partner in crime, CMac! Good luck in Memphis, girl!

Oh, before I go, here are answers to a few of your questions from yesterday:

From Colette Lewis, zootennis.com: Your blog is very funny. Could you tell everyone who your favorite comedians are?

Thank you for the compliment. A few comedians that I find very funny are Chris Rock, Ricky Gervais and Pablo Francisco. Don't really have a favorite. They are all just so good.

From Lee M.: Just read an interview with Pete Sampras, where he said he wishes he had switched to a more powerful racquet. How do you go about evaluating and selecting your equipment? Thank you, and have a great day.

Evaluating and selecting your equipment is one of the most important processes for a tennis player. I recently changed racquets. I had been playing with the red and white Babolat, the Pure Storm Tour, but I recently switched to Nadal's racquet, the Aero Storm. One of the reasons my coach and I decided to switch was because of how much more shape and rotation I would be able to get on that racquet. Plus, with the change of string, RPM Blast, the results were evident amongst top players such as Nadal and Schiavone, so it was definitely worth it to try it out. The difference in my ball was obvious, and I really liked playing with the combination. As of now, I don't have weight on my racquet, but I might add some lead tape in the near future to have more action and power with my ball.

Thursday, Feb. 10, 2011

Before I even begin, it's been quite a day. It's quite late, I don't remember when I even woke up this morning, and I can't remember what I ate, so just bear with me here. This might be quite long, or short, I really don't know. Just stick with me here. Alright, so this morning I decided to sleep in... I wanted to sleep till nine, but SOMEONE decided to wake me up at 8:30 (you might not think so, but 30 more minutes of sleep is a big deal) with a picture message that had no picture in it... You know who you are. Anyway, I had breakfast with Annie (she's my adopted housing doggy for the week, in case you are not up to date with all the characters in my blog) and got a ride to the tennis center to get my first hit of the day with CMac. She had already been up at the break of dawn hitting balls, but she was willing to warm up her roomie. What a sweetheart :-) So yes, after my first hit, I stretched and got a little lunch. Don't worry, I had a long enough window since breakfast, Kimbo ;-) She's my trainer/nutritionist. On the WTA website, I actually listed her as Kimbo "The Ripped" Wilson. Trust me on this one, it's funny. And if you don't believe me... well, that's just too bad. The proof is in the pudding: wtatour.com, people. Check it. Alright, so after stretching, showering and all that jazz, I had lunch with a few of my American girl friends: Amanda Fink, Lena Litvak, Story Tweedie-Yates and CMac. We had fantastic conversation to complement the wonderful food we were all eating.
The conversation went from babies, to Valentine's Day, to meringue, to Italian wedding soup, and a bunch more. The "bunch more" stuff obviously can't be mentioned in the blog... since it's politics and all. So yeah, after that, I decided to make my rounds around the tennis center, catch up with a few people, get some advice from the ever-so-wise Tom Gullikson, and watch a little bit of tennis. Honestly, guys, I can watch tennis all day. Like, seriously, ALL DAY. Nevertheless, I knew I couldn't stay in the tennis center all day when I was the 7 o'clock feature match -- I'd be exhausted! So CMac and I decided to go back home and chillax (chill+relax=chillax) before coming back. I got in a fabulous nap before it was time to head back to the tennis center. CMac agreed to warm me up before my match yet again (what a sweetheart, yet again), and then I decided to grab a little dinner before my match. In case you were wondering, deciding when to eat is one of the most important considerations when you're a professional athlete. I knew that I needed at least an hour before my match to eat so I could digest my food, and I also knew that I couldn't wait till after my match to eat because I didn't want to be hungry on court. So I decided I'd rep the chili and a Subway salad before heading on the court. They complemented each other perfectly. Yum. After eating, it was time to get all ready to go for my match. I played a Top-100 player from Great Britain and the fourth seed of the tournament. I was once again fortunate to come out on top in a very competitive battle that stretched to a 2-6, 7-5, 6-1 scoreline. We finished the match close to ten o'clock. It was exciting because I got to toss some balls into the crowd at the end of the match, and I've ALWAYS wanted to do that... So thanks, guys, for giving me that opportunity. IT WAS AWESOME! Alright, so tomorrow will be an action-packed day. Singles NB (not before) 12, and then doubles with my awesome and lovely doubles partner, Alison Riske. WHO, BY THE WAY, got the Sportsmanship award tonight. Congratulations Ali, you totally deserved it!!!! Stay tuned, and Stay Classy Midland.

Wednesday, Feb. 9, 2011

What's going on world! Irina here with yet another eccentric blog entry for your entertainment! Alright, so to be completely honest with y'all, today was a relatively sloooooooooooow day. Woke up, had a hit with my South Carolina sweetheart, Shelby Rogers. I think I'm going to give her the nickname of Ford Shelby... I called her that today, but she didn't really react to it... So who knows, maybe Roger Rabbit? Gimme sumtin, people. Alright, so after my warm-up, CMac and I decided to do some scouting and video chatting with a couple of our friends back east before heading to a luncheon with our very own tournament director, Mike Woody. We had our own chauffeur (Mike himself), and we met to go to Damon's restaurant at around quarter to twelve. Once we got there, we got to order from this fabulous menu, and we got our food in record time. And then, the real party started. Mike introduced us, and we talked in front of about 30 people that were eating at this awesome luncheon. We got asked a LOT of questions, which was surprising because usually you'll get like three or four questions max, but these people were intense! And we loved it! CMac and I were able to really spread our wings in the public speaking world. Not literal wings...well...maybe...since this is literature...somewhat...Anyway! Drifting again.
So, we got back from our fabulous 30-minute debut and decided to watch some very exciting matches! We cheered for a couple of Americans that were playing today and had some popcorn along with that... "Not too salty," CMac might add. The popcorn just made it more enjoyable, due to the fact that watching tennis can be very stressful. I don't know about you, but when I watch tennis, I feel like I'm more nervous than when I'm actually on court playing a third-set tiebreaker in hundred-degree weather. But hey, that's just me. After some solid match watching and analysis and scouting and cheering, we decided to go home. Yeah, it was four o'clock in the afternoon, and we decided to go home. We really were just ready to relax and get away for a little bit. Now don't get me wrong, I personally can spend an ENTIRE day at a tennis center watching matches and such, but a little birdie gave me a hint that I might be playing a feature match tomorrow in the evening, so that just sealed the deal. We came home, chilled, and watched "Everybody Loves Raymond" and "King of Queens" before we decided it was dinner time. So guess what we had? No seriously, guess. Did you say spaghetti? Cuz we TOTALLY had spaghetti leftovers! Our angel of a housing mom, Helen, decided to make us an oven-roasted chicken dinner, and from there, we were golden! We ate and ate and ate some more (nah, we didn't eat that much) before we cleaned up and had yet another heart-to-heart. After a solid hour of that, we decided to have a little piano lesson. Now, I can't say I'm very good, but I did teach myself how to play some, which then trickled down to teaching CMac a little sumtin sumtin. She perfected "Joy to the World" and about the first 10 notes of "Für Elise." It was quite a feat. After that, CMac announced it, "Alright, I think I'm ready to catch some z's." And that, my friends, is how the cookie crumbles. Till tomorrow! Peace and blessings :-)

Tuesday, Feb. 8, 2011

Well, a hello to you, my avid blog readers! Let's see, the songs that I'm thinking about that would describe this day perfectly are: "Celebrate," "Beautiful Day" and "I Gotta Feeling." Yeah, that's about right. How are you guys doing? Me? I'm just perfect! Today was a heck of a good day, and it wasn't just the homemade spaghetti that my amazing housing mom, Helen, made for CMac and me; it was a solid day of tennis, as well! No, but seriously, Helen, that spaghetti was unreal. Thanks for the whole wheat. That really sealed the deal. So, yeah, let's start with breakfast. Actually, maybe not that far back... warm-up would be quite sufficient. My college friend, Caitlin Whoriskey, was nice enough to warm me up this morning at the crazy hour of nine (trust me, that's early to some people). So, yeah, we warmed up, and then the main-draw matches finally took their places on the courts. Just letting you guys know a little cool fact about this tournament: there are 15 American players in this tournament. Mister Gully and I were talking about that today! Eleven were originally in the main draw, but the four qualifiers that made it were all Americans. How absolutely fantastic is that? I'd say that's pretty darn great. Anywho, after my warm-up with CDub, I got all dolled up and ready for my match. I was second on today, which means that one match had to go on before mine. At about 11:30, things really got rollin' like a Midland snow drift in February.
I went on court, and I played against a former Top-100 player from Great Britain, who is just an incredible competitor. I felt fortunate to come out on top! After the match, I had some time to get some grub and catch up with my friend Madison Brengle to talk about the latest fashion and what colors are in style. Purple is the new 20-year-old pink! Subway did me good, and after that, my good friend Alison Riske and I were to play doubles against some fellow Americans. We played a tough team and were fortunate enough to get a win in the third-set tiebreaker. Let me tell y'all something, doubles is all about communication. And the one thing Ali and I can do, man, we can get to talkin'. It was our first time playing together, and I personally had a ball, a snowball -- it is Midland in February, after all. P.S. The temperature in the car read 0 degrees this morning. I just felt a need to mention that. Ok, great. After some stretching and a little debriefing of our doubles with Ali's awesome sister/coach/bag carrier/masseuse/travel agent, Sarah, I headed home with my girl CMac for our special homemade dinner that I will just continue to brag about for the rest of the week and probably the rest of the year whenever I eat spaghetti... whole wheat spaghetti, that is. With spaghetti in our tummies, CMac and I had a little R and R, followed by a heart-to-heart, before we called it a night. Talk to y'all tomorrow! Stay Classy, y'all!

Monday, Feb. 7, 2011

Wassup Michigan! And the rest of the world! (Half of the people that read this live in Midland... the rest are family members and people that know that my writing is absolutely fantastic and have a huge desire to read my entertaining blogs.) So I took a day off today to do some ice fishing on Mackinac Island... I wonder how many of you actually believed that statement above. Well, I thought it was funny. And besides, ice fishing is something I do in my free time anyway. Nothing like sitting around in a box with no light and no heat on a frozen lake by yourself... All right, I'm drifting here. Ok, so this morning, CMac and I were able to hit for the first time since Australia. (Doesn't that sound so cool?) We went through our routine (which I described in detailed fashion; just scroll down to yesterday's blog), and then we went to chill a little bit before I had my first real live press conference! Whoop whoop! It was so exciting! I was accompanied by three other players and the wonderful Mr. Tom Gullikson. So yeah! The questions were all about how we go from tournament to tournament, who takes care of all the arrangements, what it takes to win matches in this era, and what keeps bringing players back to this tournament! Mike Woody, our tournament director, was fantastic, and we all felt really special up there with those awesome name cards. (Something about name cards just makes you feel so special... I honestly don't know how to explain it.) After that, I was able to get a quick Subway sandwich before heading out on the court once again with a new buddy of mine, Miss Shelby Rogers. P.S. Shelby, if you're reading this (which I assume you are), that snowball fight is on like Donkey Kong! A solid hour and three minutes later, it was time to see the most exciting news of the day -- the schedule of play. Now, Dessie Samuels and Billie Lipp, our ITF supervisors, are just wonderful. They treat us like one of their own, and they provide us with everything we need.
We players usually just bug them about schedules and new balls, but still, they get a lot of love from us. I found out the schedule and then decided on what time my warm-up would be. And Trice Capra, trust me, I want to hit with you, but you always text me after I'm all booked. You gotta get with me early, girl! (I told her I'd write about the fact that she's asked me about five times to hit, but I'm always set. She actually texted me today saying: "I know you're tired of me asking this... but do you have hits for tomorrow?") People, let me tell you, getting hits and warm-ups can be such a grind sometimes if you're not rooming with another player or if you don't have a coach. OK, so after setting all that up, CMac and I went back to the Gibbons' crib to get some R and R before the big party! The Midland Classic Player Party! The food was great, the girls were dressed up, and the people were super friendly! Mike told us that we were mandated, by him and the rest of the sponsors, to strut our little (well, I'm the only little one here, but that's ok) selves up on that stage and introduce ourselves. He also said that if we really liked to talk, we could rap or do stand-up for a few minutes. It was encouraged, not mandatory, which was nice for all of the players to hear. Even though it wasn't obligatory, we all ended up doing a little stand-up on the stage. Every girl on the stage made a point to thank the sponsors and the community for hosting us and putting this tournament together. And, once again, we cannot thank you enough, everyone! A few paparazzi pictures and handshakes later, we were off home to get some shut-eye before our matches tomorrow! Goodnight everyone! Send some prayers this a'way!

Sunday, Feb. 6, 2011

Hello again! 'Tis Irina! So, apparently, some football game was today. It was these Green guys against these Yellow guys... And I think the Green guys won... No big deal, it was just a football game... ANYWAY, OTHER THAN THAT, today was a fun-filled day with my roomie for the week, "the one and only" (according to her) Christina McHale! We had a great breakfast where I was looked at funny because of the egg whites, brown rice and sesame dressing sauce that make up my breakfast. Can't forget the pineapple tidbits, either. So we headed to the club, where we both had some early hits with some fellow talented Americans, Coco Vandeweghe and Amanda Fink. If you're curious as to how practices are held, let me break it down for ya:

1. You start off with the enhanced but short warm-up (because if you're a pro, you've already warmed up before stepping on the court).

2. Mini tennis for a few minutes (I could spend hours doing this, but I guess that's not really how the matches go).

3. Hit down the middle for about 10 minutes (yeah, there's really no more to that).

4. Crosscourts from both sides (and, of course, you should always be nice and ask if the person is ready. Don't just strut to the other side... that's just rude).

5. Volleys (and overheads, swinging volleys, "back to the future" shots -- when you hit the ball over the net and when it bounces the second time, it's back on your side).

6. Serves (on all 6 spots of the service boxes).

7. And, last but not least, a couple of points (whether it be baseline games, service games or tiebreakers).

And that concludes a hit. A warm-up prior to a match, though, usually consists of everything but number 7. Don't ask why, that's just how we do it. Phew, so after that "hit," I waited for CMac to get some lunch.
Which is, might I mention, PROVIDED by the tournament, FREE OF CHARGE. You don't hear that every day. Free food for athletes? Yeah, good luck. After our feast, the most fun part came -- sitting and waiting. We both had hits later in the day, so we decided to just hang, watch matches, and talk to other players about how New Yorkers are different from... New Jersey-ans? (help me out on that one). We also had the chance to talk to the one and only Tom "Gully" Gullikson, who just happens to be a Packers fan. (GO PACK! I'm just saying that because they won. Take it easy, Steelers fans.) So, yeah, after some solid waiting, AND WATCHING THE SNOW COME DOWN -- I love snow! I haven't seen it in years, and the fact that I was able to experience falling snow was absolutely amazing! -- Sanaz, CMac and I went outside and took a picture while it was snowing... and in shorts... pretty fantastic! Then that time came when we had to go through our routine again for our second hit of the day. After our hits, we decided to go home and say bye to our housing family. :-( It was all good, though, because they knew that we were only there for two nights and would then move to our new housing family: the Gibbons! Helen and Carl were just fantastic from the moment we walked up to their meowing cat, Mitchy, and their excited doggy, Annie! They were nice enough to invite us to a Super Bowl party that Helen's sister was having! We decided, "Hey, let's go crazy and watch the Super Bowl!" And we did! The white chicken chili was phenomenal, and the entertainment was just awesome! For my first Super Bowl party, it was a great turnout! Now we are back home, ready to get some shut-eye, for an awesome day tomorrow! Goodnight everyone!

Saturday, Feb. 5, 2011

Irina Falconi here! It's been a while, huh? Pretty sure the last time y'all had heard from me was when the USA Collegiate Team went to France and started making all that noise to beat France in the final and win an international title! Yup, it's been a while. Well, here I am again, in Midland, Mich., this time! A little different from France, but hey, there's about 2 feet of snow here! Beat that, France! I've been here about 24 hours, and I am quickly falling in love with the little town of 40,000 people that is called Midland. From interactions with girls and stories that circulate throughout the circuit, I have heard nothing but compliments and great things about the facility, the tournament and the people that host the players in Midland. First off, let's not even begin talking about the facility and the tournament yet. Can we please talk about the service?! How amazing the hospitality is? Miss Nancy Billovits, I hope you read this. Miss Billovits is the housing coordinator that I came in contact with through the help of one of the wonderful tournament directors, Mike Woody. From the first text that we sent each other, Nancy has been an angel! She hooked me up with one of the most wonderful housing families I've ever been with: Miriam and Gil Harter. These two folks have that amazing, hospitable feel about them that makes me feel like one of their own. Unfortunately, I will only stay with the Harter family for two nights before I have to move to another family. That wasn't supposed to go down like that, but because I came a little early, the Harter family offered to put up with me or put me up, whichever you like, for two nights. I am very glad I was able to meet these fine people. So today, I went to the club for the first time: can you say STYLIN'?
I mean, this town, 40,000 people, has a club with 16 indoor courts and 16 outdoor courts, with plans to add four more clay courts? I mean, that's ballin' right there! I'm just saying. So I practiced with my newfound friend this morning, Whitney Jones. After hitting on one of the fastest courts on the planet (which is perfectly fine with me), we headed to Quiznos for a little lunch with a couple more friends from the circuit. When I came back from lunch, I really experienced what all the girls had been talking about regarding this tournament's local outreach. The number of kids and coaches I saw on the courts was awesome! They were blasting the Black Eyed Peas (awesome, by the way) the whole time, while teaching these young talents how to smack balls around the court! I then encountered a few volunteers from the tournament who were eager to meet me! They signed me up to receive my badge and got me all official and everything (when you get a badge, it makes you feel official -- what can I say? ask an officer), and they offered me tangerines (who can say no to tangerines?). After some handshaking with some important people from the tournament, I hung out with some fellow players, Jamie Hampton, Beatrice Capra, Sanaz Marand and Lauren Herring, to talk about politics... NAH, I'M JUST KIDDING! A little before hitting again at 3:30, I used the sweet and compact fitness room located downstairs, right before you walk out on the court (very convenient, if you ask me), to do a light workout before practicing with the lovely Alexa Glatch. After a solid hit, I did a long stretch that was highly needed, followed by some recovery to get all that muscle into top-notch shape! Once again, I went back to the table to talk some more about "politics" -- in Spanish, might I add. (Lauren Herring is taking an online Spanish class for school. It's quite amusing, actually.) Coming back home was nice. I was able to relax and take a long shower and a short nap before dinner and meeting up with my good friend Christina McHale. She was coming from Ft. Lauderdale, and her flight was delayed a couple of hours due to the snow, but it was no worries! She was peachy! Two bowls of vegetable soup later, and we were both ready to get some shut-eye. Qualies start tomorrow!! I'll keep you guys posted on what goes down!! Cheers from the land down... above? With snow? YEAH, THAT WORKS! :-) IF

P.S. Dow Corning, we salute you and thank you from the bottom of our hearts for providing us with a great tournament! :-)
[Note: this essay contains many hyperlinks. They can be right-clicked and opened in a separate tab or window.]

What medical devices are shielded from liability? Are there other examples of legislation seeking legal protections for wide-scale use of medical devices that even the device's trade group leadership admits are not ready, and are experimental? Here we have a proposal from a member of the U.S. Congress to shield health IT software, a medical device (per FDA's Director of CDRH - the Center for Devices and Radiological Health - and others), and its users from liability through an apparently unique special accommodation. This from iHealthBeat.org:

Thursday, October 27, 2011 - U.S. Rep. Tom Marino (PA-10) has introduced a bill (HR 3239) that would create certain legal protections for Medicare and Medicaid providers who have implemented electronic health record systems, the Wilkes-Barre Times Leader reports. The bill -- called the Safeguarding Access for Every Medicare Patient Act -- would create a system for reporting potential medical errors that occur when using EHRs, but it would not allow such information to be used as legal admission of wrongdoing. The bill would cover certain physicians and hospitals that serve Medicare and Medicaid beneficiaries. It also would cover participants and users of health information exchanges. Marino, who is a member of the House Judiciary Committee, said that offering the new legal protections to health care providers would promote greater use of EHRs and encourage Medicare and Medicaid providers to continue serving beneficiaries. [As if they could not do so without EHRs? - ed.] He said, "Many providers are reluctant to use [EHRs] because they believe the practice will make them more vulnerable to unnecessary legal action," [unnecessary? How about real and necessary, as per the White Paper "Do EHRs Increase Liability?" - ed.] adding, "This [bill] protects access for seniors in the Medicare and Medicaid programs" (Riskind, Wilkes-Barre Times Leader, 10/27).

From Rep. Marino's website (my comments are in [bracketed red italics]):

Marino Introduces Safeguarding Access For Every Medicare Patient Act
FOR IMMEDIATE RELEASE - Oct. 26, 2011

WASHINGTON -- U.S. Rep. Tom Marino, PA-10, has introduced legislation that offers limited legal protection to Medicare and Medicaid providers who use electronic records. [Which, I fear, could effectively act as, or mutate into, absolute protection in the environs of the legal system - ed.] HR 3239, the Safeguarding Access For Every Medicare Patient Act, would ensure patient access to Medicare and Medicaid providers; reduce health care costs [really? That's not what Wharton and others write - ed.]; guarantee incentives to providers to remain in the Medicare and Medicaid programs; and promote participation in health information technology. Providers will eventually be required to participate in electronic recordkeeping or face a reduction in payments. Marino said the bill offers incentive in the form of legal protection to providers who may be reluctant to remain in the Medicare and Medicaid programs due to low reimbursement rates, which are constantly being targeted for further reductions. [I imagine the known risks of health IT, such as these at "MAUDE and HIT Risks: What in God's Name is Going on Here?", are a minor consideration if you receive legal immunity - ed.] HR 3239 would create a system for reporting potential errors that occur when using electronic records without the threat of that information being used as an admission of guilt.
[Even if the physician or nurse is guilty of EHR-caused or EHR-aggravated (i.e., "use error" per NIST) malpractice - ed.] It also prevents electronic records from being used as an easy source for "fishing expeditions," [like this case, this case, this case and this case, where patients died? - ed.] while making sure that parties responsible for errors are held accountable [how? - ed.]. The proposal allows for providers who use electronic records to take remedial measures without having those actions be used to establish guilt [even though remediation may be very relevant to malpractice, patient injury and death prior to the remediation, and the remediation is informed by the error - ed.]; places time limits on the filing of lawsuits; and offers protection against libel and slander lawsuits. [If this provision were to allow clinicians to speak publicly about HIT flaws without legal retaliation or sham peer review, I'd be all for it - ed.] "Many providers are reluctant to use electronic records because they believe the practice will make them more vulnerable to unnecessary legal action," Marino said. [I think it's much more likely they are reluctant to use them due to the aforementioned hair-raising MAUDE reports and literature such as here, here and here - ed.] "Every time a doctor or hospital chooses not to participate because of these fears, our seniors lose another provider. This protects access for seniors in the Medicare and Medicaid programs." Marino said HR 3239 is a two-pronged attack against rising health care costs: it provides legal protection to providers while encouraging the use of health information technology, which has been shown to reduce costs. [See above links on that issue - ed.] "Best of all, passage of this bill would require no new spending," Marino said. [Besides the hundreds of billions to be spent on the IT itself - ed.]

This sounds like a healthcare IT vendor marketing piece, with claims refuted repeatedly here at HC Renewal, usually via the biomedical literature. It's slick, purporting to "protect Medicare access" while actually promoting health IT sales. Did Rep. Marino get snowed by the health IT lobby? (See "The Machinery Behind Healthcare Reform" in the Washington Post.)

A major question is: what about the patients, and their rights to redress for injuries that occur due to EHRs? Chopped liver? Isn't this bill really saying that patients are experimental subjects with limited rights? In other words, that improving EHRs should be at the expense of the unfortunate patients treated under their auspices? That the computers have more rights than the patients? That line of thinking about what in reality is unconsented medical experimentation (i.e., "First, let's experiment" as opposed to "First, do no harm") has led to some very dark places in medicine, and not just in ancient history (e.g., see "Bioethics panel blasts late U. Pittsburgh professor"). See this reading list for more on these issues. Also see the many other posts on this blog about health IT quality, usability, efficacy, risk (and that the levels of that risk are admittedly unknown), lack of informed consent, and other issues via query links such as here, here, here and here - and the hyperlinks within those lists of posts - to more fully understand this perspective.

The text of the proposed legislation is here. While not all bad, it raises a number of concerns. Excerpts are as follows:

H. R.
3239

To provide certain legal safe harbors to Medicare and Medicaid providers who participate in the EHR meaningful use program or otherwise demonstrate use of certified health information technology. ...

SEC. 4. RULES RELATING TO E-DISCOVERY. In any health care lawsuit against a covered entity that is related to an EHR-related adverse event, with respect to certified EHR technology used or provided by the covered entity, electronic discovery shall be limited to-- [I'm not sure what "certification" has to do with litigation, since "certification" of health IT has nothing to do with safety or usability; see note below - ed.] (1) information that is related to [what does that mean? - ed.] such EHR-related adverse event; and (2) information from the period in which such EHR-related adverse event occurred. [eDiscovery related to EHR-related adverse events is already difficult, e.g., obtaining complete metadata. What these provisions would do is likely to increase the complications through legal maneuvers on terms such as "related to," "period," etc. - ed.]

SEC. 5. LEGAL PROTECTIONS FOR COVERED ENTITIES. (a) General- For a covered entity described in section 2, the following protections apply: (1) ENCOURAGING SPEEDY RESOLUTION OF CLAIMS- (A) GENERAL- A claimant may not commence a health care lawsuit against a covered entity on any date that is 3 years after the date of manifestation of injury or 1 year after the claimant discovers, or through the use of reasonable diligence should have discovered, the injury, whichever occurs first. [In other words, the earlier deadline controls: if an injury manifests on Jan. 1, 2012 but the claimant only discovers it on June 1, 2013, the window closes June 1, 2014 (one year after discovery), not Jan. 1, 2015 (three years after manifestation) - ed.] This limitation shall be tolled to the extent that the claimant is able to prove-- (ii) intentional concealment; or (iii) the presence of a foreign body, which has no therapeutic or diagnostic purpose or effect, in the person of the injured person. ...

(2) EQUITABLE ASSIGNMENT OF RESPONSIBILITY- In any health care lawsuit against a covered entity-- (A) each party to the lawsuit other than the claimant that is such a covered entity shall be liable for that party's several share of any damages only and not for the share of any other person, and such several share shall be in direct proportion to that party's proportion of responsibility for the injury, as determined under clause (iii); (B) whenever a judgment of liability is rendered as to any such party, a separate judgment shall be rendered against each such party for the amount allocated to such party [does that include the IT vendor? - ed.]; and (C) for purposes of this paragraph, the trier of fact shall determine the proportion of responsibility of each such party for the claimant's harm.

(3) SUBSEQUENT REMEDIAL MEASURES- Evidence of subsequent remedial measures to an EHR-related adverse event with respect to certified EHR technology used or provided by the covered entity (including changes to the certified EHR system, additional training requirements, or changes to standard operating procedures) by a covered entity shall not be admissible in health care lawsuits. [This in and of itself seems to give special accommodation to health IT, since remediation helps make the case for the presence of problems to begin with - ed.]

(4) INCREASED BURDEN OF PROOF PROTECTION FOR COVERED ENTITIES- Punitive damages may, if otherwise permitted by applicable State or Federal law, be awarded against any covered entity in a health care lawsuit only if it is proven by clear and convincing evidence that such entity acted with reckless disregard for the health or safety of the claimant.
In any such health care lawsuit where no judgment for compensatory damages is rendered against such entity, no punitive damages may be awarded with respect to the claim in such lawsuit. [Would that apply to a case such as this? Does it apply to the health IT vendors and their often cavalier software development and quality practices, if patients become injured, such as here, "A Study of an Enterprise Health Information System"? How about to this case, "A Lawsuit Over Healthcare IT Whistleblowing"? - ed.]

(5) PROTECTION FROM LIBEL OR SLANDER- Covered entities and employees, agents and representatives of covered entities are immune from civil action for libel or slander arising from information or entries made in certified EHR technology and for the transfer of such information to another eligible provider, hospital or health information exchange, if the information, transfer of information, or entries were made in good faith and without malice. [Does that include defects reports? - ed.]

From an ethical perspective, when you know a technology can be unsafe, but you don't know the levels of risk it creates, and the literature is conflicting on the benefits (prima facie evidence the technology is still experimental), you do not promote its wide-scale use in medicine and offer special accommodations to the technology's producers and users. Period. This is especially true without explicit patient informed consent and opportunity for opt-out. To promote such technology is not ethical.

Note: I believe the misunderstanding of "certification" of health IT contributes to the problems with such proposals. "Certification" of HIT has little if anything to do with safety, reliability, usability, etc. (e.g., see http://hcrenewal.blogspot.com/2010/03/on-oncs-proposed-establishment-of.html). "Certification" of health IT is not validation of safety, usability, efficacy, etc., but a pre-flight checklist of features, interoperability, security and the like. The certifiers admit this explicitly. See the CCHIT web pages for example. ("CCHIT Certified®, an independently developed certification that includes a rigorous inspection of an EHR's integrated functionality, interoperability and security.") Health IT "certification" is not like Underwriters Laboratories (UL) certification of appliances. ("Independent, not-for-profit product safety testing and certification organization ... With more than a 116-year proven track record, UL has been defining safety from the public adoption of electricity to new breakthroughs that help protect our future. UL employees are committed to safeguarding people, places and products in new and innovative ways for today's borderless world.")

This Representative seems to represent districts in Pennsylvania served by the Geisinger healthcare system, including Danville, PA, where its main campus is located. His legislative assistant on healthcare described Geisinger to me in glowing terms in a conversation today. However, I suggest that Geisinger does not have a perfect track record; e.g., see the post "A 'safe' technology? Factors contributing to an increase in duplicate medication order errors after CPOE implementation" and its reader comments and links.

It occurred to me that in the post "Is Healthcare IT a Solution to the Wrong Problem?", referencing a study published in the Nov. 25, 2010 New England Journal of Medicine entitled "Temporal Trends in Rates of Patient Harm Resulting from Medical Care" [Landrigan, N Engl J Med 363;22], I pointed out that the abilities of health IT to "reduce medical error" may be significantly less than imagined. This is because most medical errors have little to do with record keeping and much to do with human factors. See the post at http://hcrenewal.blogspot.com/2010/12/is-healthcare-it-solution-to-wrong.html.
Academic Freedom and Biblical Scholarship
Posted on June 19, 2013 at 11:57 am by Dr. Jim

Another in a long series of attempts to reboot this blog… I'm co-chair of the Metacriticism of Biblical Scholarship Consultation for the Society of Biblical Literature, and we have a session on Academic Freedom scheduled for the Baltimore Annual Meeting in November. We have three papers and a respondent lined up, with an extra 30 minutes of discussion time.

Jeffrey Morrow (Seton Hall University), "A Biblical Method to End Religious Conflict: The Socio-Political Context to Spinoza's Battle for the Freedom to Philosophize as it Relates to Academic Freedom and Biblical Studies"

Robert R. Cargill (University of Iowa; Visit his Blog), "'Do Not Receive into the Bible College or Seminary Anyone Who Comes to You and Does Not Bring This Doctrine': The Problem of Critical Scholars at Confessional Colleges"

James F. McGrath (Butler University; Visit his Blog), "Mythicism and the Mainstream: The Rhetoric and Realities of Academic Freedom"

Kent Harold Richards, StrategyPoints, Respondent.

Yours truly will be presiding. Anyway, it's clear that the big flaps over how conservative Christian schools sometimes react to liberal religious or secular scholarship being done on their dime will occupy a lot of the discussion time, and we are hoping for a good turnout. My co-chair, Rebecca Raphael, and I hope we can make a session on various issues in academic freedom a frequent part of our offerings at the annual meetings. There is a lot to talk about. There is also some preliminary talk of an edited volume. If you are attending the Baltimore meeting, I hope you can make it to our session. Here are the abstracts of the three papers:

James F. McGrath, Mythicism and the Mainstream: The Rhetoric and Realities of Academic Freedom

The rhetoric of concern for academic freedom becomes prominent at different times and in different situations – for instance, when a scholar at an Evangelical institution is fired for adopting a viewpoint that reflects the consensus of mainstream scholarship, but also when a proponent of a fringe view like Jesus mythicism has difficulty finding a publisher. This paper will explore the use and misuse of appeals to academic freedom, focusing particular attention on the phenomenon of Jesus mythicism, and the particular case of Thomas Brodie as described in his recent memoir, Beyond the Quest for the Historical Jesus. On the one hand, Brodie records resistance to his ideas in the academy (largely within the domain of Catholic institutions, but also more widely). On the other hand, it is possible that Brodie will face censure from Catholic authorities in response to the publication of his views. The case thus provides a good opportunity to look at the nature of academic freedom and its character, extent, and limits within the secular academy as well as religiously-affiliated institutions.

Jeffrey Morrow: A Biblical Method to End Religious Conflict: The Socio-Political Context to Spinoza's Battle for the Freedom to Philosophize as it Relates to Academic Freedom and Biblical Studies

Spinoza articulated a set of guidelines to study the Bible historically in his Tractatus Theologico-politicus (1670), which many scholars have seen as a Magna Charta of historical biblical criticism. Spinoza states that his purpose in attempting to study the Bible historically is to bring an end to the theological and political tyranny which made it impossible to philosophize freely.
One of the important historical backdrops to Spinoza’s work was the bloody Thirty Years’ War (1618-1648), the most violent of the so-called “Wars of Religion.” In his introduction, Spinoza explains how such allegedly religious conflict is at the root of his attempt to devise a fresh method for interpreting Scripture. Spinoza argues that if an objective method for studying the Bible can be found, then violence created by sectarian religious beliefs will be put to rest. Spinoza maintains that historical biblical criticism is thus necessary to bring peace to a still turbulent Europe which has been ravaged by horrific sectarian wars. This paper will explore the socio-political context to Spinoza’s biblical project to highlight the ambiguities that continue to plague such apologetic calls for academic freedom in the context of modern biblical studies. Such arguments for “freedom” have often meant freedom for some but a lack of freedom for others. Spinoza’s possible theological agenda notwithstanding, his blueprint for academic freedom and for biblical studies was part of an ongoing secularizing trend which took the academy by storm in the following (18th) century. That is, his method was part of a much broader movement to privatize theological and faith concerns. This secularizing movement had both philosophical and theological origins in the Muslim Averroism that spread throughout medieval European universities, and the Nominalism that made its way through the Protestant Reformation. Spinoza is an heir of this late medieval inheritance (as Jakob Freudenthal and Étienne Gilson demonstrated about a century ago in their important studies that have apparently been forgotten by contemporary scholars). Moreover, as Jonathan Israel has more recently shown, Spinoza’s thought played a central role in the Enlightenment debates which ensued long after his death and which secured biblical studies’ foothold in the modern university. Robert R. Cargill “Do Not Receive into the Bible College or Seminary Anyone Who Comes to You and Does Not Bring This Doctrine”: The Problem of Critical Scholars at Confessional Colleges This paper examines the increasingly problematic trend of the dismissal of critically trained scholars from typically small Christian Bible colleges and seminaries. Many confessional schools of late find themselves increasingly on the defensive when it comes to preserving their traditional doctrinal stances against advances in biblical scholarship, science, philosophy, archaeology, linguistics, and other disciplines within the liberal arts and sciences. As a result, many Bible colleges find themselves dismissing highly qualified Bible scholars, whose research may have led them over time to academic viewpoints that differ from the predetermined confessional statements of faith often mandated by their institutions as a condition of employment. These confessional schools often find themselves torn between a desire for the standard accreditation held by other credible universities, and the preservation of their characteristic doctrinal beliefs. This paper surveys several recent instances of these conflicts, identifies the main points of contention, examines missteps made by both institutions and scholars, and offers suggestions for scholars both seeking jobs and already employed at confessional schools, and for institutions seeking to preserve their denominational identity in the information age.
A group of scientists working on fossils from Kangaroo Island in South Australia has turned up a Cambrian predator with horror-movie specs: razor serrations in a circular mouth, claws at the front of its head, and compound eyes on stalks. Tagged Anomalocaris (roughly “irregular shrimp”, or perhaps “abnormal shrimp”), the meter- … I, for one Yes, let's bring this AND the mammoth back....mmm, mmm, good. I'll get started building a bigger grill, a much bigger grill. Now then, let's start with metal suppliers, rebar... No, no. I'm going Chinese. All I need is a 2 cubic metre capacity deep-fryer, 1.5 cubic metres of soya oil, 200 litres of batter and possibly 50 litres or so of sweet n' sour sauce. Oh, and about 250 kg of fried rice - anyone got a *really* big wok I can borrow? Are you sure, Arctic fox? You'll be hungry again in 30 minutes. But big specimens grew up to 2m. So plenty of shrimp for all, then. ...rather squamous & rugose to me. Altogether far too non-Euclidean. Must remember to avoid Emu Bay, especially when the stars are right. It wasn't discovered by Randolph Carter, was it?! Obviously the early humans killed it off. Obviously ancient Aussies REALLY knew how to put another shrimp on the barbie!! That's an awesome looking creature. God sure had a great imagination. ... he was on acid while he "intelligently designed" this one. fallible human artist's impression It was actually pink with stripes. You know, like modern shrimp. Apart from stuff getting smaller or dying out, has anything changed since before records began? Dunno about that great Cos somehow this marine animal died in the great flood along with the dinosaurs. Seems like a slightly ropey design to me. Well, if $DEITY did design it, he couldn't have done so 500 million years ago, because some followers of $DEITY believe that the earth was created only 6,000 years ago over a six-day period. Which explains why it is extinct, obviously: 6,000-year-old Aussies hunted it to extinction, no doubt mounted on dinosaurs with saddles – see the Creation “Museum” near Petersburg, Kentucky. I'd love to know why GaboonViper67's post got so many downvotes permanently on acid. It would explain pretty much everything. "I'd love to know why GaboonViper67's post got so many downvotes" These forums don't seem to get sarcasm. Even the upvoters might have been creationists? "Apart from stuff getting smaller or dying out, has anything changed since before records began?" Well, we're pretty confident that a lot has changed. We're just not sure what, as we have no records... Caris = claw It wasn't "turned up" at Kangaroo Island; the first specimens were found in the Burgess Shale decades ago. It was sometime in the 1980s that someone worked out that what had been classified as pieces of four distinct animals were in fact parts of the same animal. Sadly, the rules of scientific nomenclature required the complete thing to get the earliest name associated with any of the pieces-- which was an isolated weird-looking claw, thus "strange claw". It doesn't really do the thing justice. And those are hardly even the most interesting eyes in the Cambrian. Check out Opabinia, which has 5, count 'em, FIVE eyes, all on stalks. Also "it’s the first time a fossil has shown sufficient surface detail to prove that it had compound eyes"? I thought we already had plenty of trilobites showing evidence of compound eyes? They feature prominently in S.J. Gould's book on the Burgess Shale, "Wonderful Life". A good introductory read. Wonderful Life was a fantastic book!
The article reminded me a bit of a fun SF novel called Fragment by Warren Fahy, featuring terrestrial mantis shrimp. Yoiks! An old discovery, well described in existing literature There's a great, very accessible description of the discovery and analysis of the Burgess Shale fossils in "Wonderful Life: The Burgess Shale and the Nature of History" by Stephen Jay Gould. As well as Anomalocaris and Opabinia there are a host of other "alternative designs" which worked perfectly well in their environment, before changes led to a mass extinction. Wonderful Life is wonderful Indeed, that's the very book that got me out of my dinosaur phase and hooked on much more interesting paleontology. It's aged pretty well, too-- IIRC, the only part that's seriously wrong now is the section on Hallucigenia, which, thanks to some better-preserved fossils found in China in the '90s, turns out to be relatively normal-looking (at least as Cambrian fauna goes). Gould's Hallucigenia mistake Gould later published an article in Natural History in which he owned up to his mistake in "Wonderful Life." That article was later put between covers in "Bully for Brontosaurus", IIRC. But how did they taste? I would love to barbecue some shrimp steaks. Oh ya baby! They all taste like chicken. What was its alignment, and how many hit points did it have? Neutral Evil and lots It is clearly NE, with 120hp and AC 0. A roll of 20 is automatic death for the opponent when fighting the creature under water. Fireball the bastard Then tuck in. Fireball doesn't work under water. SLOW would probably work, but why bother? It's probably got magic resistance anyway. So cast haste on the fighters and get to the back ASAP. Of course this (very cool to know that we had such fucked up creatures in the past, as an aside) is probably a high-level adventure, so we will have an Assassin, in which case invisibility, airy water, haste and fly will do the trick. Is anyone else not surprised this was found in Australia? Australia is probably the continent that wants to kill humans the most. Real live CGI You can see 'em swimming around in David Attenborough's First Life (http://firstlifeseries.com/). Survival of the Fittest ... I'm coming to the conclusion that it's not survival of the fittest, but survival of the sort of fittest and most adaptable ... otherwise that would surely have been our ancestor, and I'd have better eyesight :) That's only because You have a poor conception of what 'fittest' means. Don't feel bad, almost everyone has the same misconceptions. There'd be no guarantee of better eyesight, but I'm pretty certain that grinding a 10000-facet corrective lens would cost a fair bit more than you might generally spend at an optician's... @a_been, You're wrong Animals tend to be Neutral (or I guess Unaligned in this new fangled 4th edition gubbins). I'll get my coat. It's the one with the dice Yes, but this is clearly some underwater creature that can fuck you up, and it is a rule that all underwater creatures that can fuck you up hate the sun, are NE and are intelligent in a way "no humanoid can understand". Just waiting now... ...for the low budget SyFy channel movie version "MegaShrimp Vs. (insert other animal name here)". Sci-Fi book on this beast already written Check out "Fragment" by Warren Fahy for what this guy could have evolved into given a few hundred million years of isolation. I see a movie in the works but they'll have to come up with a better title than "Shrimp". RE: "I see a movie in the works" "Claws" perhaps?
Or maybe "Resident Crustacean"? the Peacock shrimp a direct descendant today? The peacock mantis is a fierce predator with the fastest movement in the world. It has a huge hammer which it releases to kill prey with, it is said, the force of a .22 pistol. There are several videos on YouTube of this little s*cker attacking prey with its hammer. Some of the peacock mantises have a spear instead of a hammer. Talking of fierce predators: among the fiercest on earth today is probably the honey badger. Now THAT is a nasty little s*cker that even lions avoid. It attacks everything. Watch it on YouTube and you will understand. It's the same little animal as in the movie "The Gods Must Be Crazy". Really, really ferocious. The Badass Honey Badger If you want to watch a YouTube video of the honey badger, go to "Badass honey badger", which has a fairly amusing re-do of the narration dubbed in. Lovecraft would be proud. This is what came to mind when giant shrimp with claws were mentioned: Terror from the Deep Can it survive the damage from a hit by a Sonic Blasta Gun? Is it vulnerable to Molecular Control attacks? A newbie XCOM aquanaut
Effects of Prior Intensive Versus Conventional Therapy and History of Glycemia on Cardiac Function in Type 1 Diabetes in the DCCT/EDIC
- Saul M. Genuth1,
- Jye-Yu C. Backlund2,
- Margaret Bayless3,
- David A. Bluemke4,
- Patricia A. Cleary2,
- Jill Crandall5,
- John M. Lachin2,
- Joao A.C. Lima6,
- Culian Miao6,
- Evrim B. Turkbey4,
- for the DCCT/EDIC Research Group*
- 1Case Western Reserve University, Cleveland, Ohio
- 2The Biostatistics Center, The George Washington University, Rockville, Maryland
- 3University of Iowa, Iowa City, Iowa
- 4National Institutes of Health, Bethesda, Maryland
- 5Albert Einstein College of Medicine, Bronx, New York
- 6Johns Hopkins University, Baltimore, Maryland
- Corresponding author: Saul M. Genuth.
Intensive diabetes therapy reduces the prevalence of coronary calcification and progression of atherosclerosis and the risk of cardiovascular disease (CVD) events in the Diabetes Control and Complications Trial (DCCT)/Epidemiology of Diabetes Interventions and Complications (EDIC) study. The effects of intensive therapy on measures of cardiac function and structure and their association with glycemia have not been explored in type 1 diabetes (T1DM). We assessed whether intensive treatment compared with conventional treatment during the DCCT led to differences in these parameters during EDIC. After 6.5 years of intensive versus conventional therapy in the DCCT, and 15 years of additional follow-up in EDIC, left ventricular (LV) indices were measured by cardiac magnetic resonance (CMR) imaging in 1,017 of the 1,371 members of the DCCT cohort. There were no differences between the DCCT intensive and conventional treatment groups in end diastolic volume (EDV), end systolic volume, stroke volume (SV), cardiac output (CO), LV mass, ejection fraction, LV mass/EDV, or aortic distensibility (AD). Mean DCCT/EDIC HbA1c over time was associated with EDV, SV, CO, LV mass, LV mass/EDV, and AD. These associations persisted after adjustment for CVD risk factors. Cardiac function and remodeling in T1DM assessed by CMR in the EDIC cohort was associated with prior glycemic exposure, but there was no effect of intensive versus conventional treatment during the DCCT on cardiac parameters.
Cardiovascular disease (CVD) is a major complication of type 1 diabetes (T1DM) (1) and, in relative terms, an even greater risk than in type 2 diabetes (2,3). T1DM increases the risk of CVD independent of other common risk factors (4), and these CVD complications have a large impact on mortality and morbidity (5–7); the risk of death from coronary artery disease (CAD) is increased 9–29 times in women and 4–9 times in men with T1DM compared with nondiabetic individuals (1). The prevalence of left ventricular (LV) hypertrophy on electrocardiogram (ECG) is increased threefold (8). Hyperglycemia has been associated with CVD in some (9,10), but not all, studies (11,12). In the Diabetes Control and Complications Trial (DCCT)/Epidemiology of Diabetes Interventions and Complications (EDIC) study, the observed association between glycemia and CVD events was partially mediated through the effect of glycemia on nephropathy (10). DCCT/EDIC also found correlations of glycemic levels with measures of atherosclerosis, such as carotid intima-media thickness (IMT) (13) and coronary artery calcification (CAC) (14).
More importantly, the DCCT/EDIC study has reported that intensive compared with conventional treatment during the DCCT was associated with a 57% reduction (95% CI 12–79, P = 0.02) in a composite CVD outcome of nonfatal myocardial infarction (MI), stroke, or cardiovascular death from baseline DCCT through 11 years of EDIC (10). Nephropathy is the single greatest risk factor previously identified for CVD and CAD (15), increasing the incidence of CVD 8–10-fold and of mortality sevenfold compared with diabetic patients without nephropathy (16). Even without nephropathy, people with T1DM have an increased incidence of CVD (15,16), and poor glycemic control predicts coronary heart disease events (17). Although congestive heart failure (CHF) frequently followed MI in T1DM (18) before current improvements in the care of MI, the clinical significance of basal cardiac dysfunction in T1DM individuals has yet to be determined. As improved prevention and management of CAD continue to extend life expectancy, CHF may emerge as a more frequent and life-threatening complication of T1DM. To determine whether intensive therapy in the DCCT also affected cardiac function and remodeling (the ratio of LV mass to end diastolic volume [EDV]), we have measured LV functional parameters and remodeling by cardiac magnetic resonance (CMR) imaging, which is accepted as the gold standard (19,20), in 1,017 DCCT/EDIC patients with T1DM.
RESEARCH DESIGN AND METHODS
The DCCT and EDIC studies have been described previously in detail (21,22). Between 1983 and 1989, 1,441 patients (13–39 years of age) with T1DM were randomly assigned to intensive versus conventional insulin therapy to compare the effects on the development and progression of microvascular complications. At baseline (1983–1989), all patients were free of a history of CVD, hypertension, and hypercholesterolemia. DCCT participants were recruited into a primary prevention cohort (1–5 years diabetes duration and no retinopathy or microalbuminuria at baseline) or into a secondary intervention cohort (1–15 years duration, minimal to moderate retinopathy, and ≤200 mg/24 h albuminuria at baseline). After 6.5 years of randomized intervention, retinopathy, nephropathy, and neuropathy were significantly reduced, by ∼50%, by intensive treatment. EDIC began in 1994 as a prospective observational follow-up of the DCCT cohort. At the time of CMR (EDIC years 14–16), 1,301 participants (94% of 1,371 survivors) across 28 clinics were active (Fig. 1). Of these, 1,259 participants (97%) were eligible for the CMR study, and of these, 1,122 (89%) gave informed consent. An additional 94 participants were excluded: 53 (4%) due to claustrophobia, 9 (0.7%) with metallic foreign bodies, 5 (0.4%) with body weight that exceeded the capacity of the scanner, and 27 (3%) who did not complete the examination for other reasons. Eleven (0.9%) uninterpretable MR scans were further excluded, resulting in a diagnostic CMR for 1,017 participants (74% of those surviving and 81% of those screened). During DCCT, participants had an annual medical history and physical examination, electrocardiography, and laboratory testing for fasting lipid levels, serum creatinine, urinary albumin excretion rate (AER), and other risk factors for CVD (21). Glycated hemoglobin (HbA1c) values (23) were measured quarterly during DCCT (21). Hypertension was defined as blood pressure (BP) ≥140/90 mmHg or use of antihypertensive medications (21,22). Hypercholesterolemia was defined as LDL levels ≥130 mg/dL or use of lipid-lowering medication (21,22).
During EDIC, annual medical histories were obtained, physical exams were performed, and HbA1c levels and serum creatinine were measured. Lipid profiles and urinary AER were measured in alternate years (22). BP was measured in the right arm, with the arm flexed slightly and the forearm supported at heart level, using a standardized protocol by trained and certified study nurse coordinators; mercury manometers were used during the DCCT and aneroid manometers during EDIC. Overall time-weighted means of the BP measurements, taken every 3 months during DCCT and every 12 months during EDIC, were used in the analyses of risk factors. Weighted mean laboratory values over the study duration were computed with weights proportional to the time interval between values. ECGs were obtained at baseline, every 2 years during DCCT, at closeout of DCCT, and annually during EDIC. ECGs were centrally read and classified using the revised Minnesota code (24,25).
Assessment of CVD events and other diabetes complications. Complications were assessed cumulatively from DCCT entry to the current study. CVD includes nonfatal MI (clinical MI), silent MI (ECG diagnosed), revascularization (angioplasty or bypass), confirmed angina, nonfatal cerebrovascular event, CHF (ascertainment starting in 2007, EDIC year 13), and cardiovascular death (10). CVD events were adjudicated based on medical records, ECG findings, and cardiac enzyme levels, masked to DCCT treatment group assignment, HbA1c, and glucose levels. CHF was defined as at least one symptom from the following two categories: category A, paroxysmal nocturnal dyspnea, dyspnea at rest, or orthopnea; category B, marked limitation of physical activity caused by heart disease (patients are comfortable at rest, but less than ordinary physical activity causes fatigue, shortness of breath, palpitations, or anginal pain; New York Heart Association Functional Classification III). Proliferative diabetic retinopathy, sustained microalbuminuria or macroalbuminuria at any two consecutive visits, or end-stage renal disease (ESRD) were as previously defined (26). Neuropathy included cardiac autonomic neuropathy (CAN) as previously defined (26).
Participants underwent CMR with 1.5 or 3.0 Tesla magnets using the same standard protocol (27) at each site. In brief, a stack of short-axis images covering the entire left ventricle was acquired to determine LV mass, volumes, and function (temporal resolution ≤50 ms). All CMR studies were evaluated at the Johns Hopkins core reading center by readers masked to risk factor information. The endocardial and epicardial myocardial borders were contoured using a semiautomated method (QMASS version 6; Medis, Leiden, the Netherlands). LV mass was calculated as the difference between epicardial and endocardial areas summed over all slices, multiplied by the slice thickness plus slice gap, and then multiplied by the specific gravity of the myocardium (1.04 g/mL). Papillary muscles included in the LV cavity were excluded from LV mass. Reread of 100 CMR scans revealed intraclass correlations ranging from 0.917 to 0.978, and the relative technical errors of measurement of the mean were 4.5 and 3.2% for LV mass and volume, respectively (27). ECG-gated phase-contrast cine images of the ascending thoracic aorta were obtained in the axial plane at the level of the right pulmonary artery. Minimum and maximum cross-sectional areas were determined using QFLOW software (version 5.1; Medis).
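To make the slice-summation arithmetic concrete, the following minimal sketch (Python) mirrors the LV mass calculation described above. It is illustrative only, not the study's QMASS software, and all slice areas and dimensions are hypothetical values chosen for the example:

# Sketch of the LV mass computation described in the text (hypothetical data).
# Myocardial area per slice = epicardial area - endocardial area; summing over
# slices and multiplying by the inter-slice spacing gives myocardial volume,
# which the specific gravity of myocardium converts to mass.

epicardial_areas_cm2 = [30.5, 38.2, 41.5, 40.1, 36.8, 30.4]   # one value per short-axis slice
endocardial_areas_cm2 = [14.8, 22.1, 25.0, 24.2, 20.9, 15.3]

slice_thickness_cm = 0.8
slice_gap_cm = 0.2
spacing_cm = slice_thickness_cm + slice_gap_cm   # thickness plus gap, per the protocol

SPECIFIC_GRAVITY_MYOCARDIUM = 1.04   # g/mL; 1 cm^3 of tissue = 1 mL

myocardial_volume_ml = sum(
    (epi - endo) * spacing_cm
    for epi, endo in zip(epicardial_areas_cm2, endocardial_areas_cm2)
)
lv_mass_g = myocardial_volume_ml * SPECIFIC_GRAVITY_MYOCARDIUM
print(f"LV mass ~ {lv_mass_g:.1f} g")   # ~99.0 g for these made-up slices

With these made-up areas the result is about 99 g; the point is only the order of operations (sum of area differences, times spacing, times 1.04 g/mL), not the numbers.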
Ascending thoracic aortic distensibility (AD) was calculated using a validated formula (28,29): AD = (maximum area − minimum area)/[(minimum area) × ΔP], where ΔP is the pulse pressure obtained by subtracting diastolic BP (DBP) from systolic BP (SBP). The average of two supine BP measurements taken by a standardized protocol immediately before and after the CMR examination on the scanner gantry was used as the final BP measurement.
Group differences were assessed using Wilcoxon rank sum tests for quantitative variables and χ2 tests or Fisher exact test for categorical variables. The difference between groups was assessed using the log-rank test for event times (30), logistic regression for prevalences (30), and a normal errors linear regression model for the least squares mean (LSM) of a quantitative outcome (31). Eight cardiac outcomes were evaluated according to the intention-to-treat principle (EDV, end systolic volume [ESV], stroke volume [SV], cardiac output [CO], LV mass, ejection fraction [EF], LV mass/EDV, and AD). The natural log transformation was used for AD, and its geometric mean is presented. LSM values within groups were obtained from a multivariate linear regression model (31). Treatment group differences were assessed in models minimally adjusted for CMR machine type, age, sex, height, weight, and study cohort. Treatment group differences nested within the levels of other factors were obtained using interaction terms in the model. Additional models included traditional cardiovascular risk factors: a history of ever smoking, the weighted mean SBP, LDL, and HDL. The weighted mean values allowed for differences in the frequency of measurements during DCCT and EDIC up to the EDIC year of the CMR study or immediately prior to CMR (30). Backward elimination of traditional risk factors was used to examine how each risk factor affected the relationship of cardiac outcomes with glycemic exposure. Treatment groups were also compared in multivariate analyses of the set of cardiac outcomes simultaneously, using the first principal component (which explained 42% of the variation), an O'Brien weighted least squares summary statistic (32), and a multivariate mixed model assuming a compound symmetry covariance structure (33). A “worst rank analysis” (34) was conducted in which a subject with a missing CMR examination but a prior CVD event was assigned a tied rank worse than that of any patient with a measured CMR. For this and the O’Brien analysis, lower values of SV, CO, EF, or AD and higher values of the other variables were considered worse. A multivariate Wei-Lachin test of stochastic ordering was then applied (35). These analyses also used minimally adjusted values. All analyses were performed using SAS software (version 9.2; SAS Institute, Cary, NC). P values <0.05 were considered statistically significant.
RESULTS
Participants and nonparticipants in CMR scanning had similar DCCT baseline characteristics, except that the nonparticipants included more smokers (23.4 vs. 16.4%) and had higher HbA1c (9.2 vs. 8.8%), higher triglycerides (87 vs. 79 mg/dL), and a greater prevalence of microalbuminuria (13.9 vs. 9.6%) (Table 1). No treatment group differences in baseline characteristics were observed among either the participants (Table 1) or the nonparticipants (data not shown). At the time of CMR, the participants had a mean age of 49 years and a mean diabetes duration of 28 years and were 48% female (Table 2). Sixty-seven CMR participants had previous CVD events.
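As a brief computational aside, here is a worked sketch of the distensibility formula given above in Python; the areas and blood pressures are hypothetical illustration values, not study data:

import math

# AD = (Amax - Amin) / (Amin * ΔP), with ΔP = SBP - DBP (pulse pressure).
max_area_mm2 = 720.0   # maximum aortic cross-sectional area over the cardiac cycle
min_area_mm2 = 650.0   # minimum aortic cross-sectional area
sbp_mmHg = 118.0       # systolic BP from the gantry measurements (hypothetical)
dbp_mmHg = 70.0        # diastolic BP (hypothetical)

pulse_pressure_mmHg = sbp_mmHg - dbp_mmHg        # ΔP = 48 mmHg here
ad = (max_area_mm2 - min_area_mm2) / (min_area_mm2 * pulse_pressure_mmHg)
print(f"AD ~ {ad * 1e3:.2f} x 10^-3 mmHg^-1")    # ~2.24 x 10^-3 mmHg^-1

# Per the statistical methods above, the analyses model the natural log of AD.
log_ad = math.log(ad)

Note that the areas enter only as a ratio, so their units cancel and AD carries units of inverse pressure (per mmHg).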
HbA1c levels at the visit prior to the CMR were similar in the original intensive and conventional treatment groups. However, the mean HbA1c over the entire DCCT/EDIC study period was significantly lower in the original intensive group compared with the conventional group (7.7 ± 0.9 vs. 8.3 ± 0.9%, P < 0.0001). DCCT conventionally treated participants had a greater prevalence of retinopathy, nephropathy, and neuropathy compared with intensively treated participants. Conventional group participants had greater common carotid IMT and CAC scores and had a higher, though not statistically significant, incidence of CVD events than the intensive group participants. About half of both groups were taking ACE inhibitor/angiotensin receptor blocker (ARB) drugs, but their HbA1c levels were similar (Supplementary Table 1).
Cardiac function by DCCT original treatment group and prior CVD history. Table 3 compares LSM from minimally adjusted intention-to-treat analyses of the eight cardiac parameters by treatment group and by presence/absence of prior CVD. Treatment group differences were not significant for any cardiac function parameters. Among the majority of participants (950) who had not experienced a CVD event, there likewise were no differences between the two treatment groups. However, those with prior CVD had significantly higher LV ESV (57.7 vs. 52.5 mL) and LV mass (145.5 vs. 137.5 g), lower EF (59.8 vs. 61.8%), and a trend toward higher EDV.
Cardiac function in relation to glycemic exposure. The complete period of prior glycemic exposure, as measured by the mean DCCT/EDIC HbA1c, had a stronger effect (largest estimate and smallest P value) on most cardiac function measures than the mean DCCT HbA1c or mean EDIC HbA1c (Table 4). EDV, SV, and AD were negatively associated with mean HbA1c, whereas CO, LV mass, and the LV mass/EDV ratio were positively associated with mean HbA1c, in all three HbA1c periods. For example, EDV decreased by 2.61 mL per 1% absolute increase of mean DCCT/EDIC HbA1c (P = 0.0003), whereas LV mass increased by 2.68 g per 1% increase of mean DCCT/EDIC HbA1c (P = 0.0002). These associations were unchanged after adjusting for a history of hypoglycemia manifested by coma or seizure (none, one to five, and more than five prior episodes). Addition of either BMI as a continuous variable or obesity (BMI ≥30) as a categorical variable in place of height and weight did not change the significance of the associations of DCCT/EDIC HbA1c with any of the cardiac parameters. Notably, there was no significant association between the important cardiac functional parameters EF or ESV and the mean HbA1c during DCCT, EDIC, or DCCT/EDIC. Additional models also assessed the effect of mean HbA1c during EDIC periods ranging from 1 to 12 years prior to the CMR. No trend was observed except for LV mass, for which the HbA1c over the prior year had no significant association (estimate = 0.64 g per HbA1c %), but the estimate increased to 2.09 g per HbA1c % over 12 years (data not shown). After further adjustment for traditional CVD risk factors (ever smoking, mean SBP, LDL, and HDL), the DCCT/EDIC HbA1c remained significantly associated with EDV, SV, the LV mass/EDV ratio, and AD, whereas the association was diminished for CO and LV mass (Table 5). The traditional risk factors were also significantly associated with some parameters even after adjustment for HbA1c. Additional models also adjusted for either a history of microalbuminuria or macroalbuminuria.
Of the four parameters significantly associated with HbA1c in Table 5, microalbuminuria and macroalbuminuria were significantly associated with LV mass/EDV, and macroalbuminuria was significantly associated with log(AD). In each case, the effect of the mean HbA1c remained significant after adjustment for albuminuria. The effect of the mean HbA1c on all cardiac functions remained the same as shown in Table 5 after further adjustment for use of antihypertensive and lipid-lowering medications.
Multivariate analyses of all cardiac functions in aggregate. The DCCT treatment groups were also compared using all eight CMR characteristics jointly in multivariate analyses, and in worst rank analyses in which subjects without the CMR but with a history of CVD event(s) were assigned a worse rank than subjects with the CMR assessments (see RESEARCH DESIGN AND METHODS). No significant treatment group differences were observed.
DISCUSSION
This is the first report of cardiac function and remodeling parameters and their response to different previous glycemic strategies in a large cohort of patients with T1DM. The EDIC cohort provides detailed phenotypic, CVD, and biochemical data over a 22-year period, which included an initial 6.5-year period of randomly assigned intensive or conventional treatment of hyperglycemia. During DCCT, mean HbA1c levels were 7.4 and 9.1% in the intensive and conventional treatment groups, respectively. During most of the EDIC observational follow-up (36), glycemic management was by community health caregivers and resulted in similar mean HbA1c levels of ∼8.0% in both groups. Previous studies have demonstrated persistent effects of the original DCCT treatment assignment on microvascular outcomes (36,37), neuropathic outcomes (38–40), subclinical atherosclerosis (carotid IMT and CAC scores), and cardiovascular events (10). Based on these consistent observations, there was a reasonable expectation that we might also observe differences in cardiac parameters between the former DCCT intensive and conventional groups. However, the current results do not support an effect of prior intensive versus conventional treatment on the measured cardiac function/structure parameters. After 6.5 years of glycemic separation and 15 years of further observation, during which the glycemic separation dissipated, no long-term effect of intensive treatment was detected on any of the cardiac parameters or AD. All measured volumes, CO, EF, LV mass, and the LV mass/EDV ratio were similar between intensive and conventional groups. In contrast, the mean DCCT, EDIC, and DCCT/EDIC HbA1c values were significantly related to myocardial structure and function. Both SBP and smoking were also nominally associated with CO, LV mass, and LV mass/EDV, but the DCCT/EDIC mean HbA1c remained significant only for LV mass/EDV after adjustment for these factors. In addition, mean HbA1c remained significantly associated with some CMR parameters after adjustment for either prior microalbuminuria or macroalbuminuria. There are at least three possible reasons for the lack of consistently different effects of the DCCT intensive and conventional treatments on CMR functional and structural parameters. First, unlike every other complication of T1DM we have measured, intensive therapy during the DCCT may not have conferred benefit on cardiac function. Second, there may have been a benefit of prior intensive therapy on cardiac function during the DCCT that had dissipated by the time cardiac MRI was performed during EDIC year 15.
Third, T1DM may not have an adverse effect on the heart, except via coronary atherosclerosis. This seems unlikely, given recent evidence that hospital admission for CHF in T1DM was increased ∼30-fold over a similar age-group of nondiabetic people around age 40 (41). Moreover, this incidence was increased 30% for each 1% increment in HbA1c, independent of ischemic heart disease or prior MI (41). Possible intrinsic microvascular disease has been suggested by a deficit in myocardial energy production, shown by CMR and 31P spectroscopy in young T1DM individuals without apparent CAD and correlated with HbA1c (42). In the absence of CMR measurements at the close of the DCCT and periodically during EDIC, it is impossible to distinguish between the first two alternatives. Other data suggest that any beneficial effects of intensive therapy during the DCCT on retinopathy (37) and progression of carotid IMT (43) may have waned over time. Moreover, the CMR studies reported here were performed 15 years after DCCT treatment was discontinued, whereas benefits of prior intensive therapy were last documented for microvascular outcomes at 10 years post-DCCT (37,38), carotid IMT at 12 years (43), CAC at 8 years (14), and CVD events at 11 years (10). The mean values of the EDIC cardiac parameters after 27 years duration of T1DM are within the ranges reported in the Multi-Ethnic Study of Atherosclerosis (MESA) study of CMR in individuals without diabetes or CVD events at an age of about 50 years, similar to our EDIC subjects (44). They are also similar to the ranges of these cardiac parameters measured by echocardiography in normal individuals (45). Another recent echocardiographic study has shown no significant differences in cardiac parameters between a well-controlled T1DM group of 9 years duration and an age- and sex-matched control group (46). Furthermore, the adverse directional changes for each absolute 1% increase in HbA1c are modest and of little clinical significance. Moreover, the important mean EF values in our study were solidly normal, and EF was not associated with HbA1c. These observations may bode well for T1DM individuals of 27 years' duration with regard to future risk of CHF, so long as their glycemic control does not deteriorate. Whether the small number of participants with EF <50% are at greater future clinical risk of CHF can only be determined by further long-term follow-up. From EDIC year 13 on, only one individual had suffered CHF by the time of the CMR. Five further cases have been reported since then. Interestingly, however, our data demonstrate that glycemia is a significant factor affecting cardiac function. The mean HbA1c over DCCT and EDIC (22 years) was positively correlated with LV mass and LV mass/EDV and negatively correlated with EDV and SV. These observations indicate that cardiac function and LV remodeling are influenced by prolonged elevated HbA1c levels. These correlations would signify impairment of cardiac function secondary to long-term glycemic exposure, although one that is very slight and not clinically significant. The paradoxical negative association between HbA1c and SV but positive association between HbA1c and CO may be partly explained by the positive association between HbA1c and heart rate (Spearman correlation coefficient 0.18 for current HbA1c and 0.23 for weighted mean DCCT/EDIC HbA1c, P < 0.005 for both). The impact of HbA1c on AD may reflect cross-linking of glycated collagen by advanced glycation end products (47) in arterial walls (48).
The results of CMR studies may differ from measures of subclinical atherosclerosis and/or CVD events if factors affecting cardiac muscle function are distinct from those that modulate atherosclerotic CVD (42). However, traditional CVD risk factors for atherosclerosis (age, sex, smoking status, lipids, and hypertension) also strongly influenced cardiac size and function in our cohort. Moreover, the presence of nephropathy (macroalbuminuria) was the strongest determinant of LV mass and was also associated with LV concentric remodeling. It also seemed possible that any putative beneficial effects of intensive therapy on the heart were confounded by a deleterious effect, such as the threefold increase in hypoglycemia that accompanies intensive therapy (26). Hypoglycemia is known to have an adverse effect on the heart (49). However, adjustment for episodes of severe hypoglycemia did not change the relationship between glycemic exposure and cardiac function. Only 74% of the survivors of the DCCT/EDIC cohort completed the cardiac MRI. Reasons for nonparticipation included inability to tolerate the CMR procedure and contraindications to CMR, such as metallic implants. Nonparticipants had somewhat higher mean HbA1c at DCCT baseline than those who did participate (9.2 vs. 8.8%) and a slightly worse CVD profile, suggesting that those who might have benefited the most from intensive therapy could have been excluded from the CMR analyses. Finally, among subjects who underwent CMR, fewer intensive than conventional therapy subjects had a previous CVD event (5.25 vs. 7.95%). However, this small difference would be expected to lead to a difference in cardiac parameters in favor of the intensive group. Likewise, among those who did not complete the CMR, 60 had a history of a CVD event, again with fewer patients in the intensive than the conventional group (25 vs. 35). If these subjects had been measured with CMR, it is possible that the larger number in the conventional group could have led to group differences in some outcomes. Thus, further analyses were conducted including these 60 subjects with a prior CVD history, assigning them worse rank scores than those of subjects who completed the CMR. However, these analyses failed to demonstrate a benefit of intensive therapy on cardiac function. Other clinical factors that could have influenced our findings are obesity, CAN, and the use of renin-angiotensin system modulators. Obesity did not differ between the two treatment groups at the time of CMR, nor did it contribute to the association of cardiac parameters with HbA1c. When CAN was added to the HbA1c models, it was significantly associated with CO, LV mass, and AD; however, the associations with HbA1c were largely unchanged (Supplementary Table 2). The use of ACE inhibitors/ARBs was not significantly associated with HbA1c. The frequent presence of hypertension, dyslipidemia, and obesity could have influenced LV mass, EF, and LV mass/EDV. That these risk factors were not more frequently abnormal may be a testament to how adherent our participants have been to prescribed antihypertensive drugs and statins, as reflected in their mean SBP (118 mmHg), DBP (70 mmHg), LDL cholesterol (110 mg/dL), and HDL cholesterol (54 mg/dL). Certain limitations to this study should be noted. The CMR participants were self-selected from the DCCT/EDIC cohort, which is a largely Caucasian, research-minded group (50) with above-average education (51), so our results may not be generalizable to the entire T1DM population.
Participants with estimated glomerular filtration rate (eGFR) <60 mL/min were excluded from the gadolinium injection portion of the CMR procedure for safety reasons, to prevent nephrogenic systemic fibrosis, but virtually all of these subjects underwent the rest of the CMR. Peripheral brachial BP was used to calculate AD, rather than central arterial BP, but this calculation has been used before in other studies (52–54). Diastolic dysfunction was not assessed, although this impairment has been found in T1DM individuals (55,56). We were also unable to recruit our own age- and sex-matched normal subjects, but we have compared our T1DM results with those of normal subjects in the literature. Fifteen years after the cessation of DCCT randomized glycemic treatment, there was no observable beneficial effect of intensive treatment of T1DM on cardiac function or remodeling assessed by CMR in the EDIC cohort. However, a significant association between some cardiac parameters and glycemic exposure was observed. Continued long-term follow-up of the DCCT/EDIC cohort will be necessary to discern whether these MRI measurements predict clinically relevant CHF.
The DCCT/EDIC project is supported by contracts with the Division of Diabetes, Endocrinology, and Metabolic Diseases of the National Institute of Diabetes and Digestive and Kidney Diseases (NIDDK), the National Eye Institute, the National Institute of Neurological Disorders and Stroke, the General Clinical Research Centers Program and the Clinical and Translational Science Centers Program, the National Center for Research Resources, and Genentech through a Cooperative Research and Development Agreement with the NIDDK. Industry contributors have had no role in the conduct of EDIC but have offered free or discounted supplies or equipment as a thank-you to participants: Abbott, Animas, Aventis, Bayer, Becton Dickinson, CanAm, Eli Lilly and Company, LifeScan, Medtronic Diabetes, MiniMed, Omron, OmniPod, Roche, and Sanofi. No other potential conflicts of interest relevant to this article were reported.
The writing committee for the DCCT/EDIC study included S.M.G. (chair), J.-Y.C.B., M.B., D.A.B., P.A.C., J.C., J.M.L., J.A.C.L., C.M., and E.B.T. S.M.G. drafted and edited the manuscript and researched and interpreted the data. J.-Y.C.B. analyzed the data statistically and drafted the manuscript. M.B. reviewed and edited the manuscript. D.A.B. analyzed the CMR data and reviewed and edited the manuscript. P.A.C. collected and organized the data and reviewed the manuscript. J.C. and C.M. reviewed and discussed the manuscript. J.M.L. analyzed the data statistically and reviewed and edited the manuscript. J.A.C.L. and E.B.T. analyzed the CMR data and reviewed the manuscript. P.A.C. is the guarantor of this work and, as such, had full access to all the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis.
The authors acknowledge the data processing and technical support of Wanyu Hsu and the technical assistance of Mary Hawkins at The Biostatistics Center, The George Washington University.
This article contains Supplementary Data online at http://diabetes.diabetesjournals.org/lookup/suppl/doi:10.2337/db12-0546/-/DC1.
*A complete list of the members of the DCCT/EDIC Research Group can be found in the Supplementary Appendix published in the New England Journal of Medicine, 2011;365:2366–2376.
A list of the participating radiologists and technologists can be found in the Supplementary Material published in Circulation, 2011;124:1737–1746. See accompanying commentary, p. 3329. - Received May 1, 2012. - Accepted March 15, 2013. - © 2013 by the American Diabetes Association. Readers may use this article as long as the work is properly cited, the use is educational and not for profit, and the work is not altered. See http://creativecommons.org/licenses/by-nc-nd/3.0/ for details.
By Daniela Gioseffi --for my husband in his 83rd year We don't talk about "Going to a better place." We don't believe in ghosts or heaven-- not hell either! We feel we will relinquish our little ego and be nothing at all discernible --simply energy into a wondrous universe as "E equals MC squared." We stand in awe of the mystery which brought all from a hydrogen molecule. We joke about "croaking." We like the word "croak." It sounds both raw and funny like "Kick the bucket?" We wonder at that phrase, its etymology and idiom. Unlike Hollywood shows of dying as entertainment, we don't think of funerals as having a "home," and we don't talk of being "laid to rest." We don't consider which clothes we will dress in for death or, how we'll perfume it with flowers. We know it's coming, and meantime we try not to let it get the best of us. We know we're not only among the living, but living among the dying. We want to bloom all the way open to truth Which is beauty, exactly as Keats said. Yet, growing old is like being punished for a crime we didn't commit. "I have no mystery," replies death. "I'm simply the absence of life." Religions with their heavenly paradise seem like drugs, but a truer opiate is understanding that nothing awaits us No feeling at all. The solace of knowing all our humiliations sufferings and pain, illness and bondage to duty and labor are finished. Sheer joy will not remain, merely ashes --the dust of energy expended in reproducing life for others is what's left. We don't fear death, but dying slowly in pitiless pain, hearing, tasting choking breath, cool doctors and polite nurses, hospital smells and fitful sleep too incomplete. In death, our freed bodies will float out of our brains All we were will be left to others to decide or remember or forget. Those we've birthed into being will go on after us for quite a while, we hope, And though we don't want to die, We don't agonize about staying alive. Let's energize the present, and live now at last. No one else can accomplish our death for us. It is entirely a personal matter. Death is only fearsome to the young. "We are bare ruined choirs Where late the sweet birds sang," Those songs live in our memories until we're memorialized by their melodies. So, I will not wish you long years ahead for that's not necessarily kindness. Friend of so many years, may you die when you want to, no sooner and no later. © 2003 by Daniela Gioseffi About Daniela Gioseffi Daniela Gioseffi is an American Book Award-winning author of 12 books of poetry and prose, and a retired professor who lives in New York City. Her first book of poems, Eggs in the Lake (BOA Editions: Rochester, NY, 1979), won her a New York State Council on the Arts grant. Her second and third collections, Word Wounds and Water Flowers and Going On, were published by VIA Folios at Purdue University, and her latest, Symbiosis (2002), is an e-book from Rattappalax.com, NY. Her work appears widely in major literary magazines, on and off line, from The Cortland Review to The Paris Review, Chelsea, Antaeus, The Nation, Prairie Schooner, Poetry East, Hungry Mind, and American Book Review, to name a few. Her poetry is inscribed in marble on the wall of the newly renovated Penn Station's Seventh Avenue Concourse. Daniela edits www.PoetsUSA.com. Her renowned Women On War: International Writings (Touchstone/Simon & Schuster) was reissued in an all-new edition by The Feminist Press, NY, 2003.
She also edited On Prejudice: A Global Perspective (Anchor/Doubleday, 1993), which won a Ploughshares Award for World Peace and was presented at the United Nations. Daniela has given hundreds of readings throughout the U.S. and Europe, and for NPR, BBC radio, and many other television and radio stations. She won a PEN Short Fiction Award. Her volume of short stories and a novella was titled The Exotic Enemy. She has taught world literature and creative writing and invented a course titled "Tolerance Teaching Through Multicultural Literature." She taught at New York University's Publishing Institute, Brooklyn College of the City University of New York, and Long Island University, among other institutions, and is now retired with her husband, Lionel B. Luttinger, a Doctor of Chemistry formerly of Yale University. Daniela is 63 and Lionel is 83; the retired, freethinking couple have six grown children: a mathematician, computer graphic artist, musician, environmental scientist, and biologist--all freethinkers.
You don’t write good, you write well. Like that opening line, the title of this book is not grammatically incorrect; it’s whimsical. (Okay… that was a pathetic attempt at an inside joke.) The name of John Vorhaus’ How To Write Good is pretty damn self-explanatory, so I’m not going to tell you that it’s a book about how to write well. It’s always trickier for me to review a ‘How To’ kind of book. I generally look for decent plot and characters, which, for obvious reasons, one cannot do in this sort of literature. One reviews guides like these by how well they communicate their subject matter. How To Write Good is a bit fifty-fifty for me, maybe because I was expecting something different. I expected it to be about the intricacies of writing, like how to form plots and characters, and different writing formulas. What I got was a lot more… practical. And on some strange level, the knowledge one receives from this book is more useful than thirty volumes’ worth of writing formulas and complicated concepts and the weird stripes on a zebra or any of that other jazz. It tells you one simple and effective thing. Just write, for god’s sake. Find a quiet place, dedicate a particular number of hours, squash those excuses, and write. Which is pretty sound advice, if you ask me. It talks about things like ‘pivots’ and gives you simple exercises to help you write. It tells you how to get over writer’s block, and it tells you to shut it and take criticism for what it is: not praise or insults, just critique. In my opinion, it’s a handy little thing for a first-time writer, and even seasoned wordsmiths should read it, just to get their heads out of the clouds and remember why they’re writing.
As the clock ticks closer to the city primaries on Tuesday, September 10, The Courier would like to provide you, the reader and the voter, with a fair, detailed guide to who is running. Here is a list of the candidates in City Council District 19 (College Point, Auburndale-Flushing, Bayside, Whitestone, Bay Terrace, Douglaston and Little Neck): who they are, what they stand for and what they want to continue to do if they go on to the general election in November.
Current Occupation: Father, consumer advocate, attorney
Personal Info: John Duane was born and raised in northeast Queens, where he also raised his three children and has lived his whole life. He knows better than anyone the issues facing the community. As a state assemblymember, Duane wrote 22 bills that became law, including the Vietnam Veterans Tuition Assistance Law. As an assistant attorney general, he took on ConEd and won $30 million in refunds for taxpayers. In his law practice, Duane fights deceptive credit counselors and has won $250 million in judgments for victims of fraud.
Issues/Platforms: Duane knows that a government cannot function properly without the trust and involvement of its citizenry. He will be a full-time city councilmember and has proposed a comprehensive “Integrity Plan” to regain public trust that includes full transparency and creating a discretionary spending oversight board. Duane’s other priorities include fighting overdevelopment and keeping small businesses thriving by not letting the city target them unfairly as a source of revenue. Duane will use his office to improve public education and increase parental involvement in our schools. He has also made providing services to our seniors and veterans a top priority.
Current Occupation: Urban planning/historic preservation consultant
Personal Info: Graziano, 42, is a lifelong resident of North Flushing and the son of two CUNY professors. Educated at P.S. 21, I.S. 227, Bronx H.S. of Science, the University of Massachusetts-Amherst (BA, Comparative Literature) and Hunter College (MS, Urban Affairs), he is marrying his fiancée, Elzbieta, in September.
Issues/Platforms: For two decades, Graziano has tirelessly protected the 19th Council District from overdevelopment, including successfully downzoning every neighborhood; creating the R2A “anti-McMansion” zone; placing 1,330 buildings in Broadway-Flushing on the National Register of Historic Places; getting Douglaston Hill and the Schleicher and Ginsburg mansions landmark designation; and helping win the fight to turn Fort Totten into a public park and historic district when it was slated to be sold to developers. Graziano’s work also focuses on education reforms, including ending mayoral control of the Department of Education; replacing top-down “teaching to the test” decision-making with local teachers and parents deciding their children’s future; reinstituting art, music, after-school activities and tutoring; protesting against the DOE’s proposed school facility at Keil Brothers on 48th Avenue in July; and standing in solidarity with teachers, parents and students against an abusive principal at P.S. 29 in August.
Current Occupation: Full-time candidate for City Council
Personal Info: Austin Shafran was born and raised in Bayside, where, committed to public service, he worked tirelessly for Congressmember Gary Ackerman and then for Governor Andrew Cuomo to deliver a better and brighter future for the northeast Queens neighborhoods he is proud to call home.
Whether it was playing for award-winning local little leagues or helping countless families access vital services, Shafran’s connection and commitment to his neighborhoods is deep and sincere. He said he would fight harder than anyone to clean up corruption, give schools the support they need, and make sure families and seniors can afford to stay in our neighborhoods.
Issues/Platforms: As councilmember, Shafran will cut property taxes and water rates for homeowners and co-op and condo owners and reduce income taxes for middle-class families; provide more funding for senior services; improve schools by reducing class sizes, stopping high-stakes testing and increasing input for parents and educators; crack down on unscrupulous developers threatening neighborhoods; and ban outside employment for councilmembers to stand up to the special interests and put our community first.
Current Occupation: Attorney
Personal Info: Paul Vallone is the managing partner of the family law firm of Vallone & Vallone. He currently serves as president of the Clinton Democratic Club, immediate past president and founding member of the Bayside-Whitestone Lions Club, and board member of Community Board 7. Vallone also serves as counsel and board member to the Auburndale Soccer Club and was previously appointed to serve as a board member of the New York City Board of Correction. Vallone and his wife, Anna-Marie, live in Flushing with their three children, Catena, Lea and Charlie.
Issues/Platforms: Vallone is running to restore honest and effective Democratic leadership to the 19th City Council seat. His top priorities include putting more cops on the street, standing with other small business owners against unfair regulations and crushing fines, keeping schools the best in the city, preserving the residential character of neighborhoods, combating the incessant airplane noise pollution plaguing neighborhoods and ensuring that northeast Queens finally gets its fair share from City Hall. Vallone has been endorsed by the Queens County Democratic Party, Congressmember Grace Meng, Assemblymember Ed Braunstein, Senator Toby Stavisky, Assemblymember Ron Kim, Assemblymember Mike Simanowitz and former City Council candidates Kevin Kim and Jerry Iannece.
Current Occupation: Civic leader, 109th Precinct Community Council President
Personal Info: Voskerichian started working when she was just 16 years old. She moved through the ranks of the telecommunications industry, retiring after 31 years as director of Global Sales & Operations for one of the largest firms in the world in order to focus more on her neighborhood. She founded the Station Road Civic Association, and in 2009 she was elected president of the 109th Precinct Community Council, where she has worked hard to create a partnership between the NYPD and the community. She served as chief of staff for the District 19 City Council office.
Issues/Platforms: Voskerichian’s main priority is to protect the quality of life in northeast Queens and ensure that public safety is never compromised. She believes the city must give police officers and firefighters the tools they need to do their jobs. Voskerichian also stresses the need to support teachers by building new schools to reduce class sizes and giving kids a head start with free Universal Pre-K for every child. Finally, she promises to make constituent services a focus of her office. She will use her knowledge and experience to have a fully functional office capable of helping everyone and improving the community she calls home.
The Space Shuttle Decision
by T. A. Heppenheimer

CHAPTER ONE: Space Stations and Winged Rockets

Before anyone could speak seriously of a space shuttle, there had to be widespread awareness that such a craft would be useful and perhaps even worth building. A shuttle would necessarily find its role within an ambitious space program. While science fiction writers had been prophesying such wonders since the days of Jules Verne, it was another matter to present such predictions in ways that smacked of realism. After World War II, however, the time became ripe. Everyone knew of the dramatic progress in aviation, which had advanced from biplanes to jet planes in less than a quarter-century. Everyone also recalled the sudden and stunning advent of the atomic bomb. Rocketry had brought further surprises as, late in the war, the Germans bombarded London with long-range V-2 missiles.

Then, in 1952, a group of specialists brought space flight clearly into public view. One of these specialists, the German expatriate Willy Ley, had worked with some of the builders of the V-2 personally and had described his experiences, and their hopes, in his book Rockets, Missiles, and Space Travel [citation in bibliography]. The first version, titled Rockets, appeared in May 1944, just months before the first firings of the V-2 as a weapon. Hence, this book proved to be very timely. His publisher, Viking Press, issued new printings repeatedly, while Ley revised it every few years, expanding both the text and the title to keep up with fast-breaking developments [expanded versions appeared in 1945, 1948, and 1952].

One day in the spring of 1951, Ley had lunch with Robert Coles, chairman of the Hayden Planetarium in Manhattan. He remarked that interest in astronautics was burgeoning in Europe. An international conference, held in Paris the previous October, had attracted over a thousand people. None had come from the U.S., however, and this suggested to Ley that Americans should organize a similar congress. Coles replied, "Go ahead, the planetarium is yours." Ley proceeded to set up a symposium that took place on Columbus Day. Admission was by invitation only. Some invitations, however, went to members of the press. Among the attendees were a few staffers from Collier's, a magazine with a readership of ten million. Two weeks later, the managing editor, Gordon Manning, read a brief news item about an upcoming Air Force conference, in San Antonio, on medical aspects of space flight. He sent an associate editor, Cornelius Ryan, to cover this meeting and to see if it could be turned into a story [Ley, Rockets, pp. 330-331; AAS History Series, vol. 15, pp. 235-242].

While no space enthusiast, Ryan was a meticulous reporter, as he would show in such books as The Longest Day and A Bridge Too Far. At the meeting, he fell in with Wernher von Braun, who had been the technical director of the V-2 project. Von Braun, a consummate salesman, had swayed even Hitler [Dornberger, V-2, pp. 103-111]. Over cocktails, dinner, and still more cocktails, von Braun proceeded to deliver his pitch. It focused on a space station with an onboard crew living and working in space. Von Braun declared that it could be up and operating in orbit by 1967. It would have the shape of a ring, 250 feet in diameter, and would rotate to provide centrifugal force that could substitute for gravity in weightless space. The onboard staff of 80 people would include astronomers operating a major telescope.
Meteorologists, looking earthward, would study cloud patterns and predict the weather [AAS History Series, vol. 15, pp. 235-242]. To serve the needs of the Cold War, von Braun emphasized the value a space station would have for military reconnaissance. He also declared that it could operate as a high-flying bomber, dropping nuclear weapons with great accuracy. To build it, he called for a fleet of immense piloted cargo rockets (space shuttles, though the term had not yet entered use), each weighing 7,000 tons, 500 times the weight of the V-2. Yet the whole program—rockets, station and all—would cost only $4 billion, twice the budget of the wartime Manhattan Project that had built the atomic bomb [Ibid.; Time, December 8, 1952, pp. 67, 71; Collier's, March 22, 1952, pp. 27-28].

With its completion, the space station could serve as an assembly point for a far-reaching program of exploration. An initial mission would send a crew on a looping flight around the Moon, to photograph its unseen far side. Later, perhaps by 1977, a fleet of three rockets would carry as many as 50 people to the Moon's Bay of Dew for a six-week period of wide-ranging exploration using mobile vehicles [Collier's, October 18, 1952, pp. 51-59; October 25, 1952, pp. 38-48]. Eventually, perhaps a century in the future, an even bolder expedition would carry astronauts to Mars [Ibid., April 30, 1954, pp. 22-29].

By the end of that evening, von Braun had converted Ryan, who now believed that piloted space flight was not only possible but imminent. Returning to New York, Ryan persuaded Manning that this topic merited an extensive series of articles that eventually would span eight issues of the magazine [Ibid., March 22, October 18 and October 25, 1952; February 28, March 7, March 14, and June 27, 1953; April 30, 1954; reprinted in part in NASA SP-4407, vol. I, pp. 176-200]. Manning then invited von Braun, together with several other specialists, to Manhattan for a series of interviews and discussions. These specialists included Willy Ley; the astronomer Fred Whipple of Harvard, a Moon and Mars specialist; and Heinz Haber, an Air Force expert in the nascent field of space medicine [Collier's, March 22, 1952, p. 23].

In preparing the articles, Collier's placed heavy emphasis on getting the best possible color illustrations. Artists included Chesley Bonestell, who had founded the genre of space art by presenting imagined views of planets such as Saturn, as seen close up from such nearby satellites as its large moon Titan. Bonestell and the other artists worked from von Braun's engineering drawings and sketches of his rockets and spaceships, preparing working drawings for von Braun's review. They would execute the finished paintings only after receiving his corrections and comments [AAS History Series, vol. 15, p. 237; vol. 17, pp. 35-39].

The first set of articles appeared in March 1952, with the cover illustration of a space shuttle at the moment of staging, high above the Pacific. "Man Will Conquer Space Soon," blared the cover. "Top Scientists Tell How in 15 Startling Pages." Inside, an editorial noted "the inevitability of man's conquest of space" and presented "an urgent warning that the U.S. must immediately embark on a long-range development program to secure for the West 'space superiority'" [Collier's, March 22, 1952, p. 23].

The series appeared while Willy Ley was bringing out new and updated editions of his own book. It followed closely The Exploration of Space by Arthur C.
Clarke, published in 1951 and offered by the Book-of-the-Month Club [citation in bibliography]. The Collier's articles, however, set the pace. Late in 1952, Time magazine ran its own cover story on von Braun's ideas [Time, December 8, 1952]. In Hollywood, producer George Pal was already working with Bonestell, and had brought out such science fiction movies as Destination Moon (1950) and When Worlds Collide (1951). In 1955, they drew anew on von Braun's work and filmed Conquest of Space, in color. Presenting the space station and Mars expedition, the film proposed that the Martian climate and atmosphere would permit seeds to sprout in that planet's red soil [Miller and Durant, Worlds Beyond, pp. 100-102].

Walt Disney also got into the act, phoning Ley from his office in Burbank, California. He was building Disneyland, his theme park in nearby Anaheim, and expected to advertise it by showing a weekly TV program of that name over the ABC television network. With von Braun's help, Disney went on to produce an hour-long feature, Man in Space. It ran in March 1955, with subsequent reruns, and emphasized the piloted lunar mission. Audience-rating organizations estimated that 42 million people had watched the program [Ley, Rockets, p. 331].

In its 1952 article, Time referred to von Braun's cargo rockets as "shuttles" and "shuttle rockets," and described the reusable third stage as "a winged vehicle rather like an airplane." His payload weight of 72,000 pounds proved to be very close to the planned capacity of 65,000 pounds for NASA's space shuttle [Time, December 8, 1952, pp. 67, 68]. He expected to fuel his rockets with the propellants nitric acid and hydrazine, which have less energy than the liquid hydrogen in use during the 1960s. Hence, his rockets would have to be very large. While his loaded weight of 7,000 tons would compare with the 2,900 tons of America's biggest rocket, the Saturn V [NASA SP-4012, vol. III, p. 27], his program cost of $4 billion was wildly optimistic.

Still, the influence of the Collier's series echoed powerfully throughout subsequent decades. It was this eight-part series that would define nothing less than NASA's eventual agenda for piloted space flight. Cargo rockets such as the Saturn V and the space shuttle, astronaut Moon landings, a space station, the eventual flight of people to Mars—all these concepts would dominate NASA's projects and plans.

It was with good reason that, in the original Collier's series, the space station and cargo rocket stood at the forefront. By 1952, the concept of a space station had been in the literature for nearly 30 years, while large winged rockets were being developed as well.

The concept of a space station took root during the 1920s, in an earlier era of technical change that focused on engines. As recently as 1885, the only important prime mover had been the reciprocating steam engine. The advent of the steam turbine yielded dramatic increases in the speed and power of both warships and ocean liners. Internal-combustion engines, powered by gasoline, led to automobiles, trucks, airships, and airplanes. Submarines powered by diesel engines showed their effectiveness during World War I [Scientific American, May 1972, pp. 102-111; April 1985, pp. 132-139]. After that war, two original thinkers envisioned that another new engine, the liquid-fuel rocket, would permit aviation to advance beyond the Earth's atmosphere and allow the exploration and use of outer space.
These inventors were Robert Goddard, a physicist at Clark University in Worcester, Massachusetts, and Hermann Oberth, a teacher of mathematics in a gymnasium in a German-speaking community in Romania [Ley, Rockets, pp. 107, 116]. Goddard experimented much, wrote little, and was known primarily for his substantial number of patents [Lehman, High Man, pp. 360-363]. Oberth contented himself with mathematical studies and writings. His 1923 book, Die Rakete zu den Planetenräumen (The Rocket into Interplanetary Space), laid much of the foundation for the field of astronautics.

Both Goddard and Oberth were well aware of the ordinary fireworks rocket (a pasteboard tube filled with black-powder propellant). They realized that modern technology could improve on this centuries-old design in two critical respects. First, a steel combustion chamber and nozzle in a rocket engine could perform much better than pasteboard. Second, the use of propellants such as gasoline and liquid oxygen would produce far more energy than black powder. Oberth produced two conceptual designs: the Model B, an instrument-carrying rocket for upper-atmosphere research, and the Model E, a spaceship [Ley, Rockets, pp. 108-112; NASA TT-F-9227, p. 98].

Having demonstrated to his satisfaction that space flight indeed was achievable, Oberth then considered its useful purposes. While he was not imaginative enough to foresee the advent of automated spacecraft (still well in the future), the recent war had shown that, using life support systems, submarines could support sizable crews underwater for hours at a time. Accordingly, he envisioned that similar crews, with oxygen provided through similar means, would live and carry out a variety of tasks in a space station as it orbited the Earth. Without describing the station in any detail, he wrote that it could develop out of a plan for a large orbiting rocket with a mass of "at least 400,000 kg." The station could serve as an astronomical observatory; it could also carry out Earth observations while serving as a communications relay. Oberth further considered the building of immense orbiting mirrors, with diameters as large as 1,000 kilometers, recommending sodium as a lightweight construction material: while it reacts strongly with oxygen, sodium would remain inert in airless space. He also described how the observation station could serve as a fuel station.

Although Oberth was shy and retiring by nature, the impact of his ideas, during subsequent decades, would rival that of von Braun a generation later. Die Rakete spurred the founding of rocket-research groups in Germany, the U.S., and the Soviet Union. As early as 1898, Russia's Konstantin Tsiolkovsky, a provincial math teacher like Oberth, had developed ideas similar to those of Oberth. Officials of the new Bolshevik government then dusted off Tsiolkovsky's papers, showing that he had been ahead of the Germans. As his writings won new attention, the Soviet Union emerged as another center of interest in rocketry [Ley, Rockets, pp. 100-104].

Fritz Lang, a leading German film producer, then became interested. More than a filmmaker, Lang was a leader in his country's art and culture. Later, Willy Ley noted that at one of his premieres, "The audience comprised literally everyone of importance in the realm of arts and letters, with a heavy sprinkling of high government officials" [Ibid., p. 124]. In 1926, Lang released the classic film Metropolis, with a robot in the leading role.
Two years later, he set out to do the same for space flight with Frau im Mond (The Girl in the Moon). Drawing heavily on Oberth's writings, Lang's wife, the actress Thea von Harbou, wrote the script for Frau im Mond. Fritz Lang hired Oberth as a technical consultant. Oberth then convinced Lang to underwrite the building of a real rocket: after all, it would be great publicity for the movie were such a rocket to fly on the day of the premiere. The project attracted a number of skilled workers who went on to build Germany's first liquid-fuel rockets. Among them, a youthful Wernher von Braun went on to develop the V-2 with support from the German army [Ibid., pp. 124-130; Neufeld, Rocket and Reich, pp. 11-23].

Even during the 1920s, Oberth's ideas drew enough attention to encourage other theorists and designers to pursue similar thoughts and to write their own books. Herman Potočnik, an engineer and former captain in the Austrian army, wrote under the pen name of Hermann Noordung. In 1929, he published The Problem of Space Travel, a book that addressed the issue of space station design. It was to be his last publication, however, for later that same year, he died of tuberculosis at the age of 36 [NASA SP-4026, pp. xv-xvi].

Potočnik introduced the classic rotating-wheel shape, proposing a diameter of 100 feet with an airlock at its hub. The sun would provide electric power, though not with solar cells; these, too, lay beyond the imagination of that generation. Instead, a large parabolic mirror would focus sunlight onto boiler pipes in a type of steam engine. For more power, a trough of mirrors would run around the station's periphery, concentrating solar energy on another system of pipes. Like a flower, the station would face the sun [Ibid., pp. 101-113].

Except for being two and a half times larger, von Braun's Collier's space station closely resembled that of Potočnik, and it is tempting to view von Braun as the latter's apt pupil. He certainly had the opportunity to read Potočnik's book (though published initially in its author's native language of Slovenian, it appeared quickly in German translation [Ibid., pp. ix, xii]). Moreover, von Braun's concept included a circumferential trough of solar mirrors for power. This, however, came not from Potočnik but rather from a suggestion of Fred Whipple (who had not read Potočnik's book), and thus represented an independent invention [Ley, Rockets, pp. 372-373]. The influence of Potočnik on von Braun may have been only indirect. The historian J. D. Hunley, who has prepared an English translation of Potočnik's book, describes its influence on von Braun as "probable but speculative." Nevertheless, he states unequivocally that "Potočnik's book was widely known even to people who may have seen only photographs of sections from the book in translation" [NASA SP-4026, pp. xxii-xxiii].

His concept of a large rotating wheel was sufficiently simple to permit von Braun and others to carry it in their heads for decades, developing it with fresh details when using it as the point of reference for an original design. In the popular mind, if not for aerospace professionals, the Collier's series introduced the shape of a space station in definitive form. It carried over to Disney's Man in Space, and to George Pal's Conquest of Space. Fifteen years later, when producer Stanley Kubrick filmed Arthur C. Clarke's 2001: A Space Odyssey, he too used the rotating-wheel shape, enlarging it anew to a diameter of a thousand feet [Clarke, 2001, photo facing p. 112].
While space stations came quickly to the forefront in public attention, it was another matter to build them, even in versions much smaller than von Braun's 250-foot wheel. Between 1960 and 1980, the concept flourished only briefly, in the short-lived Skylab program. The second major element of the Collier's scenario, the winged rocket, enjoyed considerably better prospects. At first merely a topic for calculation and speculation, long-range winged rockets saw actual development during World War II, and this work became the departure point for a number of serious postwar projects.

In the 1930s, work on winged rockets foreshadowed the development of a high-speed airplane able to land on a runway for repeated flights. The first important treatment came from Eugen Sänger, a specialist in aeronautics and propulsion who received a doctorate at the Technische Hochschule [a technical institute that does not qualify as a university but that offers advanced academic studies, particularly in engineering] in Vienna and stayed on to pursue research on rocket engines. In 1933, he published Raketenflugtechnik (Rocket Flight Engineering). The first text in this field, it included a discussion of rocket-powered aircraft performance and a set of drawings. Sänger proposed achieving velocities as high as Mach 10, along with altitudes of up to 70 kilometers [AAS History Series, vol. 7, Part 1, pp. 195, 203-206; vol. 10, pp. 228-230; Ley, Rockets, pp. 408-410].

While the turbojet engine was unknown at that time, it was this engine, rather than the rocket, that would offer the true path to routine high performance. Because a turbojet uses air from the atmosphere, a jet plane needs to carry only fuel, while its wings provide lift, reducing the thrust and hence the fuel consumption required. Such a plane can therefore maintain longer flight times. By contrast, a rocket must carry oxygen as well as fuel, and thus, while capable of high speeds, it lacks endurance. After World War II, experimental rocket airplanes went on to reach speeds and altitudes far exceeding those of jets. Jet planes, however, took over the military and later the commercial realms.

During World War II, Sänger made a further contribution, showing how the addition of wings could greatly extend a rocket's range. Initially, a winged rocket would fly to modest range, along an arcing trajectory like that of an artillery shell. Upon reentering the atmosphere, however, the lift generated by the rocket's wings would carry it upward, causing it to skip off the atmosphere like a flat stone skipping over water. Sänger calculated that with a launch speed considerably less than orbital velocity, such a craft could circle the globe and return to its launch site [Ley, Rockets, pp. 428-434]. After World War II, this concept drew high-level attention in Moscow, where, for a time, Stalin sought to use it as a basis for a serious weapon project [Zaloga, Target, pp. 121-124].

In haste and desperation, winged rockets entered the realm of hardware late in the war, as an offshoot of the V-2 program. The standard V-2 had a range of 270 kilometers. Following the Normandy invasion in 1944, as the Allies surged into France and the Nazi position collapsed, a group of rocket engineers led by Ludwig Roth sought to stretch this range to 500 kilometers by adding swept wings to allow the missile to execute a supersonic glide. The venture was ill-starred from the outset. When winds blew on the wings during liftoff, the marginal guidance system could not prevent the vehicle from rolling and going out of control.
In this fashion, the first winged V-2 crashed within seconds of its December 1944 launch. A month later, a second attempt flew successfully and transitioned to gliding flight at Mach 4. Then a wing broke off, causing the missile to break up high in the air [Neufeld, Rocket and Reich, pp. 248-251, 281]. Nevertheless, this abortive effort provided an early point of departure for America's first serious long-range missile effort.

In the Army Air Forces (AAF), the Air Technical Service Command (ATSC; renamed Air Materiel Command in March 1946) began by defining four categories of missiles: air-to-air, air-to-surface, surface-to-air, and surface-to-surface. The last of these included the V-2 and its potential successors [Neufeld, Ballistic Missiles, p. 26]. The program began with a set of military characteristics, outlined in August 1945, that defined requirements for missiles in these categories. AAF Headquarters published these requirements as a classified document. In November 1945, ATSC invited 17 contractors, most of them aircraft manufacturers, to submit proposals for design studies of specific weapons. One of these firms was North American Aviation (NAA) in Los Angeles [Fahrney, History, p. 1291; Neal, Navaho, pp. 1-2].

NAA had been a mainstay in wartime aircraft production. At the end of World War II, amid sweeping contract cancellations, the company dropped from 100,000 to 6,500 employees in about two months [AAS History Series, vol. 20, pp. 121-132]. The few remaining contracts were largely in the area of jet-powered bombers and fighters. To NAA's president, James "Dutch" Kindelberger, these aircraft represented the way into the future. He decided to bring in the best scientist he could find and have him build a new research lab, staffed with experts in such fields as jet propulsion, rockets, gyros, electronics, and automatic control. The lab's purview would go well beyond the AAF study: it was to bring in new business by extending the reach of the firm's technical qualifications [author interview, J. Leland Atwood, Los Angeles, July 18, 1988].

An executive recruiter, working in Washington, D.C., recommended William Bollay to head this lab. Bollay, who held a Ph.D. in aeronautical engineering from Caltech, had been a branch chief in the Navy's Bureau of Aeronautics, with responsibility for the development of turbojet engines. He came to NAA by November 1945, in time to deal with the AAF request for proposals. Working with the company's chief engineer, Raymond Rice, Bollay decided to pursue the winged V-2, which the Germans had designated as the A-9. During World War II, the Germans had regarded this missile as the next step beyond the standard V-2, hoping that its wings would offer a simple way to increase its range. The V-2's overriding priority had prevented serious work on its winged version. Late in 1945, however, the NAA proposal offered to "essentially add wings to the V-2 and design a missile fundamentally the same as the A-9" [Ibid.; author interview, Jeanne Bollay, Santa Barbara, California, January 24, 1989; Report AL-1347 (North American), pp. 1-4; Neufeld, Rocket and Reich, p. 249].

A letter contract, issued to the firm in April 1946, called for the study and design of a supersonic guided missile designated MX-770, with a range of 175 to 500 miles [Report AL-1347 (North American), pp. 5-6]. Meanwhile, rocket research was under way in an NAA company parking lot, with parked cars only a few yards away.
A boxlike steel frame held a rocket motor; a wooden shack housed instruments. The steel blade of a bulldozer's scraper was used as a shield to protect test engineers in the event of an explosion [Threshold, Summer 1993, pp. 40-47]. A surplus liquid-fueled engine from Aerojet General, with 1,000 pounds of thrust, served as the first test motor. The rocket researchers also built and tested home-brewed engines, initially with 50 to 300 pounds of thrust [Report AL-1347 (North American), p. 37]. Some of these engines were so small that they seemed to whistle rather than roar. In the words of J. Leland Atwood, who became company president in 1948, "We had rockets whistling day and night for a couple of years" [author interview, J. Leland Atwood, Los Angeles, July 18, 1988].

In June 1946, the first step toward a coordinated plan came in the form of a new company proposal, in which Bollay and his associates laid out a two-part program of large rocket-engine development; in the spring of 1947, the company added a further step. Bollay and his colleagues also launched an extensive program of consultation with Wernher von Braun and his wartime veterans. These included Walther Riedel, Hans Huter, Rudi Beichel, and Konrad Dannenberg. In addition, Dieter Huzel, a close associate of von Braun, went on to join NAA as a full-time employee [Threshold, Summer 1991, pp. 52-63; Huzel, Peenemünde, pp. 226-228].

Bollay wanted to test-fire V-2 engines. Because their thrust of 56,000 pounds was far too great for the company's parking-lot test center, Bollay needed a major set of test facilities. Atwood was ready to help. "We scoured the country," Atwood recalls. "It wasn't so densely settled then—and we located this land" [author interview, J. Leland Atwood, Los Angeles, July 18, 1988]. It was in the Santa Susana Mountains, at the western end of the San Fernando Valley. The landscape—stark, sere, and full of rounded reddish boulders—offered spectacular views. In March 1947, NAA leased the land and built a rocket test center on it as part of a buildup of facilities costing upwards of $1 million in company money and $1.5 million from the Air Force [Report AL-1347 (North American), pp. 23-26; Neal, Navaho, p. 29].

In 1947, two government-furnished V-2 engines arrived at the site. Detailed designing of the Phase II engine began in June 1947; the end of September brought the first release of drawings and the first fabricated parts. Early in 1949, the first such engine was completed. Two others followed shortly thereafter [Report AL-1347 (North American), pp. 36-37; Fahrney, History, p. 1292; AAS History Series, vol. 20, pp. 133-144]. Still very much a V-2 engine, it had plenty of room for improvement. Lieutenant Colonel Edward Hall, who was funding the work, declared that "it wasn't really a very good engine. It didn't have a proper injector, and that wasn't all. When we took it apart, we decided that was no way to go" [author interview, Edward Hall, Los Angeles, January 25, 1989]. By fixing the deficiencies during Phase III, NAA expected to lay a solid foundation for future rocket engine development.

A particular point of contention involved this engine's arrangements for injecting propellants into its combustion chamber. Early in the German rocket program, Walter Riedel, von Braun's chief engine designer, had built a rocket motor with 3,300 pounds of thrust, with a cup-shaped injector at the top of the thrust chamber.
For the V-2, a new chief of engine design, Walter Thiel, grouped 18 such cups to yield its 56,000 pounds. Unfortunately, this arrangement did not lend itself to a simple design wherein a single liquid-oxygen line could supply all the cups. Instead, his "18-pot engine" required a separate oxygen line for each individual cup [Ley, Rockets, pp. 204, 212, 215; Neufeld, Rocket and Reich, pp. 74-79, 84]. Thiel had pursued a simpler approach by constructing an injector plate, resembling a showerhead, pierced with numerous holes to permit the rapid inflow and mixing of the rocket propellants. By the end of World War II, Thiel's associates had successfully tested a version of the V-2 engine that incorporated this feature, though it never reached production [Neufeld, Rocket and Reich, p. 251].

Bollay's rocket researchers, still working within the company parking lot, were upping their engines' thrust to 3,000 pounds, and were using them to test various types of injector plates [Report AL-1347 (North American), p. 37; Threshold, Summer 1993, pp. 40-47]. The best injector designs would be incorporated into the Phase III engine, bringing a welcome simplification and introducing an important feature that could carry over to larger engines with greater thrust. In September 1947, preliminary design of Phase III began, aiming at the thrust of the V-2 engine but with a weight reduction of 15 percent [Report AL-1347 (North American), p. 36].

Bollay had initially expected to design the 500-mile missile as a V-2 with swept wings and large control surfaces near the tail, closely resembling the A-9. Work in a supersonic wind tunnel built by Bollay's staff showed that this design would encounter severe stability problems at high speed. Thus, by early 1948, a new configuration emerged. With small forward-mounted wings (known as canards) that could readily control such instability, the new design moved the large wings well aft, replacing the V-2's horizontal fins. In January 1948, four promising configurations were tested in the Ordnance Aerophysics Laboratory wind tunnel in Daingerfield, Texas. By March, a workable preliminary design of the best of these four configurations was largely in hand [Ibid., pp. 30-33, 38-39].

When it won independence from the Army, the U.S. Air Force received authority over programs for missiles with a range of 1,000 miles or more. Shorter-range missiles remained the exclusive domain of the Army. Accordingly, at a conference in February 1948, Air Force officials instructed NAA to stretch the range of their missile to 1,000 miles [Fahrney, History, pp. 1293-1294; Neal, Navaho, pp. 6-7]. The 500-mile missile had featured a boost-glide trajectory, using rocket power to arc high above the atmosphere and then extending its range with a supersonic glide. This approach was not well suited to the doubled range. At the Air Force developmental center of Wright Field, near Dayton, Ohio, Colonel M. S. Roth proposed to increase the missile's range anew by adding ramjets [Letter, Colonel M. S. Roth to Power Plant Lab, 11 February 1948 (cited in Fahrney, History, p. 1294)]. Unlike the turbojet engines of the day, the ramjet—which worked by ramming air into the engine at high speed—could fly supersonically. A turbojet, however, could take off from a standing start, whereas a ramjet needed a rocket boost to reach the speed at which this air-ramming effect would come into play.
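The ramjet's need for speed can be made quantitative with a standard compressible-flow relation (a textbook result, not drawn from the Navaho documents). For an ideal inlet, ram compression raises the stagnation pressure of incoming air by the factor

\[
\frac{p_0}{p} = \left(1 + 0.2\,M^{2}\right)^{3.5},
\]

where \(M\) is the flight Mach number. At Mach 0.3 this factor is only about 1.06, far too little to run an engine without a mechanical compressor; at Mach 1 it reaches about 1.9; and at Mach 3 it exceeds 36, rivaling the compressors of contemporary turbojets. A ramjet is therefore useless at low speed yet thrives once a rocket has boosted it to supersonic velocity.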
A Navy effort, Project Bumblebee, had been under way in this area since World War II, and NAA had done several relevant aerodynamic studies. In addition, at Wright Field, the Power Plant Laboratory included a Nonrotating Engine Branch that was funding the development of ramjets as well as rocket motors. Its director, Weldon Worth, dealt specifically with ramjets; Lieutenant Colonel Hall, who dealt with rockets, served as his deputy [Report AL-1347 (North American), p. 6; Jet Propulsion, vol. 25 (1955), pp. 604-614; author interview, Edward Hall, Los Angeles, August 29, 1996].

Though designed for boost-glide flight, the new missile configuration readily accommodated ramjets and their fuel tanks for supersonic cruise. The original boost-glide missile thus evolved into a cruise missile when a modification of the design added two ramjet engines, mounting one at the tip of each of two vertical fins. These engines and their fuel added weight, which necessitated an increase in the planned thrust of the Phase III rocket motor. Originally it had been planned to match the 56,000-pound thrust of the V-2. In March 1948, however, the thrust of this design went up to 75,000 pounds. The missile was named the Navaho, reflecting a penchant at NAA for names beginning with "NA" [Report AL-1347 (North American), pp. 39, 42-43].

By late November of 1949, the first version of this engine was ready for testing at the new Santa Susana facility. Because it lacked turbopumps, propellants were pressure-fed from heavy-walled tanks. Thus, this version of the engine was much simpler than the later operational type, which would rely on turbopumps to force propellants into the engine. Proceeding cautiously, the rocket crew began with an engine-start test at 10 percent of maximum propellant flow for 11 seconds. It was successful and led to somewhat longer starting tests in December. Then, as the engineers grew bolder, they hiked up the thrust. In March 1950, this simplified engine first topped its rated level of 75,000 pounds—for four and a half seconds. During May and June, the full-thrust runs went well, exceeding a minute in duration.

Meanwhile, a separate developmental effort was building the turbopumps. Late in March 1950, the first complete engine, turbopumps included, was assembled. In August, this engine fired successfully for a full minute—at 12.3 percent of rated thrust. Late in October, the first full-thrust firing reached 70,000 pounds—for less than five seconds. In seven subsequent tests during 1950, however, only one, in mid-November, topped the rated thrust level. The cause lay in problems of rough combustion during the buildup to full thrust [Ibid., pp. 75-81].

The pressure-fed tests had exhibited surges in combustion-chamber pressure (known as "hard starts") that were powerful enough to blow up an engine. Walther Riedel, one of the German veterans, played an important role in introducing design modifications that brought this problem under control. The problem of rough combustion was new, however, and went beyond the German experience. It stemmed from combustion instability in the engine's single large thrust chamber. Ironically, the V-2's 18-pot motor had avoided this difficulty: acting as preliminary burners, its numerous injector cups were too small to permit such instabilities [Threshold, Summer 1991, pp. 52-63]. Following the successful full-thrust test of November 1950, it was not until March 1951 that the problems of unstable combustion came under control [Ibid., p. 53; Report AL-1347 (North American), p. 81].
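Why chamber size mattered can be seen from a rough acoustic argument (a standard rule of thumb, not taken from the program records). Combustion instability feeds on the resonant pressure modes of the chamber, and the lowest resonant frequency of a cavity of characteristic length \(L\), filled with combustion gas of sound speed \(a\), is approximately

\[
f \approx \frac{a}{2L}.
\]

With \(a\) on the order of 1,000 m/s, an injector cup a few centimeters across resonates near ten kilohertz or higher, frequencies too fast for the combustion process to drive effectively, while a single chamber on the order of a meter in size resonates near 500 Hz, squarely where the burning propellants can feed energy into the oscillation.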
The taming of rough combustion marked another milestone: for the first time, the Americans had encountered and solved an important problem that the Germans had not experienced. While combustion instabilities would recur repeatedly during subsequent engine programs, the work of 1950 and 1951 introduced NAA to methods for solving this problem.

By then, the design and mission of the Navaho had changed dramatically. The August 1949 detonation of a Soviet atomic bomb, the fall of China to communism, and the outbreak of the Korean War in mid-1950 combined to signal to the nation that the rivalry with the Soviet Union was serious and that Soviet technical capability was significant. The designers at North American, working with their Air Force counterparts, accordingly sought to increase the range of the Navaho to as much as 5,500 nautical miles, and thereby give it intercontinental capability. At the Pentagon in August 1950, conferences among Air Force officials brought a redefinition of the program that set this intercontinental range of 5,500 miles as a long-term goal. A letter from Major General Donald L. Putt, director of research and development within the Air Materiel Command, became the directive instructing NAA to pursue this objective [Letter, Maj. Gen. D. L. Putt to Commanding General, Air Materiel Command, 21 August 1950 (cited in Fahrney, History, p. 1297)]. An interim version, Navaho II, with a range of 2,500 nautical miles, seemed technically feasible. The full-range version, Navaho III, represented a long-term project that would go forward as a parallel effort [Report AL-1347 (North American), p. 88; Fahrney, History, pp. 1296-1297; Neal, Navaho, pp. 12-14].

The 1,000-mile Navaho of 1948, with its Phase III engine, had amounted to a high-speed pilotless airplane fitted with both rocket and ramjet propulsion. This design, however, had taken approaches based on winged rockets to their limit. The new Navaho designs separated the rocket engines from the ramjets, assigned each to a separate vehicle, and turned Navaho into a two-stage missile. The first stage, or booster, powered by liquid-fuel rockets, accelerated the missile to Mach 3 and 58,000 feet. The ramjet-powered second stage rode this booster during initial ascent—similar to the way in which the Space Shuttle rides its external tank today—and then cruised to its target at Mach 2.75 (about 1,800 mph) ["Standard Missile Characteristics: XSM-64 Navaho," U.S. Air Force, November 1, 1956, Air Force Museum, Wright-Patterson AFB, Ohio].

Lacking the thrust to boost the new Navaho, the 75,000-pound rocket motor stood briefly on the brink of abandonment. Its life, however, was only beginning. This engine was handed over to von Braun, who was at Redstone Arsenal in Huntsville, Alabama, directing development of the Army's Redstone missile. With a range of 200 miles, this missile needed an engine. In March 1951, the Army awarded a contract to NAA for this rocket motor. Weighing less than half as much as the V-2's 18-pot engine (1,475 pounds versus 2,484), this motor delivered 34 percent more thrust than that of the V-2 [Threshold, Summer 1991, p. 63].

For Navaho II, this basic engine would be replaced by a new one with 120,000 pounds of thrust. A twin-engine installation, totaling 240,000 pounds, provided the initial boost. For Navaho III, NAA upgraded the engine to 135,000 pounds of thrust and designed a three-engine cluster for that missile's booster [Neal, Navaho, pp. 30-31; AAS History Series, vol. 20, pp. 133-144].
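The figures just quoted convey how rapidly engine design was maturing. A short calculation (illustrative only, using nothing but the numbers cited above) compares the two engines by thrust-to-weight ratio:

```python
# Thrust-to-weight comparison using the figures cited in the text:
# thrust in pounds-force, dry weight in pounds.
engines = {
    "V-2 '18-pot' engine": {"thrust": 56_000, "weight": 2_484},
    "Redstone engine (NAA, 1951)": {"thrust": 75_000, "weight": 1_475},
}

for name, e in engines.items():
    ratio = e["thrust"] / e["weight"]
    print(f"{name}: thrust-to-weight = {ratio:.1f}")

# Prints:
#   V-2 '18-pot' engine: thrust-to-weight = 22.5
#   Redstone engine (NAA, 1951): thrust-to-weight = 50.8
```

By this measure, the NAA motor delivered more than twice the thrust per pound of engine weight of its German predecessor.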
In 1954 and 1955, the Air Force and Army made a major push into long-range missiles—but these were not Navahos. Instead, they were the Air Force's Atlas, Titan, and Thor, along with the Army's Jupiter. When these new programs needed engines, however, it was again NAA that produced the rocket motors that would do the job. The Navaho engine's 135,000 pounds of thrust was upgraded to 139,000 and then again to 150,000 pounds. In addition, a parallel effort at Aerojet General developed very similar engines for the Titan [AAS History Series, vol. 13, pp. 19-35; vol. 20, pp. 133-144].

"We often talked about this basic rocket as a strong workhorse, a rugged engine," says Paul Castenholz, a test engineer who worked at Santa Susana. "I think a lot of these programs evolved because we had these engines. We anticipated how people would use them; we weren't surprised when it happened. We'd hear a name like Atlas with increasing frequency, but when it became real, the main result was that we had to build more engines and test them more stringently" [author interview, Paul Castenholz, Colorado Springs, August 18, 1988].

The Navaho of 1948, designed as a winged rocket with ramjets, stood two steps removed from the missiles that later would go on to deployment and operational status. First, the versions of 1950 and after were designed and built as high-speed aircraft with a separate rocket booster. Subsequently, those versions were replaced by the Atlas and other missiles of that era. Even though the Air Force cancelled the Navaho program in 1957, its legacy lived on.

Bollay's research center, called the Aerophysics Laboratory, became the nucleus that allowed NAA to take the lead in piloted space flight. In 1955, this laboratory split into four new corporate divisions: Rocketdyne, Autonetics, the Missile Division, and Atomics International. Rocketdyne became the nation's premier builder of rocket engines. Autonetics emerged as a major center for guidance and control. The Missile Division, later renamed Space and Information Systems, built the Apollo spacecraft as well as the second stage of the Saturn V Moon rocket [Murray, Lee Atwood, pp. 47, 56, 62-64, 71].

The Navaho also left a legacy in its people. Sam Hoffman, who brought the 75,000-pound engine to success, presided over Rocketdyne as it built the main engines for the Saturn V. Paul Castenholz headed development of the J-2, the hydrogen-fueled engine that powered the Saturn V's upper stages. John R. Moore, an expert in guidance, became president of Autonetics. Dale Myers, who served as Navaho project manager, went to NASA as Associate Administrator for Manned Space Flight [author interviews: Eugene Bollay, Santa Barbara, California, January 24, 1989; Sam Hoffman, Monterey, California, July 28, 1988; Paul Castenholz, Colorado Springs, August 18, 1988; John R. Moore, Pasadena, California, May 28, 1996; Dale Myers, Leucadia, California, May 24, 1996].

Navaho's engines, including those built in the parallel effort at Aerojet General, represented a third legacy. Using such engines, Atlas, Thor, and Titan were all successful as launch vehicles. Upper stages were added to the Thor, which evolved into the widely used Delta. Additional upgrades raised the thrust of its engine to 205,000 pounds. A cluster of eight such engines, producing up to 1.6 million pounds of thrust, powered the Saturn I and Saturn IB boosters, which flew repeatedly in both the Apollo and Skylab programs.
Between 1946 and 1950, the winged rockets of the Navaho program played a pioneering role, planting seeds that would flourish for decades in aerospace technology.

During the 1940s and 1950s, the nation's main centers for aeronautical research operated within a small federal agency, the National Advisory Committee for Aeronautics (NACA; it became the National Aeronautics and Space Administration, NASA, in 1958). After World War II, NACA and the Air Force became increasingly active in supersonic flight. Rocket-powered aircraft such as the Bell X-1 and the Douglas D-558 Skyrocket set the pace. The X-1 broke the sound barrier in 1947; the Skyrocket approached Mach 2 only four years later. Also, between 1949 and 1951, NAA designed a new fighter, the F-100, planning it to be the first jet plane to go supersonic in level flight [Ley, Rockets, pp. 423-425; Gunston, Fighters, pp. 170-171].

Supersonic aviation brought difficult problems in aerodynamics, propulsion, aircraft design, and stability and control in flight. Still, at least for flight speeds of Mach 2 and somewhat higher, it did not involve the important issue of aerodynamic overheating. Though fitted with rocket engines, these aircraft were built of aluminum, which cannot withstand high temperatures. At speeds beyond Mach 4 lay the realm of hypersonic flight, where problems of heating would dominate.

Nevertheless, by the early 1950s, interest in such flight speeds was increasing. This was due in part to the growing attention given to prospects for an intercontinental ballistic missile (ICBM), a rocket able to carry a nuclear weapon to Moscow. In December 1950, the Rand Corp., an influential Air Force think tank, reported that such missiles now stood within reach of technology. The Air Force responded by giving a study contract to the firm of Convair in San Diego, where, a few years earlier, the designer Karel Bossart had nurtured thoughts of such missiles. Bossart's new design, developed during 1951, called for the use of the Navaho's 120,000-pound-thrust rocket engine. The design was thoroughly unwieldy; it would stand 160 feet tall and weigh 670,000 pounds. Nevertheless, it represented a milestone: for the first time, the Air Force had an ICBM design concept that it could pursue using rocket engines that were already being developed [Neufeld, Ballistic Missiles, pp. 68-70].

Among the extraordinarily difficult technical issues faced by the ICBM, the problem of reentry was paramount. Because an ICBM's warhead would reenter the atmosphere at Mach 20 or more, there was excellent reason to believe that it would burn up like a meteor. As early as 1951, however, the NACA aerodynamicist H. Julian Allen offered a solution. Conventional thinking held that hypersonic flight would require the ultimate in slender needle-nose shapes. Allen broke with this approach, showing mathematically that the best design would introduce a nose cone as blunt or flat-faced as possible. Such a shape would set up patterns of airflow that would carry most of the heat of reentry away from the nose cone, rather than delivering this heat to its outer surface [Allen and Eggers, NACA Report 1381; Hansen, Transition, p. 3].

There was further interest in hypersonics at Bell Aircraft Corp. in Buffalo. Here Walter Dornberger, who had directed Germany's wartime rocket development, was proposing a concept similar to Eugen Sänger's skip-gliding rocket plane.
This design, known as Bomi (Bomber Missile), called for a two-stage vehicle, with each stage winged, piloted, and rocket-powered. Dornberger argued that Bomi would have the advantage of being able to fly multiple missions like any piloted aircraft, and that it could be recalled after launch. By contrast, an ICBM could fly only once and would be committed irrevocably to its mission from the moment of launch [Spaceflight, vol. 22 (1980), pp. 270-272].

Bell Aircraft was very active in supersonic flight research. It had built the X-1, the first airplane through the sound barrier, and was building the X-1A, which would approach Mach 2.5, and the X-2, which would top Mach 3 [Miller, X-Planes, pp. 25-26, 37, 41-42]. Robert Woods, co-founder of the company and a member of NACA's influential Committee on Aerodynamics, had been a leader in the design of these aircraft. He also took a strong interest in Dornberger's ideas. In October 1951, at a meeting of the Committee on Aerodynamics, Woods called for NACA to develop a new research airplane resembling the V-2, to "obtain data at extreme altitudes and speeds, and to explore the problems of reentry into the atmosphere." In January 1952, Woods wrote a letter to the committee, urging NACA to pursue a piloted research airplane capable of reaching beyond Mach 5. He accompanied this letter with Dornberger's description of Bomi. That June, at Woods's urging, the committee passed a resolution proposing that NACA increase its program in research aircraft to examine "problems of unmanned and manned flight in the upper stratosphere at altitudes between 12 and 50 miles" [AAS History Series, vol. 13, p. 296; Hansen, Transition, pp. 5-6].

NACA already had a few people active in hypersonics, notably the experimentalists Alfred Eggers and John Becker, both of whom had built hypersonic wind tunnels [Hallion, ed., Hypersonic, pp. xxxi-xxxv]. At NACA's Langley Aeronautical Laboratory, Floyd Thompson, the lab's associate director, responded to the resolution by setting up a three-man study group chaired by Clinton Brown, a colleague of Becker. In Becker's words, "Very few others at Langley in 1952 had any knowledge of hypersonics. Thus, the Brown group filled an important educational function badly needed at the time" [Ibid., p. 381]. According to Thompson, he was looking for fresh, unbiased ideas, and the three study-group members had shown originality in their work.

Their report, in June 1953, went so far as to propose commercial hypersonic flight, suggesting that airliners of the future might evolve from boost-glide concepts such as those of Dornberger. At the more practical level, however, the group warmly endorsed building a hypersonic research aircraft. NACA-Langley already had a Pilotless Aircraft Research Division (PARD), which was using small solid-fuel rockets to conduct supersonic experiments. Brown's group now recommended that PARD reach for higher speeds, perhaps by launching rockets that could cross the Atlantic and be recovered in the Sahara Desert [Ibid., pp. 381-382; Hansen, Transition, pp. 6-9]. PARD, a NACA in-house effort, went forward rapidly. In November 1953, it launched a research rocket that carried a test nose cone to Mach 5.0. The following October, a four-stage rocket reached Mach 10.4 [Hallion, ed., Hypersonic, p. lxiv].

To proceed with a piloted research airplane, NACA's limited budget needed support from the Air Force. Here too there was cross-fertilization.
Robert Gilruth, head of PARD and an assistant director of NACA-Langley, was also a member of the Aircraft Panel of the Air Force's Scientific Advisory Board. At a meeting in October 1953, this panel stated that "the time was ripe" for such a research airplane, and recommended that its feasibility "should be looked into" [Astronautics & Aeronautics, February 1964, p. 54]. The next step came at a two-day meeting in Washington of NACA's Research Airplane Projects Panel. Its chairman, Hartley Soule, had directed NACA's participation in research aircraft programs since the earliest days of the X-1 project in 1946. The panel considered specifically a proposal from Langley, endorsed by Brown's group, to modify the X-2 for flight to Mach 4.5. The panel rejected this concept, asserting that the X-2 was too small for hypersonic work, and concluded instead that "provision of an entirely new research airplane is desirable" [Ibid.; Hansen, Transition, p. 9].

NACA's studies of such an airplane would have to start anew. In March 1954, John Becker set up a new group that took on the task of defining a design. Time was of the essence; everyone was aware that the X-2 project, underway since 1945, had yet to make its first powered flight [Astronautics & Aeronautics, February 1964, p. 53]. Becker stipulated that "a period of only about three years be allowed for design and construction." Hence, NACA would move into the unknown frontiers of hypersonics using technology that was already largely in hand [Hallion, ed., Hypersonic, p. 1].

Two technical problems stood out: overheating and instability. Because the plane would fly in the atmosphere at extreme speeds, it was essential that it be kept from tumbling out of control. As on any other airplane, tail surfaces were to provide this stability. Investigations showed, however, that conventional tail surfaces would have to be excessively large. A Langley aerodynamicist, Charles McLellan, came to the rescue. While conventional practice called for thin tail surfaces that resembled miniature wings, McLellan argued that they should take the form of a wedge. His calculations showed that at hypersonic speeds, wedge-shaped vertical fins and horizontal stabilizers should be much more effective than conventional thin shapes. Tests in Becker's hypersonic wind tunnel verified this approach [Astronautics & Aeronautics, February 1964, pp. 54, 56].

The problem of overheating was more difficult. At the outset, Becker's designers considered that, during reentry, the airplane should point its nose in the direction of flight. This proved unacceptable: the plane's streamlined shape would cause it to enter the dense lower atmosphere at excessive speed, subjecting the aircraft to disastrous overheating and to aerodynamic forces that would cause it to break up. These problems, however, appeared far more manageable if the plane were to enter with its nose high, presenting its flat undersurface to the air. It then would lose speed in the upper atmosphere, easing both the overheating and the aerodynamic loads. In Becker's words, "It became obvious to us that what we were seeing here was a new manifestation of H. J. Allen's 'blunt body' principle. As we increased the angle of attack, our configuration in effect became more 'blunt'" [Hallion, ed., Hypersonic, p. 386]. While Allen had developed his principle for missile nose cones, it now proved equally useful when applied to hypersonic airplanes.
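The blunt-body principle can be stated compactly. A widely used engineering correlation (a later textbook formalization, not part of Becker's study) gives the stagnation-point heating rate of a reentering body as

\[
\dot{q} \;\propto\; \sqrt{\frac{\rho}{R_n}}\; v^{3},
\]

where \(\rho\) is the local air density, \(v\) the flight velocity, and \(R_n\) the effective nose radius. Heating falls as the effective radius grows, which is why a blunt face runs cooler than a needle nose. And because velocity enters as the cube, shedding speed high up, where \(\rho\) is small, attacks the largest term in the expression; the nose-high reentry attitude accomplished exactly that.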
Even so, the plane would encounter far more heat and far higher temperatures than any aircraft had yet faced in flight. New approaches in structural design were imperative. Fortunately, Dornberger's group at Bell Aircraft had already taken the lead in the study of "hot structures." These used temperature-resistant materials such as stainless steel. Wings might be covered with numerous small and very hot metal panels, resembling shingles, that would radiate the heat away from the aircraft. While overheating would be particularly severe along the leading edges of the wings, these could be water-cooled. Insulation could protect an internal structure that would stand up to the stresses and forces of flight; active cooling could protect a pilot's cockpit and instrument compartment. Becker described these approaches as "the first hypersonic aircraft hot structures concepts to be developed in realistic meaningful detail" [Ibid., p. 384].

His designers proceeded to study a hot structure built of Inconel X, a chrome-nickel alloy from International Nickel. This alloy had already demonstrated its potential when, during the previous November, it was used for the nose cone in PARD's rocket flight to Mach 5 [Ibid., p. lxiv]. The hot structure would be of the "heat sink" type, relying on the high thermal conductivity of this metal to absorb heat from the hottest areas and spread it through much of the aircraft. As an initial exercise, the designers considered a basic design in which the Inconel X structure would have to withstand only conventional aerodynamic forces and loads, neglecting any extra requirements imposed by absorption of heat. A separate analysis then considered the heat-sink requirements, with the understanding that these might greatly increase the thickness and hence the weight of major portions of the hot structure. When they carried out the exercise, the designers received a welcome surprise: the weights and thicknesses of a heat-absorbing structure were nearly the same as for a simple aerodynamic structure [Astronautics & Aeronautics, February 1964, p. 58]. Hence, a hypersonic research airplane, designed largely from aerodynamic considerations, could provide heat-sink thermal protection as a bonus. The conclusion was clear: piloted hypersonic flight was achievable.

The feasibility study of Becker's group was intended to show that this airplane indeed could be built in the near future. In July 1954, Becker presented the report at a meeting in Washington of representatives from NACA, the Air Force's Scientific Advisory Board, and the Navy. (The Navy, actively involved with research aircraft, had built the Douglas Skyrocket.) Participants at the meeting endorsed the idea of a joint development program that would build and fly the new aircraft by drawing on the powerful support of the Pentagon [AAS History Series, vol. 13, p. 299].

Important decisions came during October 1954, as NACA and Air Force panels weighed in with their support. At the request of General Nathan Twining, the Air Force Chief of Staff, the Aircraft Panel of the Scientific Advisory Board presented its views on the next 10 years of aviation; the panel's report paid close attention to hypersonic flight. In addition, NACA's Committee on Aerodynamics met in executive session to make a formal recommendation concerning the new airplane. The committee included representatives from the Air Force and Navy, from industry, and from universities [Hansen, Transition, pp. 11, 30 (footnote 22)].
Its member from Lockheed, Clarence "Kelly" Johnson, vigorously opposed building this plane, arguing that experience with earlier experimental aircraft had been "generally unsatisfactory." New fighter designs were advancing so rapidly that they threatened to outpace the performance of research aircraft; to Johnson, the high-performance flights of such aircraft had served mainly to prove the bravery of the test pilots. While Johnson pressed his views strongly, he was in a minority of one. The other committee members passed a resolution endorsing "immediate initiation of a project to design and construct a research airplane capable of achieving speeds of the order of Mach number 7 and altitudes of several hundred thousand feet" [Ibid., pp. 12-14].

With this resolution, Hugh Dryden, the head of NACA, could approach his Air Force and Navy counterparts to discuss the initiation of procurement. Detailed technical specifications were necessary; they came, by the end of 1954, from a new three-member committee, with Hartley Soule as the NACA representative. The three members used Becker's study as a guide in deriving the specifications, which called for an aircraft capable of attaining 250,000 feet and a speed of 6,600 feet per second while withstanding reentry temperatures of 1,200 degrees Fahrenheit [Ibid., p. 14; AAS History Series, vol. 8, p. 299]. In addition, as NACA and the military services reached an agreement on procurement procedures, a formal Memorandum of Understanding came from the office of Trevor Gardner, Special Assistant for Research and Development to the Secretary of the Air Force. This document stated that NACA would provide technical direction, that the Air Force would administer design and construction, and that the Air Force and Navy would provide the funding. It concluded, "Accomplishment of this project is a matter of national urgency" [Hallion, ed., Hypersonic, p. 1-6].

Now the project was ready to proceed. Under standard Air Force practices, officials at Wright-Patterson Air Force Base would seek proposals from potential contractors. Early in 1955, the aircraft also received a name: the X-15. Competition between proposals brought the award of a contract for the airframe to NAA; the rocket engine was contracted to Reaction Motors, Inc. [Ibid., pp. I-iv, 11-15]. The NAA design went into such detail that it even specified the heat-resistant seals and lubricants that would be used. Nevertheless, in many important respects it was consistent with the major features of the original feasibility study by Becker's group. The design included wedge-shaped tail surfaces and a heat-sink structure of Inconel X [Astronautics & Aeronautics, February 1964, p. 54].

The X-15 was to remain the fastest and highest-flying airplane until the space shuttle flew into orbit in 1981. In August 1963, the X-15 set an altitude record of 354,200 feet (67 miles), with NASA's Joseph Walker in the cockpit. Four years later, the Air Force's Captain William Knight flew it to a record speed of 4,520 miles per hour, or Mach 6.72 [Hallion, ed., Hypersonic, pp. I-v, I-viii]. In addition to setting new records, the X-15 accomplished a host of other achievements. A true instrument of hypersonic research, in 199 flights it spent nearly nine hours above Mach 3, nearly six hours above Mach 4, and 82 minutes above Mach 5. Although the NACA and the Air Force had hypersonic wind tunnels, the X-15 represented the first application of aerodynamic theory and wind tunnel data to an actual hypersonic aircraft.
The X-15 thus enhanced the usefulness of these wind tunnels by providing a base of data with which to validate (and in some instances to correct) their results. This made it possible to rely more closely on results from those tunnels during subsequent programs, including that of the Space Shuttle. The X-15 used movable control surfaces that substituted for ailerons. It also introduced reaction controls: small rocket thrusters, mounted to the aircraft, that controlled its attitude when beyond the atmosphere. As it flew to the fringes of space and returned, the X-15 repeatedly transitioned from aerodynamic controls to reaction controls and back again. Twenty years later, the Space Shuttle would do the same.

In another important prelude to the shuttle, the X-15 repeatedly flew a trajectory that closely resembled flight to orbit and return. It ascended into space under rocket power, flew in weightlessness, then reentered the atmosphere at hypersonic speed. With its nose high to reduce overheating and aerodynamic stress, it used thermal protection to guard the craft against the heat of reentry. After reentry, it maintained a stable attitude throughout its deceleration, transitioned to gliding flight, and landed at a preselected location. The shuttle would do all these things, albeit at higher speeds. The X-15 used a rocket engine of 57,000 pounds of thrust that was throttleable, reusable, and "man-rated"—safe enough for use in a piloted aircraft. The same description would apply to the more powerful Space Shuttle Main Engine.

The demands of the project pushed the development of practical hypersonic technology in a number of areas. Hot structures required industrial shops in which Inconel X could be welded, machined, and heat-treated. The pilot required a pressure suit for use in a vacuum. The X-15 required new instruments and data systems, including the "Q-ball," which determined the true direction of airflow at the nose. Cooled by nitrogen, the Q-ball operated at temperatures of up to 3,500 degrees Fahrenheit and advised the pilot of the angle of attack suitable for a safe reentry [Ibid., pp. 157-159; AAS History Series, vol. 8, p. 306; Miller, X-Planes, p. 110].

Like the Navaho, the X-15 also spurred the rise of people and institutions that were to make their mark in subsequent years. At NACA-Langley, the X-15 combined with the rocket flights of PARD to put an important focus on hypersonics and hypervelocity flight. Leaders in this work included such veterans as Robert Gilruth, Maxime Faget, and Charles Donlan [NASA SP-4308; see index references]. A few years later, these researchers parlayed their expertise into leadership in the new field of piloted space missions. In addition, part of NACA-Langley split off to establish the new Manned Spacecraft Center in Houston as NASA's principal base for piloted space flight. Gilruth headed that center during the Apollo years, while Faget, who had participated in Becker's 1954 X-15 feasibility study, became a leading designer of piloted spacecraft [NASA SP-4307; see index references].

The X-15 program brought others to the forefront as well. At NAA, the vice president for the program, Harrison "Stormy" Storms, became president of that company's Space Division in 1960. While Gilruth was running the Manned Spacecraft Center, Storms had full responsibility for his division's elements of Apollo: the piloted spacecraft and the second stage of the Saturn V Moon rocket [resume of Harrison A. Storms].
In addition, Neil Armstrong, the first man to set foot on the Moon, was among the test pilots of the X-15 [Miller, X-Planes, p. 108].

Although the X-15 emerged as a winged rocket par excellence, an alternate viewpoint held that future rocket craft of this type could have many of the advantages of wings without actually having any of these structures. Such craft would take shape as "lifting bodies": wingless, bathtub-shaped craft able to generate lift with the fuselage. This would allow them to glide to a landing. At the same time, such craft would dispense with the weight of wings, and with their need for thermal protection.

How can a bathtub generate lift, and fly? Lift is the force generated when the aerodynamic pressure is greater below an aircraft than above it. Wings achieve this through careful attention to their shape; a properly shaped aircraft body can do this as well. The difference is that wings produce little drag, whereas lifting bodies produce a great deal of it (a short quantitative note on this tradeoff follows at the end of this passage). Hence the lifting-body approach is unsuitable for such uses as commercial aviation, where designers of airliners seek the lowest possible drag. Space flight, however, is another matter.

The lifting-body concept can be traced back to the work of H. Julian Allen and Alfred Eggers at NACA's Ames Aeronautical Laboratory near San Francisco. Allen developed the blunt-body concept for a missile's nose cone, shaping it with help from Eggers. They then considered that a reentering body, while remaining blunt to reduce the heat load, might have a form that would give lift, thus allowing it to maneuver at hypersonic speeds. The 1957 M-1 featured a blunt-nosed cone with a flattened top. While it had some capacity for hypersonic maneuverability, it could not glide subsonically or land horizontally. It was hoped that a new shape, the M-2, would do these things as well. Fitted with two large vertical fins for stability, it was a basic configuration suitable for further research [Hallion, ed., Hypersonic, pp. 529, 535, 864-866].

Beginning in 1959, a separate line of development took shape within the Flight Dynamics Laboratory at Wright-Patterson Air Force Base. The program that developed sought to advance beyond the X-15 by building small hypersonic gliders, which would study the performance of advanced hot structures at speeds of up to 13,000 miles per hour, three-fourths of orbital velocity. This program was called ASSET—Aerothermodynamic/elastic Structural Systems Environmental Tests [Ibid., pp. 449-450, 505].

The program went forward rapidly by remaining small. Its manager, Charles Cosenza, directed it with a staff of four engineers plus a secretary, with 17 other engineers at Wright-Patterson providing support [Ibid., p. 459]. In April 1961, the Air Force awarded a contract to McDonnell Aircraft Corp. for development of the ASSET vehicle. McDonnell was already building the small piloted capsules of Project Mercury; the ASSET vehicle was also small, with a length of less than six feet. Not a true lifting body, it sported two tiny, highly-swept delta wings. Its bottom, which would receive the most heat, was a flat triangle. For thermal protection, this triangle was covered with panels of columbium and molybdenum, which would radiate away the heat while withstanding temperatures up to 3,000 degrees Fahrenheit. The nose was made of zirconium oxide that would deal with temperatures of up to 4,000 degrees [Ibid., pp. 451, 452, 464-469].
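The wing-versus-lifting-body tradeoff noted above can be stated in standard aerodynamic terms (a textbook relation, not language from the lifting-body studies themselves). Lift and drag both scale with dynamic pressure, reference area S, and a shape-dependent coefficient:

\[ L = \tfrac{1}{2}\rho v^{2} S C_{L}, \qquad D = \tfrac{1}{2}\rho v^{2} S C_{D}. \]

What distinguishes a wing from a lifting body is the ratio L/D = C_L/C_D: a subsonic airliner wing achieves a lift-to-drag ratio on the order of 15 to 20, while a piloted lifting body of this era managed only about 3 to 4 at landing speeds, which is the source of the "flew like a brick" handling described later in this account.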
Beginning in September 1963 and continuing for a year and a half, five of the six ASSET launches were successful. They used Thor and Thor-Delta launch vehicles, the latter being a two-stage rocket that could reach higher velocities. The boosters lofted their ASSETs to altitudes of about 200,000 feet. The spacecraft then would commence long hypersonic glides, with ranges as great as 2,300 nautical miles. Onboard instruments transmitted data on temperature and heat flow. The craft were equipped to float following splashdown; one of them actually did this, permitting direct study of an advanced hot structure that had survived its baptism by fire [Ibid., pp. 504-519].

The success of ASSET led to Project PRIME—Precision Recovery Including Maneuvering Entry. Beginning in late 1964, the contract for this Air Force project went to the Martin Co., where interest in lifting bodies had flourished for several years. Unlike ASSET, PRIME featured true lifting bodies, teardrop-shaped and fitted with fins. PRIME was slated to ride the Atlas, which was more powerful than the Thor-Delta and could reach near-orbital speeds [Hallion, Path, pp. 30-31].

Whereas ASSET had executed simple hypersonic glides, PRIME carried out the more complex maneuver of achieving crossrange, namely, flying far to the left or right of its flight path. Indeed, demonstrating such reentry maneuvering was its reason for being. PRIME did not attempt to produce data on heating, for ASSET had covered this point nicely, nor did it break new ground in its construction. Slightly larger than ASSET, it used a conventional approach for missile nose cones: an aluminum structure covered with a thermally protective "ablative" layer that carried away heat by vaporizing in a controlled fashion during reentry. The ablative material also served as insulation to protect the underlying aluminum. With its peak speed topping 17,000 mph, PRIME could bridge the Pacific, flying from Vandenberg Air Force Base in California to Kwajalein in the Marshall Islands. In April 1967, during its best performance, PRIME achieved a crossrange of 710 miles, putting it within five miles of its target. A waiting recovery plane snatched PRIME in mid-air as it descended by parachute [Hallion, ed., Hypersonic, pp. V-ii, V-iv, 702-703].

ASSET and PRIME demonstrated the value of lifting bodies at the hypersonic end of the flight path: gliding, maneuvering, surviving reentry with advanced hot structures. Both types of craft, however, used parachutes for final descent, making no attempt to land like conventional aircraft. If lifting bodies were truly to have merit, they would have to glide successfully not only at hypersonic speeds but also at the slow speed of an aircraft on final approach to a runway. Under the control of a pilot, lifting bodies would have to maintain stable flight all the way to a horizontal touchdown. These requirements led to a second round of lifting-body projects focusing on approach and landing. These projects went forward at the same time as ASSET and PRIME.

R. Dale Reed, the initiator of this second round of projects, was a sailplane enthusiast, a builder of radio-controlled model airplanes, and a NASA engineer at Edwards Air Force Base. He had followed with interest the work at NASA-Ames on the M-2 lifting-body shape, and he resolved to build it as a piloted glider. He drew support from the local community of aircraft homebuilders. Designated the M2-F1, the aircraft was built of plywood over a tubular steel frame.
Completed in early 1963, the aircraft was 20 feet long and 13 feet across. The M2-F1 needed a vehicle that could tow it along the ground to help get it into the air for initial tests. The M2-F1, however, produced a lot of drag and needed a tow car with more power than NASA's usual vans and trucks. Reed and his friends bought a stripped-down Pontiac with a big engine and a four-barrel carburetor that could reach speeds of 110 mph. The car was turned over to a funny-car shop in Long Beach for modification. Like any other flight-line vehicle, it was sprayed yellow, and "National Aeronautics and Space Administration" was added on its side.

Initial piloted tow tests showed reasonable success, allowing the project to use a C-47, called the Gooney Bird, for true aerial tests. During these tests, the Gooney Bird towed the M2-F1 above 10,000 feet, then set it loose to glide to an Edwards AFB lake bed. Beginning in August 1963, the test pilot Milt Thompson did this repeatedly. Through these tests, Reed, working on a shoestring budget, showed that the M2 shape, optimized for hypersonic reentry, could glide down to a safe landing.

During much of this effort, Reed had support from the NASA director at Edwards, Paul Bikle. As early as April 1963, Bikle alerted NASA Headquarters that "the lifting-body concept looks even better to us as we get more into it." The success of the M2-F1 spurred interest in the Air Force as well, as some of its officials, along with their NASA counterparts, set out to pursue piloted lifting-body programs that would call for more than plywood and funny cars [NASA SP-4303, pp. 148-152].

NASA contracted with the firm of Northrop to build two such aircraft, the M2-F2 and the HL-10. The M2-F2 amounted to an M2-F1 built to NASA standards; the HL-10 drew on an alternate lifting-body design by Eugene Love of NASA-Langley. This meant that NASA-Langley and NASA-Ames would each have a project. Northrop, moreover, had a penchant for oddly-shaped aircraft. During the 1940s, the company had built flying wings, which essentially were aircraft without a fuselage or tail. With these lifting bodies, Northrop would now build craft that were entirely fuselage and lacked wings. The Air Force project, the X-24A, went to the Martin Co., which built it as a piloted counterpart of PRIME, maintaining the same shape [Hallion, Path, pp. 29, 31-32].

All three flew initially as gliders, with a B-52 rather than a C-47 as the mother ship. The B-52 could reach 45,000 feet and 500 mph, four times the altitude and speed of the old Gooney Bird [Miller, X-Planes, p. 153]. It had routinely carried the X-15 aloft, acting as a booster for that rocket plane; now it would do the same for the lifting bodies. Their shapes differed, and as with the M2-F1, a major goal was to show that they could maintain stable flight while gliding, land safely, and exhibit acceptable pilot handling qualities [Ibid., p. 151; NASA SP-4303, p. 153].

These goals were not always met. Under the best of circumstances, a lifting body flew like a brick at low speed. Lowering the landing gear made the problem worse by adding drag. In May 1967, the test pilot Bruce Peterson, flying the M2-F2, failed to get his gear down in time. The aircraft hit the lake bed at more than 250 mph, rolled over six times, and then came to rest on its back, minus its cockpit canopy, main landing gear, and right vertical fin. Peterson, who might have died in the crash, got away with a skull fracture, a mangled face, and the loss of an eye.
While surgeons reconstructed his face and returned him to active duty, the M2-F2 needed surgery of its own. In addition to an extensive reconstruction back at the factory, Northrop engineers added a third vertical fin that improved its handling qualities and made it safer to fly. Similarly, while the rival HL-10 had its own problems of stability, it flew and landed well after receiving modifications [NASA SP-4303, pp. 159, 161-162; Spaceflight, vol. 21 (1979), pp. 487-489].

These aircraft were fitted with small rocket engines that allowed acceleration to supersonic speeds. This made it possible to test stability and handling qualities when flying close to the speed of sound. The HL-10 set records for lifting bodies by making safe approaches and landings from speeds up to Mach 1.86 and altitudes of 90,000 feet [NASA SP-4303, p. 162]. The Air Force continued this work through 1975, having the Martin Co. rebuild the X-24A with a long pointed nose, a design well suited to supersonic flight. The resulting craft, the X-24B, looked like a wingless fighter-plane fuselage. It also flew well [Miller, X-Planes, pp. 156-160].

In contrast to the Navaho and X-15 efforts, work with lifting bodies did not create major new institutions or lead existing ones in important new directions. This work, however, did extend that of the X-15, with the hot-structure flights of ASSET and the maneuvering reentries of PRIME. The piloted lifting bodies then demonstrated that, with the appropriate arrangements of fins, they could remain stable and well-controlled while decelerating through the sound barrier and gliding to a landing. They thus broadened the range of acceptable hypersonic shapes.

The X-15 and lifting-body programs demonstrated many elements of a reusable launch vehicle in such critical areas as propulsion, flight dynamics, structures, thermal protection, configurations, instruments, and aircraft stability and control. However, the reason for reusability would be to save money, and an airplane-like orbiter would need a low-cost booster as a first stage. During the 1950s and 1960s, the Navy, Air Force, and NASA laid the groundwork for such boosters by sponsoring pathbreaking work with solid propellants.

The path to such propellants can be traced back to a struggling firm called Thiokol Chemical Corp. Its initial stock-in-trade was a liquid polysulfide polymer that took its name (Thiokol) from the Greek for "sulfur glue" and could be cured into a solvent-resistant synthetic rubber. During World War II, it found limited use in sealing aircraft fuel tanks—a market that disappeared after 1945. Indeed, business was so slow that even small orders would draw the attention of the company president, Joseph Crosby. When Crosby learned that the California Institute of Technology (CIT) was buying five- and ten-gallon lots in a steady stream, he flew to California to investigate the reason behind the purchases. He found a group of rocket researchers, loosely affiliated with CIT, working at a place they called the Jet Propulsion Laboratory. They were mixing Crosby's polymer with an oxidizer and adding powdered aluminum for extra energy. They were using this new propellant in ways that would make it possible to build solid-fuel rockets of particularly large size [Fortune, June 1958, p. 109].

Crosby soon realized that he too could get into the rocket business, with help from the Army. While Army officials could spare only $250,000 per year to help him get started, to Crosby this was big money.
In 1950, Army Ordnance gave him a contract to build a rocket with 5,000 pounds of propellant. A year and a half later it was ready, with a sign on the side: "The Thing." Fourteen feet long, it burned for over forty seconds and delivered a thrust of 17,000 pounds [Ibid., p. 190; Thiokol's Aerospace Facts, July-September 1973, p. 10; Saturday Evening Post, October 1, 1960, p. 87].

The best solid propellants of the day were of the "double base" type, derived from the explosives nitroglycerine and nitrocellulose. Some versions could be cast in large sizes. These propellants, however, burned in a sudden rush and could not deliver the strong, steady push needed for a rocket booster. The new Thiokol-based fuel emerged as the first of a type that performed well and burned at a reasonable rate. These fuels drew on polymer chemistry to form thick mixtures resembling ketchup. Poured into a casing, they then polymerized into resilient, rubbery solids [Huggett et al., Solid, pp. 125-128; Ley, Rockets, pp. 171-173, 193, 436-438; Cornelisse et al., Propulsion, pp. 170-174].

The Navy also took an interest in solid propellants, initially for use in antiaircraft missiles. In 1954, a contractor in suburban Virginia, Atlantic Research, set out to achieve further performance improvements. Two company scientists, Keith Rumbel and Charles Henderson, focused their attention on the use of powdered aluminum. Other researchers had shown that propellants gave the best performance with an aluminum mix of five percent; higher levels caused a falloff. Undiscouraged, Rumbel and Henderson decided to try mixing in really large amounts. The exhaust velocity, which determines the performance of a rocket (see the note following this passage), took a sharp leap upward. By early 1956, they had confirmed this discovery with test firings. Their exhaust velocities, 7,400 feet per second and greater, compared well with those of liquid fuels such as kerosene and liquid oxygen [Baar and Howard, Polaris!, pp. 31-32].

By then the Navy was preparing to proceed with Polaris, a program that sought to send strategic missiles to sea aboard submarines. Initial design concepts were unpleasantly large; a submarine would be able to carry only four such missiles, and the submarine itself would be excessive in size. The breakthrough in propellants coincided with an important advance that markedly reduced the weight of thermonuclear weapons. Lighter warheads meant smaller missiles. These developments combined to yield a solid-fueled Polaris missile that was very compact. Sixteen of them would fit into a conventional-sized submarine [Journal of Spacecraft and Rockets, vol. 15 (1978), pp. 265-278].

The new propellants, and the lightweight warheads, also drew interest within the Air Force, though its needs contrasted sharply with those of the Navy. Skippers could take time in firing undersea missiles, for a submarine could hide in the depths until it was ready for launch; admirals preferred solid fuels over liquids simply because they presented less of a fire hazard. The Air Force, by contrast, was prepared to use liquid propellants in its ICBMs, but these would take time to fuel and prepare for launch—and during that time they would lie open to enemy attack. With solid propellants, a missile could be fueled in advance and ready for instant launch. Moreover, such a missile would be robust enough to fire from an underground chamber. Prior to launch, that chamber would protect the missile against anything short of a direct nuclear hit.
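The sense in which exhaust velocity "determines the performance of a rocket" can be made precise with the classical rocket equation (a textbook relation, included here for clarity rather than drawn from the sources cited):

\[ \Delta v = v_{e} \ln\!\left(\frac{m_{0}}{m_{1}}\right), \]

where v_e is the exhaust velocity, m_0 the loaded mass, and m_1 the mass at burnout. The attainable velocity change grows in direct proportion to v_e but only logarithmically with the mass ratio, so a jump to 7,400 feet per second bought more performance than any practical amount of added propellant could have.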
Lieutenant Colonel Edward Hall, who had midwifed the birth of the Navaho during the 1940s, now played a leading role in this newest project. He was the propulsion officer on the staff of Major General Bernard Schriever, who was responsible for the development of the Atlas, Titan, and Thor. Hall developed a passionate conviction that an Air Force counterpart of Polaris would offer considerable advantage in facing the Soviet ICBM capability.

At the outset of the new project, he addressed the problem of constructing very large solid-fuel charges, called grains. He could not draw on the grains of the Polaris, for that missile had grains of limited size. Hall gave contracts to all of the several solid-fuel companies then in business. Thiokol's Crosby, who had lost the Polaris contract to Aerojet General, now saw a chance to recoup. He bought a large tract of land near Brigham City, Utah, a remote area where the shattering roar of rockets would have plenty of room to die away. In November 1957, his researchers successfully fired a solid-fuel unit with 25,000 pounds of propellant, the largest to date.

Meanwhile, Hall had taken charge of a working group that developed a preliminary design for a three-stage solid-fuel ICBM. Low cost was to be its strong suit, for Hall hoped to deploy it in very large numbers. Early in 1958, with the test results from Thiokol in hand, Hall and Schriever went to the Pentagon and pitched the concept to senior officials, including the Secretary of Defense. But while that missile, named the Minuteman, might be launched on a minute's notice, it would take most of 1958 to win high-level approval for a fast pace of development. Barely two years later, in early 1961, the Minuteman was ready for its first flight from Cape Canaveral. It scored a brilliant success as all three stages fired and the missile flew to full range. The Air Force proceeded to raise the Minuteman to the status of a crash program. The first missiles were operational in October 1962, in time for the Cuban Missile Crisis. Because its low cost made it the first strategic weapon capable of true mass production, the Air Force went on to deploy 1,000 Minuteman missiles [Emme, ed., History, pp. 155-159; Neufeld, Ballistic Missiles, pp. 227-230, 237, 239; Fortune, June 1958, pp. 190-192].

The Air Force and NASA also prepared to build solid-fuel boosters of truly enormous size for use with launch vehicles. In contrast to liquid rockets, which were sensitive and delicate, the big solids featured casings that a shipyard—specifically, the Sun Shipbuilding and Dry Dock Company, near Philadelphia—could manufacture successfully. The Minuteman's first stage had a 60-inch diameter. In August 1961, United Technology Corp. fired a 96-inch solid rocket that developed 250,000 pounds of thrust. The following year saw the first 120-inch tests—twice the diameter of the Minuteman—that reached 700,000 pounds of thrust. The next milestone came when the diameter increased to 156 inches, the largest size compatible with rail transport. During 1964, both Thiokol and Lockheed Propulsion Co. fired test units that topped the million-pound-thrust mark. Large rocket stages, however, can move by barge over water as well as by land, and Aerojet was building versions with 260-inch diameters. It took some doing just to ignite such a behemoth.
The answer called for a solid rocket that itself developed a quarter-million pounds of thrust, producing an eighty-foot flame that would ignite the inner surface of the big one all at once. This igniter rocket needed its own igniter, a solid motor that weighed a hundred pounds and generated 4,500 pounds of thrust. The 260-inch motor was kept in a test pit with its nozzle pointing upward. In February 1966, a night firing near Miami shot flame and smoke a mile and a half into the air, visible nearly 100 miles away. In June 1967, another firing set a new record with 5.7 million pounds of thrust [Quest, Spring 1993, p. 26; Astronautics, December 1961, p. 125; November 1962, p. 81; Astronautics and Aerospace Engineering, November 1963, p. 52; Astronautics & Aeronautics, February 1965, pp. 42-43].

At NASA's Marshall Space Flight Center, a 1965 study projected that production costs for a 260-inch motor would run to $1.50 per pound of weight, or roughly a dollar per pound of thrust. This contrasted sharply with the liquid-fueled Saturn V, which, with 7.5 million pounds of thrust versus 6 million for the big solid, was in the same class. Even without its Apollo moon-ship, however, the Saturn V cost $185 million to purchase, over thirty times more than the 260-inch motor (the arithmetic is spelled out in the note after this passage). By 1966, NASA officials were already looking ahead to sizes as large as 600 inches, noting that "there is no fundamental reason to expect that motors 50 feet in diameter could not be made" [Astronautics & Aeronautics, January 1966, p. 33; NASA budget data, February 1970].

Meanwhile, the Air Force was not only testing big solids but also preparing to use them operationally, as part of the Titan program, which, in a decade, had evolved from building ICBMs to assembling a launch vehicle of great power. At the outset, Titan I was a two-stage ICBM project that ran in parallel with Atlas and used similar engines in the first stage. While it was deployed as a weapon, it was never used to launch a spacecraft or satellite [Emme, ed., History, pp. 145, 147].

The subsequent Titan II represented a major upgrade, as the engine contractor, Aerojet General, developed new engines that markedly increased the thrust in both stages. It too reached deployment, carrying a heavy thermonuclear warhead with a yield of nine megatons. With this load lightened somewhat, the Titan II was able to thrust a payload into orbit repeatedly. In particular, during 1965 and 1966, the Titan II carried 10 piloted Gemini spacecraft, each with two astronauts. Their weight ran above 8,300 pounds [NASA SP-4012, vol. II, pp. 83-85; Quest, Winter 1994, p. 42; Thompson, ed., Space Log, vol. 27, 1991, p. 87].

The Air Force's Titan III-A added a third stage (the "transtage") to the Titan II, which enhanced its ability to carry large payloads. It never served as an ICBM, but worked as a launch vehicle from the start. In particular, it served as the core for the Titan III-C, which flanked that core with a pair of 120-inch solid boosters. The resulting rocket had more than a casual resemblance to the eventual Space Shuttle, which would use two somewhat larger solid boosters in similar fashion. The boosters lifted the Titan III-C with 2.36 million pounds of thrust, then fell away after burnout, leaving the core to ignite its first stage high in the air. The Titan III-C had a rated payload of 23,000 pounds. NASA replaced the transtage with the more capable Centaur upper stage, which used liquid hydrogen as a high-energy fuel.
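The "over thirty times" figure follows directly from the numbers given in the text (the arithmetic here is mine): at roughly a dollar per pound of thrust, a 6-million-pound-thrust 260-inch motor would cost about $6 million, and

\[ \frac{\$185\ \text{million}}{\$6\ \text{million}} \approx 31. \]

Comparisons of this kind are what made the big solids look like the low-cost path to a booster stage.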
This version, the Titan III-E Centaur, increased the payload to 33,000 pounds. Martin Marietta, the Titan III contractor, also proposed to delete the third stage while increasing the thrust of both the solid boosters and the core. This version, the Titan III-M, was never built, but it would have lifted a payload of 38,000 pounds [NASA SP-4012, vol. III, pp. 38-42; Quest, Fall 1995, p. 18; AAS History Series, vol. 13, pp. 19-35].

Hence, during the 1960s, the X-15, ASSET, PRIME, lifting-body, and solid-booster efforts all combined to provide a strong basis for the Space Shuttle program. Such a program might build an orbiter in the shape of a lifting body, with a hot structure for thermal protection. Piloted, it could maneuver during atmospheric entry, ride through the heat of reentry with its nose up, then transition to gliding flight and fly to a landing, perhaps at Edwards Air Force Base.

Moreover, long before those early projects had reached completion (and even before some of them were underway), the Air Force set out to build a mini-shuttle that would ride a Titan III-C to orbit and then return. This project was called Dyna-Soar and, later, the X-20.

During the mid-1950s, with the Bomi studies of Bell Aircraft in the background and the X-15 as an ongoing program, a number of people eagerly carried out further studies that sought to define the next project beyond the X-15. The ideas studied included Hywards (a piloted hypersonic boost-glide research aircraft), the Robo (Robot Bomber), and two reconnaissance vehicles, the System 118-P and the Brass Bell. With so many cooks in the kitchen, the Air Force needed a coordinated program in order to produce something as specific as the X-15. Its officials were in the process of defining this program when, in October and November 1957, the Soviet Union launched the world's first satellites. Very quickly, hypersonic flight became one of the means by which the U.S. might turn back the challenge from Moscow.

Having read the work of Sänger, hypersonic specialists knew of his ideas for a skipping entry as a way to extend the range of a suborbital aircraft. The Air Force described this maneuver as "dynamic soaring," and the craft that would perform it acquired the name Dyna-Soar. By early 1958, this idea was being studied seriously by a number of aeronautical contractors, with the clear understanding that the Air Force intended to request proposals and build a flying prototype. In June 1958, the Air Force narrowed the competition to two contenders: Boeing and a joint Bell Aircraft and Martin Co. team [AAS History Series, vol. 17, pp. 255-259].

By then, Dyna-Soar was caught up in the first round of a controversy as to whether this craft should be the prototype of a bomber. While the powerful Air Research and Development Command (ARDC) firmly believed that Dyna-Soar should be the prototype of a piloted military spaceplane, it found it difficult to point to specific military missions that such a craft could carry out. For nuclear weapons delivery, the Air Force was already building the Atlas, Titan, and Thor. For strategic reconnaissance, the Central Intelligence Agency had launched, in 1958, a program that aimed to build automated camera-carrying satellites and put the first ones into orbit in as little as one year [Ibid., p. 260; Ruffner, ed., Corona, pp. 3-14]. Air Force Headquarters, with support from the Office of the Secretary of Defense, refused to consider weapon-system objectives unless ARDC could define suitable military missions.
Early in 1959, Deputy Secretary of Defense Donald Quarles wrote that his approval was only "for a research and development project and did not constitute recognition of Dyna-Soar as a weapon system." In April, the Defense Director of Research and Engineering, Herbert York, made a clear statement of the program's objectives. Its primary goal would involve hypersonic flight up to a speed of 15,000 miles per hour, which would fall short of orbital velocity. The vehicle would be piloted, maneuverable, and capable of landing at a preselected base. York also threw a bone to ARDC, stating that it could pursue its own goal of testing military systems—provided that such tests did not detract from the primary goal. ARDC officials hastened to affirm that there would be no conflict. They promptly issued System Requirement 201, stating that Dyna-Soar would "determine the military potential of a boost-glide weapon system" [AAS History Series, vol. 17, p. 260].

In November 1959, the contract award went to Boeing. Two weeks later, the Air Force's Assistant Secretary for Research and Development, Joseph Charyk, said "not so fast." He was well aware that the project already faced strong criticism because of its cost, as well as from Eisenhower Administration officials who opposed space-based weapon systems. In addition, a number of technical specialists doubted that the concept could be made to work. Charyk therefore ordered a searching reexamination of the project that virtually reopened the earlier competition. In April 1960, the Aerospace Vehicles Panel of the Air Force Scientific Advisory Board gave Dyna-Soar a go-ahead by approving Boeing's design concept, with minor changes.

During the next three and a half years, the program went forward as its managers reached for higher performance. The 1960 plan called for the use of a Titan I as the launch vehicle. Because the Titan I lacked the power to put it in orbit, the Dyna-Soar would fly suborbital missions only. Over the next year and a half, however, the choice of booster changed to the Titan II and then to the powerful Titan III-C. A new plan, approved in December 1961, dropped suborbital flights and called for "the early attainment of orbital flight, with the Titan III booster." This plan initially called for single-orbit missions that would not require the craft to carry an onboard retro-rocket for descent from orbit. Instead, the booster, launched from Cape Canaveral, would place the craft on a trajectory that would reenter the atmosphere over Australia. It then would cross the Pacific in a hypersonic glide, to land at Edwards Air Force Base. In May 1962, the plan broadened anew to include multi-orbit flights. Dyna-Soar now would ride atop the Titan III transtage, which would inject it into orbit and remain attached to serve as a retro-rocket at mission's end [Ibid., pp. 261-269].

The piloted Dyna-Soar spacecraft emerged with highly-swept delta wings and two upturned fins at the wingtips. With a length of 35 feet, it lacked an onboard rocket engine and provided room for a single pilot only. Like ASSET, it relied on advanced hot structures, with a heat shield of columbium, well insulated, atop a main structure built from a nickel alloy that had been developed for use in jet engines [Ibid., pp. 277-279]. In September 1962, a full-scale mockup was the hit of the show at an Air Force Association convention in Las Vegas, and the Air Force named six test pilots who would fly Dyna-Soar as its astronauts [Ibid., p. 269].
The question of military missions raised its head again when, in mid-1961, the new Defense Secretary, Robert McNamara, directed the Air Force to justify Dyna-Soar on military grounds. Air Force officials discussed orbital reconnaissance, rescue, inspection of Soviet spacecraft, orbital bombardment, and use of the craft as a ferry vehicle. While McNamara found these reasons unconvincing, he nevertheless remained willing to let the program proceed as a research effort, dropping all consideration of a possible use of the craft as a weapon system. In an October 1961 memo to President Kennedy, McNamara proposed to "re-orient the program to solve the difficult technical problem involved in boosting a body of high lift into orbit, sustaining man in it and recovering the vehicle at a designated place" [Spaceflight, vol. 21 (1979), pp. 436-438].

This reorientation gave the project another two years of life. With its new role as an experimental craft, it was designated by Air Force Headquarters as the X-20. In this new role, however, the program could not rely on a military justification; it would have to stand on its value as research. By 1963, this value was increasingly in question. ASSET, with its unpiloted craft, was promising to demonstrate hypersonic gliding entry and hot-structure technology at far lower cost. In the realm of piloted flight, NASA was now charging ahead with its Gemini program. Air Force officials were expecting to participate in this program as well, for these officials still believed that their service in time would build piloted spacecraft for military purposes.

In March 1963, McNamara ordered a study that would seek to determine whether Gemini or the X-20 could better serve the role of a testbed for military missions. The results of the study gave no clear reason to prefer the latter. In October, Air Force officials, briefing the President's Scientific Advisory Committee, encountered skepticism in this quarter as well. Two weeks later, McNamara and other senior officials received their own briefing. McNamara asked what the Air Force intended to do with the X-20 after using it to demonstrate maneuvering reentry. He insisted he could not justify continuing the project if it was a dead-end program with no ultimate purpose. He canceled it in December, stating that the program's purpose had been to demonstrate maneuvering reentry and precision landing. The X-20 was not to serve as a cargo rocket, could not carry substantial payloads, and could not stay in orbit for long-duration missions. He could not justify continuing with the program because it was costly and would serve "a very narrow objective" [AAS History Series, vol. 17, pp. 271-275].

At that moment the program, well past the stage of paper studies, called for the production of 10 X-20 vehicles, and Boeing had completed nearly 42 percent of the necessary tasks. While McNamara's decision drew hot criticism, he had support where it counted; the X-20 did not. Eugene Zuckert, the Air Force Secretary, continued to endorse the program to the end, but the project had little additional support among the Pentagon's civilian secretaries. Within the Air Force, the Space Systems Division (SSD) was to conduct pilot training and carry out the flights. Support for the X-20, however, was lukewarm both at the SSD and at Aerospace Corp., its source of technical advice. General Bernard Schriever, commander of the ARDC [redesignated Air Force Systems Command in 1961], was also lukewarm.
So was his deputy commander for aerospace systems, Lieutenant General Howell Estes [Ibid., p. 275; Hallion, ed., Hypersonic, p. II-xvii].

This was the life and death of the Dyna-Soar. From its demise one can draw several conclusions. By 1963, the program's technical feasibility was no longer in question; it was just a matter of putting the pieces together. Yet although aerospace vehicles were continuing to evolve at a rapid pace, no technical imperative existed that could call the X-20 into existence. The program needed a mission: a justification sufficiently compelling to win political support from high-level officials. Dyna-Soar demonstrated that even though the means were in hand to pursue the development of a vehicle resembling the Space Shuttle, such a project would stand or fall on its merits. To be built, it would require a reason capable of attracting and winning endorsement from presidential appointees and other leaders at the highest levels.
In the previous post of this series, while celebrating the 10th anniversary of the referendum in East Timor, we presented the way in which the international community stood up in support of the freedom of the Timorese people. In this piece we interview Timorese writer Abe Barreto Soares to discuss "Timorese Nationalism Seen Through the Eyes of its Poets," the essay that he has recently published [tet, pt].

A blogger since 2007, Abe (or his cyber-pseudonym, Jenuvem Eurito, as he was called by his friends in his youth) shares his words and thoughts in four languages, often analysing literary work relevant to the self-determination of his country. Moreover, Abe thoroughly discusses the construction of a national consciousness after the fight for independence. Taking advantage of the capacity of blogs to foster global connections and distance conversations in original ways, he describes his blogs as "sweet words, caring words, in a venue for people to talk to each other, sharing with each other on "what" and "how" life goes in the world."

But Abe's words and actions have not always been this free. As he says of the Indonesian occupation of Timorese territory:

I felt like my hands and mouth were tied. I couldn't say what I felt about East Timor.

Global Voices Online (GVO): Where were you 10 years ago? Can you tell us a bit about your life?

Abe Barreto Soares (ABS): During the time of the referendum, I was overseas. I happened to be in Portugal at the time. Along with other Timorese compatriots, I cast my vote in Lisbon. I left Timor-Leste in 1985 to pursue my university studies, taking English as my major at Gadjah Mada University, Yogyakarta, Indonesia. Then, I left for Canada to take part in a cultural exchange program in early September 1991. On November 12, 1991, the [Santa Cruz] massacre occurred, when I was about to finish my program. Concerned for my personal safety if I were to return to Indonesia, I finally decided to stay in Canada and seek political asylum. I spent 7 years in Canada, campaigning for a free and independent Timor-Leste through diplomacy and cultural activities (using music as a tool to alert the outside world to what was really going on in the country). I had the chance to spend a year and a half in Portugal, from the spring of 1998 until the fall of 1999. Then, I went to Macau for journalistic training with a Portuguese news agency, Lusa, for six months (October 1999 until March 2000). I returned to Timor-Leste in July 2000. Since then, I have been working in UN missions in Timor-Leste both as an information assistant and a translator/interpreter.

GVO: How did you have access to Timorese literature during the Indonesian times?

ABS: During the Indonesian times, while doing my studies in Yogyakarta, I came across books on Timor-Leste such as "East Timor: Nationalism and Colonialism" by Jill Jolliffe, a journalist from Australia. From this book I discovered the late Timorese poet Francisco Borja da Costa. One of the lines of his poetry appearing in the book: "smother my revolts/ with the point of your bayonet/ torture my body/ in the chains of your empire/ subjugate my soul/ in the faith of your religion…/" really fired the sense of nationalism within me. And through the book "Funu: The Unfinished Saga of East Timor" by José Ramos-Horta (current President of the Republic of Timor-Leste) I discovered Fernando Sylvan.

Pedem-me um minuto de silêncio pelos mortos mauberes.
Respondo que nem por um minuto me calarei.
They ask me for a minute of silence for the dead Mauberes.
I answer that not for one minute shall I shut up.

GVO: You often quote the Timorese poet Fernando Sylvan. In what ways do you take advantage of poetry in order not to shut up, as he recommends in the poem above?

ABS: A poet is a spokesperson of his or her era. He or she should break the silence when it comes to oppression. Living on this planet, we are in a constant battle between the dark and the light. A poet should be at the forefront, carrying the torch. He or she is the "warrior of the light" (I borrow this concept from Paulo Coelho, the Brazilian writer). As an artist I have to be ready at any time to engage in the spiritual war. Words are my swords. Hopefully, my words will provoke people so that they can be in tune with themselves all the time in creating harmony on this wonderful planet.

GVO: Do your blogs in four different languages reflect the way people communicate in Timor?

ABS: Timorese like me have to be creative in taking advantage of the 'blessing' of colonialism and globalization. Aside from using my own mother tongue, Tetum, and my father's mother tongue, Galole, which I am good at, I also use English and Indonesian in my literary career. I am proud of using them to communicate what I think and feel. I would love, someday soon, to create a Portuguese blog as well.

GVO: Why have you created a Korespondensia Literaria (Literary Correspondence, tet) category on one of your blogs?

ABS: I created the "korrespondensia literaria" entry on my Tetum blog in an attempt to convey to outside readers the correspondence I have had with my fellow literary friends through SMS. Practically speaking, transferring them onto a blog can be considered a way to save those messages. As a man of letters I need to engage in constant communication with friends the world over. I want to learn a lot from them. I want to commune with the philosophy of Greenpeace: "think globally, and act locally."

[SMS:] ITA-BOOT NIA BATINA/ha'u moras todan: ha'u klamar terus/fó lisensa mai ha'u-ata atu kaer Ita-Boot nia batina/fakar mós Ita-Boot nia mina oliveira domin nian mai ha'u-ata/ hodi nune'e ha'u bele di'ak filafali ho lalais// [21:51:11//11-2-2009]

1. R. D. = "Se mak bulak ida ne'e?" [maisumenus tuku 10 kalan]
2. Suzana TP = "Diak pois há'u haruka ba suli hanesan tasi" [22:08:53//11-2-2009]
3. Atoi R. = "Obrigado maibé ha'u la kompriende" [22:18:00//11-2-2009]
4. Pe. Olá = "Sajak ne'e tau nia titulu, Jesus. Bele atrai liu" [11:55:12//12-2-2009]
5. F. Nascimento = "We matan mos, we liman diak, halo suli mai, fakar mos mai, ami iha lerek susar no terus laran. Tan Ita Boot, ami Nain deit. Laran luak tebes no kmanek wain basuk." [12:56:05//12-2-2009]

a. R. D. = Who the hell is this? [around 10 PM]
b. Suzana TP = OK, I will then send it back to you, flowing like a sea [22:08:53//11-2-2009]
c. Atoi R. = Thank you, but I do not understand. [22:18:00//11-2-2009]
d. Father Olá = The title of the poem should be "Jesus." Then it will be more attractive. [11:55:12//12-2-2009]
e. F. Nascimento = The eyes of the water are opened,/ the hands of the water are good./ Make them flow, and shower them on us/ We are in pain and suffering/ You are the only Lord of ours/ You are really the One having a good heart and a great joy [12:56:05//12-2-2009]

Lia-na'in sira-nia mehi hatutan no lolo liman ba malu
Lia-na'in sira-nia mehi bidu no tebe hadulas mundu rai klaran
Lia-na'in sira-nia mehi fanun ha'u, no ema lubun maka sei toba dukur

The dreams of poets bidu* and tebe** circling around the Planet Earth
The dreams of poets wake me up
As well as the crowd who are still soundly sleeping

* a dance performed by men
** a dance performed by both men and women holding hands in a circle

This post is the second of a series to commemorate the 10th anniversary of the popular referendum in East Timor, which led to the territory's internationally recognized independence. In the first post we highlighted the support of the international community for the freedom of East Timor. In this post, we interviewed Abe Barreto Soares, who is one of the organizers of the solidarity celebration events taking place in East Timor in August and September 2009.
Defining the Elephant, Part 1 of 2
Published on Tuesday, 24 July 2012 03:00
By Dr. Rick Patrick, Pleasant Ridge Baptist Church

Charles Kettering said, "A problem well-stated is half-solved." Now that Southern Baptists are talking about the proverbial elephant in the room, it seems helpful to define that elephant as clearly as possible. Thus, I write this article not to foster division among us, but to define more clearly the division which already exists.

The tension between Calvinism and Traditionalism in Southern Baptist life will never make sense to anyone who views this struggle merely as a dispute over minor doctrinal concerns. Rather, our present fault lines stem from three specific components: a theological debate, an institutional struggle and an intrinsically adversarial agenda. Unless we look at this elephant from all three sides, we will fail to comprehend the scope of our conflict resolution challenge.

1. The Theological Debate

Surprisingly, of the three components in our conflict, the theological debate itself is the least contentious of all, but it clearly provides the basis for the other two. Frankly, the number of Southern Baptists in the world who get all worked up about precise theological formulations and definitions is smaller than any seminary professor or preacher cares to admit. This stems not from a lack of intellectual curiosity, but rather from the desire among most Southern Baptists to avoid arguing theology and concentrate instead on loving each other, moving on and telling the world about Jesus.

This desire for peace is commendable, and peace would be more likely if our theological disagreements simply concerned one minor issue of salvation doctrine. However, even the theological component, which comprises only one-third of the elephant, is more complex than it would seem at first glance, touching not only salvation doctrine, but also related views of the church, our mission and the nature of man, as summarized below.

- Soteriology: Did God create man truly able and free to accept or reject God's grace? Or did God decree, before the foundation of the world, precisely those souls which will irresistibly come to Him?
- Ecclesiology: Does the church make decisions through channels of classic congregationalism or does it function with ruling or leading elders? How does it receive members, extend altar calls and make use of the Sinner's Prayer in evangelism?
- Missiology: In order to contextualize the gospel and reach our culture, will the church permit the moderate use of alcohol, a softer stance on homosexuality, and an emphasis upon issues such as environmentalism?
- Anthropology: Does man's total depravity include or exclude total inability? Does the sinful nature we inherit from Adam include or exclude inherited guilt? Is the unredeemed man best understood as lost or dead?

While this multifaceted theological debate furnishes the initial conflict, the other two components are actually responsible for carrying the struggle from the ivory towers of theological reflection to the more practical matters of denominational vision and stewardship allocation.

2. The Institutional Struggle

Due to church autonomy, when a congregation is Calvinized, our denomination possesses no stake or vote or any claim at all upon that church, nor should it. The Traditionalist concern in these matters is not so much the individual church's decision, but the cumulative impact of a growing Calvinist influence upon our mutually owned institutions and agencies.
There exist certain stress points in this institutional struggle, found precisely in those areas where Traditionalist and Calvinist churches cooperate most through direct ministry endeavors: areas such as publishing literature, VBS outreach, summer youth camps, seminary training and planting churches in America and around the world. We can easily exist as one denomination when we are contributing toward theologically neutral areas like disaster relief and world hunger projects. But when we preach and teach and write and evangelize together as a denomination, the theological concerns which separate our individual churches can no longer be avoided. The question becomes, "Will our individual agencies and institutions lean toward an understanding of theology that is Traditionalist or Calvinist?" The examples provided below reveal the difficulty of attempting neutrality in those areas of denominational cooperation where theology matters the most.

- Publishing: If Calvinism claims 10% of the convention and 90% of those involved in The Gospel Project curriculum, are we really to believe this happened completely at random? Even if the first quarter of literature is not overtly Calvinist, one has reason to be concerned that future lessons may not preserve this same neutrality. By introducing Southern Baptists to Calvinism's brightest stars, the curriculum has opened the gate for further Calvinist indoctrination through books and conferences down the road. Granted, the meta-narrative approach of this curriculum is a staple of modern Calvinist preaching and teaching. Should this curriculum manage to offer three years of lessons featuring this approach without even a hint of its usual Calvinist underpinnings, I will be the first to apologize. The point of this example is not to attack The Gospel Project itself, but to illustrate the difficulty of claiming theological neutrality in denominational publishing.

- VBS Evangelism: Calvinists and Traditionalists approach children's ministry differently. How can Lifeway be expected to synthesize in one curriculum an approach able to satisfy both theological camps? With changes in our VBS musical direction, many will watch to see if a strong evangelistic appeal is still present in the songs and lessons, or if the approach will be softened to reduce the number of what some characterize as premature decisions for Christ, which have become such a concern among certain Calvinists. Of course, Traditionalists do not favor false professions and overly simplistic gospel presentations, but are confident in the ability of local church leaders to provide proper decision counseling for kids. Again, the point is not to critique VBS literature, but to illustrate that our theological differences are expressed not only in our publishing but also in our VBS evangelism.

- Youth Camps: Reports from certain Fuge locations this year indicate a strong dose of Calvinist theology was clearly preached in some large group sessions. Sooner or later, it was bound to happen. How can we really expect Fuge to be soteriologically neutral when both camps expect them to share God's plan of salvation? I can explain salvation to a young person from the view of either the Traditionalist or the Calvinist, but I cannot synthesize the two and do both at the same time, for no matter how much we may wish it to be otherwise, they simply contradict each other in very specific ways and can hardly be reconciled. Most of these teenagers come from churches that are not Calvinistic in their theology.
To expose them to specific doctrines which their pastors, youth ministers and parents do not espouse, and to do so using camp fees paid by Traditionalist parents and churches, reveals yet another stress point as we struggle in our institutional cooperation.

- Minister Training: Fast forward a few years, and those youth at Fuge are now college graduates, called into ministry and preparing for seminary. With respect to the present discussion, will they receive a theologically neutral education at all Southern Baptist seminaries? Or would it not be true that Traditionalists are already encouraging their sons and daughters to attend either Southwestern or New Orleans, while Calvinists are already encouraging their sons and daughters to attend Southern or Southeastern? How could it be any clearer that our theological debate is expressing itself through a series of institutional struggles, as each entity in Southern Baptist life responds by favoring either a Calvinist or a Traditionalist approach?

- Church Planting: Assuming that your church is the sole sponsor for a new work, when your committee meets to select the pastor who will plant this new church, would it or would it not select a Calvinist? The church I serve has screened out Calvinists in our last three ministry searches. We would desire to plant a church that believes as we do. If a church selecting a church planter would screen for Calvinism when directly sponsoring the new work, why would it give up this desire when cooperating with other Southern Baptists to sponsor a new work through NAMB? One cannot help but wonder if all our Traditionalist churches truly desire to plant Calvinist churches whose theology and methodology provide such a stark contrast with their own. As we cooperate to plant churches, let us be theologically transparent about exactly what kind of denomination we are building.

At some point, it is fair to ask the question, "Is it good stewardship for me to pay for the institutional advancement of organizations promoting doctrines I do not embrace personally, nor desire to teach my children, nor favor publishing at Lifeway, nor seek to advance through church planting?" It is precisely here, in the practical outworking of our theological disagreements through our institutional struggles, that the same elephant we might overlook in our Sunday School class or church becomes absolutely impossible to avoid at the denominational level.
Following up on Monday's post, today I want to discuss how public memory may be understood differently when placed within the concept of circulation. First, however, I must issue a disclaimer: this post is decidedly more theoretical than the one on Monday, and for non-rhetoricians, I apologize in advance if the jargon detracts from what I am about to say. I will, however, attempt to clarify what I mean when such esoteric terms arise.

Theories of circulation and public memory have received considerable attention from scholars across the discipline in recent years. Public memory, a source of interest among rhetorical scholars since the late 1980s, now attracts enough attention that it is often understood as a sub-discipline of the field. In recent years, as is evidenced by numerous articles and an edited volume by Carole Blair, Greg Dickinson, and Brian Ott, the study of public memory has taken a decidedly material turn. In fact, Blair, in an earlier essay, uses U.S. memorial sites to demonstrate the rhetoricity (persuasive power) of material objects. This line of study has both expanded the definition of rhetoric and explained how material objects are rhetorical in ways different from the traditional oratory often associated with the study of public address. Even so, the authors who study material forms of public memory tend to limit their studies to static places such as monuments, museums, and other "immutable" places which function to evoke public memory. Edward Casey, for example, goes so far as to say that "Public memory is not a nebulous pursuit that can occur anywhere; it always occurs in some particular place." It is here that Casey would most likely understand circulating memories, even in material form, as a type of collective memory, which he argues is distinct from public memory. In the context of circulation, however, this distinction lacks much analytical purchase. As Lester C. Olson explains, "circulation" enables "a composition to address an audience of strangers who, by devoting attention to it, become its public." Therefore, as long as a composition evokes memories and constitutes a public, Casey's distinction over-limits what can be considered "public." Instead, circulation can constitute a form of public memory that moves through space rather than being confined to place.

Similarly, scholarly discussions of circulation have complicated matters for scholars of public address. Following Michael Calvin McGee's fragmentation thesis, texts which were once analyzed as "whole" are now in need of rethinking since, as Stephen Heidt and Megan Foley observe, speeches are often fragmented. It is these fragments, not the speech as a whole, that circulate and resonate through society. According to Heidt, it is the circulation of these textual "shards" that engages in a constitutive process, explained by Maurice Charland as the way that a text interpellates "audiences into a narrative that constituted their identities and didactically animated their political activity." Texts that evoke public memory, I argue, also function in this way.
This claim is similar to that made by Kurt Ritter, who identifies memory-making, or commemoration, as an epideictic process which "builds communal identity and values." Commemoration, in contrast with history, is concerned less with accuracy about what "really" happened and more with how the "emotional resonance and the utility of a narrative" structures or constitutes society.

What remains largely unexplored, however, is the relationship between material forms of commemoration and circulation. Of course, there are a couple of exceptions worth noting. First, Nathan Atkinson makes in-roads into this connection by discussing the way that film circulates so as to create an interesting connection between the past and the present, which he refers to as "dual temporality": the phenomenon in which film presents the past in the present (more explanation/better understanding of argument probably needed here). Atkinson's work also applies the properties of circulation to visual texts, as does the work of Keith Erickson, who specifically engages presidential photo-opportunities as rhetorical fragments to argue that the presidency has taken a "visual turn." While these works are limited to film and photography, and focus more on the visual component of the text than on the embodied or material/tactile characteristics of visual-material rhetorics, they nonetheless provide an opportunity to more fully explore, explain, and understand the relationship between circulation and material rhetorics which function to evoke public memory. This project takes up that opportunity by more fully realizing how circulation enables the formation of public memory. Because scholarship regarding circulation rarely intersects with studies of public memory, little is known about the relationship between these two literature bases. In what follows, I offer an explanation of this relationship which reinforces the importance of attending to matters of circulation within the specific context of how memories of presidents are produced.

A proper understanding of the forms of material/visual public memory and/or commemoration requires an understanding of place and space as they are currently understood within the scholarly community of rhetorical critics. Currently, place plays a far more important role than space in public memory scholarship. Admittedly, the distinction between space and place is a slippery one. In fact, pioneering scholars on space such as Michel de Certeau and Henri Lefebvre often use space and place interchangeably, as if they were synonyms. Nevertheless, treating these terms as distinct concepts is crucial to identifying the gaps in public memory scholarship that this project seeks to fill.

While less explicit than the literature on public memory, theories of circulation also imply a relationship to the concepts of place and space. In the most basic terms, I contend that circulation should be understood as the process by which texts move through spaces and places. Rather than focus on circulation in the context of fragmentation, I turn attention to circulation as a mechanism by which material/visual texts gain mobility and expand their rhetorical force throughout American culture. In a recent issue of Critical Studies in Media Communication dedicated to the spatial turn in the field of communication, the relationship between space and circulation becomes clear.
Donovan Connelly, for example, states that before the digital era, the term “communications” referred to the “mediators of bodies and goods.” In other words, he continues, communication became possible through “the technologies of transportation – the roads, canals, turnpikes, bridges, and railways – that came to manifest the physical fact of the United States.” I believe that even in the 21st century, these modes of transportation and the movement of bodies and objects are still an important mechanism by which material forms of memory move through space, or are circulated in a way that constitutes a public through the evocation of these memories across space and through place. Under this framework, it becomes clear that space is at least as important as place, if not more so, in the formation of public memory. This claim, however, is not reflected in current public memory scholarship. Scholars of public memory, especially in its visual and material forms, tend to focus on place as a locus for the harboring of memories, effectively relegating space to the raw material out of which places of public memory are constructed.

This is a rather rough sketch of the theoretical framework of my project, and I welcome only the harshest of criticism moving forward.

P.S. – At the end of Dissertation Week, I will provide a complete bibliography of the sources I use and cite, although I have already attributed the quotes I use to their respective authors. I am waiting until the end because I want to make sure I am clear in my own argument without getting bogged down in what everyone else has argued. Nonetheless, I will post full citations for those who want to refer to the literature in which I engage.
Gay people were determined to close the three blocks in question with or without a permit, but the Board caved and there was no confrontation.

The Sisters (men who like to dress up as Catholic nuns) grew out of the original genderfuck theater troupe, the Cockettes. Genderfuck, for males, means dressing in female clothing and hairdos and all, but also displaying male secondary sex characteristics—beards, moustaches, hairy shoulders. NOTE: These days the Sisters have affiliated "houses" worldwide, and membership is open to many sex categories.

The Sisters is a fraternal organization. They run a lively (you can imagine) weekly bingo game, and otherwise raise funds for neglected community needs. They are fun and self-deprecating, and they actually promote religious values. You can see on their website a very succinct mission statement:

Soon after the San Francisco Order was founded in 1979, the mission of the Sisters — to promulgate universal joy, expiate stigmatic guilt and serve the community — spread far and wide. Orders can be found across the United States and in several countries around the world.

The Sisters have never issued any dogma ex cathedra that this FOME can’t accept. I assume each of them claims infallibility. This quote from the Wikipedia entry shows their sense of fun:

The Sisters of Perpetual Indulgence believe that many institutions and social constructs are a source of dogma, hypocrisy, guilt and shame. This has led to encounters with the Catholic Church. One was when they staged an exorcism of the Pope on his visit to San Francisco in 1987.

This story illustrates why I like them so much. Maybe fifteen years ago, a small group of fundamentalist Christians decided to minister to San Francisco’s gay community by gathering at the main intersection of gay nightlife and handing out salacious anti-gay literature, chanting anti-gay slogans, and singing some sort of Jesus songs. It was completely legal, and should have been ignored, but some of the many gay people passing by took umbrage, with nasty verbal exchanges resulting. Same thing the next night. It looked like there would eventually be trouble, so the Sisters of Perpetual Indulgence brought a bullhorn and a boom box, and led a gathering of gay people across the street. We had some talks, and played some music, and whenever the Christians would start a chant we’d drown them out with our own chant, “NO MORE GUILT!” I believe they had “ushers” helping to guide people around the Christian group so they wouldn’t be waylaid. The Christians quickly gave up.

There was real trouble brewing if the Christians had gone unchecked. The Sisters came up with a creative, non-violent response that Martin Luther King or Mahatma Gandhi would have been proud of.

Members pictured above, top to bottom: Sister Bea Attitude, Sister Harlette O'Scara, and Sister Tilly Comes Again.
What do you want to do before you die? Go to Mars? Skydive? Climb Everest? Be Ambitious: A Good Bucket List should grow over time, not shrink. Just like you. A few years ago, Pat Palumbo, a Westchester, New York-based real estate agent, faced some major health problems. She recovered, but "having that experience did seriously give me all those reflections you have when you know you're not invincible," she says. "I felt such a sense of gratitude for all this opportunity in front of me." She wanted to make the most of it. So what did she do? She went to BucketList.org and made a list of all the things she wanted to try before she kicked the bucket: Build an intricate dollhouse. Own a Jim Dine painting. Climb the Sydney Harbour Bridge, which she'd been scheduled to do years ago, but had injured her knee the day before. "People make fun of me for my list," she says. "People just associate it with dying. They don't realize it's actually a way to live." If you'd like to make a bucket list that changes your life, here are five steps to making that happen. 1. Give yourself time (and inspiration). When you sit down to make a big list of dreams, your first 15 to 20 items will inevitably involve getting on a plane. BucketList.org has a "Popular" feature, and goals like "Backpack Europe" or "Float in the Dead Sea" feature prominently. People almost universally fantasize about traveling more than they normally do. But travel's probably not the only thing you want more of in your life. So, "I'd give yourself a few weeks," says Jason Lindstrom, co-owner of BucketList.org, who recently had a Guinness at the Guinness Storehouse in Dublin and is learning to play the guitar. Keep thinking about it, and talk to other people or look at other lists for inspiration (here's mine from a few years ago). In the social media era, just as Pinterest is nudging people to upgrade their kids' lunchbox fare in ways they never thought possible, reading other people's bucket lists can stir desires you never knew you had. "It's making everyone a little more transparent," says Lindstrom. 2. Be specific--and within your sphere of influence. With goals, success begets success. While you can put "Go to Mars" on your bucket list if you'd like, you'll get more out of the process if you choose items so specific and possible you can visualize them happening. Palumbo wanted to dance in a "second line"--the crew that follows the brass band in a parade--in New Orleans. After writing down that goal, she was quick to seize an opportunity to attend a professional conference in the Big Easy. The last night of the convention, the Brass Rebirth Band performed, and Palumbo--knowing this was exactly what she'd pictured--was soon up there dancing behind them. You can also aim to score some easy wins by putting a few small purchases on your list, too. Palumbo has long been obsessed with stationery, and she decided she wanted wax seals for her envelopes after reading Daphne du Maurier's Rebecca ("Her desk, her stationery and everything had the letter R."). After listing this goal, she was primed to look around, and "When I found that wax seal in Barnes & Noble, I almost wept," she says. 3. Improve daily life. The biggest payoffs from making a bucket list don't come from taking great vacations--vacations are great anyway--but from upgrading life's more mundane experiences. G. 
Richard Shell, a professor at the University of Pennsylvania's Wharton School and author of Springboard: Launching Your Personal Search for Success, set a life list goal to "Listen to every great book that everybody's said they've read but no one has." Note the verb: listen. Shell consumes all this literature via audiobook in his car. When I met him at a coffee shop recently, he had been listening to David Copperfield on the way there. There are hazards--"I was on the Pennsylvania turnpike and drove right past the Valley Forge exit because I was in the middle of the Trojan War"--but he's also been reminded that "great books are really great." He's listened to the Iliad, the Odyssey, and so forth by finding "this little white space in the day I can fill in."

4. Build in accountability. Sharing your list, whether online or just by talking about it, does two things. First, people might share opportunities with you--like the name of a guitar teacher if you want to learn to play. And second, you become accountable for making progress toward your goals. That can increase your odds of success. Lindstrom checks in with his brother every Monday to discuss the steps they're taking toward bucket list goals and smaller goals. "Getting feedback from people is super valuable," says Lindstrom--"especially people you trust and respect." Chris Guillebeau, author of The Art of Non-Conformity and The $100 Startup, made a goal several years ago to travel to every country in the world by April 7, 2013 (he made it). "Sharing a list helps to make it more real," he says. "In my case, I felt like I couldn't let people down. The sense that people who cared about my goals were following along gave me a sense of support when I encountered challenges."

5. Make it a living document. Exciting as it is to cross items off your list, accept this paradox: A good bucket list may grow rather than shrink over time. Because ideally you will grow, rather than shrink, over time as well. Guillebeau discovered this while traveling. "As I worked on the final 50 countries, after eight years of the first 140, I became much more interested in pursuing new goals related to community," he says. "I don't think I'd have the same excitement or focus on those things without having come so far on the travel goals."
Hey gents. This fall, I'm teaching my first (college) freshman composition course, and I'm thinking about what I'll do to introduce the course. Any ideas or experiences?

Begin by going over your syllabus/course outline. That way your expectations are clear from the beginning. Getting and giving contact information is important too. Since you teach outside my field of expertise, I'm not sure where you'd go from there.

Right, I'll of course go over the details of the class and the syllabus, etc. I was just wondering if anyone had a certain way of introducing the class, e.g. with a poem or short story. I'd like to do something a little more interesting than simply read through the syllabus.

What field do you teach, and at what level?

No longer, but I once taught biology courses at a community college, mainly Microbiology. After taking care of business, I would talk about Microbiology in general and then go into the history of the science.

Nice. I'm considering opening the class with a poem or something. Still thinking things through.

Have them write simple poems. If you use haiku, they are short. I have used them to help 2nd graders write poetry. Maybe you can do something with reading some and then having them try it and share what they wrote in small groups or with the class.

Congratulations on getting into the classroom. Enjoy it--the job can be exhausting, but it's also really rewarding. What approach does the department take to freshman composition? Is it taught as personal writing for part of the year and then academic writing the second part? Is it strictly composition or do you have any "Writing about X (comic books, zombies, whatever hot pop culture topic students are interested in)" courses? Depending on the type of course, there are all kinds of ways to introduce the class. Along with the business end of things (syllabus and expectations), you can do different ice-breakers or some kind of writing game just to get people loosened up. When I taught freshman comp, I usually had people talk a little bit about their attitude toward writing and their goals for the class. Letting them get invested right from the start seemed to help keep them engaged throughout the semester, which can be tough because the class is required for all majors and the bulk of the students will, at first, say they fear or don't like writing. A little natural humor always helps in the classroom.

Make them sweat... just kidding. I hope the class goes well. Just share your passion for writing and make them want to write.

I've been teaching freshman composition for 25 years, but I remember those first classes well. I've developed a "first day" process that I use to some degree each time. Of course I go over the syllabus and important information, although I now tend to put a copy of it up on the screen and provide it online for them to print themselves later (or not, as the case may be). However, the main thing I do is actually ask them some things about themselves--where they live, what they're studying, what other classes they are taking, etc. The students don't understand why I do that, but I tell them at the end of the class that I specifically try to learn a little about them so that I will know better where they are coming from. Building trust (and showing your sense of humor) is important during day one. In short, while writing a poem or reading/discussing a piece of literature is a good idea, personally I think the focus is better served getting to know them a little and giving them a picture of who YOU are.
They are typically nervous, and they often have high anxiety about their writing (and about the person who will be evaluating their writing). So showing them that you are a regular person, respect them as people, and have a sense of humor (or drama, or whatever) goes a long way to establish a good foundation for what often proves to be a rewarding but challenging relationship. Best wishes as you begin this important work. Evaluating writing is a sometimes thankless job, and the progress for most writers is incremental at best. Good teachers learn to manage expectations for the students and for themselves, including how long it actually takes to mark and grade student papers. I've been doing this a long time and I love it (usually). If I can help in any way, let me know.
Interview - M A K Halliday
[with G. Kress, R. Hasan and J. R. Martin; edited by R. Hasan & J. R. Martin; source: offered by J.R. Martin, 2005-03-29]

GK Well Michael, the first question is why linguistics? We've heard you say that you first turned to linguistics because of frustrations you felt with the way people talked about language in literature classes. What actually frustrated you about literature teaching and what kinds of answers did you find were available in linguistics at that time?

MAKH That was at school, where I was trapped in a system which, in a way, I still find unbelievable. It was so over-specialised that from the age of about fourteen I was doing nothing but classics, twenty-seven hours a week out of thirty-three, and the others were in English. The English part I liked because it was literature and I enjoyed it very much, except when they started telling me something about language in literature. It just made no contact with what was actually there. And this worried me just as it used to worry me when people made folk observations about phonetics; I mean the kind of attitudes Barbara Horvath (1986) observed in her studies of Australian English - for example, that Australians are nasal. It is absolutely wrong, of course, but it takes time to see through these popular beliefs. You asked what was available in linguistics: the answer is - nothing. I didn't find any linguistics, as such; I just went to the library and found a book by someone called

JM What, even in a high school?

MAKH Yes, it was in the library. But I didn't get very far with it. I could write the critical essays when I found out what attitudes you were supposed to have, but I always thought there must be something else - some other way of talking about literature. I felt that there was more to it than what I was hearing.

JM Did you think that language might provide a key or perhaps some kind of objective way of getting access to what literature was about?

MAKH I doubt whether I could have formulated it in those terms, but I felt that literature was made of language so it ought to be possible to talk about that language. After all, my father was enough of a grammarian for me to know there were ways of talking about language. He was also a literary scholar, although he didn't particularly combine the two so far as I am aware. I certainly wasn't far enough into it to be able to be more explicit - as Jim says, I was trying to interpret some of the comments about the language of the work.

RH But when did you make your real contact with linguistics, Michael? When is it that you actually began to feel that linguistics has a possibility for providing answers?

MAKH Well, it was through language teaching. When I left school, it was to take the services' language training course. They took us out of school about eighteen months before we were due for national service, to be trained in languages. I was just seventeen when I left school and joined this program. Now those courses were being run at SOAS. During those eighteen months we certainly heard the name of Firth and we heard that there was such a thing as linguistics. But I don't think I learned anything about it. The initiative had originally come from Firth at the beginning of the war, who said that there was obviously going to be a war in the Far East and in

Now I had in fact wanted to do Chinese anyway and I came out alright on the ones which favoured Chinese, so I got my choice.
But I presume that if somebody had put Chinese first and it turned out that they couldn't hear a falling tone from a rising tone, they'd have switched them into Persian or some other language.

JM And that was how you really got into Chinese?

JM Before that, you hadn't studied it anywhere?

MAKH No. Apparently for some reason - I have absolutely no idea why - I had always wanted to go to China.

JM Oh really.

MAKH So I'm told. Apparently I wrote a story when I was about four years old about a little boy who went to China.

RH Yes, that story is really very, very fascinating. Michael's mother showed me. It has parts of

MAKH I hadn't studied Chinese at all. I really wanted to do Chinese to get out of classics; that was the main motive. I just hated classics at school and I wanted to get out.

GK So you must have been very good at languages to have been called up for this test?

MAKH Well, I don't think you had to be very good. It was just that you had to show that there was some chance you might possibly learn a language. So anyway, they gave us this eighteen months' training and we then joined up with the services, and I served a year and a half training and then about a year and a half overseas in India. After that year and a half, a small number of us, four out of the whole group that had learned Chinese, were pulled back to London.

This was 1945 and they thought that there were years of war ahead against the Japanese. And so they increased the numbers of people being trained for the three services. But they needed more teachers; so what they did was to bring back four of us who had done well in the first batch. So John Chinnery, who is now head of the department in Edinburgh

Anyway, even at that time I still wasn't studying linguistics, but I was becoming aware that something like linguistics existed and that there was rather a good department of linguistics just down the street.

GK We've got two questions that follow up your comments - one is about language teaching and how that led into questions about linguistics, and one is about Chinese. First Michael, you've characterised yourself on numerous occasions as essentially an applied linguist who pursued linguistic theory in search of answers to questions posed by language teaching: teaching Chinese to English speakers and later, in China, teaching Russian and English to Chinese. Initially what was the nature of these questions and the teaching problems that posed them?

MAKH Well, I was brought back to

RH It was more like a realisation "these things that I thought would work didn't work".

MAKH Yes. I had to explain things, and I had the advantage of teaching a language which isn't your own and which you've only fairly recently learned; so at least you've formulated some of these problems for yourself - some questions about the structure. I think I began with very straightforward questions about the grammar, because there were so many things in Chinese grammar which just simply weren't described at all and we had been told nothing about them, because they just weren't within the scope of traditional grammars and existing grammars of Chinese. We just had to discover them for ourselves. Now I felt very well aware of these and wanted some way of studying them. So this was the first attraction to linguistics, before any other kind - the attraction of educational or pedagogical questions which arose in my mind.

GK Where did you get this consciousness about the problems of Chinese from? Had you been with Chinese people in

MAKH Well, I had just under two years as a student of Chinese.
So first of all I became aware of the problems simply as a learner, making mistakes, and asking in the usual way, "why didn't this work?" - making the wrong generalisations the way that a learner does. But then during the time that I'd served in

GK Well perhaps we can go to the next question, which is about

MAKH This continues from what we were just talking about. I taught Chinese then for those two years while still in the army. It was particularly during that time that I became interested in Chinese studies generally. Then what happened was that Eve Edwards, who was the Professor of the Chinese department, and Walter Simon, who was then the Reader, felt they had these people who might be interested in studying the language properly. So they organised it in such a way that we taught our courses in the morning and we studied Chinese in the afternoon: all the Chinese courses given by the department for its students were scheduled in the afternoon. Now at that time, you could specialise in either modern or classical Chinese. I was obviously interested in modern Chinese, so we did a lot of modern Chinese literature and what we could by way of conversation. When I came out of the army in 1947, I decided that I wanted to go on and study the language, and the sensible thing to do seemed to be to go and do it in China.

JM What year was this?

MAKH 1948. They administered the

So I went around with a young Chinese who was an accountant, helping them to keep the books, and I wrote publicity. I did this for about six months and then, in some very very small village up in northwest

So anyway the letter said "Proceed back to Peking immediately", and the conditions were that I could spend two more years in China.

So I re-enrolled at

RH What was his name?

MAKH Luo Changpei, Professor Luo; he died in about 1957. He took me on and I really appreciated this. I wrote essays for him and studied with him. I also went to other seminars.

GK Small tutorials?

MAKH Yes, they were. I can't remember that there was anything like a graduate course; it was more tutorial-type work with groups.

JM Had you done some linguistics back at SOAS?

MAKH No, actually not, none at all.

JM This was the beginning?

MAKH This was absolutely the beginning of it, this study with Luo Changpei.

GK Was there an indigenous Chinese linguistics?

MAKH Yes there was. He knew it very well, and it had a very strong tradition going back to the third or fourth centuries BC. However it didn't deal with grammar. Since there's no morphology in Chinese, traditional Chinese linguists never go into grammar. There was a very strong and very abstract phonological tradition which goes back about two thousand years, and as well there was a lexicographical and encyclopedic tradition. There were these two traditions, yes, but not a grammatical one.

JM What was the linguistic background of your teacher?

MAKH He had been trained in comparative historical linguistics.

MAKH I can't remember exactly. I think it very likely that he would have been in

JM Was there any influence of Sapir and the other American linguists?

MAKH With Luo Changpei I didn't get into this at all; but it became clear to him after six months or so that I really wanted to work more in modern studies. My own idea had been to work on Chinese dialects. I was very interested in Chinese dialects and was beginning to know something about them. So he said "Well then you need to go and work in synchronic studies; you should go and work with my friend Wang Li". So I said "All right, thank you".
I assumed he was across the street, but in fact he was in Canton. This was in May '49. So I did altogether about seven months with Luo Changpei. You couldn't travel down the country of course because there was very heavy fighting; actually the last big battles were in that very month. So I took a boat out to

Wang Li at that time was the Dean of the Faculty of Arts in Ling Nan, which was a private university. He took me on, and that was really where I got into linguistics, through dialect studies. We did basic dialectology, field work methods, and a lot of phonetics, thank goodness. I am deeply indebted to Wang Li for having really made me work at the phonetics and phonology and also sociolinguistics - the whole notion of language in social and cultural context. All those were his contributions.

JM What kind of linguist was he? Did he have a more modern, synchronic background?

MAKH Yes, he had actually been trained in Paris.

GK You said just now that the linguistics you studied with Wang Li included sociolinguistics. Can you say something about how he talked about the area of language and social context?

MAKH There was an input from different places by this time. During this period I had become gradually and indirectly aware of some of Firth's notions, and while in Canton, I think, I had actually read something of his - what finally came out as his paper 'Personality and Language in Society'. I can't remember how I'd got hold of it. It might have been through Wang Li. Some way or other I'd got some of Firth's ideas, and I think Wang Li himself knew some of Firth's work. That was one input. Then secondly, of course, for political reasons, I had become very interested in Russian scholarship. Again, this had started already in

Slavonic linguistics generally has always interested itself in issues such as the development of literary languages and national languages. So that was the second input. So there was the Firthian input and there was that one; and then Wang Li himself as a dialectologist was interested primarily in regional dialects, but was also interested in changing dialect patterns and the social background to these, the spread of the standard language in China, areas of contact between different dialects and the social patterns that went with them. So there were those three parts to it.

GK So although you got your first interest in linguistics in China

MAKH Well it was fairly mixed, because of all the Chinese linguists, Wang Li was the one who knew most about the Chinese tradition. One of the things that I read and was very much influenced by at the time was his own History of Chinese Phonology, which is a marvellous book. It was so simple, but so very scholarly. He showed how Chinese phonology had developed from the first century of the era through to the tenth century, and how it had developed as an indigenous science and then been influenced by the Indian scholarship which came into China.

JM How long were you in Canton?

MAKH A year - well, I arrived in September and left the following May, so essentially a sort of academic year.

JM Was your own research taking some sort of direction at that time?

MAKH Yes, it was actually dialect field work, because Wang Li was doing a survey of the dialects of the

I wrote this questionnaire with a large number of sentences and I got them to give me the versions of these sentences in their own local dialects. When I went back to

RH So what happened to all that data?

MAKH It's lying around somewhere; but I couldn't interpret it now, I don't think. It's all written in local characters.
JM So you were already a grammarian even by that stage.

MAKH I was really very fascinated by the differences between Mandarin and Cantonese grammar, and then how these very local dialects differed in their grammar from the Cantonese. It was very interesting.

GK Do you think your interest in lexis and grammar comes in some way from Chinese traditions in linguistic scholarship?

MAKH I don't think so, because in those days there wasn't a tradition - unless you want to say that I was interested precisely because there wasn't anything there and therefore it had to be filled. But I don't think so. I think I was always basically interested in grammar.

GK What about the lexical part?

MAKH Well, there is one point which hadn't occurred to me before. The earliest Chinese work of lexical scholarship is in fact a word list from about 250 BC, which is a thesaurus, and I was always interested in the thesaurus as a tool of lexicography. I have no idea whether there is any connection between those two. It had nineteen different topic headings and lists the difficult words under those headings. That year with Wang Li was just marvellous. He died recently, just in May - just within the last month. I saw him a couple of years ago - he was a marvellous man and very kind. Now the terms of the scholarship then were that I went back to

GK Out of where?

MAKH Out of SOAS, totally - both the Chinese department and the linguistics department.

MAKH Well, that's another story. I had left

They asked me when I went for the job at SOAS whether I was a member of the Communist Party. I said "No", which I wasn't. Then they asked would I undertake that I would not become a member of the Communist Party. I said "No, I wouldn't". So I didn't get the job. When I then asked the person who had questioned me about that afterwards whether that was the reason, he said "Political considerations were not absent". I thought this was the classic answer of all time. So the point is that I got shunted off to Cambridge.

MAKH The Chinese department at Cambridge

JM How disappointing was this for you? You had particularly wanted to study with Firth.

MAKH It was very disappointing, because I wanted to study with Firth and I wanted to work on my dialect material. The price of going to

My supervisor was Haloun, but I was negotiating with him to be allowed to go up to London

JM Devious ways he finds to ride on trains!

MAKH Yes, that's when I started finding you could work on trains.

RH But this is extraordinary. They didn't allow you to stay at SOAS because you wouldn't give an undertaking not to enlist in the Communist Party; and then you still came back, and you were still on the premises.

MAKH Yes, but I wasn't teaching. That's what they were scared of! I was not in a position to subvert.

GK So your first contact with Firth had been fortuitous, but when you returned from China

RH That's asking the entire history.

MAKH "Interested in his framework". Okay. From the start, when I became gradually aware of his ideas, particularly I think during that year with Wang Li, I felt very sympathetic. It seemed to me that he was saying things about language that made sense in terms of my own experience and my own interests, and I just wanted to explore those ideas further. My main concern was just to learn from him, and I managed to organise it so that he took me on officially as a student. What I got from him was of course enormous, both in terms of general philosophical background and insight into language.
But I didn't get a model of grammar, because, as you know, Firth himself was interested in the phonology, semantics and context. He had very little to say about grammar, although he certainly considered his basic system/structure approach was as valid in grammar as it was in phonology. My problem then, as it seemed to me, was how to develop system/structure theory so that it became a way of talking about the language of the Secret History. Now the text was a corpus - for Firth it was a text and that was fine. That meant it had its own history and had to be contextualised and recontextualised and so forth. It was also closed, in the sense that you couldn't go out and get any more. This was 14th century Mandarin and that was it. There wasn't any more. So you treated it as it was. I was not yet, of course, aware enough to be able to ask questions about what it meant to consider it just as a text as distinct from considering it as an instance of some underlying system. But I tried to work out the notions of system and structure on the basis of what I read and what I got from Firth in phonology.

JM Was W. S. Allen working on applying Firth's ideas to grammar in this period too?

MAKH Yes, although I didn't actually get to know him very well. The person who helped me most other than Firth at that stage was Robins. In fact Firth got Robins to do some of the supervisions for him. I used to write essays for Robins and so forth. Robins was terribly nice and very helpful. But I didn't know Allen very well.

JM Robins was on staff there?

JM Allen was also?

MAKH Allen was, yes. All that generation was there. Of course some were still students.

JM When did you have a chance to see 'Structure and System in the Abaza Verbal Complex'?

MAKH That was not until after I finished my thesis.

JM So you really had to do this all on your own?

RH When did you finish the thesis?

MAKH At four o'clock on the last day after the last extension, I can tell you that. It was an hour before they closed the offices and it was the 31st December. I can remember that, and it would have been in 1954.

JM So you spent three years in Cambridge?

MAKH Four years, because it was 1950 when I moved to Cambridge.

JM And Robins, had he been thinking of applying system/structure theory to grammar?

MAKH No, I don't think so. He was more interested in phonological applications. I was very much on my own at that. It wasn't that there wasn't any place for grammar for Firth. He would introduce examples in his lectures - for instance working through the forms of the German definite article as a way into raising a whole lot of interesting grammatical problems. And he was developing, at that time, the notion of colligation, which actually Harry Simon labelled for him. But it was never very much developed, so it's not terribly clear what Firth was ever planning to do with it; but it was the beginnings of thinking about grammar.

GK Who were the other students at that time?

MAKH I'm not sure which year different ones were there, but certainly listening to Firth's lectures at different times during this period were, for example, Frank Palmer and Bill Jones, who were themselves just getting onto the staff of that department, and Bursill-Hall, who then went to Canada. Mitchell was already on the staff, as were Robins, Allen, Carnochan and Eileen Whitley. I also went to other lectures when I could - to Eileen Whitley for example and to Eugenie Henderson. I got a lot of phonetics from them as well as other things.
Eileen Whitley never wrote anything, but she just had a fantastically broad range of interests. She was one of the people who really could have developed Firth's notions, especially in the direction of text - in a semantic direction. She was very very good.

GK Could we return to that question about how you went about extending Firth's ideas so that they could be applied to the grammar of a language?

MAKH I tried to understand, not always very successfully, the key notions that could be interpreted at a general enough level so that they could be applied to grammar as well as to phonology. For example the concept of prosody - the notion that syntagmatic relations pervaded items of differing extent. Firth as you know was concerned that you didn't start and end with phonemes, and so forth. Rather you looked over the larger patterns, and then residual effects, so to speak, were handled at the smaller segments. Now I tried to apply that idea to grammar, so I began at the top. That's one very clear example, using a kind of top-down approach, beginning with what I could recognise as the highest clearly defined unit in the text, a clause, and then gradually working down. Then another basic concept, of course, was the system/structure notion, which I found very difficult - especially expressions like systems giving value to the elements of structure. I tried to set up structures in a framework that was formal in the sense that you were not relying on some kind of vague conceptual label. For example, it was possible in Chinese grammar to set up categories of noun and verb on distributional grounds. These then gave you a basis for labelling elements in the structure of the clause.

RH How important was the idea of exponence for Firth?

MAKH Well, it was very important. You see, there are a number of ways in which I built on his ideas that he certainly wouldn't have followed, as he made clear to me. I got on well with him, and he didn't like people who weren't prepared to argue with him. But of course the cost of this was that I would often be seizing on things that he'd done and, from his point of view, misinterpreting them in some ways, in order to try and do something with them. Exponence would be one example of this. Firth had a long-running argument with Allen in 1954-1956 about the nature of exponence and about the relation between the levels and exponence. As far as Firth was concerned, the levels (the phonetics, the phonology, the morphology, and the grammar or whatever) were not stratified but were rather side by side, each directly relatable to its exponence. So you didn't go through phonology, so to speak, to get to the grammar. On this point Allen disagreed about the nature of this pattern. As far as Firth was concerned, there was absolutely nothing wrong with using the same bit of datum over again in setting up patterns of different strata, whereas Allen seemed to say "Well, if you'd built this particular feature in to your phonetic interpretation, you couldn't use it again in the phonology". So there were differences of this kind in the way they worked out this notion, and Allen's, in fact, was the more stratified view, although I don't think he expressed it like that. I did not follow Firth on that, because I just couldn't see any way that you could get the notion of realisation into the grammar except by stratifying (although I didn't use the term realisation then). So exponence for me became this kind of chained relationship, which it was not for Firth.
2. Grammatical theory

GK We would like to ask you about the grammar, and our first question is about the focus on system. We think you are a great relativist and unusually modest about the claims you make for systemic theory. Your theory gives greater prominence to paradigmatic relations than any other. What are the strengths and weaknesses of this focus?

MAKH Well, I didn't start out that way of course, because that links back to what we were saying about Firth. As you know, for Firth there was no priority between system and structure - they were mutually defining. Indeed, if anything, in the context of linguistics of the time, his emphasis was on the importance of syntagmatic relations. So in a sense, I'm going against Firth. Now why was this? Firth himself didn't really believe in "The System" in the large sense at all. His interest was not in the potential but in the typical actual. Now this meant that for him, in fact, the priority was to structure over system - not in the structuralist sense of language as an inventory of structures, but in the sense that, as he put it, the system is defined by its environment and its environment is essentially structural. So in a sense, the larger environment is the syntagmatic one. Now trying to work this out in Chinese grammar generally, I felt that I needed to be able to create the environment that was needed. The environment had to be set up in order for the general framework of the grammatical categories to make sense. But this environment seemed to me ultimately to have to be a paradigmatic one. That took a lot of steps - say 1962, when I was writing 'Syntax and the Consumer', or 1963, when I was doing the 'Laundry Card Grammars' in

By various steps, I came to feel that the only way to do this was to represent the whole thing as potential - as a set of options. And this was certainly influenced by my own gut feeling of what I call 'language as a resource' - in other words, language was a mode of life, if you like, which could be represented best as a huge network of options. So that kind of came together with the notion that it had to be the system rather than the structure that was given priority.

GK How do you see that now?

MAKH Well, in an important sense I would think that there are a lot of purposes now for which it's important. Just to mention one of them, I think that in order to crack the code, as a probabilistic system, we have to start with a paradigmatic model. It doesn't make sense otherwise. But, of course, it does beg a number of questions in a sense - this is something we often talk about. The great problem with the system is that it is a very static form of representation. It freezes the whole thing, and then you have to introduce the dynamic in the form of paths through the system. Your problem then is to show how the actual process of making paths through the system changes the system. This is crucial to the understanding of ontogenesis, phylogenesis - any kind of history. So I think I shall continue within that framework, because that's the one I'm familiar with and I've not enough time to start re-thinking it.

GK In the era of post-structuralism Firth seems more contemporary than you. I mean I already have problems with post-structuralism and the dissolution of system, but that is the contemporary flavour of thinking about things.

MAKH I often get the feeling that all these -isms, wherever they raise their head, want to go too far either in this direction or in the other direction.
In practice it is just not possible to have systems without the product of those systems, which are the structures; which means that the structures must be there to deduce the system from. How far do we go back in this kind of argument? Either you're forced to the point where you say the entire system is, was, has always been, or you have to say that in some sense structure, which is a constraining name for process, is where it all begins. Because otherwise you can't have systems. I would comment that these things obviously switch between extremes. There is an important sense in which you can deconstruct the system, as it were; you can remove it from your bag of tools. But you have to get it back again, if only because you can't deconstruct something if it isn't there; so there's no meaning in doing so. I think we are now at a stage where we are realising that the models we have to look at for systems are not solely in the areas this kind of post-structuralist thinking is reacting to. Their critique has almost become irrelevant, I think, in the light of much more general developments in modern scientific thinking, which really transcend the differences between human and non-human systems. Once you start looking at systems in this sense, you have to have the concepts. Take for example Jay Lemke's work on dynamic open systems. This is the sort of thing that I find interesting as a way of looking at language. And the sort of work that's being done in physics as well is totally annihilating the difference between human systems and sub-atomic systems.

GK We wanted to ask you about strengths and weaknesses. Do you see any weaknesses in that greater focus on system rather than structure?

MAKH Well, one I've mentioned is that it's overly synoptic. I mean it's static. Also there is the danger of its pushing the system too far apart from process/text. I mean I've tried to avoid doing that. It's one of the things that Firth so strongly objected to in Saussure - the dichotomy of langue and parole, which prevents you from seeing that langue is simply a construct of parole. I would agree with that, and I think that there is a danger of using system as a tool for thinking with and forcing a kind of dichotomy between the system and the text. I think those two are dangers really.

GK We've got a question about function: since the late sixties, systemic grammar has always been for you "systemic functional grammar". What is the relationship between the different concepts of function (for instance 'grammatical function', 'metafunctional component', and the natural relation you propose between metafunction and register) that you use? And just how critical is their place in your model?

MAKH I think they're important and I think they're closely related. I have usually felt that the best way of demonstrating this relationship is developmentally, because you can actually see, if you follow through the development of a mother tongue, how the system evolves in functional terms. In the beginning, function equals use, so that there is a little set of signs which relate to a simple theory, on the part of the infant, that semiosis does certain things in life. You can then watch language evolving in the child in this context. So the metafunctions are, in my view, simply the initial functions which have been reinterpreted through two steps. The first involves generalisation: initial functions become macro-functions, which are groupings which determine structure.
Then macro-functions become metafunctions: modes whereby the linguistic semiotic interfaces with contexts. So I see this as very much homogeneous. The notion of the context plane as something natural is part of the same picture.

GK Can you just expand on that last phrase?

MAKH If language has evolved as a way of constructing reality, then it is to be predicted that the forms of organisation of language will in themselves carry a model of that reality. This means that as well as being a tool, language will also be a metaphor for reality. In other words, the patterns of language will themselves carry this image, if you like. This is what I would understand by talking about a 'natural' grammar.

RH Would you say that's another way of saying that reality is the product of semiosis?

RH And in that sense the question of a 'natural' relation between the grammar and the reality that it constructs has to be either answered 'yes' or it becomes a meaningless question?

MAKH Okay. Right. I mean that reality has to be constructed, so it's another level of semiosis. So it's inevitable, in a sense, that the semiotic that you use to construct it will, in some sense, replicate that which you are constructing with it, since it's all part of the same process. I want to be rather cautious on this. I think we're in a phase at the moment where we are emphasising this point. We've gone against naive realism, which assumes that there is something out there that is given and that what we have to do is to mirror it in some sense, which is certainly where I started from. And we've kind of moved again into a phase of thinking at the opposite extreme, so to speak. We are now emphasising, as you were saying, the fact that it all has to be constructed. It is, in fact, a many-levelled semiotic process. And that, in a sense, is an important corrective to naive realism.

JM I was interested in the grammatical functions themselves, Subject, Theme and so on. You use both function and class labels in your model. How crucial is that to this functionalism idea, and this idea of a natural grammar?

MAKH It's part of the picture. In order for the system to work with some kind of output, in other words to end up as speech sounds, signals or writing or whatever, there has to be this re-coding involved in it. The fact that there has to be this re-coding means that there must be a non-identity between functions and classes; otherwise you wouldn't need to re-code: you could do the lot at one level. So somewhere or other you've got to be able to talk about this. Now it seems to me that you then have to decide, in finding ways of interpreting language, how you're going to do this. An obvious example would be formal systems. If the main priority is representing language as a formal system, then presumably you'll prefer a kind of labelling in which you have class labels and conventions for deriving functions from them. For my own part, I prefer theories to be extravagant and labelling systems to be extravagant. As a tool for thinking with, I've always found it useful to separate function and class and build that amount of redundancy into the discourse about language. It then becomes possible to operate with sets of functional labels in the grammar, things like Theme and Subject and so forth, which, in turn, enable you to make the links outside. So I think it is a useful and important part of the whole process.
There is a reason for wanting to separate these two, although if you focused on any one specific goal, as distinct from trying to keep it all in focus at the same time, you could do without it. And I think I would say this as a general truth. There's very little in what I've done, or what is done in systemic theory if you like, that couldn't be done more economically in some other way if that was all you were interested in doing; and, I suppose, what I've always been concerned with is to work on little bits in a way which I then don't have to abandon and re-work when I want to build them into some general picture.

GK I think this relates a bit to what we were saying earlier when you were talking about system and structure. The question is: in your model the relation between various components - between strata, between ranks, between function and class, between grammar and lexis - is handled through the concept of realisation. This involves, in English at least, setting up a Token/Value structure with the component closer to expression substance as Token and the component closer to content substance as Value. This gives the Value component a meaning of temporal priority, apparent agentivity, greater abstraction, greater depth and so forth. Is what English does to this concept, in fact, what you mean by it?

RH In raising this question we were trying to build in the informal kind of discussions we've been having recently on realisation. You've argued very strongly that when we say 'x realises y' then, in some sense, because of the structure or whatever, you get a pre-existence postulate there which you would like to deny. This seems to me a very important point. To my way of thinking it also links ultimately to system and structure, to the langue and parole question, and is altogether the most central concept in the whole theory.

MAKH I'm with you. It is absolutely fundamental. Maybe we could have a workshop, an International Systemic Workshop, just on realisation. That would be nice. You know, the problem is you can't talk about it in English. Not only the temporal priority but of course the agentive priority gets in the way. I mean the Agent is the Token. According to the grammar of English it's the Token that does the work, so to speak. I started with a fairly simple notion of something 'out there' to be realised through the code. It's, again I think, something that we have to think of in the light of recent thinking about the universe we live in as an information system. And what English does to the concept, I think, is a very important clue. I mean, what any language does to the concept has to be taken as a very important clue, a way of thinking about it. And again it's at this point that the grammar as a tool for thinking about other things becomes crucial. I think linguistics has got to accept its responsibility now as being the core science. In a sense it has to replace physics as the leading science.

RH There is another problem here. If you think in terms of languages that in their structure are very very different from Indo-European languages, well then you might expand this discussion. So in some sense to me the problem becomes circular. We perceive that there is this problem in expressing the relation of realisation in the structure of English, and yet we cannot yet bring evidence from any other language that it could be otherwise, because by our way of talking we will impose a pattern on that language.
MAKH In a sense it's one of these things that probably has to be done before it's too late. What happens in practice is that people tend to borrow English (or whatever the international language is) ways of talking about things, and you want to know how they would develop otherwise.

GK We have a question which is around that point of grammar and linguistics, and the language shaping both the linguistics and the theory - what you think of as grammar symbolising reality. Following from the point about realisation that we made in our earlier question, to what extent have the meanings available in English or Chinese consciously or unconsciously shaped your model of language?

MAKH Let me answer that quite quickly. I'm sure they have, and I've tried to make it conscious. It's impossible that they couldn't, so I have tried to be conscious that they are shaping it when thinking about it. One of the things I regret most is never having been able to learn another totally different language. I made two attempts to come to

JM Chinese and English weren't different enough?

MAKH Not really. They are different in interesting and important ways, but they both have a long written tradition.

GK In terms of that question about realisation, it would be nice to have a language that was far more oral and not written, to understand how people might think about that.

RH Yes. I think if one did this kind of study one would find that writing does another thing - it objectifies in a way that the oral tradition doesn't, so that what you would get would be more like myths as metaphors for certain sorts of beliefs, certain sorts of perceptions, instead of this explicit analysis where the concepts are defined, placed in relation to each other clearly, and then you go and talk about their interrelations. That's the way it happens in languages that don't have a literary tradition.

MAKH That's also why we're still stuck with Whorf. I don't mean by that that I want to give up Whorf, as you know; what I mean is that we've got nothing else yet. It's easy enough to get the mythologies, the things at that semiotic level, if you like. Now as you move into the grammar, what happens is that nearly everyone working on the grammar in these languages is a universalist. So of course they're interested in making them all look alike; and so you're left with Whorf. And it's in the grammar, you see, that I want to find new models.

JM What about, say, between English and Chinese? I mean, can you point to the parts of your model where it would have been different if you hadn't known Chinese?

MAKH It's very hard to say of course. I suppose that one of the things that is absolutely critical has been that for me grammar has always been syntax, since Chinese has no morphology. I cracked the Chinese code first. There are other things, yes, for example temporal categories.

GK What about that point you made earlier about prosody?

MAKH Yes, that of course could have come from Firthian phonology without necessarily going through Chinese, although the Chinese helped. But it was Chinese phonology at the time of course.

GK And tone?

MAKH That's true. That's certainly true. Then there's the point of syntax. Then I think there are certain special features about Chinese grammar which did affect my thinking. There was something that Jeff Ellis and I wrote many years ago, which I must see if I still have, because it wasn't a bad article. It was on temporal categories in the modern Chinese verb.
It was important because, you see, it was a non-tense language. Jeff was extremely well informed about aspect, as he had started off in Slavonic and he had studied aspect systems round the world. So Chinese helped me to think about time relations in a non-tense sort of a way - the Chinese system of phases has a clear grammatical distinction between a kind of conative and the successive; the verb essentially doesn't mean you do something so much as you try to do it. It does not necessarily imply that you succeed. Now I don't read a naive cultural interpretation into that, but it forces you to think differently about the grammar.

JM Would the lack of morphology in Chinese have been something that pushed you into paradigmatics?

MAKH Okay, yes. That's a good point. I mean your paradigms have to be syntactic. You can't start with a word-and-paradigm approach. There are no paradigms, and one of my main strategies in working on Chinese was setting up syntactic paradigms. They were there already in that article in 1956 on Chinese grammar.

GK A question about choice. Although you model language in terms of choice, in many respects this choice is almost never free. What is the place of your position on the probabilistic nature of linguistic systems in modelling these constraints?

MAKH I have always thought of language, the language system, as essentially probabilistic. You have no idea how that has been characterised as absolutely absurd, and publicly ridiculed by Chomsky in that famous lecture in 1964. In any case, one point at a very simple level is that nobody is ever upset by being told that they are going to use the word "the" more often than they're going to use the word "noun". But they get terribly upset by being told that they're going to use the active more often than the passive. Now why is that? We know of course that we have well developed intuitions about the frequencies of a word - and can bring those to the surface. But we can't bring them to the surface about grammar; and in fact all that is doing is just showing that, as always, the grammatical end of the lexico-grammar continuum is very much deeper in the (gut) and it's much more threatening to have to bring it out. But it's there. The question then of what this actually means in terms of the nature of the system is extraordinarily complex, and it really does need a lot of thinking and writing up, exploring what it means in terms of a real understanding of the nature of probability and statistical systems and so on. Again, what I want to do is try to bring probability into the context of a general conception of systems, dynamic open systems, of what this means in terms of physical systems. It has to be seen in that light, as I was saying before.

JM This seems to be something quite critical in your theory, this idea of probabilistic systems, especially in terms of not losing sight of the text and the way in which the text feeds back into the system. You have to view texts as passes through the system which are facilitating.

MAKH I would agree with this, and you have to have this notion in order to show how the system shapes the text anyway. The pass through the system in fact changes the system, just as every morning if you turn on your radio they will tell you that the temperature is ten degrees and that's one below average, so to speak; but that has changed the average. So every time you talk, every time you produce a text, you have of course changed the system.
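[Editor's aside, not part of the interview: Halliday's radio analogy is, at bottom, the incremental update of a running mean, and the grammatical analogue is a frequency count over choices. The following minimal Python sketch, with made-up figures, illustrates how each new instance shifts the very average or probability it is measured against.]

# A minimal sketch, with hypothetical figures, of the point that each
# new instance changes the system it is measured against.

def update_mean(mean: float, count: int, observation: float) -> tuple[float, int]:
    """Fold one new observation into a running mean."""
    count += 1
    mean += (observation - mean) / count
    return mean, count

# The radio example: suppose the average so far is 11 degrees over 364 days.
# Today reads 10 -- "one below average" -- but including today moves the average.
mean, count = 11.0, 364
mean, count = update_mean(mean, count, 10.0)
print(f"new average: {mean:.4f}")   # ~10.9973: the average itself has shifted

# The grammatical analogue: treat each clause of a text as a pass through the
# active/passive system; relative frequencies estimate its probabilities,
# so every text produced nudges those estimates.
counts = {"active": 0, "passive": 0}
for choice in ["active", "active", "passive", "active"]:   # a toy "text"
    counts[choice] += 1
total = sum(counts.values())
print({option: n / total for option, n in counts.items()})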
3. Language in Education

GK The next section is on language development and education. Our first question is about language in education. You've been the driving force behind language in education movements in both

MAKH Well, I come from a family of teachers, so I suppose that the whole educational process has always been of interest; and I had my own time as a language teacher. I've always been, if you like, motivated in working on language by the conviction that this had some practical value, and that education was the most accessible application in a sense. There are a lot of other applications. Obviously an important one is clinical. But I don't know anything about that, and in any case we were a long way from actually getting linguists working together with pathologists. But it seemed in the late fifties that people were ready to think about language in education. My first position in linguistics was in the English department at

Each of us had different groups of teachers that we used to work with. Now this was when I came into mother-tongue education, because these were English teachers in the Scottish schools. It kind of reinforced my feeling that we really needed an input from linguistics. Then when I moved to London in 1963, the first thing I did was to set up this project with Nuffield money, which became the Schools Council Programme in Linguistics and English Teaching, and which produced Breakthrough to Literacy.

RH But behind this, at a deeper level, didn't you have a feeling that linguistics is a mode of action, that linguistics is for doing rather than just intellectualising?

MAKH Yes, very much. I don't really separate the two in any sense at all. I've always seen it like this. My problem has always been that teachers want results too quickly. In fact the reason why we have to work in this field as academics is that we have a longer-term perspective. We can say: "You've got to go back and do so much more fundamental work. You've got to back off for a bit. You can't expect results by next Tuesday". And that's where linguistics comes in. It's a mode of action, but it's a mode of longer-term action, if you like. You have to have the luxury of being able to look further into the future.

GK We have a question about applied linguistics as a mode of action. Our question asks whether this is an expression of your political beliefs. We have a little aside here which asks whether, like Chomsky, you see linguistics and politics as unrelated spheres and, if not, how it is that you are able to make as much use as you have of Firth's ideas when your politics and his were far apart.

MAKH That is an absolutely fascinating question. You'll have to stop me, because I'd love to go on for two hours about that. No, I do not see my linguistics and politics as separate. I see them as very closely related. To me it's very much been part of this backing-off movement. In other words, I started off when I got back to

There were a lot of things going on at the time. There was the Marrist school; there was the Pravda bust-up in 1950; there were current developments in English Marxism and things of this kind. Later on came the New Left, of course. But it seemed to me that any attempts to think politically about language and develop a Marxist linguistics were far too crude. They involved almost closing your eyes to what you actually knew about language in order to say things. My feeling was that we should not.
Of course the cost of doing this is that you may have to cease to be a Marxist, at least in a sense in which anyone would recognise you as one, in order to go away for fifty years and really do some work and do some thinking. But you're not really abandoning the political perspective. You're simply saying that in order to think politically about something as complicated as language, you've got to take a lot longer. You've got to do a lot of work. And you've got to run the risk of forgetting that what you are doing is political. Because if you force that too much to the forefront, your work will always remain at the surface; it will always be something for which you expect to have an immediate application in terms of struggle. You can't do that in the long run. You're going to pay the price that you may achieve something that's going to be useful for two weeks or two years, and then it'll be forgotten. I always wanted to see what I was going towards as, in the long run, a Marxist linguistics - towards working on language in the political context. But I felt that, in order to do that, you really had to back off and go far more deeply into the nature of language.

JM You were ready then for teachers' reactions to your ideas? It's the same problem of distancing.

MAKH Yes, it is. Now with Firth, you see, it is very interesting, because Firth was right at the other end of the political spectrum. There was in fact another interesting occasion when I went to be interviewed by him for a job at SOAS (not the same as the first one, different in a very interesting way, although with exactly the same result). It was after this interview in fact that Firth said: "Of course you'd label me a bourgeois linguist". And I said: "I think you're a Marxist", and he laughed at me. It seemed to me that, in fact, the ways in which Firth was looking at language, putting it in its social context, were in no way in conflict with what seemed to me to be a political approach. So in taking what I did from Firth, I was not separating the linguistic from the political. It seemed to me rather that most of his thinking was such that I could see it as perfectly compatible with, indeed a rather necessary step towards, what I understand as a Marxist linguistics.

GK So Firth must have been, at some level, confused - to have contradictions in...

MAKH Does that necessarily follow?

RH I don't think people's ideologies are coherent.

MAKH No, that's certainly true.

RH I don't think they are. I think Firth had this ideology about language, its role in society, about its role in forming people and all that. On the other hand he also had this very strong authoritarian attitude towards institutions and their maintenance and things.

GK A question which relates to all of that - theory out of practice. To what extent has your commitment to applied linguistics influenced your model? And how has it influenced the research that you've pursued?

MAKH Well, it's influenced it, of course, in one sense by making sure that I never had time to do much thinking about it. Yes, I think it's influenced it. It's hard to say exactly how. I mean, I've always consciously tried to feed back into thinking about language what came from, say, the experience of working with teachers. The Breakthrough materials would be one case in point.
I have always tried consciously to build teachers' resources into my own thinking about language; David Mackay, for example, made an input with observations on children's language learning in an educational context. Then, of course, through Basil Bernstein's research and Ruqaiya's part in his unit, there was another source of input from what, in the broadest sense, is a kind of applied linguistics.

RH Can I stray from the point here? It seems to me that talking to the teachers and the need to make your linguistics accessible to the pedagogical circle had a different influence on your work from that which, say, contact with Bernstein's unit might have had. The first one forced you to write in a way that would make your material accessible. In other words, I do not see that the shape of the theory, the categories as such, got terribly shaped by that (though it is always a bit doubtful to make these kinds of divisions). On the other hand I feel that contact with Bernstein's work had an effect of a slightly different kind, in that it really fed right into the theoretical thinking.

MAKH That is definitely true. I wasn't prepared to shift, because of teachers, in what seemed to me to be short-term directions, just because something seemed likely to be a payoff in class and so on. So it was more in the form of presentation. But I think there was some input from educational applications.

GK Most of your work has been in mother-tongue teaching, and we were wondering how much of this was historical accident, how much by design?

MAKH My first publications relevant to language teaching had a strong E.S.L. focus. In

GK That's true now isn't it, in lots of ways?

MAKH It's true in lots of ways, although there are some ways now in which I think mother-tongue teaching is taking over.

GK You've been centrally involved in two major mother-tongue research programs, the Nuffield/Schools Council programme and the Language Development Project work in

MAKH I suppose what has been achieved is a number of fads and fashions, some of which will remain. In the English Language Teaching context it seems to me there were some applied linguistic developments which were important. One is the notion of language for specific purposes, which came quite squarely out of Firth's restricted languages and concept of register. And so I think that's been an important part. I think in the mother-tongue area, two things have been important. One is the awareness of the child as a human being who has been learning language essentially from birth, so that the learning in school becomes continuous from that. And related to that perhaps, the notion of language as a process in education. Things have changed. The very concept of language education didn't exist twenty-five years ago, or even fifteen years ago. So I think that most of the achievements have been based on gradually raising the level of awareness of language among educators. One has to remember this sort of thing has to go on, over and over and over again. It doesn't suddenly happen.

GK But are there things now that you don't any longer have to say very strongly that you might have had to say twenty years ago?

MAKH Well, there are some I think, but not very many. I think you have to go on and on and on saying them every year, to each new group of students. I suppose we don't any longer have to fight the old-fashioned views of correctness and language as table-manners (though again we can't be complacent about these things).
And we don't have to fight the notion of standard and dialect, with dialects as being inferior. I think people have moved quite a lot on that. There's a more complicated history as far as relations between spoken and written language are concerned. At one time I would have said we no longer have to fight the battle for recognition of spoken language in education. But I'm not sure about that now. I think we're going to have to gear up for a new battle there, though on a different plane certainly. Even where one doesn't feel there has been much progress, the discussion may have moved onto a different plane. I think we've always been aware, and it's certainly true now in

GK We've got a question which follows that up a little bit. Your theory has been designed to solve problems, or at least to play an active part in solving them. Which parts of it do you think have been most effective, and what are you most proud of in terms of what has been achieved?

MAKH I suppose it ties up with this section generally. I feel that it's been in the educational area. I think I'm a little bit proud of that, and have that feeling on various levels. For example, I first started intervening myself when David Abercrombie said to me: "Will you teach on my summer school, the British Council Summer School for the Phonetics of English for Foreign Students?" This was in 1959. And I said: "Certainly. What do you want me to teach?" He said: "Well, you know Chinese. Teach intonation". I knew nothing about English intonation, so I started studying it, trying to describe English in such a way that the description was useful to those who were going to be working on language in the classroom, in an educational context. The fact that we are now getting to the point when people are saying "I can use this grammar for working on language in the classroom" is an achievement. When I went to the Nuffield Foundation in 1963 I said: "I want some money for working on language in this sense, but I don't want to see any teachers for years, because we're not ready for them, so to speak". And they said: "If you put the teachers in right away, we'll double your money". You can't refuse that kind of thing. Of course they were quite right. What this meant was that we used to have those weekly seminars, when we had David Mackay and Peter Doughty working on grammar from the point of view of where it was going to be used. Now at that time you didn't dare put it into the program because, certainly in

One of the things I feel most happy about is the developmental interpretation that I tried to put on early language development, and the importance of that for later work on language in education. That again came out of teachers. When Language in Use was taken up in the 'Approved Schools', that is the schools run by the Home Office for children who had been before the courts, the teachers came to Geoffrey Thornton and Peter Doughty and said: "We want to use these materials. Would you lay on a workshop for us?" And they asked me to go and talk. At the same time David Mackay and Brian Thompson, who were the Breakthrough team, set up a workshop for primary school teachers. Both groups asked me the same question, which was: "Tell us something about the language experience of children before they come into school at all". I hadn't done anything of course at that time, but I read around on what there was. Not much of it was terribly useful. Ruth Weir's was one of the best in those days. But it started me thinking about early language development.
That was the time when Neil was born and when the Canadian Government wouldn't let us into

RH Those were difficult, perhaps fortuitously difficult times in more than one respect. In some sense the rise of Chomsky's linguistics must have impinged on your work in the sixties and the early seventies. Why did you hold back, in spite of general acceptance of the TG framework, and how did you see yourself in relation to that whole movement?

MAKH Chomsky's work quickly became a new establishment, and in many ways a rather brutal establishment actually. At

The way the goals of linguistics were defined at the time, the notions embodied in all the slogans that were around, 'competence' and 'performance' and things of this kind, I just found quite unacceptable. Intellectually I thought they were simply misguided, and in practical terms I thought they were no use. So I thought that if one was really interested in developing a linguistics that has social and educational and other relevance, that wasn't going to help. We just had to keep going and hope that it would wash over, and that we should be able to get people listening again to the kind of linguistics I thought was relevant.

RH And it happened.

MAKH Yes, it happened, and now we know it'll all disappear into the history of the subject eventually.

4. Language and Context

GK We've got a set of questions on language, linguistics and context. Our first question is on politics. You are someone whose career has been disrupted more than once because of your political beliefs. Have these experiences affected your approach to linguistics, especially linguistics as doing?

MAKH No, I don't think so. I mean, yes, okay, I was witch-hunted out of a couple of jobs for political reasons. And the British Council refused to send me anywhere at all during that time, however much people asked. But I don't think that this has affected my approach to linguistics. Linguistics as doing is part of a political approach, and I didn't suffer in the way that a lot of people suffer. Of course I've no doubt that I would have gone in very different directions had this not happened. For example, if I had been taken on and kept on in the Chinese department at SOAS I might well have stayed principally in Chinese studies and worked on Chinese rather than moving into linguistics generally. And secondly, of course, the thing that I really wanted was the job on Chinese linguistics in Firth's department. It was for purely political reasons that I didn't get that. I wish that I had that interview on tape, because it would be one of the most marvellous documents ever. It would be fantastic, absolutely fantastic.

RH For the analysis of ideology!

MAKH Yes, it really would be. It was absolutely incredible. In any case, if I'd got that, I think I would have remained much more closely a Firthian. I wanted to get into Firth's department. If I had got into Firth's department, I would quite definitely have worked much more within the Firthian framework. You have to remember that to the extent that I have departed from Firth, certainly initially, it was simply because I wasn't there in the group in any sense, and therefore I wasn't able to get answers to questions and, in some ways, to correct misunderstandings. This meant that, in a sense, I was pushed out to working on my own in two instances where, in either case, if this hadn't happened, I might well have continued to work within the pre-existing frameworks, both institutional and intellectual. I'm not sorry.
GK Our second question is about language and social reality. You are one of the few linguists who have followed Whorf in arguing that language realises, and is at the same time a metaphor for, reality. How Whorfian is your conception of language, and what part has Bernstein's theory played in shaping your views?

MAKH Well, I think it's Whorfian. Partly this is because you can make Whorf mean anything you want. When I say I think my conception of language is Whorfian, you know what I mean; but for a lot of people who would interpret Whorf differently that might not be the case. I certainly follow some aspects of Whorf's work which I think are absolutely fundamental. One is the relation of language to habitual thought and behaviour. Another one, perhaps less taken up, but which I think is fundamental, is the notion of the 'cryptotype', where it seems to me that Whorf (and of course in this he was simply following the Boas-Sapir tradition) was so right in seeing the action at the most unconscious levels. The whole point is that the Whorfian effect takes place precisely because of what is going on at the most unconscious level. And, one might add to that, it's going on in what is an evolved human system and not, as sometimes represented, an artificial system. Language is a natural system. In fact, it is these two things, the naturalness and the unconsciousness, which make these effects possible. I was arguing this with an economist about two years ago. He was saying in effect that it is only through the most conscious forms of human activity that ideologies are transmitted and that social structure and the social system are maintained. And he was therefore defending sociological and economic models of research. In other words, you go and study how people plan their budgets or do their shopping or whatever. And I was arguing the opposite case. He was saying: "How can you claim that language can have any influence on this, because it's all so unconscious?" He wasn't disputing that the processes were unconscious, but saying that because they were unconscious they could have no effect on ideology. And my view is exactly the opposite - that it's at the most unconscious level that we essentially construct reality. And that, I think, is Whorfian. Therefore, particularly in terms of the grammar, it's the notion of the cryptotype that I would see as absolutely essential.

JM I wondered if Chinese comes in here again, in the sense that a grammar of Chinese could only be a grammar of covert categories, because there are no overt ones.

MAKH It never occurred to me, but it may well be true. I've never thought of that. Now as far as Bernstein is concerned, he himself, as he often acknowledged, also took a great deal from Whorf. He makes the entirely valid point that Whorf is leaving out the component of the social structure. Whorf essentially went straight at the ideational level, from the language into the culture, so to speak. Bernstein has pointed out that there has to be, at least in any general theory of cultural transmission, the intermediary of the social structure. I think this is actually right. Bernstein is still, uniquely as a sociologist, someone who has built language in as an essential component of his theory, both as a theory of cultural transmission and as a general sociological and deep philosophical theory. He convinced me that this was possible. Perhaps this hasn't come out clearly from what went before, because we talked more about the applied context, educational and other applications.
But I think it's important also to say that a representation of language has to be able to interpret language in the context of more general theories of social structure, social processes and so on, and ultimately of the whole environment that we live in. In general that had never been done. In fact, the problem in linguistics has always been that linguists have always shouted loudly for the autonomy of the subject, and that always seems to me to be of very little interest. Linguistics is interesting because it's not autonomous. It has to be part of something else. Now Bernstein was the first person who made it part of something else, and so the way in which he did this was obviously important. I used to argue with Bernstein when he was doing it the wrong way. Early on he was looking for syntactic interpretations of elaborated and restricted codes; I always said: "That's not where you should be looking". And he gradually moved into a much more semantic interpretation.

JM What did Bernstein have that you didn't have from Malinowski or Firth? They both have context, don't they?

MAKH I think he added a coherent theory of social structure. I know he himself has now disclaimed some aspects of this, but at the time, as it influenced me, he added a whole interpretative framework which enabled you to show not only the Whorfian effect, but also why patterns of educational failure were essentially class-linked. In societies like the current Western ones, with their very strong hierarchical structures - of class primarily, and all the others - he asked: "How are these, in fact, transmitted and maintained? What essentially is the nature of these hierarchies as semiotic constructs?" Bernstein put that in. I don't think that was there before. At the time there was all this stupid argument - Labov was trying to demolish him. But if there was one person who needed Bernstein to give him theoretical underpinning, it was Labov. I mean, Labov doesn't make sense unless you've got something like Bernstein behind him.

GK We have a question about semiotics and systemics. Your model of language has connections to the work of Saussure and Hjelmslev alongside Firth. How would you position yourself with respect to continental structuralism, and what role do you see for systemic theory in relation to post-structuralism and semiotics?

MAKH We need another seminar on this one. I mean, it's a good thing we didn't start with this question. Firth, as you know, was very critical of Saussure on a number of points and regarded him as somebody who was perpetuating certain ideas in the history of Western thought which he didn't like, certain basic dualities. Now Ruqaiya would say, I think, that he was misrepresenting Saussure in a number of these ways, and maybe he was. In any case, it seems to me that the world after Saussure was different from the world before. That's a fact, and I certainly belong to the world after, although certainly there were things in Saussure, when I first read him as a student many years ago and on re-reading subsequently, that I wouldn't accept. I do think I share Firth's suspicion of langue/parole, although from a somewhat different standpoint. As I see it, if you take the Saussurean view then you find it very difficult to show how systems evolve. But it seemed to me that Hjelmslev had, to a certain extent, built on Saussure and also corrected that point of his; Hjelmslev's notions were much more adequate.
To the extent that Hjelmslev differed from Firth, there are two important respects in which I would follow Hjelmslev. One is that Hjelmslev did have a very strong concept of a linguistic system, but a non-universalist one. This lies between the Firthian extreme, which is "There's no such thing as a language; there's only text and language events", and the other extreme of the universalists. Hjelmslev occupies a sort of middle position, which I think I would share. And then, of course, Hjelmslev constructed a fairly clear, useful stratificational model. I haven't used it in the Hjelmslevian form, and there are certain parallels built in between the different planes which I certainly wouldn't follow. But certainly in the attempts to construct an overall pattern at the time when I was first doing this, I was very much influenced by Hjelmslev, and that's something which Firth just didn't have. In the last five to seven years I just haven't kept up with all the semiotic and post-structuralist literature, so I've got a very partial picture. I was in Urbino for two or three summers in the late sixties and early seventies. That was when I first interacted with semiotics in the continental sense. It seemed to me that the general concept embodied in semiotics was a very valuable one, because it enabled me to say: "Here is a context within which to study language". Partly it's simply saying: "OK. We can look at language as one among a number of semiotic systems". That's valuable and important in itself. That then lets us look at its special features. We can then ask questions about its special status - the old questions about the extent to which language is unique - because of the connotative semiotic notion, because it is the expression through which other semiotic systems are realised. And then thirdly, at a deeper plane, semiotics provided a model for representing human phenomena in general, cultures and all social phenomena, as information systems. This, of course, is really a development in line with technology, it seems to me. It goes with an age in which most people are now employed in exchanging information rather than goods and services. Technology has become information technology. So our interpretations of the culture are interpretations of it as an assembly of information systems. This is what semiotics has tried to interpret, and increasingly, as I've mentioned, the physical sciences are interpreting the universe as an information system. So semiotics should provide a good home within which linguistics can flourish in this particular age, it seems to me. Now there are certain respects, of course, in which it's gone off in directions that I don't find so congenial.

JM If you have a well-articulated, comprehensive Halliday/Bernstein model, would that be an alternative to what the Europeans have in mind? With respect to the language and ideology conference last year and the way people were talking about ideology and language, it struck me as another way of talking about things that the Halliday/Bernstein model would be interested in. It's not doing something else. Gunther should really follow this up.

GK I feel that systemic theory provides the most worked-out model for thinking about semiosis. And semiotics, on the other hand, has the ability to ask certain kinds of questions, or to offer a slightly different viewpoint from which to look at language again. I think that's the formula of the relationship.

RH One of the problems, of course, is what one is thinking of as an example of post-structuralism.
RH If you're thinking of Derrida, that raises a different question which, at the deepest level, is really a question of realisation - the signifier and the signified and the relation between them. If you look at Bourdieu, then that is a different question again, and that question is the question of langue and parole, the sorts of relations that there are.

MAKH Bourdieu would be much more compatible with what Jim is referring to as the Halliday/Bernstein thing.

RH Yes. Greimas is yet another voice. He's not exactly what you would call a post-structuralist. But it is really very difficult with Barthes and Greimas to say exactly at what point they cease to be seen as structuralist. I myself find it very difficult to define the term structuralism. And that's what makes that question a little bit difficult to answer in one go.

MAKH We need another seminar on this one too. It seems to me that, in so far as post-structuralism has become a literary theory, some of the ideas that are used in discussions of literature and are ascribed to structuralists by people working in the general semiotic and post-structuralist field really aren't there at all. I mean, they're quite different from what these people are actually saying.

RH That's generated a very interesting point: how it is that a discipline retains its old assumptions while using new names, and resists any innovations. Literary criticism is one of the disciplines that is a prime example of this kind of thing. One should study it to see how ideology is retained rather than changed.

GK It seems to me, just to make two comments, that structuralism and post-structuralism ask questions of linguistic theory which are important to ask. Derrida's work, for instance, really sharpens up the question about system, because it in itself is a model that works without system. It works only with the surface effects of structures. So it asks really important questions about system. But the thing that interests me most is that post-structuralism asks questions about the constitution of language uses, in linguistic terms, which linguistics, because of its concerns with the system itself, hasn't, I think, addressed as fully as it might. That seems to me important. Anyway, we have a question on speech and writing. Is there an implied valuation of speech over writing in your descriptive work? The second part of the question is: how does your recent work on grammatical metaphor relate to this issue?

MAKH In a sense there is an implied valuation of speech over writing, in relation to this notion of levels of consciousness, if you like. It seems to me there's a very important sense in which our whole ability to mean is constructed and developed through speech, and that this is inevitably so. In other words, speech is where the semantic frontiers are enlarged and the potential is basically increased. I know that one of the problems here is that there's a risk of this being interpreted like the old, early-twentieth-century structural linguists, who insisted on the primacy of speech over writing for other reasons. But there are things I want to say about natural spontaneous speech which do, in a sense, give it a priority. This has been partly political, of course, because I feel that it is essential to give speech a place alongside writing in human learning and therefore in the educational process. I still feel very strongly about that. Now the work on grammatical metaphor is partly an attempt to explore the nature of the complementarity between speech and writing.
There are modes of action and modes of learning which are more spoken, more speech-like, and which are more naturally associated with spoken language, and others which are more naturally associated with written language. This is something which needs to be explored. I'm always asking teachers if they feel that there are certain things in what they do which are more naturally approached through the spoken. At a deeper level, differences between speech and writing have to be explored in the wider semiotic context that we're working with. We need to ask about writing as a medium, the development of the written language, and the development of technical discourse, exploring a technicalisation that is part and parcel of the process of writing and which involves grammatical metaphor (nominalising, for example, a spoken wording like "the glass cracks faster the harder you press" into a written one like "glass crack growth rate"). We need to ask what the nature of the relationship among these things is, and between all of these and the underlying sorts of phenomena that they're used to describe. Beyond this it's the whole question of how far we can use notions of grammatical metaphor, and indeed the whole systemic approach to language, to try and understand the nature of knowledge itself. It relates to what we've been talking about in some of these seminars on a language-based theory of learning. When I started in the E.S.L. area I remember going to Beth Ingram, the psychologist at the

We had a lot of useful ideas, but nothing that could be thought of as a general learning theory into which our work could be fitted. So it seems to me we have to ask the question: "Well, can we build one out of language?" I mean: "Don't we by now know a lot more?" I am obviously influenced by Jim here, who's been pointing out all along that linguistics should in fact simply take over a lot of these things and see what it can say from a language point of view. And I certainly think that we have to work towards a much more language-based theory of learning and a language-based theory of knowledge. And in that, notions like grammatical metaphor, and the difference between spoken and written language, are obviously fundamental.

GK Our next question in a sense addresses that in a somewhat broader way. Your work has paved the way for a radically larger role for linguistics in the humanities and social sciences, and perhaps beyond, than has been possible in the past. What, to your mind, are the limits of semiosis? Just how far can a language-based model be pursued before turning over to other disciplines?

MAKH I think that we've drawn disciplinary boundaries on the whole far too much. We had to have them, of course. I think Mary Douglas sorted that one out many years ago very, very well. The discourse, so to speak, had to be created in definable, circumscribed realms. But the cost of this was defining these far too much in terms of the object that was being studied. Thus linguistics is the study of language, and so on. Now that is really not what disciplines are about. A discipline is really defined by the questions you are asking. And in order to answer those questions you may be studying thousands of different things. Linguists start by asking questions about language. And if you ask "Well, how far do questions about language take us?", then the answer is "They take us way beyond anywhere that we are yet operating in". The frontiers are well beyond. I don't know where they are, but they're certainly well beyond where we are at the moment.
They certainly take us into a lot of questions that have been traditionally questions of philosophy, which has always been about language to the extent that it's been about anything, and into questions of general science. I mean, this is why I've become increasingly interested in scientific language and general problems of science. It has become increasingly clear that you can ask questions about language which turn out to take you into, and even way beyond, human systems. So I don't know where the frontiers are, but they're certainly a great deal further than I think we've been able to push them. And, in a sense, I've tried to have this kind of perspective in view all along; I wanted a linguistics which is defined not by its object, language, but rather by its questions. These questions begin by being questions about language but eventually expand into areas that we don't expect. I certainly think that we should be fighting a lot more for the centrality of linguistics, not only in the human sciences but in science generally, at least for the foreseeable future.

GK In what way do you mean that? As a means of elucidating what scientific disciplines are doing?

MAKH Yes. Current thinking has been emphasising the similarities among human systems, and between human and non-human systems - between human and physical systems, if you like. Take first of all Lemke's work, which I think is tremendously important, on dynamic open systems. He's taken over the social semiotic notion, which he's characterised in essentially physical terms. Language fits in, but then becomes a way of looking at other human semiotic systems, which are language-like in this respect and for which language serves as both the semiotic which realises them (in the connotative semiotic sense) and also a model and a metaphor in a very important sense. I think you can go beyond that now into physical systems. The universe in modern physics is being thought of as one, whole, indivisible and conscious. In other words, the present generation of physicists is adding consciousness to the universe, talking about exchange of information. That came originally out of quantum physics. Now my point is I want to say not "one, whole, indivisible and conscious" but rather "one, whole, indivisible and communicative". In other words, I want to say the universe, in an important sense, is made of language, or at least made of something of which human language is a special case. Taking the notion of a natural grammar one step further is to say that language is as it is because it not only models human semiotic systems (realities we construct in a very important sense); it also models natural systems. Obviously, talking like that is talking in a very abstract way; but on the other hand, I think that there is an important sense in which the situation has been reversed. Instead of modelling all our thinking in some respects on physics, as in the classical period (and from physics via biology it got into linguistics), I think there's an important sense in which in the next period the thing is going the other way round. We are going to start from the notion of the universe as a kind of language, if you like, and therefore move outwards from linguistics - towards human, and then biological, and then physical systems.

GK A materialist linguistics. We have one last question, which is about linguistics and machines. Very early in your career you worked on machine translation, and since then your work has played a central part in a number of artificial intelligence projects.
Is this because of or in spite of your socio-functional orientation? How has your recent involvement with I.S.I. influenced your thinking about language, linguistics and machines?

MAKH I don't see the interests as in any sense conflicting. As you know, I have never thought of either the machine or the linguistic theory as in any sense a model of human psychological processes, so there's no question of seeing some model of the brain as a common base. Now I've had one concern throughout, which is that it seemed to me right from the beginning, when I first tried to learn about this back in the late fifties in the Cambridge Language Research Unit, that the machine was, in principle, a valuable research tool. Now that was the nature of my first interest. By seeing if we could translate Italian sentences into Chinese, which we were doing at that time, we learned more about language. I've been in and out three times now. First of all, in the very early stages, we had some fascinating discussions and it was all great fun; but it was obvious that the technology itself was still so primitive that we were constrained by the hardware and the internal housekeeping rules, so that we weren't actually learning anything about language in the process. I had another interest in it too, which is that I felt that machine translation had an important political role to play. There were lots of cultures around the world where people were beginning to be educated in a mother tongue, and if you could possibly have a machine to translate a lot of textbooks, it would at least help the process. So there were practical concerns like that. Then in the late sixties I came back again with the project on the Linguistic Properties of Scientific English that Rodney Huddleston and Dick Hudson, Eugene Winter and Alec Henrici were working on. Henrici was the programmer, and at that time we used the machine to do one or two things in systemic theory. For example, he had a program for generating paradigms from a network. So you could test out a network that way. And he could even run little realisations through it. But again there were tremendous limits in the technology. At that time I started being interested in generating and parsing programs. I wanted to test the theory and, of course, I was responding to external pressure: at that time, in the sixties, unless you could show that your theory was totally formalisable it was of no interest. This was why I was interested in Henrici actually generating clauses by computer. But my real interest in that was that I was beginning to realise that you could no longer test grammars except with a machine, in the sense that they were too big. If you really had delicate networks, the paradigms were just huge; you had to have some way of testing this. There was still a limit on the technology then. I wanted to write the grammar in metafunctional terms. I wanted to say: "I don't like the sort of transformational model where you have a deep structure and then obligatory transformations and then optional transformations on top of them. I want to be able to represent things as being simultaneous and unordered". And the answer was: "Well, we can't compute this and therefore it must be wrong". I never accepted that answer. It always seemed to me to be incredibly arrogant to say that if our logic or our hardware cannot do something at this stage, then it must be wrong. So I just backed off again, and I never thought I would come back into it at all.
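As a gloss on Henrici's paradigm generator mentioned above: the core idea of generating a paradigm from a network can be sketched in a few lines. This is a reconstruction, not Henrici's actual program, made under the simplifying assumption of fully independent simultaneous systems (real system networks have entry conditions and cross-dependencies), and the toy MOOD/POLARITY/VOICE network is assumed purely for illustration:

```python
from itertools import product

# Hypothetical toy network: three simultaneous systems for the English clause.
# Assumed for illustration only; in a real network, entry conditions mean
# not every combination of features is actually available.
network = {
    "MOOD": ["declarative", "interrogative", "imperative"],
    "POLARITY": ["positive", "negative"],
    "VOICE": ["active", "passive"],
}

def paradigm(systems):
    """Enumerate every selection expression the flattened network licenses."""
    names = list(systems)
    for combo in product(*(systems[name] for name in names)):
        yield dict(zip(names, combo))

selections = list(paradigm(network))
print(len(selections))  # 3 * 2 * 2 = 12
```

The combinatorics are the point: every added system multiplies the size of the paradigm, which is why, as Halliday says, a grammar of any delicacy can no longer be tested except with a machine.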
I thought that was it, until Bill Mann came along when we were in

Parallel processing is not a problem, but there are still constraints on the extent to which each of these processes can consult all the others as it's going along and modify its own behaviour in the light of that consultation. It seems to me that if you can get that kind of thing available, then we really can learn a lot by constructing parsing and generating programs and using them to test the grammar. It's been as a research tool mainly that I have been interested in this, although there obviously are practical applications that are useful.

RH Where is the point where systemics needs more growth? Which direction is it going in? What do you hope systemicists will develop?

MAKH Well, more of what they are doing, I think. I mean, we just need more people, more time, more resources, the usual thing. One of the things that we have been very weak on is any kind of clinical application, and the underlying theory that goes with it. Bruce McKellar is the one who has certainly done most that I find interesting, but he hasn't written up much of it yet on that side. I mean, he's written an enormous amount of background material, but less about the neurolinguistics. McKellar's notion is that systemic theory is likely to be useful - more so than others, he thinks - in developing neurolinguistics. He doesn't believe that there is such a thing yet, but he sees ways of doing it. And the interesting thing is that he sees this not so much in relation to the particular representations of the grammar or the linguistic system, as in the social semiotic background to it. Now that's one development I think is very important - towards a neurolinguistics and towards clinical applications. Again, we will in turn learn from these things. So I would like to see it far more used in the context of aphasiology and all kinds of studies of developmental disorders.

RH Let's go back to the machines and how they can be used for testing the grammar. At the present moment all they can do is test the grammar of a clause or, with luck, a clause complex; but they are not able to do anything yet about what constitutes a normal, natural sequence of people's sayings in any context of situation, without going up and building in the context of situation. That was the context in which I had raised that question of probability, because it seems to me that the only way that probability is going to link up with text is in some way through context.

MAKH Well, there has to be some sort of register model as part of it. But I don't know that in principle there's any reason why this can't be built in, given the point I was making about the remaining limitations of the technology. The knowledge base and the text planner - 'the environment', as they call it in the I.S.I. project - are still very primitive. But they're primitive because we just haven't had enough people doing enough work on them. I think that, given a research effort in that area, it should be possible to represent these things in such a way that they can be part of the text generation.

The questions in the interview schedule were designed by Hasan, Kress and Martin and given to Halliday a few days prior to the interview. Hasan and Martin subsequently edited the interview into its present form.

Horvath, a Labovian sociolinguist, was Halliday's first appointment to the Department of Linguistics he founded at the

[MAKH to unpack...Further Education Training Scheme???]
Firth, J R 1950 Personality and language in society. Sociological Review 42. 37-52. [Reprinted in J R Firth 1957 Papers in Linguistics 1934-1951.]

Halliday did in fact join the Communist Party, and was a member until 1957, when he left over the party's failure to condemn or even to properly discuss condemning

Halliday's Ph.D. thesis was published as Halliday, M A K 1959 The Language of the Chinese "Secret History of the Mongols".

For an overview of Firth's theory see Firth, J R 1957b A Synopsis of Linguistic Theory, 1930-1955. Studies in Linguistic Analysis (Special volume of the Philological Society).

Allen, W S 1956 System and structure in the Abaza verbal complex. Transactions of the Philological Society. 127-176.

Halliday, M A K 1964 Syntax and the consumer. C I J M Stuart [Ed.] Report of the Fifteenth Annual (First International) Round Table Meeting on Linguistics and Language Teaching.

Halliday's first scale and category grammar of English was written on the cardboard inserts he received inside his shirts from

Lemke, J 1984 Semiotics and Education.

For a summary of the discussions on synoptic and dynamic representations referred to here see Martin, J R 1985 Process and text: two aspects of semiosis. J D Benson & W S Greaves [Eds.] Systemic Perspectives on Discourse vol. 1: selected theoretical papers from the 9th International Systemic Workshop.

See Halliday, M A K 1975 Learning How to Mean: explorations in the development of language.

For further discussion see Halliday, M A K 1987a Language and the order of nature. [MAKH to fill in reference...]

For a summary article see Halliday, M A K in press Towards probabilistic interpretations.

For a retrospective overview of this initiative, which produced the Breakthrough to Literacy and Language in Use materials, see Pearce, J, G Thornton & D Mackay The Programme in Linguistics and English Teaching.

See Bernstein, B 1971, 1973, 1975, 1990 Class, Codes and Control, Vols. 1, 2, 3, 4.

During the late 1970s the Curriculum Development Centre in

This is a fluid network of linguists and educators (anchored by Fran Christie and initiated by Halliday in 1979) which has held several conferences on language in education issues around

See Gray, B 1985 Helping children to become language learners in the classroom. M Christie [Ed.] Aboriginal Perspectives on Experience and Learning: the role of language in Aboriginal education.

See Halliday, M A K 1967 Intonation and Grammar in British English.

See Halliday, M A K in press New ways of meaning: a challenge to applied linguistics. To appear in

For discussion of these debates see Atkinson, P 1985 Language Structure and Reproduction: an introduction to the sociology of Basil Bernstein.

See Hasan, R 1985 Meaning, context and text: fifty years after Malinowski. J D Benson & W

For a recent statement on levels see Halliday, M A K in press How do you mean? M Davies & L Ravelli [Eds.] Papers from the Seventeenth International Systemic Congress, University of Stirling, July 1990.

For the proceedings of this conference see Threadgold, T, E A Grosz, G Kress & M A K Halliday 1986 Language, Semiotics, Ideology.

See Halliday, M A K 1985a An Introduction to Functional Grammar.

See Halliday, M A K 1988 On the language of physical science. M Ghadessy [Ed.] Registers of Written English: situational factors and linguistic features.

See for example Martin, J R 1986a Grammaticalising ecology: the politics of baby seals and kangaroos. T Threadgold, E A Grosz, G Kress & M A K Halliday. Language, Semiotics, Ideology.
[MAKH to add reference...???]

The Information Sciences Institute in

See Halliday, M A K 1962 Linguistics and machine translation. Zeitschrift für Phonetik, Sprachwissenschaft und Kommunikationsforschung 15.1/2. 145-158. [Republished in M A K Halliday & A McIntosh 1966 Patterns of Language: papers in general, descriptive and applied linguistics.]

See Huddleston, R D, R A Hudson, E Winter & A Henrici 1968 Sentence and Clause in Scientific English.

See McKellar, G B 1987 The place of socio-semantics in contemporary thought. R Steele & T Threadgold [Eds.] Language Topics: essays in honour of Michael Halliday.
The Scoop: R 1975, directed by Norman Jewison and starring James Caan, John Houseman, Maud Adams, and John Beck

Tagline: It's More Than Just a Game!

Summary Capsule: In a not-too-distant future, James Caan and his spiky glove defy the nasty evil corporations of dystopia by playing a really violent game when they tell him not to.

Rich's Rating: Full contact reviewing!

Rich's Review: Right then, you lovely people you, before I begin this review in earnest, I'd like to point out that while many of you may already have lapsed from your oh-so-sincere New Year's Resolutions to go to the gym for 9 hours every week, quit whichever habit you're publicly ashamed of but secretly really enjoy (like Poolman and his exotic jelly collection… don't ask), or to randomly give internet movie reviewers large chunks of cash (I know you're out there somewhere, Crazy Millionaire Philanthropist), I am resolute in my determination to see mine through. And so, in keeping with the promise I made earlier in the year, this review represents the first of the films selected for me by a member of our wonderful forum community. The lucky mini-mutant in question did try to foist a review of From Justin To Kelly on me, but since the UK border controls prohibit the import of toxic waste, I was spared that trauma, and in a kinder, gentler moment, the gent in question relented, instead asking for a review of Rollerball — so Uber, this one's for you, buddy!

Right, with that out of the way, let's hop into our time machine and travel back to the days before I was even born, where we'll be watching a film about what was the future then but is actually the recent past now, though of course they didn't know that back then. If they had, Rollerball would probably have been called something like 'Internet', and would be the riveting study of how James Caan managed to power-level his Elven Druid to level 35 in just 6 weeks on Everquest. However, the future (our present) according to 1975 (the past) is a very different place. No longer do we care about such trivial things as 'Countries' or 'Nations' – in Rollerball's 2004, we are instead all affiliated to several great and powerful corporations, whose vast hegemony forms some kind of weird capitalism/socialism fusion where the corporations have all the money, but then just give the people who have no money (mostly everybody) stuff. So, with most of humankind happy to trade in their intellectual freedom and right to self-expression to the corporation in exchange for free TV, sofas, and a lifetime's supply of snack food and tasty soda beverages, everyone's happy, right?

No such luck. You see, the corporations controlling how much Joe Six-pack can express himself, or indeed whether he can have any kind of independent thought at all, makes all the little people very angry. And what better way for the corporations to help vent that anger than to create a really weird sport in which lots of people get really badly hurt and/or killed?

Rollerball, said sport in question, is hard to describe. Like the rules to American Football or Cricket (depending on which side of the pond you hail from), the rules to Rollerball seem strangely formless to the uninitiated. There are men on rollerskates, men on motorcycles, spiky gloves, a big metal ball and a goal of some kind. The object of the game, however, seems simply to be to kill the opposing team with this variety of props, rather than any actual 'scoring' of any kind — but the crowd, who are only here for the violence anyway, don't seem to mind.

Enter Jonathan E (James Caan).
Somehow, in a career playing Rollerball for 10 years, he's managed to avoid being mangled, spiked, run over, crushed, or in any way mutilated; as a result, the Rollerball crowds see him as the sport's great celebrity — though, given the casualty rate in Rollerball, it's hard for him to have any competition. Unfortunately for Jonathan, the corporation for whom his team plays aren't as overjoyed at his success as you might think. You see, all those normal people actually liking and supporting Jonathan is bringing them dangerously close to 'thinking', and that just won't do — so the corporation promptly tells Jonathan to retire.

Now, up until this point, I have absolutely no problem with this film (apart from the fact that it's essentially meant to be taking place around about now, and as I look out of my window I note that silver jumpsuits have yet to become the de rigueur fashion accessory they appear to be in this film). However, Jonathan's logic here escapes me. If we imagine the conversation that might have taken place, see if you can spot where the plot of the movie and my own personal logic differ:

CORPORATION: "Hey, Jonathan — we know you're a big star, but how about you stop playing that insanely dangerous game where you could be mangled and mutilated at any time and retire to a life of luxury where we will provide for your every desire, including that one about the two girls in the matching silver jumpsuits. Oh, and by the way, if you say no, we're gonna make your personal and public life as hard as possible. Whatdya say?"

Now, if this is me, I'm already asking where to sign up. Jonathan E, however, is not willing to give up on his weekly brushes with death, and decides to defy the corporate commandment to cease and desist. And so free-thinking, man-of-the-people Jonathan turns his back on the Corporation that created (so to speak, not literally) him, and begins a David & Goliath war between the public hero and the faceless corporation (whose face in the film, evil executive Bartholomew, is played to a creepy tee by John Houseman). I'll not tell you who wins – if you want to find out, go watch the movie.

Watching this again, having only really seen it once before when I was but a young teen, this is the first time I've actually sat down and thought about it as a film rather than just thinking "Hey, cool, that guy totally got run over by that motorcycle". As a political allegory for control of the masses, it's about as subtle as an Adam Sandler comedy, but the more I think about it, the more confused I get about which political system the movie is aimed at. The Capitalist/Socialist state which exists in Rollerball (especially things like the 'summarising' of modern literature, and the system's attempts to quash Jonathan's 'cult of personality') seems like it's aimed at what was then Soviet Russia… but the whole capitalism thing sits neatly with a more Western approach. Maybe the point of the film is that all forms of government are bad, which, while I would be happy if I never had to pay taxes again, makes me feel that a lack of government of any form might make essential services such as 'law' and 'safety' a little harder to depend on.

What does all this random drivel mean? Rollerball is an interesting if dated film, which ladles out spoonfuls of heavy-handed moralising in between wonderfully shot and excitingly violent Rollerball games which make up for in carnage what they lack in comprehensibility.
It’s a classic of the cult sci-fi genre, and anyone who has liked films such as The Matrix or Equilibrium should be able to see the seeds of the ideas for those films present here. James Caan is a great here (and went on to be great in The Godfather soon afterward), and John Houseman is an excellent nemesis. A final word before I leave. Some of you may have noticed that a film, ostensibly a remake of this, came out in 2002. Please don’t go and watch it. I suffered. You don’t have to. - Norman Jewison said he cast James Caan as Jonathan E, the champion Rollerball player, after seeing him play Brian Piccolo, the real-life Chicago Bears running back in Brian’s Song. - According to the author, William Harrison, Rollerball was inspired by an Arkansas Razorback basketball in Barnhill Arena during the era of coach Eddie Sutton. - The game of Rollerball was so realistic the cast, extras, and stunt personnel played it between takes on the set. - There was only one “Rollerball” rink. It was redressed to appear as different cities. - During the Tokyo-Houston game, the Tokyo fans are chanting “Ganbare Tokyo!”, which translates into “Let’s Go Tokyo!” - Contrary to rumors, no one died during the filming of any of the stunts. - I wish where I worked had a Corporate Hymn. - The rules to rollerball: kill the other team. - Given the above point, I especially enjoyed the little team strategy sessions. What are they discussing? “Hey, lets run this guy over first…” Jonathan E.: Ears. Now, they’re important, too. If you liked this movie, try these: - Death Race 2000 - Any dystopian future film, from Blade Runner to Equilibrium - Or, if you’re feeling reckless, Rollerblade (2002), but don’t say I didn’t warn you.
Soldier of Love and Justice – Pretty Guardian Sailor Moon
Sorry Spiller Guys, this post is about feminism, but I hope you will read it also. Feminism is about empowering women, equality and social justice . . . so why is it so unpopular with young women and girls ? ? ? Most girls I know either deny they are feminists or are embarrassed to admit it. Why on earth should this be so ? ? ? What is wrong with feminism ? ? ? A new series of Sailor Moon has started in Japan and it has caused a lot of debate about the nature and role of this manga ( and later anime ) series. People seem divided equally between those who think that Sailor Moon is an icon of feminist literature and those who think the complete opposite. Continue reading
I am sorry Ms S – I can not see anything at all in your eye . . .
This week we thought we would dedicate to love ! ! ! Summer is such a romantic season ! ! ! So to help you enjoy the romance of summer, the He Said – She Said Spiller Romance Enhancement Special Operations Team have made you two playlists to help you get into the mood for love ! ! ! One playlist is Western tracks and the other one is Japanese tracks. We hope you like them ! ! !
I tried, with my choices, to give a little “selection box” showing some of the different “flavours” of love ( marzipan ! Ugh ! ). Love is such a difficult thing to pin down; there are all sorts of different kinds. Love of a lover, love of a child or parent, love of beer or money… mostly beer, actually, now I come to think of it… anyway… here are some songs about it. Continue reading
These girls are not just a girl and a guitar Ms S ! ! !
Mr P ! ! ! They are not just anything ! ! ! Now is a really great time in Japan for singer-songwriters. They are working in a really great variety of styles and genres, and in this post we want to introduce you to some of the girls who are making great music and writing fantastic songs. We think you will enjoy them – they are all much more than just a girl and a guitar ! ! !
I think that we in the West sometimes get the impression that Oriental women are somewhat downtrodden and subservient. Whilst there may be an element of truth there, I think things are changing in the East and women are becoming more prominent in the world of business, media, teaching, etc. Especially notable is the number of Japanese music artistes who are not only performing, as they did in the past, but also writing their own material. Here are a few of our favourites. We hope you enjoy them.
But Mr P ! ! ! I told you the Takushii Driba did not speak Engrish ! ! !
♪ ♫ ♪ Engrish is so lomantic ! ! ! Japanese borrows words from everywhere ! ! ! We have stolen words from Chinese, Portuguese and of course English and made them part of our language. We also learn English at school, so of course we can not resist showing off our English skills in our songs. Putting some English into a song can make it sound cool to Asian people, but singing in a second language can be dangerous ! ! ! These tracks all feature English and we really love them. We hope you do too ! ! !
Speak Engrish punk!!! Herro Evelyblody ! I found this topic a little traumatic. I’d hate anyone to think that I was being anything other than totally affectionate and supportive towards the musicians here who bravely, nay, courageously, take English words and mangle them into submission in the cause of being totally hip and cool and stuff. I suppose it’s no surprise, really; “rock” is pretty much an American/English invention.
Anyway, here are some tunes we love and hope you do too. Doomu Aligatto for listening
NIGHT THOUGHTS
By Fred Johnston
Recently I spent some time in hospital. Ironically, it was the same Galway hospital in which the Western Writers' Centre had initiated the very first hospitals’ writer-in-residency, with poet Nuala ní Chonchúir, some years ago – an initiative quickly imitated. Here I was, then, in residence in a somewhat different manner. Now and again I would encounter a framed poem of one kind or other in a corridor or shaded room; but like most people confined to a hospital, I wanted only to get a bill of health clean enough to permit me to go home. I found that I had little interest in books or newspapers, was too nervous to read, and was like a child drifting parentless around a vast and strange supermarket. I had absolutely no use for poetry. On reflection, and in the comfort of my own acre, I began to write, attempting to interpret to myself what I had experienced through poems. I don’t write poems in English any longer, but have been working them in French, with surprising success, for some time now. Just as, in hospital, I found that I’d lost the voice necessary to create anything out of what I was experiencing, I’d some time since felt that I needed a different voice, a different tongue, to describe what I’d experienced in the world of Irish matters. I envied those who could write in Irish. Events had accumulated sufficiently to block out my ability – in my belief – to engage adequately in the world of Irish poetry through English; I had to go outside of English to see anew the things I wanted to write about. The framed poems on the hospital walls reminded me, if they had any effect on me whatever, merely of a world that belonged to someone else; a freer creature, not bound, as I was, by regulations and borders, an individual for whom innumerable things were still possible. Now everything I might dream or hope for was, to a great extent, hemmed in by the decisions and conclusions of others. Hospitals are benevolent dictatorships. I was reminded of Alphonse Daudet’s account of his time under medical care, In the Land of Pain (edited by Julian Barnes), in which Barnes, in a harrowing introduction that describes the novelist Turgenev being conscious throughout a surgical procedure, asks: ‘How is it best to write about illness…?’ Daudet, suffering from syphilis, advised his fellow patients that ‘illness should be treated like an unwanted guest … Daily life should continue as normally as possible.’ Daudet responded to his illness and its accompanying pain by writing about it. That indicated to me a settlement of mind of which I was not – and, as any kind of patient, am not – always capable. Many Irish poets have written about illness, hospitals, the deaths of relatives. Brendan Kennelly comes to mind. One thinks too of John Berger’s novel, The Foot of Clive. There is Solzhenitsyn’s Cancer Ward. Clearly writing about illness – that is, making proper literature out of it – even has political uses, for it shows illness as a leveller, where social class, rank and status are irrelevant. Illness can, one might suggest, be political. But as I lay in bed waiting, desperate for visitors, I could not create one single poetic phrase nor think upon or plan one decent sentence. I had no use for poetry and suspected that poetry had little use for me. I was not settled and had no creativity in me, only needs, and childish ones. The medical and nursing care was excellent: my condition of creative atrophy had nothing to do with that.
I began, later, to understand that when ill the mind turns inwards; you’re wounded in your lair, and the brightly-coloured cave-paintings of yesterday no longer speak. As the sharp needle inserts itself into the tightening muscle, there’s no relief in running a line of poetry through your mind. So what use is poetry these days? None, I can safely say. At least not for me. Language, when it did emerge, came out of me in tight-throated squeaks, plaintive and urgent: ‘Let me go!’ Not, ‘Let me create!’ No, poetry belonged to those others who dwelt out there, over the cropped lawns in the bustling city. Having no tranquillity, I could recollect nothing of poetry and had no need of it. That came as a great surprise to me. A sobering one. I had no need of poetry here, its occasional mischievous tugging at the mind’s sleeve; here, poetry simply wasn’t important. I offer this, mind, as a personal observation, and do not doubt that for many people in all kinds of situations poetry is a solace and a friend. But looking at an elderly man with an oxygen mask over his face, the only conclusion you can reach is that reciting a verse of poetry to him is neither necessary nor of any use. Poetry has lost its ability to heal, as it has lost its ability to be magical. The industrialisation of poetry over the last quarter of a century has produced all the vices of industry anywhere and of any kind: the plots, the board-room coups, the malign intent, the graven ego. The magicians have been few in number, the healers fewer still. Continually talking to ourselves, we have lost the common language of the world in which the majority of people live; we are in frames, on walls, voiceless. Overdosed on glittering prizes, residencies, competitions and festivals in exotic places, we see the ordinary as banal and beneath us. We circle the wagons to protect our brothers and sisters from criticism, even, in some cases, from deserved public censure. We have become a church, with all the scripted language of denial and rebuke. But I have learned that none of this counts for anything in the great ordinariness, the great banality, of illness. Write all we like about it afterwards, well and breathing in daylight: but we can never properly recreate that dumbness, that loss of language, once we are able again to speak in tongues ordinary folk do not know. When a nurse tells you, as you lie in your awkward-fitting pyjamas smelling your own sweat, to roll up your sleeve, not all the majesty of Dante can save you.
Face Recognition Using Line Edge Mapping Approach
1Department of CIT Engineering, Botswana Int’l University of Science & Technology, Botswana
2Department of Electrical and Information Engineering, Covenant University, Ota, Nigeria
This research is based on the development of an authentication system using facial features, a very new and as yet uncommon method of authentication. The method is unique in its operation, as it requires no contact between the individual and the authentication device. Contactless systems such as palm and retinal scanners motivated its invention; retinal scanners, for instance, scan the venation of the individual’s retina, which is unique to every human being. The technology employed in this work uses picture frames from video, detects facial features, and matches the face to the corresponding individual’s facial features in a database. Authentication systems are used to identify or verify an individual, as well as to distinguish the individual so identified. This work develops an authentication system that operates with accuracy and speed similar to human identification.
Keywords: biometric, recognition system, authentication system
American Journal of Electrical and Electronic Engineering, 2013, 1(3), 52-59. Received August 03, 2013; Revised November 02, 2013; Accepted November 14, 2013. Copyright © 2014 Science and Education Publishing. All Rights Reserved.
Cite this article: Ibikunle F., Agbetuyi F., and Ukpere G., “Face Recognition Using Line Edge Mapping Approach,” American Journal of Electrical and Electronic Engineering 1, no. 3 (2013): 52-59.
1. Introduction
Advances in technology have taken huge steps in our modern economy toward the authentication, validation and distinction of individuals. Unique features possessed by individuals include fingerprints, signatures, retinal patterns and facial features. The technology employed here uses picture frames from video, detects facial features, and matches the face to the corresponding individual’s facial features in the database. Authentication systems are used to identify or verify an individual, as well as to distinguish the individual so identified. Most authentication methods are based on biometric information; this method is partly biometric, as it uses the facial features of an individual by picking up the light-absorption properties of the individual’s face. One of the main aspects of face identification is its robustness: a face recognition system would allow a user to be identified by simply walking past a surveillance camera. Robust face recognition schemes require both a low-dimensional feature representation, for data compression purposes, and enhanced discrimination ability for subsequent image retrieval. Representation methods usually start with a dimensionality reduction procedure, since the high dimensionality of the original visual space makes statistical estimation very difficult and time consuming.
Similar to these is the facial-features authentication method, a very new and as yet unpopular method of authentication. The method is unique in its operation, as it requires no contact between the individual and the authentication device. The palm and retinal scanners motivated the invention of this authentication system. Retinal scanners are contactless authentication devices which scan the venation in the retina of the individual, which is of course unique to each human being. Palm authentication, also known as the hand geometry method, uses the size of the palm and the shapes and sizes of the fingers. The face recognition method, however, operates like a human being: it simply sees a face, processes it, and tries to identify the individual, much as the brain does. Face recognition takes images of people and returns the possible identity of each person. Face recognition systems are intended for use as security systems, to find people in a crowd or to deny a particular person access to a sensitive area. Face authentication typically has users position themselves in front of a camera, enter their username, and have the camera take an image of them. The image is compared to other images of the person, and based on this comparison the user is either granted access or denied. This paper is organized as follows. Section I gives the background, an introduction offering insight and helpful hints on the subject matter. Section II gives a detailed review of the technical and academic literature on previous facial recognition methods and approaches. Section III covers the system design and implementation. Section IV gives the simulation results and analysis. Section V carries out the performance evaluation of the proposed system. Section VI ends with the conclusion and recommendations for other possible investigations and improvements that could be made to the work in future.
2. Literature Review
Automated face recognition is a relatively new concept. Developed in the 1960s, the first semi-automated system for face recognition required the administrator to locate features (such as eyes, ears, nose, and mouth) on the photographs before it calculated distances and ratios to a common reference point, which were then compared to reference data. In the 1970s, Goldstein, Harmon, and Lesk used 21 specific subjective markers, such as hair color and lip thickness, to automate the recognition. The problem with both of these early solutions was that the measurements and locations were manually computed. In 1988, Kirby and Sirovich applied principal component analysis, a standard linear algebra technique, to the face recognition problem [1, 2]. This was considered somewhat of a milestone, as it showed that fewer than one hundred values were required to accurately code a suitably aligned and normalized face image. In 1991, Turk and Pentland discovered that, when using the eigenfaces technique, the residual error could be used to detect faces in images – a discovery that enabled reliable real-time automated face recognition systems. Although the approach was somewhat constrained by environmental factors, it nonetheless created significant interest in furthering the development of automated face recognition technologies. The technology first captured the public’s attention through the media reaction to a trial implementation at the January 2001 Super Bowl, which captured surveillance images and compared them to a database of digital mugshots.
This demonstration initiated much-needed analysis of how to use the technology to support national needs. The following are the methods and approaches previously used in facial recognition; these approaches produced results reflecting the years in which they were developed and deployed.
2.1. Eigenfaces
This is one of the most thoroughly investigated approaches to face recognition. It is also known as the Karhunen-Loève expansion, eigen picture, eigenvector, or principal component approach. The authors of [1, 2] used principal component analysis to efficiently represent pictures of faces. They argued that any face image could be approximately reconstructed from a small collection of weights for each face and a standard face picture (eigen picture). The weights describing each face are obtained by projecting the face image onto the eigen picture. In mathematical terms, eigenfaces are the principal components of the distribution of faces, or the eigenvectors of the covariance matrix of the set of face images. The eigenvectors are ordered, each representing a different amount of the variation among the faces. Each face can be represented exactly by a linear combination of the eigenfaces. It can also be approximated using only the “best” eigenvectors – those with the largest eigenvalues. The best M eigenfaces construct an M-dimensional space, the “face space”. The authors reported 96 percent, 85 percent, and 64 percent correct classification averaged over lighting, orientation, and size variations, respectively. Their database contained 2,500 images of 16 individuals. As the images included a large quantity of background area, the above results are influenced by background. The authors explained the robust performance of the system under different lighting conditions by the significant correlation between images with changes in illumination.
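As a concrete illustration of the projection step, a minimal C# sketch might look like the following (the method name and array layout are assumptions for illustration, not taken from the cited papers):

using System;

static class EigenfaceDemo
{
    // Project a face image onto M eigenfaces to obtain its weight vector in
    // "face space". Recognition then compares weight vectors, e.g. by
    // Euclidean distance to each stored individual's weights.
    public static double[] ProjectOntoEigenfaces(
        double[] face,          // flattened grayscale image, length N
        double[] meanFace,      // average of the training faces, length N
        double[][] eigenfaces)  // M eigenvectors of the covariance matrix, each length N
    {
        int m = eigenfaces.Length;
        var weights = new double[m];
        for (int k = 0; k < m; k++)
        {
            double w = 0;
            for (int i = 0; i < face.Length; i++)
                w += eigenfaces[k][i] * (face[i] - meanFace[i]); // dot with the mean-subtracted face
            weights[k] = w;
        }
        return weights;
    }
}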
2.2. Neural Networks
The attractiveness of using neural networks stems from the non-linearity of the network; the feature extraction step may therefore be more efficient than the linear Karhunen-Loève methods. One of the first artificial neural network (ANN) techniques used for face recognition was a single-layer adaptive network called WISARD, which contains a separate network for each stored individual. The way a neural network structure is constructed is crucial for successful recognition and depends very much on the intended application. For face detection, multilayer perceptrons and convolutional neural networks have been applied. Reference [3] proposed a hybrid neural network which combines local image sampling, a self-organizing map (SOM) neural network, and a convolutional neural network. The SOM provides a quantization of the image samples into a topological space where inputs that are nearby in the original space are also nearby in the output space, thereby providing dimension reduction and invariance to minor changes in the image sample. The convolutional network extracts successively larger features in a hierarchical set of layers and provides partial invariance to translation, rotation, scale, and deformation. The authors reported 96.2% correct recognition on the ORL database of 400 images of 40 individuals. The classification time is less than 0.5 seconds, but the training time is as long as 4 hours. The research work in [4] used the Probabilistic Decision-Based Neural Network (PDBNN), which inherited its modular structure from its predecessor, the Decision-Based Neural Network (DBNN). The PDBNN can be applied effectively to find the location of a human face in a cluttered image (face detector), to determine the positions of both eyes in order to generate meaningful feature vectors (eye localizer), and to recognise faces (face recognizer). The PDBNN does not have a fully connected network topology. Instead, it divides the network into K subsets, each dedicated to recognizing one person in the database. The PDBNN uses the Gaussian activation function for its neurons, and the output of each “face subset” is the weighted summation of the neuron outputs. In other words, the face subset estimates the likelihood density using the popular mixture-of-Gaussians model. The learning scheme of the PDBNN consists of two phases: in the first phase, each subset is trained on its own face images; in the second phase, called decision-based learning, the subset parameters may be trained on particular samples from other face classes. A PDBNN-based biometric identification system has the merits of both neural networks and statistical approaches, and its distributed computing principle is relatively easy to implement on a parallel computer. It was reported that the PDBNN face recognizer had the capability of recognizing up to 200 people and could achieve up to a 96% correct recognition rate in approximately 1 second. However, as the number of persons increases, the computing expense becomes more demanding. In general, neural network approaches encounter problems when the number of classes (i.e., individuals) increases. Moreover, they are not suitable for single-model-image recognition tests, because multiple model images per person are necessary to train the systems to an “optimal” parameter setting. A long training time (about 4 hours) is also required for the multilayer perceptron convolutional neural network.
2.3. Graph Matching
Graph matching is another approach to face recognition. In [5], a dynamic link structure for distortion-invariant object recognition, which employs elastic graph matching to find the closest stored graph, is presented. Dynamic link architecture is an extension of classical artificial neural networks. Memorized objects are represented by sparse graphs, whose vertices are labeled with a multi-resolution description in terms of a local power spectrum and whose edges are labeled with geometrical distance vectors. Object recognition can be formulated as elastic graph matching, which is performed by stochastic optimization of a matching cost function. They reported good results on a database of 87 people and a small set of office items comprising different expressions with a rotation of 15 degrees. The matching process is computationally expensive, taking about 25 seconds to compare against 87 stored objects on a parallel machine with 23 transputers. The technique was extended in [6] and matched human faces against a gallery of 112 neutral frontal-view faces. Probe images were distorted due to rotation in depth and changing facial expression. Encouraging results on faces with large rotation angles were obtained: they reported recognition rates of 86.5% and 66.4% for the matching tests of 111 faces at 15-degree rotation and 110 faces at 30-degree rotation against a gallery of 112 neutral frontal views. In general, dynamic link architecture is superior to other face recognition techniques in terms of rotation invariance; however, the matching process is computationally expensive.
2.4. Geometrical Feature Matching
Geometrical feature matching techniques are based on the computation of a set of geometrical features from the picture of a face. The fact that face recognition is possible even at resolutions as coarse as 8x6 pixels, when the individual facial features are hardly revealed in detail, implies that the overall geometrical configuration of the face features is sufficient for recognition. The overall configuration can be described by a vector representing the position and size of the main facial features, such as the eyes and eyebrows, nose, mouth, and the shape of the face outline. One of the pioneering works on automated face recognition using geometrical features was presented in [7]. That system achieved a peak performance of a 75% recognition rate on a database of 20 people using two images per person, one as the model and the other as the test image. The work in [8] automatically extracted a set of geometrical features from the picture of a face, such as nose width and length, mouth position, and chin shape; 35 features were extracted to form a 35-dimensional vector. The recognition was then performed with a Bayes classifier, and they reported a recognition rate of 90% on a database of 47 people. The matching process utilized the information presented in a topological graph representation of the feature points. After compensating for different centroid locations, two cost values, the topological cost and the similarity cost, were evaluated. The recognition accuracy in terms of the best match to the right person was 86%, and 94% of the correct persons’ faces were in the top three candidate matches. In summary, geometrical feature matching based on precisely measured distances between features may be most useful for finding possible matches in a large database such as a mug-shot album. However, it is dependent on the accuracy of the feature location algorithms; current automated face feature location algorithms do not provide a high degree of accuracy and require considerable computational time.
2.5. Morphable Face Model
The morphable face model is based on a vector space representation of faces, constructed such that any convex combination of the shape and texture vectors of a set of examples describes a realistic human face. Fitting the 3D morphable model to images can be used in two ways for recognition across different viewing conditions. Paradigm 1: after fitting the model, recognition can be based on the model coefficients, which represent the intrinsic shape and texture of faces and are independent of the imaging conditions. Paradigm 2: three-dimensional face reconstruction can be employed to generate synthetic views from gallery or probe images; the synthetic views are then transferred to a second, viewpoint-dependent recognition system. More recently, the work in [9] combines deformable 3D models with a computer graphics simulation of projection and illumination: given a single image of a person, the algorithm automatically estimates 3D shape, texture, and all relevant 3D scene parameters. In this framework, rotations in depth or changes of illumination are very simple operations, and all poses and illuminations are covered by a single model. Illumination is not restricted to Lambertian reflection, but takes into account specular reflections and cast shadows, which have considerable influence on the appearance of human skin. This approach is based on a morphable model of 3D faces that captures the class-specific properties of faces.
These properties are learned automatically from a data set of 3D scans. The morphable model represents the shapes and textures of faces as vectors in a high-dimensional face space, and involves a probability density function of natural faces within that face space. The algorithm presented in [9] estimates all 3D scene parameters automatically, including head position and orientation, the focal length of the camera, and the illumination direction. This is achieved by a new initialization procedure that also increases the robustness and reliability of the system considerably. The new initialization uses the image coordinates of between six and eight feature points. The percentage of correct identification on the CMU-PIE database, based on a side-view gallery, was 95%, and the corresponding percentage on the FERET set, based on frontal-view gallery images along with the estimated head poses obtained from fitting, was 95.9%.
2.6. Line Edge Map (LEM)
Edge information is a useful object-representation feature that is insensitive to illumination changes to a certain extent. Though the edge map is widely used in various pattern recognition fields, it had been neglected in face recognition except in the recent work reported in [10]. Edge images of objects can be used for object recognition and achieve similar accuracy to gray-level pictures. The above-mentioned report made use of edge maps to measure the similarity of face images, and 92% accuracy was achieved. Takács argued that the process of face recognition might start at a much earlier stage, and that edge images can be used for the recognition of faces without the involvement of high-level cognitive functions. The Line Edge Map approach proposed in [11] extracts lines from a face edge map as features. This approach can be considered a combination of template matching and geometrical feature matching. The LEM approach not only possesses the advantages of feature-based approaches, such as invariance to illumination and a low memory requirement, but also has the high recognition performance of template matching. The Line Edge Map integrates the structural information with the spatial information of a face image by grouping pixels of the face edge map into line segments. After thinning the edge map, a polygonal line fitting process is applied to generate the LEM of a face. An example of a human frontal face LEM is illustrated in Figure 1. The LEM representation reduces the storage requirement, since it records only the end points of line segments on curves. LEM is also expected to be less sensitive to illumination changes, because it is an intermediate-level image representation derived from the low-level edge map representation. The basic unit of an LEM is the line segment grouped from pixels of the edge map. A face pre-filtering algorithm is proposed that can be used as a pre-process to LEM matching in face identification applications. The pre-filtering operation can speed up the search by reducing the number of candidates, so that the actual face LEM matching is only carried out on a subset of the remaining models. The only limitation that existed in this approach has been countered by the specifications of modern computer systems and is no longer regarded as a limitation in any sense: earlier systems had problems with storage space, which made the size of each individual’s face template (16 kilobytes) bulky for the computers of the day, and the parallel multi-threaded processor operations of the application also posed a challenge to older machines.
3. System Design and Development
The facial recognition approach used in developing this application is based on the Line Edge Mapping method.
3.1. Line Edge Mapping
Line edge mapping works with the outline of the facial features, maps out the important points as vector lines, and saves the template. The line edge map has an advantage over the other face recognition methods considered here because it identifies the most facial features and, as a result, achieves higher accuracy, as reported in [11]. An LEM consists of a series of line segments; it records only the endpoints of the lines, which further reduces its storage requirements. LEM matches two different images using the Line Segment Hausdorff Distance (LHD), which calculates the distance between lines using the angular projection, parallelism, and perpendicularity of the two lines to be matched, and checks whether they meet the threshold for similarity. The LHD is calculated as follows. Let one LEM be an array of line segments AL = [a1L, a2L, a3L, ..., apL] and another be BL = [b1L, b2L, b3L, ..., bqL]. The distance between a pair of segments (aiL, bjL) is represented by the vector
d(aiL, bjL) = ( dθ(aiL, bjL), d∥(aiL, bjL), d⊥(aiL, bjL) )
whose components are:
• Angular line matching with tolerance: dθ(aiL, bjL) = f(θ(aiL, bjL)). This matches two lines that lie at a slight angle to each other, with a tolerance marking the threshold of similarity. θ(aiL, bjL) represents the smallest intersecting angle between lines aiL and bjL, and f is a penalty function that ignores the smaller angles and penalizes the greater ones; the weight W it uses is determined during training.
• Parallel line matching: d∥(aiL, bjL) = min(l∥1, l∥2). This compares the parallelism of the two lines; l∥1 and l∥2 are the two parallel displacements between the ends of the lines, and the min function takes the minimum distance between the edges of the lines.
• Perpendicular line matching: d⊥(aiL, bjL) = l⊥, where l⊥ is the perpendicular distance between the two lines.
These quantities are illustrated in the figure below. The overall distance between the two segments can then be calculated as
d(aiL, bjL) = sqrt( dθ(aiL, bjL)² + d∥(aiL, bjL)² + d⊥(aiL, bjL)² ).
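To make the segment-to-segment distance concrete, here is a minimal C# sketch of the three components and their combination. It is an illustrative reading of the definitions above, not the paper's code: the displacements are measured by rotating one segment's endpoints into the other's frame (a simplification), and the trained weight W is replaced by a placeholder of 1.0.

using System;

struct Segment
{
    public double X1, Y1, X2, Y2;
    public double Length => Math.Sqrt((X2 - X1) * (X2 - X1) + (Y2 - Y1) * (Y2 - Y1));
    public double Angle => Math.Atan2(Y2 - Y1, X2 - X1);
}

static class Lhd
{
    // Smallest intersecting angle between the two lines, in [0, pi/2].
    static double Theta(Segment a, Segment b)
    {
        double d = Math.Abs(a.Angle - b.Angle) % Math.PI;
        return Math.Min(d, Math.PI - d);
    }

    // d_theta: penalty function f applied to the angle. W is trained in the
    // original method; 1.0 here is only a placeholder.
    static double DTheta(Segment a, Segment b, double w = 1.0) => w * Theta(a, b);

    // d_parallel = min(l_par1, l_par2) and d_perpendicular = l_perp, measured
    // after rotating b's endpoints into a's frame so a runs from x = 0 to x = a.Length.
    static (double dPar, double dPerp) Displacements(Segment a, Segment b)
    {
        double c = Math.Cos(-a.Angle), s = Math.Sin(-a.Angle);
        double p1x = c * (b.X1 - a.X1) - s * (b.Y1 - a.Y1);
        double p1y = s * (b.X1 - a.X1) + c * (b.Y1 - a.Y1);
        double p2x = c * (b.X2 - a.X1) - s * (b.Y2 - a.Y1);
        double p2y = s * (b.X2 - a.X1) + c * (b.Y2 - a.Y1);
        double left = Math.Min(p1x, p2x), right = Math.Max(p1x, p2x);
        double lPar1 = Math.Abs(left);             // displacement between the two near ends
        double lPar2 = Math.Abs(right - a.Length); // displacement between the two far ends
        double lPerp = Math.Abs(p1y + p2y) / 2;    // perpendicular distance between midlines
        return (Math.Min(lPar1, lPar2), lPerp);
    }

    // d(ai, bj) = sqrt(d_theta^2 + d_par^2 + d_perp^2)
    public static double Distance(Segment a, Segment b)
    {
        var (dPar, dPerp) = Displacements(a, b);
        double dTheta = DTheta(a, b);
        return Math.Sqrt(dTheta * dTheta + dPar * dPar + dPerp * dPerp);
    }
}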
The application is developed using C#.NET, a Microsoft programming language built on the Microsoft .NET Framework; it is a high-level, fully object-oriented language. The following packages are required for development:
• Microsoft C#.NET Compiler: the source code compiler for C#.NET, integrated into the VS 2010 IDE to make building, compilation, debugging and publishing faster.
• Visual Studio 2010 IDE (Integrated Development Environment): application software that compiles, debugs and builds .NET-related programming languages in a single package.
• FaceSDK.NET DLL (Face Software Development Kit Dynamic Link Library): a DLL comprising the functions, delegates, classes and objects that implement the Line Edge Map calculations and template extraction.
• SQLite3.NET DLL (Structured Query Language Lite 3 Dynamic Link Library): a DLL that handles the creation, execution and data reading of SQL queries against a local database.
3.3.1. Camera Image Streaming
Camera images are captured and rendered into the PictureBox .NET control. Each image frame from the camera is captured in a loop, which sets the PictureBox’s image to the camera image. This loop continues as long as the program is running, giving the PictureBox the feel of a video stream.
3.3.2. Database Connection String
An SQLite database is used to store the users’ information. The connection parameters are used to create a new instance of an SQLite connection to DB.db (a local database file in the application root directory).
3.3.3. Face Template Extraction
The image from the camera is processed in order to identify the presence of a face and dynamically create a template, which can be stored or used for various purposes (recognition/authentication). The face template consists of a byte array into which the template parameters are stored, with a length of 16384 bytes, equivalent to 16 KB (kilobytes).
3.3.4. Face Template Storage
The information carried by the face template contains raw byte data, precisely 16 KB in length. Each face template of an individual to be added to the system is 16 KB, and 10 face templates are taken in order to increase the flexibility of recognition. This information is saved as a BLOB data type in the database, against the individual’s name, email and database primary key ID, creating a new user record in the database.
3.3.5. Face Template Data Declaration
When the program loads, and whenever a new user is added, a publicly, statically declared variable (an object array) is made to carry the data from the database. The object array contains five arrays: the names string array, the email string array, the dateUTC long array, the userid integer array and the face byte array. Each of these arrays has the same element count, and elements are matched by a common array index, i.e. names[i] gives a user’s name, email[i] the corresponding email address, face[i] his/her saved face template, dateUTC[i] the date of registration, and userid[i] the database primary key ID for the user (i is an integer array index).
3.3.6. Facial Recognition and Authentication
After the variables have been populated, the program listens for the presence of a face in each of the image frames captured from the camera in real time. When it detects a face, it extracts the facial template and tries to match it with any of the templates in the array of face templates it received from the database. The face template is actively extracted from the camera image and then matched against the existing templates held in the variables, with a False Acceptance Rate (FAR) set to 0.5% (0.005), which performs very strict matching.
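Putting Sections 3.3.3-3.3.6 together, a minimal sketch of the register-and-recognize flow might look like the code below. ExtractTemplate and MatchTemplates are hypothetical stand-ins for the FaceSDK.NET calls (whose real names and signatures may differ), and the users table schema is assumed for illustration; only the SQLite usage follows the System.Data.SQLite API directly.

using System;
using System.Collections.Generic;
using System.Data.SQLite;
using System.Drawing;

static class FaceStore
{
    const double Far = 0.005;   // 0.5% False Acceptance Rate: very strict matching

    // Hypothetical wrappers standing in for the FaceSDK.NET calls.
    static byte[] ExtractTemplate(Bitmap frame) => throw new NotImplementedException();
    static bool MatchTemplates(byte[] a, byte[] b, double far) => throw new NotImplementedException();

    // Save one 16 KB face template as a BLOB against the user's details.
    public static void Register(SQLiteConnection db, string name, string email, byte[] template)
    {
        using (var cmd = new SQLiteCommand(
            "INSERT INTO users (name, email, dateUTC, face) VALUES (@n, @e, @d, @f)", db))
        {
            cmd.Parameters.AddWithValue("@n", name);
            cmd.Parameters.AddWithValue("@e", email);
            cmd.Parameters.AddWithValue("@d", DateTimeOffset.UtcNow.ToUnixTimeSeconds());
            cmd.Parameters.AddWithValue("@f", template);   // byte[] is stored as a BLOB
            cmd.ExecuteNonQuery();
        }
    }

    // Try to match the current camera frame against every stored template.
    public static string Recognize(Bitmap frame, List<(string Name, byte[] Face)> users)
    {
        byte[] probe = ExtractTemplate(frame);
        if (probe == null) return null;                    // no face in this frame
        foreach (var u in users)
            if (MatchTemplates(probe, u.Face, Far))
                return u.Name;                             // authenticated
        return null;                                       // unknown face
    }
}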
4. Simulation Results
The face detection system is integrated into the programmed software application. The major processes that make up the face detection system are as follows:
• Camera Selection: The application generates a list of all the USB (Universal Serial Bus) cameras connected to the computer system, and gives the option of which camera to use for the operation.
• Real-Time Image Capturing (Virtual Video Stream): The application virtualizes a video stream by continuously capturing picture frames from the camera and displaying them in a picture box after processing.
• Image Processing: Before the image is displayed in the picture box, it is processed by the software, which appends a rectangular skeleton encapsulating any detected face, together with the individual’s information underneath the rectangle if the individual has already been registered.
• Face Angular and Arbitrary Rotations: Enabling arbitrary-rotation processing extends the default in-plane face rotation angle from -15..15 degrees to -30..30 degrees, while enabling angular rotations allows the software to detect a face even during in-plane rotation. Enabling either of these two parameters will reduce the application’s performance and increase the CPU usage of the computer system.
Figure 3 shows the application main window; it is the first window form displayed when the application is run. Figure 4 shows the settings panel, where parameters can be modified to suit the application’s performance. This covers single-face detection with respect to the preset internal resize width. The internal resize width is the resolution the application uses to process the image from the camera: the lower it is, the higher the application’s performance, while a higher value increases precision.
• Threshold Value: controls the minimum probability above which a face is taken to exist in the current image captured from the camera.
After a face has been detected in the image frame from the camera, the application extracts the facial features from the image, converts them into a template in the format discussed earlier, and loops through the existing templates in the database trying to find a match. The index used to set the threshold for similarity between two templates (at each step in the loop) is called the False Rejection Rate (FRR) or False Acceptance Rate (FAR), respectively [12]. The FRR is approximately inversely proportional to the FAR, but may tend to be exponential at extreme values. The FAR is the acceptable error rate up to which two faces are allowed to have similar features; being an error rate, it should be kept as low as possible to increase accuracy, while taking the rate of recognition into consideration. The FRR, being a measure of reliability, should be kept as high as possible to increase matching accuracy, again taking the rate of recognition into consideration. The FAR and FRR can be used alternatively, without any setback or uncertainty, depending on the preferred representation. Table 3 and Table 4 show the tests carried out by varying the FAR and FRR values of the application.
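The internal resize width setting above amounts to downscaling each frame before detection. A minimal sketch of that step (assumed for illustration, not taken from the application's source) is:

using System.Drawing;

static class FramePrep
{
    // Downscale a frame to the configured internal resize width, keeping the
    // aspect ratio. A smaller width is faster to process but less precise.
    public static Bitmap ResizeForDetection(Bitmap frame, int internalResizeWidth)
    {
        if (frame.Width <= internalResizeWidth) return frame;
        int h = frame.Height * internalResizeWidth / frame.Width;
        return new Bitmap(frame, new Size(internalResizeWidth, h));
    }
}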
5. Performance Evaluation
The performance evaluation of the system is carried out with a few parameters held constant and others varied. The constant parameters in this context are illumination and face posture, while the varying parameters are the internal resize width of the image processing engine and the False Acceptance Rate (the maximum error rate) in face template matching.
5.1. Face Identification Probability
When testing for the optimum face identification probability by varying the internal resize width, a higher internal resize width gives a higher identification probability. The relationship grows roughly exponentially until the resize width reaches about 300 pixels, which can be seen to produce the optimum result for the face identification parameter. Although increasing the resize width increases the probability of identification, it also has an adverse effect on the performance of the system, creating unnecessary time lags in image processing. Figure 5 shows the relationship between face identification probability and internal resize width. The False Acceptance Rate (FAR) is the error value (in %) up to which two different face templates can be said to match. The FAR and FRR are inversely proportional to each other and are used interchangeably in the design of the system: when working with the FAR, a low value improves the matching accuracy, whereas when working with the FRR, a high value improves the matching accuracy. In this scenario, the FAR is used to derive the corresponding matching accuracy by varying the FAR value as a percentage. Figure 6 shows the relationship between the FAR and the matching accuracy. Reducing the FAR to obtain better matching accuracy also reduces the system performance with respect to the speed of face recognition, creating a time lag in image processing. Optimum values of both the FAR and the internal resize width can be chosen based on the specification of the system the application runs on.
6. Conclusion
The major aim of this work is to design and construct a facial authentication system which can be used for the secondary or primary authentication of individuals with a faster, more modern and more efficient authentication technique. Future work could further improve the efficiency, the lighting-condition limitation, and the installation requirements; with these additional improvements, the standard could be raised for future facial authentication systems. The overall efficiency of this authentication system is approximately 85%, which could be improved by developing a more complex algorithm or by increasing the facial template features without adversely affecting the speed of operation. The system could also be integrated with infrared cameras to increase its efficiency in poor illumination. Using the application on more compact, handy devices such as mobile phones would increase the diversity of its usage and could replace common mobile user access methods such as fingerprint scanning and manual PIN input.
References
[1] L. Sirovich and M. Kirby, “Low-dimensional procedure for the characterization of human faces,” Journal of the Optical Society of America A, vol. 4, pp. 519-524, 1987.
[2] M. Kirby and L. Sirovich, “Application of the Karhunen-Loève procedure for the characterisation of human faces,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 12, pp. 831-835, Dec. 1990.
[3] S. Lawrence, C.L. Giles, A.C. Tsoi, and A.D. Back, “Face recognition: A convolutional neural-network approach,” IEEE Trans. Neural Networks, vol. 8, pp. 98-113, 1997.
[4] S.H. Lin, S.Y. Kung, and L.J. Lin, “Face recognition/detection by probabilistic decision-based neural network,” IEEE Trans. Neural Networks, vol. 8, pp. 114-132, 1997.
[5] M. Lades, J.C. Vorbruggen, J. Buhmann, J. Lange, and M. Konen, “Distortion invariant object recognition in the dynamic link architecture,” IEEE Trans. Computers, vol. 42, pp. 300-311, 1993.
[6] L. Wiskott and C. von der Malsburg, “Recognizing faces by dynamic link matching,” Neuroimage, vol. 4, pp. 514-518, 1996.
[7] T. Kanade, “Picture processing by computer complex and recognition of human faces,” technical report, Dept. of Information Science, Kyoto Univ., 1973.
[8] R. Brunelli and T. Poggio, “Face recognition: features versus templates,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 15, pp. 1042-1052, 1993.
[9] V. Blanz and T. Vetter, “Face recognition based on fitting a 3D morphable model,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 25, no. 9, September 2003.
[10] B. Takács, “Comparing face images using the modified Hausdorff distance,” Pattern Recognition, vol. 31, pp. 1873-1881, 1998.
[11] Y. Gao and K.H. Leung, “Face recognition using line edge map,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 24, no. 6, June 2002.
[12] Luxand FaceSDK, “Face Detection and Recognition Library,” Developer’s Guide, 2011.
Half-breeds live on the edge of both races. You feel like you're split down the middle. Your right arm wants to unbutton your shirt while your left arm is trying to keep your shirt on. You're torn between wanting to kill everyone in the room, or buying 'em all a round of drinks. Our erratic behavior is often explained away by friends and family as "trying to be." If you're around Indians, you're trying to be white. If you're around white friends, you're trying to be Indian. Sometimes I feel like the blood in my veins is a deadly mixture of Rh positive and Rh negative and every cell in my body is on a slow nuclear melt-down.— An American in New York
LeAnne Howe, an enrolled member of the Choctaw Nation of Oklahoma, was born on April 29, 1951. She was raised in Oklahoma City and educated in Oklahoma as well. In addition to being an American Indian author, she is a scholar, and she has read her fiction and lectured throughout the United States, Japan, and the Middle East. As an American Indian scholar, she has presented programs on the recruitment and retention of American Indian students at higher education institutions. She is currently teaching at colleges and universities around the country, and she is finishing a novel.
Howe has led an extraordinary professional and academic career. From 1977 to 1989, Howe worked as a newspaper journalist in the Dallas/Fort Worth Metroplex. Beginning in 1984, she also worked for four years on Wall Street in institutional sales, selling and trading government bonds, pursuing the two professions at the same time: during the day she sold bonds, and at night she wrote for the Dallas Morning News. Over the course of the next eight years, Howe's career shifted toward the academic world, and she began teaching, lecturing, and developing courses in Native American studies at the University of Iowa and Carleton College in Northfield, Minnesota.
Her numerous publications range from short fiction anthologies to literary journals, and her work has included theater, film, and radio. In a span of ten years, Howe was involved in five theater productions and one radio production, "Indian Radio Days," in 1993. Howe wrote and directed this production, which was broadcast on American Public Radio stations throughout the Midwest and uplinked via satellite to Alaska Public Radio stations on Columbus Day. Her work has been anthologized in several collections of short fiction. "Moccasins Don't Have High Heels" appears in both Native American Literature, edited by Gerald Vizenor, and American Indian Literature, edited by Alan Velie. "Danse de L'amour, Danse de Mort" appears in Earth Song, Sky Spirit: An Anthology of Native American Writers, edited by Clifford Trafzer. Another short fiction piece titled "Indians Never Say Goodbye" appears in Reinventing the Enemy's Language, edited by Joy Harjo. Howe's 2001 book, Shell Shaker, received the American Book Award for 2002 from the Before Columbus Foundation. She is currently on tour promoting her most recent book, Miko Kings: An Indian Baseball Story.
Blog: "On the Prairie Diamond" Howe's blog on her newest book, Miko Kings: An Indian Baseball Story
Aunt Lute Books The website of Howe's publisher, which includes her appearance dates
Native Wiki: LeAnne Howe This page includes biographical and critical information, as well as links to her work online.
Report a dead link or suggest a new one by emailing email@example.com.
This page was researched and submitted by Elizabeth La Ronge on 3/14/97.
Swarthmore College’s Externship Program is an opportunity for a current student to spend five days at the workplace of an alumnus/a in order to gain practical exposure to a career field. Naomi Liang joined us in Special Collections this January:
From January 11 to January 15 I participated in an externship offered by David Conners, Digital Collections Librarian (Swarthmore alum ’03), at Magill Library’s Special Collections. The Swarthmore Extern Program entails five days of job shadowing to allow undergraduates to explore a particular field of interest. My current prospective majors are philosophy, English literature, and sociology/anthropology. Since knowledge accession, reading culture and, generally, the process of research have long been fascinations of mine, I was happy to be able to absorb librarian life during my five days at Magill. I spent much of my time working with David on digital archiving – scanning and photographing photographs used for classes, scanning books, reformatting digital audio, and cataloging art. I sat in on a meeting of TAG, the Tri-College Technology Advisory Group, where librarians worked out the final logistics of the neat-looking new service Tripod Mobile (a mobile-friendly version of the catalog for use on smart phones). During this time I also shadowed Ann Upton, Special Collections librarian and Quaker Bibliographer, who, along with David, guided me around the rare book vault and allowed me to pull random items out of curiosity (including a beautiful 1854 edition of Walden and Christopher Morley’s German literature notes from 1910). Ann also showed me her process of deciding which rare books or Quaker books to add to the collection. We also answered emailed reference questions regarding Quaker genealogies, and I spent a few hours working on the beginnings of a new project in Special Collections – the digitization of 19th-century Quaker fiction illustrations for an exhibit on the popular depiction of Quakers.
Of course, my gathered gemstones of experience at Magill were not all from work. During the coffee breaks and the all-staff meeting I attended ("all staff" at Haverford consisting of only a little over 20 people), I was able to witness the collaborative and truly congenial atmosphere of a library workforce. I was amazed by and very grateful for the welcome I received from everyone, as well as for the stories I heard from people at various stages of the library career – a current student, a recent graduate, and librarians who are well into their careers and love what they do, a number of whom began their post-college lives in jobs completely unrelated to librarianship. I absolutely enjoyed my time at Haverford, and I am looking forward to my next visit to Magill’s Special Collections.
AASL webinar reshapes read-alouds into 'talk-alouds' For Immediate Release CHICAGO – An upcoming webinar from the American Association of School Librarians (AASL) will examine read-aloud strategies that promote the exchange of reader responses between children and educators. Presented by Raquel Cuperman, “Talk-Alouds: A Different Approach to Read-Alouds” will take place at 6 p.m. Central on Wednesday, Sept. 10. For more information and to register, visit ecollab.aasl.org. In “Talk-Alouds,” Cuperman will guide attendees in developing questions that promote reading discussion and demonstrate reader engagement. She will also show how to evaluate reader responses in terms of relevance, comprehension, understanding and personal viewpoint beyond the use of traditional written activities and handouts. Raquel Cuperman is the head librarian at Colegio Los Nogales in Bogotá. She has a master’s degree in Children’s Literature and Books for Children and has written several articles about reading and children’s literature for journals in Colombia and the U.S. Attendance during the live presentation of “Talk-Alouds: A Different Approach to Read-Alouds” is open to all. Only AASL members and AASL eCOLLAB subscribers will have access to the webinar archive. A seat in the webinar is guaranteed to the first 100 attendees. To register, visit “Upcoming Webinars” at ecollab.aasl.org. eCOLLAB | Your eLearning Laboratory: Content Collaboration Community, a repository of AASL professional development, provides members and subscribers with a central location to find and manage their e-learning as well as to connect with others in the learning community. eCOLLAB contains webcasts, podcasts and resources from various AASL professional development events, as well as access to a read-only version of the latest issue of Knowledge Quest. To begin utilizing eCOLLAB or to subscribe, visit ecollab.aasl.org. The American Association of School Librarians, www.aasl.org, a division of the American Library Association (ALA), empowers leaders to transform teaching and learning.
Star Trek Online MMO Maker Cryptic Acquired By Atari
December 9, 2008 by TrekMovie.com Staff | Filed under: Games, Star Trek Online
It was announced this morning that Cryptic Studios, the maker of the new Star Trek Online MMO, has been acquired by French game publisher Infogrames, the parent company of Atari. Earlier this year the license for STO transferred from Perpetual Entertainment to Cryptic Studios after Perpetual ran into trouble; this acquisition, however, looks like it will have little effect on the game itself. According to the official release, STO is part of the deal and will continue to be developed:
The transaction encompasses all of Cryptic’s proprietary IP, tools, technology and work-in-progress and integrates all members of the leadership team and employees into Infogrames. This acquisition is a critical step in the implementation of Atari’s strategy to become a leading online game developer and publisher. Cryptic is currently developing three unique MMO franchises, planned for release over the next three years on PC and next-generation consoles, including Champions Online to be released in 2009 and Star Trek Online to be released in 2010. A third game currently in development will be announced in the near future.
Of the acquisition, Cryptic’s CEO John Needham stated:
We share a common vision with Atari and their leadership team. With our game development and online platform technology skills, we’re very excited about the opportunities that this unique combination with Atari creates. I am committed to helping Atari grow into a leading online game company and look forward to being part of the team.
Based on official statements and other reports in the game industry media, the acquisition appears to include the entire staff currently at work on Star Trek Online, and the game should continue in development; the deal also puts Cryptic in a strong position financially, with greater resources going forward. The official statement does note a release date of 2010 for STO. In past statements, Cryptic had indicated that it was shooting for a late 2009 release; however, it is unclear whether the date slipped before or after the acquisition, so the change may not be related to the Atari takeover.
Meet the STO writer
In other STO news, Cryptic has also published another of their ‘meet the team’ updates, this time featuring writer Christine Thompson:
Q: What do you do on Star Trek Online?
A: I’m the writer for Star Trek Online, which means I handle the story, characters, plotlines, episodes, dialogue, item text – anything that needs words is my area. I’m also one of the researchers, who can dig through piles of material to find the reference images for the artists or help answer questions like “do we need beaks?” I work with the designers on the content – the things we’re making are very cool and very Trek.
Q: How long have you worked in gaming, and what did you do before Star Trek Online?
A: I was a writer and editor in the newspaper industry for about 13 years before coming to Cryptic Studios. I wrote everything from movie reviews to crime stories, and as an editor I did a lot behind the scenes for the production of the daily paper – laying out pages, coordinating coverage for events like the 2000 presidential election, fact-checking, editing and trimming stories and writing thousands of headlines. I joined Cryptic in 2007 to write for the web sites and work with the community team.
After we got the license for Star Trek, I volunteered to help with some of the writing of the storylines and then I joined the STO team full-time. Q: What part of Star Trek Online are you the most excited about working on? A: There is so much about this game that is awesome, but I’m really excited about our episodic mission structure. We’re creating content that’s not “typical” MMO quests – we want to reflect what you’ve seen in the shows, with a variety of locations and actions in each one. I want people to be excited during episodes and eager to find out what happens next, not just wanting to finish quickly so they can get some XP and move on. Q: What is your favorite Star Trek storyline from any of the series or movies? A: If I have to pick just one I’ll say “Yesterday’s Enterprise” from TNG. It was such a fabulous story, and it was one where you knew that everything wasn’t going to turn out OK in the end. There would have to be sacrifices. And then the writers spun it around a few seasons later and brought in a new character who changed everything you thought you knew. Q: Who is your favorite Star Trek character and why? A: I’m a fan of a lot of the female characters in the series – Uhura, Crusher, Kira, Dax, Janeway, Torres. Star Trek has never shied from creating characters that are strong and multi-faceted. Trek was one of the first things I could watch where the women weren’t just plot devices, moms or damsels who needed rescuing, and that’s part of what hooked me on the shows in the first place. Q: What is an interesting fact about you that players would be surprised to know? A: My focus in college was medieval literature. I’d love to use elements from Beowulf in an STO episode. Oh, and I knit my own socks. Q: As a writer, which books would you always have on your bookshelf? A: A good dictionary and thesaurus, at least two grammar books, the collected works of Shakespeare and a copy of On Writing Well by William Zinsser. These days I’ve added the Star Trek Encyclopedia, Star Trek Star Charts and a big pile of Trek novels. I’m big into researching any topic I’m working on. I like to immerse myself in a world. I’m rewatching everything Trek (including the animated series!) and I try to read at least one Trek-related book a week. Q: Would you make a good pirate? A: I think I’ve got the ruthless streak and I know which end of the sword to point at the other pirates, but I’ve got lousy balance. I’d probably fall out of the rigging and break my neck. Q: What advice do you have for someone who wants to break into the gaming industry? A: Be patient, persistent and professional. Creating videogames is a lot of fun, but it’s also a lot of work. So be prepared to put your heart into it. Q: Is there anything you would like to add? A: Ad astra per aspera.
CIV ENGR 919: Seminar-Hydraulic Engineering and Fluid Mechanics - Catalog Description: Current research and review of literature in theoretical and applied fluid mechanics and hydraulic engineering. - Credits: 1 - Prerequisites: Grad st - Official Course Description (pdf) - Engineering Moodle Courses on the Web (formerly eCOW) - Create an Engineering Moodle Course Homepage (formerly eCOW2)
To appear in Handbook of Epistemology, ed. by Matti Sintonen, et al. (Dordrecht: Kluwer). Final Draft 11/10/99

Department of Philosophy, University of Pennsylvania, Philadelphia, PA 19104-6304
Department of Philosophy and Center for Cognitive Science, New Brunswick, NJ 08901
Department of Philosophy, New Brunswick, NJ 08901

1. Introduction: Three Projects in the Study of Reason

Over the past few decades, reasoning and rationality have been the focus of enormous interdisciplinary attention, attracting interest from philosophers, psychologists, economists, statisticians and anthropologists, among others. The widespread interest in the topic reflects the central status of reasoning in human affairs. But it also suggests that there are many different though related projects and tasks which need to be addressed if we are to attain a comprehensive understanding of reasoning.

Three projects that we think are particularly worthy of mention are what we call the descriptive, normative and evaluative projects. The descriptive project – which is typically pursued by psychologists, though anthropologists and computer scientists have also made important contributions – aims to characterize how people actually go about the business of reasoning and to discover the psychological mechanisms and processes that underlie the patterns of reasoning that are observed. By contrast, the normative project is concerned not so much with how people actually reason as with how they should reason. The goal is to discover rules or principles that specify what it is to reason correctly or rationally – to specify standards against which the quality of human reasoning can be measured. Finally, the evaluative project aims to determine the extent to which human reasoning accords with appropriate normative standards. Given some criterion, often only a tacit one, of what counts as good reasoning, those who pursue the evaluative project aim to determine the extent to which human reasoning meets the assumed standard.

In the course of this paper we touch on each of these projects and consider some of the relationships among them. Our point of departure, however, is an array of very unsettling experimental results which, many have believed, suggest a grim outcome to the evaluative project and support a deeply pessimistic view of human rationality. The results that have led to this evaluation started to emerge in the early 1970s when Amos Tversky, Daniel Kahneman and a number of other psychologists began reporting findings suggesting that under quite ordinary circumstances, people reason and make decisions in ways that systematically violate familiar canons of rationality on a broad array of problems. Those first surprising studies sparked the growth of an enormously influential research program – often called the heuristics and biases program – whose impact has been felt in a wide range of disciplines including psychology, economics, political theory and medicine. In section 2, we provide a brief overview of some of the more disquieting experimental findings in this area.

What precisely do these experimental results show? Though there is considerable debate over this question, one widely discussed interpretation that is often associated with the heuristics and biases tradition claims that they have “bleak implications” for the rationality of the man and woman in the street.
What the studies indicate, according to this interpretation, is that ordinary people lack the underlying rational competence to handle a wide array of reasoning tasks, and thus that they must exploit a collection of simple heuristics which make them prone to seriously counter-normative patterns of reasoning or biases. In Section 3, we set out this pessimistic interpretation of the experimental results and explain the technical notion of competence that it invokes. We also briefly sketch the normative standard that advocates of the pessimistic interpretation typically employ when evaluating human reasoning. This normative stance, sometimes called the Standard Picture, maintains that the appropriate norms for reasoning are derived from formal theories such as logic, probability theory and decision theory (Stein, 1996).

Though the pessimistic interpretation has received considerable support, it is not without its critics. Indeed much of the most exciting recent work on reasoning has been motivated, in part, by a desire to challenge the pessimistic account of human rationality. In the latter parts of this paper, our major objective will be to consider and evaluate some of the most recent and intriguing of these challenges. The first comes from the newly emerging field of evolutionary psychology. In section 4 we sketch the conception of the mind and its history advocated by evolutionary psychologists, and in section 5 we evaluate the plausibility of their claim that the evaluative project is likely to have a more positive outcome if these evolutionary psychological theories of cognition are correct. In section 6 we turn our attention to a rather different kind of challenge to the pessimistic interpretation – a cluster of objections that focus on the role of pragmatic, linguistic factors in experimental contexts. According to these objections, much of the data for putative reasoning errors is problematic because insufficient attention has been paid to the way in which people interpret the experimental tasks they are asked to perform. In section 7 we focus on a range of problems surrounding the interpretation and application of the principles of the Standard Picture of rationality. These objections maintain that the paired projects of deriving normative principles from formal systems, such as logic and probability theory, and determining when reasoners have violated these principles are far harder than advocates of the pessimistic interpretation are inclined to admit. Indeed, one might think that the difficulties that these tasks pose suggest that we ought to reject the Standard Picture as a normative benchmark against which to evaluate the quality of human reasoning. Finally, in section 8 we further scrutinize the normative assumptions made by advocates of the pessimistic interpretation and consider a number of arguments which appear to show that we ought to reject the Standard Picture in favor of some alternative conception of normative standards.

2. Some Disquieting Evidence about How Humans Reason

Our first order of business is to describe some of the experimental results that have been taken to support the claim that human beings frequently fail to satisfy appropriate normative standards of reasoning. The literature on these errors and biases has grown to epic proportions over the last few decades and we won’t attempt to provide a comprehensive review. Instead, we focus on what we think are some of the most intriguing and disturbing studies.
2.1. The Selection Task

In 1966, Peter Wason published a highly influential study of a cluster of reasoning problems that became known as the selection task. As a recent textbook observes, this task has become “the most intensively researched single problem in the history of the psychology of reasoning.” (Evans, Newstead & Byrne, 1993, p. 99) Figure 1 illustrates a typical example of a selection task problem. [Figure 1 is not reproduced here. In the version discussed below, subjects see four cards showing E, a consonant, 4 and 5; they are told that every card has a letter on one side and a number on the other, and are asked which cards must be turned over to determine whether the following claim is true: if a card has a vowel on one side, then it has an odd number on the other side. The correct answer is to turn over the E card and the 4 card, and only those.]

What Wason and numerous other investigators have found is that subjects typically perform very poorly on questions like this. Most subjects respond correctly that the E card must be turned over, but many also judge that the 5 card must be turned over, despite the fact that the 5 card could not falsify the claim no matter what is on the other side. Also, a majority of subjects judge that the 4 card need not be turned over, though without turning it over there is no way of knowing whether it has a vowel on the other side. And, of course, if it does have a vowel on the other side then the claim is not true.

It is not the case that subjects do poorly on all selection task problems, however. A wide range of variations on the basic pattern have been tried, and on some versions of the problem a much larger percentage of subjects answer correctly. These results form a bewildering pattern, since there is no obvious feature or cluster of features that separates versions on which subjects do well from those on which they do poorly. As we will see in Section 4, some evolutionary psychologists have argued that these results can be explained if we focus on the sorts of mental mechanisms that would have been crucial for reasoning about social exchange (or “reciprocal altruism”) in the environment of our hominid forebears. The versions of the selection task we’re good at, these theorists maintain, are just the ones that those mechanisms would have been designed to handle. But, as we will also see, this explanation is hardly uncontroversial.

2.2. The Conjunction Fallacy

Much of the experimental literature on theoretical reasoning has focused on tasks that concern probabilistic judgment. Among the best known experiments of this kind are those that involve so-called conjunction problems. In one quite famous experiment, Kahneman and Tversky (1982) presented subjects with the following task.

Linda is 31 years old, single, outspoken, and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice, and also participated in anti-nuclear demonstrations.

Please rank the following statements by their probability, using 1 for the most probable and 8 for the least probable.
(a) Linda is a teacher in elementary school.
(b) Linda works in a bookstore and takes Yoga classes.
(c) Linda is active in the feminist movement.
(d) Linda is a psychiatric social worker.
(e) Linda is a member of the League of Women Voters.
(f) Linda is a bank teller.
(g) Linda is an insurance sales person.
(h) Linda is a bank teller and is active in the feminist movement.

In a group of naive subjects with no background in probability and statistics, 89% judged that statement (h) was more probable than statement (f) despite the obvious fact that one cannot be a feminist bank teller unless one is a bank teller. When the same question was presented to statistically sophisticated subjects – graduate students in the decision science program of the Stanford Business School – 85% gave the same answer!
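The normative point can be put precisely. For any events A and B, the conjunction rule of probability theory guarantees that

\[
\Pr(A \,\&\, B) \;=\; \Pr(A)\,\Pr(B \mid A) \;\le\; \Pr(A),
\]

since \(\Pr(B \mid A) \le 1\). Reading A as “Linda is a bank teller” and B as “Linda is active in the feminist movement,” statement (h) cannot be more probable than statement (f), no matter how representative of a feminist Linda’s description is made to sound.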
Results of this sort, in which subjects judge that a compound event or state of affairs is more probable than one of the components of the compound, have been found repeatedly since Kahneman and Tversky’s pioneering studies, and they are remarkably robust. This pattern of reasoning has been labeled the conjunction fallacy.

2.3. Base Rate Neglect

Another well-known cluster of studies concerns the way in which people use base-rate information in making probabilistic judgments. According to the familiar Bayesian account, the probability of a hypothesis on a given body of evidence depends, in part, on the prior probability of the hypothesis. However, in a series of elegant experiments, Kahneman and Tversky (1973) showed that subjects often seriously undervalue the importance of prior probabilities. One of these experiments presented half of the subjects with the following “cover story.”

A panel of psychologists have interviewed and administered personality tests to 30 engineers and 70 lawyers, all successful in their respective fields. On the basis of this information, thumbnail descriptions of the 30 engineers and 70 lawyers have been written. You will find on your forms five descriptions, chosen at random from the 100 available descriptions. For each description, please indicate your probability that the person described is an engineer, on a scale from 0 to 100.

The other half of the subjects were presented with the same text, except the “base-rates” were reversed. They were told that the personality tests had been administered to 70 engineers and 30 lawyers. Some of the descriptions that were provided were designed to be compatible with the subjects’ stereotypes of engineers, though not with their stereotypes of lawyers. Others were designed to fit the lawyer stereotype, but not the engineer stereotype. And one was intended to be quite neutral, giving subjects no information at all that would be of use in making their decision. Here are two examples, the first intended to sound like an engineer, the second intended to sound neutral:

Jack is a 45-year-old man. He is married and has four children. He is generally conservative, careful and ambitious. He shows no interest in political and social issues and spends most of his free time on his many hobbies which include home carpentry, sailing, and mathematical puzzles.

Dick is a 30-year-old man. He is married with no children. A man of high ability and high motivation, he promises to be quite successful in his field. He is well liked by his colleagues.

As expected, subjects in both groups thought that the probability that Jack is an engineer is quite high. Moreover, in what seems to be a clear violation of Bayesian principles, the difference in cover stories between the two groups of subjects had almost no effect at all. The neglect of base-rate information was even more striking in the case of Dick. That description was constructed to be totally uninformative with regard to Dick’s profession. Thus, the only useful information that subjects had was the base-rate information provided in the cover story. But that information was entirely ignored. The median probability estimate in both groups of subjects was 50%. Kahneman and Tversky’s subjects were not, however, completely insensitive to base-rate information. Following the five descriptions on their form, subjects found the following “null” description:

Suppose now that you are given no information whatsoever about an individual chosen at random from the sample.
The probability that this man is one of the 30 engineers [or, for the other group of subjects: one of the 70 engineers] in the sample of 100 is ____%.

In this case subjects relied entirely on the base-rate; the median estimate was 30% for the first group of subjects and 70% for the second. In their discussion of these experiments, Nisbett and Ross offer this interpretation:

The implication of this contrast between the “no information” and “totally nondiagnostic information” conditions seems clear. When no specific evidence about the target case is provided, prior probabilities are utilized appropriately; when worthless specific evidence is given, prior probabilities may be largely ignored, and people respond as if there were no basis for assuming differences in relative likelihoods. People’s grasp of the relevance of base-rate information must be very weak if they could be distracted from using it by exposure to useless target case information. (Nisbett & Ross, 1980, pp. 145-6)

Before leaving the topic of base-rate neglect, we want to offer one further example illustrating the way in which the phenomenon might well have serious practical consequences. Here is a problem that Casscells et al. (1978) presented to a group of faculty, staff and fourth-year students at Harvard Medical School.

If a test to detect a disease whose prevalence is 1/1000 has a false positive rate of 5%, what is the chance that a person found to have a positive result actually has the disease, assuming that you know nothing about the person’s symptoms or signs? ____%

Under the most plausible interpretation of the problem, the correct Bayesian answer is 2%. But only eighteen percent of the Harvard audience gave an answer close to 2%. Forty-five percent of this distinguished group completely ignored the base-rate information and said that the answer was 95%.
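It is worth pausing to see where the 2% figure comes from. On the standard reading of the problem – which assumes that the test always comes out positive when the disease is present – Bayes’ rule gives

\[
\Pr(D \mid +) \;=\; \frac{\Pr(+ \mid D)\,\Pr(D)}{\Pr(+ \mid D)\,\Pr(D) + \Pr(+ \mid \neg D)\,\Pr(\neg D)} \;=\; \frac{1 \times 0.001}{1 \times 0.001 + 0.05 \times 0.999} \;\approx\; 0.02.
\]

The same calculation is easier to see in terms of frequencies: in a group of 1,000 people, one person has the disease and tests positive, while roughly 50 of the 999 healthy people also test positive; so only about 1 in 51 positive results – roughly 2% – comes from someone who actually has the disease. The popular answer of 95% simply ignores the 1-in-1,000 base rate. (The importance of such frequency formulations will loom large in section 4.2.)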
2.4. Overconfidence

One of the most extensively investigated and most worrisome clusters of phenomena explored by psychologists interested in reasoning and judgment involves the degree of confidence that people have in their responses to factual questions – questions like:

In each of the following pairs, which city has more inhabitants?
(a) Las Vegas (b) Miami
(a) Sydney (b) Melbourne
(a) Hyderabad (b) Islamabad
(a) Bonn (b) Heidelberg

In each of the following pairs, which historical event happened first?
(a) Signing of the Magna Carta (b) Birth of Mohammed
(a) Death of Napoleon (b) Louisiana Purchase
(a) Lincoln’s assassination (b) Birth of Queen Victoria

After each answer subjects are also asked: How confident are you that your answer is correct? 50% 60% 70% 80% 90% 100%

In an experiment using relatively hard questions it is typical to find that for the cases in which subjects say they are 100% confident, only about 80% of their answers are correct; for cases in which they say that they are 90% confident, only about 70% of their answers are correct; and for cases in which they say that they are 80% confident, only about 60% of their answers are correct. This tendency toward overconfidence seems to be very robust. Warning subjects that people are often overconfident has no significant effect, nor does offering them money (or bottles of French champagne) as a reward for accuracy. Moreover, the phenomenon has been demonstrated in a wide variety of subject populations including undergraduates, graduate students, physicians and even CIA analysts. (For a survey of the literature see Lichtenstein, Fischhoff & Phillips, 1982.)

2.5. Anchoring

In their classic paper, “Judgment under uncertainty,” Tversky and Kahneman (1974) showed that quantitative reasoning processes – most notably the production of estimates – can be strongly influenced by the values that are taken as a starting point. They called this phenomenon anchoring. In one experiment, subjects were asked to estimate quickly the products of numerical expressions. One group of subjects was given five seconds to estimate the product of 8 × 7 × 6 × 5 × 4 × 3 × 2 × 1, while a second group was given the same amount of time to estimate the product of 1 × 2 × 3 × 4 × 5 × 6 × 7 × 8. Under these time constraints, most of the subjects can only do some steps of the computation and then have to extrapolate or adjust. Tversky and Kahneman predicted that because the adjustments are usually insufficient, the procedure should lead to underestimation. They also predicted that because the result of the first steps of the descending sequence is higher than that of the ascending one, subjects would produce higher estimates in the first case than in the second. Both predictions were confirmed. The median estimate for the descending sequence was 2,250, while for the ascending sequence it was only 512. Moreover, both groups systematically underestimated the value of the numerical expressions presented to them, since the correct answer is 40,320.
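The pattern fits the anchoring-and-adjustment account. Compare the partial products available after the first few seconds of computation:

\[
8 \times 7 \times 6 \times 5 = 1680, \qquad 1 \times 2 \times 3 \times 4 = 24.
\]

Subjects given the descending sequence thus start from a much higher anchor than subjects given the ascending sequence; and since both anchors lie far below the true product, \(8! = 40{,}320\), insufficient upward adjustment yields underestimation in both groups.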
It’s hard to see how the above experiment can provide grounds for serious concern about human rationality, since it results from imposing serious constraints on the time that people are given to perform the task. Nevertheless, other examples of anchoring are genuinely bizarre and disquieting. In one experiment, for example, Tversky and Kahneman asked subjects to estimate the percentage of African countries in the United Nations. But before making these estimates, subjects were first shown an arbitrary number that was determined by spinning a ‘wheel of fortune’ in their presence. Some, for instance, were shown the number 65 while others the number 10. They were then asked to say if the correct estimate was higher or lower than the number indicated on the wheel and to produce a real estimate of the percentage of African members in the UN. The median estimates were 45% for subjects whose “anchoring” number was 65 and 25% for subjects whose number was 10. The rather disturbing implication of this experiment is that people’s estimates can be affected quite substantially by a numerical “anchoring” value even when they must be fully aware that the anchoring number has been generated by a random process which they surely know to be entirely irrelevant to the task at hand!

3. The Pessimistic Interpretation: Shortcomings in Reasoning Competence

The experimental results we’ve been recounting and the many related results reported in the extensive literature in this area are, we think, intrinsically unsettling. They are even more alarming if, as has occasionally been demonstrated, the same patterns of reasoning and judgment are to be found outside the laboratory. None of us want our illnesses to be diagnosed by physicians who ignore well-confirmed information about base-rates. Nor do we want public officials to be advised by CIA analysts who are systematically overconfident.

The experimental results themselves do not entail any conclusions about the nature or the normative status of the cognitive mechanisms that underlie people’s reasoning and judgment. But a number of writers have urged that these results lend considerable support to a pessimistic hypothesis about those mechanisms, a hypothesis which may be even more disturbing than the results themselves. On this pessimistic view, the examples of problematic reasoning, judgments and decisions that we’ve sketched are not mere performance errors. Rather, they indicate that most people’s underlying reasoning competence is irrational or at least normatively problematic. In order to explain this view more clearly, we first need to explain the distinction between competence and performance on which it is based and say something about the normative standards of reasoning that are being assumed by advocates of this pessimistic interpretation of the experimental results.

3.1. Competence and Performance

The competence/performance distinction, as we will characterize it, was first introduced into cognitive science by Chomsky, who used it in his account of the explanatory strategy of theories in linguistics. (Chomsky, 1965, Ch. 1; 1975; 1980) In testing linguistic theories, an important source of data is the “intuitions” or unreflective judgments that speakers of a language make about the grammaticality of sentences, and about various linguistic properties and relations. To explain these intuitions, and also to explain how speakers go about producing and understanding sentences of their language in ordinary discourse, Chomsky and his followers proposed that a speaker of a language has an internally represented grammar of that language – an integrated set of generative rules and principles that entail an infinite number of claims about the language. For each of the infinite number of sentences in the speaker’s language, the internally represented grammar entails that it is grammatical; for each ambiguous sentence in the speaker’s language, the grammar entails that it is ambiguous, etc. When speakers make the judgments that we call linguistic intuitions, the information in the internally represented grammar is typically accessed and relied upon, though neither the process nor the internally represented grammar are accessible to consciousness. Since the internally represented grammar plays a central role in the production of linguistic intuitions, those intuitions can serve as an important source of data for linguists trying to specify what the rules and principles of the internally represented grammar are.

A speaker’s intuitions are not, however, an infallible source of information about the grammar of the speaker’s language, because the grammar cannot produce linguistic intuitions by itself. The production of intuitions is a complex process in which the internally represented grammar must interact with a variety of other cognitive mechanisms including those subserving perception, motivation, attention, short term memory and perhaps a host of others. In certain circumstances, the activity of any one of these mechanisms may result in a person offering a judgment about a sentence which does not accord with what the grammar actually entails about that sentence. This might happen when we are drunk or tired or in the grip of rage. But even under ordinary conditions when our cognitive mechanisms are not impaired in this way, we may still fail to recognize a sentence as grammatical due to limitations on attention or memory. For example, there is considerable evidence indicating that the short-term memory mechanism has difficulty handling center embedded structures.
Thus it may well be the case that our internally represented grammars entail that the following sentence is grammatical:

What what what he wanted cost would buy in Germany was amazing.

even though our intuitions suggest, indeed shout, that it is not.

Now in the jargon that Chomsky introduced, the rules and principles of a speaker’s internalized grammar constitute the speaker’s linguistic competence. By contrast, the judgments a speaker makes about sentences, along with the sentences the speaker actually produces, are part of the speaker’s linguistic performance. Moreover, as we have just seen, some of the sentences a speaker produces and some of the judgments the speaker makes about sentences will not accurately reflect the speaker’s linguistic competence. In these cases, the speaker is making a performance error.

There are some obvious analogies between the phenomena studied in linguistics and those studied by philosophers and cognitive scientists interested in reasoning. In both cases there is spontaneous and largely unconscious processing of an open-ended class of inputs; people are able to understand endlessly many sentences, and to draw inferences from endlessly many premises. Also, in both cases, people are able to make spontaneous intuitive judgments about an effectively infinite class of cases – judgments about grammaticality, ambiguity, etc. in the case of linguistics, and judgments about validity, probability, etc. in the case of reasoning. Given these analogies, it is plausible to explore the idea that the mechanism underlying our ability to reason is similar to the mechanism underlying our capacity to process language. And if Chomsky is right about language, then the analogous hypothesis about reasoning would claim that people have an internally represented, integrated set of rules and principles of reasoning – a “psycho-logic” as it has been called – which is usually accessed and relied upon when people draw inferences or make judgments about them. As in the case of language, we would expect that neither the processes involved nor the principles of the internally represented psycho-logic are readily accessible to consciousness. We should also expect that people’s inferences, judgments and decisions would not be an infallible guide to what the underlying psycho-logic actually entails about the validity or plausibility of a given inference. For here, as in the case of language, the internally represented rules and principles must interact with lots of other cognitive mechanisms – including attention, motivation, short term memory and many others. The activity of these mechanisms can give rise to performance errors – inferences, judgments or decisions that do not reflect the psycho-logic which constitutes a person’s reasoning competence.

There is, however, an important difference between reasoning and language, even if we assume that a Chomsky-style account of the underlying mechanism is correct in both cases. For in the case of language, it makes no clear sense to offer a normative assessment of a normal person’s competence. The rules and principles that constitute a French speaker’s linguistic competence are significantly different from the rules and principles that underlie language processing in a Chinese speaker. But if we were asked which system was better or which one was correct, we would have no idea what was being asked.
Thus, on the language side of the analogy, there are performance errors, but there is no such thing as a competence error or a normatively problematic competence. If two otherwise normal people have different linguistic competences, then they simply speak different languages or different dialects. On the reasoning side of the analogy, however, things look very different. It is not clear whether there are significant individual and group differences in the rules and principles underlying people’s performance on reasoning tasks, as there so clearly are in the rules and principles underlying people’s linguistic performance. But if there are significant interpersonal differences in reasoning competence, it surely appears to make sense to ask whether one system of rules and principles is better than another.

3.2. The Standard Picture

Clearly, the claim that one system of rules is superior to another assumes – if only tacitly – some standard or metric against which to measure the relative merits of reasoning systems. And this raises the normative question of what standards we ought to adopt when evaluating human reasoning. Though advocates of the pessimistic interpretation rarely offer an explicit and general normative theory of rationality, perhaps the most plausible reading of their work is that they are assuming some version of what Edward Stein calls the Standard Picture:

According to this picture, to be rational is to reason in accordance with principles of reasoning that are based on rules of logic, probability theory and so forth. If the standard picture of reasoning is right, principles of reasoning that are based on such rules are normative principles of reasoning, namely they are the principles we ought to reason in accordance with. (Stein 1996, p. 4)

Thus the Standard Picture maintains that the appropriate criteria against which to evaluate human reasoning are rules derived from formal theories such as classical logic, probability theory and decision theory. So, for example, one might derive something like the following principle of reasoning from the conjunction rule of probability theory:

Conjunction Principle: One ought not to assign a lower degree of probability to the occurrence of event A than one does to the occurrence of A and some (distinct) event B. (Stein 1996, p. 6)

If we assume this principle is correct, there is a clear answer to the question of why the patterns of inference discussed in section 2.2 (on the “conjunction fallacy”) are normatively problematic: they violate the conjunction principle. More generally, given principles of this kind, one can evaluate the specific judgments and decisions issued by human subjects and the psycho-logics that produce them. To the extent that a person’s judgments and decisions accord with the principles of the Standard Picture, they are rational, and to the extent that they violate such principles, the judgments and decisions fail to be rational. Similarly, to the extent that a reasoning competence produces judgments and decisions that accord with the principles of the Standard Picture, the competence is rational, and to the extent that it fails to do so, it is not rational.

Sometimes, of course, it is far from clear how these formal theories are to be applied – a problem that we will return to in section 7. Moreover, as we’ll see in section 8, the Standard Picture is not without its critics. Nonetheless, it does have some notable virtues. First, it seems to provide reasonably precise standards against which to evaluate human reasoning.
Second, it fits very neatly with the intuitively plausible idea that logic and probability theory bear an intimate relationship to issues about how we ought to reason. Finally, it captures an intuition about rationality that has long held a prominent position in philosophical discussions, namely that the norms of reason are “universal principles” – principles that apply to all actual and possible cognizers irrespective of who they are or where they are located in space and time. Since the principles of the Standard Picture are derived from formal/mathematical theories – theories that, if correct, are necessarily correct – they appear to be precisely the sort of principles that one needs to adopt in order to capture the intuition that norms of reasoning are universal principles.

3.3. The Pessimistic Interpretation

We are now, finally, in a position to explain the pessimistic hypothesis that some authors have urged to account for the sorts of experimental results sketched in Section 2. According to this hypothesis, the errors that subjects make in these experiments are very different from the sorts of reasoning errors that people make when their memory is overextended or when their attention wanders. They are also different from the errors people make when they are tired, drunk or emotionally upset. These latter cases are all examples of performance errors – errors that people make when they infer in ways that are not sanctioned by their own psycho-logic. But, according to the pessimistic interpretation, the sorts of errors described in Section 2 are competence errors. In these cases people are reasoning, judging and making decisions in ways that accord with their psycho-logic. The subjects in these experiments do not use the right rules – those sanctioned by the Standard Picture – because they do not have access to them; they are not part of the subjects’ internally represented reasoning competence. What they have instead is a collection of simpler rules or “heuristics” that may often get the right answer, though often they do not. So, according to this pessimistic hypothesis, the subjects make mistakes because their psycho-logic is normatively defective; their internalized rules of reasoning are less than fully rational.

It is not at all clear that Kahneman and Tversky would endorse this interpretation of the experimental results, though a number of other leading researchers clearly do. According to Slovic, Fischhoff and Lichtenstein, for example, “It appears that people lack the correct programs for many important judgmental tasks…. We have not had the opportunity to evolve an intellect capable of dealing conceptually with uncertainty.” (1976, p. 174)

To sum up: According to the pessimistic interpretation, what experimental results of the sort discussed in section 2 suggest is that our reasoning is subject to systematic competence errors. But is this view warranted? Is it really the most plausible response to what we've been calling the evaluative project, or is some more optimistic view in order? In recent years, this has become one of the most hotly debated questions in cognitive science, and numerous challenges have been developed in order to show that the pessimistic interpretation is unwarranted. In the remaining sections of this paper we consider and evaluate some of the more prominent and plausible of these challenges.
4. The Challenge From Evolutionary Psychology

In recent years Gerd Gigerenzer, Leda Cosmides, John Tooby and other leading evolutionary psychologists have been among the most vocal critics of the pessimistic account of human reasoning, arguing that the evidence for human irrationality is far less compelling than advocates of the heuristics and biases tradition suggest. In this section, we will attempt to provide an overview of this recent and intriguing challenge. We start in section 4.1 by outlining the central theses of evolutionary psychology. Then in 4.2 and 4.3 we discuss how these core ideas have been applied to the study of human reasoning. Specifically, we’ll discuss two psychological hypotheses – the cheater detection hypothesis and the frequentist hypothesis – and the evidence that’s been invoked in support of them. Though they are ostensibly descriptive psychological claims, a number of prominent evolutionary psychologists have suggested that these hypotheses and the experimental data that has been adduced in support of them provide us with grounds for rejecting the pessimistic interpretation of human reasoning. In section 5, we consider the plausibility of this claim.

4.1. The Central Tenets of Evolutionary Psychology

Though the interdisciplinary field of evolutionary psychology is too new to have developed any precise and widely agreed upon body of doctrine, there are two theses that are clearly central. First, evolutionary psychologists endorse an account of the structure of the human mind which is sometimes called the massive modularity hypothesis (Sperber, 1994; Samuels, 1998). Second, evolutionary psychologists commit themselves to a methodological claim about the manner in which research in psychology ought to proceed. Specifically, they endorse the claim that adaptationist considerations ought to play a pivotal role in the formation of psychological hypotheses.

4.1.1. The Massive Modularity Hypothesis

Roughly stated, the massive modularity hypothesis (MMH) is the claim that the human mind is largely or perhaps even entirely composed of highly specialized cognitive mechanisms or modules. Though there are different ways in which this rough claim can be spelled out, the version of MMH that evolutionary psychologists defend is heavily informed by the following three assumptions:

Computationalism. The human mind is an information processing device that can be described in computational terms – “a computer made out of organic compounds rather than silicon chips” (Barkow et al., 1992, p. 7). In expressing this view, evolutionary psychologists clearly see themselves as adopting the computationalism that is prevalent in much of cognitive science.

Nativism. Contrary to what has surely been the dominant view in psychology for most of the twentieth century, evolutionary psychologists maintain that much of the structure of the human mind is innate. Evolutionary psychologists thus reject the familiar empiricist proposal that the innate structure of the human mind consists of little more than a general-purpose learning mechanism. Instead they embrace the nativism associated with Chomsky and his followers (Pinker, 1997).

Adaptationism. Evolutionary psychologists invariably claim that our cognitive architecture is largely the product of natural selection. On this view, our minds are composed of adaptations that were “invented by natural selection during the species’ evolutionary history to produce adaptive ends in the species’ natural environment” (Tooby and Cosmides, 1995, p. xiii).
Our minds, evolutionary psychologists maintain, are designed by natural selection in order to solve adaptive problems: “evolutionary recurrent problem[s] whose solution promoted reproduction, however long or indirect the chain by which it did so” (Cosmides and Tooby, 1994, p. 87).

Evolutionary psychologists conceive of modules as a type of computational mechanism – viz. computational devices that are domain-specific as opposed to domain-general. Moreover, in keeping with their nativism and adaptationism, evolutionary psychologists also typically assume that modules are innate and that they are adaptations produced by natural selection. In what follows we will call cognitive mechanisms that possess these features Darwinian modules. The version of MMH endorsed by evolutionary psychologists thus amounts to the claim that:

MMH. The human mind is largely or perhaps even entirely composed of a large number of Darwinian modules – innate, computational mechanisms that are domain-specific adaptations produced by natural selection.

This thesis is far more radical than earlier modular accounts of cognition, such as the one endorsed by Jerry Fodor (Fodor, 1983). According to Fodor, the modular structure of the human mind is restricted to input systems (those responsible for perception and language processing) and output systems (those responsible for producing actions). Though evolutionary psychologists accept the Fodorian thesis that such peripheral systems are modular in character, they maintain, pace Fodor, that many or perhaps even all so-called central capacities, such as reasoning, belief fixation and planning, can also “be divided into domain-specific modules” (Jackendoff, 1992, p. 70). So, for example, it has been suggested by evolutionary psychologists that there are modular mechanisms for such central processes as ‘theory of mind’ inference (Leslie, 1994; Baron-Cohen, 1995), social reasoning (Cosmides and Tooby, 1992), biological categorization (Pinker, 1994) and probabilistic inference (Gigerenzer, 1994 and 1996). On this view, then, “our cognitive architecture resembles a confederation of hundreds or thousands of functionally dedicated computers (often called modules) designed to solve adaptive problems endemic to our hunter-gatherer ancestors” (Tooby and Cosmides, 1995, p. xiv).

4.1.2. The Research Program of Evolutionary Psychology

A central goal of evolutionary psychology is to construct and test hypotheses about the Darwinian modules which, MMH maintains, make up much of the human mind. In pursuit of this goal, research may proceed in two quite different stages. The first, which we’ll call evolutionary analysis, has as its goal the generation of plausible hypotheses about Darwinian modules. An evolutionary analysis tries to determine as much as possible about the recurrent, information processing problems that our forebears would have confronted in what is often called the environment of evolutionary adaptation or the EEA – the environment in which our ancestors evolved. The focus, of course, is on adaptive problems whose successful solution would have directly or indirectly contributed to reproductive success. In some cases these adaptive problems were posed by physical features of the EEA, in other cases they were posed by biological features, and in still other cases they were posed by the social environment in which our forebears were embedded.
Since so many factors are involved in determining the sorts of recurrent information processing problems that our ancestors confronted in the EEA, this sort of evolutionary analysis is a highly interdisciplinary exercise. Clues can be found in many different sorts of investigations, from the study of the Pleistocene climate to the study of the social organization in the few remaining hunter-gatherer cultures. Once a recurrent adaptive problem has been characterized, the theorist may hypothesize that there is a module which would have done a good job at solving that problem in the EEA. An important part of the effort to characterize these recurrent information processing problems is the specification of the sorts of constraints that a mechanism solving the problem could take for granted. If, for example, the important data needed to solve the problem was almost always presented in a specific format, then the mechanism need not be able to handle data presented in other ways. It could “assume” that the data would be presented in the typical format. Similarly, if it was important to be able to detect people or objects with a certain property that is not readily observable, and if, in the EEA, that property was highly correlated with some other property that is easier to detect, the system could simply assume that people or objects with the detectable property also had the one that was hard to observe.

It is important to keep in mind that evolutionary analyses can only be used as a way of suggesting plausible hypotheses about mental modules. By themselves evolutionary analyses provide no assurance that these hypotheses are true. The fact that it would have enhanced our ancestors’ fitness if they had developed a module that solved a certain problem is no guarantee that they did develop such a module, since there are many reasons why natural selection and the other processes that drive evolution may fail to produce a mechanism that would enhance fitness (Stich, 1990, Ch. 3).

Once an evolutionary analysis has succeeded in suggesting a plausible hypothesis, the next stage in the evolutionary psychology research strategy is to test the hypothesis by looking for evidence that contemporary humans actually have a module with the properties in question. Here, as earlier, the project is highly interdisciplinary. Evidence can come from experimental studies of reasoning in normal humans (Cosmides, 1989; Cosmides and Tooby, 1992, 1996; Gigerenzer, 1991a; Gigerenzer and Hug, 1992), from developmental studies focused on the emergence of cognitive skills (Carey and Spelke, 1994; Leslie, 1994; Gelman and Brenneman, 1994), or from the study of cognitive deficits in various abnormal populations (Baron-Cohen, 1995). Important evidence can also be gleaned from studies in cognitive anthropology (Barkow, 1992; Hutchins, 1980), history, and even from such surprising areas as the comparative study of legal traditions (Wilson and Daly, 1992). When evidence from a number of these areas points in the same direction, an increasingly strong case can be made for the existence of a module suggested by evolutionary analysis.

In 4.2 and 4.3 we consider two applications of this two-stage research strategy to the study of human reasoning. Though the interpretation of the studies we will sketch is the subject of considerable controversy, a number of authors have suggested that they show there is something deeply mistaken about the pessimistic hypothesis set out in Section 3.
That hypothesis claims that people lack normatively appropriate rules or principles for reasoning about problems like those set out in Section 2. But when we look at variations on these problems that may make them closer to the sort of recurrent problems our forebears would have confronted in the EEA, performance improves dramatically. And this, it is argued, is evidence for the existence of at least two normatively sophisticated Darwinian modules, one designed to deal with probabilistic reasoning when information is presented in a frequency format, the other designed to deal with reasoning about cheating in social exchange settings.

4.2. The Frequentist Hypothesis

The experiments reviewed in Sections 2.2 and 2.3 indicate that in many cases people are quite bad at reasoning about probabilities, and the pessimistic interpretation of these results claims that people use simple (“fast and dirty”) heuristics in dealing with these problems because their cognitive systems have no access to more appropriate principles for reasoning about probabilities. But, in a series of recent and very provocative papers, Gigerenzer (1994; Gigerenzer & Hoffrage, 1995) and Cosmides and Tooby (1996) argue that from an evolutionary point of view this would be a surprising and paradoxical result. “As long as chance has been loose in the world,” Cosmides and Tooby note, “animals have had to make judgments under uncertainty.” (Cosmides and Tooby, 1996, p. 14; for the remainder of this section, all quotes are from Cosmides and Tooby, 1996, unless otherwise indicated.) Thus making judgments when confronted with probabilistic information posed adaptive problems for all sorts of organisms, including our hominid ancestors, and “if an adaptive problem has endured for a long enough period and is important enough, then mechanisms of considerable complexity can evolve to solve it” (p. 14). But as we saw in the previous section, “one should expect a mesh between the design of our cognitive mechanisms, the structure of the adaptive problems they evolved to solve, and the typical environments that they were designed to operate in – that is, the ones that they evolved in” (p. 14). So in launching their evolutionary analysis Cosmides and Tooby’s first step is to ask: “what kinds of probabilistic information would have been available to any inductive reasoning mechanisms that we might have evolved?” (p. 15)

In the modern world we are confronted with statistical information presented in many ways: weather forecasts tell us the probability of rain tomorrow, sports pages list batting averages, and widely publicized studies tell us how much the risk of colon cancer is reduced in people over 50 if they have a diet high in fiber. But information about the probability of single events (like rain tomorrow) and information expressed in percentage terms would have been rare or unavailable in the EEA. What was available in the environment in which we evolved was the encountered frequencies of actual events – for example, that we were successful 5 times out of the last 20 times we hunted in the north canyon.

Our hominid ancestors were immersed in a rich flow of observable frequencies that could be used to improve decision-making, given procedures that could take advantage of them. So if we have adaptations for inductive reasoning, they should take frequency information as input. (pp. 15-16)

After a cognitive system has registered information about relative frequencies it might convert this information to some other format.
If, for example, the system has noted that 5 out of the last 20 north canyon hunts were successful, it might infer and store the conclusion that there is a .25 chance that a north canyon hunt will be successful. However, Cosmides and Tooby argue, “there are advantages to storing and operating on frequentist representations because they preserve important information that would be lost by conversion to single-event probability. For example, ... the number of events that the judgment was based on would be lost in conversion. When the n disappears, the index of reliability of the information disappears as well.” (p. 16)

These and other considerations about the environment in which our cognitive systems evolved lead Cosmides and Tooby to hypothesize that our ancestors “evolved mechanisms that took frequencies as input, maintained such information as frequentist representations, and used these frequentist representations as a database for effective inductive reasoning.” Since evolutionary psychologists expect the mind to contain many specialized modules, Cosmides and Tooby are prepared to find other modules involved in inductive reasoning that work in other ways.

We are not hypothesizing that every cognitive mechanism involving statistical induction necessarily operates on frequentist principles, only that at least one of them does, and that this makes frequentist principles an important feature of how humans intuitively engage the statistical dimension of the world. (p. 17)

But, while their evolutionary analysis does not preclude the existence of inductive mechanisms that are not focused on frequencies, it does suggest that when a mechanism that operates on frequentist principles is engaged, it will do a good job, and thus the probabilistic inferences it makes will generally be normatively appropriate ones. This, of course, is in stark contrast to the bleak implications hypothesis which claims that people simply do not have access to normatively appropriate strategies in this area. From their hypothesis, Cosmides and Tooby derive a number of predictions:

(1) Inductive reasoning performance will differ depending on whether subjects are asked to judge a frequency or the probability of a single event.
(2) Performance on frequentist versions of problems will be superior to non-frequentist versions.
(3) The more subjects can be mobilized to form a frequentist representation, the better performance will be.
(4) ... Performance on frequentist problems will satisfy some of the constraints that a calculus of probability specifies, such as Bayes’ rule. This would occur because some inductive reasoning mechanisms in our cognitive architecture embody aspects of a calculus of probability. (p. 17)

To test these predictions Cosmides and Tooby ran an array of experiments designed around the medical diagnosis problem which Casscells et al. used to demonstrate that even very sophisticated subjects ignore information about base rates. In their first experiment Cosmides and Tooby replicated the results of Casscells et al. using exactly the same wording that we reported in section 2.3. Of the 25 Stanford University undergraduates who were subjects in this experiment, only 3 (= 12%) gave the normatively appropriate Bayesian answer of “2%”, while 14 subjects (= 56%) answered “95%”. In another experiment, Cosmides and Tooby gave 50 Stanford students a similar problem in which relative frequencies rather than percentages and single event probabilities were emphasized.
The “frequentist” version of the problem read as follows:

1 out of every 1000 Americans has disease X. A test has been developed to detect when a person has disease X. Every time the test is given to a person who has the disease, the test comes out positive. But sometimes the test also comes out positive when it is given to a person who is completely healthy. Specifically, out of every 1000 people who are perfectly healthy, 50 of them test positive for the disease. Imagine that we have assembled a random sample of 1000 Americans. They were selected by lottery. Those who conducted the lottery had no information about the health status of any of these people. Given the information above: How many people who test positive for the disease will actually have the disease? _____ out of _____.

On this problem the results were dramatically different. 38 of the 50 subjects (= 76%) gave the correct Bayesian answer. A series of further experiments systematically explored the differences between the problem used by Casscells et al. and the problems on which subjects perform well, in an effort to determine which factors had the largest effect. Although a number of different factors affect performance, two predominate. “Asking for the answer as a frequency produces the largest effect, followed closely by presenting the problem information as frequencies.” (p. 58) The most important conclusion that Cosmides and Tooby want to draw from these experiments is that “frequentist representations activate mechanisms that produce bayesian reasoning, and that this is what accounts for the very high level of bayesian performance elicited by the pure frequentist problems that we tested.” (p. 59)

As further support for this conclusion, Cosmides and Tooby cite several striking results reported by other investigators. In one study, Fiedler (1988), following up on some intriguing findings in Tversky and Kahneman (1983), showed that the percentage of subjects who commit the conjunction fallacy can be radically reduced if the problem is cast in frequentist terms. In the “feminist bank teller” example, Fiedler contrasted the wording reported in 2.2 with a problem that read as follows:

Linda is 31 years old, single, outspoken, and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice, and also participated in anti-nuclear demonstrations. There are 100 people who fit the description above. How many of them are: bank tellers and active in the feminist movement?

In Fiedler’s replication using the original formulation of the problem, 91% of subjects judged the feminist bank teller option to be more probable than the bank teller option. However in the frequentist version only 22% of subjects judged that there would be more feminist bank tellers than bank tellers. In yet another experiment, Hertwig and Gigerenzer (1994; reported in Gigerenzer, 1994) told subjects that there were 200 women fitting the “Linda” description, and asked them to estimate the number who were bank tellers, feminist bank tellers, and feminists. Only 13% committed the conjunction fallacy.

Studies on over-confidence have also been marshaled in support of the frequentist hypothesis. In one of these, Gigerenzer, Hoffrage and Kleinbölting (1991) reported that the sort of overconfidence described in 2.4 can be made to “disappear” by having subjects answer questions formulated in terms of frequencies.
Gigerenzer and his colleagues gave subjects lists of 50 questions similar to those described in 2.4, except that in addition to being asked to rate their confidence after each response (which, in effect, asks them to judge the probability of that single event), subjects were, at the end, also asked a question about the frequency of correct responses: “How many of these 50 questions do you think you got right?” In two experiments, the average over-confidence was about 15%, when single-event confidences were compared with actual relative frequencies of correct answers, replicating the sorts of findings we sketched in Section 2.4. However, comparing the subjects’ “estimated frequencies with actual frequencies of correct answers made ‘overconfidence’ disappear.... Estimated frequencies were practically identical with actual frequencies, with even a small tendency towards underestimation. The ‘cognitive illusion’ was gone.” (Gigerenzer, 1991a, p. 89)

4.3. The Cheater Detection Hypothesis

In Section 2.1 we reproduced one version of Wason’s four card selection task on which most subjects perform very poorly, and we noted that, while subjects do equally poorly on many other versions of the selection task, there are some versions on which performance improves dramatically. Here is an example from Griggs and Cox (1982). [The problem, presented as a figure in the original, is not reproduced here. It casts the subject as a bouncer in a Boston bar who must enforce the rule that if a person is drinking beer, then that person must be over the legal drinking age, and asks which of four patrons – one drinking beer, one drinking a soft drink, one clearly over the legal age, and one clearly under it – must be checked.] From a logical point of view, this problem would appear to be structurally identical to the problem in Section 2.1, but the content of the problems clearly has a major effect on how well people perform. About 75% of college student subjects get the right answer on this version of the selection task, while only 25% get the right answer on the other version. Though there have been dozens of studies exploring this “content effect” in the selection task, the results have been, and continue to be, rather puzzling since there is no obvious property or set of properties shared by those versions of the task on which people perform well.

However, in several recent and widely discussed papers, Cosmides and Tooby have argued that an evolutionary analysis enables us to see a surprising pattern in these otherwise bewildering results. (Cosmides, 1989; Cosmides and Tooby, 1992)

The starting point of their evolutionary analysis is the observation that in the environment in which our ancestors evolved (and in the modern world as well) it is often the case that unrelated individuals can engage in “non-zero-sum” exchanges, in which the benefits to the recipient (measured in terms of reproductive fitness) are significantly greater than the costs to the donor. In a hunter-gatherer society, for example, it will sometimes happen that one hunter has been lucky on a particular day and has an abundance of food, while another hunter has been unlucky and is near starvation. If the successful hunter gives some of his meat to the unsuccessful hunter rather than gorging on it himself, this may have a small negative effect on the donor’s fitness since the extra bit of body fat that he might add could prove useful in the future, but the benefit to the recipient will be much greater. Still, there is some cost to the donor; he would be slightly better off if he didn’t help unrelated individuals. Despite this, it is clear that people sometimes do help non-kin, and there is evidence to suggest that non-human primates (and even vampire bats!) do so as well.
At first blush, this sort of “altruism” seems to pose an evolutionary puzzle, since if a gene which made an organism less likely to help unrelated individuals appeared in a population, those with the gene would be slightly more fit, and thus the gene would gradually spread through the population. A solution to this puzzle was proposed by Robert Trivers (1971), who noted that, while one-way altruism might be a bad idea from an evolutionary point of view, reciprocal altruism is quite a different matter. If a pair of hunters (be they humans or bats) can each count on the other to help when one has an abundance of food and the other has none, then they may both be better off in the long run. Thus organisms with a gene or a suite of genes that inclines them to engage in reciprocal exchanges with non-kin (or “social exchanges” as they are sometimes called) would be more fit than members of the same species without those genes.

But of course, reciprocal exchange arrangements are vulnerable to cheating. In the business of maximizing fitness, individuals will do best if they are regularly offered and accept help when they need it, but never reciprocate when others need help. This suggests that if stable social exchange arrangements are to exist, the organisms involved must have cognitive mechanisms that enable them to detect cheaters, and to avoid helping them in the future. And since humans apparently are capable of entering into stable social exchange relations, this evolutionary analysis leads Cosmides and Tooby to hypothesize that we have one or more Darwinian modules whose job it is to recognize reciprocal exchange arrangements and to detect cheaters who accept the benefits in such arrangements but do not pay the costs. In short, the evolutionary analysis leads Cosmides and Tooby to hypothesize the existence of one or more cheater detection modules. We call this the cheater detection hypothesis.

If this is right, then we should be able to find some evidence for the existence of these modules in the thinking of contemporary humans. It is here that the selection task enters the picture. For according to Cosmides and Tooby, some versions of the selection task engage the mental module(s) which were designed to detect cheaters in social exchange situations. And since these mental modules can be expected to do their job efficiently and accurately, people do well on those versions of the selection task. Other versions of the task do not trigger the social exchange and cheater detection modules. Since we have no mental modules that were designed to deal with these problems, people find them much harder, and their performance is much worse. The bouncer-in-the-Boston-bar problem presented earlier is an example of a selection task that triggers the cheater detection mechanism. The problem involving vowels and odd numbers presented in Section 2.1 is an example of a selection task that does not trigger the cheater detection module.

In support of their theory, Cosmides and Tooby assemble an impressive body of evidence. To begin, they note that the cheater detection hypothesis claims that social exchanges, or “social contracts,” will trigger good performance on selection tasks, and this enables us to see a clear pattern in the otherwise confusing experimental literature that had grown up before their hypothesis was formulated.
When we began this research in 1983, the literature on the Wason selection task was full of reports of a wide variety of content effects, and there was no satisfying theory or empirical generalization that could account for these effects. When we categorized these content effects according to whether they conformed to social contracts, a striking pattern emerged. Robust and replicable content effects were found only for rules that related terms that are recognizable as benefits and cost/requirements in the format of a standard social contract…. No thematic rule that was not a social contract had ever produced a content effect that was both robust and replicable…. All told, for non-social contract thematic problems, 3 experiments had produced a substantial content effect, 2 had produced a weak content effect, and 14 had produced no content effect at all. The few effects that were found did not replicate. In contrast, 16 out of 16 experiments that fit the criteria for standard social contracts … elicited substantial content effects. (Cosmides and Tooby, 1992, p. 183)

Since the formulation of the cheater detection hypothesis, a number of additional experiments have been designed to test the hypothesis and rule out alternatives. Among the most persuasive of these are a series of experiments by Gigerenzer and Hug (1992). In one set of experiments, these authors set out to show that, contrary to an earlier proposal by Cosmides and Tooby, merely perceiving a rule as a social contract was not enough to engage the cognitive mechanism that leads to good performance in the selection task, and that cueing for the possibility of cheating was required. To do this they created two quite different context stories for social contract rules. One of the stories required subjects to attend to the possibility of cheating, while in the other story cheating was not relevant. Among the social contract rules they used was the following, which, they note, is widely known among hikers in the Alps:

(i.) If someone stays overnight in the cabin, then that person must bring along a bundle of wood from the valley.

The first context story, which the investigators call the “cheating version,” explained:

There is a cabin at high altitude in the Swiss Alps, which serves hikers as an overnight shelter. Since it is cold and firewood is not otherwise available at that altitude, the rule is that each hiker who stays overnight has to carry along his/her own share of wood. There are rumors that the rule is not always followed. The subjects were cued into the perspective of a guard who checks whether any one of four hikers has violated the rule. The four hikers were represented by four cards that read “stays overnight in the cabin”, “carried no wood”, “carried wood”, and “does not stay overnight in the cabin”.

The other context story, the “no cheating version,” cued subjects into the perspective of a member of the German Alpine Association who visits the Swiss cabin and tries to discover how the local Swiss Alpine Club runs this cabin. He observes people bringing wood to the cabin, and a friend suggests the familiar overnight rule as an explanation. The context story also mentions an alternative explanation: rather than the hikers, the members of the Swiss Alpine Club, who do not stay overnight, might carry the wood. The task of the subject was to check four persons (the same four cards) in order to find out whether anyone had violated the overnight rule suggested by the friend. (Gigerenzer and Hug, 1992, pp.
142-143)

The cheater detection hypothesis predicts that subjects will do better on the cheating version than on the no cheating version, and that prediction was confirmed. In the cheating version, 89% of the subjects got the right answer, while in the no cheating version, only 53% responded correctly.

In another set of experiments, Gigerenzer and Hug showed that when social contract rules make cheating on both sides possible, cueing subjects into the perspective of one party or the other can have a dramatic effect on performance in selection task problems. One of the rules they used that allows the possibility of bilateral cheating was:

(ii.) If an employee works on the weekend, then that person gets a day off during the week.

Here again, two different context stories were constructed, one of which was designed to get subjects to take the perspective of the employee, while the other was designed to get subjects to take the perspective of the employer.

The employee version stated that working on the weekend is a benefit for the employer, because the firm can make use of its machines and be more flexible. Working on the weekend, on the other hand, is a cost for the employee. The context story was about an employee who had never worked on the weekend before, but who is considering working on Saturdays from time to time, since having a day off during the week is a benefit that outweighs the costs of working on Saturday. There are rumors that the rule has been violated before. The subject’s task was to check information about four colleagues to see whether the rule has been violated. The four cards read: “worked on the weekend”, “did not get a day off”, “did not work on the weekend”, “did get a day off”. In the employer version, the same rationale was given. The subject was cued into the perspective of the employer, who suspects that the rule has been violated before. The subjects’ task was the same as in the other perspective [viz. to check information about four employees to see whether the rule has been violated]. (Gigerenzer & Hug, 1992, p. 154)

In these experiments about 75% of the subjects cued to the employee’s perspective chose the first two cards (“worked on the weekend” and “did not get a day off”) while less than 5% chose the other two cards. The results for subjects cued to the employer’s perspective were radically different. Over 60% of subjects selected the last two cards (“did not work on the weekend” and “did get a day off”) while less than 10% selected the first two.
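The normative analysis behind all of these selection tasks is the same: a conditional rule “If P then Q” can be violated only by a case that is both P and not-Q, so the only cards worth turning over are those whose visible face shows P or shows not-Q. A minimal sketch of this analysis, using the cabin rule as the example (our illustration, not part of Gigerenzer and Hug’s materials):

# Which cards must be turned over to check a rule "If P then Q"?
# Only a card that might be a P-and-not-Q case can reveal a
# violation, i.e. a card visibly showing P or visibly showing not-Q
# (our illustration of the normative analysis).

def must_turn(visible_face, antecedent, negated_consequent):
    return visible_face in (antecedent, negated_consequent)

cards = ["stays overnight in the cabin", "carried no wood",
         "carried wood", "does not stay overnight in the cabin"]

to_check = [c for c in cards
            if must_turn(c, antecedent="stays overnight in the cabin",
                         negated_consequent="carried no wood")]
print(to_check)
# -> ['stays overnight in the cabin', 'carried no wood']
# These are exactly the cards that subjects overwhelmingly chose in
# the "cheating version" of the problem.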
4.4 How good is the case for the evolutionary psychological conception of reasoning?

The theories urged by evolutionary psychologists aim to provide a partial answer to the questions raised by what we’ve been calling the descriptive project – the project that seeks to specify the cognitive mechanisms which underlie our capacity to reason. The MMH provides a general schema for how we should think about these cognitive mechanisms, according to which they are largely or perhaps even entirely modular in character. The frequentist hypothesis and cheater detection hypothesis, by contrast, make more specific claims about some of the particular modular reasoning mechanisms that we possess. Moreover, if correct, they provide some empirical support for MMH. But these three hypotheses are (to put it mildly) very controversial, and the question arises: How plausible are they?

Though a detailed discussion of this question is beyond the scope of the present paper, we think that these hypotheses are important proposals about the mechanisms which subserve reasoning and that they ought to be taken very seriously indeed. As we have seen, the cheater detection and frequentist hypotheses accommodate an impressive array of data from the experimental literature on reasoning and do not seem a priori implausible. Moreover, empirical support for MMH comes not merely from the studies outlined in this section but also from a disparate range of other domains of research, including work in neuropsychology (Shallice, 1989) and research in cognitive developmental psychology on “theory of mind” inference (Leslie, 1994; Baron-Cohen, 1995) and arithmetic reasoning (Dehaene, 1997). Further, as one of us has argued elsewhere, there are currently no good reasons to reject the MMH defended by evolutionary psychologists (Samuels, in press).

But when saying that the MMH, frequentist hypothesis and cheater detection hypothesis are plausible candidates that ought to be taken very seriously, we do not mean that they are highly confirmed. For, as far as we can see, no currently available theory of the mechanisms underlying human reasoning is highly confirmed. Nor, for that matter, do we mean that there are no plausible alternatives. On the contrary, each of the three hypotheses outlined in this section is merely one among a range of plausible candidates. So, for example, although all the experimental data outlined in 4.3 is compatible with the cheater detection hypothesis, many authors have proposed alternative explanations of these data, and in some cases they have supported these alternatives with additional experimental evidence. Among the most prominent alternatives are the pragmatic reasoning schemas approach defended by Cheng, Holyoak and their colleagues (Cheng and Holyoak, 1985 & 1989; Cheng, Holyoak, Nisbett and Oliver, 1986) and Denise Cummins’ proposal that we possess an innate, domain-specific deontic reasoning module for drawing inferences about “permissions, obligations, prohibitions, promises, threats and warnings” (Cummins, 1996, p. 166).

Nor, when saying that the evolutionary psychological hypotheses deserve to be taken seriously, do we wish to suggest that they will require no further clarification and “fine-tuning” as enquiry proceeds. Quite the opposite: we suspect that as further evidence accumulates, evolutionary psychologists will need to clarify and elaborate on their proposals if they are to continue to be serious contenders in the quest for explanations of our reasoning capacities. Indeed, in our view, the currently available evidence already requires that the frequentist hypothesis be articulated more carefully. In particular, it is simply not the case that humans never exhibit systematically counter-normative patterns of inference on reasoning problems stated in terms of frequencies. In their detailed study of the conjunction fallacy, for example, Tversky and Kahneman (1983) reported an experiment in which subjects were asked to estimate both the number of “seven-letter words of the form ‘‑‑‑‑‑n-’ in four pages of text” and the number of “seven letter words of the form ‘‑‑‑‑ing’ in four pages of text.” The median estimate for words ending in “ing” was about three times higher than for words with “n” in the next-to-last position.
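The inconsistency in these estimates is easy to see once one notices that the first class is contained in the second: every seven-letter word ending in “ing” also has “n” in the next-to-last position, so the “ing” class cannot be the larger one. A quick check of the inclusion (our illustration):

import re

# Every seven-letter word of the form '----ing' is also a seven-letter
# word of the form '-----n-', so the second class can be no smaller
# than the first (our illustration of the set inclusion).

ing_pattern = re.compile(r"^.{4}ing$")   # seven letters ending in 'ing'
n_pattern = re.compile(r"^.{5}n.$")      # seven letters, 'n' in sixth place

for word in ["nothing", "warning", "evening", "morning"]:
    assert ing_pattern.match(word) and n_pattern.match(word)

# Hence any estimate that makes the '----ing' class larger than the
# '-----n-' class commits the conjunction fallacy, in a frequency format.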
As Kahneman and Tversky (1996) note, this appears to be a clear counter-example to Gigerenzer’s claim that the conjunction fallacy disappears in judgments of frequency. Though, on our view, this sort of example does not show that the frequentist hypothesis is false, it does indicate that the version of the hypothesis suggested by Gigerenzer, Cosmides and Tooby is too simplistic. Since some frequentist representations do not activate mechanisms that produce good bayesian reasoning, there are presumably additional factors that play a role in the triggering of such reasoning. Clearly, more experimental work is needed to determine what these factors are, and more subtle evolutionary analyses are needed to throw light on why these more complex triggers evolved.

To sum up: Though these are busy and exciting times for those studying human reasoning, and there is obviously much that remains to be discovered, we believe we can safely conclude from the studies recounted in this section that the evolutionary psychological conception of reasoning deserves to be taken very seriously. Whether or not it ultimately proves to be correct, the highly modular picture of reasoning has generated a great deal of impressive research and will continue to do so for the foreseeable future. Thus we would do well to begin exploring what the implications would be for various claims about human rationality if the Massive Modularity Hypothesis turns out to be correct.

5. What are the implications of massive modularity for the evaluative project?

Suppose it turns out that evolutionary psychologists are right about the mental mechanisms that underlie human reasoning. Suppose that the MMH, the cheater detection hypothesis and the frequentist hypothesis are all true. How would this be relevant to what we have called the evaluative project? What would it tell us about the extent of human rationality? In particular, would this show that the pessimistic thesis often associated with the heuristics and biases tradition is unwarranted?

Such a conclusion is frequently suggested in the writings of evolutionary psychologists. On this view, the theories and findings of evolutionary psychology indicate that human reasoning is not subserved by “fast and dirty” heuristics but by “elegant machines” that were designed and refined by natural selection over millions of years. According to this optimistic view, concerns about systematic irrationality are unfounded. One conspicuous indication of this optimism is the title that Cosmides and Tooby chose for the paper in which they reported their data on the Harvard Medical School problem: “Are humans good intuitive statisticians after all? Rethinking some conclusions from the literature on judgment under uncertainty.” Five years earlier, while Cosmides and Tooby’s research was still in progress, Gigerenzer reported some of their early findings in a paper with the provocative title: “How to make cognitive illusions disappear: Beyond ‘heuristics and biases’.” The clear suggestion, in both of these titles, is that the findings they report pose a head-on challenge to the pessimism of the heuristics and biases tradition. Nor are these suggestions restricted to titles. In paper after paper, Gigerenzer has said things like “more optimism is in order” (1991b, 245) and “we need not necessarily worry about human rationality” (1998, 280); and he has maintained that his view “supports intuition as basically rational” (1991b, 242).
In light of comments like these, it is hardly surprising that one commentator has described Gigerenzer and his colleagues as having “taken an empirical stand against the view of some psychologists that people are pretty stupid” (Lopes, quoted in Bower, 1996).

A point that needs to be made before we consider the implications of evolutionary psychology for the evaluative project is that once we adopt a massively modular account of the cognitive mechanisms underlying reasoning, it becomes necessary to distinguish between two different versions of the pessimistic interpretation. The first version maintains that

P1: Human beings make competence errors

while the second makes the claim that

P2: All the reasoning competences that people possess are normatively problematic.

If we assume, contrary to what evolutionary psychologists suppose, that we possess only one reasoning competence, then there is little point in drawing this distinction since, for all practical purposes, the two claims will be equivalent. But, as we have seen, evolutionary psychologists maintain that we possess many reasoning mechanisms – different modules for different kinds of reasoning task. This naturally suggests – and indeed is interpreted by evolutionary psychologists as suggesting – that we possess lots of reasoning competences. Thus, for example, Cosmides and Tooby (1996) “suggest that the human mind may contain a series of well-engineered competences capable of being activated under the right conditions” (Cosmides and Tooby, 1996, p. 17). For our purposes, the crucial point to notice is that once we follow evolutionary psychologists in adopting the assumption of multiple reasoning competences, P1 clearly doesn’t entail P2. For even if we make lots of competence errors, it’s clearly possible that we also possess many normatively unproblematic reasoning competences.

With the above distinction in hand, what should we say about the implications of evolutionary psychology for the pessimistic interpretation? First, under the assumption that both the frequentist hypothesis and cheater detection hypothesis are correct, we ought to reject P2. This is because, by hypothesis, these mechanisms embody normatively unproblematic reasoning competences. In which case, at least some of our reasoning competences will be normatively unproblematic. But do researchers within the heuristics and biases tradition really intend to endorse P2? The answer is far from clear, since advocates of the pessimistic interpretation do not distinguish between P1 and P2. Some theorists have made claims that really do appear to suggest a commitment to P2. But most researchers within the heuristics and biases tradition have been careful to avoid a commitment to the claim that we possess no normatively unproblematic reasoning competences. Moreover, it is clear that this claim simply isn’t supported by the available empirical data, and most advocates of the heuristics and biases tradition are surely aware of this. For these reasons we are inclined to think that quotations which appear to support the adoption of P2 are more an indication of rhetorical excess than genuine theoretical commitment.

What of P1 – the claim that human beings make competence errors when reasoning? This seems like a claim that advocates of the heuristics and biases approach really do endorse. But does the evolutionary psychological account of reasoning support the rejection of this thesis? Does it show that we make no competence errors? As far as we can tell, the answer is No.
Even if evolutionary psychology is right in claiming that we possess some normatively unproblematic reasoning competences, it clearly does not follow that no errors in reasoning can be traced to a normatively problematic competence. According to MMH, people have many reasoning mechanisms, and each of these modules has its own special set of rules. So there isn’t one psycho-logic, there are many. In which case, the claim that we possess normatively appropriate reasoning competences for frequentist reasoning, cheater detection and perhaps other reasoning tasks is perfectly compatible with the claim that we also possess other reasoning modules that deploy normatively problematic principles which result in competence errors. Indeed, if MMH is true, then there will be lots of reasoning mechanisms that evolutionary psychologists have yet to discover. And it is far from clear why we should assume that these undiscovered mechanisms are normatively unproblematic.

To be sure, evolutionary psychologists do maintain that natural selection would have equipped us with a number of well designed reasoning mechanisms that employ rational or normatively appropriate principles on the sorts of problems that were important in the environment of our hunter/gatherer forebears. However, such evolutionary arguments for the rationality of human cognition are notoriously problematic. Moreover, even if we suppose that such evolutionary considerations justify the claim that we possess normatively appropriate principles for the sorts of problems that were important in the environment of our hunter/gatherer forebears, it’s clear that there are many sorts of reasoning problems that are important in the modern world – problems involving the probabilities of single events, for example – that these mechanisms were not designed to handle. Indeed in many cases, evolutionary psychologists suggest, the elegant special-purpose reasoning mechanisms designed by natural selection will not even be able to process these problems. Many of the problems investigated in the “heuristics and biases” literature appear to be of this sort. And evolutionary psychology gives us no reason to suppose that people have rational inferential principles for dealing with problems like these.

To recapitulate: If the evolutionary psychological conception of our reasoning mechanisms is correct, we should reject P2 – the claim that human beings possess no normatively unproblematic reasoning competences. However, as we argued earlier, it is not P2 but P1 – the claim that we make competence errors – that advocates of the heuristics and biases program, such as Kahneman and Tversky, typically endorse. And evolutionary psychology provides us with no reason to reject this claim. As we will see in the sections to follow, however, the argument based on evolutionary psychology is not the only objection that’s been leveled against the claim that humans make competence errors.

6. Pragmatic objections

It is not uncommon for critics of the pessimistic interpretation to point out that insufficient attention has been paid to the way in which pragmatic factors might influence how people understand the experimental tasks that they are asked to perform. One version of this complaint, developed by Gigerenzer (1996), takes the form of a very general objection.
According to this objection, Kahneman, Tversky and others are guilty “of imposing a statistical principle as a norm without examining content” – that is, without inquiring into how, under experimental conditions, subjects understand the tasks that they are asked to perform (Gigerenzer, 1996, p. 593). Gigerenzer maintains that we cannot assume that people understand these tasks in the manner in which the experimenters intend them to. We cannot assume, for example, that when presented with the “feminist bank teller” problem, people understand the term “probable” as having the same meaning as it does within the calculus of chance, or that the word “and” in English has the same semantics as the truth-functional operator “∧”. On the contrary, depending on context, these words may be interpreted in a range of different ways. “Probable” can mean, for example, “plausible,” “having the appearance of truth” and “that which may in view of present evidence be reasonably expected to happen” (ibid.). But if this is so, then according to Gigerenzer we cannot conclude from experiments on human reasoning that people are reasoning in a counter-normative fashion, since it may turn out that, as subjects understand the task, no normative principle is being violated.

There is much to be said for Gigerenzer’s objection. First, he is clearly correct that, to the extent that it’s possible, pragmatic factors should be controlled for in experiments on human reasoning. Second, it is surely the case that failure to do so weakens the inference from experimental data to conclusions about the way in which we reason. Finally, Gigerenzer is right to claim that insufficient attention has been paid by advocates of the heuristics and biases tradition to how people construe the experimental tasks that they are asked to perform.

Nevertheless, we think that Gigerenzer’s argument is of only limited value as an objection to the pessimistic interpretation. First, much the same criticism applies to the experiments run by Gigerenzer and other psychologists who purport to provide evidence for normatively unproblematic patterns of inference. These investigators have done little more than their heuristics and biases counterparts to control for pragmatic factors. In which case, for all we know, it may be that the subjects in these experiments are not giving correct answers to the problems as they understand them, even though, given the experimenters’ understanding of the task, their responses are normatively unimpeachable. Gigerenzer’s pragmatic objection is, in short, a double-edged one. If we take it too seriously, then it undermines both the experimental data for reasoning errors and the experimental data for correct reasoning.

A second, related problem with Gigerenzer’s general pragmatic objection is that it is hard to see how it can be reconciled with other central claims that Gigerenzer and other evolutionary psychologists have made. If correct, the objection supports the conclusion that the experimental data do not show that people make systematic reasoning errors. But in numerous papers, Gigerenzer and other evolutionary psychologists have claimed that our performance improves – that “cognitive illusions” disappear – when probabilistic reasoning tasks are reformulated as frequentist problems. This poses a problem. How could our performance on frequentist problems be superior to our performance on single event tasks unless there was something wrong with our performance on single event reasoning problems in the first place?
In order for performance on reasoning tasks to improve, it must surely be the case that people’s performance was problematic to begin with. In which case, in order for the claim that performance improves on frequentist tasks to be warranted, it must also be the case that we are justified in maintaining that performance was problematic on nonfrequentist reasoning tasks.

Ad hominem arguments aside, however, there is another problem with Gigerenzer’s general pragmatic objection. For unless we are extremely careful, the objection will dissolve into little more than a vague worry about the possibility of pragmatic explanations of experimental data on human reasoning. Of course, it’s possible that pragmatic factors explain the data from reasoning experiments. But the objection does not provide any evidence for the claim that such factors actually account for patterns of reasoning. Nor, for that matter, does it provide an explanation of how pragmatic factors explain performance on reasoning tasks. Unless this is done, however, pragmatic objections to heuristics and biases research will be of only marginal interest.

This is not to say, however, that no pragmatic explanations of results from the heuristics and biases experiments have been proposed. One of the most carefully developed objections of this kind comes from Adler’s discussion of the “feminist bank teller” experiment (Adler, 1984). Pace Kahneman and Tversky, Adler denies that the results of this experiment support the claim that humans commit a systematic reasoning error – the conjunction fallacy. Instead he argues that Gricean principles of conversational implicature explain why subjects tend to make the apparent error of ranking (h) (Linda is a bank teller and is active in the feminist movement) as more probable than (f) (Linda is a bank teller). In brief, Gricean pragmatics incorporates a maxim of relevance – a principle to the effect that an utterance should be assumed to be relevant in the specific linguistic context in which it is expressed. In the context of the “feminist bank teller” experiment, this means that if people behave as the Gricean theory predicts, they should interpret the task of saying whether or not (h) is more probable than (f) in such a way that the description of Linda is relevant. But if subjects interpret the task in the manner intended by heuristics and biases researchers, such that (1) the term “probable” functions according to the principles of probability theory, (2) (h) has the logical form (A∧B), and (3) (f) has the form A, then the description of Linda is not relevant to determining which of (h) and (f) is more probable. On this interpretation, the judgment that (f) is more probable than (h) is merely a specific instance of the mathematical truth that for any A and any B, P(A) ≥ P(A&B). Assuming that the class of bank tellers is not empty, no contingent information about Linda – including the description provided – is relevant to solving the task at hand. So, if subjects in the experiment behave like good Griceans, then they ought to reject the experimenter’s preferred interpretation of the task in favor of some alternative on which the description of Linda is relevant. For example, they might construe (f) as meaning that Linda is a bank teller who is not a feminist. But when interpreted in this fashion, it need not be the case that (f) is more probable than (h).
Indeed, given the description of Linda, it is surely more probable that Linda is a feminist bank teller than that she is a bank teller who’s not a feminist. Thus, according to Adler, people do not violate the conjunction rule, but provide the correct answer to the question as they interpret it. Moreover, that they interpret it in this manner is explained by the fact that they are doing what a Gricean theory of pragmatics says that they should. On this view, then, the data from the “feminist bank teller” problem does not support the claim that we make systematic reasoning errors; it merely supports the independently plausible claim that we accord with a maxim of relevance when interpreting utterances.

On the face of it, Adler’s explanation of the “feminist bank teller” experiment is extremely plausible. Nevertheless, we doubt that it is a decisive objection to the claim that subjects violate the conjunction rule in this experiment. First, the most plausible suggestion for how people might interpret the task so as to make the description of Linda relevant – i.e. interpret (f) as meaning “Linda is a bank teller who is not a feminist” – has been controlled for by Tversky and Kahneman, and it seems that it makes no difference to whether or not the conjunction effect occurs (Tversky and Kahneman, 1983, pp. 95-6). Thus some alternative account of how the task is interpreted by subjects needs to be provided, and it is far from clear what the alternative might be. Second, Adler’s explanation of the conjunction effect raises a puzzle about why subjects perform so much better on “frequentist” versions of the “feminist bank teller” problem (section 4.2). This is because Gricean principles of conversational implicature appear to treat the single event and frequentist versions of the problem in precisely the same manner. According to Adler, in the original single event experiment the description of Linda is irrelevant to ordering (h) and (f). In the frequentist version of the task, however, the description of Linda is also irrelevant to deciding whether more people are feminist bank tellers than feminists. Thus Adler’s proposal appears to predict that the conjunction effect will also occur in the frequentist version of the “feminist bank teller” problem. But this is, of course, precisely what does not happen. Though this doesn’t show that Adler’s explanation of the results from the single event task is beyond repair, it does suggest that it can only be part of the story. What needs to be added is an explanation of why people exhibit the conjunction effect in the single event version of the task but not in the frequentist version.

Finally, it is worth stressing that although the pragmatic explanations provided by Adler and others are of genuine interest, there are currently only a very small number of heuristics and biases experiments for which such explanations have been provided. So, even if these explanations satisfactorily accounted for the results from some of the experiments, there would remain lots of results that are as yet unaccounted for in terms of pragmatic factors. Thus, as a response to the pessimistic interpretation, the pragmatic strategy is insufficiently general.

7. Objections based on problems with the interpretation and application of the Standard Picture

Another sort of challenge to the pessimistic interpretation focuses on the problem of how to interpret the principles of the Standard Picture and how to apply them to specific reasoning tasks.
According to this objection, many of the putative flaws in human reasoning turn on the way that the experimenters propose to understand and apply these normative principles. In the present section, we discuss three versions of this challenge. The first claims that there are almost invariably lots of equally correct ways to apply Standard Picture norms to a specific reasoning problem. The second concerns the claim that advocates of the pessimistic interpretation tend to adopt specific and highly contentious interpretations of certain normative principles – in particular, the principles of probability theory. The third objection is what we call the derivation problem – the problem of explaining how normative principles are derived from such formal systems as logic, probability theory and decision-making theory.

7.1 On the multiple application of Standard Picture principles

When interpreting data from an experiment on reasoning, advocates of the pessimistic interpretation typically assume that there is a single best way of applying the norms of the Standard Picture to the experimental task. But opponents of the pessimistic interpretation have argued that this is not always the case. Gigerenzer (forthcoming), for example, argues that there are usually several different and equally legitimate ways in which the principles of statistics and probability can be applied to a given problem, and that these can yield different answers – or in some cases no answer at all. If this is correct, then obviously we cannot conclude that subjects are being irrational simply because they do not give the answer that the experimenters prefer.

There are, we think, some cases where Gigerenzer’s contention is very plausible. One example of this sort can be found in the experiments on base rate neglect. (See section 2.3.) As Gigerenzer and others have argued, in order to draw the conclusion that people are violating Bayesian normative principles in these studies, one must assume that the prior probability assignments which subjects make are identical to the base-rates specified by the experimenters. But as Koehler observes:

This assumption may not be reasonable in either the laboratory or the real world. Because they refer to subjective states of belief, prior probabilities may be influenced by base rates and any other information available to the decision-maker prior to the presentation of additional evidence. Thus, prior probabilities may be informed by base rates, but they need not be the same. (Koehler, 1996)

If this is right, and we think it is, then it is a genuine empirical possibility that subjects are not violating Bayes’ rule in these experiments but are merely assigning different prior probabilities from those that the experimenters expect. Nevertheless, we doubt that all (or even most) of the experiments discussed by advocates of the heuristics and biases program are subject to this sort of problem. So, for example, in the “feminist bank teller” problem, there is, as far as we can see, only one plausible way to apply the norms of probability theory to the task. Similarly, it is implausible to think that one might respond to “framing effect” experiments by claiming that there are many ways in which the Standard Picture might be applied.
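Koehler’s point is easy to make concrete: if a subject’s prior probability is merely informed by the stated base rate rather than identical to it, Bayes’ rule licenses a different posterior, and a “deviant” answer need not be a violation of the rule. A minimal sketch (ours; the numbers are invented):

# Bayes' rule applied with two different priors (our illustration;
# the numbers are invented). A subject whose prior is informed by,
# but not identical to, the stated base rate can reach a different
# posterior without violating Bayes' rule.

def posterior(prior, likelihood, false_alarm_rate):
    return (likelihood * prior) / (
        likelihood * prior + false_alarm_rate * (1 - prior))

stated_base_rate = 0.001   # the base rate given in the problem
subjects_prior = 0.01      # a prior merely informed by that base rate

for p in (stated_base_rate, subjects_prior):
    print(f"prior = {p:.3f}  ->  posterior = {posterior(p, 1.0, 0.05):.3f}")
# prior = 0.001  ->  posterior = 0.020
# prior = 0.010  ->  posterior = 0.168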
7.2 On the rejection of non-frequentist interpretations of probability theory

Another way in which the pessimistic interpretation has been challenged proceeds from the observation that the principles of the Standard Picture are subject to different interpretations. Moreover, depending on how we interpret them, their scope of application will be different, and hence experimental results that might, on one interpretation, count as a violation of the principles of the Standard Picture will not count as a violation on some other interpretation. This kind of objection has been most fully discussed in connection with probability theory, where there has been a long-standing disagreement over how to interpret the probability calculus.

In brief, Kahneman, Tversky and their followers insist that probability theory can be meaningfully applied to single events and hence that judgments about single events (e.g. Jack being an engineer or Linda being a bank teller) can violate probability theory. They also typically adopt a “subjectivist” or “Bayesian” account of probability which permits the assignment of probabilities to single events (Kahneman and Tversky, 1996). In contrast, Gigerenzer has urged that probability theory ought to be given a frequentist interpretation, according to which probabilities are construed as relative frequencies of events in one class relative to events in another. As Gigerenzer points out, on the “frequentist view, one cannot speak of a probability unless a reference class is defined.” (Gigerenzer 1993, 292-293) So, for example, “the relative frequency of an event such as death is only defined with respect to a reference class such as ‘all male pub-owners fifty-years old living in Bavaria’.” (ibid.) One consequence of this that Gigerenzer is particularly keen to stress is that, according to frequentism, it makes no sense to assign probabilities to single events. Claims about the probability of a single event are literally meaningless: for a frequentist, “the term ‘probability’, when it refers to a single event, has no meaning at all for us” (Gigerenzer 1991a, 88). Moreover, Gigerenzer maintains that because of this “a strict frequentist” would argue that “the laws of probability are about frequencies and not about single events” and, hence, that “no judgment about single events can violate probability theory” (Gigerenzer 1993, 292-293).

This disagreement over the interpretation of probability raises complex and important questions in the foundations of statistics and decision theory about the scope and limits of our formal treatment of probability. The dispute between frequentists and subjectivists has been a central debate in the foundations of probability for much of the twentieth century (von Mises 1957; Savage 1972). Needless to say, a satisfactory treatment of these issues is beyond the scope of the present paper. But we would like to comment briefly on what we take to be the central role that issues about the interpretation of probability theory play in the dispute between evolutionary psychologists and proponents of the heuristics and biases program. In particular, we will argue that Gigerenzer’s use of frequentist considerations in this debate is deeply problematic.

As we have seen, Gigerenzer argues that if frequentism is true, then statements about the probability of single events are meaningless and, hence, that judgments about single events cannot violate probability theory (Gigerenzer 1993, 292-293). Gigerenzer clearly thinks that this conclusion can be put to work in order to dismantle part of the evidential base for the claim that human judgments and reasoning mechanisms violate appropriate norms.
Both evolutionary psychologists and advocates of the heuristics and biases tradition typically view probability theory as the source of appropriate normative constraints on probabilistic reasoning. And if frequentism is true, then no probabilistic judgments about single events will be normatively problematic (by this standard) since they will not violate probability theory. In which case Gigerenzer gets to exclude all experimental results involving judgments about single events as evidence for the existence of normatively problematic probabilistic judgments and reasoning mechanisms.

On the face of it, Gigerenzer’s strategy seems quite persuasive. Nevertheless we think that it is subject to serious objections. Frequentism itself is a hotly contested view, but even if we grant, for argument’s sake, that frequentism is correct, there are still serious grounds for concern. First, there is a serious tension between the claim that subjects don’t make errors in reasoning about single events because single event judgments do not violate the principles of probability theory (under a frequentist interpretation) and the claim – which, as we saw in section 4, is frequently made by evolutionary psychologists – that human probabilistic reasoning improves when we are presented with frequentist rather than single event problems. If there were nothing wrong with our reasoning about single event probabilities, then how could we improve – or do better – when performing frequentist reasoning tasks? As far as we can tell, this makes little sense. In which case, irrespective of whether or not frequentism is correct as an interpretation of probability theory, evolutionary psychologists cannot comfortably maintain both (a) that we don’t violate appropriate norms of rationality when reasoning about the probabilities of single events and (b) that reasoning improves when single event problems are converted into a frequentist format.

A second and perhaps more serious problem with Gigerenzer’s use of frequentist considerations is that it is very plausible to maintain that even if statements about the probabilities of single events really are meaningless and hence do not violate the probability calculus, subjects are still guilty of making some sort of error when they deal with problems about single events. For if, as Gigerenzer would have us believe, judgments about the probabilities of single events are meaningless, then surely the correct answer to a (putative) problem about the probability of a single event is not some numerical value or rank ordering, but rather: “Huh?” or “That’s utter nonsense!” or “What on earth are you talking about?” Consider an analogous case in which you are asked a question like: “Is Linda taller than?” or “How much taller than is Linda?” Obviously these questions are nonsense because they are incomplete. In order to answer them we must be told what the other relatum of the “taller than” relation is supposed to be. Unless this is done, answering “yes” or “no” or providing a numerical value would surely be normatively inappropriate. Now according to the frequentist, the question “What is the probability that Linda is a bank teller?” is nonsense for much the same reason that “Is Linda taller than?” is. So when subjects answer the single event probability question by providing a number they are doing something that is clearly normatively inappropriate. The normatively appropriate answer is “Huh?”, not “Less than 10 percent”.
It might be suggested that the answers that subjects provide in experiments involving single event probabilities are an artifact of the demand characteristics of the experimental context. Subjects (one might claim) know, if only implicitly, that single event probabilities are meaningless. But because they are presented with forced choice problems that require a probabilistic judgment, they end up giving silly answers. Thus one might think the take-home message is “Don’t blame the subject for giving a silly answer. Blame the experimenter for putting the subject in a silly situation in the first place!” But this proposal is implausible for two reasons. First, as a matter of fact, ordinary people use judgments about single event probabilities in all sorts of circumstances outside of the psychologist’s laboratory. So it is implausible to think that they view single event probabilities as meaningless. But, second, even if subjects really did think that single event probabilities were meaningless, presumably we should expect them to provide more or less random answers and not the sorts of systematic responses that are observed in the psychological literature. Again, consider the comparison with the question “Is Linda taller than?” It would be a truly stunning result if everyone who was pressured to respond said “Yes.”

7.3 The “Derivation” Problem

According to the Standard Picture, normative principles of reasoning are derived from formal systems such as probability theory, logic and decision theory. But this idea is not without its problems. Indeed a number of prominent epistemologists have argued that it is sufficiently problematic to warrant the rejection of the Standard Picture (Harman, 1983; Goldman, 1986).

One obvious problem is that there is a wide range of formal theories which make incompatible claims, and it’s far from clear how we should decide which of these theories are the ones from which normative principles of reasoning ought to be derived. So, for example, in the domain of deductive logic there is first order predicate calculus, intuitionistic logic, relevance logic, fuzzy logic, paraconsistent logic and so on (Haack, 1978, 1996; Priest et al., 1989; Anderson et al., 1992). Similarly, in the probabilistic domain there are, in addition to the standard probability calculus represented by the Kolmogorov axioms, various nonstandard theories, such as causal probability theory and Baconian probability theory (Nozick, 1993; Cohen, 1989).

Second, even if we set aside the problem of selecting formal systems and assume that there is some class of canonical theories from which normative standards ought to be derived, it is still unclear how and in what sense norms can be derived from these theories. Presumably they are not derived in the sense of being logically implied by the formal theories (Goldman, 1986). The axioms and theorems of the probability calculus do not, for example, logically imply that we should reason in accord with them. Rather they merely state truths about probability – e.g. P(a) ≥ 0. Nor are normative principles “probabilistically implied” by formal theories. It is simply not the case that formal theories make it probable that we ought to reason in accord with their principles. But if normative principles of reasoning are not logically or probabilistically derivable from formal theories, then in what sense are they derivable?
A related problem with the Standard Picture is that even if normative principles of reasoning are in some sense derivable from formal theories, it is far from clear that the principles so derived would be correct. In order to illustrate this point, consider an argument endorsed by Harman (1986) and Goldman (1986) which purports to show that correct principles of reasoning cannot be derived from formal logic, because the fact that our current beliefs entail (by a principle of logic) some further proposition doesn’t always mean that we should believe the entailed proposition. Here’s how Goldman develops the idea:

Suppose p is entailed by q, and S already believes q. Does it follow that S ought to believe p: or even that he may believe p? Not at all… Perhaps what he ought to do, upon noting that q entails p, is abandon his belief in q! After all, sometimes we learn things that make it advisable to abandon prior beliefs. (Goldman, 1986, p. 83)

Thus, according to Goldman, not only are there problems with trying to characterize the sense in which normative principles are derivable from formal theories; even if they were derivable in some sense, “the rules so derived would be wrong” (Goldman, 1986, p. 81).

How might an advocate of the Standard Picture respond to this problem? One natural suggestion is that normative principles are derivable modulo the adoption of some schema for converting the rules, axioms and theorems of formal systems into normative principles of reasoning – i.e. a set of rewrite or conversion rules. So, for example, one might adopt the following (fragment of a) conversion schema:

- Prefix all sentences in the formal language with the expression “S believes that”
- Convert all instances of “cannot” to “S is not permitted to”

Given these rules we can rewrite the conjunction rule – It cannot be the case that P(A) is less than P(A&B) – as the normative principle: S is not permitted to believe that P(A) is less than P(A&B). This proposal suggests a sense in which normative principles are derivable from formal theories – a normative principle of reasoning is what one gets from applying a set of conversion rules to a statement in a formal system. Moreover, it also suggests a response to the Goldman objection outlined above. Goldman’s argument purports to show that the principles of reasoning “derived” from a formal logic are problematic because it’s simply not the case that we ought always to accept the logical consequences of the beliefs that we hold. But once we adopt the suggestion that it is the conjunction of a formal system and a set of conversion rules that permits the derivation of a normative principle, it should be clear that this kind of argument is insufficiently general to warrant the rejection of the idea that normative principles are derived from formal theories, since there may be some conversion schemas which do not yield the consequence that Goldman finds problematic. Suppose, for example, that we adopt a set of conversion rules that permit us to rewrite modus ponens as the following principle of inference: If S believes that P and S believes that (If P then Q), then S should not believe that not-Q. Such a principle does not commit us to believing the logical consequence of the beliefs that P and (If P then Q) but only requires us to avoid believing the negation of what they entail. So it evades Goldman’s objection.
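Stated this way, a conversion schema is mechanical enough to be implemented as a rewrite procedure. The following toy sketch (ours; it collapses the two rules above into a single rewrite for the prohibition case) turns the conjunction rule into the normative principle just cited:

# A toy implementation of the conversion-schema idea (our
# illustration). A statement of a formal system is rewritten as a
# candidate normative principle of reasoning.

REWRITES = [
    # Prohibitions: "It cannot be the case that X" becomes
    # "S is not permitted to believe that X".
    ("It cannot be the case that", "S is not permitted to believe that"),
]

def convert(statement: str) -> str:
    for old, new in REWRITES:
        if statement.startswith(old):
            return new + statement[len(old):]
    # Default rule: prefix plain statements with "S believes that".
    return "S believes that " + statement

print(convert("It cannot be the case that P(A) is less than P(A&B)"))
# -> S is not permitted to believe that P(A) is less than P(A&B)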
Nevertheless, although the introduction of conversion rules enables us to address the objections outlined above, it also raises problems of its own. In particular, it requires advocates of the Standard Picture to furnish us with an account of the correct conversion schema for rewriting formal rules as normative principles. Until such a schema is presented, the normative theory of reasoning which they purport to defend is profoundly underspecified. Moreover – and this is the crucial point – there are clearly indefinitely many rules that one might propose for rewriting formal statements as normative principles. This poses a dilemma for the defenders of the Standard Picture: Either they must propose a principled way of selecting conversion schemas or else face the prospect of an indefinitely large number of “standard pictures,” each one consisting of the class of formal theories conjoined to one specific conversion schema. The second of these options strikes us as unpalatable. But we strongly suspect that the former will be very hard to attain. Indeed, we suspect that many would be inclined to think that the problem is sufficiently serious to suggest that the Standard Picture ought to be rejected.

8. Rejecting the Standard Picture: The Consequentialist Challenge

We’ve been considering responses to the pessimistic interpretation that assume the Standard Picture is, at least in broad outline, the correct approach to normative theorizing about rationality. But although this conception of normative standards is well entrenched in certain areas of the social sciences, it is not without its critics. Moreover, if there are good reasons to reject it, then it may be the case that we have grounds for rejecting the pessimistic interpretation as well, since the argument from experimental data to the pessimistic interpretation almost invariably assumes the Standard Picture as a normative benchmark against which our reasoning should be evaluated. In this section, we consider two objections to the Standard Picture. The first challenges the deontological conception of rationality implicit in the Standard Picture. The second focuses on the fact that the Standard Picture fails to take into consideration the considerable resource limitations to which human beings are subject. Both objections are developed with an eye to the fact that deontology is not the only available approach to normative theorizing about rationality.

8.1 Why be a deontologist?

According to the Standard Picture, what it is to be rational is to reason in accord with principles derived from formal theories, and where we fail to reason in this manner our cognitive processes are, at least to that extent, irrational. As Piattelli-Palmarini puts it:

The universal principles of logic, arithmetic, and probability calculus... tell us what we should... think, not what we in fact think... If our intuition does in fact lead us to results incompatible with logic, we conclude that our intuition is at fault. (Piattelli-Palmarini, p. 158)

Implicit in this account of rationality is, of course, a general view about normative standards that is sometimes called deontology. According to the deontologist, what it is to reason correctly – what’s constitutive of good reasoning – is to reason in accord with some appropriate set of rules or principles. However, deontology is not the only conception of rationality that one might endorse. Another prominent view, which is often called consequentialism, maintains that what it is to reason correctly is to reason in such a way that you are likely to attain certain goals or outcomes.
Consequentialists are not rule-averse: they do not claim that rules have no role to play in normative theories of reasoning. Rather they maintain that reasoning in accordance with some set of rules is not constitutive of good reasoning (Foley, 1993). Though the application of rules of reasoning may be a means to the attainment of certain ends, what’s constitutive of a rational reasoning process, on this view, is being an effective means of achieving some goal or range of goals. So, for example, according to one well-known form of consequentialism – reliabilism – a good reasoning process is one that tends to lead to true beliefs and the avoidance of false ones (Goldman, 1986; Nozick, 1993). Another form of consequentialism – which we might call pragmatism – maintains that what it is for a reasoning process to be a good one is for it to be an efficient means of attaining the pragmatic objective of satisfying one’s personal goals and desires (Stich, 1990; Baron, 1994).

With the above distinction between consequentialism and deontology in hand, it should be clear that one way to challenge the Standard Picture is to reject deontology in favor of consequentialism. But on what grounds might such a rejection be defended? Though these are complex issues that require more careful treatment than we can afford here, one consideration that might be invoked concerns the value of good reasoning. If issues about rationality and the quality of our reasoning are worth worrying about, it is presumably because whether or not we reason correctly really matters. This suggests what is surely a plausible desideratum on any normative theory of reasoning:

The Value Condition: A normative theory of reasoning should provide us with a vindication of rationality. It should explain why reasoning in a normatively correct fashion matters – why good reasoning is desirable.

It would seem that the consequentialist is at a distinct advantage when it comes to satisfying this desideratum. In constructing a consequentialist theory of reasoning we proceed by first identifying the goals or ends – the cognitive goods – of good reasoning (Kitcher, 1992). So, for example, if the attainment of personal goals or the acquisition of true beliefs are of value, then they can be specified as being among the goods that we aim to obtain. Having specified the appropriate ends, in order to complete the project, one needs to specify methods or processes that permit us to efficiently obtain these ends. The consequentialist approach to normative theorizing thus furnishes us with a clear explanation of why good reasoning matters: Good reasoning is reasoning that tends to result in the possession of things that we value.

In contrast to the consequentialist, it is far from clear how the deontologist should address the Value Condition. The reason is that it is far from clear why we should be concerned at all with reasoning according to some set of prespecified normative principles. The claim that we are concerned to accord with such principles just for the sake of doing so seems implausible. Moreover, any appeal by the deontologist to the consequences of reasoning in a rational manner appears merely to highlight the superiority of consequentialism.
Since deontologists claim that reasoning in accord with some set of rules R is constitutive of good reasoning, they are committed to the claim that a person who reasons in accordance with R is reasoning correctly even if there are more efficient ways – even better available ways – to attain the desirable ends. In other words, if there are contexts in which according with R is not the most efficient means of achieving the desirable ends, the deontologist is still committed to saying that it would be irrational to pursue a more efficient reasoning strategy for attaining these ends. And this poses a number of problems for the deontologist. First, since it's presumably more desirable to attain desirable ends than merely to accord with R, it's very hard indeed to see how the deontologist could explain why, in this context, being rational is more valuable than not being rational. Second, the claim that rationality can mandate that we avoid efficient means of attaining desirable ends seems deeply counterintuitive. Moreover, in contrast to the deontological conception of rationality, consequentialism seems to capture the correct intuition, namely that we should not be rationally required to accord with reasoning principles in contexts where they are ineffective as means to attaining the desirable goals. Finally, the fact that we are inclined to endorse this view suggests that we primarily value principles of reasoning only to the extent that they enable us to acquire desirable goals. It is, in short, rationality in the consequentialist's sense that really matters to us.

One possible response to this challenge would be to deny that there are any (possible) contexts in which the rules specified by the deontological theory are not the most efficient way of attaining the desirable ends. Consider, for example, the claim, endorsed by advocates of the Standard Picture, that what it is to make decisions rationally is to reason in accord with the principles of decision theory. If it were the case that decision theory is also the most efficient possible method for satisfying one's desires, then there would never be a context in which the theory would demand that you avoid using the most efficient method of reasoning for attaining desire-satisfaction. Moreover, the distinction between a pragmatic version of consequentialism and the deontological view under discussion would collapse. They would be little more than notational variants. But what sort of argument might be developed in support of the claim that decision theory is the most efficient means of satisfying our desires and personal goals? One interesting line of reasoning suggested by Baron (1994) is that decision-theoretic principles specify the best method of achieving one's personal, pragmatic goals because a system that always reasons in accordance with these principles is guaranteed to maximize subjective expected utility – i.e. the subjective probability of satisfying its desires. But if this is so, then utilizing such rules provides, in the long run, the most likely way of satisfying one's goals and desires (Baron, 1994, pp. 319-20). Though perhaps initially plausible, this argument relies heavily on an assumption that has so far been left unarticulated, namely that in evaluating a normative theory we should ignore the various resource limitations to which reasoners are subject.
To use Goldman's term, it assumes that normative standards are resource-independent; that they abstract away from issues about the resources available to cognitive systems. This brings us to our second objection to the Standard Picture: It ignores the resource limitations of human reasoners, or what Cherniak calls our finitary predicament (Cherniak, 1983).

8.2 The Finitary Predicament: Resource-Relative Standards of Reasoning

Over the past thirty years or so there has been increasing dissatisfaction with resource-independent criteria of rationality. Actual human reasoners suffer, of course, from a wide array of resource limitations. We are subject to limitations of time, energy, computational power, memory, attention and information. And starting with Herbert Simon's seminal work in the 1950s (Simon, 1957), it has become increasingly common for theorists to insist that these limitations ought to be taken into consideration when deciding which normative standard(s) of reasoning to adopt. What this requires is that normative theories be relativized to specific kinds of cognitive systems with specific resource limitations – that we adopt a resource-relative or bounded conception of rationality as opposed to a resource-independent or unbounded one (Goldman, 1986; Simon, 1957). But why adopt such a conception of normative standards? Moreover, what implications does the adoption of such a view have for what we've been calling the normative and evaluative projects?

8.2.1. Resource-Relativity and the Normative Project

Though a number of objections have been leveled against resource-independent conceptions of rationality, perhaps the most commonly invoked – and to our minds most plausible – relies on endorsing some version of an ought-implies-can principle (OIC-principle). The rough idea is that just as in ethical matters our obligations are constrained by what we can do, so too in matters epistemic we are not obliged to satisfy standards that are beyond our capacities (Kitcher, 1992). That is: If we cannot do A, then it is not the case that we ought to do A. The adoption of such a principle, however, appears to require the rejection of the resource-independent conception of normative standards in favor of a resource-relative one. After all, it is clearly not the case that all actual and possible cognizers are able to perform the same reasoning tasks. Human beings do not have the same capacities as God or a Laplacian demon, and other (actual or possible) beings – e.g. great apes – may well have reasoning capacities that fall far short of those possessed by ordinary humans. In which case, if ought implies can, then there may be normative standards that one kind of being is obliged to satisfy where another is not. The adoption of an epistemic OIC-principle thus requires the rejection of resource-independent standards in favor of resource-relative ones.

Suppose for the moment that we accept this argument for resource-relativity. What implications does it have for what we are calling the normative project – the project of specifying how we ought to reason? One implication is that it undercuts some prominent arguments in favor of adopting the normative criteria embodied in the Standard Picture. In 8.1, for example, we outlined Baron's argument for the claim that decision theory is a normative standard because in the long run it provides the most likely way of satisfying one's goals and desires.
Once we adopt a resource-relative conception of normative standards, however, it is far from clear that such an argument should be taken seriously. In the present context, "long run" means in the limit – as we approach infinite duration. But as Keynes famously observed, in the long run we will all be dead. The fact that a method of decision-making or reasoning will make it more probable that we satisfy certain goals in the long run is of little practical value to finite beings like ourselves. On a resource-relative conception of normative standards, we are concerned only with what reasoners ought to do given the resources that they possess. And infinite time is surely not one of these resources.

A second consequence of endorsing the above argument for resource-relativity is that it provides us with a prima facie plausible objection to the Standard Picture itself. If ought implies can, we are not obliged to reason in ways that we cannot. But the Standard Picture appears to require us to perform reasoning tasks that are far beyond our abilities. For instance, it seems to be a principle of the Standard Picture that we ought to preserve the truth-functional consistency of our beliefs. As Cherniak (1983) and others have argued, however, given even a conservative estimate of the number of beliefs we possess, this is a computationally intractable task – one that we cannot perform (Cherniak, 1983; Stich, 1990). Similar arguments have been developed against the claim, often associated with the Standard Picture, that we ought to revise our beliefs in such a way as to ensure probabilistic coherence. Once more, complexity considerations strongly suggest that we cannot satisfy this standard (Osherson, 1996). And if we cannot satisfy the norms of the Standard Picture, then given that ought implies can, it follows that the Standard Picture is not the correct account of the norms of rationality.

Suppose, further, that we combine a commitment to the resource-relative conception of normative standards with the kind of consequentialism discussed in 8.1. This seems to have an important implication for how we think about normative standards of rationality. In particular, it requires that we deny that normative principles of reasoning are universal in two important senses. First, we are forced to deny that rules of good reasoning are universal in the sense that the same class of rules ought to be employed by all actual and possible reasoners. Rather, rules of reasoning will only be normatively correct relative to a specific kind of cognizer. According to the consequentialist, good reasoning consists in deploying efficient cognitive processes in order to achieve certain desirable goals – e.g. true belief or desire-satisfaction. The adoption of resource-relative consequentialism does not require that the goals of good reasoning be relativized to different classes of reasoners. A reliabilist can happily maintain, for example, that acquiring true beliefs and avoiding false ones is always the goal of good reasoning. Resource-relativity does force us, however, to concede that a set of rules or processes for achieving this end may be normatively appropriate for one class of organisms and not for another. After all, the rules or processes might be an efficient means of achieving the goal (e.g. true belief) for one kind of organism but not for the other.
This, of course, is in stark contrast to the Standard Picture, which maintains that the same class of rules is the normatively correct one irrespective of the cognitive resources available to the cognizer. Thus, resource-relativity undermines one important sense in which the Standard Picture characterizes normative reasoning principles as universal, namely that they apply to all reasoners.

The adoption of resource-relative consequentialism also requires us to relativize our evaluations to specific ranges of environments. Suppose, for example, we adopt a resource-relative form of reliabilism. We will then need to specify the kind of environment relative to which the evaluation is being made in order to determine if a reasoning process is a normatively appropriate one. This is because, for various reasons, different environments can affect the efficiency of a reasoning process. First, different environments afford reasoners different kinds of information. To use an example we've already encountered, some environments might only contain probabilistic information that is encoded in the form of frequencies, while others may contain probabilistic information in a nonfrequentist format. And presumably it is a genuine empirical possibility that such a difference can affect the efficiency of a reasoning process. Similarly, different environments may impose different time constraints. In some environments there might be lots of time for a cognizer to execute a given reasoning procedure, while in others there may be insufficient time. Again, it is extremely plausible to maintain that this will affect the efficiency of a reasoning process in attaining such goals as acquiring true beliefs or satisfying personal goals. The adoption of a resource-relative form of consequentialism thus requires that we reject the assumption that the same standards of good reasoning apply in all environments – that they are context-invariant.

8.2.2. Resource-Relativity and the Evaluative Project

We've seen that the adoption of a resource-relative conception of normative standards, by itself or in conjunction with the adoption of consequentialism, has some important implications for the normative project. But what ramifications does it have for the evaluative project – for the issue of how good our reasoning is? Specifically, does it have any implications for the pessimistic interpretation?

First, does resource-relativity entail that the pessimistic interpretation is false? The short answer is clearly no. This is because it is perfectly compatible with resource-relativity that we fail to reason as we ought to. Indeed, the adoption of a resource-relative form of consequentialism is entirely consistent with the pessimistic interpretation, since even if such a view is correct, we might fail to satisfy the normative standards that we ought to. But perhaps the adoption of resource-relativity implies – either by itself or in conjunction with consequentialism – that the experimental evidence from heuristics and biases studies fails to support the pessimistic interpretation? Again, this strikes us as implausible. If the arguments outlined in 8.2.1 are sound, then we are not obliged to satisfy certain principles of the Standard Picture – e.g. the maintenance of truth-functional consistency – since it is beyond our capacities to do so. However, it does not follow from this that we ought never to satisfy any of the principles of the Standard Picture.
Nor does it follow that we ought not to satisfy them on the sorts of problems that heuristics and biases researchers present to their subjects. Satisfying the conjunction rule in the "feminist bank teller" problem, for example, clearly is not an impossible task for us to perform. In which case, the adoption of a resource-relative conception of normative standards does not show that the experimental data fails to support the pessimistic interpretation.

Nevertheless, we do think that the adoption of a resource-relative form of consequentialism renders it extremely difficult to see whether or not our reasoning processes are counter-normative in character. Once such a conception of normative standards is adopted, we are no longer in a position to confidently invoke familiar formal principles as benchmarks of good reasoning. Instead we must address a complex fabric of broadly conceptual and empirical issues in order to determine what the relevant standards are relative to which the quality of our reasoning should be evaluated. One such issue concerns the fact that we need to specify various parameters – e.g. the set of reasoners and the environmental range – before the standard can be applied. And it's far from clear how these parameters ought to be set or if, indeed, there is any principled way of deciding how this should be done.

Consider, for example, the problem of specifying the range of environments relative to which normative evaluations are made. What range of environments should this be? Clearly there is a wide range of options. So, for instance, we might be concerned with how we perform in "ancestral environments" – the environments in which our evolutionary ancestors lived (Tooby and Cosmides, 1998). Alternatively, we might be concerned with all possible environments in which humans might find themselves – including the experimental conditions under which heuristics and biases research is conducted. Or we might be concerned to exclude "artificial" laboratory contexts and concern ourselves only with "ecologically valid" contexts. Similarly, we might restrict contemporary environments for some purposes to those in which certain (minimal) educational standards are met. Or we might include environments in which no education whatsoever is provided. And so on. In short: there are lots of ranges of environments relative to which evaluations may be relativized. Moreover, it is a genuine empirical possibility that our evaluations of reasoning processes will be substantially influenced by how we select the relevant environments.

But even once these parameters have been fixed – even once we've specified the environmental range, for example – it still remains unclear what rules or processes we ought to deploy in our reasoning. And this is because, as mentioned earlier, it is largely an empirical issue which methods will prove to be efficient means of attaining normative ends for beings like us within a particular range of environments. Though the exploration of this empirical issue is still very much in its infancy, it is the focus of what we think is some of the most exciting contemporary research on reasoning. Most notably, Gigerenzer and his colleagues are currently exploring the effectiveness of certain reasoning methods which they call fast and frugal algorithms (Gigerenzer et al., 1999).
As the name suggests, these reasoning processes are intended to be both speedy and computationally inexpensive and, hence, unlike the traditional methods associated with the Standard Picture, easily utilized by human beings. Nevertheless, Gigerenzer and his colleagues have been able to show that, in spite of their frugality, these algorithms are extremely reliable at performing some reasoning tasks within certain environmental ranges. Indeed, they are often able to outperform computationally expensive methods such as bayesian reasoning or statistical regression (Gigerenzer et al., 1999). If we adopt a resource-relative form of consequentialism, it becomes a genuine empirical possibility that fast and frugal methods will turn out to be the normatively appropriate ones – the ones against which our own performance ought to be judged (Bishop, forthcoming).

The central goal of this paper has been to consider the nature and plausibility of the pessimistic view of human rationality often associated with the heuristics and biases tradition. We started by describing some of the more disquieting results from the experimental literature on human reasoning and explaining how these results have been taken to support the pessimistic interpretation. We then focused, in the remainder of the paper, on a range of recent and influential objections to this view that have come from psychology, linguistics and philosophy. First, we considered the evolutionary psychological proposal that human beings possess many specialized reasoning modules, some of which have access to normatively appropriate reasoning competences. We noted that although this view is not at present highly confirmed, it is nevertheless worth taking very seriously indeed. Moreover, we argued that if the evolutionary psychological account of reasoning is correct, then we have good reason to reject one version of the pessimistic interpretation but not the version that most advocates of the heuristics and biases program typically endorse – the thesis that human beings make competence errors. Second, we considered a cluster of pragmatic objections to the pessimistic interpretation. These objections focus on the role of pragmatic, linguistic factors in experimental contexts and maintain that much of the putative evidence for the pessimistic view can be explained by reference to facts about how subjects interpret the tasks that they are asked to perform. We argued that although there is much to be said for exploring the pragmatics of reasoning experiments, the explanations that have been developed so far are not without their problems. Further, we maintained that they fail to accommodate most of the currently available data on human reasoning and thus constitute an insufficiently general response to the pessimistic view. Next, we turned our attention to objections which focus on the paired problems of interpreting and applying Standard Picture norms. We considered three such objections and suggested that they may well be sufficient to warrant considering alternatives to the Standard Picture. With this in mind, in section 8, we concluded by focusing on objections to the Standard Picture that motivate the adoption of a consequentialist account of rationality. In our view, the adoption of consequentialism does not imply that the pessimistic interpretation is false, but it does make the task of evaluating this bleak view of human rationality an extremely difficult one.
Indeed, if consequentialism is correct, we are surely a long way from being able to provide a definite answer to the central question posed by the evaluative project: We are, in other words, still unable to determine the extent to which human beings are rational.

(University of Pennsylvania)

Adler, J. (1984). Abstraction is uncooperative. Journal for the Theory of Social Behavior, 14, 165-181.
Anderson, A., Belnap, N. and Dunn, M. (eds.) (1992). Entailment: The Logic of Relevance and Necessity. Princeton: Princeton University Press.
Barkow, J. (1992). Beneath new culture is old psychology: Gossip and social stratification. In Barkow, Cosmides and Tooby (1992), 627-637.
Barkow, J., Cosmides, L., and Tooby, J. (eds.) (1992). The Adapted Mind: Evolutionary Psychology and the Generation of Culture. Oxford: Oxford University Press.
Baron, J. (1994). Thinking and Deciding. Second edition. Cambridge: Cambridge University Press.
Baron-Cohen, S. (1995). Mindblindness: An Essay on Autism and Theory of Mind. Cambridge, MA: MIT Press.
Bishop, M. (forthcoming). In praise of epistemic irresponsibility: How lazy and ignorant can you be? In M. Bishop, R. Samuels and S. Stich (eds.), "Perspectives on Rationality," special issue of Synthese.
Bower, B. (1996). Rational mind design: Research into the ecology of thought treads on contested terrain. Science News, 150, 24-25.
Carey, S. and Spelke, E. (1994). Domain-specific knowledge and conceptual change. In Hirschfeld and Gelman (1994), 169-200.
Carruthers, P. and Smith, P. K. (1996). Theories of Theories of Mind. Cambridge: Cambridge University Press.
Casscells, W., Schoenberger, A. and Grayboys, T. (1978). Interpretation by physicians of clinical laboratory results. New England Journal of Medicine, 299, 999-1000.
Cheng, P. and Holyoak, K. (1985). Pragmatic reasoning schemas. Cognitive Psychology, 17, 391-416.
Cheng, P. and Holyoak, K. (1989). On the natural selection of reasoning theories. Cognition, 33, 285-313.
Cheng, P., Holyoak, K., Nisbett, R., and Oliver, L. (1986). Pragmatic versus syntactic approaches to training deductive reasoning. Cognitive Psychology, 18, 293-328.
Cherniak, C. (1986). Minimal Rationality. Cambridge, MA: MIT Press.
Chomsky, N. (1965). Aspects of the Theory of Syntax. Cambridge, MA: MIT Press.
Chomsky, N. (1975). Reflections on Language. New York: Pantheon Books.
Chomsky, N. (1980). Rules and Representations. New York: Columbia University Press.
Chomsky, N. (1988). Language and Problems of Knowledge: The Managua Lectures. Cambridge, MA: MIT Press.
Cohen, L. (1981). Can human irrationality be experimentally demonstrated? Behavioral and Brain Sciences, 4, 317-370.
Cohen, L. (1989). An Introduction to the Philosophy of Induction and Probability. Oxford: Clarendon Press.
Cosmides, L. (1989). The logic of social exchange: Has natural selection shaped how humans reason? Studies with the Wason selection task. Cognition, 31, 187-276.
Cosmides, L. and Tooby, J. (1992). Cognitive adaptations for social exchange. In Barkow, Cosmides and Tooby (1992), 163-228.
Cosmides, L. and Tooby, J. (1994). Origins of domain specificity: The evolution of functional organization. In Hirschfeld and Gelman (1994), 85-116.
Cosmides, L. and Tooby, J. (1996). Are humans good intuitive statisticians after all? Rethinking some conclusions from the literature on judgment under uncertainty. Cognition, 58, 1-73.
Cummins, D. (1996). Evidence for the innateness of deontic reasoning. Mind and Language, 11, 160-190.
Dawes, R. M. (1988). Rational Choice in an Uncertain World. San Diego: Harcourt.
Dehaene, S. (1997). The Number Sense: How the Mind Creates Mathematics.
Oxford: Oxford University Press.
Evans, J. S., Newstead, S. E. and Byrne, R. M. (1993). Human Reasoning: The Psychology of Deduction. Hove, England: Lawrence Erlbaum Associates.
Fiedler, K. (1988). The dependence of the conjunction fallacy on subtle linguistic factors. Psychological Research, 50, 123-129.
Fodor, J. (1983). The Modularity of Mind. Cambridge, MA: MIT Press.
Foley, R. (1993). Working Without a Net: A Study of Egocentric Epistemology. New York: Oxford University Press.
Gelman, S. and Brenneman, K. (1994). First principles can support both universal and culture-specific learning about number and music. In Hirschfeld and Gelman (1994), 369-387.
Gigerenzer, G. (1991a). How to make cognitive illusions disappear: Beyond 'heuristics and biases'. European Review of Social Psychology, 2, 83-115.
Gigerenzer, G. (1991b). On cognitive illusions and rationality. Poznan Studies in the Philosophy of the Sciences and the Humanities, 21, 225-249.
Gigerenzer, G. (1993). The bounded rationality of probabilistic models. In K. I. Manktelow and D. E. Over (eds.), Rationality: Psychological and Philosophical Perspectives. London: Routledge.
Gigerenzer, G. (1994). Why the distinction between single-event probabilities and frequencies is important for psychology (and vice versa). In G. Wright and P. Ayton (eds.), Subjective Probability. New York: John Wiley.
Gigerenzer, G. (1996). On narrow norms and vague heuristics: A reply to Kahneman and Tversky (1996). Psychological Review, 103, 592-596.
Gigerenzer, G. (1998). Ecological intelligence: An adaptation for frequencies. In D. Cummins and C. Allen (eds.), The Evolution of Mind. New York: Oxford University Press.
Gigerenzer, G. (forthcoming). The Psychology of Rationality. New York: Oxford University Press.
Gigerenzer, G. and Hug, K. (1992). Domain-specific reasoning: Social contracts, cheating and perspective change. Cognition, 43, 127-171.
Gigerenzer, G. and Hoffrage, U. (1995). How to improve Bayesian reasoning without instruction: Frequency formats. Psychological Review, 102, 684-704.
Gigerenzer, G., Hoffrage, U., and Kleinbölting, H. (1991). Probabilistic mental models: A Brunswikean theory of confidence. Psychological Review, 98, 506-528.
Gigerenzer, G., Todd, P., and the ABC Research Group (1999). Simple Heuristics That Make Us Smart. New York: Oxford University Press.
Goldman, A. (1986). Epistemology and Cognition. Cambridge, MA: Harvard University Press.
Griggs, R. and Cox, J. (1982). The elusive thematic-materials effect in Wason's selection task. British Journal of Psychology, 73, 407-420.
Haack, S. (1978). Philosophy of Logics. Cambridge: Cambridge University Press.
Haack, S. (1996). Deviant Logic, Fuzzy Logic: Beyond Formalism. Chicago: University of Chicago Press.
Harman, G. (1983). Logic and probability theory versus canons of rationality. Behavioral and Brain Sciences, 6, 251.
Harman, G. (1986). Change of View. Cambridge, MA: MIT Press.
Hertwig, R. and Gigerenzer, G. (1994). The chain of reasoning in the conjunction task. Unpublished manuscript.
Hirschfeld, L. and Gelman, S. (1994). Mapping the Mind. Cambridge: Cambridge University Press.
Hutchins, E. (1980). Culture and Inference: A Trobriand Case Study. Cambridge, MA: Harvard University Press.
Jackendoff, R. (1992). Languages of the Mind. Cambridge, MA: MIT Press.
Kahneman, D., Slovic, P. and Tversky, A. (eds.) (1982). Judgment Under Uncertainty: Heuristics and Biases. Cambridge: Cambridge University Press.
Kahneman, D. and Tversky, A. (1973). On the psychology of prediction. Psychological Review, 80, 237-251. Reprinted in Kahneman, Slovic and Tversky (1982).
Kahneman, D. and Tversky, A. (1982). The psychology of preferences. Scientific American, 246(1), 160-173.
Kahneman, D. and Tversky, A. (1996). On the reality of cognitive illusions: A reply to Gigerenzer's critique. Psychological Review, 103, 582-591.
Kitcher, P. (1992). The naturalists return. The Philosophical Review, 101, 53-114.
Koehler, J. (1996). The base-rate fallacy reconsidered. Behavioral and Brain Sciences, 19, 1-53.
Leslie, A. (1994). ToMM, ToBY, and agency: Core architecture and domain specificity. In Hirschfeld and Gelman (1994), 119-148.
Lichtenstein, S., Fischhoff, B. and Phillips, L. (1982). Calibration of probabilities: The state of the art to 1980. In Kahneman, Slovic and Tversky (1982), 306-334.
Manktelow, K. and Over, D. (1995). Deontic reasoning. In S. Newstead and J. St. B. Evans (eds.), Perspectives on Thinking and Reasoning. Hillsdale, NJ: Erlbaum.
von Mises, R. (1957). Probability, Statistics and Truth. Second edition, prepared by Hilda Geiringer. New York: Macmillan.
Nisbett, R. and Ross, L. (1980). Human Inference: Strategies and Shortcomings of Social Judgment. Englewood Cliffs, NJ: Prentice-Hall.
Norenzayan, A., Nisbett, R. E., Smith, E. E. and Kim, B. J. (1999). Rules vs. Similarity as a Basis for Reasoning and Judgment in East and West. Ann Arbor: University of Michigan.
Nozick, R. (1993). The Nature of Rationality. Princeton: Princeton University Press.
Oaksford, M. and Chater, N. (1994). A rational analysis of the selection task as optimal data selection. Psychological Review, 101, 608-631.
Osherson, D. N. (1996). Judgement. In E. E. Smith and D. N. Osherson (eds.), Thinking: Invitation to Cognitive Science. Cambridge, MA: MIT Press.
Peng, K. and Nisbett, R. E. (in press). Culture, dialectics, and reasoning about contradiction. American Psychologist.
Piattelli-Palmarini, M. (1994). Inevitable Illusions: How Mistakes of Reason Rule Our Minds. New York: John Wiley & Sons.
Pinker, S. (1994). The Language Instinct. New York: William Morrow and Co.
Pinker, S. (1997). How the Mind Works. New York: W. W. Norton.
Plous, S. (1989). Thinking the unthinkable: The effects of anchoring on the likelihood of nuclear war. Journal of Applied Social Psychology, 19(1), 67-91.
Priest, G., Routley, R. and Norman, J. (eds.) (1989). Paraconsistent Logic: Essays on the Inconsistent. München: Philosophia Verlag.
Samuels, R. (1998). Evolutionary psychology and the massive modularity hypothesis. British Journal for the Philosophy of Science, 49, 575-602.
Samuels, R. (in press). Massively modular minds: Evolutionary psychology and cognitive architecture. In P. Carruthers (ed.), Evolution and the Human Mind. Cambridge: Cambridge University Press.
Samuels, R. (in preparation). Naturalism and normativity: Descriptive constraints on normative theories of rationality.
Samuels, R., Stich, S. and Bishop, M. (in press). Ending the rationality wars: How to make disputes about human rationality disappear. In R. Elio (ed.), Common Sense, Reasoning and Rationality, Vancouver Studies in Cognitive Science, Vol. 11. New York: Oxford University Press.
Savage, L. J. (1972). The Foundations of Statistics. London: J. Wiley.
Schwarz, N. (1996). Cognition and Communication: Judgmental Biases, Research Methods and the Logic of Conversation. Hillsdale, NJ: Erlbaum.
Segal, G. (1996). The modularity of theory of mind. In Carruthers and Smith (1996), 141-157.
Shallice, T. (1988). From Neuropsychology to Mental Structure. Cambridge: Cambridge University Press.
Simon, H. A. (1957). Models of Man: Social and Rational. New York: Wiley.
Slovic, P., Fischhoff, B., and Lichtenstein, S. (1976). Cognitive processes and societal risk taking. In J. S. Carol and J. W. Payne (eds.), Cognition and Social Behavior. Hillsdale, NJ: Erlbaum.
Sperber, D. (1994). The modularity of thought and the epidemiology of representations. In Hirschfeld and Gelman (1994), 39-67.
Sperber, D., Cara, F. and Girotto, V. (1995). Relevance theory explains the selection task. Cognition, 57, 31-95.
Stein, E. (1996). Without Good Reason. Oxford: Clarendon Press.
Stich, S. (1990). The Fragmentation of Reason. Cambridge, MA: MIT Press.
Sutherland, S. (1994). Irrationality: Why We Don't Think Straight! New Brunswick, NJ: Rutgers University Press.
Tooby, J. and Cosmides, L. (1995). Foreword. In Baron-Cohen (1995).
Tooby, J. and Cosmides, L. (1998). Ecological Rationality and the Multimodular Mind. Manuscript.
Trivers, R. (1971). The evolution of reciprocal altruism. Quarterly Review of Biology, 46, 35-56.
Tversky, A. and Kahneman, D. (1974). Judgment under uncertainty: Heuristics and biases. Science, 185, 1124-1131. Reprinted in Kahneman, Slovic and Tversky (1982).
Tversky, A. and Kahneman, D. (1983). Extensional versus intuitive reasoning: The conjunction fallacy in probability judgement. Psychological Review, 90, 293-315.
Wilson, M. and Daly, M. (1992). The man who mistook his wife for a chattel. In Barkow, Cosmides and Tooby (1992), 289-322.
Zagzebski, L. (1996). Virtues of the Mind: An Inquiry into the Nature of Virtue and the Ethical Foundations of Knowledge. New York: Cambridge University Press.

For detailed surveys of these results see Nisbett and Ross, 1980; Kahneman, Slovic and Tversky, 1982; Baron, 1994; Piattelli-Palmarini, 1994; Dawes, 1988; and Sutherland, 1994.

Plous (1989) replicated this finding with an experiment in which the subjects were asked to estimate the likelihood of a nuclear war – an issue which people are more likely to be familiar with and to care about. He also showed that certain kinds of mental operations – e.g. imagining the result of a nuclear war just before making your estimate – fail to influence the process by which the estimate is produced.

Though see Peng and Nisbett (in press) and Norenzayan et al. (1999) for some intriguing evidence for the claim that there are substantial inter-cultural differences in the reasoning of human beings.

Though at least one philosopher has argued that this appearance is deceptive. In an important and widely debated article, Cohen (1981) offers an account of what it is for reasoning rules to be normatively correct, and his account entails that a normal person's reasoning competence must be normatively correct. For discussion of Cohen's argument see Stich (1990, chapter 4) and Stein (1996, chapter 5).

Precisely what it is for a principle of reasoning to be derived from the rules of logic, probability theory and decision theory is far from clear, however. See section 7.3 for a brief discussion of this problem.

In a frequently cited passage, Kahneman and Tversky write: "In making predictions and judgments under uncertainty, people do not appear to follow the calculus of chance or the statistical theory of prediction. Instead, they rely on a limited number of heuristics which sometimes yield reasonable judgments and sometimes lead to severe and systematic errors." (1973, p. 237)
But this does not commit them to the claim that people do not follow the calculus of chance or the statistical theory of prediction because these are not part of their cognitive competence, and in a more recent paper they acknowledge that in some cases people are guided by the normatively appropriate rules (Kahneman and Tversky, 1996, p. 587). So presumably they do not think that people are simply ignorant of the appropriate rules, but only that they often do not exploit them when they should.

To say that a cognitive structure is domain-specific means (roughly) that it is dedicated to solving a restricted class of problems in a restricted domain. For instance, the claim that there is a domain-specific cognitive structure for vision implies that there are mental structures which are brought into play in the domain of visual processing and are not recruited in dealing with other cognitive tasks. By contrast, a cognitive structure that is domain-general is one that can be brought into play in a wide range of different domains.

It is important to note that the notion of a Darwinian module differs in important respects from other notions of modularity to be found in the literature. First, there are various characteristics that are deemed crucial to some prominent conceptions of modularity that are not incorporated into the notion of a Darwinian module. So, for example, unlike the notion of modularity invoked in Fodor (1983), evolutionary psychologists do not insist – though, of course, they permit the possibility – that modules are informationally encapsulated and, hence, have access to less than all the information available to the mind as a whole. Conversely, there are features of Darwinian modules that many modularity theorists do not incorporate into their account of modularity. For instance, unlike the notions of modularity employed by Chomsky and Fodor, a central feature of Darwinian modules is that they are adaptations produced by natural selection (Fodor, 1983; Chomsky, 1988). (For a useful account of the different notions of modularity see Segal, 1996. Also see Samuels, in press.)

Cosmides and Tooby call "the hypothesis that our inductive reasoning mechanisms were designed to operate on and to output frequency representations" the frequentist hypothesis (p. 21), and they give credit to Gerd Gigerenzer for first formulating the hypothesis. See, for example, Gigerenzer (1994, p. 142).

Cosmides and Tooby use 'bayesian' with a small 'b' to characterize any cognitive procedure that reliably produces answers that satisfy Bayes' rule.

This is the text used in Cosmides and Tooby's experiments E2-C1 and E3-C2.

In yet another version of the problem, Cosmides and Tooby explored whether an even greater percentage would give the correct bayesian answer if subjects were forced "to actively construct a concrete, visual frequentist representation of the information in the problem" (p. 34). On that version of the problem, 92% of subjects gave the correct bayesian response.

Still other hypotheses that purport to account for the content effects in selection tasks have been proposed by Oaksford and Chater (1994), Manktelow and Over (1995) and Sperber, Cara and Girotto (1995).

So, for example, Slovic, Fischhoff and Lichtenstein (1976, p. 174) claim that "It appears that people lack the correct programs for many important judgmental tasks….
We have not had the opportunity to evolve an intellect capable of dealing conceptually with uncertainty." Piattelli-Palmarini (1994) goes even further, maintaining that "we are … blind not only to the extremes of probability but also to intermediate probabilities" – from which one might well infer that we are simply blind about probabilities (Piattelli-Palmarini, 1994, p. 131).

See Samuels et al. (in press) for an extended defense of these claims.

For critiques of such arguments see Stich (1990) and Stein (1996).

Though, admittedly, Tversky and Kahneman's control experiment has a between-subjects design, in which (h) and (f) are not compared directly.

Schwarz (1996) has invoked a pragmatic explanation of base-rate neglect which is very similar to Adler's critique of the "feminist bank teller" problem and is subject to very similar problems. Sperber et al. (1995) have provided a pragmatic explanation of the data from the selection task.

This is assuming, of course, that (a) these principles apply at all (an issue we will address in section 7.2) and (b) people are not interpreting the problem in the manner suggested by Adler.

On occasion, Gigerenzer appears to claim not that frequentism is the correct interpretation of probability theory but that it is merely one of a number of legitimate interpretations. As far as we can tell, however, this makes no difference to the two objections we consider below.

Though we take consequentialism to be the main alternative to deontology, one might instead adopt a "virtue-based" approach to rationality. See, for example, Zagzebski (1996).

Though see Stich (1990) for a challenge to the assumption that truth is something we should care about.

And even if there is some intrinsic value to reasoning in accord with the deontologist's rules, it is surely plausible to claim that the value of attaining desirable ends is greater.

Actually, this argument depends on the additional assumption that one's subjective probabilities are well calibrated – that they correspond to the objective probabilities.

Though OIC-principles are widely accepted in epistemology, it is possible to challenge the way that they figure in the argument for resource-relativity. Moreover, there is a related problem of precisely which version(s) of this principle should be deployed in epistemic matters. In particular, it is unclear how the modal expression "can" should be interpreted. A detailed defense of the OIC-principle is, however, a long story that cannot be pursued here. See Samuels (in preparation) for a detailed discussion of these matters.

One example of a fast and frugal algorithm is what Gigerenzer et al. call the recognition heuristic. This is the rule that: If one of two objects is recognized and the other is not, then infer that the recognized object has the higher value (Gigerenzer et al., 1999). What Gigerenzer et al. have shown is that this very simple heuristic, when combined with an appropriate metric for assigning values to objects, can be remarkably accurate in solving various kinds of judgmental tasks. To take a simple example, they have shown that the recognition heuristic is an extremely reliable way of deciding which of two cities is the larger. For instance, by using the recognition heuristic a person who has never heard of Dortmund but has heard of Munich would be able to infer that Munich has the higher population, which happens to be correct. Current research suggests, however, that the value of this heuristic is not restricted to such 'toy' problems.
To take one particularly surprising example, there is some preliminary evidence which suggests that people with virtually no knowledge of the stock market, using the recognition heuristic, can perform at levels equal to or better than major investment companies!
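Since the recognition heuristic described above is fully mechanical, it can be stated as running code. The following is a minimal sketch of our own, not an implementation from Gigerenzer et al. (1999); the function name and the "recognized" set are illustrative assumptions, with the set standing in for an individual's partial knowledge of German cities.

# Minimal sketch of the recognition heuristic: if exactly one of two
# objects is recognized, infer that the recognized object has the
# higher criterion value (here, city population).
from typing import Optional

recognized = {"Munich", "Berlin", "Hamburg"}  # hypothetical partial knowledge

def recognition_heuristic(a: str, b: str) -> Optional[str]:
    """Return the object inferred to have the higher value, or None
    when the heuristic does not apply (both or neither recognized)."""
    a_known, b_known = a in recognized, b in recognized
    if a_known and not b_known:
        return a
    if b_known and not a_known:
        return b
    return None  # fall back on some other decision strategy

print(recognition_heuristic("Munich", "Dortmund"))  # -> Munich

The interesting empirical claim, of course, lies not in the code but in the environment: where recognition correlates with the criterion, so frugal a procedure can rival far more computationally expensive ones.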
Miller, Hugh (DNB00)

MILLER, HUGH (1802–1856), man of letters and geologist, son of Hugh Miller by his second wife Harriet, was born at Cromarty on 10 Oct. 1802. His father, who came of a long line of seafaring men of Scandinavian descent, was lost in the Moray Firth with his trading-sloop and all hands on 9 Nov. 1807. His mother was great-granddaughter of Donald Ross or Roy, a sage and seer of Celtic race long remembered in Ross-shire. As a child Hugh was a keen observer of nature and a collector of shells and stones, while he evinced much interest in literature. But when sent to the school of his native burgh he proved incorrigibly self-willed, and left it after a violent personal encounter with the dominie, on whom he revenged himself in some stinging verses. Wild and intractable, he formed his companions into a gang of rovers and orchard robbers; but at the same time he infected some of them with his own love of reading and rhyming, and edited a boyish 'Village Observer,' to which several of them contributed. At seventeen he was apprenticed to a stone-mason, abandoned his boyish frowardness, and became an excellent workman. His occupation gave his mind its scientific cast. He saw ripple-marks on the bed of his first quarry; and thus 'the necessity that had made him a quarrier taught him also to be a geologist.' On 11 Nov. 1822 his apprenticeship ceased and he became a journeyman mason. Miller thenceforth pursued his craft in different parts of the highlands and lowlands of Scotland, sometimes in towns – he was in Edinburgh in 1824-5 – oftener in the open country. Always observing, reflecting, and writing, he developed a strongly religious temperament, and devotion to the Christian faith became the determining principle of his life. He soon formed the acquaintance of persons of literary taste, among them Dr. Carruthers of the 'Inverness Courier,' and Alexander Stewart, minister of Cromarty. In 1829 he published 'Poems written in the Leisure Hours of a Journeyman Mason,' a volume that attracted the favourable attention of some distant critics, among them Leigh Hunt, but it lacked fire or facility, and he wisely abandoned poetry for prose. He contributed in 1829 'Letters on the Herring Fishery' to the 'Inverness Courier;' they were reprinted separately, and gave promise of much literary capacity. At thirty-two, in 1834, his reputation in his native town brought him an accountantship in the branch of the Commercial Bank recently established there. On 7 Jan. 1837 he married, after a long courtship, Lydia Falconer Fraser [see Miller, Lydia Falconer], a lady of great mental refinement. He showed some interest in his work at the bank by publishing 'Words of Warning to the People of Scotland,' in which he advocated the continuance of the one-pound-note circulation. But he made his first mark in literature in 1835, when he issued 'Scenes and Legends of the North of Scotland,' the traditions of his native Cromarty, and a little later he contributed largely to Mackay Wilson's 'Tales of the Borders.' But while he thoroughly studied the antiquities of his native town, he did not neglect the geological examination of the neighbouring country which he had begun as a stonemason's apprentice. Geology formed the subject of a chapter in his 'Scenes and Legends.' He explored the fossil fish-beds of the old red sandstone about Cromarty; and when Dr.
John Malcolmson and Professor Fleming of Aberdeen visited the town, he met them and discussed geological problems. He soon began to correspond with Murchison and Agassiz, and to collect the materials for a work on the 'Old Red Sandstone.'

Since 1834 Miller had been an intensely interested spectator of the attempts of the Church of Scotland to neutralise the effects of the law of patronage, and to secure to the Scottish people the right of freely electing their pastors. In May 1839 the House of Lords decided that the rights of patronage were 'inconsistent with the exercise of any volition on the part of the people, however expressed.' Miller and others saw that an ecclesiastical reform bill for Scotland was needful to restore the Scottish people's rights, and to rouse popular feeling on the question he published two powerful pamphlets, 'A Letter to Lord Brougham' and 'The Whiggism of the Old School,' 1839, in which he ably stated the popular view. In January 1840 he was offered by the leaders of his party – the non-intrusionists – the editorship of their new organ, the 'Witness,' a bi-weekly newspaper. He accepted the post with diffidence, but, once settled at the editorial desk in Edinburgh, he proved that he was in his right place. He impressed his personality on the paper, and it rapidly attained a very wide circulation. His leading articles, to which he devoted the utmost care, were invariably brilliant and convincing. The movement grew, and Miller's part in it was only second to that of Chalmers. Signatures to non-intrusion petitions increased fivefold. At the general election of 1841 all the Scottish parliamentary candidates, with a single exception, were advocating some popular modification of patronage. In 1843 the disruption came, and the free church, embracing two-thirds of the members of the church of Scotland, was established. In the free church, at the outset, Miller saw an opportunity for realising his ideal of a national church. The free church, reared alongside the establishment (which he at that time held with Chalmers to have become a 'moral nullity'), was to overshadow and absorb it without self-aggrandisement, and by pure moral force. 'The church of the future,' he insisted, 'must be missionary, not political.' But, to his sorrow, the free church, after the death of Chalmers, and under other leaders, abandoned, in his opinion, her high claims by identifying her position with that of a dissenting sect.

Throughout this exciting period science was Miller's relaxation. In 1840 his well-known book on 'The Old Red Sandstone, or New Walks in an Old Field,' appeared serially in the 'Witness,' and was republished in 1841, with remarkable figures of 'Old Red' fishes from his own pencil. By this work, wrote Buckland, geologists were astonished and delighted. They at once accorded to the old red sandstone, as a formation, an importance scarcely before recognised. His technical ichthyology was based on Agassiz's contemporary researches among the fishes of the 'Old Red,' but it contained important improvements, and the best part of the work was founded entirely on original observation. 'The more I study the fishes of the "Old Red,"' wrote Professor Huxley twenty years afterwards, 'the more I am struck with the patience and sagacity manifested in Hugh Miller's researches, and by the natural insight, which in his case seems to have supplied the place of special anatomical knowledge.'
His common sense gave him a grasp of the scientific method in palaeontology, while his imagination enabled him to pictorially restore ancient physical geographies. In 1845, broken down in health by excessive labour, he visited England, and his 'First Impressions of England and its People' appeared in 1846. In 1847 he published 'Footprints of the Creator, or the Asterolepis of Stromness.' This was a reply to the 'Vestiges of Creation,' and a contribution both to Christian apologetics and to palaeontology. Many of the fossils described were supplied to Miller by his friend, Robert Dick [q. v.] of Thurso. To the American edition Agassiz affixed a memoir of the writer. The doctrine of development Miller here held to be irreconcilable with the dogmas of Christianity. He argued for the miracle of creation versus the law of development, and set himself to prove that the earliest fossils, and more especially the fishes of the 'Old Red,' were as advanced of their kind as those that have lived since or that live now. In 1848 Miller contributed a geological section to McCrie's work on the Bass Rock, and in 1852 he published his autobiography, 'My Schools and Schoolmasters.' 'Truly I am glad,' wrote Thomas Carlyle to him of this work, 'to condense the bright but indistinct rumour labelled to me by your Name, for years past, into the ruddy-visaged, strong-boned, glowing Figure of a Man which I have got, and bid good speed to, with all my heart! You have, as you undertook to do, painted many things to us: scenes of life, scenes of Nature, which rarely come upon the canvas; and I will add, such Draughtsmen too are extremely uncommon in that and in other walks of painting. There is a right genial fire in the Book, everywhere nobly tempered down into peaceful, radical heat, which is very beautiful to see. Luminous, memorable; all wholesome, strong, and breezy, like the "Old Red Sandstone Mountains" in a sunny summer day.' Miller's last volume, which received its final corrections on the day of his death, 'The Testimony of the Rocks' (1857), mainly deals, like 'The Footprints,' with the borderland between science and religion. Miller took the six days of creation as synonymous with six periods, and sublimed them into representative visions of the progress of creation. 'Rightly understood,' says Miller, speaking of Genesis, 'I know not a single truth that militates against the minutest or least prominent of its details.'

In the meantime, in 1845, 'The Witness' became the joint property of Miller and his business partner, Robert Fairby, and its sentiments henceforth diverged from those held by the leaders of the free church. In politics Miller was an 'old whig,' or independent liberal – 'whig in principle, tory in feeling' – and his political independence gave, in the words of the 'Scotsman,' 'dignity and character to the newspaper press of Scotland.' In education he supported the national, not the sectarian, view, and favoured no such narrow restriction of subjects as some of his co-religionists adopted, and in 'Thoughts on the Education Question' (1850) he outlined a scheme now substantially law. Conscious of the growing power of the masses he advocated, besides education, a moderate extension of the franchise, the abolition of entail, and the curtailment of the game laws. He exposed and denounced the Sutherlandshire clearings and the intolerant refusal of sites to the free church, but he countenanced no vision of clearing the proprietors.
To chartism he was hostile, strikes he discouraged, and he accepted a poor law for Scotland with regret, deeming it to have been rendered necessary by the inefficiency of the old church administration of relief. Puritan in temper, he deemed Ireland in need of education and protestantism, and the grant to Maynooth he would gladly have seen converted into a grant to science.

In the words of Dr. John Brown, Miller was the 'inexorable taskmaster' of his own energies, and with characteristic tenacity he worked on at his newspaper or his books when he needed rest. The seeds of the 'stonemasons' disease' had been sown in his constitution in early manhood, and his frame was subsequently weakened by repeated attacks of inflammation of the lungs. Under the strain of bodily illness his intellect suddenly gave way, and on the night of 24 Dec. 1856 he died by his own hand. Miller's features were rugged, but his calm, grey eyes and pleasing smile softened their austerity. His voice was gentle. Not mixing much in general society, he reckoned himself a working man to the end, but he carried himself with much natural stateliness. There is an early calotype by D. O. Hill, which, though not very distinct in its lineaments, and certainly too aggressive in its expression, is more suggestive of Miller's strength of character than any other likeness. A portrait by Bonnar belongs to the family. A bust, by William Brodie, is in the National Portrait Gallery, Edinburgh.

Miller's chief works, other than those mentioned, are: 1. 'The Whiggism of the Old School, as exemplified by the Past History and Present Position of the Church of Scotland,' 1839. 2. 'Memoir of William Forsyth,' 1839. 3. 'The Two Parties in the Church of Scotland exhibited as Missionary and Anti-missionary,' 1841. 4. 'Scenes and Legends of the North of Scotland; or the Traditional History of Cromarty,' 1850. 5. 'The Fossiliferous Deposits of Scotland,' 1854. 6. 'Geology versus Astronomy; or the Conditions and the Periods; being a View of the Modifying Effects of Geologic Discovery on the Old Astronomic Inferences respecting the Plurality of Inhabited Worlds,' Glasgow. 7. 'Voices from the Rocks; or Proofs of the Existence of Man during the Paleozoic Period,' 1857. 8. 'The Cruise of the Betsy; or a Summer Ramble among the Fossiliferous Deposits of the Hebrides,' ed. by W. S. Symonds, 1858. 9. 'Essays,' ed. by P. Bayne, 1862. 10. 'Tales and Sketches,' ed. Mrs. Miller, 1863. 11. 'Edinburgh and its Neighbourhood, Geological and Historical,' ed. by Mrs. Miller, 1864.

[Life and Letters of Hugh Miller by Peter Bayne, 1871; Miller's My Schools and Schoolmasters; personal knowledge.]
As mentioned in my last blog post, I recently participated in a birds-of-a-feather (BOF) on semantic graph/database processing at Supercomputing 2010 (SC10). My general research interest is in high-performance computing (HPC) for the semantic web, so this BOF was a great fit. At the BOF, I very briefly made three suggestions to HPC researchers; in this blog post, I expand on and explain these suggestions. I welcome feedback, particularly from those in the semantic web community who have something to share with the supercomputing community.

1. There is a need for good benchmarks from an HPC perspective.

By "good," I primarily mean that the datasets and queries need to be realistic. In other words, the data should reflect data that occurs in the real world, and queries should reflect queries that would be posed by actual users or systems. By "HPC perspective," I mean that it needs to test strong scaling (change in time for fixed total dataset size and varying number of processors) and weak scaling (change in time for fixed dataset size per processor and varying number of processors). (A small sketch at the end of this post shows how these two efficiency figures would be computed from benchmark timings.)

The Lehigh University Benchmark (LUBM) has arguably been the most widely used benchmark, likely because it is one of the earliest benchmarks that provide a data generator and a standard set of queries. It is targeted towards inferencing. However, LUBM datasets are not only synthetic, but they are quite unrealistic. In addition to a uniform distribution of data, they suffer from other inadequacies like few links between universities and the use of a single, nonsensical phone number for every person ("xxx-xxx-xxxx"). Therefore, LUBM datasets do not provide a realistic data distribution and thus cannot test the ability of systems to handle realistic selectivity and skew. There is also the Berlin SPARQL Benchmark (BSBM), but it is "built around an e-commerce use case" and "illustrates the search and navigation pattern of a consumer looking for a product." From an HPC perspective, we will likely be more concerned with overall run-time of queries or reasoning processes (or whatever other interesting processes) rather than handling interaction with users. Finally, there is SP2Bench. This is perhaps the most useful benchmark for SPARQL benchmarking. It provides a data generator that mimics statistical properties of DBLP data, and it provides a set of sensible queries. Therefore, the dataset is more realistic than LUBM, and it is focused on SPARQL query (whereas LUBM focuses on reasoning).

However, there is still a need for a good reasoning benchmark from an HPC perspective. It's difficult to be more specific than that because providing such a benchmark is still very much an open research topic. Clearly there needs to be an ontology that uses features from various reasoning standards (e.g., RDFS, OWL) and a corresponding data generator. There should also be some way to verify validity of inferences based on certain entailments. Again, this is very much an open research topic, which is why I made the suggestion but have few answers myself.

2. Consider existing reasoning standards as starting points.

This may be the most controversial of my suggestions, but there is good reason for it. Recent history indicates that the reasoning standards continue to iteratively evolve based on the needs of the community. Consider RDFS (by which I mean RDFS entailment as defined in RDF Semantics). First of all, it is technically undecidable, but in a way that is trivial and easily overcome.
Secondly, few systems (in my experience) completely support inferences based on literal generalization, XML literals, and container-membership properties. Other rules, like “everything is a resource,” are generally trivial and uninteresting. More commonly, implementations align with a fragment of RDFS that I call RDFS Muñoz (originally termed the ρdf fragment), which essentially boils down to domains, ranges, subclasses, and subproperties. (A toy illustration of this fragment appears at the end of this post.) Perhaps Muñoz said it best: “Efficient processing of any kind of data relies on a compromise between the size of the data and the expressiveness of the language describing it. As we already pointed out, in the RDF case the size of the data to be processed will be enormous, as current developments show …. Hence, a program to make RDF processing scalable has to consider necessarily the compromise between complexity and expressiveness. Such a program amounts essentially to look for fragments of RDF with good behavior with respect to complexity of processing.” Consider also OWL 1. How many scalable systems completely support one of the OWL 1 fragments (Lite, DL, Full)? I cannot say for sure, but my impression from experience and feedback from others is that the cost of higher expressivity can often be too high in terms of performance, especially as you scale dataset size. Perhaps it is for this reason that OWL Horst (originally termed the pD* fragment) has gained popularity as (arguably) the most widely supported OWL fragment. Now there is OWL 2. OWL 2 RL (a fragment of OWL 2) is “inspired by description logic programs and pD* [OWL Horst]”. The SAOR paper from ISWC 2010 has already shown a subset of OWL 2 RL rules for which closure can be efficiently produced in parallel. So my point is this. Reasoning standards capture well-defined and understood fragments, but research and practice continue to explore subfragments that are suitable for certain problems, and as the subfragments become stable and gain popularity, they inspire future standards. It is an iterative process, so it is not necessary to become obsessed with fully complying with existing standards (unless that is actually necessary to meet your use case). It is probably more interesting to search for fragments of the standards that fit certain HPC paradigms. 3. Review the literature to reconsider approaches that were once considered less viable. This suggestion seems obvious. As an example, I recently did a literature review of parallel join processing, and one thing I noticed is that a majority of the literature is focused on shared-nothing architectures. In 1992, DeWitt and Gray stated: “A consensus on parallel and distributed database system architecture has emerged. This architecture is based on a shared-nothing hardware design ….” However, in 1996, Norman, Zurek, and Thanisch directly opposed (or reversed) the claim of DeWitt and Gray, saying: “We argue that shared-nothingness is no longer the consensus hardware architecture and that hardware resource sharing is a poor basis for categorising parallel DBMS software architectures if one wishes to compare the performance characteristics of parallel DBMS products.” The popularity of the shared-nothing paradigm was probably further fueled by the advent of inexpensive supercomputing by way of Beowulf clusters and Networks of Workstations (around the mid 90’s). However, many modern supercomputers provide shared-disk and shared-memory paradigms. 
The Blue Gene/L in our Computational Center for Nanotechnology Innovations (CCNI) is networked with a General Parallel File System (GPFS). Making use of GPFS, the Blue Gene/L could be considered shared-disk in a programmatic sense. The Cray XMT uses large shared memory. Rahm points out that a major advantage of shared-disk is its potential for truly dynamic load-balancing (a toy sketch of this idea also appears at the end of this post), so let's look back at some of the shared-disk and shared-memory research that has been done [12-15]. All of that just to say, a review of the literature is in order. Potential sources of inspiration include parallel databases, parallel graph algorithms, deductive databases, and graph databases. Ph.D. Student, Patroon Fellow Tetherless World Constellation Rensselaer Polytechnic Institute Guo, Pan, Heflin. LUBM: A Benchmark for OWL Knowledge Base Systems. JWS 2005. Bizer, Schultz. The Berlin SPARQL Benchmark. IJSWIS 2009. Schmidt, Hornung, Lausen, Pinkel. SP2Bench: A SPARQL Performance Benchmark. ICDE 2009. Weaver. Redefining the RDFS Closure to be Decidable. RDF Next Steps 2010. Muñoz, Pérez, Gutierrez. Simple and Efficient Minimal RDFS. JWS 2009. ter Horst. Completeness, decidability and complexity of entailment for RDF Schema and a semantic extension involving the OWL vocabulary. JWS 2005. Hogan, Pan, Polleres, Decker. SAOR: Template Rule Optimisations for Distributed Reasoning over 1 Billion Linked Data Triples. ISWC 2010. DeWitt, Gray. Parallel Database Systems: The Future of High Performance Database Systems. Communications of the ACM 1992. Norman, Zurek, Thanisch. Much Ado About Shared-Nothing. SIGMOD Record 1996. Rahm. Parallel Query Processing in Shared Disk Database Systems. SIGMOD Record 1993. Lu, Tan. Dynamic and Load-balanced Task-Oriented Database Query Processing in Parallel Systems. EDBT 1992. Märtens. Skew-Insensitive Join Processing in Shared-Disk Database Systems. IADT 1998. Moon, On, Cho. Performance of Dynamic Load Balanced Join Algorithms in Shared Disk Parallel Database Systems. Workshop on Future Trends of Distributed Computing Systems 1999.
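To make the strong/weak scaling terminology in suggestion 1 concrete, here is a minimal Python sketch of the usual figures of merit. The timing numbers are made up purely for illustration; only the formulas matter: strong-scaling speedup is T(1)/T(p) with efficiency speedup/p, while weak-scaling efficiency is simply T(1)/T(p), since the per-processor work is held fixed.

```python
# Figures of merit for strong and weak scaling.
# All timing numbers below are hypothetical, for illustration only.

strong_times = {1: 1000.0, 2: 520.0, 4: 270.0, 8: 150.0}  # fixed total size
weak_times = {1: 100.0, 2: 105.0, 4: 118.0, 8: 140.0}     # fixed size per proc

def strong_scaling(times):
    # speedup(p) = T(1)/T(p); efficiency(p) = speedup(p)/p (ideal: 1.0)
    t1 = times[1]
    return {p: (t1 / t, t1 / (t * p)) for p, t in sorted(times.items())}

def weak_scaling(times):
    # efficiency(p) = T(1)/T(p); ideal is a flat 1.0 across processor counts
    t1 = times[1]
    return {p: t1 / t for p, t in sorted(times.items())}

for p, (speedup, eff) in strong_scaling(strong_times).items():
    print(f"strong p={p}: speedup={speedup:.2f}, efficiency={eff:.2f}")
for p, eff in weak_scaling(weak_times).items():
    print(f"weak   p={p}: efficiency={eff:.2f}")
```

A benchmark that is good from an HPC perspective would report both curves on realistic data, not just a single throughput number.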
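As a companion to suggestion 2, here is a toy forward-chaining closure over the four rule patterns that the ρdf fragment boils down to (domains, ranges, subclasses, subproperties). This is only an illustration of the rule semantics on in-memory tuples, not any system's actual implementation, and the example vocabulary (ex:headTeacherOf and so on) is invented.

```python
# Toy fixed-point closure for the core rho-df rule patterns.
RDF_TYPE, SUB_CLASS = "rdf:type", "rdfs:subClassOf"
SUB_PROP, DOMAIN, RANGE = "rdfs:subPropertyOf", "rdfs:domain", "rdfs:range"

def rdfs_closure(triples):
    closure = set(triples)
    while True:
        new = set()
        for (s, p, o) in closure:
            for (s2, p2, o2) in closure:
                if p == SUB_CLASS and p2 == SUB_CLASS and o == s2:
                    new.add((s, SUB_CLASS, o2))   # subclass transitivity
                if p == RDF_TYPE and p2 == SUB_CLASS and o == s2:
                    new.add((s, RDF_TYPE, o2))    # type propagation
                if p == SUB_PROP and p2 == SUB_PROP and o == s2:
                    new.add((s, SUB_PROP, o2))    # subproperty transitivity
                if p2 == SUB_PROP and p == s2:
                    new.add((s, o2, o))           # subproperty propagation
                if p2 == DOMAIN and p == s2:
                    new.add((s, RDF_TYPE, o2))    # domain typing
                if p2 == RANGE and p == s2:
                    new.add((o, RDF_TYPE, o2))    # range typing
        if new <= closure:
            return closure
        closure |= new

data = {
    ("ex:alice", "ex:headTeacherOf", "ex:cs101"),  # invented example data
    ("ex:headTeacherOf", SUB_PROP, "ex:teaches"),
    ("ex:teaches", DOMAIN, "ex:Professor"),
    ("ex:Professor", SUB_CLASS, "ex:Person"),
}
for t in sorted(rdfs_closure(data)):
    print(t)  # derives: alice teaches cs101, is a Professor, is a Person
```

The naive nested loop is quadratic per pass, which is exactly why the scalable-closure question (rule ordering, partitioning, parallel fixed points) is interesting in the first place.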
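Finally, for suggestion 3, here is a toy sketch of the dynamic load-balancing advantage Rahm attributes to shared-disk designs: because any worker can see all of the data, join partitions can be handed to whichever worker becomes idle, instead of being bound to fixed nodes as in a shared-nothing layout. The relations and partition counts are invented, and Python threads are used only to show the structure (CPython's GIL means no real parallel speedup here).

```python
# Toy dynamically load-balanced partitioned hash join.
import queue
import threading
from collections import defaultdict

NUM_WORKERS, NUM_PARTITIONS = 4, 16

# Hypothetical relations R(key, a) and S(key, b).
R = [(k % 101, f"r{k}") for k in range(1000)]
S = [(k % 101, f"s{k}") for k in range(500)]

def partition(rel):
    parts = defaultdict(list)
    for key, val in rel:
        parts[hash(key) % NUM_PARTITIONS].append((key, val))
    return parts

def worker(tasks, results):
    while True:
        try:
            r_part, s_part = tasks.get_nowait()  # pull work when idle
        except queue.Empty:
            return
        built = defaultdict(list)                # build side: hash R partition
        for key, val in r_part:
            built[key].append(val)
        results.put([(key, rv, sv)               # probe side: scan S partition
                     for key, sv in s_part for rv in built.get(key, [])])

r_parts, s_parts = partition(R), partition(S)
tasks, results = queue.Queue(), queue.Queue()
for i in range(NUM_PARTITIONS):
    tasks.put((r_parts.get(i, []), s_parts.get(i, [])))
threads = [threading.Thread(target=worker, args=(tasks, results))
           for _ in range(NUM_WORKERS)]
for t in threads:
    t.start()
for t in threads:
    t.join()
joined = [row for _ in range(NUM_PARTITIONS) for row in results.get()]
print(f"{len(joined)} joined tuples")
```

The point of the sketch is the shared work queue: a skewed partition delays only the worker that happens to draw it, while the others keep pulling the remaining partitions, which is hard to arrange when data placement is fixed per node.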
January/February Adult Programming Scheduled If one of your New Year’s resolutions is to expand your educational horizons, Massanutten Regional Main Library has extensive adult programming lined up starting January 13. Judith Townsend Rocchiccioli has always had a passion for writing and believes writing is good for the soul. Rocchiccioli will be the visiting author on Monday, January 13, at 1 PM. She will discuss her three books, Chaos at Crescent City Medical Center, The Imposter, and Viral Intent, at the first January session and will advise individuals on how to publish and market their writing at the January 27 session. Rocchiccioli is a native Virginian who holds graduate and doctoral degrees from Virginia Commonwealth University and the University of Virginia. She has been a practicing clinical nurse for over 25 years and is currently a professor of Nursing at James Madison University. She is also the author of numerous academic and health-related articles and documents. Her novels are based on her clinical experiences and teaching in New Orleans and have received excellent reviews on Amazon. Assistant Director of the Furious Flower Poetry Center at James Madison University, Elizabeth Hoover, kicks off the February adult programming. Hoover, a poet herself, received an MFA in Creative Writing and an MA in English Literature at Indiana University. She is currently conducting archival research on poet Robert Hayden and will be writing a full-length biography of Hayden. She is also the author of three biographies for young adults. Hoover’s “Furious Flower: A Revolution in African American Poetry” presentation will be Monday, February 3, 1 PM, in the Grand Meeting Room of Main. Book for the ‘Burg @ Your Library is for all who have read or who would enjoy a discussion of the book Mountains Beyond Mountains: The Quest of Dr. Paul Farmer, a Man Who Would Cure the World by Pulitzer Prize-winning author Tracy Kidder. The book is about Paul Farmer, a Harvard-trained infectious disease specialist and anthropologist, his atypical childhood, and his work in rural Haiti and around the world. Books are available for checkout at the Massanutten Regional Library, the James Madison University Library (both in hardcopy and e-book), the Eastern Mennonite University Library, and through the JMU Office of Environmental Stewardship and Sustainability. One discussion of the Book for the ‘Burg will be held on Saturday, February 8, 1 PM, at Massanutten Regional Main Library. Author John A. Cassara will discuss his book, Demons of Gadara, at the February 10, 1 PM meeting. Formerly a covert intelligence officer during the Cold War and a Treasury Special Agent in the U.S. Secret Service and U.S. Customs Service, Cassara has turned to writing since his retirement from the government. Cassara spent his career investigating money laundering, trade fraud, and international smuggling, and developed expertise in Middle East money laundering. His final assignment was with the Treasury’s Office of Terrorism and Financial Intelligence. Keeping Up with Yesterday, a book by Professor Ruth M. Toliver about the historical and social issues of the Harrisonburg African American community, will be available for sale after the author’s formal presentation on February 24, 1 PM. Toliver, a retired D.C. 
public school teacher and a Harrisonburg native, also planned and helped design an exhibit addressing “Education in Harrisonburg during Segregated Times” for the Harrisonburg-Rockingham Historical Society, Dayton, Virginia. Events at Massanutten Regional Library are free and open to the public. For additional information and an extensive schedule of adult programming through April, contact Cheryl Metz, 434.4475, ext. 129, or Cheryl Griffith, 434.4475, ext. 121. Watch for more scheduled speakers through your local media or at www.mrlib.org.
I have found myself saying yes to things when I really should have said no. Sometimes I say yes because I don't want to let the person down. Sometimes I say yes because of ego. Sometimes I say yes because at first thought, it sounds wonderful! Sometimes I say yes because I really think I have the time. Sometimes I say yes because I really think I can do it all. Here's a great blog post from Tim Ferriss's blog (he's the author of The 4-Hour Workweek: Escape 9-5, Live Anywhere, and Join the New Rich). It talks about Edmund Wilson's Decline Letter and asks the question: How much more could you get done if you eliminated even one type of request? This is from Tim's blog-- Edmund Wilson, recipient of both the Presidential Medal of Freedom and the National Medal for Literature, was one of the most prominent social and literary critics of the 20th century. He realized, like most uber-productive people, that, while there were many behaviors needed to guarantee high output, there was one single behavior guaranteed to prevent all output: Trying to please everyone. He had a low tolerance for distraction and shunned undue public acclaim. To almost all inquiries, he would respond with the following list, putting a check mark next to what had been requested… Edmund Wilson regrets that it is impossible for him without compensation to: contribute to books or periodicals do editorial work judge literary contests make after-dinner speeches Under any circumstances to: contribute to or take part in symposiums take part in chain-poems or other collective compositions contribute manuscripts for sales donate copies of his books to libraries autograph books for strangers supply personal information about himself supply photographs of himself allow his name to be used on letter-heads receive unknown persons who have no apparent business with him. I thought it was pretty interesting and a little amusing. (Though there's a little tough-guy attitude in the no-autographing-books-for-strangers rule; I think that's bad manners myself.) A long time ago I said I'd say yes to the things that have to do with poetry and family. But I've expanded that; now I say yes to things that get me outside and moving, as well as good times with friends. And it's not that I wasn't seeing my friends, but my priorities were family and poetry. Now, with a little more time because I'm out of grad school, I have a little (a lot) more social time. So much of life comes down to our choices; intentional or not, we are constantly carving out the path in front of us. It's always interesting to see where we end up. * * *
Reviewed for JHISTORY by John Jenks of Dominican University In any history of British journalism, the name George Newnes usually surfaces alongside modifying phrases such as “one of the harbingers.” As the founder of Tit-Bits (1881) he was one of the pioneers of British New Journalism, a catch-all term that encompassed the shift toward a mass-circulation commercial press with an emphasis on drama, sensation, and shorter, disconnected news items. Newnes’ breakthrough made him a fortune and brought him a minor title, but left him, ultimately, as a footnote. Most historians genuflect toward Newnes, but pay more attention to the lives of his more flamboyant successors, such as Alfred Harmsworth (Lord Northcliffe, publisher of the Daily Mail). By contrast, Kate Jackson’s book, George Newnes and the New Journalism in Britain, 1880-1910: Culture and Profit, sets out to tell the story of Newnes’ rise and consolidation of a publishing empire that developed an array of magazines and newspapers with myriad ways of relating to their readers. Jackson undertakes an interdisciplinary work, combining the methods of literary criticism, media studies and history to analyze Newnes’ publications and their relationships to British journalism and culture. She goes far beyond the usual focus on Tit-Bits and discusses how Newnes’ other publications functioned as shapers, mirrors, and imagined communities for British culture. Readers hoping to discover how Newnes’ publications envisioned their audiences and their concerns, or how the dominant cultural anxieties and needs were exploited and addressed, will find a great deal of information here. Readers looking for information about Newnes as a journalist, publisher, or biographical subject will find a great deal less. Jackson acknowledges that the heavy emphasis on textual analysis was partially dictated by the lack of extensive archival documentation on Newnes. The records for Newnes’ company were put in deep storage during World War II and are now “impossible to locate or view” (p. 5). However, Jackson liberally mines works by contemporaries, or near-contemporaries, such as Hulda Friederichs, to help fill the gaps. (Friederichs published The Life of Sir George Newnes, which was based on Newnes’ own autobiographical notes, in 1911, one year after Newnes’ death [p. 2].) For historians of American journalism, this book could provide valuable comparisons, presented in an accessible way, to the turn-of-the-century American magazine revolution and its cultivation of consumer culture. It would be less useful for newspaper historians. The book itself is divided into three main sections: after a lengthy introduction, in which Jackson outlines methodological and historiographical issues and provides a brief biography of Newnes, she goes through early examples of the New Journalism, Newnes’ development of new formats, and his exploitation of niche markets in his last decade. She starts her march through the Newnes catalog with the path-breaking, plebeian Tit-Bits and The Million, then moves up to The Strand, Newnes’ successful upper-middle-class magazine. She covers his political involvement in The Westminster Gazette and his foray into imperialism and exoticism in The Wide World Magazine, then finishes with his search for niche markets in The Ladies’ Field and The Captain. Newnes’ methods and persona, she argues, are the threads that bind these disparate publications. And key to that was the New Journalism. 
For Newnes’ publications, especially his early efforts, the New Journalism meant briefer stories (the tit-bit—or “tidbit,” according to the Associated Press), sentimentality and drama, and a real effort to reach readers through technical innovations, promotions and contests, and a carefully calibrated public persona. Jackson argues that Newnes used the tools of New Journalism to create his publications as not only texts but also contexts: interactive guides for people to live their lives and build communities in times of flux (p. 20). This argument has problems: Newnes was heavily involved in Tit-Bits and The Strand, but was almost an absentee owner with The Ladies’ Field and The Captain. It would be safer to say that Newnes had pioneered certain developments in New Journalism, which then became general journalistic techniques that he and others continued to draw upon. Also, there could be more in this book about how other publishers might have influenced Newnes. As mentioned above, there is comparatively little biographical material in Jackson’s book, and that is generally drawn from published contemporary works. But the relevant details are as follows: Newnes was born in 1851, the son of a Derbyshire Congregational minister, and was educated at Congregational schools and the City of London School, where he was steeped in an atmosphere of Liberalism and Christianity. At one point he considered the ministry as a career. But his first vocation was sales. He signed on as an apprentice for a haberdasher, shifted to sales, and rose quickly in the business. Here he learned the techniques of persuasion and the needs and desires of the emerging British consumer. Earlier biographers had identified Newnes’ own interest in entertaining and informative snippets of information as the inspiration for Tit-Bits, which he launched in Manchester in 1881 (pp. 47-48). Other accounts have given credit to his wife for suggesting he turn his scrapbook hobby into a business. After the success of his early publishing ventures he devoted time and effort to the Liberal cause and to the life of a benevolent country gentleman at his North Devon summer home. Newnes himself was an ardent Liberal, joining the Liberal Club, serving as a Liberal MP from 1885 to 1895 and from 1900 to 1910, and heavily subsidizing the Liberal Westminster Gazette. In the North Devon town of Lynton, the site of his country home, he donated money for a new Town Hall, a Congregational church and other civic improvements. Jackson links this country-squire paternalism to his paternalistic management of his publications, going so far as to see him as an “editor-squire” (pp. 25-26). Newnes built his fortune in the 1880s, consolidated it and diversified into other publishing ventures in the 1890s, then squandered money in the 1900s in a number of ill-judged investments (p. 27). In this last decade of his life, Newnes became less and less visible in the operations of his empire (p. 207). Jackson also mentions that bad health, a drinking problem and a “failing mind” contributed to his problems, but gives very little additional information (p. 28). Newnes died broke in 1910. (To help readers interested in specific publications, the rest of the review is organized by publication.) The tit-bits themselves were snippets of information, short stories, pieces of advice, jokes, correspondence and advertisements presented in a 16-page weekly (p. 56). Newnes juiced up the appeal of the newspaper through a continual series of contests and promotions. 
One contest offered a suburban London “Tit-Bits villa” for the best short story submitted. But the best-known promotion was the Tit-Bits Insurance scheme, in which the survivors of any Tit-Bits reader killed in a railway accident, and found with a copy of the newspaper on the body, would be given 100 pounds. By 1891 there had been 36 cases. Tit-Bits quickly found a market, selling an average of 500,000 copies a week, and spawning imitators such as Answers and Pearson’s Weekly. The Million, Newnes’ other paper, was a minor variation on the Tit-Bits theme, with an added bonus of color illustrations. But when the novelty wore off and production costs mounted, Newnes shut it down (p. 84). Jackson situates Tit-Bits and The Million in the cultural atmosphere of late Victorian Britain, with rising consumerism and a growing lower-middle class hungry for self-improvement and knowledge. That knowledge was presented in bite-size morsels for the Tit-Bits reader, with no apologies and no shame. Many of the factoids reflected the fascination with statistics, efficiency and the paraphernalia of modern industrial life. Both Tit-Bits and The Million sought and found a sense of community with their readers, encouraging them to contribute items, participate in contests and come to the newspaper for advice. Newnes used innovative advertising and promotional techniques as well. This strategy kept sales high and developed an army of “loyal Tit-Bitites” who sent in items, participated in the contests, and bought the newspaper (p. 69). Newnes also was an active and public philanthropist, further advancing Tit-Bits’ communal cause. Jackson goes so far as to describe it as a “journalistic, discursive equivalent of a settlement house” (p. 75). Tit-Bits was the paper that made Newnes and also the one in which he was most heavily involved. His relations with the other publishing ventures were more distant. Unlike Tit-Bits, there was little interaction with the readers, and, unlike the Westminster Gazette, there was little Liberal reform. Instead, The Strand sought to reflect its target audience. As one former editor wrote, “‘for more than half a century it faithfully mirrored their tastes, prejudices and intellectual limitations’” (p. 88). But it did so in an engaging style that, Newnes wrote, had been inspired by the American magazines Harper’s and Scribner’s (p. 92). Jackson also speculates that he may have been influenced by the success of the British magazine Punch, which had developed a team of creative artists, writers and editors who worked in jovial collaboration (pp. 100-101). In the midst of turn-of-the-century anxieties, The Strand provided reassurance and stability in its selections of role-model profiles, illustrations and contributions (a mix of short stories and factual articles) from most of the literary middleweights of late Victorian Britain, from Arthur Conan Doyle (the Sherlock Holmes author) to P.G. Wodehouse. Newnes used the personal tone and various New Journalism techniques, such as the character sketch and celebrity profile, to create a respectable yet reasonably exciting tone for The Strand. In an era of conflict between commercialism and art, The Strand struck a middle ground (p. 92). Newnes was apparently involved in the management of The Strand, as general editor, but Jackson does not have much evidence connecting him to specific decisions or policies. 
True, Newnes wrote some articles and was selected as one of the magazine’s feature celebrities, but he was more important for the image he projected through his involvement. “His image – traveler, public benefactor, proprietor, celebrity – was a guarantee of social and professional respectability and commercial viability” (p. 103). Newnes’ involvement in The Westminster Gazette is usually written off as a play for political and journalistic respectability--subsidizing a high-toned serious newspaper--and the party obligation of a devoted Liberal. Writers such as Stephen Koss have generally taken that approach as they describe in detail the political importance of the Gazette and other turn-of-the-century party newspapers. But Jackson argues that Newnes imbued the Gazette with the spirit and practices of New Journalism, putting his stamp on yet another publication. Jackson points to The Westminster Gazette’s use of “the interview, the parliamentary sketch, the black-and-white illustration” as signs that the New Journalism of Tit-Bits and The Strand had been transported to the more rarefied neighborhood of Westminster (p. 134). Other characteristics, such as alliterative headlines, sporting news, literary contests, and celebrity profiles, were also signs, as was the printing of the newspaper on green-tinted paper to improve visibility (pp. 138-141). There is, however, little evidence that Newnes influenced the style of the paper other than by osmosis. He met with his editor once every two weeks and exercised the restraint of a “‘constitutional monarch’” in the running of the paper (p. 143). Newnes had a personal interest in exploration and the exotic. He had sponsored an Antarctic expedition in 1898, invested in Australian minerals, and in 1898 had published two accounts in The Strand of his own travels. The account of his trip to Egypt is related in detail (pp. 168-171). Newnes, of course, did not pioneer imperialistic themes in journalism and literature. There had been a long history of them (well described by Said in Culture and Imperialism), and in the 1890s some of the premier exponents were writers such as Rudyard Kipling and Rider Haggard. In addition, changes in communication technology made a publication like The Wide World Magazine a likely success. As Jackson explains, the reconceptualization of time and space, begun with the telegraph and continuing with the telephone, film and radio, stimulated the public imagination and created a market for the accessible exoticism that was at the heart of The Wide World Magazine’s appeal (p. 175). Jackson goes on to link The Wide World Magazine to the modernist sentiment of “a temporally thickened present” as seen in novels such as James Joyce’s Ulysses (p. 176). In The Wide World Magazine it is best expressed in the montage of the “Odds and Ends” section, in which readers could examine pictures and text from around the world collected on a single page (p. 177). The frequent use of maps, with highlights for the locations of that issue’s features, further heightened this sense (p. 178). Earlier magazines aimed at the middle classes had concentrated primarily on domestic concerns, but Ladies’ Field sought to guide and interact with women who had money and were eager to navigate the complicated social world of the late Victorian and Edwardian upper middle classes (p. 211). This new woman needed plenty of advice in a rapidly changing world (p. 212). But, increasingly, that advice focused not on Victorian social niceties but on new habits of consumption (p. 
217). The ideal woman became a shopper. This editorial focus reinforced the advertisements in general tone and with specific references, and led to the creation of the magazine as a seamless whole promoting a new culture of consumption (p. 217). Overall, the Ladies’ Field mixed the long-standing traditions of society journalism with the Newnesian innovations of correspondence columnists, intimate tones, competitions, heavy reliance on advertising, and the emphasis on short “tit-bits” of information (p. 233). Jackson frames The Captain against turn-of-the-century fears of national decline, and youth crime was seen as one of the symptoms of this decline. Up to the 1890s, juvenile delinquency and crime were seen as class-based problems of the poor and working class. But, to complicate things, in the 1890s the idea of an in-between period – adolescence as a distinct stage and a distinct incubator of misbehavior – became popular in British cultural discourse. Even middle-class youth could be vulnerable (pp. 240-241). Many of the youth magazines seemed to exacerbate those fears, highlighting lurid tales of violence with sensationalistic crimes and flouting of authority. The “penny dreadful” weekly magazines had been widely blamed for inciting “hooliganism” and other anti-social behavior. While Newnes was aiming at the same age demographic, he had a different approach. The Captain would be respectable, more expensive (a six-penny monthly) and aimed at the small public-school sector and the much larger group that sought to identify with and emulate that elite group. (Many of the letters and later reminiscences testify to the power that public school mythology had on working-class and lower-middle-class boys.) To this end it highlighted Muscular Christianity, imperialism, and a public school ethos of competition and good sportsmanship. Restlessness and violence would be directed outward, on the athletic field or in service of the empire, and not inward. Key to that was the highlighting of the athlete-hero, specifically C.B. Fry, who wrote a regular column for the paper. (Newnes later spun off Fry with his own sporting magazine, imaginatively named C.B. Fry’s Magazine.) The Captain’s stories and illustrations provided new role models for its readers: clean-minded sportsmen, muscular empire-builders and public school gentlemen. Articles giving career advice frequently made that explicit: “Men are wanted, and now… when we are all imbued with the ideas of imperialism and realizing as never before that the strength and greatness of our country lie in our colonies and now is the time to bring the lesson home” (p. 253). Sometimes Jackson asks thin biographical details to bear too much weight. In the introduction, Jackson links Newnes’ reading of James Fenimore Cooper in the late 1850s and early 1860s to Newnes’ editorial choices for The Wide World Magazine and The Captain in the early 1900s. Jackson suggests the connection, at least through juxtaposition (p. 21). Other links are more solid. Did his religious background keep him from developing the Weekly Dispatch as a sensationalistic newspaper? (p. 26) Jackson argues that it very well may have. Some other biographical details could have been used more skeptically. Although Newnes was clearly a generous and civic-minded man, Jackson may take too much at face value tributes by employees and residents of North Devon, where he had built his country house (pp. 22, 29). But when she approaches the Newnes publications with the tools of textual and cultural analyses, her conclusions are more convincing. 
Her introduction and notes indicate that she is well-grounded in the leading works of cultural theory, and she does a strong job of providing a detailed context for each of the publications, from anxieties about male adolescents in The Captain to the shift in the reading public during the 1880s that made Tit-Bits so appealing. Unfortunately, there are a few minor errors in the book. James Joyce, Virginia Woolf and other modern writers worked in the first quarter of the 20th century, not in the first quarter of the 19th century (p. 172). Harold Spender’s The Fire of Life was published in 1926, not 1859 (footnote, p. 141); Viscount Camrose did not spell his name Cambrose (p. 280), and renowned typographer Stanley Morison spelled his surname with just one r (p. 6). Finally, nearly two pages of the introduction are repeated verbatim in the conclusion (pp. 28-29, pp. 263-264). These errors do not, however, detract from the importance of this book. Jackson’s work makes a serious contribution to our knowledge of turn-of-the-century British magazine publishing and reading culture, and of how these Newnes publications served their readers. There’s just not as much about George Newnes. This book will be valuable to both American and British cultural historians, especially magazine historians. 1. Matthew Engel, Tickle the Public: One Hundred Years of the Popular Press (London: Gollancz, 1996), p. 54. 2. S. J. Taylor, The Great Outsiders: Northcliffe, Rothermere and the Daily Mail (London: Weidenfeld & Nicolson, 1996), p. 11. 3. Stephen Koss, The Rise and Fall of the Political Press in Britain (Chapel Hill: University of North Carolina Press, 1984). John Jenks is assistant professor of communication studies at Dominican University, River Forest, Ill. He received his doctorate in 2000 from the University of California—Berkeley. Copyright (c) 2003 by JHistory. All rights reserved. This work may be copied for non-profit educational uses if proper credit is given to the author and the list. For other permission related to JHistory book reviews, please contact the JHistory book reviews editor, Dr. Dane S. Claussen (firstname.lastname@example.org; Telephone: 412-392-3412; Fax: 412-392-3917). Published by JHistory (February 2003).
Some more about audiobooks today. I still remember my shock and dismay a couple of years ago when I clicked on to the New York Times book page and found an advertisement of a much younger, more handsome and vaguely Mediterranean-looking young man who oozed sex appeal as he looked out at me from the screen with headphones on his ears. “Why Read?” asked the caption. Surely this was the demise of Western Civilization as we knew it, to say nothing of being a poor marketing strategy for a newspaper industry increasingly casting about in vain for new readers. Nevertheless, it seems to me that audiobooks have developed a generally sexy and sophisticated cachet for literary types that other shorthand ways to literature typically lack. As an English professor, I’ve been intrigued lately that a number of colleagues around and about have told me they listen to audiobooks to “keep up on their reading.” To some degree I’ve always imagined this as a slightly more sophisticated version of “I never read the book, but I watched the movie,” which has itself been about on a par with reading SparkNotes. However, as I mentioned in yesterday’s post, another colleague recently took issue with my general despairing sense that the reading of literature, at least, is on the decline, no matter the degree to which students may now be reading interactively on the web. “Yes,” she said, “but what about audiobooks?” She went on to cite the growth in sales over the past few years as evidence that interest in literature may not be waning after all. My immediate response is a little bit like that of Scott Esposito over at Conversational Reading. In a post a couple of years ago Scott responded to an advocate of audiobooks with the following: Sorry Jim, but when you listen to a book on your iPod, you are no more reading that book than you are reading a baseball game when you listened to Vin Scully do play-by-play for the Dodgers. It gets worse: [Quoting Jim] But audio books, once seen as a kind of oral CliffsNotes for reading lightweights, have seduced members of a literate but busy crowd by allowing them to read while doing something else. Well, if you’re doing something else then you’re not really reading, now are you? Listen Jim, and all other audiobookphiles out there: If I can barely wrap my little mind around Vollmann while I’m holding the book right before my face and re-reading each sentence 5 times each, how in the hell am I going to understand it if some nitwit is reading it to me while I’m brewing a cappuchino on my at-home Krups unit? It’s not reading. It’s pretending that you give a damn about books when you really care so little about them that you’ll try to process them at the same time you’re scraping Pookie’s dog craps up off the sidewalk. I have to grin because Scott is usually so much more polite. Nevertheless, I cite Scott at length because viscerally, in the deepest reaches of my id, I am completely with him, and he said it better than I could anyway. However, it’s worth pausing over the question of audiobooks a little further. I don’t agree with one of Scott’s respondents over at if:book, who describes listening to audiobooks as a kind of reading. But it is an experience related to reading, and so it’s probably worth parsing what kind of experience audiobooks actually provide and how that experience fits in with our understanding of what reading really is. As I’ve said a couple of times, I think we lose sight of distinctions by having only one word, “reading,” that covers a host of activities. 
I don’t buy the notion that listening can be understood as the same activity as reading, though the if:book blog rightly points out the significance of audiobooks to the visually impaired. Indeed, one of my own colleagues has a visual disability and relies on audiobooks and other audio versions of printed texts to do his work. Even beyond these understandable exceptions, however, Scott’s definition of reading above privileges a particular model of deep reading that, in actual fact, is relatively recent in book history. Indeed, going back to the beginnings of writing and reading, what we find is that very few people read books at all. Most people listened to books/scrolls/papyri being read. The temple reader and the town crier are the originals of audiobooks and podcasts. In ancient Palestine, for instance, it’s estimated that even in so bibliocentric a culture as that of the Jewish people only 5 to 15% of the population could read at all, and the reading that went on often did not take the form of deep, intensive reading like that which Scott and I imagine when we think about what reading really is. Instead, much of the experience of reading came through ritual occasions in which scriptures would be read aloud as a part of worship. This is why biblical writers persistently call on people to “Hear the Word.” This model of reading persists in Jewish and Christian worship today, even when large numbers of the religious population are thoroughly literate. See Issachar Ryback’s “In Shul” for an interesting image from the history of Judaism. Indeed, in the history of writing and reading, listening to reading is more the norm than not if we merely count passing centuries. It wasn’t until the aftermath of the Reformation that the model for receiving texts became predominantly focused on the individual’s intense and silent engagement with the written word of the book. In this sense, we might say that the Hebrews of antiquity weren’t bibliocentric so much as logocentric—word-centered but not necessarily book-centered. Along these lines, the model of intense engagement—what scholars of book history call “intensive reading”—is only one historical model of how reading should occur. Many scholars in the early modern period used “book wheels” in order to have several books open in front of them at the same time. This is not exactly the same thing as the multi-tasking that Scott abhors in his post, and it’s not exactly internet hypertexting, but it is clearly not the singular absorption in a text that we’ve come to associate with the word “reading.” “Reading” is not just the all-encompassing absorption that I’ve come to treasure and long for in great novels and poems, or even in great and well-written arguments. Indeed, I judge books by whether they can provide this kind of experience. Nevertheless, “reading” is many things. But to recognize this is not exactly the same thing as saying “so what” to the slow ascendancy of audiobooks, and to the sense that books, if they are to be read at all, will be read as part of the great multi-tasking universe that we now must live in. Instead, I think we need to ask what good things have been gained by the forms of intensive reading that Scott and I and others in the cult of book lovers have come to affirm as the highest form of reading. What is lost or missing if a person or a culture becomes incapable of participating in this kind of reading? 
By the same token, we should ask what kinds of things are gained by audiobooks as a form of experience, even if I don’t want to call it a form of reading. I’ve spent some time recently browsing around Librivox.org, which I’ll probably blog about more extensively in a future post. It’s fair to say that a lot of it turns absolutely wonderful literature into mush, the equivalent of listening to your eight-year-old niece play Beethoven on the violin. On the other hand, it’s fair to say that some few of the readers on that service bring poetry alive for me in a way quite different from absorption in silence with the printed page. As I suggested the other day, I found Justin Brett’s renditions of Jabberwocky and Dover Beach, poems I mostly skim over when finding them in a book or on the web, absolutely thrilling, and I wanted to listen to everything I could possibly find that he had read. This raises a host of interesting questions for a later day. What is “literature”? Is it somehow the thing on the page, or is it more like music, something that exists independently of its graphic representation with pen and ink (or pixel and screen)? What is critical thinking and reading? I found myself thrilled by Brett’s reading, but frustrated that I couldn’t easily and in a single glance see how lines and stanzas fit together. I was, in some very real sense, at the mercy of the reader, no matter how much I loved his reading. This raises necessary questions about the relationship between reader and listener. Could we tolerate a culture in which, like those of the ancient Greeks and Hebrews, reading is for the elite few while the rest of us listen, or try to listen? At the mercy and good will of the literate elite—to say nothing of their abilities and deficiencies as oral interpreters of the works at hand.
Jesus the Nazarene and the Pharisees of Beit Shammai The Divine Mission to Bring the “Good News” to the Gentiles By the Seven Noahide Laws By Robert Mock M.D. Hillel said: Be You of the Disciples of Aaron, One who Loves Peace, Pursues Peace, Loves Mankind and Brings Them Nigh to the Torah. Avot 1:12 Over the last fifty years, scholarship on the Primitive (Early) Christian Church has emerged into a new reality. Most Orthodox Christians believe that Jesus was an isolated, nomadic preacher of the “Good News” who rose from a peasant family in Galilee: barely literate, never schooled in a rabbinic yeshiva, perhaps not even knowing Hebrew, yet one who confronted the hierarchy of the Jewish temple culture, was crucified by Rome as a royal aspirant to the throne of David, and was viewed as a rebel against the Caesar of Rome. We may now see that this is not an “evidence-based history” of Jesus the Nazarene. The Jesus that emerges is a fully engaged rabbi, identified with the Hasidim, the disciples of Hillel the Elder, and the Essene disciples of Menahem the Essene. He was an ultra-Orthodox, Torah-observant, Shabbat (Seventh-day Sabbath) worshipping, kosher-dieting, festival-observing Jew who worshipped in the temple of the Lord. Yahshua can now be seen as living a life to the highest standard of the Hasidim. As part of the entire Jewish religious and social culture, Yahshua (Jesus) was a halakhah-observant Jew who followed every “jot and tittle” of the Torah and taught a message of the “Good News of Salvation” that was fully part of the Mosaic traditions of the Torah at Sinai. Yet Jesus the Nazarene was not just an “episode of human history” divorced from Divine Revelation; He was an emissary of that Divine Revelation. Is it inconceivable that the Almighty One of Israel would send emissaries to this planet earth? This concept is accepted in Jewish history, theology, philosophy and mysticism. It is also amply evident in their sacred history. We remind ourselves of the emissaries that were sent to Abraham prior to the destruction of Sodom and Gomorrah, and of the archangels Michael (Daniel 10:13, 21, 12:1) and Gabriel (Daniel 8:16, 9:21, Luke 1:19, 26) who were sent to the Prophet Daniel and to Miriam the mother of Jesus the Nazarene. In critical moments of human history, the Divine One has chosen to invade our physical world, called by the sages “Malkhut,” and alter the course of history. What is known in Jewish mysticism, the study of the invisible spiritual worlds, is that each of these emissaries represented an “emanation” from the Ein Sof, the Unseen and Unknowable monotheistic God of Israel. According to sacred history, “Enoch walked with God” (Genesis 5:24) and talked with the emissaries and the God of the Spirits (Book of Enoch chapters 14-15, Book of Jasher chapter iii, Jubilees 4:16-26), and Elijah, the greatest of the Israelite prophets, was taken to the world of the Spirits in a Ma'aseh Merkavah, the Divine Chariot. There is nothing in Jewish mysticism or theology that forbids the idea that the God of the Universes would send an emissary such as His “only Begotten Son” to intervene in the course of Jewish history. When the Israelites escaped from Egypt, according to the Jewish sages, the children of Israel had sunk to the 49th level of Tuma (spiritual degradation). 
In their 49-day forced march to the mount called Sinai, their spiritual education on each day was to repair and elevate them until, on the 50th day when they stood around Sinai, they would be prepared to meet their God. If the God of Israel would miraculously intervene and save Israel from Egypt when they had sunk into such spiritual decadence, would He not intervene when His people descended to the same 49th level of Tuma seventy-seven years before the destruction of Jerusalem and the temple of Herod? As we have now seen, the decay and corruption of the High Priest and the Torah scholars called the Shammaite Pharisees had taken the Jewish people back down to the 49th level of Tuma (spiritual degradation). The House of Ananus and the Shammaite Pharisees were threatening to destroy the entire Jewish race with their anti-Goyim (anti-Gentile) attitude and their love and worship of the Golden Calf. And so they did: forty years after Yahshua HaMaschiach (Jesus the Messiah) was hanged from a tree near the site where the Red Heifer’s ashes were burned on the slopes of the Mount of Olives in 30 CE, the temple of the Lord in Jerusalem became a fiery inferno. The “Omens” during those years were only a footnote to that history. The Roman general Titus wanted to save the temple, but the Jewish Zealots had embedded themselves into the Jewish population and made the temple into a citadel, forcing the Romans to destroy the temple down to its foundation. The sacred history suggests that such a “Being” came as an emissary from the emanation called the “Tiferet,” the one archetypical realm in the World of the Divine that represented the fullest manifestation of the Divine One that could be revealed to the human race. Seven, or the fullest measure, of the ten emanations from the Sefirot of the Ein Sof, the Unknowable God of Israel, streamed forth through the life of this one emissary called by the sages of Israel Metatron, the “Angel that stands before the Presence of the Divine One.” It was also not inconceivable to a halakhah-observing Jew that the “Tiferet,” called “Beauty” (Zechariah 11:7, 10), would be incarnated into a babe and live among His people as the “fullest expression of the Torah living in the flesh.” For it was one Torah-observing Jew, the Deputy High Priest or Sagan of the Nazarene Sanhedrin, who wrote: Besorah according to Yochanan (Gospel (Good News) according to John) 1:1-5, 14 - “In the beginning was the Torah, and the Torah was with God and the Torah was God. The same was in the beginning with God. All things were made through Him and without Him nothing was made that was made. In Him was life and the life was the light of men. And the light shines in the darkness, and the darkness did not comprehend it…And the Torah became flesh and dwelt among us, and we beheld His glory, the glory as of the only begotten of the Father, full of grace and truth.” One of the more pronounced supernatural identities that Jesus gave of Himself was to Caiphas in the illegally staged night interrogation that sealed His fate and destiny. The high priest, Caiphas, pronounced that it was blasphemy. Matthew 26:64 – “I say to you, hereafter you will see the Son of Man sitting at the right hand of the Power, and coming on the clouds of heaven.” This same imagery of the glorified Jesus the Nazarene was also seen by Stephen, the Nazarene deacon, who also sealed his fate with Jonathan the son of Ananus the Elder, now the high priest after Caiphas, the son-in-law of Ananus, was deposed. 
He received a visionary epiphany of Yahshua in His transcendent position as the Tiferet, Metatron or the Angel that stands before the Presence of the Almighty One. Acts 7:55 – “But he (Stephen), being full of the Holy Spirit, gazed into heaven and saw the glory of God, and Jesus standing at the right hand of God, and said, “Look! I see the heavens opened and the Son of Man standing at the right hand of God.” Here we add a third testimony for the Biblical scholars who are so dependent on their understanding of the Q document. They believe that the gospel stories of Matthew, Luke and John are not eye-witness primitive accounts of the “sayings of Jesus.” As such, the foundation of the composition of the four gospels comes from the Q document as the primitive sayings of Jesus, and the rest of the gospels were all written after the fall of Jerusalem in 70 CE. Tzemach Tzedek Chabad Lubavitch Synagogue (1841) looking over Ancient Pillars of the Cardo, the main thoroughfare in Byzantine Jerusalem – Photo by Robert Mock This testimony stands outside the “critical” investigation of the authenticity of the biblical gospels. It was given as a historical extra-biblical testimony written by Hegesippus, quoted by Eusebius, of the testimony of James the brother of Jesus to the priests and Pharisees at the temple. It was a testimony that sealed Yacov HaTzaddik’s (James the Just’s) death sentence at the hands of Ananus, the son of Ananus the Elder, who was now the high priest in the temple at Jerusalem in 62 CE. Hegesippus in his Memoirs, quoting James the Just – “Why do you ask me regarding Jesus the Son of Man? He is now sitting in the heavens on the right hand of the Great Power, and is about to come on the clouds of Heaven.” Ananus the Elder was probably the “young priest” who notified the high priest in Jerusalem of the unorthodox-appearing pregnancy of Mary and Joseph. He was the high priest in 6 CE when the youth Jesus (Yahshua) went to His bar mitzvah-year Passover and was questioned by the chief priests and elders of Judah. It was Ananus the Elder who was the father-in-law of the high priest, Caiphas, who sentenced Jesus to death. Ananus was the father of Jonathan the high priest who sentenced Stephen the deacon of the Hebrew Nazarene Ecclesia to death. Ananus the Elder was the father of Ananus the Younger, the high priest who now sentenced Jesus’ brother, James the Just, to death and had his bludgeoned body thrown over the temple parapet to the Kidron Valley below. In the decade before the birth of Yahshua ben Yosef (Jesus son of Joseph), radical political and social movements were being instituted in the entire religio-philosophical arena of Judaism. A large body of Torah scholars, disciples of Hillel and Menahem the Elder, left in protest or escaped for their lives to the “wilderness.” They understood that their country was descending into the same spiritual decadence that existed among the Israelites just prior to their redemption from their exile in Egypt. These disciples were sent on a rescue mission to the gentile nations in the hope that it might also save their own people. Their Torah master, Hillel, according to Falk, “was an Essene and probably the Teacher of Righteousness (Moreh Tzedek) in the Essene Dead Sea Scroll document called the Damascus Document.” This “Teacher of Righteousness” was called a disciple of Ezra the scribe. (Sanhedrin 11a) Was Ezra the first of the Essenes? Not necessarily so. 
It was Kaufman Kohler, who wrote the seminal article on the Essenes for the Jewish Encyclopedia, who stated: Inside the Tzemach Tzedek Chabad Lubavitch Synagogue (1841) in the Jewish Quarter – Photo by Robert Mock Kaufman Kohler – “Philo, who calls the Essenes "the holy ones," after the Greek ὅσιοι, says in one place that ten thousand of them had been initiated by Moses into the mysteries of the sect, which, consisting of men of advanced years having neither wives nor children, practiced the virtues of love and holiness and inhabited many cities and villages of Judea, living in communism as tillers of the soil or as mechanics according to common rules of simplicity and abstinence. (Jewish Encyclopedia, “Essene”; Quoted by Eusebius, "Præparatio Evangelica," viii. 11) Kohler later recognized that throughout the history of the Hebrews there were Essene types. Claiming direct descent from the Hasidim, they traced the origin of the Essene brotherhood to Moses at Sinai. Yet these Essene types or prototypes included Adam, Enoch, and the Patriarchs, especially Abraham, called the “Watik” because “he rose early” for prayer. They also included Shem-Melchizedek; Job, as the philanthropist and teacher of mystic lore; King Saul; and Jesse the father of King David, plus his ancestors Obed, Boaz, and Salma. This also included the Rechabites; Jonadab, the founder of the “Water-Drinkers”; Jabez; and Jethro the Kenite, who was possibly the founder of the Essene Jericho colony. How could these Essenes have existed when, according to the Talmud in Bava Batra 3B, it is documented that “Herod slew all the rabbis when he came to power, with the exception of Bava ben Buta whom he blinded”? (Ibid 116) This testimony has been disputed by Jewish historians. At the same time, this testimony is accepted by Rabbi Harvey Falk for the reasons that he writes: Rabbi Harvey Falk – “The Tosafists declare that this account cannot be accepted literally, since the Sons of Bathyra – who served as Nasi before Hillel – and Hillel served in a rabbinic office at the time. However, we know from Josephus that Herod held the Essenes in high esteem (Menahem, who was initially Hillel’s Av Bet Din, was an Essene who had foretold Herod’s rise to power), and Hillel’s affiliation with the Essenes would explain why he was spared. As for the Sons of Bathyra, see Ta’anit 3A where it is recorded that the later R. Joshua ben Bathyra – of the same family – was called Son of Bathyra before his ordination. We may assume from this tradition that the Sons of Bathyra of Hillel’s time were similarly not ordained rabbis. This would explain Herod’s sparing them, as well as their lack of scholarship which led to Hillel’s appointment (Pesahim 66A).” Even though Hillel the Elder remained as the head of the Sanhedrin, with the exodus of most of Hillel’s disciples to Damascus, Qumran and other Essene villages, the leading School of the Pharisees shifted. According to this same history, the influence of the Shammaites began to dominate the temple culture at Jerusalem from the time Herod started rebuilding the temple to its final destruction by the Roman forces of Titus. 
The Four Sephardic (Spanish Jews expelled in 1492) Synagogues – Ben Zakkai Synagogue (1610), Central Synagogue (1830s), Prophet Elijah Synagogue (1625), and the Isambuli Synagogue (1857) – Photo by Robert Mock When the School of Shammai came into ascendancy in the temple hierarchy, it was also at this same time that the authors of the Damascus Document, discovered in the caves at Qumran, spoke of a breach of trust and definite animosity towards the “Pharisees” who were adherents of the School of Shammai. Here in the Damascus Document, they were called traitors (bogdim) or “men of war.” Using the overlapping histories of the disciples of the School of Shammai, the exile of the disciples of Menahem the Essene and Hillel, the ministry of Jesus the Nazarene, and the degradation of social order in Jewish society under the mantle of the High Priest family of Ananus the Elder, we open up a potential new understanding of the Essenes and the Damascus Document alongside the Jewish Nazarenes and Gentile Christians of the New Testament Gospels. Rabbi Harvey Falk ventures, within this historical context, to identify the unnamed parties mentioned in the Damascus Document. As he writes: Rabbi Harvey Falk – “The “Expounder of the Torah” (Doresth ha-Torah) is Menahem, former Av Bet Din of the Pharisees; the “Babbling Preacher” (Motif) and “Man of Lies” (Ish ha-Kazav) is Shammai, leader of an opposing group of Pharisees; the “Builders of a Rickety Wall” (Bonai Haitz) and “Men of War” (Anshe Milhama) are the School of Shammai, and would probably allude to a time when they had gained ascendancy over Bet Hillel; the “Princes” and “Nobles of the People” are the eighty pairs of disciples of Hillel and Menahem, who are described as having departed from the Land of Judah to sojourn in “the Land of Damascus” in order to search for God. They are further described as having entered a “new covenant,” and living in encampments with their wives and children. This would have probably taken place about 20 BCE and would therefore identify this scroll as having been written by the disciples of Menahem and Hillel, who joined the Essenes when the Shammaites gained control of the Pharisees.” (Ibid 54) At the same time that Yahshua ben Yosef (Jesus son of Joseph) was introducing a “New Covenant” called the Brit Hadassah (New Testament) to His disciples, He was also introducing to them the foundation of the Damascus “New Covenant” that had begun almost fifty years prior with the disciples of Hillel and Menahem the Essene. About this same time, there came a prophecy that the School of Shammai (Beit Shammai) would come to its end at the time of the destruction of the temple of Jerusalem. (Ibid 117) Orthodox Worshipper in the Ramban (Moses ben Nahman (Nahmanides)) Synagogue – Photo by Robert Mock Not only that, the Jewish rabbinate after the exile from Jerusalem came to believe that if the opinions of Shammai did not agree with the opinions of Hillel, they would be considered null and void, as seen in the following quote. (Ibid 117) Berakhot 36B – “The opinion of Bet Shammai when it conflicts with that of Bet Hillel is no Mishnah.” Later estimation of the rulings of the School of Shammai sank so low that, according to Berakhot 11A, “he who observes the teachings of Bet Shammai deserves death.” With these ideas and concepts swirling throughout the land of Judea and Galilee, are we to assume that Jesus the Nazarene was not a participant in this dynamic dialogue? 
Was He not trying to capture the minds and souls of the Jewish people for His “Abba,” whom He called the “Father in heaven”? In fact, to put Jesus the Nazarene into this historical picture, we feel compelled to suggest that He was a defender of the House of Hillel. We shall later revisit the dialogues that Jesus the Nazarene had with the “Scribes and the Pharisees” and see that the attacks He made in those debates were directed against the Pharisees of the House of Shammai. One of these attacks is depicted in Jesus’ rebuke of the Pharisees when He said: John 8:44 – “You are of your father the devil, and your will is to do your father’s desires.” While Jewish and Christian scholars have portrayed this ideological battle between Jesus and the Shammai Pharisees as unique, they have failed to document the verbal exchange of the first-century Jewish sage Dosa ben Harkinas, who criticized his brother Jonathan for ruling in a legal case concerning a levirate marriage (Yevamot 16A) according to the legal decisions of the Beit Shammai. In this criticism, Dosa called his brother “the first-born of Satan.” Apparently this labeling of the Pharisees of the School of Shammai as followers or descendants of Satan was more widespread than previously thought. (Ibid 118) These texts have been misinterpreted by Christian scholars over the millennia. It has been the assumption that these texts were attacks by Jesus against the entire Jewish people. Yet it was these same Jewish people who were baptized by the thousands into the Hebrew Nazarene Ecclesia in Jerusalem because of their love and devotion to Yahshua, whom they believed to be their messiah. Within a decade after the death of Jesus, we find that the Nazarenes had become a party within Judaism that outnumbered the Sadducees and the Pharisees of the House of Shammai. Later it would be these same Jewish leaders who in essence fulfilled and validated the prophecy of Jesus that the teachings of Shammai would be “nullified.” (Ibid 118) After the death of Jesus, the Hebrew Nazarene Ecclesia, now under the leadership of Jesus’ brother, James the Just, quickly asserted its presence in the city of Jerusalem. After the fifty days of the Omer between the Festivals of Passover and Pentecost and the very public expression of the Ruach HaKodesh (Holy Spirit) upon the disciples of Jesus, the followers were not only upbeat but excited. One thing the Primitive Christians did was to reintroduce into the Jewish lexicon one word – BOLD. They were not embarrassed. They were not ashamed. They were not to be humiliated. And they were Jewish, through and through. The 13th century Franciscan Construction over the Synagogue at the site of the House with the Upper Terrace – Photo by Robert Mock (2006) Acts 4:31 – “And when they had prayed, the place where they were assembled together was shaken; and they were all filled with the Holy Spirit, and they spoke the word of God with boldness.” This was not a Gentile phenomenon. This was a Jewish phenomenon. It had nothing to do with ideological, theological, philosophical, exegetical, or hermeneutical differences. It had to do with the fact that, according to the testimony of the Jewish Book of Acts, they were “all of one accord.” (Acts 2:1) The focus of all the Primitive followers of the Nazarene was like taking the scattered and diffuse beauty of light and focusing it into one laser beam. The intense focus of that laser was on one man: Jesus the Nazarene. Let us read what happens next.
Here in the temple during Pentecost there rises up that fisherman who was thought to be a bumbling, unschooled Galilean, never trained under the great yeshiva sages in Jerusalem. Yes, Simon Peter stands up and speaks to the masses of Jewish men in the Court of the Jews as a distinguished orator. Acts 2:22-24 – “Men of Israel, hear these words: Jesus of Nazareth, a Man attested by God to you by miracles, wonders, and signs which God did through Him in your midst, as you yourselves also know – Him, being delivered by the determined purpose and foreknowledge of God, you have taken by lawless hands, have crucified, and put to death; whom God raised up, having loosed the pains of death, because it was not possible that He should be held by it.” Bear in mind, this was not a Christian speaking to or against the Jewish people. This was a “Man of Israel” speaking in the temple of Herod to thousands of Jewish “Men of Israel.” On that same day, three thousand “men of Israel” were baptized in the mikvahs surrounding the temple. Why? They were baptized in the same manner that the Hasidim or Essenes were baptizing in the wilderness and along the Jordan River. The Mikvah Baths at the Residence of the High Priest Caiphas near the Tomb of David – Photo by Robert Mock (2006) It was now twenty years since the death of Hillel the Elder (10 CE). This sage of Judah lived for 120 years, as Moses did of old. He was not only blessed with a long life but was one of three notable Jewish sages, whose paths crossed in that historical flow of time, who each lived 120 years: Hillel the Elder, Rabban Johanan ben Zakkai, and Rabbi Akiba. (“Sefer Yohassin”, the Sephardic Sages by International Sephardic Leadership Council) Simeon I was the son of Hillel the Elder. Upon the death of his father, Simeon I became the Nasi or President of the Great Sanhedrin. Throughout most of the life of Jesus (10 – 30 CE), Simeon was the leader of the Jewish people. It is fascinating that Simeon I died probably in the same year that Jesus the Nazarene was crucified. The life of Simeon I was a puzzle to Rabbi Harvey Falk. In his testimony we find this dilemma. Rabbi Harvey Falk – “Jewish scholars have long been mystified as to why Simeon - son of Hillel and father of R. Gamaliel the Elder – who served as the Nasi following Hillel’s death, is not quoted or discussed even once in the entire Talmudic literature (except for one brief statement that he succeeded Hillel [Shabbat 15A]). I believe that the Talmud is thereby telling us that the School of Hillel reached its nadir in his time, and that he had no say at all in the affairs of the community.” Simeon I, the son of Hillel the Great, was a figurehead in the Sanhedrin, held hostage by the disciples of Shammai. During his twenty-year tenure as the Nasi or President of the Great Sanhedrin, a dramatic shift occurred in the worldview of Jewish Pharisaism. The Mikvah Baths of the Hebrew Nazarene Ecclesia near the Synagogue of the Nazarenes (Upper Room) over the Tomb of David – Photo by Robert Mock (2006) The Pharisees of the School of Hillel, who believed that there was such a person as a righteous Gentile who had an “inheritance in the world to come,” were exiting from the center of Jewish religious politics on a special mission to the Gentiles.
At the same time, the Pharisee school in control of the synagogue religious culture in Jerusalem for the last sixty years before the temple was destroyed was that of the disciples of the Judean Torah scholar Shammai the Elder, Nasi Hillel’s vice president (Av Beis Din). Brooding and rigid in his religious ideology, Shammai the Elder placed protective barriers around Judaism. He virtually shut out the “Good News” that there is a God of Creation who loves all mankind and desires a relationship with each one of them. Here was the rise of the anti-Goyimism (anti-Gentile attitude) of the Jewish people that is even felt today by some Jews. The crucifixion of Jesus in 30 CE also saw the rise of the fourth party of the Jews: the Hebrew Nazarenes. The death of Simeon I and the death of Jesus the Nazarene also saw the mantle of the Nasi, Prince of Judah, placed on the son of Simeon I, Rabban Gamaliel I the Elder. Here was the rabbinic master and teacher of that young Pharisee called Shaul, the later Apostle Paul, who was one of the leading students of the Pharisees in the yeshiva (school) of Gamaliel. The disciples of Shammai were equally determined to destroy the disciples of Jesus. They had powerful allies: the Sadducee high priestly family of the House of Ananus the Elder and the reigning high priest, Caiphas, the son-in-law of Ananus. This powerful priestly family was the most potent of the enemies of Jesus the Nazarene. They had a “blood oath” against the family of Jesus that eventually took the life of His brother, James the Just, the first Nasi of the Nazarene Sanhedrin. He was murdered through the influence and leadership of the High Priest Ananus son of Ananus in 62 CE. It was here at the Gate called Beautiful, where the pilgrims entered the temple complex, that Peter, accompanied by John the Apostle, healed a man who had been lame since birth. This scene caused quite a commotion on the colonnade to the east of Herod’s temple called Solomon’s Porch. Here it states that the “priests, the captain of the temple, and the Sadducees came upon them, …and they laid hands on them, and put them in custody until the next day.” (Acts 4:3) The dynamics in the city of Jerusalem were tense, as the swelling ranks of the Nazarenes had now grown to five thousand men. (Acts 4:4) Then we read about the next day: Acts 4:5-6 – “The rulers, elders, and scribes, as well as Annas the high priest, Caiphas, John and Alexander and as many as were of the family of the high priest were gathered together in Jerusalem.” When Peter and John were brought before them, they were challenged, “By what power or by what name have you done this?” The Entrance to the Tomb of David just below the Upper Room – Photo by Robert Mock (2006) Once again Peter returns the verbal volley, this time to the leading rulers of the temple hierarchy, and shocks them by his “boldness.” Acts 4:8-12 – “Then Peter (Kepha), filled with the Holy Spirit, said to them, ‘Rulers of the people and elders of Israel: If we this day are judged for a good deed (tov mitzvah) done to a helpless man, by what means he has been made well, let it be known to you all, and to all the people of Israel (kol Yisra’el), that by the Name of Jesus the Messiah of Nazareth whom you crucified (impaled), whom God raised from the dead, by Him this man stands here before you whole.
This is the stone which was rejected by you builders, which has become the chief corner-stone (Rosh pina). Nor is there salvation in any other, for there is no other Name under heaven (shamayim) given among men by which we must be saved.’” Within this oratorical rebuttal was a “pun” when Peter proclaimed, “This is the stone which was rejected by you builders.” The chief Pharisees, called Beit Shammai, were disciples of Shammai, whose name meant “builder.” Yes, Peter was speaking directly to the Shammaites when he said, “rejected by you ‘builders.’” The “boldness” of Peter and John shocked the temple elders and leaders because these two were perceived as “uneducated and untrained.” The testimony states that the rulers “marveled” and that the boldness was because “they had been with Jesus.” (Acts 4:13) Peter and John were threatened not to speak the “Name of Jesus” and were sent away without punishment. The plot thickens. Peter and John return “to their own companions and report all that the chief priests and elders had said to them.” While in prayer, the testimony states that “the place where they were assembled together was shaken; and they were all filled with the Ruach HaKodesh (Holy Spirit) and they spoke the word of God with boldness.” (Acts 4:31) This was not the same outpouring of the Holy Spirit witnessed days before in the temple. We may justly assume that this was a second manifestation of the Holy Spirit while they were meeting in the Upper Terrace, or the House with the Upper Room that stood over the tomb of David, the assembly room and future synagogue of the early Primitive Nazarenes. The momentum of the early Nazarenes was astounding. The city of Jerusalem was riveted by the unfolding drama of this emerging power phenomenon. The people, the peasants, and the vast populace from the surrounding villages began to bring their sick and their emotionally tormented to be healed. As the tzit-tzit tassels on the hem of Jesus’ prayer shawl brought healing to the bleeding woman, the people now came in the hope that even the “shadow of Peter passing by might fall on some of them.” (Acts 5:15) The carefully crafted power structure of the elite in charge of the “Temple of Herod Inc.” noticed with alarm that their revenue base was rapidly shrinking. The portrayal of the three days of the healing ministry of the “kingdom of God” during the Passover week, while Jesus the Nazarene and the Passover Lamb were being examined in the temple, was now being repeated by the emissaries or apostles of that rabbi they believed to be the Jewish messiah. They now “boldly” proclaimed and demonstrated in the temple the living reality of the “kingdom of God.” Anyone less than an orthodox Jew would have been stoned immediately. There was no question that the orthodoxy or the halakhah of the Jewish apostles of Jesus was according to the “letter of the Law (Torah).” The fear of the apostate high priest and the temple Pharisee elite of Beit Shammai was not directed toward the orthodoxy of Peter, John, and the other disciples of Jesus. It was the masses of the Jewish people, who loved and adored Yahshua HaMaschiach (Jesus the Messiah), that they feared. The real fear within the House of Ananus and the Beit Shammai was caused by the power of the “Name” of Yahshua. This fear was not an ideological or theological battle. This fear was the living reality that was swarming around the entire temple culture in Jerusalem. The temple revenues were plunging.
The House of Herod, Inc. and the “Golden Calf” that they were serving were heading into an economic and spiritual bankruptcy. They feared that the Jews might actually begin to love the Romans, whom they despised. They feared that Judea might become the center of the world. They would then be required to seek and to spread “hesed,” the acts of loving kindness of the “Good News of the God of Creation,” to the entire Roman world. Their paranoid fear was that the Temple of Herod might actually draw millions of Gentiles to its premises in adoration of the God of Israel. No longer would they be a reclusive religious country club with an exclusive membership list called a “covenant.” The Golden Temple Menorah constructed according to Comprehensive Halakhic Research by the Temple Institute – Prepared to be Placed in the Future Temple in Jerusalem – Photo by Robert Mock This was the fear that the Messiah of the Jews truly had come. As their prophetic sage Isaiah proclaimed, the covenant to the Jews was not an exclusive path to salvation. They were to be prepared for a “messiah” who would bring them a covenant of righteousness so that they could be a “light to the (Gentile) world.” Isaiah 42:5-6 (parts) – “Thus says God the Lord, Who created the heavens and stretched them out…I, the Lord, have called You in righteousness, and will hold Your hand; I will keep You and give You as a covenant to the people (Jews), as a light to the Gentiles.” They saw this reality in living color as Jesus first started His ministry in Galilee with the “call” in the lands of Zebulon and Naphtali to the “people who sat in darkness…and upon those…light has dawned” (Matthew 4:15-16): Matthew 4:17, 19 – “Repent, for the kingdom of heaven is at hand…Follow Me, and I will make you fishers of men.” Then in Jesus’ first private recorded teaching with His disciples (talmidim), after healing, teaching, and preaching about the “kingdom of heaven” in the hills surrounding Galilee, He instructed His disciples in a very Jewish way on how to become a Tzaddik (righteous man): Matthew 5:14-16 – “You are the light of the world, a city that is set on a hill cannot be hidden. Nor do they light a lamp and put it under a basket, but on a lamp stand, and it gives light to all who are in the house. Let your light so shine before men, that they may see your good works and glorify your Father in heaven.” While the voice of the populace swelled with praise and adoration for the blessings of the Lord, the high priest emeritus Ananus and his family dynasty were furiously plotting how they could break the power of this economic and political crisis that was surrounding them. And so the testimony of the Acts of the Apostles continues: Acts 5:17-18 – “Then the high priest rose up, and all those who were with him (which is the sect of the Sadducees), and they were filled with indignation, and laid their hand on the apostles and put them in the common prison.” This time it was not just Peter and John but others, and possibly all of the apostles, who were taken to the prison of the high priest. Here we witness a strange act. In the middle of the night, an “angel” clad in a white robe, not unlike the sacred garments of the Essenes, penetrated the secured underground dungeon beneath the house of Caiphas the high priest.
He led the apostles out of the prison with the command, “Go, stand in the temple and speak to the people all the words of this life.” The “Broad Wall”, 22 feet thick, built by King Hezekiah in the 8th century BCE to Enlarge Jerusalem for the Refugees from the Northern Kingdom after the Assyrian Invasion in 722 BCE – to the Right are Remains of Houses Torn down as Described in Isaiah 22:10 – Photo by Robert Mock In stunned surprise, the next day the apostles were found not in prison but teaching in the temple proper, when “the high priest and those with him came and called the council together, with all the elders of the children of Israel.” (Acts 5:21) The officers who were sent to bring the apostles out of prison stuttered in their dismay with the realization that the secured perimeters of the prison of Ananus and Caiphas had been penetrated and compromised. Even the security of the inner sanctum of the palatial guarded residence of Ananus the Elder had apparently been breached. Acts 5:23 – “Indeed we found the prison shut securely and the guards standing outside before the doors, but when we opened them we found no one inside!” It was a carefully controlled entry from the temple as the temple captain went with his officers to escort the apostles again before the council of the High Priest. Why were the protocols so strictly adhered to? “They feared the people lest they should be stoned.” (Acts 5:26) Here was the power and respect commanded by the apostles of Jesus the Nazarene. The populace, the poor, the disenfranchised, the “settlers,” were like the Jews living in S’derot under the shadow of the Qassam rockets from the Hamas terrorists in Gaza. The peasants were on the apostles’ side. Jesus was their Master. Let us carefully delineate the differences between the participants in this tense drama. There was the House of Ananus and the influential high priest power brokers of the Sadducees. The House of Ananus held its political power in part through the collaboration and collusion of the Pharisees of the Beit (school or disciples) of Shammai. In the first century, the Pharisees of Beit Shammai were in charge of the institutional Jewish religion and were in control of the matrix or fabric of Jewish life. This was because the entire fabric of Jewish life was centered in the Mosaic rituals held in the Temple of Herod. As we now see, it was the disciples of Shammai, in collusion with the high priests Ananus and Caiphas, who were responsible for the illegal mock trial of Jesus. It was they who, in violation of Torah Law, turned a fellow Jew over to the Romans in order to be tried on sedition charges against the Caesar of Rome. The Jewish people have been innocent of the centuries of charges that the “Jews” killed Jesus and stand innocent to this day. On the other hand, there were the Hasidim, closely identified with the Essenes, who were buttressed by the disciples of the Beit (school or disciples) of Hillel the Elder. They viewed the temple hierarchy as apostates within the House of Zadok of the House of Aaron. They were descendants of the House of Zadok, which had the divine authority to run the temple complex but had become corrupt like the sons of Eli, the judge of Israel. The Western Wall (Ha-Kotel ha-Maaravi) of the Temple Mount called the “Wailing Wall” – Photo by Robert Mock Outside of these power brokers, there were the peasants, the masses of the Jewish population who were not affiliated with a sect or party. Yet something was happening.
The carefully crafted lines of authority were being demolished. Instead of the monetary income flowing into the temple coffers controlled by the House of Ananus from the sale of animals for the temple sacrifices, the masses of the population were swelling the city of Jerusalem after the Pentecost not to bring sacrifices but to witness and experience the “kingdom of God” in living reality. No wonder the temple guards feared for their lives. This same fear was noted no less than two months prior, when Jesus the Nazarene threw out the money changers in the temple and was healing the sick and teaching in the temple just before His death. Caiphas and Ananus, the leading rulers and high priests in the temple of Herod, now confronted the apostles: Acts 5:27-28 – “And the high priest asked them, saying, ‘Did we not strictly command you not to teach in this Name? And look, you have filled Jerusalem with your doctrine, and intend to bring this Man’s blood on us!’” The “blood oath” was upon them, and Caiphas and Ananus felt guilty as charged. The reply by the apostles was: Acts 5:29-32 – “We ought to obey God rather than men. The God of our fathers raised up Jesus whom you murdered by hanging on a tree. Him, God has exalted to His right hand to be Prince and Saviour, to give repentance to Israel and forgiveness of sins. And we are His witnesses to these things, and so also is the Holy Spirit whom God has given to those who obey Him.” How Jewish this whole scene is portrayed in historical reality. We see no Roman cross. We see no “blood oaths” upon the Jewish people but only upon the Jewish elite who forsook their God, as the Shabbatian Jews do today. We see a corrupt Jewish government and religious leadership that did not fulfill the commands of the Lord, no different from the government of Israel led by Armalgus and many of the compromised Jewish rabbis today. We do not see the demonstration of the “attribute” in God’s sefirot called “hesed,” a life of giving loving kindness not only to the Jews but to all mankind. Gamaliel, the Nasi, in Defense of the Nazarenes Out of the midst of this surreal scene of corruption and hatred, the testimony affirms that “they (high priest and temple leaders) were furious and plotted to kill them (apostles).” (Acts 5:33) Yet, out of this chaos, we see the reasoned voice of a Hasid, a pious one. The Encased Golden Menorah near the Justinian Era Cardo in Jerusalem (7th century CE) – Photo by Robert Mock Acts 5:33-39 – “When they (the Council of the Sanhedrin) heard this, they were furious and plotted to kill them. Then one in the council stood up, a Pharisee named Gamliel, a teacher of the law held in respect by all the people, and commanded them to put the apostles outside for a little while. And he said to them: “Men of Israel, take heed to yourselves what you intend to do regarding these men. For some time ago Theudas rose up, claiming to be somebody. A number of men, about four hundred, joined him. He was slain, and all who obeyed him were scattered and came to nothing. After this man, Judas of Galilee rose up in the days of the census, and drew away many people after him. He also perished, and all who obeyed him were dispersed.
And now I say to you, keep away from these men and let them alone; for if this plan or this work is of men, it will come to nothing; but if it is of God, you cannot overthrow it – lest you even be found to fight against God.” How easy it would have been to wipe out the entire hierarchy of the Primitive Nazarenes in one swoop. The High Priest and the Sanhedrin had the power to perform such an act. They had proved that fact sixty days prior, when Pontius Pilate cowered in the presence of the High Priest and the Shammaites. But the allegiance of the masses of the people, the Essenes, and the untold number of Hasidim of Beit Hillel and Levitical priests in the temple gave them pause. They knew that such a division might cause a civil war and their own personal and economic destruction. In spite of the moderating voices of the Grand Elder who sat on the seat of David, the Roman Decurion and wealthy tin merchant Joseph of Arimathea, and the wealthy grain merchant Nicodemus, who as an Essene was seeking the deeper mysteries of the Spirit, the Great Sanhedrin had gone radical. With the demonic fury of the House of Ananus and the radical isolationism and anti-Goyimism (hostility toward non-Jewish Gentiles) of the Pharisees of the Beit Shammai, the Great Sanhedrin was almost fully under the control of radical Jewish elements that were anti-Torah and anti-God of Israel. This anti-Goyimism of the Beit Shammai would someday be repaid with anti-Semitism from the Christian and Islamic world. Rothschild Family Residence built in 1871 in the Batei Makhase Square – Photo by Robert Mock This testimony of the defense by Rabbi Gamliel ben Simeon ben Hillel in the Acts of the Apostles was “bold” and risky. The testimony leaves out one important historical record: Rabban Gamliel was the Nasi or President of the Great Sanhedrin in Jerusalem. He was the revered scholar and head of the School of Gamliel of the Beit (School) Hillel, the son of Simeon I, and the grandson of Hillel the Elder, called the Great. As the head of the Great Sanhedrin, he was in the minority, with most of his religio-political support in exile at Qumran and other Essene centers in Judea, Galilee, and around the Middle East. Alone he stood in front of the Sanhedrin that only two months prior had imprisoned and sought to kill Joseph of Arimathea and Nicodemus for their defense of Jesus the Nazarene. It appears that Rabbi Gamaliel understood the ultimate purpose of the mission and destiny of Jesus the Nazarene: to form a religion for the Gentile nations according to the Halakhah, the legal commands of the Jews. Out of this mission would also come a “Christian” halfway house for the restoration of the “Lost Sheep of the House of Israel,” to eventually be restored into a complete covenant relationship with their God. The protecting finger of the God of Israel can be seen guarding His own people, like “lost sheep.” This is somewhat like Daniel of old, who stood in the court of Nebuchadnezzar the Great and later Cyrus the Great in defense and protection of his people the Jews. So also this sage of the Sanhedrin stood out in defense of the Jewish emissaries (apostles) of Yahshua HaMaschiach (Jesus the Messiah). This was not a Christian-Jewish battle, for these participants were all self-proclaimed, Torah-observing Jews. In the aftermath of the Bar Kochba Revolt in 135 CE, one of the greatest rabbis of the post-temple era who rose to prominence was Rabbi Meir.
He was a disciple of Rabbi Akiva and was recognized as belonging to the third generation of the Tanna sages after the destruction of the temple in 70 CE. He lived in the land of Israel during the reigns of the Sanhedrin Nasis Rabbi Simeon II ben Gamliel (142-161 CE) and Rabbi Yehudah (Judah) HaNassi (the Prince) (165-192 CE). During these years, under his leadership, the teachings of all the Jewish sages were gathered and compiled into the six orders of the Mishnah. Roman Columns of Unknown Provenance in front of the Rothschild Mansion in the Jewish Quarter – Photo by Robert Mock According to the legends of the Jews, after the last stand of the Bar Kochba Revolt and the death of Simon ben Kosiba in 135 CE at Bethar, the Roman forces of Hadrian persecuted the Jewish scholars severely. This occurred in part because Rabbi Akiva had given his authority and blessing that Simon ben Kosiba was the Jewish messiah. One of the Ten Pious Sages killed by the Romans, Rabbi Akiva, the mentor and illustrious teacher of Rabbi Meir, was murdered before Meir’s own eyes. They killed him by combing Akiva’s flesh with iron combs until he expired. Even more, Rabbi Meir’s father-in-law, the Tanna sage Rabbi Hanina ben Tradyon, was murdered by being burnt alive. According to a report by Dio Cassius, Julius Severus, the Roman general sent by Hadrian to Judea to quell the Jewish rebel revolt of Bar-Kochba, destroyed the city of Bethar after a siege of two and a half years. The Jewish city was reputed to have 500 schools with no less than 500 students in each school. As Dio Cassius reported: Dio Cassius - “Julius Severus…razed fifty of their best fortresses and 985 of their most important villages, and that 580,000 men were killed in the sorties and battles, and the number of those who perished by famine, disease, and fire, could not be defined, so that almost the whole of Judaea became a desert, as it had been predicted before the war.” (Dio Cassius, LXIX, 14, cited in Adolph Buchler’s “Economic Condition of Judea after the Destruction of the Second Temple”) During this time, Rabbi Meir fled to Babylon. When the persecution ended in the reign of Caesar Antoninus Pius, Meir returned to the Land of Israel. According to his colleagues, it was said of Rabbi Meir that “whoever saw him studying the holy Torah got the impression that he was tearing up mountains and grinding them to dust upon each other.” Rabbi Jose, a friend and rabbinic colleague of Meir, made this statement: “He is a great personage, a holy and humble man.” We continue to look for the traces of the Hasidim after the destruction and exile of the Jewish people from Jerusalem. According to the rabbinic sages, Rabbi Meir was noted for his adherence to a high level of Levitical purity. It has been suggested by Kaufman Kohler, in his comprehensive article on the “Essenes” written for the Jewish Encyclopedia, that the “remnant of the Essenes” could be found among the disciples or students of Rabbi Meir. They were known as the “Edah Kdoshah” or the “Congregation of the Saints.” In an atmosphere where they remained Levitically pure, they spent their time in study, prayer, and work in the “Kehilla Kaddisha de-vi-Yrushalayim” or the “Holy Congregation in Jerusalem.” (C.
Rabin devoted a whole chapter in his Qumran Studies to the similarities between the Edah Kdoshah and the Essenes; cited in Ibid 65-66) The Western Wall of the Temple Mount – Photo by Robert Mock In fact, it was within a subsect of the Pharisees called the Haverim that Rabbi Meir the Tanna continued to live out his life as a model for his students. Meir adopted the ethical spirit lived by both Hillel and the Essenes, in which a Torah scholar should be one who “loves mankind” (Avot 6:1), even though the social culture around him was in chaos and at a low level of spiritual purity. It appears that Rabbi Meir was devoting his life to preserving an “ideal” by recreating a synthesis or symbiosis of both Pharisaism and Hasidism, the Hasidic (pious) life of a Pharisee. As we move into the 2nd century CE and search for the pious Hasidim after Rabbi Meir, we meet Rabbi Hiyya, a Tanna-Amora who was recognized by scholars to be a practicing Hasid or Essene. In his possession were a group of Megilat Setarim (secret scrolls), which according to Kohler and the editors of the Soncino Talmud (Shabbat 6B) were Essene or other esoteric teaching documents. In Shabbat 6B and Bava Mezia 92A, the authorship of the Megilat Setarim is attributed to an obscure sage called Issi ben Judah. In the Baraita (Pesahim 113B), it is noted that there were four sages called Issi, one of whom is Issi ben Judah; yet who were they? Then there were two sages linked to Joseph who have been identified as Issi ben Akabia. As if to confuse the issue, we find in Sotah 15B Issi ben Judah and Issi ben Menahem having a dispute over a temple issue. Was this a debate between one Essene and another? Can we be so bold as to suggest that Issi ben Menahem was Menahem the Essene, who was the av beis din (“father of the court”) in Hillel the Great’s Sanhedrin? What about the passage in Niddah 36B, where Rabbi Assi states, “I am Issi ben Judah who is Issi ben Gur-Aryeh”? Was he identifying himself by the names by which he was known in the Essene community? Again we ask, who were the “Issi”? As we dig deeper, we find a passage in the Jerusalem Talmud (Bava Kamma 3:7) that identifies Issi ben Akabia with Yose Kittunta, cited by the Mishnah (Sotah 49A) as the last of the important Hasidim. The weaving thread between all of these personages is the Hasidim or the Essenes. (Ibid 45-6) According to Rabbi Harvey Falk, the connection between the title of Issi and the Essenes should not be too difficult to make. As he states: The Jerusalem Cardo Shopping Center in what used to be Main Street Byzantine Jerusalem – Photo by Robert Mock Rabbi Harvey Falk – “It would seem to me that the literal resemblance of Issi to Iss’im (plural for Essenes) is too close to be ignored. It would therefore suggest that Issi ben Judah was an Essene, and their preference for anonymity is probably in line with Hillel’s teaching in Avot 1:13: Rabbi Hillel the Elder – “A name made great is a name destroyed.” (Avot 1:13) As Rabbi Falk summarizes: Rabbi Harvey Falk – “Our interpretation would therefore be that all were members of the same secretive Essene organization, who adopted the appellation (Issi) upon joining the group.” (Ibid 60) This opens another mystery in the life of Jesus. As the Muslim people wait for the arrival of their messiah, the return of the “Hidden Mahdi” who will guide them through the final events of earth’s history, they also await the final messiah of Islam, Isa ibn Miriam (Jesus son of Mary).
Does this give us any hints, lost in the ancient migration of the followers of the Nazarenes who escaped into the deserts of Arabia, as to the affiliation or identity of Jesus the Nazarene with the Essenes and the Hasidim, the disciples of Hillel and Menahem the Essene? When the temple of Herod was destroyed in 70 CE, the “voice of the Lord” was heard proclaiming that the “words of Hillel” would become the respected and ascendant teaching of Judaism. The teachings of Hillel would become the Jewish halakhah, the practical way of living a life of Torah. Neither the “spirit” nor the “teachings” of Shammai were to remain a part of Judaism. The influence of Shammai disappeared. The question we must ask is: did remnants of the “spirit” of Shammai remain? Throughout the ages the anti-Goyim (anti-Gentile) voice of Judaism continued to be heard, even to its harshest extent in the words of some of the rabbanim that the gentiles must be killed and destroyed to advance the kingdom of God. It was the Hillel-inspired mission of Yahshua HaNotzri (Jesus the Nazarene) that led to the first Jewish attempt to bring the concept of the Universal God of Creation to the entire gentile world. This was done predominantly by the “Apostle to the Gentiles,” the Pharisee called Shaul. This same Shammaite Pharisee Shaul later became the Apostle Paul, after he converted and became a disciple of Hillel the Great and Menahem the Essene. He became part of the Hebrew Nazarene Ecclesia in Damascus, whose members, it is believed, were also Essene disciples of Hillel and Menahem. Jewish Owner of a Shop in the Cardo with Jerusalem Memorabilia – Photo by Robert Mock We have learned of the influence of the Shammaites upon the life of that young Pharisee who was also trained at the feet of Gamaliel the Great, the grandson of Hillel the Great. Only when this radical Zealot, Shaul, was on his way to Damascus with a Jewish commando force to imprison or kill the members of the Hebrew Nazarene Ecclesia did the remarkable “voice of Yahshua” bring Shaul to a conversion experience that changed the entire course of the history of revealed truth. Through the ministry of the Apostle Paul as the “Apostle to the Gentiles,” the God of Israel would also become the God of the proto-Christians who were converts to the Jewish messiah, Yahshua HaNotzri (Jesus the Nazarene). We know through the historical evidence that the early Christians later apostatized under the Gnostic influence of Simon the magician, who first met the Apostle Peter in Samaria and later moved to the capital city of Rome. There at the Basilica of Prassedes, the roots of Simon’s Gnosticism became interwoven into the primitive Nazarene Roman Christian culture as the soon-to-be Roman Orthodoxy. Yet we cannot deny that the primitive Nazarene Christians, to the core, were Jewish, or gentiles associated and affiliated with Nazarene Judaism. In Damascus, we then observe the Apostle Paul being sent to the deserts of Arabia for his continuing theological education under the disciples of Yahshua HaNotzri (Jesus the Nazarene). Why? This is where the leadership of the Nazarenes had escaped prior to the destruction of Jerusalem, as the forces of the Romans were heading towards the Middle East in 64-65 CE. The Hebrew Nazarene Ecclesia by this time was under the leadership of Simeon ben Cleopus, the cousin (and possibly half-brother) of Jesus. He had become the second president, or nasi, nominated for the Nazarene Sanhedrin when James the Just, the brother of Jesus, was stoned and clubbed to death in 62 CE.
This occurred under the orders of the high priest Ananus the Younger, the son of Ananus the Elder, who was instrumental in condemning Jesus (Yahshua) to death in 30 CE. It was after the death of James the Just that the entire Nazarene congregation safely escaped into the wilderness of Perea, under the courageous leadership of Simeon ben Cleopus, ahead of the arriving Roman forces. Many of the Nazarenes continued to flee further into the deserts beyond Damascus and across the Fertile Crescent. It is here that we find the names of the Arabian tribes possibly reflecting the ancient remnants of the Jewish Nazarene Ecclesia. Moshe at the Shorashim Biblical Shop on Tiferet Israel Street – Photo by Robert Mock It might not be a jump in logic to suggest that the truths of Judaism, the stories of the TaNaKh, and the prophets that included a deep respect for Jesus the Nazarene as a prophet were transmitted from the fertile sacred traditions of the Arabian Desert to the mind of Muhammad, who became the prophet of Islam. Here was possibly a Jewish vestige or remnant of the ancient Nazarenes who vanished into the deserts of Arabia in the centuries after the destruction of Jerusalem. During this era, the footsteps of the Hasidim, the “pious ones,” also began to fade and vanish. There were many Jewish rabbis, maggids (Torah teachers), and Torah scholars. Instead of breaking new ground in the “revealed truth” of God, the Jewish sages devoted their lives to the preservation of the TaNaKh, the essence of Judaism, and the Oral Law that provides the matrix of the Jewish halakhah today. Yet they kept their covenant with the God of Israel to preserve the oracles of God given to them as the Pentateuch (Torah), the Writings, and the Prophets. Shorashim Biblical Shop on Tiferet Israel Street – Photo by Robert Mock These were the years in which the Jewish Mishnah, the Gemaras, and the Babylonian and Jerusalem Talmuds were formalized. It was in these years that the sages of Judah began to digest, archive, and reformulate the concepts of Judaism. This was a dynamic process to which the opinions of many rabbis contributed over the centuries, many times with dissenting opinions. Here also was the era when Judaism became institutionalized and codified, with many commentaries solidifying the collective understanding of the great Jewish minds on Torah law, the divine concepts of God, and the interaction of the God of Israel in the history of the Hebrews. It was in this Islamic era that the “woman fled into the wilderness” for 1260 years. The Jewish people were protected under the Sunni Caliphates of Islam from the days of Ali (‘Alī ibn Abī Ṭālib - 599-661), the fourth Caliph, who was accepted by both the Sunnis and the Shi’as in 657 CE, until the proclamation by a “gentile power” that the Jewish people needed a new homeland: the Balfour Declaration, issued by the British Foreign Secretary Lord Balfour in 1917. During these 1260 years, the Jewish people participated in the “Golden Age of Islam” from 750-1200 CE while the Roman Catholic Gentile world was living in the “Dark Ages”. Here the great academies of the Jews at Sura and Pumbedita in Iraq excelled in the studies of the Torah. The Jewish people also experienced enormous growth during the “Golden Age” of the Moors in Spain, when Maimonides was codifying the Jewish Law.
Then later, during the Mongolian invasion that destroyed the Sunni Abbasid Caliphate in Baghdad, the Umayyad Caliphate in Cordoba, Spain, continued to protect the Jewish people during the rise of the Catholic Inquisition. During the 20th century, the gentile world saw the gradual decline and destruction of the last visible remnant of the ancient children of Israel during World War II and the massacre of the Jews in the Holocaust. If any person, including President Ahmadinejad of Iran, can deny that the holocaust of the Jews during World War II even occurred, in spite of the overwhelming documentary evidence of testimony, pictures, and the vast German and Allied archives, how easy would it be to deny that there is a God of Israel who created this planet earth, if the Jewish people were to disappear off the face of this earth? Is not the Islamic vision to control the whole earth predicated on the messianic zeal to destroy the last remnant of God’s chosen people? Quiet Arch Walkway in the Jewish Quarter of Old Jerusalem – Photo by Robert Mock Here in Jerusalem, we will see the last stand of the God of Israel and the children of Abraham, as the “nations of the earth surround Jerusalem”. Let us seriously look over the millennia at the rise and fall of empires and nations. Of all the seventy tribes and peoples that God planted on the face of the earth, only the Jewish people remain today who have continuously kept their history, culture, identity, ancient records, oracles, tribal descents, laws, and customs. They also kept all the detailed records of the opinions, dissents, and commentaries of the Torah, the law of God that was given to them on Mount Sinai. Every other nation rose up, cast its fame across the lands, and then disappeared, with only scattered myths and legends to tell us that it was once there. While the Jews stand naked and exposed in their historical transparency, all the gentile nations, whose ancestors worshipped sticks and stones, bring only legends to the investigative table of history. As Jewish prosperity has risen out of the ashes of the European holocaust that massacred their best sages and leaders plus half of the international Jewish population, today the Jews have become the envy of the nations as the gentile world lusts (“I must have it now”) for possession of Jerusalem. Saddam’s Babylon has fallen, now for the second time in history, and Ahmadinejad’s Persia is lusting for possession of Jerusalem. Is not the thrust of radical Shi’a Islam in Iran and Shiite Hezbollah, along with Sunni al-Qaeda and Hamas, the same? In spite of their purported great geo-political divide, they are held together by one religious-ideological bridge: both passionately desire the complete annihilation of Israel and the entire Jewish race. Is it not their quest to destroy not just Israel, or the orthodox Jews, but every trace of Jewish blood on the face of the earth? With Greece (America) at the gates of Baghdad and the largest American naval armada in history sitting in the Persian Gulf and the Arabian Sea, Rome (Papal European NATO) has settled at the gates of Lebanon with the largest NATO naval armada in the Eastern Mediterranean Sea since World War II. How can the God of Israel give any more evidence that the “Signs of the Times” have now arrived?
Ruins of the Tiferet Yisrael Synagogue along Tiferet Yisrael Street, one of the First Synagogues Destroyed by the Jordanians in the 1948 War – Photo by Robert Mock It has been estimated that the number of Jewish people during the lifetime of Yahshua consisted of 10% of the total population of the Roman world. Today the Jewish population has declined, after the Holocaust (by 1948), to only 1% of the entire population of that same region once controlled by the Roman Empire. Also today, the Jewish people have dropped to only 0.1% of the total world population. Why could not the Jewish people, by genocide, future holocausts, pogroms, or simple assimilation into the populations of the world, disappear altogether? They could become like the ten Northern Tribes of Israel, who completely disappeared twenty-five centuries prior. The eradication of the “promised line of Isaac” would then be complete. The Davidian lineage that brought us the Jewish messiah called Yahshua ben Yosef would vanish or be usurped by a Davidian Mason, a Davidian Persian, or a Shabbatian Jewish anti-Messiah. Gone also would be the lineage of the hoped-for messiah, David, son of David, whom the Jewish people are awaiting. The joy of the satanic hosts of heaven would be profound. As Satan sought to destroy the pure seed of Adam at the time of the flood of Noah, so Satan is seeking to destroy the pure genetic seed of Abraham, Isaac, and Jacob today. In the city of Jerusalem, which the God of Israel proclaimed was His own Zion, the forces of the evil one will seek to divide and eventually conquer it as his own possession. Today the Rabbis of Judaism represent the School of Hillel, but some act as if they are part of the School of Shammai. There is a recent trend to pull away from their Christian brethren and return to an anti-Goyim (anti-gentile) attitude. They would do better to accept Yehoshua and challenge His Torah. Red Poppies at Jerusalem – Photo by Robert Mock Yet many Christians today criticize the Jewish people severely for worshipping the Talmud instead of the TaNaKh, the Jewish Scriptures, as their source of biblical truth. In their covenant relationship with the God of Israel, the Jews were to guard, protect, and preserve the Torah. The Talmud is their legal documentation of the challenges and rebuttals on how to live a life of Torah. It is a legal account of all the arguments for and against a halakhic dispute. As such, the God of Israel may be telling them that this is an account that will be kept for eternity as a testament of how humans struggled with sin and sought to come into oneness and fellowship with the Divine. Is it not interesting how we criticize the Jews for their reverence for the Talmud, for here they have preserved their arguments and debates over the religious fine points of the Law, while we Christians bring only our religious amnesia? For two millennia, the Christian world forgot that Jesus was an orthodox, Sabbath-keeping, temple-worshipping, festival-celebrating, halakhah-observing Jew. He was not a Palestinian, for Palestine did not exist. He was an educated rabbi (teacher of the Torah) who spoke Hebrew, Aramaic, and Greek. He was not a poorly educated peasant of abject poverty. He grew up in a messianic culture with royal Davidian heritage, and He lived His life as the fullest manifestation of the Torah, living the life of a Hasid, above and beyond the “letter of the Law”. After the destruction of Jerusalem and the temple of Herod, rabbinic Judaism turned inward.
They were desperate to preserve the oracles of God, plus the traditions and the oral legacy of the Mosaic Law, before its sages were totally wiped out by the Roman persecutions and pogroms. Yet, as the Apostle Paul stated, the Jews were blinded by their own God for His own Divine Purpose. Romans 9:3-5; 11:1-2 (parts), 7-12 - “For I could wish that I myself were accursed from Christ for my brethren, my countrymen according to the flesh, who are Israelites, to whom pertain the adoption, the glory, the covenants, the giving of the law (Torah), the service of God, and the promises; of whom are the fathers and from whom, according to the flesh, Christ came, who is over all, the eternally blessed God, Amen… I say then, has God cast away His people? Certainly not! For I also am an Israelite, of the seed of Abraham, of the tribe of Benjamin. God has not cast away His people whom He foreknew…What then? Israel has not obtained what it seeks, but the elect (Nazarenes) have obtained it, and the rest were blinded. Just as it is written: ‘God has given them a spirit of stupor, Eyes that they should not see, And ears that they should not hear, To this very day.’ (Isaiah 29:10, 13) And David says: ‘Let their table become a snare and a trap, A stumbling block and a recompense to them. Let their eyes be darkened, So that they do not see, And bow down their back always.’ (Psalms 69:22-23) I say then, have they stumbled that they should fall? Certainly not! But through their fall, to provoke them to jealousy, salvation has come to the Gentiles. Now if their fall is riches for the world, and their failure riches for the Gentiles, how much more their fullness!” Orthodox Jews Praying at the Western Wall at the Kotel – Photo by Robert Mock The God of Israel gave the Jewish people tunnel vision so that they would never forget their purpose: to preserve the testimony of their God from Sinai, that He is the Creator of this world and continues to love the descendants of that human creature He created in His own Image (Tiferet), called the red man, “Adam”. It would be the role of the God of Israel to bring the right geo-political changes on this earth so that all men, Jew and Gentile alike, would have an opportunity to await the coming of the messiah. For those with “eyes that can see,” that day of the messiah is at hand. The question must be asked: “Is it a burning desire in your life to see the coming of the messiah in your lifetime? Are you eagerly watching, preparing, and waiting for His return? Are you preparing your life and your family’s lives for the social, political, religious, and earth changes that are coming with the arrival of the birth-pangs of the messiah?” Many of you want to say yes, but are scared to do so. It was the thesis of Rabbi Isaac Luria that the School of Shammai, the “Pharisees” in power in first-century Judaism, might represent a shadow picture of how the Torah will be observed in the future messianic era. May we suggest that the theological idealism of the School of Shammai will once again return? If so, we might want to reconsider the philosophical ideology of the School of Shammai. The disciples of the School of Shammai were not faulted for their brilliant legal understanding of how to live a life of Torah. The problem was, they did not practice what they preached. They were hypocrites!
It was Yahshua (Jesus), who stood in opposition to the hypocrisy of the Shammaite Pharisees in Jerusalem, who also stated that the legal rulings of the House of Shammai, as they sat on the “Seat of Moses” in the Great Sanhedrin, were correct. Jewish Males in Study and Prayer at the Western Wall – Photo by Robert Mock Matthew 23:2 – “The scribes and the Pharisees sit in Moses’ seat. Therefore whatever they tell you to observe, that observe and do, but do not do according to their works; for they say, and do not do.” There are some who may suggest that this article on Jesus and the Shammaites might be unfairly critical of the Pharisees in the days of Yahshua. We must not forget, it was for the scribes and the Pharisees of the School of Shammai that Jesus reserved these forceful condemnations: Matthew 23:23-25, 27-28 – “Woe to you, scribes and Pharisees, hypocrites! For you pay tithe of mint and anise and cummin, and have neglected the weightier matters of the law: justice and mercy and faith; These you ought to have done, without leaving the others undone; Blind guides, who strain out a gnat and swallow a camel. Woe to you, scribes and Pharisees, hypocrites! For you cleanse the outside of the cup and dish, but inside they are full of extortion and self-indulgence… Woe to you, scribes and Pharisees, hypocrites! For you are like whitewashed tombs which indeed appear beautiful outwardly, but inside are full of dead men’s bones and all uncleanness. Even so you also outwardly appear righteous to men, but inside you are full of hypocrisy and lawlessness.” The issue was not that they brilliantly understood how to apply the Torah to everyday life, but that they failed to live it. They did not follow their own teachings. This is why they were called hypocrites. They portrayed an outer religious demeanor, but inside they did not follow justice or mercy, nor did they have faith (emuna) in the Sovereignty of the God of Israel. They wanted to be the providence for the Jews in Judea, usurping the role of the true Providence. They were “blind guides” who inside used extortion to satisfy their lives of indulgence. Their exteriors were like whitewashed tombs while inside they were spiritually unclean. Instead of following the letter of the Law, inside they were lawless (Torah-less). Over and over, Jesus would state the halakhah of the Jews and then provide a spiritual interpretation of that legal ruling of the Law of Moses. It was during this time, in the era of the first messianic revival of Jesus the Nazarene, that He preached and taught about living a life of Torah in the Kingdom of God. Here He gave a vision of that future prophetic messianic era. In that future era, the “conceptual” ideals of Shammai may not only become reality but be felt to be the most “pragmatic” and practical way of living a life of holiness in an era when the Torah will be revered around the world. The Site where the Jewish People Plan to Build the Third Temple to the God of Abraham, Isaac, and Jacob – Photo by Robert Mock No longer would the Jews have to fear the anti-Semitic hatred of the world against the Jews. No longer would the imperial nations of the world seek to destroy the Torah culture of the remnant of the Hebrew people. By this time, the fulfillment of Ezekiel’s vision (Ezekiel 37:15-28) depicting the joining of the stick “For Judah and for the children of Israel” with the stick “For Joseph, the stick of Ephraim, and for all the house of Israel” will have been complete.
According to Rabbi Harvey Falk in his study, “Jesus the Pharisee: A New Look at the Jewishness of Jesus,” we note that Hillel the Elder, though noted for his piety and humility, was not well known among his contemporaries. In fact, when he was appointed to the chief position as the Nasi or President of the Great Sanhedrin in Jerusalem, according to the Talmud (Pesahim 66A), the sons of Bathyra, the family of Shemaya, the prior Nasi, had not heard about Rabbi Hillel. When we look at the remarks of Hillel in Avot, it appears that he attributed his appointment to this high office to the fact that neither they nor any other candidate could be considered a man of diligent study and integrity. It also suggests that the qualities of the Hasidim (pious ones) were in short supply in the Land of Israel. Avot 2:5 by Hillel the Elder – “Where there are no men, strive to be a man.” The Talmudic passage in Berakhot 63A states that in the presence of others who are able to teach the Torah, you should keep to yourself and not try to compete. It was noted by the scholar Resh Lakish that immediately following this observation, a scroll of the Hasidim (Essenes) was discovered; in this scroll was an emphasis on constant devotion. (Ibid 45) In fact, in the Talmud (Sanhedrin 11A) as well as by the authors of the Dead Sea Scrolls, Hillel and his disciples in the School of Hillel, plus the Essenes, were referred to as Hasidim. They were described neither as Pharisees nor as scribes. Over and over we have seen the clashes in ideology and philosophy as being between the Schools of Hillel and Shammai. More important, the secretive sect of the Essenes, in their attitude towards the Righteous Gentile, was closely aligned with the philosophical outlook of the Sanhedrin President or Nasi, Hillel the Elder. Torah Ark Scroll at the Western Wall at the Kotel in Jerusalem – Photo by Robert Mock There may soon come the day of the Messiah when David, son of David, will rule and the Messiah of Israel will return to claim His own. May we suggest that in that millennial “Day of the Lord” the “kingdom of God” will be fully instituted with an era of Torah Enlightenment that will spread over the entire earth? Isaiah 2:2-5 – “Now it shall come to pass in the latter days, that the mountain of the Lord’s house shall be established on the top of the mountains, and shall be exalted above the hills; and all nations shall flow to it. Many people shall come and say, ‘Come, and let us go up to the mountain of the Lord, to the house of the God of Jacob; He will teach us His ways, and we shall walk in His paths.’ For out of Zion shall go forth the law, and the Word of the Lord from Jerusalem. He shall judge between the nations, and rebuke many people; They shall beat their swords into plowshares, and their spears into pruning hooks; nation shall not lift up sword against nation, neither shall they learn war anymore. O house of Jacob, come and let us walk in the light of the Lord.” There are many concepts of what the future millennial era will be like. Some millennial concepts are other-worldly new earths where we will have the privilege to traverse the universes in our glorified new bodies, like Yahshua’s post-resurrection body. Other millennial concepts depict a this-worldly earth where the Jews, with the return of their Lost Tribal brothers of the House of Israel called Ephraim or Joseph, will reunite, and a Torah Renaissance will radiate over this planet earth.
Jesus the Nazarene also gave us a shadow picture of how the "kingdom of God" will function in that future "Day of the Lord" when the Messiah will reign for a thousand years. Yahshua portrayed this era as one in which the foundation of social, political, and religious order will rest on the principles of living a life of Torah observance. During the days of Rabbi Yahshua, the superior intellectual understanding of how to live a life of Torah (called halakhah) was best articulated by the School of Shammai. Yet the "spirit" of a life of Torah was best articulated and lived by the hasidim of the School of Hillel and Menahem the Essene. When Yahshua strode onto the stage of first-century temple culture in Judea, He lived the "spirit" of Torah as the life of a Hasid, yet He understood the legal and intellectual halakhic understanding of living a life of Torah like the School of Shammai.

We must not forget that Yahshua did not accuse the Pharisees of failing to understand how to live a life of Torah. He accused them of being hypocrites who preached one thing and lived another. Yahshua reframed the message of both schools of Torah study. He gave a spiritual understanding to the disciples of the School of Shammai and a purer legal framework of Torah observance to the disciples of the School of Hillel and Menahem. As such, Yahshua took the pure message of Torah as given from Mount Sinai and showed how, with His Father in heaven living within their hearts, a true Torah-filled person will exude the radiance of God's holy love.

Maybe we will see that when the era of the Messiah arrives, the "spirit of Hillel" will be blended with the "Torah understanding of Shammai". Out of this symbiosis and synthesis, the doors of Torah observance will be opened to the whole world with an understanding that all men of all races may open up their hearts with love and adoration to the Holy One of Israel. When the anti-goyim (anti-gentile) attitude of Shammaite Judaism fades and the "love for all mankind" of the hasids of Hillel is coupled with the superb intellectual understanding of Torah of the Shammaites, the Golden Era of Torah Enlightenment will finally have its day, blessing all the world with the presence of the "Only Begotten Son of the Father in Heaven" and allowing the minds of all men to observe directly the glory of the Almighty One of Israel. May that "Day of the Lord" come soon!
Message from BibleSearchers

BibleSearchers scans the world for information that has relevance on the time of the end. It is our prayer that this will allow the believers in the Almighty One of Israel to "watch and be ready". Our readiness has nothing to do with trying to halt the progression of evil on our planet earth. In our readiness, we seek to be prepared for the coming of the Messiah of Israel so that goodness and evil will be manifested in their fullest. Our preparation is a pathway of spiritual readiness for a world of peace. Our defender is the Lord of hosts.
The time of the end suggests that the Eternal One of Israel intends to close out this chapter of earth's history so that the perpetrators of evil, those who seek power, greed, and control, will be eliminated from this planet earth. The wars of the heavens are being played out on this planet earth, and humans will live through them to testify of the might, power, justice, and love of the God of Israel.

In a world of corruption and disinformation, we cannot always know what the historical truth is and who is promoting evil or misinformation. We cannot guarantee our sources, but we will always seek to portray trends that can be validated in the Torah and the testimony of the prophets of the Old and the New Testament.
West Valley City • Lance Larsen was playing a game of pick-up basketball last Monday with colleagues from Brigham Young University when he got the call that he would be Utah's fourth poet laureate.

"I don't play basketball like a poet. I'm kind of rough and tumble," Larsen joked Thursday afternoon, minutes after Utah Gov. Gary R. Herbert officially announced that the 52-year-old Idaho native would serve the next five years as Utah's newest ambassador of assonance and viceroy of verse.

After Herbert's announcement, Larsen quoted 20th-century writer Franz Kafka. "A book must be the axe to break the sea frozen inside us," Larsen told attendees at the annual Mountain West Arts Conference at West Valley City's Utah Cultural Celebration Center. The same holds true for poetry, a play or any other work of art, Larsen said. "Poetry gives us a second chance at life. I hope I can do that during the next five years."

"Rough and tumble" as Larsen's basketball game may be, his Utah colleagues in poetry note that his skills are as formidable as his accomplishments. Jacqueline Osherow and Katharine Coles, both professors of English at the University of Utah and Guggenheim Fellowship recipients for their own poetry, count themselves impressed. "We're often taken by surprise by their weight and depth," writes Osherow of Larsen's poems on the back of his 2009 collection, Backyard Alchemy. Coles, the state's outgoing poet laureate, praised Larsen's "brainy" verse. "It's embedded in the sensory details of everyday life, and thoughtful, and spiritually attuned," she said.

A professor and associate chair of the English department at BYU, Larsen lives in Springville with his wife Jacqui, a painter, and the two of their four children who remain at home. The son of a geologist father and a home economics professor mother, he first found himself captivated by poetry after reading James Wright's "A Blessing" as a 21-year-old student at BYU, after which there was no turning back. After finishing a master's thesis on Willa Cather at BYU, he earned his doctorate in literature and creative writing at the University of Houston. His fourth collection of poems, Genius Loci, will be published later this year by University of Tampa Press.

In addition to Coles, past holders of the position include St. George poet David Lee and Logan poet Kenneth W. Brewer. Created in 1997, the post of state poet laureate is filled from names first gathered by a committee formed by the Utah Arts Council. After debate and discussion, the council narrows its choices to three. Those names are sent to the governor, who makes the final selection. "The selection process is a lot like 'American Idol,' but with less singing," Larsen quipped. The post is unpaid, although the Utah Arts Council occasionally foots the bill for expenses when the poet travels within the state.

"The literary arts are an essential part of our state's rich cultural heritage," Herbert said in announcing his choice. "Mr. Larsen's artistic accomplishments and teaching service make him ideally suited to continue the tradition of bringing poetry and literature to the people of Utah."

Coles ended her term by reading poems inspired by her recent visit to Antarctica. She also reminisced about the readings and workshops she presented in Utah schools, libraries and church basements. The same morning her appointment was announced by then-Gov. Jon Huntsman, Coles said, she got a phone call from an elderly man asking if she would read his late wife's poetry to assess its quality.
"Anyone who has the chance to do it, should do it," Coles said. "Unfortunately it's something that just falls on you." Leadership in the Arts Awards In addition to announcing Utah's new poet laureate on Thursday, Gov. Gary R. Herbert also presented the Governor's Leadership in the Arts Awards to four recipients. Teri Orr • Recognized for 18 years of work as director of the Park City Performing Arts Foundation, which has helped give the resort town a national profile on the arts scene. Stephen Goldsmith • Founder of Artspace nonprofit of affordable living and working spaces for Salt Lake City artists, was honored for his vision of "artists as city-builders." Chris Roberts • A teacher noted for his 34 years of integrating arts into the classroom, setting an example for other Utah educators in the advancement of the arts. The City of Ogden • The city was recognized for fostering a creative environment for arts with its urban setting and diverse population. "Not Necessarily at Rest" Rocks stacked at corners of a squatter's camp, colored bottles hanging from a tree. Broken oyster shells lining a dirt pathway to match the hems of clouds trundling their gossip over open-air markets towards the sea. How can those who watch us not be moved by our puny tries at beauty the gods who look down, the dead who sometimes look up? Yearning works through us, whiskers to tail, the way a yawning cat converts stretching into praise. Lance Larsen, from his 2009 collection Backyard Alchemy
Approval for Changes to French Minor

Executive Memorandum 89-001
January 18, 1989

From: Robin S. Wilson, President
Subject: Approval for Changes to French Minor

I approve the proposed revisions to the minor in French for implementation effective Fall 1989. The revisions, which change the total unit requirements for the minor from 20-26 units to 12-32 units, include the following:

Upper-division requirements: 12 Units

9 Units, with one course selected from each of the following pairs:
FREN 100A - Composition and Advanced Grammar - 3 Units
FREN 100B - Composition and Advanced Grammar - 3 Units
FREN 140A - Survey of French Literature - 3 Units
FREN 140B - Survey of French Literature - 3 Units
FREN 110 - Advanced Conversation or Phonetics - 3 Units
FREN 120 - Advanced Conversation or Phonetics - 3 Units

3 additional units from any other upper-division French course.

Lower-division requirements remain the same.
by Gerald Boerner

Today's profile examines the life and contributions of the controversial author and critic Susan Sontag. She wrote critically about war, especially the fighting in the former Yugoslavia. She was also a critic of photography, writing On Photography, which presented a different perspective on the images collected by most photographers. She was especially hard on photojournalists: when they reported that they could not find anything to photograph, the problem, according to Sontag, was that they couldn't find anything BAD to photograph! GLB

"War is, first of all, noise. Incredible noise. In Sarajevo, it was like that– all the time. That sound– except– well, between three and five in the morning. Sometimes it– it would be silent." — Susan Sontag

"When photojournalists report that 'there was nothing to photograph,' what this usually means is that there was nothing terrible to photograph." — Susan Sontag

"Susan Sontag is not a photographer, yet her famous book ON PHOTOGRAPHY is required reading in almost every serious photography course in the world." — Bill Moyers

"The taste for futurology, or prophecy, is of at least equal importance. But this taste also confirms the prevailing unreality of the real historical past. Some novels which are situated in the past…" — Susan Sontag

"We're not animals. We're not just people sheltering in our basements and standing on bread lines and water lines getting killed. You know people– when I came back people said, 'Well who went to the theater?' I said, 'The same people that went to the theater before the war.'" — Susan Sontag

"I don't think images can stop war, because I don't think images just come all wrapped up with their meanings– very apparent to us. I think the images, as I say, they'll disgust you with war in general, but they won't tell you which of the wars, let's say, that might be worth fighting…" — Susan Sontag

"Well, they can– of course they can't convey the totality. That goes without saying. No image can. But it's also the– when you watch things through an image, it's precisely affirming that you're safe. Because you are watching it. You're here and not there. And in a way you're also– you're– you're innocent. You're not doing it." — Susan Sontag

"As long as a particular disease is treated as an evil, invincible predator, not just a disease, most people with cancer will indeed be demoralized by learning what disease they have. The solution is hardly to stop telling cancer patients the truth, but to rectify the conception of the disease, to de-mythicize it." — Susan Sontag

Susan Sontag (1933 – 2004)

Susan Sontag was an American author, literary theorist, and political activist.
She also wrote a perceptive essay on photography and was the companion of Annie Leibovitz for years. Sontag was born Susan Rosenblatt in New York City to Jack Rosenblatt and Mildred Jacobsen, both Jewish Americans. Her father ran a fur trading business in China, where he died of tuberculosis when Susan was five years old. Seven years later, her mother married Nathan Sontag. Susan and her sister, Judith, were given their stepfather's surname, although he never formally adopted them. Sontag grew up in Tucson, Arizona, and, later, in Los Angeles, where she graduated from North Hollywood High School at the age of 15.

She began her undergraduate studies at Berkeley but transferred to the University of Chicago in admiration of its famed core curriculum. At Chicago, she undertook studies in philosophy and literature alongside her other requirements (Leo Strauss, Richard McKeon and Kenneth Burke were among her lecturers) and graduated with a B.A. She did graduate work in philosophy, literature, and theology at Harvard with Paul Tillich, Jacob Taubes, Morton White and others. After completing her Master of Arts in philosophy and beginning doctoral work at Harvard, Sontag was awarded a University Women's Association scholarship for the 1957-1958 academic year to St Anne's College, Oxford, where she had classes with Iris Murdoch, J. L. Austin, Alfred Jules Ayer, Stuart Hampshire and others. Oxford did not appeal to her, however, and she transferred after Michaelmas term of 1957 to the University of Paris. It was in Paris that Sontag socialized with expatriate artists and academics including Allan Bloom, Jean Wahl, Alfred Chester, Harriet Sohmers and María Irene Fornés. Sontag remarked that her time in Paris was, perhaps, the most important period of her life. It certainly provided the grounding for her long intellectual and artistic association with the culture of France.

At 17, while at Chicago, Sontag married Philip Rieff after a ten-day courtship. The philosopher Herbert Marcuse lived with Sontag and Rieff for a year while working on his book Eros and Civilization. Sontag and Rieff were married for eight years, during which they worked jointly on the study Freud: The Mind of the Moralist, which would be attributed solely to Philip Rieff as a stipulation of the couple's divorce in 1958. The couple had a son, David Rieff, who later became his mother's editor at Farrar, Straus and Giroux, as well as a writer in his own right.

The publication of Against Interpretation (1966), accompanied by a striking dust-jacket photo by Peter Hujar, helped establish Sontag's reputation as "the Dark Lady of American Letters." Movie stars like Woody Allen, philosophers like Arthur Danto, and politicians like Mayor John Lindsay vied to know her. In the movie Bull Durham, her fiction was disparaged in a speech by the fictional Crash Davis to the fictional Annie Savoie, in which Davis says he believes her novels are "over-rated, self-indulgent crap." In her prime, Sontag avoided all pigeonholes. Like Jane Fonda, she went to Hanoi, and wrote of North Vietnamese society with much sympathy and appreciation (see "Trip to Hanoi" in Styles of Radical Will). She maintained a clear distinction, however, between North Vietnam and Maoist China, as well as East European communism, which she later famously rebuked as "fascism with a human face."
Grave of Susan Sontag

Sontag died in New York City on 28 December 2004, aged 71, from complications of myelodysplastic syndrome which had evolved into acute myelogenous leukemia. Sontag is buried in Montparnasse Cemetery, in Paris. Her final illness has been chronicled by her son, David Rieff.

Sontag's literary career began and ended with works of fiction. After teaching philosophy and theology at Sarah Lawrence College, City University of New York and Columbia University under Jacob Taubes from 1960 to 1964, Sontag left academia and devoted herself to full-time writing. At age 30, she published an experimental novel called The Benefactor (1963), following it four years later with Death Kit (1967). Despite a relatively small output, Sontag thought of herself principally as a novelist and writer of fiction. Her short story "The Way We Live Now" was published to great acclaim on 26 November 1986 in The New Yorker. Written in an experimental narrative style, it remains a key text on the AIDS epidemic. She achieved late popular success as a best-selling novelist with The Volcano Lover (1992). At age 67, Sontag published her final novel In America (2000). The last two novels were set in the past, which Sontag said gave her greater freedom to write in the polyphonic voice.

It was as an essayist, however, that Sontag gained early fame and notoriety. Sontag wrote frequently about the intersection of high and low art and the form/content dichotomy across the arts. Her celebrated and widely-read 1964 essay "Notes on 'Camp'" was epoch-defining, examining an alternative sensibility to that which would see the best art in terms of its seriousness. It gestured towards and expounded the "so bad it's good" concept in popular culture for the first time.

In 1977, Sontag published On Photography, which gave media students and scholars an entirely different perspective on the camera in the modern world. The essay is an exploration of photographs as a collection of the world, mainly by travelers or tourists, and the way we therefore experience it. She outlines her theory of taking pictures as one travels: The method especially appeals to people handicapped by a ruthless work ethic – Germans, Japanese and Americans. Using a camera appeases the anxiety which the work driven feel about not working when they are on vacation and supposed to be having fun. They have something to do that is like a friendly imitation of work: they can take pictures.

Sontag suggested photographic "evidence" be used as a presumption that "something exists, or did exist", regardless of distortion. For her, the art of photography is "as much an interpretation of the world as paintings and drawings are", for cameras are produced rapidly as a "mass art form" and are available to all of those with the means to attain them. Focusing also on the effect of the camera and photograph on weddings and modern family life, Sontag reflects that these are a "rite of family life" in industrialized areas such as Europe and America. To Sontag "picture-taking is an event in itself, and one with ever more peremptory rights – to interfere with, to invade, or to ignore whatever is going on". She considers the camera a phallus, comparable to ray guns and cars, which are "fantasy-machines whose use is addictive". For Sontag the camera can be linked to murder and a promotion of nostalgia while evoking "the sense of the unattainable" in the industrialized world.
The photograph familiarizes the wealthy with "the oppressed, the exploited, the starving, and the massacred" but removes the shock of these images because they are available widely and have ceased to be novel. Sontag saw the photograph as valued because it gives information, but acknowledged that it is incapable of supplying a moral standpoint, although it can reinforce an existing one.

Sontag championed European writers such as Walter Benjamin, Roland Barthes, Antonin Artaud, E. M. Cioran, and W. G. Sebald, along with some Americans such as María Irene Fornés. Over several decades she would turn her attention to novels, film, and photography. In more than one book, Sontag wrote about cultural attitudes toward illness. Her final nonfiction work, Regarding the Pain of Others, re-examined art and photography from a moral standpoint. It spoke of how the media affects culture's views of conflict.

A new visual code [Photography]

In her essay On Photography, Sontag says that the evolution of modern technology has changed the viewer in three key ways. She calls this the emergence of a new visual code. Firstly, Sontag suggests that modern photography, with its convenience and ease, has created an overabundance of visual material. As photographing is now a practice of the masses, due to a drastic decrease in camera size and an increase in the ease of developing photographs, we are left in a position where "just about everything has been photographed" (Sontag 1977, 3). We now have so many images available to us of things, places, events and people from all over the world, many of no immediate relevance to our own existence, that our expectations of what we have the right to view, want to view or should view have been drastically affected. Arguably, gone are the days when we felt entitled to view only those things in our immediate presence or that affected our micro world; we now seem to feel entitled to gain access to any existing images. "In teaching us a new visual code, photographs alter and enlarge our notion of what is worth looking at and what we have the right to observe." This is what Sontag calls a change in "viewing ethics."

Secondly, Sontag comments on the effect of modern photography on our education, claiming that photographs "now provide most of the knowledge people have about the look of the past and the reach of the present." Without photography, only those few people who had been there would know what the Egyptian pyramids or the Parthenon look like, yet most of us have a good idea of the appearance of these places. Photography teaches us about those parts of the world that are beyond our touch in ways that literature cannot.

Sontag also talks about the way in which photography desensitizes its audience. She introduces this discussion by telling her own story of the first time she saw images of horrific human experience. At twelve years old, Sontag stumbled upon images of Holocaust camps and was so distressed by them that she says, "When I looked at those photographs something broke… something went dead, something is still crying." Sontag argues that there was no good to come from her seeing these images as a young girl, before she fully understood what the Holocaust was. For Sontag, the viewing of these images left her a degree more numb to each horrific image she viewed afterward, as she had been desensitized. According to this argument, "images anesthetize," and the open accessibility to them is a negative result of photography.
Sontag examines the relationship between photography and reality. Photographs are depicted as a representation of realism. Sontag claimed that "such images are indeed able to usurp reality because first of all a photograph is not only an image, an interpretation of the real; it is also a trace, something directly stenciled off the real" (Sontag 1982, 350). It is a resemblance of the real, as the photograph becomes an extension of the subject. However, the role of the photograph has changed, as copies destroy the idea of an experience. The image has been altered to convey information and has become an act of classification. Sontag highlights the notion that photographs are a way of imprisoning reality, making the memory stand still. Ultimately, images are a form of surveillance of events that trigger the memory. In modern society, photographs are a form of recycling the real. When a moment is captured, it is assigned a new meaning as people interpret the image in their own manner. Sontag suggests that images desensitize us to the real thing, as people's perceptions are distorted by the construction of the photograph. However, this has not stopped people from consuming images; there is still a demand for more photographs. Sontag thus shaped the audience's understanding of reality, as photographs have become a form of surveillance.

Susan Sontag also identified some uses of photography, such as memorializing and providing evidence: "Photography has become one of the principal devices for experiencing something, for giving an appearance of participation" (Sontag 1977). She also states that "to collect photographs is to collect the world" (Sontag 1977). Sontag believes that photography implies that we know about the world if we accept it as the camera records it. She refers to photographs as memento mori: to take a photograph is to participate in another person's mortality, vulnerability and mutability. The progression from the written word to capturing an image shifts the weight of the interpretation from the author to the receiver. Sontag believes, however, that "photographed images do not seem to be statements about the world so much as pieces of it, miniatures of reality that anyone can make or acquire". A photograph is a slice in time and, in effect, more memorable than moving images such as video. It fills the gaps in our mind of the past and present. Even though photography has such effect, there are limits to photographic knowledge of the world. The limitation is that it can never constitute ethical or political knowledge. It will always be some kind of sentimentalism, whether cynical or humanist. Our modern-day society can be described as one feeding on aesthetic consumerism. There is an addiction and a need to constantly have reality confirmed and experiences enhanced by photographs.

Criticism and Acclaim

On Photography won the National Book Critics Circle Award for 1977 and was selected among the top 20 books of 1977 by the editors of the New York Times Book Review. In 1977, William H. Gass, writing in the New York Times, said the book "shall surely stand near the beginning of all our thoughts upon the subject" of photography.
In a 1998 appraisal of the work, Michael Starenko wrote in Afterimage magazine that "On Photography has become so deeply absorbed into this discourse that Sontag's claims about photography, as well as her mode of argument, have become part of the rhetorical 'tool kit' that photography theorists and critics carry around in their heads." Sontag's work is literary and polemical rather than academic. It includes no bibliography and few notes. There is little sustained analysis of the work of any particular photographer, and the book is not in any sense a research project of the kind often written by Ph.D. students. Many of the reviews from the world of art photography that followed On Photography at the time of its publication were skeptical and often hostile, such as those of Colin L. Westerbeck and Michael Lesey.

In 2003, Sontag published a partial refutation of the opinions she espoused in On Photography in her book-length essay Regarding the Pain of Others. That book may be deemed a postscript or addition to On Photography. Sontag's publishing history includes a similar sequence with regard to her work Illness as Metaphor from the 1970s and AIDS and Its Metaphors a decade later, which included a revision of many ideas contained in the earlier work.

Sontag became aware of her attraction to women in her early teens and wrote in her diary aged 15, "so now I feel I have lesbian tendencies (how reluctantly I write this)." Aged 16, she had her first sexual encounter with a woman: "Perhaps I was drunk, after all, because it was so beautiful when H began making love to me …. It had been 4:00 before we had gotten to bed … I became fully conscious that I desired her, she knew it, too…."

In the early 1970s, Sontag was romantically involved with Nicole Stéphane (1923-2007), a Rothschild banking heiress turned movie actress. Sontag later had relationships with photographer Annie Leibovitz, with whom she was close during her last years, as well as choreographer Lucinda Childs, writer María Irene Fornés, and other women. In an interview in The Guardian in 2000, Sontag was quite open about her bisexuality: "Shall I tell you about getting older?", she says, and she is laughing. "When you get older, 45 plus, men stop fancying you. Or put it another way, the men I fancy don't fancy me. I want a young man. I love beauty. So what's new?" She says she has been in love seven times in her life, which seems quite a lot. "No, hang on," she says. "Actually, it's nine. Five women, four men."

Background and biographical information is drawn from the Wikipedia articles on Susan Sontag and On Photography, a Boston Review interview with Susan Sontag, and the PBS NOW with Bill Moyers interview transcript, "Susan Sontag — A Bill Moyers Interview."
Best Online Theology & Related Religion Degree Programs

While schools do not necessarily have to be Christian in order to offer Theology degree programs, many Christian colleges offer this program and other subjects related to religion. Many of the schools below are Christian and offer online programs for Theology, and the schools can be contacted for free information with the links provided.

- If you are considering an AA in Christian Ministry or a BA in Christian Leadership & Ministry, Ohio Christian University offers online degree programs. These degrees are designed for students wishing to enter Christian ministry as a chaplain or in evangelistic work. A variety of concentrations are available.
- If you are considering an online degree in Religion, Grand Canyon University offers a BA in Theology, Christian Studies, Youth Ministry, and Christian Leadership. This program uses biblical studies as the foundation for a deeper understanding of theology, philosophy and history. Graduates acquire the necessary leadership skills for effective ministry.
- Liberty University offers online degree programs including an MA in Theological Studies and a BA or AA in Religion. The School of Religion provides a quality Christian education in a relevant format that is based on Christian teachings and principles. Students are prepared for careers in ministry and theology.
- If you are considering an online degree in Biblical Leadership, Southwestern Christian University has a program. This degree focuses on the administrative issues and special training required for effective leadership in ministry environments. It can be completed in as few as 18 months. Coursework is in writing, history, and literature.
- For those seeking a PhD in Pastoral Community Counseling, Argosy University offers an online degree program. This program is designed to prepare pastoral counselors to address individual and communal development in a fashion that is ethically responsible. It integrates the engagement of knowledge and development of skills with practical research.
- If you are considering a BS in Bible Studies, Lancaster Bible College has an online degree program. This program is designed for the student who wants intensive Bible education along with the opportunity for specific concentrations and elective courses. Students are able to choose courses from various professional departments, while focusing on ministry work.
- Students interested in studying religion online should consider a degree in Liberal Studies. St. Leo University offers both an AA and a BA. This program provides a broad perspective on human behavior, ideas, and values through a multidisciplinary study of the social and natural sciences. A focus is placed on developing critical thinking skills.

What Types of Theology Degrees are Available Online?

Most people who study theology do not actually finish school with a "theology degree." The field is so rich and diverse that many niche specialties are available, and most students walk out of college with something like a Master of Divinity, or MDiv degree, in a particular type of religious or ministerial discipline. Some of the degrees available from theological schools, or seminaries, are as follows:

- Master of Arts – Christian Leadership: This degree can prepare you for a career as a pastor or minister, or other type of Christian leader. Christian leadership encompasses both clergy and non-clergy positions in religious institutions.
- Master of Divinity: An MDiv can lead to a career in religious scholarship, ministry, or just about any other work within a religious context. This is the most common degree issued by seminaries and divinity schools, and can be completed with an emphasis in some particular religion or sector of faith studies.
- Master of Arts – Chaplaincy: Chaplains provide religious counsel and services in specialized settings. Hospitals and prisons especially tend to have chaplains on duty to serve the religious needs of patients and inmates.
- Master of Religious Education: Teaching religion, in the context of a Bible study class, Sunday school, or a university seminary, can be a rewarding career choice for those who want to dedicate their lives to religion without actually becoming clergy.

How Long Does It Take to Get a Theology Degree?

Depending on what level of degree you are seeking, a theology degree can take anywhere from one to four years, and a program of study that earns both a bachelor's and a master's degree in a theology-related field could take five years or more. Below is a list of common theology degrees, with the approximate number of credit hours and years it would take to achieve them on a full-time class schedule.

- Associate of Arts – Religion: Generally, associate's degrees take two years of full-time study; approximately 60 credit hours. Part-time schedules are often a possibility, and the number of years required can change drastically depending on how many or few classes a student takes at once.
- Bachelor of Arts – Religion: Four years is the standard length of time for finishing a bachelor's degree, but statistics show that most students take five to six years to complete their baccalaureate studies. Most bachelor's degrees comprise 120 credit hours.
- Master of Arts – Christian Leadership: A Master of Arts degree can take between one and three years, depending on whether your bachelor's degree was in the same field as your master's. If you studied religion as an undergraduate, you can likely find an accelerated path through graduate school.
- Master of Divinity: An MDiv program can take from 80-100 credit hours, and can be completed in three years with a full-time course load. Many students take classes part time and finish their degree more slowly so they can continue to work while they are in school.

What Careers are Available to Someone with a Theology Degree?

Ministry and pastoring at a church are two of the primary careers sought by those who study theology, but they are not the only options. Teaching at a religious high school or a university is a viable option for theology students, as are chaplaincy and religious research and scholarship. Working in a religious establishment usually involves a lot of communication and collaboration with parishioners, clergy, and other religious workers. While it is not crucial to have the exact same beliefs as those you work with, some shared form of faith or religious belief is usually the basis for relationships between members of a given religious community, including employees of that community. Though religious organizations are far and away the largest employers of clergy and others with theology degrees, there is also a strong trend of hiring religious workers in the medical establishment. Hospitals and military bases always have chaplains or even priests to perform last rites and provide other religious services to patients and soldiers both on a day-to-day basis and in times of great need.
What Are Some Common Classes Required for Theology Students?

The focus of your Bachelor of Theology, Master of Theology, or MDiv degree will determine what types of classes you take, but as with most degree programs, there is a core curriculum that is relatively similar across programs and universities. Some of the core classes required for theological study include:

- Biblical Interpretation: Focuses on understanding the symbolic meaning of biblical stories. Some scholars choose to interpret the Bible as the literal word of God, where others believe that changes in human context over the millennia have left the Bible open for interpretation. Biblical interpretation can be a contentious subject for theologians in different camps.
- Philosophy of Religion: Rather than examining the precepts of any particular religion, this course examines the institution of religion as a whole. Whether you believe in God or not, religion has demonstrable influence on human events, and a philosophy of religion course seeks to explore this dynamic as a whole.
- Intercultural Ministry: Even within a single religion or sect, different cultures have profoundly different approaches to religious practices. Intercultural ministry may involve traveling abroad to preach and evangelize, or just building relationships between members of a faith from different secular cultures.
- Pastoral Ministries: This course is an introduction to some of the responsibilities of church pastors and other clergy. Learning to minister to parishioners as a group, and to provide the individual counsel that many expect from their religious leaders, is likely to be a large part of your career after you finish a degree in theology.
- Homiletics: Homiletics is essentially the study of preaching. A homily is a broad term for various types of religious speaking, but in practice it is much the same as a sermon or delivery of catechism.

If your emphasis is on a religion other than Christianity or Judaism, Biblical Interpretation classes will likely be replaced with a study of the holy book most relevant to your religion.

How Can I Become Ordained?

Becoming a priest or pastor and moving up in the ranks of religious officials is a long-term goal for many theology students. The process of being ordained as a minister varies across denominations, but if your ultimate goal is to become a leader in your denomination of choice, there are a few steps you can follow toward realizing that goal.

- Being an active member of the denomination you want to be a leader in is a must. Knowing the politics and apocryphal beliefs that have built up on a congregational and denominational level in your religion will help you navigate the path toward leadership.
- Baptism is likely a necessary step in becoming the leader of a church. Most congregations will shy away from hiring a pastor who is unbaptized in the faith.
- Volunteering at a church or seminary can help you decide whether religious leadership is really an ideal job for you. However interested you are in theology, if the day-to-day work of pastoring does not keep you excited, then you might want to look into other career paths, such as religious teaching or research.

Where Are Other Resources for Aspiring Theology Students?

There is a strong community of theological scholars and religious leaders, so finding a mentor or at least a good blog on the subject shouldn't be hard.
Listed below are a few sites that can help you learn more about what theology school is like, and help you decide whether studying theology is the best path for you.

- Helm's Deep is the philosophical theology blog of Paul Helm, a teaching fellow at Regent College in Vancouver.
- Restorative Justice is a blog kept by Howard Zehr, a professor of restorative justice and peace-building at Eastern Mennonite University.
- That Theology Student recently wrapped up, and is more of a peek into the world of a theology student than a resource of hard info about theology. The blog was updated consistently for a while, and may offer some insight into what life is like for one theology student, though new updates are no longer being published.

How Do I Get Started?

Studying theology, whether for personal enlightenment or professional advancement, requires a certain level of respect for the power of religion and the global communities founded upon it. To start on your path toward a degree in theology, visit the websites of some of the schools listed above and learn more about their theology degree offerings.
Here is a late-breaking news story, direct from the campaign trail in Wisconsin, Minnesota and Iowa. The Kerry campaign, in coordination with shadowy 527 groups funded by corporate backers and disgraced Senator Bob Torricelli, is invoking liberal progressives, including Michael Moore and Noam Chomsky, in its literature attacking Nader in the closing days of the campaign.

Here is what a leaflet distributed by the Nader Factor in Wisconsin said: "Michael Moore, Noam Chomsky, Tim Robbins, Cornel West and Susan Sarandon have called on Nader to unite against Bush." The leaflet then goes on to use Winona LaDuke's name as a supporter of John Kerry.

The Nader Factor is actually a front group for corporate interests, including military, construction, computer, diet foods, and communications businesses, and is run by David Jones. It is funded by the National Progress Fund, which worked through Americans for Jobs and Healthcare, whose first major venture of the 2004 campaign was to aggressively attack Howard Dean in Iowa. Those advertisements were in part funded by the disgraced former New Jersey Senator Robert Torricelli and his criminally convicted corporate backers. Torricelli gave $50,000 to the effort.

During the Democratic primary, Tricia Enright, who was Dean's communications director, described David Jones's advertisements by saying: "Whoever is behind this should crawl out from underneath their rock and have the courage to say who they are. It is hateful, it's cynical, it's exactly the kind of ad that keeps people from voting, that keeps people from getting involved in the process." After the primaries she teamed up with David Jones and others to form The Nader Factor, which was funded by the National Progress Fund and Americans for Jobs and Healthcare.

The Nader Factor seems closely tied to the Kerry/Edwards campaign.

* Enright left as president of The Nader Factor to work for the Kerry/Edwards campaign in Pennsylvania.

* Jones and other Nader Factor staff attended the Democratic convention, where seminars were held on how to keep Nader off the ballot, minimize his vote and raise money for so-called "independent" 527s.

* The Nader Factor shares funders with the Kerry campaign, including Bernard Schwartz, chairman of Loral Corporation, a satellite communications company that is part of the military-industrial complex, and S. Daniel Abraham, former chairman of Slim-Fast (the Center for Science in the Public Interest describes their product unfavorably as "high-fructose corn syrup, yogurt-flavored coating, partially defatted peanut flour, honey, high-maltose corn syrup, and the other ingredients in a Slim-Fast Meal On-The-Go Bar").

* The registered e-mail address of Americans for Jobs, the major funder of The Nader Factor, belongs to Mark W. Ward, a client specialist in the Washington, D.C., office of the billion-dollar law firm of Skadden, Arps, Slate, Meagher & Flom LLP. One of the largest law firms in the world, Skadden is also the fifth most generous career patron to Senator Kerry, its employees directly donating $125,550 to his campaigns and its clients lavishing many times more than that on John Kerry or the Democrats over the years.

The Nader Factor's modus operandi is to spread false statements about the Nader/Camejo campaign. For example, they are handing out literature that makes the false claim: "Mr. Nader is now accepting funds from the Bush front group Swift Boat Veterans for Truth." In fact, the Nader/Camejo campaign has received no funds from Swift Boat Veterans for Truth.
Some individuals who have funded Swift Boat Veterans for Truth have sent unsolicited donations to the campaign, but the leadership of the group says they will be voting for Bush on Election Day. Michael Moore, Noam Chomsky and other progressives like Jeff Cohen and Norman Solomon should not want to be in bed with some of the sleaziest corporate elements in American politics. They need to renounce this group in the closing days of the campaign. These and other liberals have accepted the falsehood that Nader is accepting organized Republican support. Liberals must now face up to their own truth: they are complicit in Democratic Party dirty corporate tricks.

Carl J. Mayer, an attorney, is an advisor to the Nader for President campaign. He ran for Congress on the Green Party ticket. He is the author of Shakedown: The Fleecing of the Garden State.
Governor Daniel K. Tarullo
At the Federal Reserve Bank of San Francisco Conference on Challenges in Global Finance: The Role of Asia, San Francisco, California
June 12, 2012

Shadow Banking After the Financial Crisis

The three decades preceding the financial crisis were characterized in the United States by the progressive integration of traditional lending and capital markets activities. This trend diminished the importance of deposits as a source of funding for credit extension in favor of capital market instruments sold to institutional investors. It also altered the structure of the financial services industry, both transforming the activities of broker-dealers and fostering the emergence of large financial conglomerates. Although the structure of foreign banking systems was less noticeably changed, many foreign banks drew increasingly on the resulting wholesale funding markets and made significant investments in the mortgage-backed securities that had proliferated in the first decade of this century.

The financial crisis underscored the failure of the American regulatory system to keep pace with these developments and revealed the need for two reform agendas. One must be aimed specifically at the problem of too-big-to-fail institutions. The other must be directed at the so-called shadow banking system, which refers to credit intermediation involving leverage and maturity transformation that is partly or wholly outside the traditional banking system. As I have noted on other occasions, most reforms to date have concentrated on too-big-to-fail institutions, though many of these reforms have yet to be fully implemented. The shadow banking system, on the other hand, has been only obliquely addressed, despite the fact that the most acute phase of the crisis was precipitated by a run on that system. Indeed, as the oversight of regulated institutions is strengthened, opportunities for arbitrage in the shadow banking system may increase.

Today I want to focus on the development of a regulatory reform agenda for the shadow banking system. As those who have been following the academic and policy debates know, there are significant, ongoing disagreements concerning the roles of various factors contributing to the rapid growth of the shadow banking system, the precise dynamics of the runs in 2007 and 2008, and the relative social utility of some elements of this system. Conclusions drawn from these debates will be important in eventually framing a broadly directed regulatory plan for the shadow banking system. However, as it is neither necessary nor wise to await such conclusions in order to begin implementing a regulatory response, I will follow my discussion of the vulnerabilities created by shadow banking with some suggestions for near- and medium-term reforms.

Fragility of the Shadow Banking System

It is not my purpose here today to discuss the history and complex nature of the shadow banking system. There is a rich and growing academic literature devoted to this task. However, I do want to identify some features of shadow banking that are reasonably well-established and particularly salient for reform efforts. First, and in many respects foremost, it bears noting that the use of the term "shadow banking" refers not simply to the functions of credit intermediation and maturity transformation. Shadow banking also refers to the creation of assets that are thought to be safe, short-term, and liquid, and as such, "cash equivalents" similar to insured deposits in the commercial banking system.
Of course, as many financial market actors learned to their dismay, in periods of stress these assets are not the same as insured deposits. The years preceding the financial crisis saw a surge in the volume of dollar-denominated, seemingly safe, seemingly liquid financial instruments. The causal interplay of factors leading to this surge is still actively debated. But it seems reasonably clear that both a rise in the demand by investors for safe, liquid assets as tools for precautionary or transactional liquidity and a rise in demand for short-term financing by certain borrowers--notably financial intermediaries looking to fund longer-term assets--played important, probably reciprocally reinforcing roles.

Examples of investor demand for safe, liquid assets are not hard to identify. One source has been foreign official investors, mostly emerging market countries, which invested about $1.6 trillion in the United States in the four years preceding the crisis, largely in U.S. Treasury and agency securities. Much of this activity arose from the investment of foreign exchange reserves by countries running large current account surpluses. Some of these reserves were undoubtedly built up as a precautionary measure in light of the financial problems in emerging markets during the late 1990s, while others are attendant to policies of managed exchange rates. This official sector demand for safe assets was largely if not entirely focused on U.S. government securities, rather than cash equivalents. But this source of demand absorbed roughly 80 percent of the increase in U.S. Treasury and agency securities over the four-year period, potentially crowding out other investors and thereby increasing their demand for cash equivalents that appeared to be of comparable safety and liquidity.

A second source of demand has been nonfinancial firms, which responded to the market disruptions associated with defaults by Enron and other firms more than a decade ago by boosting their holdings of cash. The pressure to hold large amounts of cash likely increased when a major ratings agency began publishing liquidity risk assessments of nonfinancial firms.

A third source of demand for cash equivalents resulted from the adoption of more elaborate investment strategies by many institutional investors. For example, as more such investors used derivatives or short-selling as part of their overall strategies, they needed cash or cash-like instruments for margining and other collateral purposes. Moreover, of course, as the amount of assets under professional management increased, the demand for safe, liquid investments also inevitably increased, since intermediaries need a place to park funds that are awaiting investment or needed to meet unexpected withdrawals.

The growing demand for safe and liquid assets was met largely by the shadow banking system's creation of assets that were seemingly safe and seemingly liquid. New varieties of shadow-banking activities were created, some pre-existing types grew larger, and the shadow banking system became much more internationalized. For example, the volume of asset-backed commercial paper, or ABCP, grew enormously. Many ABCP vehicles issued short-term, highly rated liabilities and bought longer-term, highly rated securities, often mortgage-backed securities. Many of the vehicles were sponsored abroad, especially by European banks, which issued dollar-denominated ABCP in the U.S. market and bought dollar-denominated assets in the U.S. market.
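To put rough numbers on the maturity transformation just described, the following Python sketch works through the stylized arithmetic of an ABCP conduit. Every figure here is invented for illustration and nothing below comes from the speech; the point is simply that rolling 30-day paper against long-term assets earns a steady spread while leaving the entire funding stack to be re-sold every month.

# Stylized ABCP conduit arithmetic (all numbers are hypothetical):
# fund long-term, higher-yielding assets by continuously rolling
# short-term commercial paper, earning a spread while the paper rolls.

assets = 10_000_000_000.0   # long-term securities held (e.g., highly rated MBS)
asset_yield = 0.055         # assumed annual yield on those assets
abcp_rate = 0.045           # assumed annual rate paid on 30-day paper

annual_carry = assets * (asset_yield - abcp_rate)
print(f"spread earned per year while the paper rolls: ${annual_carry:,.0f}")

# The fragility: roughly every 30 days the whole funding stack matures and
# must be re-sold to investors. If investors refuse to roll, the sponsor's
# liquidity backstop (or a fire sale of the assets) must repay the paper.
print(f"paper that must be rolled each month: ${assets:,.0f}")

On these invented numbers, the conduit earns $100 million a year in carry, but only for as long as investors keep treating the paper as risk-free; the moment they do not, the sponsor inherits the full $10 billion funding need at once.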
The overall volume of this activity was very large, although the net flows between the U.S. and Europe were not, leaving European bank sponsors of such ABCP vehicles with a huge exposure when market participants stopped believing that ABCP was risk-free. It now seems clear that the tail risk associated with many shadow-banking instruments was not understood by many market actors, including both sellers and buyers. An important contributing factor on the buyers' side that helped set the stage for the 2007-2008 financial crisis was the widespread acceptance that risk-free assets could be created by augmenting what was already thought to be a low-risk asset with a promise from a large financial institution to provide liquidity or bear credit losses in the unlikely event that such support might be needed. When, in stressed conditions, the credibility of the promise came into question, the susceptibility to runs increased dramatically.

In some cases, there were explicit contractual provisions for liquidity support or credit enhancements, such as were provided to ABCP vehicles by their sponsoring banks. In other cases, the support was more implicit, and was conveyed in the marketing of the assets or through an historical pattern of providing support. Forms of implicit credit support were present in a variety of important funding channels and, to a considerable degree, persist today. Three examples are money market funds, the triparty repo market, and securities lending.

Money market funds aim to maintain a stable net asset value of one dollar per share and to meet redemption requests upon demand. As such they are the very model of a nonbank "deposit" or cash equivalent. Unlike other mutual funds, money market funds are allowed to round their net asset values to one dollar per share so long as the underlying value of each share remains within one-half cent of a dollar. But a drop in the unrounded net asset value of more than one-half of one percent causes a money fund to "break the buck," a scenario in which losses, at least in theory, would be passed along to the fund's investors. However, fund sponsors historically have absorbed losses whenever necessary to prevent funds from breaking the buck, with only two exceptions. Even though they had no legal obligation to do so, sponsors voluntarily supported their funds more than 100 times between 1989 and 2003, presumably because allowing a fund to break the buck would have damaged the sponsor's reputation and franchise. This tendency was well understood by investors. Indeed, a standard reference book on money markets states that a "money fund run by an entity with deep pockets, while it may not have federal insurance, certainly has something akin to private insurance . . . likely to prove adequate to cover any losses sustained by the fund."

Many money funds sustained significant capital losses when the market for asset-backed commercial paper collapsed in the summer and fall of 2007. As in previous decades, losses at money funds were absorbed by the funds' sponsors. Indeed, money funds were seen as highly safe in 2007 and received large net inflows as concerns about other portions of the financial system increased. But when, in 2008, the Reserve Primary Fund did not provide support for the relatively small losses at its money market fund, the illusion that money funds were effectively as safe as insured bank accounts was shattered. A general run on money funds ensued.
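The penny-rounding convention described above reduces to simple arithmetic. Here is a minimal sketch with invented figures; the function names (shadow_nav, breaks_the_buck) are our own illustrative labels, not regulatory terms.

# Illustrative sketch of money market fund "penny rounding" (hypothetical
# numbers): a fund may report a stable $1.00 share price so long as its
# mark-to-market ("shadow") NAV stays within one-half cent of a dollar.

def shadow_nav(portfolio_value: float, shares_outstanding: float) -> float:
    """Unrounded, mark-to-market net asset value per share."""
    return portfolio_value / shares_outstanding

def breaks_the_buck(nav: float) -> bool:
    # A deviation of more than half a cent from $1.00 "breaks the buck".
    return abs(nav - 1.00) > 0.005

shares = 10_000_000_000.0            # shares outstanding, sold at $1.00 each
for loss in (0.002, 0.004, 0.008):   # hypothetical portfolio loss rates
    nav = shadow_nav(shares * (1.0 - loss), shares)
    status = "breaks the buck" if breaks_the_buck(nav) else "still reports $1.00"
    print(f"portfolio loss {loss:.1%}: shadow NAV ${nav:.4f} -> {status}")

On these numbers, a fund can absorb a mark-to-market loss of up to half a percent before its reported price gives way, which is why discretionary sponsor support for even modest losses mattered so much to the stability of the reported $1.00 share price.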
Within two days, investors withdrew nearly $200 billion from prime money market funds, about 10 percent of their assets. This contributed to severe funding pressures for issuers of commercial paper. The run ultimately prompted--and was stopped by--unprecedented interventions by the Treasury and the Federal Reserve to provide insurance and liquidity support to the industry. A second example is the triparty repo market, which had grown to about $2.8 trillion of outstanding financing by early 2007. In general, a repo, or "repurchase agreement," is the sale of a security with an agreement to repurchase the security at a later date; the economics of repos are similar to those of short-term loans collateralized by longer-term assets. So-called triparty repos, typically used by broker-dealers to raise financing from cash-rich institutions such as money market funds, insurance companies, and some central banks, utilize a particular settlement mechanism. The third party in this triparty market is a clearing bank, which handles settlement through accounts that both the broker-dealer cash borrowers and the cash lenders hold at that institution. Because the composition and size of broker-dealers' inventories can change rapidly with the levels of trading activity, broker-dealers find the very flexible and inexpensive short-term financing offered by triparty repos to be extremely attractive. To the extent that this borrowing appeared riskless to lenders, broker-dealers were potential suppliers through triparty repos of the safe, liquid assets that were in such demand. Broker-dealers who borrow in the triparty repo market want to have access to their securities for routine trading purposes--for example, to make deliveries to clients during the day. To allow for that, the market developed a critical operational feature called the "daily unwind." Each day, the clearing banks "unwind" all repo trades, returning securities to borrowers and cash to lenders, even for longer-dated term transactions. However, the securities still require financing during the day. To this end, borrowers rely on intraday overdrafts at the two major clearing banks. At the end of the day, the transactions are "re-wound." Thus, the risks associated with the portfolios of securities are fully transferred twice each day. The lenders in this market widely believed that the two clearing banks would always unwind their maturing trades in the morning, returning cash to their accounts, despite no contractual provision requiring that the clearing banks do so. The fact that lenders believed they were protected in this way by the clearing bank helped perpetuate the illusion that, particularly when lending overnight, they were invested in a money-like asset that would always be highly liquid and safe, even though in reality the borrower was usually an entity that could go bankrupt. This illusion faded as the financial crisis progressed. Significant strains were created by concerns about the financial strength of the broker-dealers, uncertainty about the value of the underlying collateral, and belated recognition that the clearing banks were not contractually obligated to unwind maturing trades. Only when the prospect of dealer failures became very real--for example, in the case of Countrywide's broker-dealer affiliate in August 2007 and Bear Stearns in March 2008--did the lenders appear to see these risks clearly.
In addition, the presumed stabilizing function of collateral was weakened, since a default by a dealer or clearing bank could leave lenders holding securities posted as collateral that they had no desire, operational capacity, or even, in some cases, legal authority to hold--or at least to liquidate in an orderly way. The response at that point was to flee, ignoring the protection putatively afforded by collateral. Only because of unprecedented official-sector action did the triparty repo market not suffer the same kind of disastrous run as did money market funds. A broad run on triparty repos would have severely impacted all major broker-dealers and thus the U.S. securities industry as a whole. The Primary Dealer Credit Facility--instituted on an emergency basis immediately after the failure of Bear Stearns--provided emergency lending to dealers, injected liquidity into the system, and provided a backstop that reassured markets. This public-sector support prevailed where implicit private-sector support had come into question, and helped stabilize the triparty repo market. My third example of a funding channel characterized by tacit credit support is the securities lending market, which is driven in large part by demand for securities by financial institutions wanting to establish short positions or needing collateral to support other transactions. Securities lenders in this market are typically owners of large pools of securities such as pension plans, endowments, and insurance companies. The securities borrower posts collateral, usually cash in the United States, which a custodian bank then typically invests on behalf of the securities lender in supposedly safe and liquid investments, including money market funds, triparty repos, and other short-term instruments. The gains from these reinvestment activities provide a significant amount--in some cases, all--of the compensation to the securities lender associated with participating in the lending program.4 The custodian banks all but universally provided a contractual indemnification that required them to absorb any losses to the securities lenders if the borrowed securities were not returned. But the investment returns, and the risk of loss on the reinvestment of cash collateral that would have to be returned to the borrowers of securities, generally were not covered by such indemnifications. Nonetheless, a number of securities lenders seemed to believe otherwise, and in many cases their expectations were fulfilled as custodian banks agreed during the financial crisis to bear at least some of the losses from cash collateral reinvestment programs. Although the experiences of money market funds, triparty repos, and securities lending vary in the details, they all share a common underlying pathology: stern warnings in offering documents notwithstanding, explicit and implicit commitments combined with a history of discretionary support to create an assumption, even among sophisticated investors, that low-risk assets were free of credit and liquidity risk--effectively cash, but with a slightly higher return. This risk illusion led to pervasive underpricing of the risks embedded in these money-like instruments and made them an artificially cheap source of funding. The consequent oversupply of these instruments contributed importantly to systemic risk.
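The common economics underlying these channels--a short-term loan secured by haircut-reduced collateral--can be sketched in a few lines of Python. The haircut, rate, and day-count convention below are illustrative assumptions, not market data.

```python
# Illustrative economics of a collateralized short-term loan such as a repo:
# the lender advances cash against collateral reduced by a haircut, and is
# repaid at a repurchase price that embeds the financing rate. An actual/360
# day count is assumed here for illustration.

def cash_lent(collateral_value: float, haircut: float) -> float:
    """Cash advanced against collateral after applying the haircut."""
    return collateral_value * (1.0 - haircut)

def repurchase_price(principal: float, rate: float, days: int) -> float:
    """Amount the borrower pays to buy the securities back."""
    return principal * (1.0 + rate * days / 360.0)

loan = cash_lent(100_000_000, haircut=0.02)        # $100 million, 2% haircut
print(round(loan))                                  # 98000000
print(round(repurchase_price(loan, 0.0025, 1)))     # overnight at 25 bps: 98000681
```

The sketch also shows why collateral protects only a lender able to hold or sell it: the haircut is the lender's entire cushion if the borrower fails.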
Reliance on private mechanisms to create seemingly riskless assets generally worked in the relatively calm years leading up to the financial crisis and, to some extent, well into the crisis. But, in many cases, discretionary support came into question at the time of acute financial-market stress, precisely when it was needed most, as questions arose about the ability or willingness of large financial institutions to follow through on their implicit commitments. Investors were reminded of their potential exposure, leading to wholesale and sometimes disorderly flight. The unwinding of this risk illusion helped transform a dramatic correction in real estate valuations--which itself would have had serious consequences for the economy--into a crisis that threatened the entire financial system. Shaping a Regulatory Response Ideally, a regulatory response to the shadow banking system would be grounded in a full understanding of the dynamics that drove its rapid growth, the social utility of its intermediation activities, and the risks they create. Such a response would be comprehensive, meaning that it would cover in an effective and efficient manner any activities that create these vulnerabilities, without regard to how the activities were denominated, what transaction forms were used, or where they were conducted. Of course, many of the key issues are still being debated, and even those who agree on the desirability of a comprehensive response may differ on its basic form. We should continue to seek the analytic and policy consensus that must precede the creation of a regulatory program that meets these conditions. More work is needed on fundamental issues such as the implications of private money creation and of intermediaries behaving like banks but without bank-like regulation. These implications are potentially quite profound for central banking and banking regulation, considering that the shadow banking system has caused the volume of money-like instruments created outside the purview of central bank and regulatory control to grow markedly. But regulators need not wait for the full resolution of contested issues or the development of comprehensive alternatives, nor would it be prudent for them to do so. We should act now to address some obvious sources of vulnerability in the financial system. I believe that the foregoing discussion of implicit support for various shadow banking instruments helps identify areas where misunderstanding and mispricing of risk are more likely, with the result that destabilizing runs are a real possibility. Let me then suggest three more-or-less immediate steps that regulators here and abroad should take, as well as a medium-term reform undertaking. First, we should create greater transparency with respect to the various transactions and markets that comprise the shadow banking system. For example, large segments of the repo market remain opaque today. In fact, at present there is no way that regulators or market participants can precisely determine even the overall volume of bilateral repo transactions--that is, transactions not settled using the triparty mechanism. It is encouraging that the Treasury Department's new Office of Financial Research is working to improve information about this market, while the Securities and Exchange Commission is considering approaches to enhanced transparency in the closely related securities lending market. 
Second, the risk of runs on money market mutual funds should be further reduced through additional measures to address the structural vulnerabilities that have persisted even after the measures taken by the SEC in 2010 to improve the resilience of those funds. The SEC is currently considering several possible reforms, including a floating net asset value, capital requirements, and restrictions on redemption. Clearly, as suggested by Chairman Schapiro, action by the SEC to address the vulnerabilities that were so evident in 2008, while also preserving the economic role of money market funds, is the preferable route. But in the absence of such action, there are several second-best alternatives, including the recent suggestion by Deputy Governor Tucker of the Bank of England that supervisors consider setting new limits on banks' reliance on funding provided by money market funds. A third short-term priority is to address the settlement process for triparty repurchase agreements. Some progress has been made since 2008, but clearly more remains to be done. An industry-led task force established in 2009 orchestrated the implementation of some important improvements to the settlement process. The unwind, with its reliance on vast amounts of discretionary and uncommitted intraday credit from the two clearing banks, was pushed to later in the day, reducing the period during which the intraday credit was extended. In addition, new tools were developed for better intraday collateral management, and an improved confirmation process was instituted. Though these were useful steps, the key risk reduction goal of the effective elimination of intraday credit has not yet been achieved. A second phase of triparty reform is now underway, with the Federal Reserve using its supervisory authority to press for further action not only by the clearing banks, who of course manage the settlement process, but also by the dealer affiliates of bank holding companies, who are the clearing banks' largest customers for triparty transactions. But this approach alone will not suffice. All regulators and supervisors with responsibility for overseeing the various entities active in the triparty market will need to work together to ensure that critical enhancements to risk management and settlement processes are implemented uniformly and robustly across the entire market, and to encourage the development of mechanisms for orderly liquidation of collateral, so as to prevent a fire sale of assets in the event that any major triparty market participant faces distress. In the medium term, a broader reform agenda for shadow banking will first need to address the fact that there is little constraint on the use of leverage in some key types of transactions. One proposal is for a system of haircut and margin requirements that would be uniformly applied across a range of markets, including OTC derivatives, repurchase agreements, and securities lending. Work is ongoing to develop globally uniform margin requirements for OTC derivatives, but there is not yet an agreement to develop globally uniform margin requirements for securities financing transactions. Such a margining system would not only limit leverage, but--to the extent it is in fact uniform--also diminish incentives to use more complicated and less transparent transactional forms to increase leverage or reduce its cost. 
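A minimal sketch may help illustrate how a uniform haircut schedule would operate across transaction forms; the asset classes and haircut levels below are hypothetical placeholders, not proposed calibrations.

```python
# Sketch of a uniform minimum-haircut schedule applied identically across
# repos, securities loans, and other secured financings, so that required
# collateral does not depend on the legal form of the trade. The schedule
# itself is hypothetical.

MIN_HAIRCUTS = {
    "government_bond": 0.02,
    "corporate_bond": 0.08,
    "equity": 0.15,
}

def required_collateral(cash_borrowed: float, asset_class: str) -> float:
    """Minimum collateral value for a given cash borrowing."""
    return cash_borrowed / (1.0 - MIN_HAIRCUTS[asset_class])

def max_leverage(asset_class: str) -> float:
    """Implied leverage cap: assets financeable per unit of equity."""
    return 1.0 / MIN_HAIRCUTS[asset_class]

print(round(required_collateral(98.0, "corporate_bond"), 2))  # 106.52
print(max_leverage("government_bond"))                        # 50.0
```

Because the floor is uniform, nothing is gained by re-papering a repo as a securities loan to obtain a lower haircut, which is the anti-arbitrage point made above.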
Some proponents suggest that such systems of uniform haircut and margin requirements could also dampen the observed procyclical character of many collateralized borrowings that results from changes in margins and haircuts following general economic or credit trends. The shadow banking system today is considerably smaller than at the height of the housing bubble six or seven years ago. And it is very likely that some forms of shadow banking most closely associated with that bubble have disappeared forever. But as the economy recovers, it is nearly as likely that, without policy changes, existing channels for shadow banking will grow, and new forms creating new vulnerabilities will arise. That is why I suggest what is, in essence, a two-pronged agenda: first, near-term action to address current channels where mispricing, run risk, and potential moral hazard are evident; and, second, continuation of the academic and policy debate on more fundamental measures to address these issues more broadly and proactively. 1. Insured demand deposits at traditional banks can help meet the needs of large investors, but only to a limited extent. Such accounts are unattractive to large investors because of the limited scale of Federal Deposit Insurance Corporation (FDIC) deposit insurance; large deposits are, beyond the insurance cap, effectively unsecured exposures to a single bank, and small deposits at multiple banks are inconvenient. The expansion of FDIC insurance to all noninterest-bearing accounts, regardless of size--which occurred in November 2008 and which is scheduled to expire at the end of this year--has made deposits more attractive and more heavily utilized. 2. See Patrick McCabe (2010), "The Cross Section of Money Market Fund Risks and Financial Crises," Finance and Economics Discussion Series 2010-51 (Washington: Board of Governors of the Federal Reserve System, September). 4. Much of the attention devoted to securities lending in the wake of the crisis focused on the program run by AIG. In addition to general issues involving the reinvestment of cash collateral, AIG's securities lending program had more specific and fundamental flaws that go beyond the concerns discussed here.
Marietta Pritchard: Two books for spring by three local authors It's the words on the page that really matter, but writers also choose how they want to present themselves in their pictures on book jackets. On the back of "Good Prose," by Tracy Kidder and Richard Todd, the two authors appear in separate shots, but with their heads tilted at similar angles, their amused expressions suggesting the start of a convivial conversation. Nancy Frazier's portrait on the dustjacket of "I, Lobster, a Crustacean Odyssey" is bursting with energy and pleasure. She can't wait to tell you what she's discovered. Full disclosure: Nancy is an old friend and colleague. She was my editor and mentor when I was a new writer at the Gazette. In later years, Dick Todd was my editor and colleague at New England Monthly, and Tracy Kidder was a writer and frequent presence at that magazine. All three are area residents, although Nancy has a second home in Waldoboro, Maine. It was on that rocky coast that she became fascinated – no, more like obsessed – with lobsters, and started on the odyssey of the title, seeking evidence of that strange and wonderful creature in all the most likely and unlikely places. Lobsters may be shy beings that prefer the dark of the deep, but Nancy has found them out and has shone her light on them everywhere – in ancient art and literature, in contemporary writing, in popular culture, on key chains and in museums as well as down-home restaurants, celebratory festivals, and fancy cookery. She takes them seriously, considering their possible feelings, their anatomy and physiology, their mating and migrating habits. But she also takes them lightly, enjoying elucidating the ways in which they have been represented and written about. This is a delightful book, reflecting the delight taken by its writer in her discoveries, as one thing, one bit of information or observation, leads to the next. Here, for instance, is the beginning of a section titled "Weird Medicine" in a chapter titled "SF: Are We All Lobsters Yet?" "Science fiction is not only about outer space. It includes everything from brain surgery to robotic inventions. And when lobsters appear, it is not necessarily they who are the deviants, monsters or caricatures. Sometimes the human being is the villain, as is the case with the evil doctor in the next science fiction tale." Accompanying Nancy on the twists and turns of her travels through time and space in search of the essence of lobster is, as the saying goes, a real trip. Kidder and Todd's "Good Prose" is subtitled "The Art of Nonfiction, Stories and Advice from a Lifetime of Writing and Editing," which may sound like heavy-duty stuff, but take a clue from those writers' portraits. This is a running conversation between two old friends, the ball tossed back and forth between them as they write about their separate and mutual experience of dealing with words – juggling, prodding and deleting them. I found more reassurance than advice in this book. Here is Kidder, for instance, describing his "process" after gathering material for his book, "House." He retires to his office to try to make sense of it, much as other nonfiction writers do: "We sit at desks in our offices, apart from the world, gazing at those notebooks stacked on our tables, hoping there are stories in them, but once again unsure." From his side, Todd tells of some of the trials of an editor's life – some of it in dealing with Kidder's writing.
At the same time, he also offers some good, typically witty warnings about the "familiar rules about writing," which, he says, are "dangerous if taken literally." Take the mandate to avoid the verb "to be," for instance. "The verb 'to be' and the passive voice are unfairly maligned," writes Todd. "God invented both for a reason. Just turn to the Bible: 'In the beginning was the word, … and the word was God.' No one would accuse that verb of weakness." Perhaps the best thing about this book for me is the samples of good writing. In the first eight pages, I'm reminded of memorable bits from "Moby-Dick," from Capote's "In Cold Blood," from Nabokov's "Speak, Memory" and from Orwell's "Homage to Catalonia" – a pretty good reading list for the spring. Marietta Pritchard can be reached at firstname.lastname@example.org
An Inward Journey Book comes to you as a free Dhamma book. This book is designed and published by Inward Path Publisher (IPP), which aims at disseminating the noble teachings of Wisdom and Compassion of the Buddha to a wide segment of readers through the printing of Dhamma books and booklets. RELEASED DHAMMA BOOKS Inward Path is delighted to inform our Dhamma friends that we have released these Dhamma books for free distribution. PAIR OF MESSENGERS: Calm with Insight from The Buddha's Lips Calm and insight are the two great wings of Buddhist meditation. The present work offers a fresh perspective on a timeless theme. It is lavishly furnished with extracts from the Buddha's discourses, carefully elucidated for the modern reader. Calm and insight emerge as complementary aspects of a comprehensive approach to cultivation of the mind, together delivering the message of Truth. This meditative strategy reflects in practical form the Buddha's radical insights into the nature of reality. Melding method with inspiration, this work embodies the principle that meticulous inquiry is the ideal foundation for true faith. The author is a Western Buddhist monk. Since receiving higher ordination in 1994, he has lived in forest monasteries developing meditation and studying the Buddha's teachings. 31 PLANES OF EXISTENCE Transcribed from Ven. Acara Suvanno Mahathera's Dhamma talks by YK Leong DREAMS OF KING PASENADI Ven. Acara Suvanno Mahathera After the highly successful launching of his biography "Striving to be a Nobody," the Venerable Bhante Suvanno has given us, Inward Path, his permission to transcribe his Dhamma talks recorded on cassettes into booklets for those keen on reading his talks. This book, "The Sixteen Dreams of King Pasenadi," will be the first of many to come; it is compiled by YK Leong, the author of "Striving to be a Nobody". This book is an explanation of the dreams of King Pasenadi; it predicts the future of the world, and the predictions, interestingly enough, are showing signs of coming to pass. The world will be in turmoil and unusual changes in climatic conditions will be prevalent. The ignorant will make up the ruling governments of the world. Sounds familiar, doesn't it? Ven. Acara Suvanno Mahathera is the abbot of the Buddhist Hermitage, Lunas. He is also the chief monk of Mi Tor See in Penang. He is today 82 years of age and very active in propagating the Dhamma. Many of his talks can be found on cassettes, CDs and VCDs. Bhante is well versed in the Tipitaka and a pioneer of Theravada Buddhism in Malaysia. OF KING MILINDA The Milinda Panha is, with good reason, a famous work of Buddhist literature, probably compiled in the first century B.C. It presents Buddhist doctrine in a very attractive and memorable form as a dialogue between a Bactrian Greek king, Milinda, who plays the 'Devil's Advocate', and a Buddhist sage, Nagasena. The topics covered include most of those questions commonly asked by Westerners, such as "If there is no soul, what is it that is reborn?" and "If there is no soul, who is talking to you now?" This abridgement (IJ051/01) provides a concise presentation of this masterpiece of Buddhist literature. The introduction outlines the historical background against which the dialogues took place, indicating the meeting of two great cultures - that of ancient Greece and the Buddhism of the Indus valley, which was a legacy of the great Emperor Asoka. WITH LOVE & OTHER MEDITATIONS Ven. Visuddhacara From traffic jam meditation to the ordinary business of eating & drinking, to waking up & sleeping, Ven.
Visuddhacara shares his understanding of how we can integrate mindfulness & lovingkindness into every facet of our modern life. In this booklet (IJ048/01), Ven. Visuddhacara shares his limited understanding of this practice of mindfulness and loving-kindness with a view to encouraging all of us to walk the path. A Buddhist monk of Malaysian nationality, he was born in 1953 on the island of Penang. He has been practising mindfulness meditation since 1982. He presently resides and teaches meditation in Penang besides travelling abroad to conduct retreats. (Kindly refer to above for international order) THAT LEADS TO NIBBANA Sayadaw U Kundalabhivamsa In this book (IJ017/98) Sayadaw U Kundala, drawing from the experience of many years of teaching, explains the path that The Buddha clearly mapped out 2500 years ago. He clearly and methodically, in an easy-to-read form, points out the benefits of practising Vipassana meditation and the method of practice. He then goes on to explain the five Indriya (controlling faculties) and the nine factors which sharpen the Indriya, leading one along the path to liberation as taught by The Buddha. A foundation of morality and striving with ardent effort in Vipassana meditation, as taught by The Buddha, will result in progression along a clearly defined path of insight, with the final fruits being the noble and pure liberation of Nibbana: freedom from suffering. (Kindly refer to above for international order) WHO CALLED HIMSELF TATHAGATA Sayadaw U Silananda At the beginning of most discourses (suttas), the word 'Bhagava' is commonly used to refer to the Buddha, to mean 'World-honored One' or simply 'Lord'. However, the word 'Tathagata', although not so frequently used, is very significant, as the Buddha employs this term whenever He wishes to refer to Himself. Reading this booklet (IJ042/00), one not only gains a theoretical knowledge of the term 'Tathagata', but may also be inspired to work towards one's liberation without further delay. Sayadaw U Silananda is a renowned Buddhist scholar and meditation teacher. He is presently the abbot of Dhammananda Vihara in Half Moon Bay, California, USA. Sayadaw leads meditation retreats throughout America and also abroad in Japan, Europe and Asia. CHARACTERISTICS Sayadaw U Silananda When I talk about Vipassana, I say Vipassana is seeing things in various ways, seeing mind and matter or mental and physical phenomena in light of their characteristics - seeing things as impermanent, as painful or unsatisfactory, and as insubstantial or non-self. These are the three characteristics of everything that is in the world. We find these characteristics or these marks in every thing and every being - Sayadaw U Silananda. What is meant by "everything in the world"? What is impermanent? How do we know that something is impermanent? What is it that is suffering? And what is it that is anatta, that is soulless? In this short essay booklet (IJ047/00), Sayadaw U Silananda explains the different aspects of this 'seeing' or understanding. REBIRTH: A BUDDHIST CONCEPT Sayadaw U Thittila & Venerable Pandit P. Sri Pemaratana This booklet (IJ0031/99) explains the nature of death and rebirth - how our kamma and last thought-moment can condition our next birth. Rebirth, however, is not something to be desired. It is but a recycle of suffering. It is better, therefore, to end craving, which is the cause of rebirth. Sayadaw U Thittila was one of the greatest Burmese scholar monks and missionaries of this century. Venerable Pandit P.
Sri Pemaratana Nayaka Thero was the respected late abbot of the Mahindarama Buddhist temple, Penang. Inward Path relies entirely on donations to enable us to continue to make the Dhamma available for free distribution. Your kind support is warmly appreciated. If you wish to have the Dhamma books sent to you (especially international orders), kindly let us have your address. We would appreciate it if you could also reimburse us for the postage (seamail) of USD10.00 (for a parcel of books weighing less than 2 kgs). If you are paying by cheque, kindly add an extra USD5.00 to cover bank clearing charges. Please DO NOT send CASH by mail.
Writers: Poets, Authors, Novelists, Literature Welsh writers share the bardic tradition, with stories and adventures written for over a millennium. Today we can enjoy storytelling in many formats, and this Celtic skill has adopted the new mediums. Wales has produced some of the finest poets, novelists and even screenwriters in history. Explore our list of Welsh writers to find Dylan Thomas, Alexander Cordell, Craig Thomas, Dick Francis, Bertrand Russell, Ian Hislop and many more!
November 22, 2005 Colorado Alliance for Cruelty Prevention Present: Kate Arganese, Kathleen Schoen, Jan Mickish, Diane Balkin, Joan Casey, Corey Price, Kay Dahlinger, Phil Tedeschi, Jennifer Fitchett, Jayme Nielson-Foley, Mary Toornman, Barbara Riedell, John Cohen, John Cogley, Sheila Rappaport, William Riedell, Theresa Abeyta. I. Collaboration with the Aurora Link A. Kay Dahlinger, Chief Probation Officer in Aurora and an Aurora Link member, reported that the Aurora Link is pleased to be joining CACP's meetings. The Aurora Link is an organization with the same goal as CACP: to educate and help to improve/end the Link between animal and human violence. B. Currently, the Aurora Link has one "Training" planned. It will be in the Adams-Broomfield Judicial District on January 6, 2006. This event is meant to be an introductory luncheon and not a "Training." It was suggested that the Aurora Link should make this luncheon a CBA event so that travel expenses can be covered by the grant. 1. Kathleen Schoen, Colorado Bar Association Staff, and Joan Casey, Animal Assistance Foundation, volunteered to contact their contacts in Ft. Morgan to determine their interest. 2. Diane Balkin, Denver Deputy District Attorney, volunteered to contact the State Veterinarians (large and small animals) and Police Academies. 3. Jan Mickish, Mickish Consulting, volunteered to contact the Visiting Nurses Association. 4. Corey Price volunteered to contact the Colorado Education Association. 5. John Cogley volunteered to contact Domestic Violence Treatment Providers. a. Phil Tedeschi, Denver University Animal-Assisted Social Work Coordinator, volunteered Denver University, after January 1, 2006, to host a CACP Training for Colorado Domestic Violence Treatment Providers. This will be advertised through the Colorado Department of Criminal Justice, Domestic Violence Offender Management Board listing of trainings. 6. Joan Casey volunteered to contact the Colorado Non Profit Organization. Also, she volunteered to look into the possibility of providing books on the subject to pediatricians' waiting rooms. 7. Mary Toornman, Denver City Attorney's Office, volunteered to contact the directors of local dog shows to determine if CACP could have a presence at dog shows across the state. 8. Other possible organizations suggested were: groomer organizations, farrier organizations, sex offender treatment providers, CAFCA, and the American Kennel Association. II. Trainings A. Handouts with lists of trainings that CACP has completed were handed out at the meeting for discussion. B. CACP members briefly discussed integrating public Trainings into the curriculum. This idea was met favorably and will be revisited at the next meeting. C. Future trainings were discussed, and several members volunteered to contact different organizations to determine their interest in CACP training. III. Collaboration efforts with Professor Phil Tedeschi and Professor Jennifer Fitchett A. Professor Phil Tedeschi and Professor Jennifer Fitchett informed CACP members of their intent to launch an institute at the University of Denver School of Social Work that focuses on the interaction between humans and animals. This program will promote new literature, will generate new research, and will teach students worldwide. In addition, upon completion of the program, students will be issued a certificate in honor of their accomplishment. B.
CACP members discussed several ways that their involvement would be appropriate: they can offer internships (through the CBA and CACP) to students involved in the program, give several Trainings to students, help the students create a student Link program on the campus, etc. C. Jennifer Fitchett will develop a survey to distribute to CACP members to gauge how Phil and she can be most useful to the Committee and vice versa. IV. CACP's next meeting is Tuesday, January 24, 2006, at 12 p.m. The meeting will take place at the CBA office building (1900 Grant Street, Suite 300, Denver, CO, 80203).
US 6914817 B2 Structures, methods of manufacturing and methods of use of electrically programmable read only memories (EPROM) and flash electrically erasable and programmable read only memories (EEPROM) include split channel and other cell configurations. An arrangement of elements and cooperative processes of manufacture provide self-alignment of the elements. An intelligent programming technique allows each memory cell to store more than the usual one bit of information. An intelligent erase algorithm prolongs the useful life of the memory cells. Use of these various features provides a memory having a very high storage density and a long life, making it particularly useful as a solid state memory in place of magnetic disk storage devices in computer systems. 1. For an array of electrically alterable memory cells divided into blocks of cells that are re-settable together to a starting state and having means for addressing individual cells within said blocks to program and read their states, said memory cells individually including a field effect transistor having a threshold voltage level that is variable in accordance with an amount of net charge stored therein, a method of operating the array, comprising the steps of: establishing a plurality of effective threshold voltage levels in excess of two that correspond to a plurality of detectable states of the individual cells in excess of two, programming the effective threshold voltage level of at least one addressed cell within one of said blocks from a starting level to one of the plurality of effective threshold voltage levels by altering the amount of charge stored in said at least one addressed cell until the effective threshold voltage of said at least one addressed cell is substantially equal to one of said plurality of effective threshold voltage levels, wherein the state of said at least one addressed cell is set to one of said plurality of states by a method comprising: applying a given voltage to said addressed cell for a predetermined time sufficient to move the effective threshold voltage level of the addressed cell from the starting level toward said one of the plurality of threshold voltage levels, thereafter reading an electrical parameter of the addressed cell to determine whether the effective threshold voltage of the addressed cell has reached said one of the plurality of threshold voltage levels, and repeating the voltage applying and reading steps until it is detected by the reading step that the effective threshold voltage of the addressed cell has been set to said one of the plurality of threshold voltage levels, reading the states of the memory cells of individual blocks with the assistance of an error correction scheme that can tolerate a number X of bad cells, and applying erase conditions to the memory cells of individual blocks until a number of cells N remaining unerased is equal to or less than X. 2. The method according to 3. The method according to This is a continuation of application Ser. No. 08/154,162, filed Nov. 17, 1993, now U.S. Pat. No. 6,570,790, which is a division of application Ser. No. 07/777,673, filed Oct. 15, 1991 and now issued as U.S. Pat. No. 5,268,319, which in turn is a division of application Ser. No. 07/381,139, filed Jul. 17, 1989 and now issued as U.S. Pat. No. 5,198,380, and which in turn is a division of original application Ser. No. 07/204,175, filed Jun. 8, 1988, now issued as U.S. Pat. No. 5,095,344. The foregoing patents are also incorporated herein by this reference.
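Claim 1 recites an iterative program-and-verify procedure: apply a programming pulse, read the cell back, and repeat until the effective threshold voltage reaches the target level. The Python sketch below illustrates that loop; the device-access methods, voltages, and pulse widths are hypothetical stand-ins, not values taken from the patent.

```python
# Sketch of the program-and-verify loop of claim 1. With more than two
# target threshold levels, each cell stores more than one bit (multi-state
# storage). All hardware calls are hypothetical placeholders.

TARGET_VT = {0b00: 0.5, 0b01: 2.0, 0b10: 3.5, 0b11: 5.0}  # illustrative volts

def program_cell(cell, bits: int, max_pulses: int = 100) -> bool:
    """Raise the cell's effective threshold to the level encoding `bits`."""
    target = TARGET_VT[bits]
    for _ in range(max_pulses):
        if cell.read_effective_vt() >= target:      # verify (read) step
            return True                             # desired state reached
        cell.apply_program_pulse(volts=12.0, seconds=1e-5)  # illustrative pulse
    return False                                    # cell failed to program
```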
This invention relates generally to semiconductor electrically programmable read only memories (Eprom) and electrically erasable programmable read only memories (EEprom), and specifically to semiconductor structures of such memories, processes of making them, and techniques for using them. An electrically programmable read only memory (Eprom) utilizes a floating (unconnected) conductive gate, in a field effect transistor structure, positioned over but insulated from a channel region in a semiconductor substrate, between source and drain regions. A control gate is then provided over the floating gate, but also insulated therefrom. The threshold voltage characteristic of the transistor is controlled by the amount of charge that is retained on the floating gate. That is, the minimum amount of voltage (threshold) that must be applied to the control gate before the transistor is turned “on” to permit conduction between its source and drain regions is controlled by the level of charge on the floating gate. A transistor is programmed to one of two states by accelerating electrons from the substrate channel region, through a thin gate dielectric and onto the floating gate. The memory cell transistor's state is read by placing an operating voltage across its source and drain and on its control gate, and then detecting the level of current flowing between the source and drain as to whether the device is programmed to be “on” or “off” at the control gate voltage selected. A specific, single cell in a two-dimensional array of Eprom cells is addressed for reading by application of a source-drain voltage to source and drain lines in a column containing the cell being addressed, and application of a control gate voltage to the control gates in a row containing the cell being addressed. This type of Eprom transistor is usually implemented in one of two basic configurations. One is where the floating gate extends substantially entirely over the transistor's channel region between its source and drain. Another type, preferred in many applications, is where the floating gate extends from the drain region only part of the way across the channel. The control gate then extends completely across the channel, over the floating gate and then across the remaining portion of the channel not occupied by the floating gate. The control gate is separated from that remaining channel portion by a thin gate oxide. This second type is termed a “split-channel” Eprom transistor. This results in a transistor structure that operates as two transistors in series, one having a varying threshold in response to the charge level on the floating gate, and another that is unaffected by the floating gate charge but rather which operates in response to the voltage on the control gate as in any normal field effect transistor. Early Eprom devices were erasable by exposure to ultraviolet light. More recently, the transistor cells have been made to be electrically erasable, and thus termed electrically erasable and programmable read only memory (EEprom). One way in which the cell is erased electrically is by transfer of charge from the floating gate to the transistor drain through a very thin tunnel dielectric. This is accomplished by application of appropriate voltages to the transistor's source, drain and control gate. Other EEprom memory cells are provided with a separate, third gate for accomplishing the erasing. 
An erase gate passes through each memory cell transistor closely adjacent to a surface of the floating gate but insulated therefrom by a thin tunnel dielectric. Charge is then removed from the floating gate of a cell to the erase gate, when appropriate voltages are applied to all the transistor elements. An array of EEprom cells is generally referred to as a Flash EEprom array because an entire array of cells, or a significant group of cells, is erased simultaneously (i.e., in a flash). EEproms have been found to have a limited effective life. The number of cycles of programming and erasing that such a device can endure before becoming degraded is finite. After a number of such cycles in excess of 10,000, depending upon its specific structure, its programmability can be reduced. Often, by the time the device has been put through such a cycle over 100,000 times, it can no longer be programmed or erased properly. This is believed to be the result of electrons being trapped in the dielectric each time charge is transferred to or away from the floating gate by programming or erasing, respectively. It is the primary object of the present invention to provide Eprom and EEprom cell and array structures and processes for making them that result in cells of reduced size so their density on a semiconductor chip can be increased. It is also an object of the invention that the structures be highly manufacturable, reliable, scalable, repeatable and producible with a very high yield. It is yet another object of the present invention to provide EEprom semiconductor chips that are useful for solid state memory to replace magnetic disk storage devices. Another object of the present invention is to provide a technique for increasing the amount of information that can be stored in a given size Eprom or EEprom array. Further, it is an object of the present invention to provide a technique for increasing the number of program/read cycles that an EEprom can endure. These and additional objects are accomplished by the various aspects of the present invention, either alone or in combination, the primary aspects being briefly summarized as below: 1. The problems associated with prior art split channel Eprom and split channel Flash EEprom devices are overcome by providing a split channel memory cell constructed in one of the following ways: (A) In one embodiment, one edge of the floating gate is self aligned to and overlaps the edge of the drain diffusion, and the second edge of the floating gate is self aligned to but is spaced apart from the edge of the source diffusion. A sidewall spacer formed along the second edge of the floating gate facing the source side is used to define the degree of spacing between the two edges. Self alignment of both source and drain to the edges of the floating gate results in a split channel Eprom device having accurate control of the three most critical device parameters: channel segment lengths L1 and L2, controllable by floating gate and control gate, respectively, and the extent of overlap between the floating gate and the drain diffusion. All three parameters are insensitive to mask misalignment and can be made reproducibly very small in scaled-down devices. (B) In a second embodiment of the split channel Eprom a heavily doped portion of the channel adjacent to the drain diffusion is formed by a novel, well-controlled technique.
The length Lp and doping concentration of this channel portion become the dominant parameters for programming and reading, thereby permitting the formation of a split channel structure which is relatively insensitive to misalignments between the floating gate and the source/drain regions. 2. A separate erase gate is provided to transform an Eprom device into a Flash EEprom device. The area of overlap between the floating gate and the erase gate is insensitive to mask misalignment and can therefore be made reproducibly very small. 3. In some embodiments of this invention, the erase gate is also used as a field plate to provide very compact electric isolation between adjacent cells in a memory array. 4. A new erase mechanism is provided which employs tailoring of the edges of a very thin floating gate so as to enhance their effectiveness as electron injectors. 5. A novel intelligent programming and sensing technique is provided which permits the practical implementation of multiple state storage wherein each Eprom or flash EEprom cell stores more than one bit. 6. A novel intelligent erase algorithm is provided which results in a significant reduction in the electrical stress experienced by the erase tunnel dielectric and results in much higher endurance to program/erase cycling. The combination of various of these features results in new split channel Eprom or split channel Flash EEprom devices which are highly manufacturable, highly scalable, and offer greater storage density as well as greater reliability than any prior art Eprom or Flash EEprom devices. Memories that utilize the various aspects of this invention are especially useful in computer systems to replace existing magnetic storage media (hard disks and floppy disks), primarily because of the very high density of information that may be stored in them. Additional objects, features and advantages of the present invention will be understood from the following description of its preferred embodiments, which description should be taken in conjunction with the accompanying drawings. There are two distinctly different approaches in the prior art of Flash EEproms. A triple polysilicon device was described by J. Kupec et al. in the 1980 IEDM Technical Digest, p. 602, in an article entitled "Triple Level Polysilicon EEprom with Single Transistor per Bit". An improvement to the Kupec device was proposed by F. Masuoka and H. Iizuka in U.S. Pat. No. 4,531,203, issued Jul. 23, 1985. Variations on the same cell are described by C. K. Kuo and S. C. Tsaur in U.S. Pat. No. 4,561,004, issued Dec. 24, 1985, and by F. Masuoka et al. in an article titled "A 256K Flash EEprom Using Triple Polysilicon Technology", Digest of Technical Papers, IEEE International Solid-State Circuits Conference, February 1985, p. 168. The second approach is a double polysilicon cell described by G. Samachisa et al. in an article titled "A 128K Flash EEprom Using Double Polysilicon Technology", IEEE Journal of Solid State Circuits, October 1987, Vol. SC-22, No. 5, p. 676. Variations on this second cell are also described by H. Kume et al. in an article titled "A Flash-Erase EEprom Cell with an Asymmetric Source and Drain Structure", Technical Digest of the IEEE International Electron Devices Meeting, December 1987, p. 560, and by V. N. Kynett et al. in an article titled "An In-System Reprogrammable 256K CMOS Flash Memory", Digest of Technical Papers, IEEE International Solid-State Circuits Conference, February 1988, p. 132.
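The intelligent erase algorithm summarized in item 6 (and recited in claim 1) can likewise be sketched: erase pulses are applied to a block only until the number of still-unerased cells N falls to or below the number X of bad cells that the error correction scheme can tolerate, sparing the tunnel dielectric unnecessary stress. The device calls below are hypothetical placeholders.

```python
# Sketch of the intelligent erase idea: stop pulsing once the remaining
# unerased cells are few enough for error correction to absorb (N <= X),
# rather than overstressing every cell in the block.

def erase_block(block, ecc_tolerance_x: int, max_pulses: int = 50) -> bool:
    for _ in range(max_pulses):
        n_unerased = sum(1 for c in block.cells if not c.is_erased())
        if n_unerased <= ecc_tolerance_x:   # N <= X: stop early and limit
            return True                     # stress on the tunnel dielectric
        block.apply_erase_pulse()           # one more erase increment
    return False                            # block has reached its endurance limit
```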
A cross-section of the Samachisa cell is shown in FIG. 1. Transistor 100 is an NMOS transistor with source 101, drain 102, substrate 103, floating gate 104 and control gate 109. The transistor has a split channel consisting of a section 112 (L1) whose conductivity is controlled by floating gate 104, in series with a section 120 (L2) whose conductivity is controlled by control gate 109. Programming takes place as in other Eprom cells by injection of hot electrons 107 from the channel at the pinchoff region 119 near the drain junction. Injected electrons are trapped on floating gate 104 and raise the conduction threshold voltage of channel region 112, and therefore of transistor 100. To erase transistor 100, the oxide in region 112 separating the floating gate 104 from drain diffusion 102 and channel 112 is thinned to between 15 and 20 nanometers, to allow electronic tunneling of trapped electrons 108 from the floating gate to the drain. In the Samachisa cell the appropriate voltages applied to achieve programming are VCG=12V, VD=9V, VBB=0V, VS=0V, and to achieve erase are VCG=0V, VD=19V, VBB=0V, VS=floating. Samachisa points out that the electrical erase is not self-limiting. It is possible to overerase the cell, leaving the floating gate positively charged, thus turning the channel portion L1 into a depletion mode transistor. The series enhancement transistor L2 is therefore needed to prevent transistor leakage in the overerase condition. The Samachisa cell suffers from certain disadvantages. These are: The Kynett and Kume cells ( The Kynett cell can be contrasted with the Samachisa cell: Kupec's cell employs essentially the Kynett cell without a thin tunnel dielectric over the source, channel, or drain, and with a third polysilicon plate covering the entire transistor and acting as an erase plate. A cross sectional view of the Kupec device is shown in the accompanying figures. Masuoka's approach to Flash EEprom overcomes most of the disadvantages of the Samachisa, Kynett and Kupec cells. The transistor channel width (W), as well as the edges of the source and drain diffusions, are defined by the edges 305 of a thick field oxide formed by isoplanar oxidation. Oxide 332, of thickness in the 25 to 40 nanometer range, is used as isolation between the floating gate and the substrate. Masuoka adds an erase gate 330 disposed underneath the floating gate along one of its edges. This erase gate is used to electrically erase floating gate 304 in an area of tunnel dielectric 331 where the floating gate overlaps the erase gate. Tunnel dielectric 331 is of thickness between 30 and 60 nanometers. Masuoka specifies the following voltages during erase: VS=0V, VD=0V, VCG=0V, VERASE=20V to 30V. Comparing the Masuoka cell with the Samachisa and Kynett cells: From the foregoing analysis it is clear that while the Masuoka prior art cell successfully addresses most of the problems encountered by Samachisa and Kynett, it itself has disadvantages not encountered by Samachisa or Kynett. Masuoka and Samachisa both use a split channel Eprom transistor for programming. In the split channel Eprom transistor, the portion L2 of the channel length controlled by control gate 109, 309 has a fixed enhancement threshold voltage determined by the p+ channel doping concentration 360. The portion L1 of the channel length controlled by floating gate 104 (Samachisa) and 304 (Masuoka) has a variable threshold voltage determined by the net charge stored on the floating gate. Other prior art split channel Eprom transistors are described by E.
Harari in U.S. Pat. No. 4,328,565, May 4, 1982, and by B. Eitan in U.S. Pat. No. 4,639,893, Jan. 27, 1987. The Harari split channel Eprom transistor 300 d is shown in cross section in the accompanying figures. The Eitan split channel Eprom transistor 400 is shown in cross sections in the accompanying figures. The addition of a fixed threshold enhancement transistor in series with the floating gate transistor decouples the floating gate from the source diffusion. This allows the channel length L1 to be made very small without encountering punchthrough between source and drain. Furthermore, transistor drain turn-on due to the parasitic capacitive coupling between the drain diffusion and the floating gate is eliminated because the enhancement channel portion L2 remains off. Eitan shows that the shorter the length L1, the greater the programming efficiency and the greater the read current of the split channel Eprom transistor. For Flash EEprom devices the series enhancement channel L2 acquires additional importance because it allows the floating gate portion L1 to be overerased into a depletion threshold voltage without turning on the composite split channel transistor. The disadvantages incurred by the addition of the series enhancement channel L2 are an increase in cell area, a decrease in transistor transconductance, an increase in control gate capacitance, and an increase in variability of device characteristics for programming and reading, brought about by the fact that L1 or L2 or both are not precisely controlled in the manufacturing process of the prior art split channel devices. Samachisa, Masuoka and Eitan each adopt a different approach to reduce the variability of L1 and L2: Samachisa's transistor 100 ( Masuoka's transistor 300 ( Eitan's transistor 400 ( It should be pointed out that even with the most advanced optical lithography systems available today in a production environment it is difficult to achieve an alignment accuracy of better than ±0.25 microns between any two mask layers. Therefore the variability in L2 or L1 inherent to any structure which is alignment sensitive can be as much as approximately 0.5 microns from one extreme to the other. Another prior art split channel Eprom device which attempts to achieve the objective of accurately establishing L1 and L2 is disclosed by Y. Mizutani and K. Makita in the 1985 IEDM Technical Digest, p. 63, and is shown in cross section in the accompanying figures. Yet another prior art device which has a split channel with a well controlled L1 and L2 is disclosed by A. T. Wu et al. in the 1986 IEDM Technical Digest, p. 584, in an article entitled "A Novel High-Speed, 5-Volt Programming Eprom Structure with Source-Side Injection". A cross section of the Wu prior art transistor is shown in the accompanying figures. Another prior art Eprom transistor which does not have a split channel structure but which seeks to achieve two distinct channel regions to optimize the Eprom programming performance is disclosed by S. Tanaka et al. in the 1984 ISSCC Digest of Technical Papers, p. 148, in an article entitled "A Programmable 256K CMOS Eprom with On Chip Test Circuits". A cross section of this device is shown in the accompanying figures. I.a.
Split Channel Eprom Transistor with Self Aligned Drain Diffusion and Self Aligned Spaced Apart Source Diffusion P-type substrate 563 is typically 5 to 50 ohm-centimeters, p+ channel doping 560 a is typically in the range of 1×10^16 cm^-3 to 2×10^17 cm^-3, dielectric film 564 a is typically 20 to 40 nanometers thick, dielectric film 567 a is typically 20 to 50 nanometers thick, and floating gate 504 a is usually a heavily N+ doped film of polysilicon of a thickness which can be as low as 25 nanometers (this thickness will be discussed in Section VII) or as high as 400 nanometers. Control gate 509 is either a heavily N+ doped film of polysilicon or a low resistivity interconnect material such as a silicide or a refractory metal. Of importance, edge 523 a of N+ drain diffusion 502 a, formed by ion implantation of Arsenic or Phosphorus, is self aligned to edge 522 a of floating gate 504 a, while edge 521 a of N+ source diffusion 501 a, formed by the same ion implantation step, is self aligned to, but is spaced apart from, edge 550 a of the same floating gate 504 a, using a sidewall spacer (not shown in the figures). The key steps for the formation of channel portions L1 and L2 are illustrated in the accompanying figures. The thickness of the conformal spacer layer determines the width of the sidewall spacer, and therefore also the length of channel portion L2. Typically for an L2 of 400 nanometers a spacer layer of approximately 600 nanometers thickness is used. In the present invention, the spacer can be significantly wider, it is used along one edge only, and it is used not to define a lightly doped source or drain but rather to define the series enhancement transistor channel portion L2. The next step is a masking step, using photoresist 591 a, 591 b. Next, spacers 592 a, 592 b and the protective film 566 a are removed. The remaining part of the process is standard: The surface of the structure is covered with a thick passivation layer 568, usually a phosphorus doped glass or a borophosphosilicate glass (BPSG). This passivation is made to flow in a high temperature anneal step. Contact vias are etched (not shown in the figures). Comparing split channel transistor 500 a of this invention with the prior art devices: A new method is disclosed for manufacturing the split channel Eprom transistor 1400 which results in much better control of the parameter Lp and of the surface channel doping concentration 1413 than is provided by the DSA (Diffusion Self Align) approach of the Tanaka prior art transistor 400 e described above. The main steps in this new method for the fabrication of a memory array of transistors 1400 are as follows: 1. In the structure of 2. A photoresist mask P.R.1 (1470) is used to define source and drain regions in long parallel strips extending in width between edges 1471, 1472 of openings in the photoresist. Exposed oxide layer 1473 is now wet etched in a carefully controlled and timed etch step which includes substantial undercutting of photoresist 1470. The extent of undercutting, which is measured by the distance Lx between oxide edges 1476 and 1478, will eventually determine the magnitude of parameter Lp. Typically, Lx is chosen between 300 nanometers and 700 nanometers. The three parameters critical for a reproducible Lx are the concentration and temperature of the etch solution (hydrofluoric acid) and the density (i.e., lack of porosity) of the oxide 1473 being etched. These can be controlled sufficiently well that a timed undercutting etch step results in well controlled etched strips of width Lx running parallel to edges 1471, 1472 of the long openings in the photoresist.
In fact, for values of Lx below approximately 500 nanometers, it is easier to achieve a reproducible Lx through controlled sideways etching than by controlling the line width of a long, narrow line in a photoresist layer. An example of the use of sideways etching self aligned to an edge in a similar fashion (but to achieve the different purpose of forming a very narrow guard ring) can be found in the prior art article by S. Kim titled "A Very Small Schottky Barrier Diode with Self-Aligned Guard Ring for VLSI Application", appearing in the 1979 IEDM Technical Digest, p. 49. 3. At the completion of the sideways etch step a second, anisotropic etch is performed, using the same photoresist mask P.R.1 to etch away long strips of the exposed silicon nitride film 1474. Edges 1471, 1472 of P.R.1 (1470) are used to form edges 1480, 1481, respectively, in the etched strips of nitride layers. 4. Arsenic ion implantation with an ion dose of approximately 5×10^15 cm^-2 is performed with an energy sufficient to penetrate oxide film 1475 and dope the surface in long strips of N+ doped regions (1402, 1401). Photoresist mask P.R.1 can be used as the mask for this step, but nitride layer 1474 can serve equally well as the implant mask. P.R.1 is stripped at the completion of this step. 5. An implant damage anneal and surface oxidation step follows, resulting in 200 to 300 nanometers of silicon dioxide 1462 grown over the source and drain diffusion strips. The temperature for this oxidation should be below 1000° C. to minimize the lateral diffusion of the N+ dopants in regions 1402, 1401. If desired, it is possible through an extra masking step to remove nitride layer 1474 also from the field regions between adjacent channels, so as to grow oxide film 1462 not only over the source and drain regions but also over the field isolation regions. 7. Top oxide 1473, nitride 1474 and thin oxide 1475 are now removed by etching. This etching also reduces the thickness of the oxide layer 1462 protecting the source and drain diffusions. It is desirable to leave this film at a thickness of not less than approximately 100 nanometers at the completion of this etch step. 8. The remaining steps can be understood in relation to the structure shown in the accompanying figures. 9. A second dielectric 1466, 1411 is grown or deposited on top of the substrate and floating gate strips, respectively. This can be a layer of silicon dioxide or a combination of thin films of silicon dioxide and silicon nitride, of combined thickness in the range between 20 and 50 nanometers. 10. A second layer of polysilicon is deposited, doped N+ (or silicided for lower resistivity), masked and etched to form control gates 1409 in long strips running perpendicular to the strips of floating gates and source/drain strips. Each control gate strip is capacitively coupled to the floating gate strips it crosses over through dielectric film 1411 in the areas where the strips overlap each other. Control gates 1409 also control the channel conduction in channel portions L2 not covered by the floating gate strips. Each strip of control gates is now covered by a dielectric isolation film (which can be a thermally grown oxide). 11. Using the strips of control gates as a mask, exposed areas of dielectric 1466, 1411 and of the strips of first polysilicon floating gates are etched away.
The resulting structure has long strips, or rows, of control gates, each row overlying several floating gates 1404, where the outer edges of each floating gate are essentially self aligned to the edges defining the width of the control gate strip. These edges are now oxidized or covered with a deposited dielectric to completely insulate each floating gate. Field areas between adjacent rows of cells or between adjacent strips of source and drain regions are now automatically self aligned to the active device areas and do not require space consuming isoplanar oxidation isolation regions. (Of course, it is also possible to fabricate transistor 1400 with source, drain and channel regions defined by the edges of a thick isoplanar oxidation isolation layer, or to rely for field isolation on oxide 1462 grown also in the field regions; see the option described in step 5 above.) The Eprom cell of this embodiment has several advantages over the prior art Eprom cells: Transistor 600 a of Transistor 600 a is erased by tunneling of electrons from floating gate 504 a to erase gates 530, 535, across tunnel dielectrics 531 a, 561 a on the sidewalls and top surface of the floating gate where it is overlapped by the erase gate. Tunnel dielectric film 531 a, 561 a is normally a layer of silicon dioxide grown through thermal oxidation of the heavily N+ doped and textured polycrystalline silicon comprising the floating gate. It is well known in the industry (see for example an article by H. A. R. Wegener titled “Endurance Model for textured-poly floating gate memories”, Technical Digest of the IEEE International Electron Device Meeting, December 1984, p. 480) that such a film, when grown under the appropriate oxidation conditions over properly textured doped polysilicon, allows conduction by electron tunneling to increase by several orders of magnitude, even when the film is several times thicker than tunnel dielectric films grown on single crystal silicon (such as the tunnel dielectric films used in the prior art Samachisa and Kynett devices). For example, a tunnel dielectric oxide grown to a thickness of 40 nanometers on N+ doped and textured polysilicon can conduct by electron tunneling approximately the same current density as a tunnel dielectric oxide of 10 nanometers thickness grown on N+ doped single crystal silicon under identical voltage bias conditions. It is believed that this highly efficient tunneling mechanism is a result of sharp asperities at the grain boundaries of the polysilicon, which is specially textured to enhance the areal density of such asperities. A commonly practiced technique is to first oxidize the surface of the polysilicon at a high temperature to accentuate the texturing, then strip that oxide and regrow a tunnel oxide at a lower temperature. The oxide film capping such an asperity experiences local amplification of the applied electric field by a factor of four to five, resulting in an efficient localized tunnel injector. The advantage provided by the thicker films of tunnel dielectric is that they are much easier to grow in uniform and defect-free layers. Furthermore, the electric field stress during tunneling in the thick (40 nanometer) tunnel dielectric is only 25 percent of the stress in the thin (10 nanometer) tunnel dielectric, assuming the same voltage bias conditions. This reduced stress translates into higher reliability and greater endurance to write/erase cycling.
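The stress comparison quoted above is simple parallel-plate arithmetic: the average field across a tunnel dielectric is the applied voltage divided by the film thickness. In a worked form, using the thicknesses from the text and assuming identical voltage bias across both films:

\[ E = \frac{V}{t_{ox}}, \qquad \frac{E_{40\,\mathrm{nm}}}{E_{10\,\mathrm{nm}}} = \frac{10\ \mathrm{nm}}{40\ \mathrm{nm}} = 0.25 \]

With the asperity field amplification factor of roughly four to five quoted above, the local injecting field is approximately \( E_{loc} \approx \beta\, V / t_{ox} \) with \( \beta \approx 4\text{-}5 \), which is why the 40 nanometer textured-poly oxide can match the tunnel current density of a 10 nanometer oxide grown on single crystal silicon.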
For these reasons, all Flash EEprom embodiments of this invention rely on poly-poly erase through a relatively thick tunnel dielectric. In the embodiment of The manufacturing process can be somewhat simplified by implementing erase gates 530, 535 in the same conductive layer as that used for control gate 509. However, the spacing Z between the edges of the control gate and the erase gate (and hence the cell size) would then have to be significantly greater than is the case when the control gate and erase gates are implemented in two different conductive layers insulated from each other by dielectric film 567 a. In fact, in the triple layer structure 600 a of A point of significance is the fact that the tunnel dielectric area contributing to erase in each cell, consisting of the combined areas of 531 a and 561 a, is insensitive to the mask misalignment between edges 532 a, 562 a of floating gate 504 a and erase gates 530, 535. (Note that each erase gate, such as 535, is shared between two adjacent cells, such as 600 a and 600 c in this case.) Any such misalignment will result in a reduction of the area of the tunnel dielectric at one edge of the floating gate, but also in an increase of equal magnitude in the area available for tunneling at the other edge of the floating gate. This feature permits the construction of a cell with a very small area of tunnel dielectric. By contrast, the prior art triple layer Flash EEprom cells of Masuoka and Kuo referenced above are sensitive to mask misalignment and therefore require a structure wherein the nominal area provided for tunnel erase may be much larger than the optimum such area, in order to accommodate the worst case misalignment condition. Another distinguishing feature of this embodiment relative to the Masuoka cell of Typical bias voltage conditions necessary to erase memory cells 600 a, 600 b, 600 c and 600 d are: VERASE (on all erase gates 530, 535, 536)=15V to 25V, applied for between 100 milliseconds and 10 seconds (the pulse duration is strongly dependent on the magnitude of VERASE), VCG=0V, VBB=0V. VD and VS can be held at 0V or at a higher voltage between 5V and 10V, so as to reduce the net voltage experienced during erase across dielectric film 565 a in areas such as 563 ( III. Self Aligned Split Channel Flash EEprom Cell with Field Plate Isolation A 2×2 array of Flash EEprom cells in accordance with another embodiment of this invention is shown in topological view in Split channel Flash EEprom transistor 700 a employs three conductive layers (floating gate 704, erase gates 730, 735, and control gate 709) formed in the same sequence as described in Section II in conjunction with the Flash EEprom transistor 600 a of The elimination of the thick isoplanar oxide inside the array of memory cells (this isoplanar oxide may still be retained for isolation between peripheral logic transistors) has several advantages: 1. The surface stress at the silicon-silicon dioxide boundary due to a prolonged thermal isoplanar oxidation cycle is eliminated inside the array, resulting in less leaky source and drain junctions and in higher quality gate oxides. 2. For a given cell width, the elimination of the isoplanar oxide allows the effective channel width W under floating gate 704 to extend all the way between the two edges 732 a, 762 a of the floating gate. By comparison, effective channel width W of transistor 600 a ( 3.
From capacitive coupling considerations (to be discussed in Section VI below) the efficiency of tunnel erase is higher in cells where coupling of the floating gate to the silicon substrate 763 is greatest. In transistor 700 a the entire bottom surface area of the floating gate is tightly coupled to the substrate 763 through the thin gate dielectric 764. By contrast, in transistor 600 a ( 4. The width of control gate 709 between its edges 744 and 774 defines channel width W2 of the series enhancement channel portion L2 ( In a memory array, source diffusion 701 and drain diffusion 702 can be formed in long strips. If transistor 500 a is used as the Eprom transistor, then source diffusion edge 721 is self aligned to the previously discussed sidewall spacer (not shown) while drain diffusion edge 723 is self aligned to edge 722 of floating gate 704 a. In areas between adjacent floating gates 704 a, 704 c the source and drain diffusion edges (721 x, 723 x in IV. Self Aligned Split Channel Flash EEprom Cell with Erase Confined to the Vertical Edges of the Floating Gate. Another embodiment of the self aligned split channel Flash EEprom of this invention can result in a cell which has a smaller area than cells 600 a and 700 a of the embodiments described in Sections II and III respectively. In this third embodiment the area for tunnel erase between the floating gate and the erase gate is confined essentially to the surfaces of the vertical sidewalls along the two edges of each floating gate. To best understand how cell 800 a of this embodiment differs from cell 700 a, a 2×2 array of cells 800 a, 800 b, 800 c and 800 d is shown in Cell 800 a has a floating gate 804 a formed in a first layer of heavily N+ doped polysilicon. This gate controls the transistor conduction in channel portion L1 ( The erase gates are insulated from control gate 809 by dielectric insulator 897, which is grown or deposited prior to deposition of erase gates 830, 835, 836. Tunnel erase dielectrics 831 a, 861 a are confined to the surface of the vertical edges 832 a, 862 a of the floating gate 804 a. Erase gate 830 also provides a field plate isolation over oxide 862 in the field between adjacent devices. The thicknesses of all conducting and insulating layers in structure 800 are approximately the same as those used in structure 700 a. However, because the erase gate is implemented here after, rather than before, the control gate, the fabrication process sequence is somewhat different. Specifically (see 1. Floating gates 804 a, 804 c are formed in long, continuous and narrow strips on top of gate oxide 864. The width of each such strip is L1 plus the extent of overlap of the floating gate over the drain diffusion. 2. Dielectric 867 is formed and the second conductive layer (N+ doped polysilicon or a silicide) is deposited. 3. Control gates 809 are defined in long narrow strips in a direction perpendicular to the direction of the strips of floating gates. The strips are etched along edges 844, 874, and insulated with relatively thick dielectric 897. 4. Edges 844, 874 (or the edges of insulator spacer 899 formed at both edges of control gate strip 809) are then used to etch dielectric 867 and then, in a self aligned manner, to also etch vertical edges 832 a and 862 a of the underlying floating gate strips, resulting in isolated floating gates which have exposed edges of polysilicon only along these vertical walls. 5. Tunnel dielectric films 831 a, 861 a are formed by thermal oxidation of these exposed surfaces. 6.
A third conductive layer is deposited, from which are formed erase gates 830 in long strips running in between and parallel to adjacent strips of control gates. These erase gates also serve as field isolation plates to electrically isolate adjacent regions in the memory array. Flash EEprom transistor 800 a can be implemented in conjunction with any of the split channel Eprom transistors of this invention (transistors 500 a and 1400) or with any of the prior art split gate Eprom transistors of Eitan, Samachisa, Masuoka or Harari. For example, an array of Flash EEprom transistors 800 a can be fabricated by adding a few process steps to the fabrication process for the split channel Eprom transistor 1400 ( Steps 1 through 10 are identical to steps 1 through 10 described in Section I.b. in conjunction with the manufacturing process for split channel Eprom transistor 1400. Steps 11, 12, and 13 are process steps 4, 5, and 6, respectively, described in this Section IV in conjunction with split channel Flash EEprom transistor 800 a. Cell 800 a results in a very small area of tunnel erase, which is also relatively easy to control (it is not defined by a mask dimension, but rather by the thickness of the deposited layer constituting the floating gates). For this reason, this cell is the most highly scalable embodiment of this invention. V. Self Aligned Split Channel Flash EEprom Cell with a Buried Erase Gate. A 2×2 array of Flash EEprom cells 900 a, 900 b, 900 c and 900 d in accordance with a fourth embodiment of this invention is shown in topological view in Transistor 900 a is a split channel Flash EEprom transistor having channel portions L1 and L2 formed by self alignment as in Eprom transistor 500 a, or in a non self aligned manner as in Eprom transistor 1400. Erase gate 930 is a narrow conductive strip sandwiched between floating gate 904 a on the bottom and control gate 909 on top. Erase gate 930 is located away from edges 932 a, 962 a of the floating gate. These edges therefore play no role in the tunnel erase, which takes place through tunnel dielectric 931 confined to the area where erase gate 930 overlaps floating gate 904 a. Erase gate 930 also overlaps a width We of the series enhancement channel portion L2. During read or programming, erase gate 930 is held at 0V, and therefore the channel portion of width We does not contribute to the read or program current. The only contribution to conduction in channel portion L2 comes from widths Wp and Wq, where the channel is controlled directly by control gate 909. Channel portion L1, however, sees conduction contributions from all three widths, Wp, Wq and We. Edges 932 a, 962 a of floating gate 904 a can be etched to be self aligned to edges 944, 974 respectively of control gate 909. This then permits the formation of channel stop field isolation 998, by implanting a p type dopant in the field regions not protected by the control gate or floating gate ( One advantage of cell 900 a is that erase gate strips 930, 936 can be made very narrow by taking advantage of controlled undercutting, for example by isotropic etching of the conductive layer forming these strips. This results in a small area of tunnel erase, which is insensitive to mask misalignment. Furthermore, channel widths Wp and Wq are also insensitive to mask misalignment. This embodiment of Flash EEprom can also be implemented in conjunction with prior art split channel Eprom cells such as the Eitan, Harari, Samachisa or Masuoka cells.
VI. Device Optimization Specifically, these are: CG=Capacitance between floating gate 1104 and control gate 1109. CD=Capacitance between floating gate 1104 and drain diffusion 1102. CB=Capacitance between floating gate 1104 and substrate 1163. CE=Capacitance between floating gate 1104 and erase gate 1130. CT=CG+CD+CB+CE is the total capacitance. Q is the net charge stored on the floating gate. In a virgin device, Q=0. In a programmed device Q is negative (excess electrons) and in an erased device Q is positive (excess holes). The voltage VFG on floating gate 1104 is related to the voltages VCG, VERASE, VD, VBB and to the charge Q according to the following equation:
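In the notation just defined (CT = CG + CD + CB + CE, with Q the net charge on the floating gate), the floating gate voltage takes the standard capacitive-divider form, which is presumably what equation (1) expresses; this is a reconstruction from the surrounding definitions, not a verbatim restatement of the original equation:

\[ V_{FG} = \frac{C_G V_{CG} + C_E V_{ERASE} + C_D V_D + C_B V_{BB} + Q}{C_T} \quad (1) \]

Substituting the erase conditions given in Section VI.a below (VCG = VD = VS = VBB = 0V) then yields the corresponding form of equation (2):

\[ V_{FG} = \frac{C_E V_{ERASE} + Q}{C_T} \quad (2) \]

The voltage across the tunnel dielectric is VERASE − VFG, which is largest when CE/CT is small and when Q is large and negative (a programmed cell), consistent with the Q/CT peak-stress component discussed in Section VI.d.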
In all prior art Eprom and Flash EEprom devices, as well as in embodiment 600 a of this invention, the dominant factor in CT is CG, the coupling to the control gate. However, in embodiments 700 a, 800 a and 900 a, CB is also a major contributor by virtue of the fact that the entire bottom surface of the floating gate is strongly coupled to the substrate. a. Electrical Erase During erase, the typical voltage conditions are VCG=0V, VD=0V, VS=0V, VBB=0V and VERASE=20V. Therefore, substituting in equation (1), The split channel Flash EEprom device can be viewed as a composite transistor consisting of two transistors T1 and T2 in series— Prior art split channel Flash EEprom devices erase with a single pulse of sufficient voltage VERASE and sufficient duration to ensure that VT1 is erased to a voltage below VT2 (curve b) in This invention proposes for the first time a scheme to take advantage of the full memory window. This is done by using the wider memory window to store more than two binary states and therefore more than a single bit per cell. For example, it is possible to store 4, rather than 2, states per cell, with these states having the following threshold voltages: Multistate memory cells have previously been proposed in conjunction with ROM (Read Only Memory) devices and DRAM (Dynamic Random Access Memory). In ROM, each storage transistor can have one of several fixed conduction states by having different channel ion implant doses to establish more than two permanent threshold voltage states. Alternatively, more than two conduction states per ROM cell can be achieved by establishing, with two photolithographic masks, one of several values of transistor channel width or transistor channel length. For example, each transistor in a ROM array may be fabricated with one of two channel widths and with one of two channel lengths, resulting in four distinct combinations of channel width and length, and therefore in four distinct conduction states. Prior art multistate DRAM cells have also been proposed where each cell in the array is physically identical to all other cells. However, the charge stored at the capacitor of each cell may be quantized, resulting in several distinct read signal levels. An example of such prior art multistate DRAM storage is described in IEEE Journal of Solid-State Circuits, February 1988, p. 27 in an article by M. Horiguchi et al. entitled “An Experimental Large-Capacity Semiconductor File Memory Using 16-Levels/Cell Storage”. A second example of prior art multistate DRAM is provided in IEEE Custom Integrated Circuits Conference, May 1988, p. 4.4.1 in an article entitled “An Experimental 2-Bit/Cell Storage DRAM for Macrocell or Memory-on-Logic Applications” by T. Furuyama et al. To take full advantage of multistate storage in Eproms it is necessary that the programming algorithm allow programming of the device into any one of several conduction states. First, it is required that the device be erased to a voltage VT1 more negative than the “3” state (−3.0V in this example). Then the device is programmed with a short programming pulse, typically one to ten microseconds in duration. Programming conditions are selected such that no single pulse can shift the device threshold by more than one half of the threshold voltage difference between two successive states. The device is then sensed by comparing its conduction current IDS with that of a reference current source IREF,i (i=0,1,2,3) corresponding to the desired conduction state (four distinct reference levels must be provided, corresponding to the four states). Programming pulses are continued until the sensed current (solid lines in Large memory systems typically incorporate error detection and correction schemes which can tolerate a small number of hard failures, i.e., bad Flash EEprom cells. For this reason the programming/sensing cycling algorithm can be automatically halted after a certain maximum number of programming cycles has been applied, even if the cell being programmed has not reached the desired threshold voltage state, indicating a faulty memory cell. There are several ways to implement the multi-state storage concept in conjunction with an array of Flash EEprom transistors. An example of one such circuit is shown in During programming, the four data inputs Ii (I0, I1, I2 and I3) are presented to a comparator circuit which also has presented to it the four sense amp outputs for the accessed cell. If the Di match the Ii, then the cell is in the correct state and no programming is required. If, however, the Di do not all match the Ii, then the comparator output activates a programming control circuit. This circuit in turn controls the bit line (VPBL) and word line (VPWL) programming pulse generators. A single short programming pulse is applied to both the selected word line and the selected bit line. This is followed by a second read cycle to determine whether a match between Di and Ii has been established. This sequence is repeated through multiple programming/reading pulses and is stopped only when a match is established (or earlier if no match has been established but a preset maximum number of pulses has been reached). The result of such a multistate programming algorithm is that each cell is programmed into any one of the four conduction states in direct correlation with the reference conduction states IREF,i.
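The pulse-and-verify sequence just described can be condensed into a short sketch. This is illustrative pseudocode, not the disclosed circuit: the voltage model is a software stand-in whose state thresholds echo the example VT1 values in the text, and the pulse step and maximum pulse count are hypothetical placeholders for the hardware rules stated above.

# Illustrative program/verify loop for one 4-state (2-bit) cell, following
# the pulse-and-verify sequence described above. The real device instead
# uses the VPWL/VPBL pulse generators and four reference-current sense
# amplifiers; all numeric values below are placeholders.

STATE_VT = [4.5, 2.0, -0.5, -3.0]  # example VT1 (volts) for states "0".."3"
PULSE_STEP = 0.5                   # VT1 shift per pulse, < half a separation
MAX_PULSES = 64                    # placeholder for the preset pulse limit

def sense_state(vt1: float) -> int:
    """State a sense-amplifier bank would report for a given VT1."""
    for state, threshold in enumerate(STATE_VT):
        if vt1 >= threshold:
            return state
    return 3  # below every threshold: fully erased, state "3"

def program_cell(vt1: float, target: int) -> tuple[float, bool]:
    """Pulse until the sensed state matches target (one of 0..3).

    The cell must first be erased to state "3"; a cell already past its
    target cannot be walked back by programming, so it fails here and is
    flagged for the system-level error correction described above.
    """
    for _ in range(MAX_PULSES):
        if sense_state(vt1) == target:  # Di match Ii: stop pulsing
            return vt1, True
        vt1 += PULSE_STEP               # one short programming pulse
    return vt1, False                   # report as a faulty cell

print(program_cell(-3.5, target=1))     # erased cell programmed to state "1"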
In fact, the same sense amplifiers used during programming/reading pulsing are also used during sensing (i.e., during normal reading). This allows excellent tracking between the reference levels (dashed lines in In actual fact, although four reference levels and four sense amplifiers are used to program the cell into one of four distinct conduction states, only three sense amplifiers and three reference levels are required to sense the correct one of four stored states. For example, in Note that the same principle employed in the circuit of c. Improved Charge Retention In the example above, states “3” and “2” are the result of net positive charge (holes) on the floating gate while states “1” and “0” are the result of net negative charge (electrons) on the floating gate. To properly sense the correct conduction state during the lifetime of the device (which may be specified as 10 years at 125° C.) it is necessary for this charge not to leak off the floating gate by more than the equivalent of approximately a 200 millivolt shift in VT1. This condition is readily met for stored electrons in this as well as all prior art Eprom and Flash EEprom devices. There is no data in the literature on charge retention for stored holes because, as has been pointed out above, none of the prior art devices concern themselves with the value of VT1 when it is more negative than VT2, i.e., when holes are stored on the floating gate. From device physics considerations alone it is expected that retention of holes trapped on the floating gate should be significantly superior to the retention of trapped electrons. This is because trapped holes can only be neutralized by the injection of electrons onto the floating gate. So long as the conditions for such injection do not exist, it is almost impossible for the holes to overcome the potential barrier of approximately 5.0 electron volts at the silicon-silicon dioxide interface (compared to a 3.1 electron volt potential barrier for trapped electrons). Therefore it is possible to improve the retention of this device by assigning more of the conduction states to states which involve trapped holes. For example, in the example above state “1” had VT1=+2.0V, which involved trapped electrons since VT1 for the virgin device was made to be VT1=+1.5V. If however VT1 of the virgin device is raised to a higher threshold voltage, say to VT1=+3.0V (e.g. by increasing the p-type doping concentration in the channel region 560 a in d. Intelligent Erase for Improved Endurance The endurance of Flash EEprom devices is their ability to withstand a given number of program/erase cycles. The physical phenomenon limiting the endurance of prior art Flash EEprom devices is trapping of electrons in the active dielectric films of the device (see the Wegener article referenced above). During programming, the dielectric used for hot electron channel injection traps some of the injected electrons. During erasing, the tunnel erase dielectric likewise traps some of the tunneled electrons. For example, in prior art transistor 200 ( A second problem with prior art devices is that during the erase pulse the tunnel dielectric may be exposed to an excessively high peak stress. This occurs in a device which has previously been programmed to state “0” (VT1=+4.5V or higher). Such a device has a large negative Q (see equation (2)). When VERASE is applied, the tunnel dielectric is momentarily exposed to a peak electric field with components from VERASE as well as from Q/CT (equations (2) and (3)). This peak field is eventually reduced when Q is reduced to zero as a consequence of the tunnel erase. Nevertheless, permanent and cumulative damage is inflicted through this erase procedure, which brings about premature device failure. To overcome the two problems of overstress and window closure, a new erase algorithm is disclosed, which can also be applied equally well to any prior art Flash EEprom device. Without such a new erase algorithm it would be difficult to have a multistate device since, from curve (b) in The sequence for a complete erase cycle of the new algorithm is as follows (see FIG. 12): 1. Read S. This value can be stored in a register file. (This step can be omitted if S is not expected to approach the endurance limit during the operating lifetime of the device.) 1a. Apply a first erase pulse with VERASE=V1+nΔV, n=0, pulse duration=t.
This pulse (and the next few successive pulses) is insufficient to fully erase all memory cells, but it serves to reduce the charge Q on programmed cells at a relatively low erase field stress, i.e., it is equivalent to a “conditioning” pulse. 1b. Read a sparse pattern of cells in the array. A diagonal read pattern, for example, will read m+n cells (rather than m×n cells for a complete read) and will include at least one cell from each row and one cell from each column in the array. The number N of cells not fully erased to state “3” is counted and compared with X. 1c. If N is greater than X (array not adequately erased), a second erase pulse is applied of magnitude greater by ΔV than the magnitude of the first pulse, with the same pulse duration, t. Read diagonal cells, count N. This cycling of erase pulse/read/increment erase pulse is continued until either N≦X or the number n of erase pulses exceeds nmax. The first of these two conditions to occur leads to a final erase pulse. 2a. The final erase pulse is applied to assure that the array is solidly and fully erased. The magnitude of VERASE can be the same as in the previous pulse or higher by another increment ΔV. The duration can be between 1t and 5t. 2b. 100% of the array is read. The number N of cells not fully erased is counted. If N is less than or equal to X, then the erase pulsing is completed at this point. 2c. If N is greater than X, then address locations of the N unerased bits are generated, possibly for substitution with redundant good bits at the system level. If N is significantly larger than X (for example, if N represents perhaps 5% of the total number of cells), then a flag may be raised, to indicate to the user that the array may have reached its endurance end of life. 2d. Erase pulsing is ended. 3a. S is incremented by one and the new S is stored for future reference. This step is optional. The new S can be stored either by writing it into the newly erased block or off chip in a separate register file. 3b. The erase cycle is ended. The complete cycle is expected to be completed with between 10 and 20 erase pulses and to last a total of approximately one second. The new algorithm has the following advantages: (a) No cell in the array experiences the peak electric field stress. By the time VERASE is incremented to a relatively high voltage, any charge Q on the floating gates has already been removed in previous lower voltage erase pulses. (b) The total erase time is significantly shorter than with the fixed VERASE pulse of the prior art. Virgin devices see the minimum pulse duration necessary to erase. Devices which have undergone more than 1×10⁴ cycles require only several more ΔV voltage increments to overcome dielectric trapped charge, which adds only several hundred milliseconds to their total erase time. (c) The window closure on the erase side (curve (b) in In a Flash EEprom memory chip it is possible to efficiently implement the new erase algorithm by providing on chip (or alternatively on a separate controller chip) a voltage multiplier to provide the necessary voltage V1 and voltage increments ΔV to nΔV, timing circuitry to time the erase and sense pulse durations, counting circuitry to count N and compare it with the stored value for X, registers to store address locations of bad bits, and control and sequencing circuitry, including the instruction set to execute the erase sequence outlined above.
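The complete erase cycle above can be condensed into the following sketch. It is illustrative only: the array is modeled as a list of per-cell erase-voltage requirements (a crude stand-in for the cell-to-cell spread caused by trapped charge), pulse durations (t, 1t-5t) are not modeled, the sparse read samples a subset rather than a true diagonal, and all numeric values are hypothetical placeholders, not figures from this disclosure.

# Illustrative sketch of the incremental erase algorithm (steps 1a-2c above).
# On chip, the helpers below correspond to the voltage multiplier, timing,
# counting and address-mapping circuitry described in the text.

import random

def intelligent_erase(cells, V1, dV, X, n_max):
    """cells: per-cell voltage needed to erase (hypothetical spread model).
    Returns the addresses of cells still unerased after the final pulse."""
    erased = [False] * len(cells)

    def pulse(v):                                    # apply VERASE = v
        for i, needed in enumerate(cells):
            if v >= needed:
                erased[i] = True

    def unerased_sparse():                           # 1b: sparse sample read
        return sum(not e for e in erased[::7])

    n = 0
    pulse(V1)                                        # 1a: conditioning pulse
    while unerased_sparse() > X and n < n_max:       # 1b/1c: read, compare with X
        n += 1
        pulse(V1 + n * dV)                           # 1c: next pulse, +dV each time
    pulse(V1 + (n + 1) * dV)                         # 2a: final erase pulse
    return [i for i, e in enumerate(erased) if not e]  # 2b/2c: full read, map bad bits

cells = [random.uniform(14.0, 19.0) for _ in range(1000)]  # hypothetical array
print(len(intelligent_erase(cells, V1=15.0, dV=0.5, X=0, n_max=20)))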
VII. Edge Tailored Flash EEprom with New Erase Mechanism Flash EEprom embodiments 600 a, 700 a, 800 a, and 900 a of this invention use tunnel erase across a relatively thick dielectric oxide grown on the textured surface of the polysilicon floating gate. Wegener (see the article referenced above) has postulated that asperities (small, bump-like, curved surfaces approximately 30 nanometers in diameter) enhance the electric field at the injector surface (in this case, the floating gate) by a factor of 4 to 5, thereby allowing efficient tunnel conduction to occur even across a relatively thick tunnel dielectric film (30 to 70 nanometers). Accordingly, there have been prior art efforts, through process steps such as high temperature oxidation of the polysilicon surface, to shape the surface of the polysilicon so as to accentuate these asperities. Although such steps are reproducible, they are empirical in nature, somewhat costly to implement, and not well understood. A new approach is disclosed in this invention which results in a highly reproducible, enhanced electric field tunnel erase which is more efficient than the asperities method yet simpler to implement in several EEprom and Flash EEprom devices. In this approach, the floating gate layer is deposited in a very thin layer, typically in the range between 25 and 200 nanometers. This is much thinner than the floating gates of all prior art Eprom, EEprom or Flash EEprom devices, which typically use a layer of polysilicon of thickness at least 200 nanometers, and usually more like 350 to 450 nanometers. The prior art polysilicon thickness is chosen to be greater than 200 nanometers primarily because of the lower sheet resistivity and better quality polyoxides provided by the thicker polysilicon. In certain prior art devices such as the Eitan split channel Eprom the floating gate also serves as an implant mask ( The reason for going to such a thin layer of polysilicon is that the edges of the floating gate in such a thin layer can be tailored through oxidation to form extremely sharp-tipped edges. The radius of curvature of these tipped edges can be made extremely small and is dictated by the thickness of the thin polysilicon film as well as the thickness of the tunnel dielectric grown. Therefore, tunnel erase from these sharp tips no longer depends on surface asperities but instead is dominated by the tip itself. As an illustration of this modification, consider Flash EEprom transistor 800 a ( By contrast, the cross section view of modified transistor 800M is shown in During oxidation of the thin vertical edges of floating gate 804M to form tunnel dielectric layers 861M, 831M, both top and bottom surfaces of the thin floating gate at its exposed edges are oxidized. This results in extremely sharp tips 870 l, 870 r being formed. These tips serve as very efficient electron injectors (shown by arrows across tunnel dielectrics 861M, 831M). Injected electrons are collected, as in transistor 800 a, by erase gates 835, 830, which overlap these sharp-tipped edges. Apart from the very efficient and highly reproducible injector characteristics inherent to the thin floating gate of transistor 800M, there is an additional benefit in that the capacitance between the floating gate at its tip and the erase gate is much smaller than the corresponding capacitance in all other embodiments, including transistor 800 a. Therefore, from equations (1), (2) and (3) in Section VI.a., since Two other points are worth noting.
First, the very thin floating gate should not be too heavily doped, to avoid penetration of the N+ dopant through polysilicon 804M and gate dielectric 864. Since floating gate 804M is never used as a current conductor, a sheet resistivity of between 100 and 10,000 Ohms per square is quite acceptable. Secondly, it is necessary to ensure that the sharp tips of the floating gate are adequately spaced apart or isolated from control gate 809M as well as from substrate 860 or the source or drain diffusions (not shown in Although a thin floating gate layer provides a relatively straightforward approach to achieving sharp-tipped edges after oxidation, other approaches are possible to achieve sharp-tipped edges even in a relatively thick floating gate layer. For example, in In the device of VIII. Flash EEprom Memory Array Implementations The Flash EEprom cells of this invention can be implemented in dense memory arrays in several different array architectures. The first architecture, shown in A second Flash EEprom memory array architecture which lends itself to better packing density than the array of The array can be erased in a block, or in entire rows by decoding the erase voltage to the corresponding erase lines. While the embodiments of this invention that have been described are the preferred implementations, those skilled in the art will understand that variations thereof may also be possible. In particular, the split channel Flash EEprom devices 600 a, 700 a, 800 a and 900 a can equally well be formed in conjunction with a split channel Eprom composite transistor 500 a having channel portions L1 and L2 formed in accordance with the one-sided spacer sequence outlined in
Twelve lucky final year undergraduate students from the University of Birmingham will get a once-in-a-lifetime opportunity to gain first-hand career insight and guidance from the following high profile Birmingham alumni through the Alumni Leadership Mentoring Programme. Sir Liam Donaldson Former Chief Medical Officer (MSc, Anatomy 1976; DSc, Honorary Degree, 2005) Chairman of the World Health Organisation’s patient safety initiative, Sir Liam Donaldson graduated from the University of Birmingham with an MSc in Anatomy and went on to pursue a career as a surgeon before being appointed as Chief Medical Officer in 1998. During his 12-year tenure, he helped shape groundbreaking legislation on stem cell research, infectious disease control and organ and tissue retention, and it was his vision that influenced the 2007 smoking ban. Knighted in 2002 for his service to healthcare, Sir Liam stepped down as the UK’s chief medical advisor in May 2010 and is currently the Chancellor of the University of Newcastle and a visiting lecturer at the University of Leicester. Baroness Doreen Massey Chair, The National Treatment Agency for Substance Misuse (PGCE, Education, 1962; BA, French Language & Literature, 1961) Health education specialist Baroness Doreen Massey has been a Labour member of the House of Lords since 1999 and is a former teacher and education advisor, having graduated from the University of Birmingham with a BA in French Language and Literature and a PGCE in Education. She was Director of the Family Planning Association from 1989 to 1994 and has published a range of books and training resources on health and sex education. She is currently Chair of The National Treatment Agency for Substance Misuse, a board member of UNICEF, a member of the national Advisory Council for Alcohol and Drug Education and President of the Brook Advisory Centres. Baroness Massey has campaigned for statutory PSHE and sex and relationships education, and chaired the all-party group on children. Sir Charles George Chairman, The Stroke Association (Interc BSc, Anatomy, 1962; MBChB 1965; MD, Medicine, 1974; DSc Honorary Degree, 2003) A native Brummie, Professor Sir Charles George has had a distinguished career in medical education and public health, graduating from Birmingham with an Intercalated BSc in Anatomy in 1962 and an MBChB in Medicine in 1965. He spent four years at Hammersmith Hospital and as a tutor at the Royal Postgraduate Medical School. He then joined the University of Southampton as Professor of Clinical Pharmacology and served six years as Dean of Medicine and three years as Dean of the Faculty of Medicine, Health and Biological Sciences. Professor Sir Charles was knighted for his services to medicine and medical education in 1998, was formerly Chairman of the Board of Science and Education at the British Medical Association, and is currently Chairman of The Stroke Association and Emeritus Professor of Clinical Pharmacology at Southampton University. He is also the former Medical Director of the British Heart Foundation, where he campaigned strongly to improve the health of the nation through work on the prevention of heart disease. Bupa Group Medical Director and Bupa Foundation Deputy Chair (MB ChB, Medicine, 1976) Recently appointed as Chair of the Guild Trustee Board, Dr Andrew Vallance-Owen graduated from the Birmingham Medical School in 1976 and is now accountable for the safety and quality of care of Bupa’s 11 million customers.
During his time at the University of Birmingham, Dr Vallance-Owen was President of the Guild of Students (1974/1975). A trained surgeon, he then joined the staff of the British Medical Association: he was Scottish Secretary and then became Head of Policy and a spokesman for both the BMA and the medical profession. At Bupa he has maintained his external profile; he is a keen advocate of improved doctor-patient communications, measurement of clinical performance and shared decision making. Managing Director, Goldman Sachs International (BCom, Industrial Economics & Business Studies, 1986) Partner and Managing Director at the global investment bank Goldman Sachs, Martin Devenish was awarded his Bachelor of Commerce in Industrial Economics and Business Studies from the University of Birmingham in 1986. He went on to spend four years at HSBC Asset Management before joining Goldman Sachs in 1992. Martin is an active member of the University’s Development Advisory Council and a great supporter of the Access to Birmingham (A2B) scheme, which offers scholarships and support to ensure fair access for the brightest and best potential students regardless of their personal circumstances. He is also a ‘lay’ member of the University’s Council. Chairman, Premier Oil Plc (BSc, Civil Engineering, 1968; DEng, Honorary Degree, 2008) A Chartered Civil Engineer and Fellow of the Royal Academy of Engineering and the Institution of Civil Engineers, Mike Welton graduated from the University of Birmingham with a Civil Engineering degree in 1968. He is currently the Chairman of Premier Oil Plc, an independent oil company founded in the Caribbean in 1934. Mike was previously Chairman of construction materials group Hanson plc, Chairman of support services provider Global Solutions Ltd and Chief Executive of international engineering and construction services group Balfour Beatty plc. Mike is also a non-executive director of High Speed Two (HS2) Ltd, a government organisation established to examine high speed rail to the West Midlands and beyond, which will revolutionise rail travel in the local area. Director, J Sainsbury’s (BSc, Physics, 1982) Group Commercial Director of J Sainsbury’s, Physics graduate Mike Coupe worked in senior management roles for many of the country’s biggest supermarket chains, including ASDA, Tesco and Iceland, before his appointment as Board Director of the Big Food Group. He was made a member of the Operating Board at J Sainsbury’s in 2004 and an Executive Director in 2007, and began his new role in 2010, in which he is responsible for trading, marketing, IT and online. Cilla Snowball CBE Group Chairman and Group CEO, AMV BBDO (BA, French, 1981) Group Chairman and Group Chief Executive of advertising and communications company AMV BBDO, French graduate Cilla Snowball has worked in advertising roles since leaving the University of Birmingham in 1981. Her career began as a trainee with Allen Brady & Marsh, before moving to Ogilvy & Mather as an Account Manager. She then joined AMV in 1992 and has held a senior management position for the past nine years at AMV BBDO, the largest advertising and communications group in the UK. Named the 50th most influential person in the media sector by The Guardian, Cilla was awarded a CBE for services to the advertising industry in January 2009. As well as sitting on the board of Business in the Community, Cilla became a ‘lay’ member of Council in 2010 and is on the Remuneration Committee.
Phyllida Lloyd CBE Director of plays, opera and film including Mamma Mia! on stage and screen (BA, English, 1979; DLitt, Honorary Degree, 2009) A leading British director and an English graduate of the University of Birmingham, Phyllida Lloyd directed the UK’s highest ever grossing film, Mamma Mia!, bringing her stage version, seen by a global audience of more than 45 million, to the silver screen. Phyllida spent five years working in BBC television drama after graduation before winning a bursary from the Arts Council of Great Britain to be a Trainee Director at the Wolsey Theatre in Ipswich, and a flourishing directing career followed, including staging the English National Opera’s production of Wagner’s Valkyrie at Glastonbury music festival. Awarded a CBE in the 2010 New Year’s Honours for services to drama, Phyllida is currently directing The Iron Lady, a biopic of Margaret Thatcher, and is also a supporter of the University’s new auditorium build. Actress (BA, Drama & Theatre Arts, 1988) Household name and Laurence Olivier Award winner for Best Actress in Much Ado About Nothing, Tamsin Greig graduated from the University of Birmingham with a degree in Drama and Theatre Arts in 1988. Her professional career began as an administrator for The Family Planning Association; acting roles in Green Wing, Black Books and Love Soup followed. She is best known as the voice of Debbie Aldridge in the BBC radio series The Archers, although her character spends a lot of time living in Hungary due to Tamsin’s other work commitments. Chorus Director, City of Birmingham Symphony Choruses (DMus, Honorary Degree, 2008) Simon Halsey is one of the world’s leading conductors of choral repertoire, regularly conducting prestigious orchestras and choirs worldwide. Halsey holds the position of Chief Conductor of the Berlin Radio Choir, frequently collaborating with such conductors as Sir Simon Rattle, Claudio Abbado and Marek Janowski. He has been Chorus Director of the CBSO Chorus for over 25 years, and works closely there with the orchestra’s Music Director Andris Nelsons. He is in his seventh season as Principal Conductor, Choral Programme for The Sage Gateshead and works on numerous choral projects each season at the venue, including regular concerts conducting the Northern Sinfonia. Chief Executive, Arts Council England (BA, English, 1982) Chief Executive of the Arts Council, English graduate Alan Davey is well known for his passionate advocacy of the arts and has worked at the Department of Health, the Department of National Heritage and the Department for Culture, Media and Sport. He was responsible for designing the National Lottery, which has created more than 2,600 UK millionaires since it began in 1994. Arts Council England invests in artistic experiences, including theatre, music, literature, dance and photography, to enrich people’s lives. He is also a visiting scholar at the University of Maryland. Baroness Patience Wheatcroft Former Editor of the Wall Street Journal Europe (LLB, Law, 1972) A former journalist and successful businesswoman, Baroness Wheatcroft graduated from the University of Birmingham with an LLB Law degree and has since worked on several national newspapers, most recently as Editor-in-Chief of the Wall Street Journal Europe.
A prize-winning reporter, winner of the Wincott Award for Senior Financial Journalism and London Press Club Business Journalist of the Year, Baroness Wheatcroft launched Retail Week, a specialist trade magazine, in 1988, and is a former Business Editor of The Times and a former Sunday Telegraph Editor. She has recently been appointed as a Conservative life peer and is a visiting fellow at Oxford University’s Centre for Corporate Reputation and a member of the British Olympic Association Advisory Board.
Low-Level Laser Therapy Supported Surgical Treatment of Bisphosphonate Related Osteonecrosis of Jaws: A Retrospective Analysis of 11 Cases. Abstract Objective: The aim of this study is to evaluate and report on low-level laser therapy (LLLT) supported medical-surgical treatment outcomes of 11 patients with bisphosphonate related osteonecrosis of the jaws (BRONJ) lesions. Background data: BRONJ is a severe clinical condition which adversely affects patients' lives. Even though various treatment modalities have been proposed, the ideal approach still remains to be debated. LLLT stands out among supportive approaches because of its favorable effects on tissue healing. Materials and methods: Eleven patients diagnosed with Stage II or III lesions (American Association of Oral and Maxillofacial Surgeons [AAOMS] classification) were included in the study. All patients received LLLT applications during the postoperative period in addition to medical and surgical treatment. Laser applications covering the entirety of the surgical site were performed with a GaAlAs diode laser with the following parameters: 808 nm wavelength, 0.5 W power, continuous wave, noncontact mode at 0.5-1 cm distance from the oral mucosa, spot size 0.28 cm2 (R=6 mm), for 3 sec per point (10 sec per cm2), and energy density of 5 J/cm2 (energy per point, 1.4 J). Results: Elimination of previously recorded symptoms and a stable mucosal closure were achieved in all patients. Primary healing was achieved in seven patients and a secondary healing course was observed in four patients. The positive outcomes obtained persisted over the follow-up periods. Conclusions: Treatment of advanced BRONJ lesions with a combination of antibiotic therapy, surgical removal of the lesion, and consecutive low-level diode laser applications provided favorable results in all patients. In consideration of our findings, it can be assumed that LLLT may serve as a safe and effective adjunct to medical-surgical treatment of BRONJ lesions.
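As an arithmetic check on the dosimetry quoted in the abstract above, the per-point energy is power times exposure time and the fluence is energy divided by spot area. Note a small internal inconsistency in the quoted figures: at 0.5 W, the stated 1.4 J per point corresponds to an effective exposure of 2.8 sec rather than the quoted 3 sec (which would give 1.5 J):

\[ E = P\,t = 0.5\ \mathrm{W} \times 2.8\ \mathrm{s} = 1.4\ \mathrm{J}, \qquad \Phi = \frac{E}{A} = \frac{1.4\ \mathrm{J}}{0.28\ \mathrm{cm^2}} = 5\ \mathrm{J/cm^2} \]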
Laser GaAlAs (860 nm) photobiomodulation for the treatment of bisphosphonate-induced osteonecrosis of the jaw. School of Dentistry, Graduation Program, Federal University of Bahia, Salvador, Bahia, Brazil. The aim of this article is to report a case of bisphosphonate-induced osteonecrosis of the jaw (ONJ-BP) treated by curettage of the necrotic bone, low-level laser therapy (LLLT), and antibiotic therapy. ONJ-BP is characterized by painful ulcerations of the oral mucosa, with necrotic bone that does not heal within 8 weeks after diagnosis, and is often difficult to treat. No definitive standard of care has been established for ONJ-BP. LLLT improves wound healing, relieves pain, and appears to be a promising treatment modality for patients with ONJ-BP. MATERIALS AND METHODS: An 82-year-old man taking intravenous bisphosphonate presented with ONJ-BP after tooth extraction. The patient was treated by LLLT using a GaAlAs diode laser with the following settings: wavelength, 860 nm; 70 mW; continuous wave; and spot size 4 mm(2). An energy density of 4.2 J/cm(2) per point was applied in a punctual contact manner every 48 h for 10 days, in association with antibiotic therapy and curettage of the necrotic bone. Reduction in painful symptoms was reported after the second irradiation session, and tissue healing was complete at the end of the third week following oral curettage. The patient was followed up for 12 months and exhibited good oral health and quality of life. The therapeutic protocol used in this study had a positive effect on tissue healing and remission of painful symptoms, resulting in better oral health and quality of life for the patient. Surgical Approach and Laser Applications in BRONJ Osteoporotic and Cancer Patients. Oral Medicine, Pathology and Laser-Assisted Surgery Unit and Section of Dentistry, Department of ENT/Dental/Ophthalmological and Cervico-Facial Sciences, University of Parma, Via Gramsci, 14-43100 Parma, Italy. Bisphosphonate-related osteonecrosis of the jaw (BRONJ) has been reported with increasing frequency in the literature over recent years, but its therapy is still a dilemma. One hundred ninety patients affected by BRONJ were observed between January 2004 and November 2011, and 166 treated sites were subdivided into five groups on the basis of the therapeutic approach (medical or surgical, traditional or laser-assisted, with or without low-level laser therapy (LLLT)). Clinical success was defined for each treatment performed as clinical improvement or complete mucosal healing. The combination of antibiotic therapy, conservative surgery performed with an Er:YAG laser, and LLLT applications showed the best results for cancer and noncancer patients. A nonsurgical approach performed on 69 sites induced an improvement in 35 sites (50.7%) and complete healing in 19 sites (27.5%), while a surgical approach on 97 sites induced an improvement in 84 sites (86.6%) and complete healing in 78 sites (80.41%). Improvement and healing were recorded in 31 (81.5%) and 27 (71.5%) of the 38 BRONJ sites treated in noncancer patients, and in 88 (68.75%) and 69 (53.9%) of the 128 in cancer patients. Photomed Laser Surg. 2011 Jan 16. [Epub ahead of print] Observation of Pain Control in Patients with Bisphosphonate-Induced Osteonecrosis Using Low Level Laser Therapy: Preliminary Results. Romeo U, Galanakis A, Marias C, Vecchio AD, Tenore G, Palaia G, Vescovi P, Polimeni A. Department of Oral Sciences, “Sapienza” University of Rome, Rome, Italy. Abstract Background: Bisphosphonate-related osteonecrosis of the jaw (BRONJ) is an adverse side effect associated with bisphosphonate (BP) therapy, especially when parenteral BP administration is used. Patients affected by BRONJ present wide areas of exposed necrotic bone, particularly after surgical oral procedures. The main symptom is pain that is poorly controlled by common analgesic drugs. Recently, many studies have pointed to the beneficial effect of low-level laser therapy (LLLT) in pain reduction for many pathological conditions. The purpose of this study is to investigate whether LLLT could be helpful in managing BRONJ by reducing the problems associated with this condition and the use of analgesic drugs. Methods: Twelve patients affected by BRONJ were monitored at the Complex Operative Unit of Oral Pathology. Among these patients, only seven reported pain in necrotic areas and were recruited for LLLT. Laser applications were performed with a double diode laser simultaneously emitting at two different wavelengths (λ = 650 nm and λ = 904-910 nm, spot size = 8 mm). All of the patients were irradiated with a fluence of 0.053 J/cm(2) for 15 min five times over a period of 2 weeks, in a non-contact mode, approximately 1 mm from the pathologic area. The patients' maximum and minimum pain levels were recorded using a numeric rating scale (NRS) evaluation before and after the treatment. Statistical analysis was performed using the Kruskal-Wallis test.
Results: Six patients showed significant pain reduction, and only one patient indicated a worsening of the symptoms, probably related to a reinfection of the BRONJ site that occurred during the study. A statistically significant difference (p < 0.05) was found between the NRS rates before and after the protocol. Conclusions: This pilot study suggests that LLLT may be a valid technique to support the treatment of BRONJ-related pain, even though the low number of cases in this study does not permit any conclusive consideration. Lasers Med Sci. 2010 Jul 29. [Epub ahead of print] Low-level laser therapy supported teeth extractions of two patients receiving IV zoledronate. Kan B, Altay MA, Taşar F, Akova M. Faculty of Dentistry, Department of Oral and Maxillofacial Surgery, Hacettepe University, Ankara, Turkey. BRONJ (bisphosphonate-related osteonecrosis of the jaws) is a frequently encountered disease, particularly in the maxillofacial region, and a consequence of bisphosphonate use. Treatment of BRONJ remains controversial, as the efficacy of medical and surgical approaches, as well as of combinations of these methods with supportive treatments, has not been clearly demonstrated in the literature. In recent years, laser usage, alone or in combination with the main therapy methods, has become popular for the treatment of bisphosphonate-related osteonecrosis of the jaws. In this article, we present the successful management of two dental patients who had high potential for BRONJ development as a result of chemo- and radiotherapy combined with IV zoledronic acid application. Multiple consecutive tooth extractions followed by primary wound closure and LLLT applications were performed under high-dose antibiotic prophylaxis. Satisfactory wound healing in both the surrounding soft and hard tissues was achieved. LLLT application combined with atraumatic surgical interventions under antibiotic prophylaxis is a preferable approach in patients with a risk of BRONJ development. An adjunctive effect of LLLT, in addition to careful infection control, on preventing BRONJ was reported. Minerva Stomatol. 2010 Apr;59(4):181-213. Bisphosphonate-Related Osteonecrosis of the Jaw (BRONJ) therapy. A critical review. Vescovi P, Nammour S. Director of EMDOLA (European Master Degree on Oral Laser Applications), Unit of Oral Pathology and Medicine and Laser-Assisted Oral Surgery, Section of Dentistry. Bisphosphonate-related osteonecrosis of the jaw (BRONJ) is an area of exposed bone in the maxillo-facial region that does not heal within 8 weeks after identification by a health care provider, in a patient who was receiving or had been exposed to bisphosphonate therapy (BPT) without previous radiation therapy to the craniofacial region. A low risk of ONJ is connected with oral BPT used in the treatment of osteopenia, osteoporosis and Paget's disease (from 0.01% to 0.04%), while a higher risk is associated with intravenous (IV) administration in the treatment of multiple myeloma and bone metastases (from 0.8% to 12%). The management of BRONJ currently is a dilemma. No effective treatment has yet been developed and interrupting BPT does not seem to be beneficial. Temporary suspension of BPs offers no short-term benefit, whilst long-term discontinuation (if systemic conditions permit it) may be beneficial in stabilizing sites of ONJ and reducing clinical symptoms.
The use of oral antimicrobial rinses in combination with oral systemic antibiotic therapy (penicillin, metronidazole, quinolones, clindamycin, doxycycline, erythromycin) is indicated for Stages I and II of Ruggiero's staging. The role of hyperbaric oxygen therapy is still unclear, but some benefits of this treatment have recently been described in association with discontinuation of BPT and conventional therapy (medical and/or surgical). Surgical treatment, in accordance with the AAOMS Position Paper, is reserved for patients affected by Stage III BRONJ, although in the latest version (2009) superficial debridement is also indicated in Stage II to relieve soft tissue irritation (for lesions unresponsive to antibiotic treatment). Aggressive surgical treatment may occasionally result in even larger areas of exposed and painful infected bone. Surgical debridement or resection in combination with antibiotic therapy may offer long-term palliation with resolution of acute infection and pain. Mobile segments of bony sequestrum should be removed without exposing unaffected bone. If pathological fractures or complete mandibular involvement are observed, and if the medical condition of the patient allows it, the affected bone portion may be resected and primary bone reconstruction or a revascularized graft may be carried out. Ozone therapy in the management of bone necrosis, or in extraction sites during and after oral surgery in patients treated with BPs, may stimulate cell proliferation and soft tissue healing. Laser applications at low intensity (low-level laser therapy, LLLT) have been reported in the literature for the treatment of BRONJ. The biostimulant effects of laser irradiation improve the reparative process, increase the inorganic matrix of bone and the osteoblast mitotic index, and stimulate the growth of lymphatic and blood capillaries. The laser can also be used for conservative surgery, whereby necrotic bone is vaporized until healthy bone is reached. The Er:YAG laser wavelength has a high degree of affinity for water and hydroxyapatite, hence both soft and bone tissues can be easily treated. An additional advantage of the Er:YAG laser is its bactericidal and possible biostimulatory action, accelerating the healing of both soft and bone tissues in comparison to conventional treatments. Long-term, prospective studies are required to establish the efficacy of drug holidays in reducing the risk of BRONJ for patients receiving oral BPs, even if it has been suggested that BPT may be discontinued for three months before surgical procedures and bone turnover markers (CTx, NTx, PTH, 1,25-dihydroxyvitamin D) may be checked. However, it must be recognized that interindividual variability, gender, age, physical activity, and seasonal and circadian variation can make these assays difficult to interpret, and more research is needed. Laser application (LLLT and laser surgery) nowadays appears to be a promising modality of BRONJ treatment, being safe and well tolerated, and it permits the minimally invasive treatment of early stages of the disease. Photomed Laser Surg. 2010 Apr;28(2):179-84. Effect of low-level laser irradiation on bisphosphonate-induced osteonecrosis of the jaws: preliminary results of a prospective study. Scoletta M, Arduino PG, Reggio L, Dalmasso P, Mozzati M. Oral Surgery Unit, Dentistry Section, Department of Clinical Physiopathology, University of Turin, Turin, Italy.
OBJECTIVE: The aim of this study was to detail the clinical efficacy of low-level laser therapy (LLLT) for the management of bisphosphonate-induced osteonecrosis of the jaws (ONJ-BP). BACKGROUND: ONJ-BP is the term that has recently emerged to describe a significant complication in a subset of patients receiving drugs such as zoledronic acid, pamidronate, and alendronate. No definitive standard of care or agreed guidelines have been established for ONJ-BP, and there is currently no consensus on the correct approach. MATERIALS AND METHODS: The investigators studied a prospective cohort of 20 patients affected by ONJ-BP, who received biostimulation with a pulsed diode laser (GaAs). Patients were exposed to a 904-nm infrared laser (50 kHz, 28.4 J/cm² energy density, 40% duty cycle, spot size 0.8 cm). Outcome variables were the size of lesions, edema, visual analogue pain score, and the presence of pus, fistulas, and halitosis. Preoperative results were compared with the postoperative outcome and statistically evaluated. RESULTS: Four weeks after LLLT, a statistically significant difference was observed for reported pain (p = 0.0001), clinical size (p = 0.0034), edema (p = 0.0005), and presence of pus and fistulas (p = 0.0078 and p = 0.03, respectively). CONCLUSION: This study suggests that LLLT is a promising treatment modality for patients with ONJ-BP, being safe and well tolerated, especially for patients who require conservative treatment. Of course, this needs to be confirmed in larger, randomized controlled studies in different clinical settings.
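The pre/post outcomes reported above are paired measurements, so comparisons of this kind are typically made with a Wilcoxon signed-rank test for ordinal scores (such as pain on a visual analogue scale) and an exact McNemar test for binary findings (such as presence of pus). A minimal sketch in Python, using hypothetical values rather than the patient-level data from the study:

    # Hypothetical paired pre/post values for illustration only; these are
    # not the patient-level data from the Scoletta et al. study.
    from scipy.stats import wilcoxon, binomtest

    vas_before = [8, 7, 9, 6, 8, 7, 9, 5, 8, 6]  # VAS pain at baseline
    vas_after = [3, 2, 4, 2, 5, 3, 4, 1, 3, 2]   # VAS pain 4 weeks after LLLT

    # Wilcoxon signed-rank test for paired ordinal measurements.
    stat, p_pain = wilcoxon(vas_before, vas_after)
    print(f"VAS pain: W={stat:.1f}, p={p_pain:.4f}")

    # Exact McNemar test for a paired binary outcome (pus present: 1/0).
    # Only discordant pairs matter: b = resolved, c = newly appeared.
    pus_before = [1, 1, 1, 0, 1, 1, 0, 1, 1, 1]
    pus_after = [0, 0, 1, 0, 0, 0, 0, 0, 1, 0]
    b = sum(x == 1 and y == 0 for x, y in zip(pus_before, pus_after))
    c = sum(x == 0 and y == 1 for x, y in zip(pus_before, pus_after))
    p_pus = binomtest(b, b + c, 0.5).pvalue  # exact binomial on discordant pairs
    print(f"Pus: {b} resolved vs {c} appeared, McNemar p={p_pus:.4f}")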
Lasers Med Sci. 2010 Jan;25(1):101-13. Epub 2009 Jun 19. Surgical approach with Er:YAG laser on osteonecrosis of the jaws (ONJ) in patients under bisphosphonate therapy (BPT). Vescovi P, Manfredi M, Merigo E, Meleti M, Fornaini C, Rocca JP, Nammour S. Oral Medicine and Laser-Assisted Surgery Unit, Section of Dentistry, Department of ENT/Dental/Ophthalmological and Cervico-Facial Sciences, EMDOLA (European Master Degree on Oral Laser Applications), University of Parma, Parma, Italy. Osteonecrosis of the jaw (ONJ) in patients on long-term bisphosphonate therapy (BPT) has been reported with increasing frequency in the literature over the past 4 years. Therapy for this condition is still a dilemma. Temporary suspension of BPT offers no short-term benefits; hyperbaric oxygen has no proven efficacy and therefore is not recommended. Intermittent or continuous antibiotic therapy with surgical debridement can be beneficial in palliating the symptoms. The Er:YAG laser can be used to eliminate necrotic bone portions by partial or total resection as an alternative to conventional rotary devices. In our study, 91 patients affected by ONJ-BP lesions, with a total of 115 ONJ sites, were observed between January 2004 and May 2008 (Department of Odontostomatology, University of Parma). Fifty-five ONJ sites were considered for this study in four different groups, retrospectively identified on the basis of the treatment performed (G1-G4). G1: 13 ONJ-BP sites were treated with medical therapy (amoxicillin 1 g three times daily per os with metronidazole 250 mg twice daily per os) for at least 2 weeks; G2: 17 ONJ-BP sites received medical treatment in association with cycles of low-level laser therapy (LLLT) performed with an Nd:YAG laser (1,064 nm) once a week for 2 months; G3: 13 ONJ-BP sites were surgically treated (sequestrectomy of necrotic bone, debridement, corticotomy/surgical removal of alveolar and/or cortical bone); G4: 12 ONJ-BP sites were treated with surgical therapy performed with an Er:YAG laser (2,940 nm) in association with LLLT. Clinical success was defined for each treatment as: (a) complete mucosal healing free from signs and symptoms (classified as stage "0"), or (b) transition from a higher to a lower stage (Ruggiero staging) for at least 3 months. All the ONJ-BP sites treated with the Er:YAG laser (G4 group) showed clinical improvement (100%), and 87.5% of sites achieved complete mucosal healing, with a mean follow-up of 13 months. The results obtained in G4 are extremely significant in comparison with those obtained by medical treatment alone or by a traditional surgical approach. Thanks to the high degree of affinity of this wavelength for water and hydroxyapatite, both soft and bone tissues can be easily treated. This technique can also be used for conservative operations whereby necrotic bone is vaporized until healthy bone is reached. An additional advantage of the Er:YAG laser is its bactericidal and possible biostimulatory action, accelerating the healing of both soft and bone tissues in comparison with conventional treatments. In conclusion, from our experience, an early conservative surgical approach with the Er:YAG laser associated with LLLT can be considered more effective for BP-induced ONJ than medical therapy or other conventional techniques.
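The between-group contrast drawn in this abstract (Er:YAG surgery plus LLLT versus medical therapy alone) is the kind of comparison usually checked with Fisher's exact test, given the small number of sites per group. A sketch follows; the G4 group size (12 sites) is taken from the abstract, but the per-group healed counts below are illustrative assumptions, not figures from the paper:

    # Illustrative 2x2 comparison of complete-healing rates between two
    # treatment groups. The counts are assumptions for the sketch; only
    # the group sizes and percentages quoted above come from the abstract.
    from scipy.stats import fisher_exact

    healed_g4, total_g4 = 10, 12  # assumed: Er:YAG + LLLT (G4)
    healed_g1, total_g1 = 2, 13   # assumed: medical therapy alone (G1)

    table = [
        [healed_g4, total_g4 - healed_g4],
        [healed_g1, total_g1 - healed_g1],
    ]
    odds_ratio, p_value = fisher_exact(table)
    print(f"OR = {odds_ratio:.1f}, two-sided p = {p_value:.4f}")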
Ther Clin Risk Manag. 2009;5:217–227. Published online 2009 March 26. PMCID: PMC2697532. Biphosphonates-related osteonecrosis of the jaw: Clinical and physiopathological considerations. Alberto Borgioli,1 Christian Viviani,1 Marco Duvina,1 Leila Brancato,1 Giuseppe Spinelli,1 Maria Luisa Brandi,2,3 and Paolo Tonelli1. 1Department of Odontostomatology, Dental School; 2Department of Internal Medicine; 3DeGene Spin-off, Medical School, University of Florence, Florence, Italy.

Since Marx first related osteonecrosis of the jaw to bisphosphonate administration, studies describing its clinical features and its drug and surgical therapies have filled the literature. The literature has also demonstrated the correlation between chronic bisphosphonate use and the onset of osteonecrosis of the jaw. Nitrogen-containing bisphosphonates are widely used for the management of metastatic cancer, for the prevention and treatment of osteoporosis, for the treatment of Paget's disease, and for the management of acute hypercalcemia. In our experience, the treatment of BRON-J lesions is difficult and prolonged. For this reason, in order to avoid these complications, it is mandatory to perform risk staging in patients who are to undergo bisphosphonate administration. When pharmacologic treatment with antibiotics and local antiseptics cannot control the development of BRON-J complications, clinicians should perform radical surgical treatment, such as resection of the involved bone. Keywords: osteonecrosis of the jaw, bisphosphonates, BRON-J

Osteonecrosis of the jaw is a chronic osteomyelitis of multifactorial origin, connected to both local and systemic factors. The relevant systemic factors capable of influencing the development of osteonecrosis of the jaw encompass immunosuppression, chemotherapy, corticosteroid therapy, and endocrine diseases. Bisphosphonate-related osteonecrosis of the jaw (BRON-J) in oncologic patients treated with intravenous bisphosphonates was an unknown clinical entity until 2003, when Marx described 36 cases of BRON-J in patients affected by malignant tumors.1 Bisphosphonates are an important group of drugs for the treatment of metabolic and oncologic pathologies involving the skeletal system. Bisphosphonates act by inhibiting osteoclastic bone resorption. The drugs most commonly used in the prevention and therapy of osteoporosis are alendronate, risedronate, ibandronate, and clodronate. Pamidronate and zoledronate are used in the prophylaxis of bone complications and in the hypercalcemia associated with multiple myeloma and with metastatic bone disease due to breast and prostatic cancer. All these substances are characterized by high potency and selectivity. The literature now demonstrates the correlation between chronic bisphosphonate use and the onset of osteonecrosis of the jaw.

BRON-J: history and definition

Since Marx's study, other studies on BRON-J have been published. In 2004, Ruggiero and colleagues published 63 cases of BRON-J, the majority related to intravenous bisphosphonate administration in cancer patients, with only a few patients treated with oral bisphosphonates for osteoporosis.2 In 2005, Marx published 119 cases of BRON-J and correlated them to the type of drug used, the invasiveness of the oral treatments, the dose, and the duration of use of a given drug.3 In the same year, scientific societies published the first position paper on the topic. The American Academy of Oral Medicine described the clinical manifestations of these lesions and suggested potential clinical approaches to prevent and treat the affected patients.4 Several authors later reported extensive reviews of myeloma and metastatic cancer cases treated with intravenous bisphosphonates, correlating the extension and evolution of this complication to the type of drug and to the length of treatment.5–7 Intravenous bisphosphonates became a standard therapy for the control of complications in metastatic bone disease, such as pain, local compression, spontaneous fractures, and hypercalcemia.
In several clinical studies, nitrogen-containing molecules (ibandronate, pamidronate, zoledronate) were shown to be more effective than clodronate in controlling manifestations of systemic malignant bone disease, with zoledronate being the most potent drug in reducing the extension of bone lesions and in delaying the development of the first bone metastasis.8,9 Despite the high risk of BRON-J development in oncologic patients, the American Society of Clinical Oncology10 recommends the use of zoledronic acid even in patients with asymptomatic metastases or progressive disease.11 In 2006, significant data emerged from an American Association of Oral and Maxillofacial Surgeons position paper that reported a substantial incidence of BRON-J, depending on prolonged bisphosphonate treatment together with other related risk factors, such as inflammatory conditions of the teeth and alveolar bone.12 This is especially true for patients suffering from multiple myeloma and breast or prostatic cancer. The American Association of Oral and Maxillofacial Surgeons also set out a clear disease staging, from non-visible oral lesions to more severe clinical pictures, such as the presence of bone sequestra and osteolytic complications of the jaw, proposing different therapeutic protocols based on the stage of the pathology.12 The American Association of Oral and Maxillofacial Surgeons12 established universal criteria for the diagnosis of BRON-J, met when three features are present:
- 1) Previous or ongoing bisphosphonate use;
- 2) Exposed necrotic bone of the jaws for more than eight weeks;
- 3) No history of radiotherapy of the maxillofacial region.
A strict correlation between BRON-J and chronic administration of bisphosphonates, with an incidence ranging from 0.8% to 12%, is a well-recognized phenomenon.13 Until 2002, however, the incidence was less than a single case out of 10,000 treated patients,1 and these data refer to patients treated with non-nitrogen-containing bisphosphonates, such as etidronate or clodronate, at doses used in the therapy of osteoporosis. With the introduction of aminobisphosphonates (risedronate, zoledronate, ibandronate, and alendronate), more powerful in inhibiting bone resorption and in preventing osteoporotic fractures, the incidence of this complication grew to a relevant proportion of patients, especially those with multiple myeloma or metastatic breast, prostatic, or kidney cancer. As suggested by the American Society of Clinical Oncology,14 zoledronate and pamidronate show high potency in inhibiting bone resorption and are the therapies of choice in the treatment of malignant disorders of the skeleton. The potency of these molecules, together with their intravenous administration at high dosage in oncologic patients, is the basis for the high incidence of BRON-J in these subjects compared with osteoporotic patients treated for the prevention of fragility fractures.15 Another potential factor in the development of BRON-J is the aminobisphosphonates' affinity for hydroxyapatite crystals, with zoledronate showing the highest affinity among the molecules of this group. The American Association of Oral and Maxillofacial Surgeons12 pointed to other potential risk factors for BRON-J onset, such as systemic corticosteroid therapy, smoking, alcohol, poor oral hygiene, chemotherapy, radiotherapy, diabetes, and clotting disorders.
The permissive local factors are oral surgical treatments, inflammatory lesions, and excessive pressure of a removable denture on thin mucosa.16 There is no doubt that many factors must combine in BRON-J development, such as prolonged intravenous bisphosphonate administration and chronic or acute periodontitis, both responsible for medullary osteomyelitis of the jaw.3 The higher incidence in females than in males could be attributed to longer exposure to oral bisphosphonate therapy for osteoporosis and to the higher incidence of breast cancer17 in women. From a pathological point of view, BRON-J, like osteomyelitis, begins in the undifferentiated connective bone tissue, in the vessels of the Haversian walls, and in the bone marrow spaces. Progression of the process towards the cortical bone and the periosteum leads to the concurrent presence of several anatomopathologic aspects of the lesion: osteolysis associated with exudation or with weak growth of granulation tissue; osteonecrosis with slow but progressive demarcation of sequestra; suppurating oral/extra-oral fistulae caused by superinfection of necrotic tissue; absence of bone remodelling, hence bone condensation at the border of the sequestrum; and hypotrophy or atrophy following loss of the bone sequestra, with poor coverage of the defect by soft tissues. According to the American Association of Oral and Maxillofacial Surgeons position paper, BRON-J patients can be staged in two groups: patients treated with aminobisphosphonates with no exposed bone segments (patients at risk of BRON-J development), and patients presenting BRON-J with exposed and necrotic bone segments. Bone exposure for more than eight weeks can worsen the clinical picture. At first, the exposed areas, located in the lower jaw and of varying size (from a post-extraction socket to larger or multiple areas), remain asymptomatic and show no signs of inflammation. Later, the necrotic and exposed bone areas cause pain due to acute inflammation of the surrounding soft tissues. The mucosa is reddish, swollen, bleeding, and strongly painful on light pressure. Teeth close to the involved bone are often mobile, and local reactive lymphadenopathy may be noted. The clinician can still manage this stage of the pathology with conservative procedures and medical therapy, including antibacterial agents to fight infections involving the exposed and necrotic bone.12 Purulent debris is present in intraoral abscess cavities (if the purulent swelling is contained by the periosteum and then by the perimaxillary muscles) or in extraoral collections called perioral phlegmons (if the osteolytic area extends beyond the perimaxillary muscle insertions). In the first case, purulent material drains into the oral cavity. In the second case, purulent material drains into preconstituted anatomical spaces delimited by the connective tissue layers of the neck. Maxillary phlegmons can involve the canine or buccal spaces. Mandibular phlegmons can involve the submental, submaxillary, sublingual, or submandibular spaces. Inflammation exceeding these anatomical limits can spread (by contiguity or through the lymphatic system) to secondary spaces such as the pterygomandibular, lateropharyngeal, masseteric, and pterygomaxillary spaces.18 The different thickness of the cortices explains the earlier externalization when the pathologic process is located in the upper jaw. In the lower jaw, the osteolytic damage tends to become deeper, sometimes reaching the inner border of the mandible (Figures 1, 2).
The involved bone can fracture spontaneously because of its reduced elasticity. Furthermore, the purulent material can compress nerve endings, causing local paresthesia.19,20 This phase, which often follows relapse of the oncologic disease and/or antiblastic treatment associated with corticosteroid therapy, is defined as the "complicated phase."

Figure 1. Osteolytic lesion in the lower jaw of a patient treated with Zometa® for 12 months (59-year-old female, breast cancer).
Figure 2. Worsening of the lesion after sequestrectomy in a patient treated with Zometa® for 12 months (59-year-old female, breast cancer).

This phase is characterized by purulent inflammation, by the presence of fistulae (Figure 3), by spontaneous fractures, by a compromised general physical condition with fever, and by reactive adenopathy. In this stage, conservative treatment associated with prolonged antibiotic therapy can be useless because of the severity and extension of the process, and radical surgery is more strongly indicated.

Figure 3. Cutaneous fistula in a patient treated with Aredia® and Zometa® for 24 months (48-year-old female, breast cancer).

The diagnosis of BRON-J is quite clear if one refers to the anamnesis, to the natural history of the oncologic pathology, and/or to the bisphosphonate administration. The clinical lesion is confirmed by conventional X-rays showing a radiopaque sequestrum, usually surrounded by diffuse radiolucency with a blurred contour due to the higher mineralization of the jaw. This aspect, due to the fixation of calcium in the bone tissue, is responsible for the patchy, ragged, multilocular appearance of the involved area, and it becomes better defined when a radiolucent osteolytic process with a central radiopaque mass of necrotic bone is identified at its periphery. Computed tomography (CT) allows better definition of the necrotic foci and of their relationships with the surrounding anatomical structures, making it possible to quantify the degree of bone sclerosis. However, CT is not useful either in the staging of asymptomatic patients or in the differential diagnosis between a primary tumor (with an osteolytic aspect and ill-defined borders) and metastatic spread of prostatic or breast cancer with a sclerotic aspect (Figure 4). With CT it can be easier to detect mandibular myeloid lesions in highly vascularized areas, with their moulded, regular, and well-defined characteristics. In these cases the use of a contrast medium can help to better identify the lesions.

Figure 4. Computed tomographic scan of the bone sequestrum in a patient treated with Zometa® for 5 months (75-year-old female, breast cancer).

Once the sequestrum and the periosteal reactive bone deposition have been identified by CT, magnetic resonance imaging (MRI) is useful to evaluate the quality of the overlying soft tissues and the medullary edema, which is a sign of ischemia and bone necrosis. Scintigraphy (Tc99 scan) is the most sensitive diagnostic tool for identifying maxillary edema with vascular alterations and for localizing bone necrosis even at early stages of the disease.
Nevertheless, this diagnostic technique has a limitation: the Tc99 scan cannot provide a differential diagnosis with respect to the metastatic process.21,22 Biopsy of the bone lesions must be carefully evaluated, because the procedure itself may damage bone tissue by creating a wound that heals poorly.23 Nitrogen-containing bisphosphonates are used widely for the management of metastatic cancer in bone (intravenous zoledronic acid or pamidronate), for the prevention and treatment of osteoporosis (oral alendronate, risedronate, and ibandronate), for the treatment of Paget's disease of bone (intravenous pamidronate and oral alendronate and risedronate), and for the short-term management of acute hypercalcemia (intravenous zoledronic acid and pamidronate).24,25 The nitrogen moiety attached to the side chain of the middle carbon of the phosphorus-carbon-phosphorus bisphosphonate backbone renders these drugs much more potent as inhibitors of bone resorption than the bisphosphonates that do not contain nitrogen (etidronate and clodronate). Bisphosphonates reduce the survival and function of osteoclasts, the bone-resorbing cells. These antiresorptive actions largely account for the drugs' efficacy in conditions in which the rate of bone resorption exceeds the rate of bone formation. Until recently, the only adverse events of substantial consequence associated with the nitrogen-containing bisphosphonates were upper gastrointestinal intolerance (with oral administration) and a short-lived acute phase reaction characterized by fever, myalgias, and an influenza-like syndrome (with intravenous administration). Now another potential complication of these agents, osteonecrosis of the jaw, has surfaced.26,27

The Florence experience

Based on these premises, the aim of the present study was to describe the clinical and anatomopathological aspects of the disease, based on our experience in the management of BRON-J. From February 2004 to September 2006, 19 patients (14 females and 5 males) with BRON-J undergoing intravenous bisphosphonate treatment for cancer were examined at the Oral Surgery Department of the Florence University Hospital.28,29 The mean age was 66.4 ± 11.7 years. In 14 patients the bisphosphonate used was zoledronate, in one patient pamidronate, and in four patients both drugs were administered. The mean duration of administration was 12 months (minimum 5 months, maximum 36 months). In nine patients (47.4%) the oncologic disease was breast cancer, in six patients (31.5%) myeloma, in three patients (15.8%) prostatic cancer, and in one patient (5%) colon cancer. All the patients were selected following strict diagnostic criteria. The most frequent symptoms were spontaneous pain, swelling, odontogenic abscesses, oral fistulas, bone exposure due to mucosal ulcer, post-extraction alveolitis, and local lymphadenopathy. The trigger factors were considered to be tooth extraction in 10 patients (52.6%), local trauma (inadequate removable complete denture, edentulous ridges) in two (10.5%), root canal treatment in two (10.5%), and surgery in three (15.7%). In some cases it was not possible to identify a trigger factor. In 10 patients (52.6%) a pre-existing inflammatory lesion appeared to worsen the course of the disease. The patients were treated with mouth rinses (chlorhexidine gluconate 0.12% three times daily); local or systemic antibiotic therapy (amoxicillin 1 g three times daily; repeated local application of metronidazole); and, in cases of mycotic superinfection, fluconazole 200 mg daily.
Furthermore, patients underwent hyperbaric oxygen treatment. The treatment of this lesion is extremely difficult and prolonged. There are no data to support any one therapeutic choice: surgery often worsens the pathology. Surgical curettage to achieve mechanical debridement is indicated in patients with no complications. More invasive surgical treatment (such as deeper curettage, sequestrectomies, large resections, and vascularized bone grafts) is indicated after clinical changes characterized by clinical symptoms (pain, fever), oral or extra-oral fistulae, necrotic tissue, pathologic fractures, and ineffective antibiotic treatment. In our study, 13 patients out of 19 were treated with curettage and two with major surgery (segmental mandibulectomy). Four patients were not operable. After one year of follow-up, complete healing was not observed in most of the cases, although the therapeutic protocol was strictly applied. All cases of maxillary location (two out of 19) reached complete healing thanks to secondary wound closure two months after surgery. By contrast, we observed only symptomatic improvement in cases of mandibular location, probably because of the reduced regenerative capacity at this site. In all patients, bisphosphonate treatment was suspended. The interruption of bisphosphonate therapy is one of the most difficult decisions and should be taken in agreement with the oncologist. According to Migliorati and colleagues,31 the suspension of bisphosphonate treatment is mandatory, even though there is no immediate clinical improvement.

Results and discussion

The treatment of these lesions is extremely difficult and prolonged. There are no data to prefer any therapeutic choice over another, even though surgery appears to worsen the disease's course. Surgical curettage to achieve mechanical debridement is indicated in patients without complications. Chemical debridement is carried out with antiseptic irrigations and with iodine gauze. Prevention of re-infection is improved by the use of local ointments and daily 0.12% chlorhexidine rinses. Surgical procedures to achieve mechanical debridement of necrotic tissue, broad-spectrum antibiotic treatment for a long period, and local antibiotic use are of benefit before progression to bone exposure and small bone sequestra. More invasive surgical treatment (such as deeper curettage, sequestrectomies, large resections, and vascularized bone grafts) is indicated in the occurrence of systemic clinical symptoms (pain, fever), oral or extra-oral fistulas, necrotic tissue, pathologic fractures, and lack of response to antibiotic treatment. Curettage of necrotic tissue, sequestrectomy, sliding flap procedures (in two cases with oro-antral communication), and pedicled vascularized bone grafts (in a case of fracture) were the surgical treatments used to stop the progression of the osteonecrotic lesions (Figure 5).

Figure 5. Spontaneous fracture of the lower jaw in a patient treated with Aredia® and Zometa® for 24 months (43-year-old female, breast cancer).

Histological examination showed no metastatic foci either in the lesion core or in the neighbouring bone tissue. Macroscopically healthy bone samples showed cortical necrosis with well-preserved lamellar bone. Furthermore, empty osteocytic lacunae were detected, and the medullary bone tissue appeared necrotic. All cases of maxillary location reached complete healing.
In the majority of the cases of extra-maxillary location (14 patients; 73.6%), complete healing was not achieved, although the therapeutic protocols were strictly applied.28,29 We observed only symptomatic improvement when the location was in the lower jaw (five patients; 26.3%), probably because of the reduced regenerative capacity at this site. Following the American Association of Oral and Maxillofacial Surgeons' staging and treatment criteria,12 two different clinical courses were identified: an early clinical course, where a small bone sequestrum was identified (Figure 6); and a late clinical course, where large necrotic areas worsened by suppurative inflammation were detected (Figure 7).

Figure 6. Early clinical picture of the lower jaw in a patient treated with Aredia® and Zometa® for 24 months (48-year-old female, breast cancer).
Figure 7. Advanced lesion of the upper jaw in a patient treated with Zometa® for 12 months (69-year-old male, multiple myeloma).

The present data showed a higher incidence of BRON-J in patients treated with intravenous zoledronate and pamidronate. Clinical pictures varied from more limited osteonecrotic areas, with or without suppurative inflammation, to larger osteonecrotic areas with suppurative inflammation, jaw fractures, and extra-oral fistulae (Figure 8).

Figure 8. Abscessual complication of a necrotic bone lesion in a patient treated with Zometa® for 11 months (61-year-old male, prostatic cancer).

In nine patients (47.3%) we noticed severe odontalgia following the extraction of teeth located in the jaw area involved by BRON-J. Before tooth extraction, the pain was attributed to periodontitis involving both the affected teeth and the jaw area close to them. The role of bisphosphonates in the onset of the lesion was supported by the time elapsed between the start of drug therapy and the development of the lesions (about 18 months for zoledronate and about six years for pamidronate),30 with reports of lesions beginning as early as five months after the start of treatment.31,32 All the patients in our study had undergone drug treatment for longer than six months. The length of bisphosphonate treatment is a risk factor for BRON-J, along with chemotherapy, multiple myeloma, renal failure, corticosteroid treatment, anemia, hypoproteinemia, infections, and so on.30,34 Six of the 14 patients undergoing chemotherapy and radiotherapy presented larger areas of tissue necrosis refractory to the applied therapeutic protocols. In our study, the preferential location of the osteonecrotic lesions was the lower jaw: in 14 patients (73.6%) the location was the mandible, in two patients (10.6%) both the mandible and the maxilla, and in three patients (15.8%) the maxilla only. The mandibular location seems to be explained by terminal vascularization, the lower quantity of trabecular bone in the lower jaw, and more frequent microinjuries due to removable dentures and masticatory forces.3,20 In order to categorize patients with BRON-J, the American Association of Oral and Maxillofacial Surgeons recognized three stages of the disease.12 In stage 1, the bone is exposed but there is no soft tissue inflammatory swelling; sometimes there is pain before bone exposure. In stage 2, bone is exposed with associated pain and soft tissue infection. In stage 3, the patient is affected by pathologic fractures and oral and extra-oral fistulae.
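The three-stage scheme just described, together with the stage-driven management reported in the next paragraph (conservative protocols for Stages 1 and 2, invasive surgery for Stage 3), can be summarized schematically. A minimal sketch following the narrative of this review; it is not the verbatim AAOMS protocol and not a clinical tool:

    # Schematic summary of the AAOMS BRON-J stages as described in this
    # review, with the stage-driven management the authors report.
    # Illustrative only; not the verbatim AAOMS protocol.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class BronjStage:
        stage: int
        findings: str
        management: str

    STAGES = {
        1: BronjStage(1, "exposed bone, no soft tissue inflammatory swelling",
                      "pharmacologic and conservative protocol"),
        2: BronjStage(2, "exposed bone with pain and soft tissue infection",
                      "pharmacologic and conservative protocol"),
        3: BronjStage(3, "pathologic fractures, oral/extra-oral fistulae",
                      "more invasive surgical treatment"),
    }

    def management_for(stage: int) -> str:
        """Return the management approach reported for a given stage."""
        try:
            return STAGES[stage].management
        except KeyError:
            raise ValueError(f"unknown BRON-J stage: {stage}")

    print(management_for(3))  # -> more invasive surgical treatment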
According to previous publications16,35,36 and the American Association of Oral and Maxillofacial Surgeons' guidelines, we treated patients in Stage 1 (five patients; 26.3%) and Stage 2 (nine patients; 47.4%), with small sequestra, using pharmacologic and conservative protocols. In Stage 3 (five patients; 26.3%), when the large suppurative necrotic area did not heal, conservative treatment led to poor results, so more invasive surgical treatment was indicated (Table 1).37

Table 1. Treatment of patients with osteonecrosis of the jaw.

From a therapeutic point of view, the clinician should pay attention to the perimaxillary soft tissues and to their vascularization, since a highly vascularized periosteum and mucosa offer the only possibility of covering the necrotic area after removal of the sequestrum. Hyperbaric oxygen therapy is useful to prepare the patient for surgical treatment, and platelet-rich plasma to improve soft tissue attachment (Figure 9).38–42

Figure 9. Use of platelet-rich plasma in the surgical treatment of the lesion in a patient treated with Zometa® for 12 months (59-year-old female, breast cancer).

In conclusion, BRON-J shows a complex clinical picture of unclear pathogenesis, even though it seems clearly related to intravenous bisphosphonate administration. Numerous retrospective studies have confirmed that pharmacologic and surgical therapies are not able to cure this complication, whose consequences are extremely disabling for the patient. For this reason, several scientific societies have underlined the importance of risk staging for preventing the development of the disease in oncologic patients treated with intravenous bisphosphonates as an adjuvant intervention.4,12,23,24 In the most severe cases, the treatment should guarantee pain relief, control of the infection, and prevention of both the spread of the necrotic area and the development of new contiguous lesions. When pharmacologic treatments with antibiotics and local antiseptics are not able to control the development of BRON-J complications, clinicians should perform radical surgical treatment, such as resection of the involved bone followed by reconstructive surgery with vascularized bone grafts.16 Today, prevention is mandatory in patients who have to be treated with bisphosphonates43 and in those who have been under treatment for a long period of time. A multidisciplinary team composed of oncologists, pathologists, bone metabolism specialists, dentists, oral surgeons, and maxillofacial surgeons must cooperate to carefully evaluate each patient's clinical condition; general and local risk factors and radiological and biohumoral examinations are useful in the prevention and staging of the disease. The authors report no conflicts of interest in this work.

1. Marx R. Pamidronate (Aredia) and zoledronate (Zometa) induced avascular necrosis of the jaws: a growing epidemic. J Oral Maxillofac Surg. 2003;61:1115–1117. [PubMed] 2. Ruggiero SL, Mehrota B. Osteonecrosis of the jaws associated with the use of bisphosphonates: a review of 63 cases. J Oral Maxillofac Surg. 2004;62:527–534. [PubMed] 3. Marx RE, Sawatari Y, Fortin M, Broumand V. Bisphosphonate-induced exposed bone (osteonecrosis/osteopetrosis) of the jaws: risk factors, recognition, prevention and treatment. J Oral Maxillofac Surg. 2005;63:1567–1575. [PubMed] 4. Migliorati CA, Casiglia J, Epstein J, Jacobsen PL, Siegel MA, Woo SB.
Managing the care of patients with bisphosphonate-associated osteonecrosis: an American Academy of Oral Medicine position paper. J Am Dent Assoc. 2005;136:1658–1668. Erratum in: J Am Dent Assoc. 2006;137:26. [PubMed] 5. Badros A, Weichel D. Osteonecrosis of the jaw in multiple myeloma patients: clinical features and risk factors. J Clin Oncol. 2006;24:945–952. [PubMed] 6. Dimopoulos MA, Kastritis E. Osteonecrosis of the jaw in patients with multiple myeloma treated with bisphosphonates: evidence of increased risk after treatment with zoledronic acid. Haematologica. 2006;91:968–971. [PubMed] 7. Tosi P, Zamagni E. Osteonecrosis of the jaw in newly diagnosed multiple myeloma patients treated with zoledronic acid and thalidomide-dexamethasone. Blood. 2006;108:3951–3952. [PubMed] 8. Major PP, Cook RJ. Multiple event analysis of zoledronic acid trials in patients with cancer metastases to bone. Proc Am Soc Clin Oncol. 2003;22:762. 9. Rosen LS, Gordon DH. Zoledronic acid is superior to pamidronate for the treatment of bone metastases in breast carcinoma patients with at least one osteolytic lesion. Cancer. 2004;100:36–43. [PubMed] 10. Berenson JR, Hillner BE, Kyle RA, et al. American Society of Clinical Oncology Bisphosphonates Expert Panel. American Society of Clinical Oncology clinical practice guidelines: the role of bisphosphonates in multiple myeloma. J Clin Oncol. 2002;20:3719–3736. [PubMed] 11. Body JJ. Breast cancer: bisphosphonate therapy for metastatic bone disease. Clin Cancer Res. 2006;12(20 Suppl):6258–6263. 12. American Association of Oral and Maxillofacial Surgeons. Position paper on bisphosphonate-related osteonecrosis of the jaws, approved by the Board of Trustees September 25, 2006 [cited 2008 Dec 6]. Available from: http://www.aaoms.org/docs/position_papers/osteonecrosis.pdf. 13. Durie BGM, Katz M, editors. Osteonecrosis of the jaws and bisphosphonates. N Engl J Med. 2005;353:99–102; discussion 99–102. [PubMed] 14. Hillner BE, Ingle JN. ASCO 2003 update on the role of bisphosphonates and bone health issues in women with breast cancer. J Clin Oncol. 2003;21:40–42. 15. Bilezikian JP. Osteonecrosis of the jaw. Do bisphosphonates pose a risk? N Engl J Med. 2006;355:2278–2281. [PubMed] 16. Ruggiero SL, Fantasia J, Carlson E. Bisphosphonate-related osteonecrosis of the jaw: background and guidelines for diagnosis, staging and management. Oral Surg Oral Med Oral Path Oral Rad Endod. 2006;102:433–441. 17. Conte PF, Guarnieri V. Safety of intravenous and oral bisphosphonates and compliance with dosing regimens. Oncologist. 2004;9(Suppl 4):28–37. [PubMed] 18. Peterson LJ, Ellis E. Contemporary oral and maxillofacial surgery. New York: Mosby; 1998. pp. 418–432. 19. Viale PH, Lin A. Exposed bone in oral cavities. J Clin Oncol Nurs. 2005;9:355–357. 20. Ficarra G, Beninati F, Rubino I, et al. Osteonecrosis of the jaws in periodontal patients with a history of bisphosphonates treatment. J Clin Periodontol. 2005;32:1125–1128. 21. Chiandussi S, Biasiotto M, Dore F, Cavalli F, Cova MA, Di Leonarda R. Clinical and diagnostic imaging of bisphosphonate-associated osteonecrosis of the jaws. Dentomaxillofac Radiol. 2006;35:236–243. [PubMed] 22. Hermans R, Fossion E, Ioannides C, Van de Bogaert W, Ghekiere J, Baert AL. CT findings in osteoradionecrosis of the mandible. Skeletal Radiol. 1996;25:31–36. [PubMed] 23. Woo SB, Hellstein JW, Kalmar JR. Narrative [corrected] review: bisphosphonates and osteonecrosis of the jaws. Ann Intern Med. 2006;144:753–761. [PubMed] 24. Resza AA, Rodan GA.
Nitrogen-containing bisphosphonates mechanism of action. Mini Rev Med Chem. 2004;4:711–719. [PubMed] 25. Green JR. Bisphosphonates: preclinical review. Oncologist. 2004;9(Suppl 4):3–13. [PubMed] 26. Nase JB, Suzuki JB. Osteonecrosis of the jaw and oral bisphosphonate treatment. J Am Dent Assoc. 2006;137:1115–1119. [PubMed] 27. Vescovi P, Merigo E, Meleti M, Manfredi M. Bisphosphonate-associated osteonecrosis (BON) of the jaws: a possible treatment? J Oral Maxillofac Surg. 2006;64:1460–1462. [PubMed] 28. Tonelli P, Duvina M, Brancato L, Viviani C. Osteonecrosis of the jaw: a dramatic complication in patients with history of bisphosphonates treatment and bone disease. Study of 19 cases. Monaco: Poster Session in International Symposium, Osteology; May 10–12, 2007. 29. Borgioli A, Tonelli P, Brandi ML, Giombetti A, Duvina M, Spinelli G, Brancato L. Osteonecrosi dei mascellari da bifosfonati. L'esperienza fiorentina: aspetti clinici e terapeutici. Alessandria: Abstract Presentation in Workshop: BRONJ, present and future; gennaio 20, 2007. 30. Durie BG. Osteonecrosis of the jaw and bisphosphonates. N Engl J Med. 2005;353:99–102. [PubMed] 31. Migliorati CA, Schubert MM, Peterson DE, Seneda LM. Bisphosphonate-associated osteonecrosis of mandibular bone. Cancer. 2005;104(1):83–93. [PubMed] 32. Jimenez-Soriano Y, Bagan JV. Bisphosphonates as a new cause of drug-induced jaw osteonecrosis: an update. Med Oral Patol Oral Cir Bucal. 2005;10(Suppl 2):88–91. [PubMed] 33. Robinson NA. Bisphosphonates: a word of caution. Ann Acad Med Singapore. 2004;33:48–49. [PubMed] 34. Novartis Pharmaceuticals Corporation. Appendix 11: Expert panel recommendation for the prevention, diagnosis and treatment of osteonecrosis of the jaw. Oncologic Drugs Advisory Committee (ODAC), Meeting March 4, 2005 [cited 2008 Dec 6]. Available from: http://www.fda.gov/OHRMS/DOCKETS/AC/05/briefing/2005-4095B2_02_12-Novartis-Zometa-App-11.pdf. 35. Mehrotra B, Ruggiero S. Bisphosphonate complications including osteonecrosis of the jaw. Hematology Am Soc Hematol Educ Program. 2006;3:356–360. 36. Ruggiero SL, Gralow J, Marx RE, Hoff AO, Schubert MM, Huryn JM. Practical guidelines for the prevention, diagnosis and treatment of osteonecrosis of the jaw in patients with cancer. J Clin Oncol Prac. 2006;2:7–14. 37. Zavras AL, Zhu S. Bisphosphonates are associated with increased risk for jaw surgery in medical claims data: is it osteonecrosis? J Oral Maxillofac Surg. 2006;64:917–923. [PubMed] 38. Martins M, Saraceni G, Koga DH, Feber O, Oliveira dos Santos M, Zardetto C. Treatment of avascular osteonecrosis of the mandible in cancer patients with a history of bisphosphonate therapy by combining bone resection and autologous platelet-rich plasma: report of 3 cases. J Oral Maxillofac Surg. 2007;65:349–355. [PubMed] 39. Shimura K, Shimazaki C, Taniguchi K, et al. Hyperbaric oxygen in addition to antibiotic therapy is effective for bisphosphonate-induced osteonecrosis of the jaw in a patient with multiple myeloma. Int J Hematol. 2006;84:343–345. [PubMed] 40. Tonelli P, Brancato L, Paggetti B, Duvina M, Borgioli A. La terapia iperbarica nel trattamento dell'osteomielite dei mascellari. Rome: Sessione Poster, Collegio dei Docenti in Odontoiatria; 2004. 41. Giombetti A, Borgioli A, Brancato L, Spinelli G. L'osteomielite farmacologica dei mascellari. Montecatini Terme: Poster Presentation, Congresso Nazionale della Società Italiana di Chirurgia Orale (SICO): la pianificazione del trattamento in Chirurgia Orale; Ottobre 7–8, 2005. 42.
Borgioli A, Duvina M, Brancato L, Duvina G, Tonelli P. Bad and good bisphosphonates in implantology: clinical report. Rome: Sessione Poster, Collegio dei Docenti in Odontoiatria; 2007. 43. Mavrokokki T, Cheng A, Stein B, Goss A. Nature and frequency of bisphosphonate-associated osteonecrosis of the jaws in Australia. J Oral Maxillofac Surg. 2007;65:415–423. [PubMed]

Lasers Med Sci. 2009 Nov;24(6):849-56. Epub 2009 Mar 11. Osteonecrosis of the jaws caused by bisphosphonates: evaluation of a new therapeutic approach using the Er:YAG laser. Angiero F, Sannino C, Borloni R, Crippa R, Benedicenti S, Romanos GE. Pathological Anatomy, University of Milan-Bicocca, Ospedale S Gerardo Monza, Milan, Italy. A series of 49 patients diagnosed with osteonecrosis, all treated with latest-generation bisphosphonates, was reviewed retrospectively to evaluate the use of the erbium-doped yttrium aluminum garnet (Er:YAG) laser in terms of clinical outcome, and to examine current trends from the clinical-therapeutic standpoint. Pathology reports on specimens submitted over the previous 7 years from either the mandible or the maxilla were reviewed; 49 patients were identified as having osteonecrosis of the jaws. For each of these cases, the medical history and profile were evaluated; 19 were treated with conservative therapy, 20 with radical surgery, and 10 with the Er:YAG laser (2,940 nm). Of the 20 patients treated surgically (bone baquette, curettage, sequestrectomy of the necrotic bone), some required re-treatment, which resulted in bone fracturing. None of these patients were treated successfully. The 19 cases treated conservatively showed an improvement in symptoms, but not remission of the lesions. Of the ten patients treated with the Er:YAG laser, six achieved total remission of signs and symptoms, four an improvement, and re-treatment was required in one case. Our present approach is to recommend intensive prophylactic care before the administration of bisphosphonates, and great caution is advised even in simple maneuvers like curettage, because these may exacerbate the avascular process. The use of the Er:YAG laser appears to be promising (within the limits of our experience). It can be concluded that at 1 year after laser surgery, the treatment had led to significant improvements in clinical parameters and may represent a valid alternative, although studies on a larger scale are needed.

Lasers Surg Med. 2009 Jan;41(1):26-30. A preliminary report about treatment of bisphosphonate-related osteonecrosis of the jaw with Er:YAG laser ablation. Stübinger S, Dissmann JP, Pinho NC, Saldamli B, Seitz O, Sader R. Hightech Research Center of Cranio-Maxillofacial Surgery, University of Basel, Basel, Switzerland. BACKGROUND AND OBJECTIVES: This preliminary report describes a new laser-assisted treatment option for the emerging complication of bisphosphonate-related osteonecrosis (BON) of the jaw. MATERIALS AND METHODS: In eight tumour patients (three women, five men), ten bony lesions of the maxilla and mandible arising in the course of intravenous bisphosphonate therapy were treated with a variable square pulsed (VSP) Er:YAG laser. For the treatment, the Er:YAG laser was applied with a pulse energy of 1,000 mJ, a pulse duration of 300 microseconds, and a frequency of 12 Hz (energy density 157 J/cm²). The spot size was 0.9 mm, and the handpiece was kept at a distance of about 10 mm from the bone surface. The diseased bone was ablated exclusively with the Er:YAG laser by sweeping the bone surface in a well-directed scanning mode.
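The energy density quoted above follows directly from pulse energy and spot area (fluence = pulse energy / spot area). A quick cross-check in Python, assuming the 0.9 mm spot size refers to the beam diameter:

    # Cross-check of the quoted Er:YAG energy density from pulse energy
    # and spot size, assuming the 0.9 mm figure is the beam diameter.
    import math

    pulse_energy_j = 1.0     # 1,000 mJ per pulse
    spot_diameter_cm = 0.09  # 0.9 mm

    spot_area_cm2 = math.pi * (spot_diameter_cm / 2) ** 2
    fluence_j_cm2 = pulse_energy_j / spot_area_cm2
    print(f"{fluence_j_cm2:.0f} J/cm^2")  # prints 157, matching the abstract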
RESULTS: The surgical procedure and postoperative wound healing proceeded without any complications, and complete soft tissue recovery was achieved within 4 weeks. During follow-up examinations over 12 months, soft tissue conditions remained stable. The pulsed laser ablation produced a characteristic microstructured and craggy bone surface, without condensation or a smear layer at the laser rims. CONCLUSION: The bone ablation technique using a VSP Er:YAG laser yielded promising clinical results without impairment of wound healing. Further analysis of the chemical, physical, and pharmacological aspects of laser-assisted treatment of BON lesions is necessary to establish a safe and reliable treatment protocol for bisphosphonate-related osteonecrosis of the jaw.

Rev Belge Med Dent. 2009;64(2):87-95. Surgical treatment of maxillary osteonecrosis due to bisphosphonates using an Er:YAG (2940 nm) laser. Discussion of 17 clinical cases. [Article in French] Vescovi P, Merigo E, Manfredi M, Meleti M, Fornaini C, Bonanini M, Rocca EP, De Moor RJ, Nammour S. Department of ENT/Dental/Ophthalmological and Cervico-Facial Sciences, Università degli Studi di Parma, Via Gramsci 14, 43100 Parma, Italy. Reports of cases of ONJ have increased significantly over the last five years as an iatrogenic complication of bisphosphonate therapy (BPT). The aim of this work is to present the advantages of surgery using the Er:YAG laser for the treatment of ONJ. The Er:YAG laser can gradually reach healthy bone without causing any thermal damage to the tissues. The device is very versatile and allows a choice among different surgical techniques depending on the case (e.g., vaporization or ostectomy). Moreover, different studies have demonstrated both bactericidal and biomodulating effects on bone and surrounding tissues, with biostimulation of the microcirculation and neoangiogenesis. Seventeen sites of ONJ, classified according to the staging system developed by Ruggiero and observed in 12 patients with multiple myeloma (9 patients), bone metastases (2 patients) and osteoporosis (1 patient), were treated with an Er:YAG laser (Fidelis Plus, Fotona, Slovenia). The laser was used in a non-contact or near-contact mode (VSP, 300 mJ, 30 Hz, fluence 60 J/cm²) on 17 sites (4 Stage I and 13 Stage II) with 3 different types of surgery: sequestrectomy + debridement, sequestrectomy + corticotomy, and vaporization. Over an average follow-up of 9 months (SD ± 6 months), complete healing of ONJ (Stage 0) was obtained for 13 sites (76.5%) and resolution of symptoms (Stage 1) for 3 sites (17.5%). For one site at Stage II (6%), recovery was obtained but the result was not maintained beyond 3 months. The positive results were independent of the anatomical area (mandible or maxilla), the primary disease (osteoporosis, multiple myeloma or metastasis), and discontinuation of BPT before surgery. In our experience, the Er:YAG laser (2,940 nm) represents a valid therapeutic option for BP-related ONJ, especially in the early stages of the disease.
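Series as small as this one carry wide statistical uncertainty, which is why the authors present the Er:YAG results as an option rather than a standard. A Wilson score interval for the complete-healing proportion (13 of 17 sites) makes that concrete; the interval is an added illustration, not a figure from the paper:

    # 95% Wilson score interval for the complete-healing proportion
    # reported above (13 of 17 sites). The interval is an added
    # illustration of small-sample uncertainty, not a figure from the paper.
    from scipy.stats import binomtest

    healed, total = 13, 17
    ci = binomtest(healed, total).proportion_ci(confidence_level=0.95,
                                                method="wilson")
    print(f"{healed}/{total} = {healed/total:.1%}, "
          f"95% CI {ci.low:.1%} to {ci.high:.1%}")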
Photomed Laser Surg. 2008 Feb;26(1):37-46. Nd:YAG laser biostimulation in the treatment of bisphosphonate-associated osteonecrosis of the jaw: clinical experience in 28 cases. Vescovi P, Merigo E, Manfredi M, Meleti M, Fornaini C, Bonanini M, Rocca JP, Nammour S. Unit of Oral Pathology and Medicine, Section of Dentistry, Department of ENT/Dental/Ophthalmological and Cervico-Facial Sciences, University of Parma, Parma, Italy. OBJECTIVE: To investigate an effective treatment for the management of bisphosphonate-associated osteonecrosis. BACKGROUND DATA: Necrosis of the jawbone has recently been described in association with systemic bisphosphonate therapy with drugs including zoledronic acid, pamidronate, and alendronate. The extent and clinical characteristics of bisphosphonate-associated osteonecrosis (BON) of the jaw are extremely variable, and range from the presence of fistulae in the oral mucosa or orofacial tissues to large exposed areas of necrotic bone within the oral cavity. Clinical signs and symptoms commonly reported include pain, swelling, the presence of pus, loose teeth, ill-fitting dentures, and paresthesias of the inferior alveolar nerve when the necrosis affects the mandible. Fractures have also been reported. The treatment of BON of the jaw is still controversial, since the literature shows that no therapy has yet proven efficacious. MATERIALS AND METHODS: In this study we report the results achieved in 28 patients affected by BON of the jaw who received treatment with the Nd:YAG laser, alone or in combination with conventional medical or surgical treatment. Clinical variables such as severity of symptoms, presence of pus, and closure of mucosal flaps before and after therapy were evaluated to establish the effectiveness of laser irradiation. The 28 patients with BON were subdivided into four groups: eight patients were treated with medical therapy only (antibiotics with or without antimycotics and/or antiseptic rinses), six patients were treated with medical and surgical therapy (necrotic bone removal and bone curettage), six patients were treated with medical therapy associated with laser biostimulation, and eight patients were treated with medical therapy associated with both surgical therapy and laser biostimulation. RESULTS: Of the 14 patients who underwent laser biostimulation, nine reported complete clinical success (no pain, symptoms of infection, exposed bone, or draining fistulas), and three improved in symptomatology only, with a follow-up of between 4 and 7 months. CONCLUSIONS: While the results reported in this study are not conclusive, they indicate that laser therapy has the potential to improve the management of BON.

Acta Biomed. 2006 Aug;77(2):109-17. Bone necrosis of the jaws associated with bisphosphonate treatment: a report of twenty-nine cases. Merigo E, Manfredi M, Meleti M, Guidotti R, Ripasarti A, Zanzucchi E, D'Aleo P, Corradi D, Corcione L, Sesenna E, Ferrari S, Poli T, Bonanini M, Vescovi P. Unit of Oral Pathology and Medicine, Section of Odontostomatology, Department of ENT/Dental/Ophthalmological and Cervico-Facial Sciences, University of Parma, Parma, Italy. Bone necrosis of the jaws is often related to head and neck radiotherapy and to surgical procedures at the maxillary or mandibular level, but also to various local and systemic factors such as haematological diseases, haemoglobinopathies, and systemic lupus erythematosus; its pathogenesis may be associated with defects of vascularization.
Bisphosphonates are synthetic analogues of pyrophosphate used for the treatment of hypercalcemia in patients with malignancies and bone metastases, and for the treatment of many other disorders such as metabolic bone diseases, Paget's disease, and osteoporosis; their pharmacological activity is related to the inhibition of osteoclastic function, which leads to reduced bone resorption, and to a reduction of bone vascularization. Since the end of 2003, bisphosphonate-associated osteonecrosis (BON) has become an increasing problem, as witnessed by the growing number of published case reports and case series. Here we report 29 cases of bone necrosis of the jaws in patients treated with pamidronate (Aredia), zoledronate (Zometa) and alendronate: 15 followed surgical procedures and 14 occurred spontaneously. Among these patients (21 females, 8 males; ages ranging from 45 to 83 years), 14 were treated for bone metastases, 12 for multiple myeloma, and 3 for osteoporosis. Bone necrosis involved only the maxilla in 7 patients, only the mandible in 20 patients, and both in 2 patients. Six patients had multiple osteonecrotic lesions, 3 with simultaneous and 3 with non-simultaneous lesions. In these patients we performed 3 kinds of therapy, alone or in combination: medical therapy (antibiotic drugs, antimycotics, and antiseptic mouthwashes), surgical therapy with curettage or sequestrectomy, and Nd:YAG laser biostimulation.