text
stringlengths
187
652k
id
stringlengths
47
47
dump
stringclasses
9 values
url
stringlengths
14
4.47k
file_path
stringlengths
125
141
language
stringclasses
1 value
language_score
float64
0.65
1
token_count
int64
46
155k
score
float64
2.52
5.31
int_score
int64
3
5
Bailes, Amy F. PT, MS, PCS; Reder, Rebecca OTR/L, OTD; Burch, Carol PT, DPT, MEd In this large urban Midwest pediatric teaching hospital Occupational Therapy (OT) and Physical Therapy (PT) services are delivered to children as inpatients and outpatients. Therapists in this hospital serve children of all ages and with a variety of diagnoses. Yet, there is little evidence to guide our decision making with regard to therapy frequency. It has been our experience that parents, therapists, and physicians may possess the belief that a child should receive therapy services for as long as the family wants them. It also has been our experience that children with chronic disabilities continue to see a therapist with varying frequencies that have not been well documented or related to effectiveness. The frequency of therapy and when to discontinue therapy may not be agreed on by different team members. In addition, at this pediatric hospital parents and physicians may complain when OT or PT services are discontinued or decreased. The purpose of this article is to describe the guidelines we developed to assist physical therapists and occupational therapists when determining frequency of therapy services in a pediatric teaching hospital. A thorough literature review and a review of other professional documents yielded only 6 resources that addressed the topic of service delivery in the pediatric setting for physical therapists and occupational therapists. In 1994, Montgomery1 published factors to consider when making decisions about frequency and duration of pediatric therapy services. These included cognitive ability, motivational level, physical environment, caregiver availability, diagnosis and prognosis, child’s age, and functional goals. These items were listed and it is unknown whether they are of equal importance. Missing also was an objective way to measure the factors and their effect on making frequency of therapy decisions. In 1998, the Oregon Health Sciences University published guidelines2 specific for medically based outpatient PT and OT for children with special health needs. Their guidelines stated that medically based outpatient therapy services will be periodic and episodic to address specific functional problems related to emerging issues of health, growth, development, environment, and family context. They also indicated there will be periods when the child will be in a steady state and not require therapy services. Although a suggested continuum of frequency was provided, Oregon’s guidelines did not provide specific guidelines or recommended frequencies of therapy services for children who are hospital inpatients. The Iowa Department of Education described services for OT and PT services in the schools3 taking into account the students’ potential to benefit from therapeutic intervention, whether children are at a critical period of skill acquisition or regression related to their development or disability, how much of the program requires the expertise of a therapist when deciding amount of service, and the degree to which their motor problems interfere with their educational goals. Iowa’s guidelines described a continuum of direct, integrated, or consultative services for the school environment. Iowa’s modes of service delivery were not specific to a pediatric hospital setting. However, the concepts of a critical period of skill acquisition and how much of the intervention requires expertise of a therapist add important components for consideration in an pediatric medical setting where both inpatients and outpatients are served. Long et al4 stated that PT services in the outpatient environment are usually short term while recovering from illness or injury. For children with chronic disabilities treatment is transitional before community services are used. Yet, Long et al did not discuss whether children with long-term needs should continue to see therapists in an outpatient medical setting and at what frequency. Recently, Effgen5 stated that direct delivery usually occurs in the medical setting, but she did not address guidelines for which frequency of service should be provided. Lastly, the Guide to Physical Therapist Practice6 stated that frequency and duration of therapy will vary greatly and is based on a variety of factors that are listed for each practice pattern. For example, for children in Practice Pattern 5C-Impaired Motor Function and Sensory Integrity Associated with Nonprogressive Disorders of the Central Nervous System—Congenital Origin or Acquired in Infancy or Childhood, cognitive maturation and periods of rapid growth may also influence frequency and duration of therapy services. In a related manner, the Guide to OT Practice7 recognized that an OT intervention plan typically includes the type, amount, frequency, and duration of therapy; yet, neither ranges nor guidelines for these factors were given. Similarly, the OT Practice Framework8 defined OT intervention approaches as create/promote, establish/restore, maintain, modify, and prevent but did not suggest frequency guidelines for delivery of these different approaches. None of the findings in the literature could be applied across the continuum of care common to this pediatric hospital serving both inpatients and outpatients. Hence, a solution was sought to provide recommendations for frequency modes of service delivery in this large urban teaching pediatric hospital. This hospital serves children on inpatient acute care and rehabilitation units, in addition to serving outpatients at 9 locations in and around our metropolitan area. In 2006, a total of 71,313 outpatient visits and 28,636 inpatient visits were logged by 123 OTs and PTs employed as clinical staff. Within the hospital’s Division of OT and PT, the Leadership Team is responsible for coordinating strategies, objectives and priorities for the division, and encouraging teamwork to ensure quality services for patients and families. The Leadership Team consists of the senior clinical director, division director, coordinator of clinical operations, education coordinator, performance improvement coordinator, and an Occupational Therapist II and a Physical Therapist II. The Leadership Team felt there was a need to improve care by developing guidelines for the therapist when deciding the appropriate frequency for service delivery. This group met weekly over several months to cull the literature for published guidelines. Because of the absence of specific guidelines in the literature related to all aspects of a pediatric medical setting, the team established modes of frequency of care specific to the facility’s needs. These guidelines were established for several purposes. The primary purpose was to address our professional duty to plan for discharge throughout the intervention process and to terminate services when appropriate. Second, the guidelines were intended to assist in educating the patient and family regarding changes they may experience in frequency of therapy as the patient’s needs change. Furthermore, the guidelines were intended to decrease unwarranted variation in care for patients with similar diagnoses and therapy needs, and to provide therapists with a tool to use in discussions with patients, families, and physicians about how decisions are made regarding frequency of care. Throughout, it was assumed that transition through various intervention frequencies was appropriate to achieve optimal outcomes. Patient care plans were based on the OT and PT evaluation findings. The frequency guidelines were then developed from factors found in the literature and grouped as follows: first, potential to participate and benefit from the therapy process which takes into account diagnosis, age, prognosis, motivation, and functional goals1; second, presence of a critical period for skill acquisition or potential for regression; third, amount of therapist expertise needed3; and fourth, the level of support present to assist the patient in attaining goals, which also takes into account family context.2 The factors selected for inclusion were not intended to be all inclusive, rather they were deemed most appropriate and helpful for this pediatric hospital setting. The subjective nature of the factors was acknowledged throughout the process. DESCRIPTION OF THE FREQUENCY MODES Four modes of intervention frequency were developed and are currently being applied across the continuum of care from the time the child is an inpatient at this facility through the outpatient course of care. These 4 modes and factors to consider in determining appropriate frequency are listed in Table 1. In all modes the therapist involves the family so that therapy can be carried out in other more natural settings. The Intensive Frequency Mode The intensive frequency mode varies based on the individual needs of each patient and ranges from 3 to 11 visits per week. Use of the intensive frequency mode is considered appropriate for children who have a condition that is changing rapidly, need frequent modification in their plan of care, and require a high frequency of intervention for a limited duration to achieve a new skill or recover function lost due to surgery, illness, or trauma. The intensive frequency mode is appropriate for inpatients as well as some outpatients. For example, some of the inpatient groups that are served by the intensive treatment mode are orthopedic, hematology/oncology, and rehabilitation. Outpatients that are served would be those children recently discharged from the inpatient rehabilitation unit who continue to demonstrate more rapid changes in function. The Weekly or Bimonthly Frequency Mode The weekly or bimonthly frequency mode is for children who demonstrate continuous progress toward established goals; the frequency ranges from 1 to 2 times a week to every other week. These children do not have a condition that is changing rapidly. The children require problem solving and clinical decision making skills of a physical therapist or occupational therapist for regular visits for a limited time. In our setting, most often children served in this frequency mode would be outpatients. However, inpatients that are on the transitional care unit awaiting discharge are also served weekly or with bimonthly sessions. These children are frequently on ventilators and have complex medical needs and discharge training and planning can take long periods of time. The Periodic Frequency Mode The periodic frequency mode includes monthly therapy visits or regularly scheduled intervals between visits. This mode is most often used for outpatients and is appropriate for children who cannot yet participate in or tolerate more frequent therapy sessions. These children might have therapy needs that are reassessed and addressed on a periodic basis as part of comprehensive management in a specialty clinic; or weekly or biweekly therapy is not a high priority due to other family issues or priorities. The therapist typically provides updates to a home program in the periodic frequency mode. The Consultative Frequency Mode The consultative frequency mode is episodic or “as necessary.” The guide to physical therapist practice6 defines consultation as the rendering of professional or expert opinion by a physical therapist and usually does not involve direct intervention. The guide to OT practice7 acknowledges that consultation may be a method of service delivery. Consultative services are often needed when the child improves or regresses, when the child is ready to perform a new task as a result of changes in age, development, environment, or when new assistive technology becomes available. The child who is doing well in the community may receive community services, yet require occasional consultation with a therapist in this medical setting to ensure gains continue or to address emerging concerns. Transition and Termination of Services Transition involves the process of preparing for or facilitating change, such as from one frequency mode of treatment to another. The guidelines presented may make transitions smoother from one frequency mode of service to another. Transitioning services from weekly to periodic to consultative may be appropriate steps as part of a plan of care before discharge or discontinuation. Discharge is the process of ending therapy services that have been provided during a single episode of care when the anticipated goals and outcomes have been achieved. Discontinuation is the process of ending therapy services that did not result in the desired outcome and can occur when the patient/family declines continued intervention, the patient is unable to continue to progress toward goals, or when the physical therapist determines the patient will no longer benefit from therapy.6 Similarly, for OT services discontinuation is recommended when there is lack of objective evidence of progress or it is determined the child is not benefiting from the OT services provided.7 IMPLEMENTATION IN PHYSICAL AND OT Intensive education was provided by the Leadership Team about these guidelines and frequency modes to all occupational and physical therapists at this hospital before implementation. Education included instruction on the guidelines as well as training in how to best communicate this information to parents and referral sources. A family brochure was developed to describe the guidelines and various frequency modes of service. All families currently receive this brochure at the initiation of therapy services. At the same time, the therapist reviews the recommended therapy frequency with the family. The brochure is revisited with the family as necessary when transitioning care from one frequency to another. This discussion prepares families for changes in frequency and intensity of therapy services for their child that are likely to occur. This assists families to better understand the reasons different frequencies of therapy may be appropriate for their child at different times throughout the course of care. These discussions foster the collaborative relationship between the therapist and family, which supports optimal patient outcomes. A 7-year-old boy was admitted to this facility after being struck by a moving vehicle. His injuries included a severe traumatic brain injury, facial fracture, and right tibial fracture. His Glasgow Coma Score on admission was 3 and magnetic resonance imaging showed multiple focal shear injuries and bilateral frontal and temporal lobe contusions. Initially he received OT and PT services daily Monday through Saturday under the intensive frequency mode while in the intensive care unit before being transferred to the inpatient rehabilitation unit. The goal of therapy while in the intensive care unit was to provide positioning to prevent loss of functional mobility and range of motion because of his medical condition. Once stable he was transferred to the inpatient rehabilitation unit where he continued under the intensive frequency mode and received PT and OT each 2 times a day and once on Saturday while he demonstrated more rapid progress. On admission to the inpatient rehabilitation unit he was dependent for mobility, transfers, and self-care. He was beginning to open his eyes and localize to people’s voices in his room. He was on the inpatient rehabilitation unit approximately 3 months. At discharge, he was demonstrating fair sitting ability and required some assistance for stand pivot transfers and activities of daily living. He was able to follow simple commands and used some spontaneous words. He was discharged to home with nursing services 7 days/week and outpatient OT and PT. He continued under the intensive frequency mode receiving OT and PT 3 times a week as an outpatient while he was still demonstrating rapid progress. At approximately 1 year after his injury, he transitioned to the weekly frequency mode where he received both OT and PT once each week. He was ambulating short distances with assistance. He continued to show progress toward goals but not as rapidly, and the goals of therapy were to transition some of the program to the caregiver and to assist with his programming needs in his school. After 3 years of receiving OT and PT services under the weekly frequency mode, the therapists felt that he was no longer making measurable gains, in part due to significant behavior issues that affected his ability to participate in therapy. The family was offered and denied assistance from a neuropsychologist and community groups to assist with the behavioral issues. At the time the therapists thought he was unable to benefit from their services and wanted to decrease his therapy frequency. The team met to discuss the frequency guidelines with the family and the referring physician. Also, the guide to OT practice7 and the guide to physical therapist practice6 were shared that address transitions, discharge, and discontinuation of therapy services. Using the frequency guidelines and the above-mentioned references the team was able to come to an agreement to change modes of frequency. At that time the child transitioned to the periodic frequency mode under which he was seen first monthly and then every 2 months to address equipment needs, home programming, and integration of therapy activities into his daily routine. After 2 years, when his family sought assistance with his behavior and it was thought he could benefit, he was transitioned to the weekly frequency mode of PT to work on a trial of power mobility and weekly OT to work on increasing his independence with self-care. Although this case does not cover all the potential transitions to and from modes of service, it hopes to provide one example of how the guidelines can be used. The guidelines and frequency modes have been shared locally and nationally with peers at hospital sponsored continuing education conferences. Managers of other pediatric hospitals may find these guidelines helpful to physical therapists and occupational therapist in their setting. Complaints regarding change in therapy frequency from parents and physicians may decrease if these guidelines are clearly communicated when services are initiated. This information could be shared with payers to assist them in understanding service delivery in a specific pediatric medical setting. Future work may describe utilization patterns using these guidelines for frequency of care for different practice patterns. Ultimately, it would be helpful to determine which factors are most important in deciding frequency of therapy, how to measure these factors, and which frequency is needed to obtain optimal outcomes in children served by occupational and physical therapists in this pediatric hospital setting. This information would be valuable in understanding how children respond to different frequencies of service so that managers can better plan resource allocation and track resource utilization and assess outcomes in this pediatric teaching hospital. The authors thank Carol R. Scheerer, EdD, OTR/L, of the Department of Occupational Therapy at Xavier University for her editorial assistance in preparing this article. © 2008 Lippincott Williams & Wilkins, Inc.
<urn:uuid:5d836fc9-ed10-4051-bff6-a393b87b086c>
CC-MAIN-2016-50
http://journals.lww.com/pedpt/Fulltext/2008/02020/Development_of_Guidelines_for_Determining.11.aspx
s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698541321.31/warc/CC-MAIN-20161202170901-00342-ip-10-31-129-80.ec2.internal.warc.gz
en
0.96426
3,447
2.515625
3
Deep Space 1 Country USA Mission Asteroid/Mars/Comet Flyby Mission Launch Date October 15, 1998 Launch Vehicle Delta 7326 Spacecraft Mass 474 kg Key Dates Jul 28, 1999 - Asteroid 1992 KD 2001 - Comet Borrelly? End of Mission October 1999 Comments First New Millennium Mission The first New Millennium Program technology-validation mission is also its first deep-space mission, or DS1. During its two-year primary mission, DS1 will test 12 revolutionary technologies destined for future missions. NASA's vision of 21st-century space exploration includes numerous spacecraft to study a diversity of objects in the solar system - the Sun, the planets, and asteroids and comets (the "small bodies") - and beyond. New technologies and capabilities are needed for fast, flexible, cost-efficient access to space. The new technologies on board the DS1 spacecraft will be proven in arduous spaceflight conditions, so that 21st-century missions can use them with confidence. The DS1 spacecraft will also be the first to use an onboard, autonomous system to navigate to celestial bodies. The system will make navigation decisions about spacecraft trajectory and targeting of celestial bodies with little assistance from Earth controllers. To get an idea of the power of combining ion propulsion and autonomous navigation, imagine a car driving itself across the United States from Los Angeles, California, to Washington, D.C., and parking itself in a designated space upon arrival - after having completed the entire trip on one tank of fuel. Ron Baalke, STARDUST Webmaster, firstname.lastname@example.org
<urn:uuid:d2de4107-988d-4778-b785-9b2fe9ccd740>
CC-MAIN-2016-50
http://stardust.jpl.nasa.gov/comets/ds1.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698541321.31/warc/CC-MAIN-20161202170901-00342-ip-10-31-129-80.ec2.internal.warc.gz
en
0.858356
327
3.0625
3
HISTORY OF MANITOWOC COUNTY - Ralph Plumb, 1904I. Descriptive 1 II. The Indians 8 III. Early Settlement 16 IV. Growth and Foreign Immigration 32 V. Means of Communication 42 VI. Marine 55 VII. Railroads 85 VIII. Military 112 IX. Politics 133 X. Village and City Government 167 XI. Churches 183 XII. Societies and Organizations 227 XIII. Education 243 XIV. The Press 255 XV. The Professions 278 XVI. Banks and Banking 281 XVII Business and Industry 288 Errata and additions 316 Appendixes 293(A), 294(B), 300(C), 313(D) Index CHAPTER XIII. EDUCATION.The interest shown in education in a community is, perhaps, the best test of the character of that community. There is no place where the future can be so shaped as in the schoolroom. Manitowoc county has good reason to feel proud of her past in respect to her educational history, for it is a matter of common knowledge that she has stood among the foremost counties of the state and that her efforts have gained wide recognition. As regards her public, private and parochial institutions of learning there has always been a spirit of enterprise prevailing. The self- sacrifice of the pioneer in giving his child an education in the face of almost insurmountable difficulties, is worthy of emulation and forms a peculiarly American characteristic. The first school established in the county succeeded the first settlement by a year. It was in the winter of 1837-1838 that a few pioneers at the mouth of the Manitowoc decided to light the torch of knowledge. This was done by the raising of a private subscription and the hiring of one S. M Peake to instruct the children of the community, twelve in number, P. P. Smith being the oldest. The primitive school held its sessions in the Jones warehouse at the corner of Sixth and Commercial Streets and instruction continued only through P 244 the winter months. In the spring Mrs. L. M. Potter, who had formerly been a teacher in the government school at Green Bay, opened a school at the Rapids, which continued in existence for some time, among the pupils being P. P. Smith and others from Manitowoc. Two years later a public school was established at the Rapids, the town hall being utilized for the purpose. A gentleman by the name of Beardsley was the first teacher and among his pupils were D. La Counte, P. P. Smith, D. Sackett, Giles and Erwin Hubbard and Joseph La Counte. In 1844 the county board chose E. L. Abbot, O. C. Hubbard and Oliver Clawson school commissioners and divided the county into three districts:--Two Rivers, Rapids and Manitowoc, schools being established at each and elections for district officers were held on October 10th. During the next five years the population remained almost stationary and as late as 1849 there were only seven school districts in the county. The Manitowoc school district, known later as No. 1, by that time had grown to such proportions that a commodious building was necessary and in 1848 the legislature authorized it to levy a tax of $350 for a new school. The money was accordingly raised and the next year a two story frame structure erected on North Seventh Street. This building for many years was the usual public gathering place for the villagers as well. In the same year a private German school was established in the town of Kossuth and George Peterson started a similar institution in the village of Mishicot, both being supplanted by public schools a few months later. At that time the average school year in the county was seven months and only a little over one half of the children attended regularly, owing to long distances and poor roads. The first gathering of the county teachers and those interested in education occurred at the courthouse at the village of Rapids in May 1849. Albert Wheeler acted as chairman and K. K. Jones as secretary. State Superintendent Root was present and addressed the pedagogues, recommending new plans and particularly the system of teachers' institutes. The meeting adopted resolutions favoring the formation of a county organization and the following P 245 were chosen officers:--president, James Bennett; Vice Presidents, P. Pierce, of Rapids and B. F. Sias of Two Rivers; Treasurer, William Ham, of Manitowoc; Secretary, E. H. Ellis of Rapids; directors, H. H. Smith, of Two Rivers, W. F. Adams, of Meeme, Alden Clark and K. K. Jones, of Manitowoc. Some attempts were also made at the introduction of the graded system of schools soon after. The extensive Irish and German immigration of the early fifties had an important influence on the county in an educational way since the favor, with which both nationalities view the school, is too well known to need remark. These sturdy pioneers rapidly settled both the rural and village communities and the log schoolhouse was a necessary attendant upon their advent. By the end of the year 1850 the first schools in the present limits of the townships of Centerville, Cato, Newton, Rockland, Meeme, Mishicot and Liberty had been established and within a few years the starting of schools in the other townships followed. The reports of the state superintendent of public instruction show a remarkable growth in one year alone. In 1850 90 out of 169 children in the county attended school, in 1851 633 out of 769; in 1850 there was received from the state funds $118; in 1851 the amount was $560. Much of the state school lands were situated in the county, there being 22,321 acres as late as 1852. The wages paid teachers in the county at this time averaged $23.50, which was higher than that maintained anywhere in the state. Among the pioneer country school teachers were Mrs. G. W. Burnett, Misses Theresa Mott, Harriet Higgins and Jane Jackson and Asa Holbrook, James Evers, John Stuart and J. Cohen. An atmosphere was created favorable to education in the Irish settlements in Meeme, particularly under the tutelage of Henry Mulholland, Sr., and Patrick O'Shea, resulting in the production of a coterie of bright minds, whose names became well known in educational circles of a later period. In the village of Manitowoc progress was also rapid. The growing needs resulted in the formation of several private schools, among them one taught by A. Wittmann in 1854, another in connection with the German Luth- P 246 eran Church started in the same year and a third taught by Rev. Melancthon Hoyt of St. James Church, established two years later. School District No. 1 was ably served in the early fifties by Jos. Vilas, who had just arrived in Wisconsin and in 1856 O. R. Bacon, one of the chief figures in the educational history of the county assumed charge. He was thirty-five years of age at the time and was a man of considerable ability. After six years at the head of the school he resigned, serving as a paymaster during the war and later went into business at Manitowoc, dying June 18, 1882. By 1856 the village had become so large that a new district became necessary and Dr. A. C. Gibson was hired by the residents on the south side of the river to open a school, which was done in the Esslinger building on Franklin Street in May. Later a frame building was erected for its occupancy at the corner of South Seventh and Washington Streets. Dr. Gibson remained in charge until the fall of 1858, when he accepted a position in the Two Rivers school and was succeeded by Jared Thompson, who was a man of high scholarly attainments. The interest shown in education is evidenced by the large attendance at a teachers' gathering held at Sheboygan in 1859, the following from Manitowoc County participating, Misses A. Birchard, S. E. Butler, C. M. Cooper, E. Tucker and Messrs. O. R. Bacon, C. S. Canright and Jared Thompson, all of Manitowoc; Henry Mulholland of Meeme, Joseph Stevenson, of Buchanan and Misses C. Honey and C. Williams and Messrs. J. B. Lord and J. W. Peck of Two Rivers. In the fall of the next year the teachers of the county held a convention in the Presbyterian Tabernacle at Manitowoc. By 1860 according to the state report there were 86 districts in the county, the average school year was six months, 3971 out of 7887 children of school age attended and $4972 was received from the state. The value of school buildings was at that time $15,769, while the average teacher's wages were $22.24 for males and $15.42 for females. By way of comparison the report of 1870 is taken, showing the result of ten years growth. At the later date 7810 out of the 14254 P 247 children of school age were in attendance, the state aid had increased to $5647, the value of school property to $35,760 and the average teachers' wages to $40.36 for males and $26.85 for females, there being 183 teachers in the county. In the First Ward School professor Thompson was succeeded in 1860 by W. F. Eldredge, C. S. Canright acting as assistant. The former served until October 1861, when he entered the army. He was a young man of great popularity and after years of honorable service for his country he moved to Yankton, Dakota where he died in 1895. During the war the first district school was taught for some time by O. F. De Land but later was under the joint charge of four ladies, Misses Warbuss, Burritt, Squires and Bennett. The office of county superintendent of schools was created by legislative act in 1861 and in that year Manitowoc county elected the first incumbent of that position, B. J. Van Valkenburgh being the Democratic and Fred Borcherdt the Republican candidate. The former won by a majority of 280 votes but resigned to go to the war in October of the next year, C. S. Canright being chosen to fill the vacancy temporarily until the fall election, at which J. W. Thombs, the Democratic candidate defeated Henry Sibree. Superintendent Thombs was succeeded by Jere Crowley, who was elected in 1863 over W. F. Eldredge by 608 majority. Crowley served in this office until his death five years later, being elected over Joseph Smith in 1865 and over A. M. Richter in 1867. Under his supervision education was systematized and regular examinations introduced, the county being divided into five districts for that purpose. Seventy-four teachers' certificates were granted in the county in the first year of his incumbency, which number had increased to 93 in 1870, to 152 in 1880 and decreased to 114 in 1890. The close of the war marked a great increase in educational facilities. In Manitowoc a Lutheran school was erected in 1866 and a year later a Roman Catholic school started. Private schools were maintained by Mesdames S. Hill and Barnes and by Miss Maria Martin. In February 1865 J. F. Silsbee became the teacher in the south side district. It was P 248 during his incumbency that an order from the state superintendent closing the German department in all schools that maintained such instruction created so much adverse comment. After some months he was succeeded by Prof. McMullin, who in turn gave way to Prof. Scudder, a graduate of the University of Wisconsin. At Two Rivers $5000 was voted for a new school in 1866 and a year later the new building was dedicated, J. F. Silsbee having charge. On October 29, 1866 the Third Ward School in Manitowoc was started in a brick building 35 by 50 feet on South Tenth street with Miss Minnie McGinley as principal. The condition of the other schools also became so crowded that the small buildings were totally incapable of holding the pupils, so that on the north side the primary department was divided and taught by C. M. Barnes and Miss Mary Shove in two private houses on North Sixth street and on the south side the intermediate and primary departments were removed to the corner of South Seventh and Jay streets. A sub- primary or kindergarten was also established under Miss Anna Metz at about this time. Michael Kirwan was elected county superintendent in 1869 over C. S. Canright by over seven hundred majority and two years later defeated O. R. Bacon, being elected a third time to the office in 1873 by a unanimous vote. During his six years of office the condition of the schools was much improved and the esprit du corps among the teachers maintained at a high level. Large teachers' institutes were held annually, that in 1870 being the first, in which great interest was manifested, over one hundred pedagogues being in attendance. During this time O. H. Martin, D. F. Brainerd, J. F. A. Greene, L. J. Nash and later J. N. Stewart had charge of the North Side High School, while on the south side B. R. Anderson and C. A. Viebahn were successful teachers in the First Ward and W. A. Walker and J. Luce in the Third Ward. At Two Rivers among the teachers during this period, that is down to 1875, were J. S. Anderson, G. A. Williams, W. N. Ames, Charles Knapp and John Nagle, the latter acting as principal until 1877, in which year also Two P 249 Rivers voted in favor of the establishment of a free high school. In Manitowoc an effort was made to consolidate the schools and to establish a central high school in 1869 but it signally failed when put to a vote. The early seventies were also an era of schoolhouse building. In 1871 the First Ward School was constructed on South Eighth and Hamilton streets, the structure being dedicated on January 29th of the succeeding year. In 1868 the state legislature passed an act enabling the first or north side district to levy a tax not to exceed $25,000 in order to provide for the erection of a new school, which was then found a necessity. It was, however, four years before the residents of the district saw their way clear to build the structure, the cornerstone being laid with great ceremony on July 25, 1872, orations being delivered upon the occasion by Judge Anderson, Hubert Falge and others. Principal Stewart, who was then at the head of the school later became the president of the State Teachers' Association, was the author of several educational works and taught for many years at Janesville. His successor was Hosea Barns, who had charge of the school from 1874 to 1877, later entering the Baptist ministry and finally retiring to his home in Kenosha County after a life of usefulness. By the last year of his incumbency at Manitowoc the new brick building below Union Park was ready for occupancy and the high school was duly instituted. Two Rivers also erected a school in the seventies, the value of the two structures then possessed by her being $12,000. Many parochial schools were started by the Catholics and Lutherans throughout the county, including the Roman Catholic School at Two Rivers in 1877, which has always been particularly well attended, St. Ambrosius Academy at St. Nazianz and the girl's school at Alverno. In 1875 W. A. Walker, who had been a teacher in the Third Ward, was elected county superintendent over A. M. Richter and served two terms, being reelected without opposition. By the end of his incumbency there were 108 schoolhouses in the county, valued at $104,366, besides nineteen private schools. The funds received from the state in 1880 were P250 $6528, the average teachers' wages being $44.13 for males and $30.15 for females, while out of 15,919 children of school age, 8428 attended the public schools. Efforts were made in September 1872 to form a county teachers' association, but although officers were elected,--C. A. Viebahn being selected president, W. A. Walker vice president and Miss Emma C. Guyles secretary,--the organization did not prove successful. Reorganization took place in 1875, however, Hosea Barns being chosen president, John Nagle secretary and Miss Alice P. Canright treasurer, since which time annual meetings have been held and the association has played an important part in educational affairs. The instructional forces of the city schools underwent many and frequent changes during the late seventies and early eighties. In the Third Ward School Prof. Luce was succeeded by J. A. Hussey in 1876 who in turn gave way to O. S. Brown. In 1879 Principal Hussey ran for county superintendent on the Democratic ticket but was defeated by Prof. Viebahn of the First Ward School by 561 majority. Two years later Mr. Brown was the Republican candidate but met defeat at the hands of John Nagle, the Democratic nominee by a narrow majority, the latter having already filled out the term of Prof. Viebahn, since the latter had in 1881 accepted a position in the faculty of the Whitewater Normal School, which he has since held. Prof. Viebahn did much for education in Manitowoc County and was once honored with the presidency of the State Teachers' Association. Prof. C. E. Patzer soon became the principal of the Third Ward School and under his guidance it advanced rapidly. On the north side Prof Barns was succeeded for two years by J. P. Briggs, who in 1880 gave way to Prof. McMahon. The latter resigned to go abroad for study a year after he had accepted the position and J. M. Rait, who had been a teacher at Two Rivers, then assumed charge of the school for two years. In the first ward the vacancy caused by the resignation of C. A. Viebahn was filled by the selection of F. G. Young in 1880. After serving the district only three years he resigned, took a post graduate course at John Hopkins University and later became a pro- P 251 fessor in the University of Oregon. His successor was John Miller, who later resigned and the vacancy filled by the appointment of P. H. Hewitt. The latter for eight years conducted the school, placing it among the foremost by his incessant endeavors. Ill health compelled him to resign in 1894 and a year or so later he died of consumption. At two Rivers J. M. Rait acted as principal of the high school from 1877 to 1881, being succeeded by A. Thomas for three years, he later giving way to Arthur Burch, who in turn was succeeded by C. O. Marsh in 1887. Mr. Burch was another county teacher who attained the presidency of the State Teachers' Association. A new high school was built in the village of Kiel in 1884and among the principals, who have been in charge of the institution are P. H. Hewitt, J. C. Kamp, A. W. Dassler, G. M. Morrisey and A. O. Heyer. About fifty pupils are in regular attendance. All during the eighties John Nagle was county superintendent of schools, being selected unanimously in 1884 and 1886 and defeating A. Guttmann in 1888 by 1354 majority. His administration was a strong one and he became known throughout the state as a leading educator, being chosen president of the state association at one time. By 1890, the end of his administration, the state aid had increased to $17,543,7430 of the 14,891 children of school age were in attendance at school and the value of the buildings was $141,869, while the average of teachers' wages had reached the highest point attained before or since, being $49 for males and $32 for females, there being 155 teachers in the county at the time. The history of education in the county during the last ten or fifteen years of the nineteenth century was one of rapid development. In the first district Prof. Rait resigned at the end of the school year in 1883 and moved to Minneapolis and as his successor E. R. Smith of Burlington was chosen. A man of wide experience and great intellectual power for seven years he continued to exercise a beneficial influence on the school and when he resigned to embark in business great regret was felt. His successor, C. Fredel, remained but two years and gave way to H. J. Evans, an energetic instructor, P 252 who introduced many reforms in the school and soon had it on the accredited list of the state university. The district had grown so large that at the annual school meeting held in 1891 it was decided to build another structure, which was accordingly done. The building committee consisted of L. J. Nash, G. G. Sedgwick and A. J. Schmitz and a site was chosen at the corner of North Main and Huron streets, the school being named after Chas. Luling. An addition to this school was built in 1899 at a cost of $12,000. In 1901 the average attendance in the high school was 180, in the Park School as a whole 569 and in the Luling School 360. In the fall of 1902 Prof. P. G. W. Kellar the present principal assumed charge. The First Ward School by the resignation of Principal Hewitt found it necessary to cast about for another man and Prof. C. E. Patzer was accordingly chose, continuing at the head of the institution for three years. Mr. Patzer had served four years as county superintendent, defeating A. Guttmann the Republican candidate in 1890 and being chosen unanimously at the next election. He was a man of much administrative ability and secured a position for his school on the accredited list. Resigning in 1897 to accept a position as professor in the Milwaukee Normal School, he was succeeded by W. Luehr, who proved to be a very able instructor. In the third ward Albert Guttmann became principal in the fall of 1886 and during seventeen years of able service he has done much for the school. The old facilities proving inadequate in 1891 a new schoolhouse was begun on South Twelfth street, being completed in the course of a year at a cost of $25,000. In 1900 still another building was erected, this time in the Fifth ward on Twenty- First street at a cost of $20,000. In the fourth district, a small division in the southern part of the city set off in the seventies, a new school was also erected at about the same time. All the schools of the city are maintained under the old district and school meeting system, although much talk of consolidation, particularly in regard to the high school, has taken place. Among the principals of the Two Rivers High School in the nineties were A. W. Dassler, E. R. Smith, E. B. Carr, O. P 253 B. O'Neil and C. W. Van de Walker. For the county superintendency A. Dassler was successful in 1894 but after one term was defeated by E. R. Smith, who was a Republican. After an able administration he was in turn defeated in 1898 by F. C. Christianson who was reelected twice without a partisan contest. According to his report of that year the receipts from the state were $15,674, 8733 children attended school out of 15,783 of school age and there were 171 teachers in the county, the average wages being $44 for males and $31 for females. A county training school for teachers, the third in the state, was opened in September 1901 under charge of Prof. F. S. Hyer and Miss Rose Cheney in the Fifth Ward School and much interest has been taken in the innovation. Parochial schools have also kept in the van of progress. A new building for the Roman Catholic School in Manitowoc was constructed in the later eighties and the German Lutherans completed a similar structure in 1891. In nearly every village and hamlet there are church schools, the Lutherans maintaining ten in the county and the Roman Catholics an even larger number. A private school entitled the Lake Shore Business College was established by Prof. C. D. Fahrney in Manitowoc in 1891 but suspended after five years of existence. Some years later the Wisconsin Business College was established and led a successful career under the able instruction of Principal C. F. Moore. A school for the deaf and dumb was instituted by the city with state aid in 1893 but it ceased to exist after seven years. Libraries always play an important part in education. On January 23 1868 in a letter to C. H. Walker, Col. K. K. Jones, of Quincy, Ill. offered to give Manitowoc a library, provided an association was formed and the maintenance of the institution assured. The offer was accepted with eagerness and a public meeting held on February 1st, of which Joseph Vilas acted as chairman and Henry Sibree as secretary. A committee was appointed, consisting of O. B. Smith, H. Sibree, D. J. Easton and A. D. Jones, to make final arrangements and an association was formed on February 29th with C. H. Walker president, J. F. Guyles vice president, P 254 Peter Johnston treasurer and O. B. Smith secretary. The association was duly incorporated by the legislature, the charter providing for a board of nine directors, to be elected annually, any subscriber to the amount of four dollars being given the privilege to vote at the meetings. The library was installed in a building on York street and was well supported and patronized for many years, many social and literary functions being given for its benefit. It was maintained until 1888, when the several hundred books it then possessed passed into the temporary care of the Y. M. C. A., being later transferred to the rooms of the Calumet Club and then to the north side school until added to the new city library. Although attempts were made to revive the enterprise from time to time Manitowoc was without a library until 1899, when as the result of the work of Miss Stearns of the State Library Commission, assisted by many of the local ladies, interested in education a favorable sentiment was created and sufficient funds accumulated for the opening of the institution. The following were, in November, appointed the first city library board:--L. J. Nash, E. Schuette, N. Torrison, Dr. John Meany, John Nagle, Dr. A. C. Fraser, F. C. Canright, Mesdames J. S. Anderson and Max Rahr. Rooms were secured in the Postoffice Building and the library proved a most successful enterprise. Andrew Carnegie donated $25,000 for a city library in 1902 and the work of erection was soon decided upon. In January 1891 Joseph Mann of Milwaukee donated $1000 to the city of Two Rivers for a public library and about $2100 was raised by others in support of the institution. It was opened soon after and has been well patronized, receiving at various times considerable municipal support. District school libraries have been quite generally established throughout the county also, forming a valuable adjunct to the regular facilities.
<urn:uuid:1988a922-1e2c-4e94-a2ee-49c2aad82eca>
CC-MAIN-2016-50
http://www.2manitowoc.com/RPlumb13.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698541321.31/warc/CC-MAIN-20161202170901-00342-ip-10-31-129-80.ec2.internal.warc.gz
en
0.984751
5,702
2.828125
3
In the first paragraph we read: "The physics of elementary particles in the 20th century was distinguished by the observation of particles whose existence had been predicted by theorists sometimes decades earlier. There were also particles no one had predicted that just appeared. Five of them are of interest to me here. In order of increasing modernity, they are the neutrino, the pi meson, the antiproton, the quark and the Higgs boson." That list is of expected, not unexpected, particles; it should have been attached to the first sentence of the article, not to the sentence that actually precedes it. The article is actually about expected particles; consequently, its subtitle is also misleading. This confusion must be due to mistaken editing. Likewise, in the sixth paragraph: "And we know that there are three distinct kinds and that they are all massive. This means that they move at speeds close to that of light." On the contrary, what means that they move at speeds close to that of light is not that they are massive, but that they are *almost* massless. posted by Joseph Fineman February 20, 2012 JSTOR, the online academic archive, contains complete back issues of American Scientist from 1913 (known then as the Sigma Xi Quarterly) through 2005. The table of contents for each issue is freely available to all users; those with institutional access can read each complete issue. View the full collection here. An early peek at each new issue, with descriptions of feature articles, columns, and more. Issues contain links to everything in the latest issue's table of contents.News of book reviews published in American Scientist and around the web, as well as other noteworthy happenings in the world of science books. To sign up for automatic emails of the American Scientist Update and Scientists' Nightstand issues, create an online profile, then sign up in the My AmSci area.
<urn:uuid:34b38615-f808-4b3e-97c5-b4259a656a4c>
CC-MAIN-2016-50
http://www.americanscientist.org/comments/comment_detail.aspx?id=326&pubID=14731
s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698541321.31/warc/CC-MAIN-20161202170901-00342-ip-10-31-129-80.ec2.internal.warc.gz
en
0.966374
392
2.734375
3
Milestones are more than just stepping stones on the critical path to the completion of a project. Read more to discover why setting milestones is an important managerial tool in project planning, evaluating outcomes, and rewarding team members. Milestones in Project Planning and Scheduling This article is the first part of a series that discusses the effective use of milestones in project planning and scheduling. This opening article provides an introduction to project milestone planning, emphasizing its importance and providing general guidelines on how to select milestones that will be the most relevant for your project. What is a Milestone? We encounter milestones in all aspects of our lives, as an individual striving to achieve our life goals, as an employee working to advance an organization's mission, and as a member of the human race trying to expand our collective knowledge and understanding about the world and beyond. Astronaut Neil Armstrong summed up the concept of a milestone perfectly as he stepped onto the lunar surface and enthusiastically declared, "That's one small step for man, one giant leap for mankind." Milestones are the small steps that lead to the ultimate goal whether it be developing of new product or service or advancing the exploration of space to the far reaches of the universe. In its basic form, a milestone is an important event marked on a timeline and recognized when successfully reached. Milestones are the building blocks for the project's schedule and often create forward momentum to propel the project along to completion. They can also be used effectively as primary checkpoints to see how well your project is doing and whether the project is on schedule and on budget. Guidelines for Setting Milestones When embarking on project milestone planning, you will first need to create a work breakdown structure to get an overview of all the tasks in a manageable outline or diagram. With this overview in hand begin to look for opportunities to setup milestones around the completion of key tasks and activities. Try to visualize a timeline of the important events that will advance the project to the next level. For example, in NASA's "Race to the Moon" that began with President Kennedy's pledge in 1961 to put a man on the moon by the end of the decade, there were several significant milestones achieved before the Apollo 11 mission, including the successful missions of the Ranger series of unmanned probes that photographed, studied, and soft landed on the Moon. When selecting milestones be conscience of these parameters: Frequency – As a project manager, you may be tempted to overuse milestones as a motivation tool to keep the team moving along the ladder to reach the surface of success, but don't fall into the trap of labeling every task completion as a milestone. In turn, don't adopt the other extreme approach by ignoring or not recognizing significant and relevant events as milestones particularly at junctions of the critical path. A good compromise is to consistently designate important deliverables as milestones. Timing – Milestones that are spaced too far apart will not have the benefit of the momentum derived from motivating team members by recognizing their major achievements. However, when milestones, appropriately represented as diamonds in MS Project, are placed too closely together they quickly lose their luster and distinctiveness. As a rule of thumb try to space milestones at intervals for no longer than every two weeks for projects of several months in duration. Visibility – Milestones need to be placed prominently in the project's schedule and tracked periodically. Make sure that your milestones have been incorporated into your project scheduling, calendar, or other project tracking software program. Accountability – Milestones are commitments that must be met on time. If a milestone is missed, it needs to be addressed immediately by reexamining the resources to determine if they are properly matched to the objectives. Fallibility – It may sound counter-intuitive, but you should select challenging milestones that carry a degree of risk for failure. Not every venture of NASA undertaken to pave the way for the Apollo 11 mission was successful. Ranger 3, an unmanned probe sent to study the Moon missed its target by 22,000 miles. Don't forget to treat milestones as learning experiences and opportunities to make adjustments early in the project's execution. By keeping these guidelines in mind when project planning milestones, you will able to achieve an appropriate balance between the easy and challenging milestones that will inspire your team members to stay motivated and feel a greater sense of accomplishment. Author's own experience in project milestone planning. "American Experience | Race to the Moon | Timeline | PBS." PBS: Public Broadcasting Service. http://www.pbs.org/wgbh/amex/moon/timeline/index.html (accessed March 7, 2011). " Moon Timeline." AbsoluteAstronomy.com. http://www.absoluteastronomy.com/timeline/Moon (accessed March 7, 2011). Image Credit: Astronaut Buzz Aldrin (Apollo 11 Mission) courtesy of NASA's public domain license at Wikimedia Commons Milestones in Project Planning and Scheduling This series of articles provides information and guidance on incorporating and managing milestones in project planning and scheduling. - Project Milestone Planning - Project Planning Typical Milestones - Filling MS Project Milestones
<urn:uuid:0927eebc-0bd5-40b7-9a74-7c60c3d2486d>
CC-MAIN-2016-50
http://www.brighthubpm.com/project-planning/68427-successful-project-milestone-planning/
s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698541321.31/warc/CC-MAIN-20161202170901-00342-ip-10-31-129-80.ec2.internal.warc.gz
en
0.926841
1,044
3.078125
3
The role of Clay soil in Vineyards There are three major types of clay: illite, kaolinite and montmorillonite-smectite. They are made of feldspar, quartz, carbonates and organic material. In nature, clay is almost always mixed together with other materials. Few varietals like a large percentage of clay as it is difficult to plow and aerate. Root systems have a difficult time penetrating and expanding in these soils. Because clay has a lot of minerals, grapevines need some to survive. In general, between 5%-10% clay is the best range for quality grape growing. > Related Articles
<urn:uuid:47504c86-36f3-4095-8dfe-26854e09f2ab>
CC-MAIN-2016-50
http://www.calwineries.com/learn/grape-growing/clay
s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698541321.31/warc/CC-MAIN-20161202170901-00342-ip-10-31-129-80.ec2.internal.warc.gz
en
0.951885
137
3.59375
4
Franklin Delano Roosevelt began his first presidential term riding a tidal wave of public support. In the 1932 election, he crushed dour incumbent Herbert Hoover and carried the Democrats to a solid majority in Congress. Following his inauguration, legislators gave Roosevelt unprecedented authority to remake the American presidency. The simultaneous rise in popularity of radio and FDR’s political fortune is an interesting historical twist of fate. Radio brought news alive, but left people free to create images in their imaginations. FDR’s distinctive voice and jollity flowed into people’s homes. His disability was invisible. Radio helped make this possible. Through this means of mass communication, FDR could convey his ideas effectively, sitting in his estate in Hyde Park, New York or in the White House. Because FDR was such a masterful communicator, he was able to use his speeches, press conferences, and radio broadcasts, to shape American history. Evidence of FDR’s successful use of the spoken word is widespread. The power of his "Day of Infamy" speech led the nation to unite behind the President’s call to war, and his fireside chats gained him support from the people for innovative and controversial social programs. The other was his relationship with the public. As with any successful politician, FDR’s power came from the people. Radio provided him with a direct link to his voting public and the next generation of voters. His use of radio helped him win people’s hearts. Historians still debate FDR’s true significance in history--saint or manipulator, or somewhere in between. However, Franklin Roosevelt was the Great Communicator, and his impact on America resonates even today. Complete Collection: 48 reels in 3 parts
<urn:uuid:687b5284-f3b6-40aa-b0ca-fdeb7c267173>
CC-MAIN-2016-50
http://www.cengage.com/search/productOverview.do?N=197&Ntk=P_EPI&Ntt=1842060481625545205367401481860104875&Ntx=mode%2Bmatchallpartial
s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698541321.31/warc/CC-MAIN-20161202170901-00342-ip-10-31-129-80.ec2.internal.warc.gz
en
0.971291
355
3.640625
4
Clearly the utilisation of daylight reduces the need for artificial light and thus should form an important part of an adaptive facade's control strategy for reducing building energy use. However, the introduction of natural light is not a guarantee of visual comfort. Physiologically, daylight can cause visual discomfort when distributed unevenly in a room, resulting in patterns of high contrast. Outdoor views can make an interior seem dark and gloomy, and direct sunlight can make a room too bright. Both of these examples can cause discomfort glare and in the worst cases disability glare. Such inadequacies lead to occupants closing blinds and switching on lights, resulting in the unnecessary use of electric lighting. The Commission Internationale de l’Eclairage (CIE) defines glare as: "visual conditions in which there is excessive contrast or an inappropriate distribution of light sources that disturbs the observer or limits the ability to distinguish details and objects." Glare is quantified by a glare index, depending mainly on window illuminance and reflections within the room. Glare caused by a direct view of the sky is considered to be acceptable if the glare index at a particular point in the room, does not exceed the recommended level for the particular operation. There are various forms of glare indices available for the designer, these include: the British Glare Index, based on research by Hopkinson and Pertherbridge; and the CIE Glare Index proposed by Einhorn. In practice, the use of these indices in blind control is limited by the nature of the light sensors used, the many assumptions required and an analytical method that can not account for the subjective human responses often associated with visual comfort. Indeed an occupants decision on preferred blind angle often depends upon a trade off of perceptions. Vision is the most developed of our senses and it can affect an individual’s mood and cognition. It is not adequate to simply provide adequate illumination levels to satisfy the multidimensional nature of visual comfort. Daylight within buildings is provided for people, therefore daylighting design should respond to their visual and perceptual needs. As these needs are so variable and difficult to measure we must allow the occupant the luxury of being able to make adjustments.
<urn:uuid:836ef723-dab3-48c2-a12f-68294f29c892>
CC-MAIN-2016-50
http://www.cwct.co.uk/ibcwindow/adaptive/glare.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698541321.31/warc/CC-MAIN-20161202170901-00342-ip-10-31-129-80.ec2.internal.warc.gz
en
0.917895
440
3.515625
4
Aristotle & Logic: Syllogisms & Inductive Reasoning Syllogistic logic and inductive logic are key forms of persuasion in the Ethics. According to Aristotle, scientific knowledge "starts from what is already known...[and] proceeds sometimes through induction and sometimes by syllogism" (VI.3 p. 140). The difference between syllogism and induction is as follows: "induction is the starting-point which knowledge even of the universal presupposes, while syllogism proceeds from the universals" (V1.3 p. 140). A. Syllogisms (a type of Deductive reasoning) Syllogisms consist of three parts: - general statement ("universal") - particular example An example from Reeve's Practices of Reason, p. 12 - All plants in which sap solidifies at the joint between leaf and stem in autumn are deciduous. - All oak trees have sap that solidifies at the joint between leaf and stem in autumn. - Therefore, all oak trees are deciduous. Another example from a Humanities 110 lecture (12/1/95): - One should not seek delights that violate the sacred guest/host relationship - (a) I am now a guest in Helen's house; (b) fulfilling our reciprocal desire would be delightful; (c) but she is the wife of my host - Therefore, making love with Helen would be a violation of the guest/host relationship, and I should not do it. Sample hypothetical example from a student paper - To be rational means one must act consistently, take multiple factors into account, and choose what is "best". - Antigone acts consistently, takes multiple factors into account, and chooses what is "best". - Therefore, Antigone is rational. An example you have found in the Ethics: B. Inductive Reasoning According to Daniel Sullivan, "inductive reasoning involves a transition from the sensible singular to the universal" (Fundamentals of Logic 114). For example: And this fire warms, And this fire warms, etc. Sample inductive reasoning from a hypothetical student paper: In The History, Thucydides dumps on confidence In The Bacchae, Euripides dumps on confidence
<urn:uuid:36d71b3e-1b6c-43a4-8bd5-bb77ff5f4716>
CC-MAIN-2016-50
http://www.reed.edu/writing/paper_help/reasoning.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698541321.31/warc/CC-MAIN-20161202170901-00342-ip-10-31-129-80.ec2.internal.warc.gz
en
0.918125
483
4.03125
4
MALARIA, ARTEMISININ RESISTANCE-SOUTHEAST While the communicable disease wreaks its heaviest toll in Africa, it's in nations along the The availability of therapies using the drug artemisinin has helped cut global malaria deaths by a quarter in the past decade. But resistance to it emerged on the Thai-Cambodia border in 2003, and has since been confirmed in The report warns that could be a health catastrophe in the making, as no alternative anti-malarial drug is on the horizon. The UN World Health Organization, or WHO, is warning that what seems to be a localized threat could easily get out of control and have serious implications for global health. "Absent elimination of the malaria parasite in the Mekong, it is only a matter of time before artemisinin resistance becomes the global norm, reversing the recent gains," writes Dr Christopher Daniel, former commander of the US Naval Medical Research Center, in the report for a conference at the Mosquitoes have developed resistance [it is the malaria parasite which develops resistance, not the mosquito] to antimalarial drugs before. The same happened with the drug chloroquine, which helped eliminate malaria from Europe, North America, the Caribbean, and parts of Asia and Nowhere are the challenges in countering the threat to drug-resistance greater than in In a third of townships, there been virtually no public health presence for years. It's an issue of regional concern as factors: delays in giving treatment, use of counterfeit or substandard drugs, and prescribing artemisinin on its own rather than in combination with another longer-acting drug to ensure that all malaria-carrying parasites in a patient's bloodstream are killed off. The Center for Strategic and International Studies is advocating greater Communicated by: ProMED [The emergence of artemisinin resistance in We have previously argued that the development of resistance is best contained by providing the population with free malaria drugs ensuring a full course of treatment with drugs which contain the active compounds in the required doses (Schlagenhauf P, Petersen E: Antimalaria drug resistance: the mono-combi-counterfeit triangle. Expert Rev Anti Infect Ther. 2009; 7(9): 1039-42). Free drugs are provided to patients with HIV and tuberculosis and should be provided to malaria patients as well to remove the market for counterfeit and substandard drugs. Phone: +27 (011) 025 3297 Fax: +27 087 9411350 / 1 Postal address: SASTM, PO Box 8216, Greenstone, 1616, South Africa Physical address: SASTM, 27 Linksfield Road Block 2 a Dunvegan Edenvale Registered as a Nonprofit Organisation 063-296-NPO The content and opinions are neither pre-screened nor endorsed by the SASTM. The content should neither be interpreted nor quoted as inherently accurate or authoritative. The information provided in SASTM Newsflashes is collected from various news sources, health agencies and government agencies. Although the information is believed to be accurate, any express or implied warranty as to its suitability for any purpose is categorically disclaimed. In particular, this information should not be construed to serve as medical advice for any individual. The health information provided is general in nature, and may not be appropriate for all persons. Medical advice may vary because of individual differences in such factors as health risks, current medical conditions and treatment, allergies, pregnancy and breast feeding, etc. In addition, global health risks are constantly evolving and changing. International travelers should consult a qualified physician for medical advice prior to departure.
<urn:uuid:6655dcae-e01b-46fc-9bf7-afbcb3df5c83>
CC-MAIN-2016-50
http://www.sastm.org.za/News/Details/207
s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698541321.31/warc/CC-MAIN-20161202170901-00342-ip-10-31-129-80.ec2.internal.warc.gz
en
0.924912
759
2.921875
3
This page is about the meaning, origin and characteristic of the symbol, emblem, seal, sign, logo or flag: Pi. The number pi (symbol: π) /paɪ/ is a mathematical constant that is the ratio of a circle's circumference to its diameter, and is approximately equal to 3.14159. It has been represented by the Greek letter "π" since the mid-18th century, though it is also sometimes written as pi. π is an irrational number, which means that it cannot be expressed exactly as a ratio of two integers (such as 22/7 or other fractions that are commonly used to approximate π); consequently, its decimal representation never ends and never settles into a permanent repeating pattern. The digits appear to be randomly distributed, although no proof of this has yet been discovered. π is a transcendental number – a number that is not the root of any nonzero polynomial having rational coefficients. The transcendence of π implies that it is impossible to solve the ancient challenge of squaring the circle with a compass and straight-edge. Asymmetric, Open shape, Monochrome, Contains both straight and curved lines, Has no crossing lines. More symbols in Greek Symbols: Greek alphabet letters and symbols are used as math and science symbols. read more » More symbols in Mathematical Symbols: This is a list of symbols found within all branches of mathematics. read more »
<urn:uuid:64c9b158-9887-4117-873e-a384afc990d0>
CC-MAIN-2016-50
http://www.symbols.com/symbol/1317
s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698541321.31/warc/CC-MAIN-20161202170901-00342-ip-10-31-129-80.ec2.internal.warc.gz
en
0.948385
301
3.328125
3
Woolwich, Kent, London A parliamentary report of 1777 recorded parish workhouses in operation at Woolwich accommodating up 100 inmates, and at Plumstead for up to 45 inmates. Between 1838 and 1868, the parish of Woolwich was part of the Greenwich Poor Law Union. In 1868, as part of a large number of changes to London's poor law organisation around that time, a new Woolwich Union was created which took in three parishes formerly belonging to the Lewisham Union. The new union officially came into existence on 10th March, 1868. Its administration was overseen by a Board of Guardians, 17 in number, representing its constituent parishes of Charlton next Woolwich (3 Guardians), Kidbrooke (1), Plumstead (5), and Woolwich (8). The Tewson Road Workhouse On 2nd April 1870, the foundation stone for the new Woolwich Union workhouse was laid by the Revd Francis Cameron. It bore the inscription "The poor ye have always with you". The workhouse was situated at Tewson Road, between Skittles Alley (now Riverdale road) and Cage Lane (now Lakedale Road) at the south side of Plumstead High Street, and was designed by the firm of Church and Rickwood. In 1872, a separate infirmary was erected to the south of the workhouse. The new buildings consisted of three ward blocks with central staff quarters, kitchens, stores, offices and committee rooms. The wards included accommodation for children and maternity patients, and a special sick bay for vagrants from the casual ward at Hull Place at the north of the workhouse. The site location and layout are shown on the 1914 map below: From around 1904, birth certificates of those born in the workhouse carried a euphemistic address so as not to stigmatise them in later life. The Woolwich workhouse's address for this purpose was 79b Tewson Road, Woolwich. In the 1920s, the workhouse became known as the Woolwich Institution, and the infirmary as the Plumstead and District Hospital. In 1930, following the formal end of the workhouse system, control of the site passed to the London County Council. It was then renamed St Nicholas Hospital and, at that time, had 320 beds. As part of the changes, many of the walls that formerly separated different classes of workhouse inmates were removed as shown on below of the same of the workhouse. In World War Two, the whole of the northern block was destroyed by in a single bomb attack. In 1945, the hospital suffered further damage from a flying bomb. The hospital has now closed and the site has been completely redeveloped. The Goldie Leigh Homes In 1899, the Woolwich union erected the Goldie Leigh children's cottage homes site at Bostall Heath, to the south-east of Woolwich. A receiving home at 43-47 Parkdale Road, Plumstead, which dealt with children prior to their being transferred to the homes. The name Goldie Leigh is believed to derive from the estate of Basil Heron Goldie (1792-1849), son of Lieutenant-General Thomas Goldie of Dumfries (c.1750-1804) and Amelia Leigh (1756-1845) of North Court, Shorwell, Isle of Wight. Their home, Goldie Leigh Lodge, was situated in what later became the hospital grounds. The Goldie Leigh site location and layout are shown on the 1914 map below. The homes comprised a row of houses along Lodge Lane, together with an infirmary, laundry and other buildings. Each "cottage" housed a group of around sixteen children under the care of a house mother. In 1914, the Goldie Leigh site was rented out to the Metropolitan Asylums Board for use as a hospital for the treatment of ringworm. Ringworm was an infectious disease of the scalp, common amongst pauper children, for which the treatment centre had previously been the Downs School at Sutton. Goldie Leigh then gradually expanded its remit to cover a score of different conditions of the skin and scalp. In 1930, the site was taken over by the London County Council. By 1938 it had 248 beds, a school with five classrooms, a craft room and a large hall for long-stay patients. The hospital ran its own Girl Guides, Brownies, Boy Scouts and Wolf Cub packs. In 1961, with a decline in demand for the hospital treatment of skin conditions, part of the hospital — the Bostall Unit — was adapted for the care of children who were classed mentally subnormal. It continued to provide hostel accommodation for children with disabilities until 1988. The Goldie Leigh Hospital now provides a range of out-patient services including physiotherapy, occupational therapy, and psychiatric assistance. Note: many repositories impose a closure period of up to 100 years for records identifying individuals. The Ancestry website has two collections of London workhouse records: - The London Workhouse Admission and Discharge Records (1738-1930) are searchable by name. - The Poor Law and Board of Guardian Records, 1430-1930 are more extensive but only provide browsable page images. - The FindMyPast website has workhouse / poor law records for Westminster. - London Metropolitan Archives, 40 Northampton Road, London EC1R OHB. Holdings include: Admissions and discharges (1896-1944); Creed registers (1871-1943); Guardians' minute books (1868-1930); etc. Unless otherwise indicated, this page () is copyright Peter Higginbotham. Contents may not be reproduced without permission.
<urn:uuid:b6c18704-d989-4dc3-90eb-06211fafdce5>
CC-MAIN-2016-50
http://www.workhouses.org.uk/Woolwich/
s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698541321.31/warc/CC-MAIN-20161202170901-00342-ip-10-31-129-80.ec2.internal.warc.gz
en
0.961304
1,180
2.734375
3
MINUTE FOR PEACE DAY December December 22 is just three days before Christmas. In our new millennium Christmas is of increasing importance, with its message of "Peace and Good Will." Let every radio and TV station fill the day with minutes of music and words that inspire peaceful actions. Help us unite as one human family in new understanding and care for this wonderful nest in the stars: Planet Earth, our home. A Minute for Peace Day brought global attention for the way to peace back in 1963. December 22, 1963 was when we ended the period of mourning for President Kennedy with a global minute of silent prayer for peace on our planet. That special minute (1 p.m. in Dallas, 1900 GMT) was broadcast worldwide and affected people all over the world. Let's turn the tables on 9-11 by joining all over the world in Minutes for Peace all day on December 22 -- just three days before Christmas. Christmas can then be a turning toward peace, with our neighbor and our world. World Trade Center Tragedy and What To do. The horrific event on 9-11 showed us the power of hate. But love is more powerful than hate and December 22 provides a special opportunity to prove it. "Hatred does not cease by hatred. Hatred ceases only by love." Gandhi, and others, have demonstrated non-violent methods of opposing what was wrong. Martin Luther King described the power of love in his book "Strength To Love" -- and gave his life to prove it. Now, strength to "kill" is being advocated as the way to stop violence. To settle differences, war is advocated far more than peace. The search engine google.com has 39,000,000 items when you type in "War" and only 13,000,000 when you type in "Peace." Actions good or bad begin in the mind. Here is a way to reverse the damage done to people's thinking by media's headlines for violence and silence about the proven benefits of forgiveness, compassion and cooperation for common goals. The World Trade Center tragedy was the result of media failure to feature the work of the Franciscans and many other groups who were seeking the peaceful nurture of people and planet. A new idea that came from Minute for Peace and Earth Day, was the idea that we can now all think of ourselves as Trustees of Earth. In this age of Space exploration we know -- more than former generations -- that we are one human family and have only one Earth. With care and use of new technology we can now eliminate poverty, pollution and violence. All we need is a clear vision of our goal and reports on Internet - and in the media - of every successful effort to think and act as Trustees of Earth -- in ecology, economics and ethics. This course of action can appeal to the most people on our planet and do the most good. Then a new spirit of cooperation will engulf the world. With half the money we spend on wars we can make our planet a Garden of Eden. As we honestly work together we will see all around us the waste of wealth and its unfair monopolization by those in power. The solution is not to condemn the few in positions of power, but to demonstrate solutions and win their support -- not by the power of money or military might -- but by the power of truth, of good ideas and good will. Then with the power of the words "Love one another" we will reverse the direction of "9-11"and welcome the beginning of an era of peaceful progress in the new millennium. On December 22, talk peace, think peace, pray for peace, have faith for peace all over the world - in the home, at work in global relations of countries and corporations -- in all human institutions. As soon as you receive this message, take action. Pray, and Plan what you will do. Call your friends. Contact media, churches, colleges. Act now -- with faith that accents the positive and negates the negative. "Oh the faith that works by love." By Founder of Earth Day 4924 E. Kentucky Circle Dr. Denver, CO 80246
<urn:uuid:5e6a4a26-722c-4b94-a1b7-a72af576faac>
CC-MAIN-2016-50
http://www.wowzone.com/minute_for_peace_day.htm
s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698541321.31/warc/CC-MAIN-20161202170901-00342-ip-10-31-129-80.ec2.internal.warc.gz
en
0.940754
881
2.75
3
Which Joe gave his name to ‘sloppy joes’? We look at five interesting sandwiches and their lexical origins. 1Relating to the state of Mississippi. Relating to or denoting the early part of the Carboniferous period in North America from about 363 to 323 million years ago, following the Devonian and preceding the Pennsylvanian. - ‘Kindle recognized two different fossil fauna assemblages in the limestone and the divided the limestone into two distinct lithologic units of Devonian and Mississippian age.’ - ‘Compared with Pennsylvanian times, early Mississippian tropical wetland ecosystems are poorly understood.’ - ‘More specifically, the western part of the county is composed mainly of Devonian and Mississippian sandstones and shales with some Silurian and Ordovician rocks present.’ - ‘The first important evolutionary event was the development of wings, which must have taken place in the Early Carboniferous although currently no Mississippian insects are known.’ - ‘The boundary between the Upper and Lower subdivisions in Europe is known to be below the boundary between the Pennsylvanian and the Mississippian subsystems in North America.’ - 2.1Archaeology Relating to or denoting a settled culture of the southeastern US, dated to about AD 800–1300. - ‘The size and complexity of Cahokia, and the influence of its ideology, must have created enormous changes in the smaller Mississippian communities of the southeastern cultural complex.’ - ‘In terms of lithic technology, Mississippian culture retained the system of nonformalized tool production begun during the Late Woodland period.’ - ‘This volume on the Mississippian town and mound center called Bottle Creek is a must-read for scholars, researchers, and students of Mississippian culture.’ - ‘He has conducted archaeological research in southern Illinois for almost 30 years, including work with Mississippian cultures of the region.’ - ‘The Cahokia flea started itching in my ear when I posted about the exhibit of Woodlands and Mississippian artifacts that is coming to Washington next month.’ 1A native or inhabitant of Mississippi. - ‘Politically, white Mississippians disfranchised nearly all black voters in 1890.’ - ‘The Atlanta Journal Constitution, one of the largest circulation newspapers in the South, reported last week that ‘some Mississippians were perplexed at hearing the decision.’’ - ‘Land surveyors used common names only, and many of these names lacked specificity or were used only by Mississippians at the time.’ - ‘Organized by the Student Nonviolent Coordinating Committee, Freedom Summer was a call to Northern white college students to join black Mississippians in the drive to register black voters in the South.’ - ‘That meant that, country or city, black or white, northeast Mississippians were encouraged to see themselves as being in the same boat.’ - ‘Attorneys for the plaintiffs say the issue is not about money, but that the settlement doesn't address admission standards or add programs that the Black institutions need to provide a quality education to all Mississippians.’ - ‘It is important that the board take a stand even though the action ‘may not impact one single vote’ when Mississippians go to the polls April 17 to decide the flag issue, Ross says.’ - ‘I think that's how most Mississippians would respond to it.’ - ‘Less than three years ago, Mississippians voted to keep the Confederate stars and bars on the state flag by a 2-to - 1 margin, and opinion polls suggest most Georgians are of a like mind.’ - ‘Her scholarship resonated with me as I recalled my mom, a Mississippi native, describing Chinese Mississippians who had both a Southern drawl and an entrepreneurial history in her neck of the woods that was a century old.’ - ‘A Georgian or a Mississippian may admit to being merely a Southerner… but no Texan, given the opportunity, ever said otherwise than ‘I'm from Texas.’’ - ‘A native Mississippian, he received his bachelor's degree in architecture from Auburn University in Alabama, in 1974.’ - ‘It does so by emphasizing that many gay Mississippians chose to remain in or return to this predominantly rural and small-town state, and by treating those who did with a minimum of pathos or nostalgia.’ - ‘In these early works, one can see how Mockbee, a fifth-generation Mississippian, first reconstituted forms and materials as elements for new solutions.’ - ‘Native Mississippians working on the line were at first perplexed, then angry, as line-speeds increased and new jobs were filled by workers from Mexican town they had never heard of, like Oaxaca and Chiapas.’ - ‘By failing to acknowledge upfront that black New Orleanians - and perhaps black Mississippians - suffered more from Katrina than whites, the TV talkers may escape potential accusations that they're racist.’ - ‘In the end, as Martin Luther King Jr. prophesied, they liberated white Americans too, including white Mississippians, by removing this historic stain from our society.’ - ‘According to the census, Mississippi has the fourth-lowest median income in the US; the per capita income of black Mississippians is 51% that of their white counterparts.’ - ‘The Deep South was Reagan country, and white Mississippians regarded Reagan as their native son.’ - ‘It was a line that Lott said he'd been working on for a while, and it produced loud applause from hundreds of Mississippians gathered at Founders' Square, the centerpiece of the historic fair.’ the MississippianThe Mississippian period or the system of rocks deposited during it. - 2.1Archaeology The Mississippian culture or period. - 2.1Archaeology The Mississippian culture or period. We take a look at several popular, though confusing, punctuation marks. From Afghanistan to Zimbabwe, discover surprising and intriguing language facts from around the globe. The definitions of ‘buddy’ and ‘bro’ in the OED have recently been revised. We explore their history and increase in popularity.
<urn:uuid:b7a0105a-f25e-471d-b237-b2eb9de446b9>
CC-MAIN-2016-50
https://en.oxforddictionaries.com/definition/us/mississippian
s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698541321.31/warc/CC-MAIN-20161202170901-00342-ip-10-31-129-80.ec2.internal.warc.gz
en
0.9508
1,406
3.171875
3
Send the link below via email or IMCopy Present to your audienceStart remote presentation - Invited audience members will follow you as you navigate and present - People invited to a presentation do not need a Prezi account - This link expires 10 minutes after you close the presentation - A maximum of 30 users can follow your presentation - Learn more about this feature in our knowledge base article Do you really want to delete this prezi? Neither you, nor the coeditors you shared it with will be able to recover it again. Make your likes visible on Facebook? You can change this under Settings & Account at any time. The History of Psychology Transcript of The History of Psychology Scientists and philosophers begin questioning the beliefs of the catholic church. They challenged the idea that behavior was caused from an external source rather than an internal source. Scientific and intellectual advances would eventually lead to the birth of psychology in the 1800's. Wilhelm Wundt and his pupils founded a branch of psychology called Structuralism. Structuralism involved two branches: objective senses (includes touch, smell, sight, taste, and hearing) and subjective feelings which included emotional responses. Structuralists believed that the human mind worked based on these two branches. All data was qualitative and collected through examination. Although structuralism was not scientific, it would lead to the continuation of psychology today. Following structuralism, functionalism was the idea that one would adapt and learn from their experiences. William James, who published what many consider the first modern psychology textbook, was one of the founders of functionalism and convinced that experience cannot be broken, ten years after Wundt. Unlike structuralism, functionalism included studies on behavior in laboratories and questioned "What do certain behaviors and mental processes accomplish for the person (or animal)?". Functionalists proposed that behavior was based on experience. If a certain behavior succeeded in achieving the results that lead to what one wanted, then the behavior would be repeated in order to produce the same results. However, functionalism is not considered a natural science as it could only be observed and not measured. Created by German scientists Max Wertheimer,Kurt Koffka, and Wolfgang Köhler during the 1920's in response to Wundt's structuralism, the Gestalt branch psychology was a school of thought based upon the idea that the whole of perception was larger than the sum of its parts. In other words, the mind gives shape or "Gestalt" to the parts of perception while also filling in the gaps depending on the context. Gestalt psychology also states that learning is an active and purposeful undertaking rather than a mechanical action as described by functionalism. In addition, Gestalt maintains that learning is mostly accomplished through insight and reorganization of perceptions that allow an individual to solve a problem. Socrates began psychological thought with the use of introspection or "knowing thyself." Plato, a student of Socrates, recorded Socrates's wisdom and studied his psychological thought, primarily introspection. Aristotle, one of Plato's students, further continued Socrates's and Plato's psychological studies. Aristotle's view on psychology and the mind was more scientific and he believed the mind and human behavior followed certain laws. Europeans believed that psychological disorders were caused by demons as a punishment for the sins of the possessed. During this time tests were conducted to test for possession but almost always resulted in the death of the accused. After contracting an eye disorder from looking at the sun too much for his study of after images, Gustav Fechner resigned from the fields of science. However, after his recovery, he began studying the mind and it's relation to the body. Fechner is known as one of the founders of modern experimental psychology and his clearest contribution is how psychology could become a quantitative science as the mind was vulnerable to measurement and mathematical treatment. For example, Fechner constructed the "Golden Section Hypothesis" and asked multiple observers to choose the "best" and "worst" rectangle. Results showed that the most appealing or "best" rectangle to the eye were the ones with a ratio between 3:5 and 5:8. This became known as the "Golden Section". John B. Watson believed studying the mind's consciousness in humans and animals is unscientific. He explained that only oneself can understand one's consciousness and impossible for others to know. He instead believed only observable behavior can be studied scientifically. 1924 AD Psychoanalytic Created by Sigmund Freud in the late 1880's, psychoanalysis focused on the unconscious mind and the reasons behind certain kinds of mental ailments. Freud cited repressed urges and memories as the cause of irregular behavior and ailments and helped his patients through examinations of their past and traumatic events in their lives. In this, he developed what his patients would go on to term the "talking cure". His theories by themselves exaggerated the influence of unconscious thoughts and urges, but he laid the foundation of psychoanalysis for later psychologists such as Erik Erikson. Erikson would use this basis to go on to formulate his own theories about individual social growth, which described different stages in life, like those in a videogame, that an individual, depending on their "proficiency" in that stage, would emerge from, feeling either a sense of mastery (ego strength) or inadequacy. This idea remains relevant even in modern day psychology. Modern day psychology focuses on sociocultural, biological, and cognitive levels of analysis. This split into three main branches came about in the 1950's and 60's with the advent of more advanced technology and ways to measure and observe both the mind and brain. -Biological psychology, or the study of the brain and how it affects our mind focuses on how different brain conditions and kinds of damage affect our thought processes as well as individual differences between different people's mental processes from a genetic point of view. -Cognitive psychology, based on observing mental processes such as memory and emotion, came about around the 1950's. This shift from measuring observable behavior to observing the mind and mental processes is known as the "cognitive revolution" and was a result of technological advances giving us a means of observing the brain. These psychologists work in tandem with neuroscientists and anthropologists to determine how our memories and emotions affect our behavior -Sociocultural psychology focuses on how our sociological environment affects our thoughts and behaviors. Although there have been collaborations between anthropologists and psychologists for years, the study of culture has largely stayed within the boundaries of anthropology. However, in recent years, psychologists have begun studying the nature of individuals within a social context as opposed to individual people. John B. Watson *-not to scale
<urn:uuid:bebd4013-524c-460f-903c-9aefb7629afd>
CC-MAIN-2016-50
https://prezi.com/j1oczuycokqy/the-history-of-psychology/
s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698541321.31/warc/CC-MAIN-20161202170901-00342-ip-10-31-129-80.ec2.internal.warc.gz
en
0.961423
1,375
3.8125
4
This page in a nutshell: Let’s treat each other with kindness, understanding, and unselfishness. WikiLove can make Wikipedia a better place. Wikilove is a word used to refer to a spirit of understanding and kindness toward other users on wiki. Because many different kinds of people edit here in Simple English Wikipedia, it is easy for conflicts to happen often, and for a discussion to sink into incivility (rudeness), and rudeness to flamewars. So, in fighting with each other, we can forget what the real goal of Simple English Wikipedia is: to share what we know in simple English to make an encyclopedia anybody can change. Wikipedia is not a place to argue. It is a place to learn and share new things. If we remember this love of knowledge and continue to be civil, we will be able to make a healthier and happier Simple English Wikipedia! Follow our rules–they make it easier to work with one another. Try to be nice to each other. Remember that we are all people and we like to feel appreciated! When you make a comment, try saying something nice first. For example, you can say "thank you" or smile at them. Don't lose your temper, and don't say "me first". Try to keep your cool. Getting angry and lashing back will only hurt yourself, and, in the end, what people think of Wikipedia. Forgive other people. Keep the good memories. Forget the bad ones.
<urn:uuid:e79f8fcf-454d-4be3-b793-2aaff8a90cf5>
CC-MAIN-2016-50
https://simple.wikipedia.org/wiki/Wikipedia:WikiLove
s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698541321.31/warc/CC-MAIN-20161202170901-00342-ip-10-31-129-80.ec2.internal.warc.gz
en
0.92275
311
2.71875
3
Location: National Soil Erosion Research2012 Annual Report 1a. Objectives (from AD-416): Principal focus of the CEAP Watershed Studies is to evaluate the effects and benefits of conservation practices at the watershed scale, in support of policy decisions and program implementation. 1b. Approach (from AD-416): The effects of conservation activities on water and soil quality will be assessed at the watershed scale using models such as ARS' Soil and Water Assessment Tool, in combination with ARS long-term watershed data sets, expertise, and resources. 3. Progress Report: The money received from Natural Resources Conservation Service (NRCS) for the St. Joseph River Conservation Effects Assessment Project (CEAP) was used to service the St. Joseph River watershed CEAP Watershed Assessment Study, through a Specific Cooperative Agreement with DeKalb County Soil and Water Conservation District (SWCD), which is responsible for servicing field equipment and processing samples in the watershed. DeKalb SWCD also ensured data collection of cropping system attribute data from the watershed. Funds from NRCS were also used to assist with calibration and validation of the Agricultural Policy/Environmental eXtender (APEX) model for the St. Joseph River watershed, to allow assessment of conservation practices at the field scale within the watershed.
<urn:uuid:20a17fe9-da48-44d6-8fff-6ae8c594d6f6>
CC-MAIN-2016-50
https://www.ars.usda.gov/research/project/?accnNo=421151&fy=2012
s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698541321.31/warc/CC-MAIN-20161202170901-00342-ip-10-31-129-80.ec2.internal.warc.gz
en
0.934573
274
2.625
3
Many people think of maggots as the creepy, crawly larva of a fly. But maggots go through several of their own distinct life cycle stages before maturing into a fly. In the life cycle of all arthropods, there are several molts, or instars, in which the insect will shed its exoskeleton to evolve into a new form. In the life cycle of the maggot, there are three instars before it reaches its pre-pupa phase. Before the beginning of the first instar, the mature fly will lay more than 300 eggs in a carcass, each one holding a fly larva. Once the eggs are laid, it takes only a single day for them to hatch. During the first instar, the larva migrates into the body (carcass) and feeds on the body fluids. The second instar occurs, only a day after the beginning of the first instar. During the second instar, the maggots huddle together in large masses and work to feed on the tender rotting flesh. The second instar takes only one day to complete. During this period, the maggots still move and feed en masse, but they are also growing exponentially in size. The third instar takes only two days to complete. During the pre-pupa phase, the maggot will migrate away from the corpse to find a more suitable (safe) place to pupate. After four days, the pupa phase begins, and the maggot transforms within its puparium into an adult fly. This stage takes up to 10 days. Once the fly emerges, they are ready to lay eggs of their own within two days.
<urn:uuid:e470e0ee-dd87-420b-a13c-53af866487fc>
CC-MAIN-2016-50
https://www.cuteness.com/article/life-cycle-maggots
s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698541321.31/warc/CC-MAIN-20161202170901-00342-ip-10-31-129-80.ec2.internal.warc.gz
en
0.950584
353
3.90625
4
You now start to convert the requirements you identified into acceptance tests. At this stage you’re actually using the Gherkin language, which consists of fewer than a dozen key words and a few pieces of punctuation. The best (and easiest to read) documentation for the Gherkin language is at the Behat project, so for more detail, please refer to http://docs.behat.org/en/gherkin/index.html. Given, When, and Then are Gherkin keywords, but for the purposes of writing the acceptance test, they are used to describe the context in which an event will happen, and then describe the expected outcome. Now, at this point, it’s necessary to understand the connection between the Cucumber feature and the other two aspects of Cucumber—the command itself, and the step definitions. Step definitions are methods written in a high-level programming language which set up the scenarios, and perform various tests to determine whether the resulting state matches the intended state. At the time of writing, there are at least eight supported languages for writing step definitions. We’re going to be using Ruby. The steps in the scenario map directly to the step definitions. Here’s a trivial example: Given an adding machine When I input the numbers "1" "2" "3" Then the answer should be "6" The matching step definitions might say: Given /^an adding machine$/ do machine = AddingMachine.new end When /^I input the numbers "([^"]*)" "([^"]*)" "([^"]*)"$/ do |arg1, arg2, arg3| answer = machine.add(arg1, ...
<urn:uuid:84b5b774-5e60-4fc8-b2ac-54d2cfbf1b53>
CC-MAIN-2016-50
https://www.safaribooksonline.com/library/view/test-driven-infrastructure-with/9781449309718/ch06s03.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698541321.31/warc/CC-MAIN-20161202170901-00342-ip-10-31-129-80.ec2.internal.warc.gz
en
0.870398
361
3.046875
3
As the key intermediary between the classrooms, the individual school and the education system as a whole, effective school leadership are essential to improve the efficiency and equity of schooling. Within each individual school, leadership can contribute to improve student learning by shaping the conditions and climate in which teaching and learning occur. Beyond the school borders, school leaders can connect and adapt schools to changing external environments. And at the school-systems interface, school leadership provides a bridge between internal school improvements processes and externally initiated reform. But school leadership does not operate in static educational environments. As countries are seeking to adapt their education systems to the needs of contemporary society, the expectations for schools and school leaders have changed profoundly. Many countries have made schools more autonomous in their decision making while centralising standards and accountability requirements and demanding that schools adopt new research-based approaches to teaching and learning. In line with these changes, the roles and responsibilities of school leader have expanded and intensified. Given the increased autonomy and accountability of schools, leadership at the school level is more important than ever. Get your grade or your money back using our Essay Writing Service! The challenge facing education in the 21st century is to make changes to achieve higher levels of learning for all children (Ramsey, 2002). At the time of the present study, public schools are undergoing scrutiny and criticism of such magnitude; it is difficult to predict the future of public education. An increased emphasis on accountability and school improvement, including the utilization of ICT among teachers to enhance student achievement, is at the forefront of all education debates. Research has shown that appropriate use of ICTs to catalyze a paradigm shift in both content and pedagogy that is the heart of education reform in the 19th century. ICT-supported education to enhance the success of the ongoing knowledge and skills that will give the students continuous learning if properly designed and implimented. Leveraging ICT in an appropriate manner enables new methods of teaching and learning, especially for students in exploring exciting ways of problem solving in the context of education. New ways of teaching and learning is supported by constructivist learning theory and paradgm shift from prinbcipal and teacher-centered pedagogy of memorization and rote-learning to focus on student centered. (Thijs, A., et al. ,2010) Furthermore, the utilization ICT learning procedures and tools in the educational process, obviously leads to revolutionary changes in the roles of both teachers and learners as the emergence of new teaching and learning environment and finally for new virtual training that ultimately aims to facilitate the tools and resources to support communication and interaction as well as disseminate teaching materials via the web will in order to encourage promote enhance collaboration and cooperation among participants in the learning process. On the other hand, many author such as Salinas (2003) agree on the fact that the integration of ICT in education produce a set of transformations which transform all the elements that take part in the educational process such as organizations, students, curriculum, and notably, they affect teachers' role, function and behavior . Nevertherless, investments in information and communication technology (ICT) for enhancing formal and non-formal education systems are essential for schools improvement (Tong & Trinidad, 2005). According to Betz (2000), information technology will only be successfully implemented in schools if the principal actively supports it, learns as well, provides adequate professional development and supports for his/her staff in the process of change. In fact, school principals have a main responsibility for implementing and integrating ICT in schools (Schiller, 2003). Anderson and Dexter (2005) carried out a study on technology leadership behaviors of school principals and found that "although technology infrastructure is important, technology leadership is even more necessary for effective utilization of technology in schools" (p.49). Moreover, various other research studies support the literature that leadership is an important key factor in effective use of technology in education (Schiller, 2003; Anderson & Dexter, 2005). Therefore, it can be said that technology leadership behaviors are important to successful implementation of educational technology plans (Chang, Chin & Hsu, 2008). As such, the principal has consistently been recognized as a significant factor in school effectiveness of change process. The complexity of the job of a school administrator has demanded highly developed skills to carry out the many functions of the school operation. Exceptional leaders have always been rare, but many believe that they can be made as well as born (Abrashoff, 2002). At the same time, there is limited understanding about the ways that school leaders make a difference particularly in new technology integration. Principal leadership, along with the effectiveness of classroom teachers, has a great impact on student progress. The relationship of an administrator's leadership style and its affect on teachers and student achievement has become critically important in continued research. Role of Principal Always on Time Marked to Standard Several definition of a principal, the first six do not mention their role as the leader of a school. Though, there are key phrases that most certainly apply to the position; highest in rank, authority, most considerable, and important. The definitions go on to mention that which pertains to a prince or being princely, along with a leader or one who takes the lead. What may be considered ironic is that "acts independently" is included as well. Because the role of a principal is extremely fluid, being shaped by a diverse set of concerns and values, conceptualizations are problematic (Brown, 2005). Evidence should be visible in a school of what a principal believes as a principal and what the school stands for (NAESP, 2001). The test of good leadership is the achievement of change in a system. Change can be difficult; however, it is necessary to abandon the past to pursue the future (Bell-Hobbs, 2008). Examining the ways in which principals lead their schools through change, and its effect on teachers' attitude towards technology as well as student achievement and is critical to future educational research. Traditionally, the principal resembled the middle manager suggested in William Whyte's 1950's classic The Organization Man as an overseer of buses, boilers and books. Today, in a rapidly changing era of standards-based reform and accountability, a different conception has emerged one closer to the model suggested by Jim Collins' (2001) Good to Great, which draws lessons from contemporary corporate life to suggest leadership that focuses with great clarity on what is essential, what needs to be done and how to get it done. This shift brings with it dramatic changes in what public education needs from principals. They can no longer function simply as building managers, tasked with adhering to district rules, carrying out regulations and avoiding mistakes. They have to be (or become) leaders of learning who can develop a team delivering effective instruction. Wallace's work since 2000 suggests that this entails five key responsibilities: Shaping a vision of academic success for all students, one based on high standards Creating a climate hospitable to education in order that safety, a cooperative spirit and other foundations of fruitful interaction prevail. Cultivating leadership in others so that teachers and other adults assume their part in realizing the school vision. Improving instruction to enable teachers to teach at their best and students to learn at their utmost. Managing people, data and processes to foster school improvement. In addition, schools are no different. Principals who get high marks from teachers for creating a strong climate for instruction in their schools also receive higher marks than other principals for spurring leadership in the faculty, according to the research from the University of Minnesota and University of Toronto. (Bradley Portin, Paul Schneider, Michael DeArmond and Lauren Gundlach., 2003) In fact if test scores are any indication, the more willing principals are to spread leadership around, the better for the students. One of the most striking findings of the universities of Minnesota and Toronto report is that effective leadership from all sources such as principals, influential teachers, staff teams and others - is associated with better student performance on math and reading tests. The relationship is strong albeit indirect: Good leadership, the study suggests, improves both teacher motivation and work settings. This, in turn, can fortify classroom instruction. "Compared with lower-achieving schools, higher-achieving schools provided all stakeholders with greater influence on decisions," the researchers write.( Karen Seashore Louis, Kenneth Leithwood, Kyla L. Wahlstrom, Stephen E. Anderson et al. ,2010) The better results are due to collaboration between two parties. "The higher performance of these schools might be explained as a consequence of the greater access they have to collective knowledge and wisdom embedded within their communities," the study concludes.( Karen Seashore Louis, Kenneth Leithwood, Kyla L. Wahlstrom, Stephen E. Anderson et al. ,2010) Principals may be relieved to find out, moreover, that their authority does not wane as others' waxes. Clearly, school leadership is not a zero-sum game. "Principals and district leaders have the most influence on decisions in all schools; however, they do not lose influence as others gain influence," Karen Seashore Louis, Kenneth Leithwood, Kyla L. Wahlstrom, Stephen E. Anderson et al., 2010). Indeed, although higher-performing schools awarded greater influence to most stakeholders. Little changed in these schools' overall hierarchical structure. (Kenneth Leithwood, Karen Seashore Louis, Stephen Anderson, Kyla Wahlstrom,2004) .University of Washington research on leadership in urban school systems emphasizes the need for a leadership team role led by the principal and including assistant principals and teacher leaders and shared responsibility for student progress, a responsibility "reflected in a set of agreements as well as unspoken norms among school staff."( Knapp et al., 2003) This Essay is a Student's Work This essay has been submitted by a student. This is not an example of the work written by our professional essay writers.Examples of our work School leaders are in charge of connecting and adapting schools to their surrounding environments. According to Hargreaves et al. (2008), school leaders will increasingly need to lead "out there" beyond the school, as well as within it, in order to influence the environment that influences their own work with students. In small towns and rural areas, school leaders have traditionally stood among the most important leaders in their communities. While it may be argued that urbanisation, immigration and school size have weakened school-community ties, these and other pressures on family structures have at the same time contributed to make the community responsibilities of school leaders even more important today. Principal play an important role in strengthening the ties between school personnel and the communities that surround them (Fullan, 2001). Principals of the most successful schools in challenging circumstances are typically highly engaged with and trusted by the schools' parents and wider community (Hargreaves et al., 2008). They also try to improve achievement and well-being for children by becoming more involved with other partners such as local businesses, sports clubs, faith-based groups and community organisations and by integrating the work of the school with welfare, law enforcement and other agencies (PricewaterhouseCoopers, 2007). Moreover, in rapidly changing societies, the goals and objectives to be achieved by schools and the ways to get there are not always clear and static. In increasingly globalised and knowledge-based economies, schools must lay the foundations for lifelong learning while at the same time dealing with new challenges such as changing demographic patterns, increased immigration, changing labour markets, new technologies and rapidly developing fields of knowledge. Consequently of these developments, schools are under enormous pressure to change and it is the role of Principal to deal effectively with the processes of change. The roles and responsibilities of school leadership in each of these scenarios would vary widely. School leaders must master the new forms of pedagogy themselves and they must learn how to monitor and improve their teachers' new practice. Moreover, instead of serving as head teacher primus inter pares, they have to become leaders of learning responsible for building communities of professional practice. Methods of evaluation and professional development take more sophisticated application and principals must embed them into the fabric of the work day. While practices vary across countries, it is clear that school leadership is generally expected to play a more active role in instructional leadership: monitoring and evaluating teacher performance, conducting and arranging for mentoring and coaching, planning teacher professional development and orchestrating teamwork and cooperative instruction. Countries also note a shift in emphasis from more administration- and management-type functions to leadership functions of providing academic vision, strategic planning, developing deeper layers of leadership and building a culture and community of learning. As a result of the increasing central mandates and programmes, changing student populations and growing knowledge about effective practice, schools are under enormous pressure to change and it is the school leader's role to manage the processes of change. The transformation of policy into results occurs most critically through the adaptation of practice in the school and classroom. This process is complex and must be led intentionally and skilfully. In some cases, resistance to change needs to be overcome with carefully structured support, relevant information, a clear sense of purpose and goals and opportunities to learn requisite skills (Hall and Hord, 2005). While some changes are purely technical and can be readily accomplished, more significant change calls for deeper adjustment of values and beliefs about the work (Heifetz, 1998). Sophisticated skills of "adaptive" (Heifetz and Linsky, 2002) and "transformational" leadership (Burns, 1978; Leithwood, 1992; Leithwood and Jantzi, 1990; Leithwood and Jantzi, 2000) are needed here. Brief Understanding of Leadership The term "leader" has been included in the English language since about 1300 A.D., while the term leadership was introduced about 1800 A .D . (Stogdill, 1974, p . 7) . Historically speaking, the leadership position in past years was occupied by the person exhibiting most prowess, strength or power. Today, the leadership position seems to be dependent on the group that person leads and exerts some authority over. The leader maintains his position as long as group needs and/or goals are met. Yura (1976) indicated that regardless of their purpose, needs or goals, all groups have a basic commonality: they rely on leadership. A review of the literature revealed that earlier studies were directed at defining the ingredients of leadership. Despite those efforts, it appears that much remains unknown. At this point in time, it has been recognized that there is no clear cut agreement on the definitions of leadership styles or behaviour. This lack of consensus has led to much confusion on the topic. Amid all this, most authorities agree leadership styles can be learned and there is no one best style of leadership. Stogdill and Coons concentrated on two aspects of leader behavior : (1) What does an individual do while he operates as a leader, and (2) How does he go about what he does? As a working definition they stated, "Leadership, as tentatively defined, is the behavior of an individual when he is directing the activities of a group toward a shared goal" (Stogdill and Coons, 1957, pp . 6-7) . In 1977, Hersey and Blanchard defined leadership as "the process of influencing the activities of an individual or group in efforts. Toward goal achievement in a given situation" (Hersey and Blanchard, 1977, p. 84). From these definitions it follows that the leadership process is a function of the leader, followers and other situational variables. Barnard (1969) agreed that leadership is an involvement of the three variables listed above. In his discussion on "The Nature of Leadership," he stated that, "Whatever leadership is, I shall now make the much over simplified statement that it depends on three things: (1) the individual, (2) the followers, and (3) the conditions". Behavioral leadership theory focuses on what the leader does. It is different from personal trait theory because behavior can be observed. The observable behavior is not dependent upon either individual characteristics or the situation (Moloney, 1979, p. 23). Barnard (1969) defined leadership .as "the quality of the behavior of individuals whereby they guide people or their activities in organized effort"(p. 83). Researchers and writers have amassed a large body of literature in defining leadership. The results of the leadership definitional process have been plagued with uncertainties. This phenomenon Halpin (1958) cited in his attempt to define leadership . In his review of the literature, he stated : Leadership has been defined in numerous ways . The definition proposed here derives its value primarily from the relation to the body of theory being developed . In some respects it is more comprehensive than other more usual definitions ; in others it is more restricted . To lead is to engage in an act that initiates a structure-in-interaction or part of the process of solving problems . Halpin (1958) Stogdill (1974) devoted a chapter in his book to the definition of leadership . He, like Halpin, recognized the complexities of defining leadership . He was explicit in stating that : There are almost as many different definitions of leadership as there are persons who have attempted to define the concept. Nevertheless, there is sufficient similarity between definitions to permit a rough scheme of classification As a result of the research and theory about leadership behavior that was developed after 1945, Gerth and Mills (1953) stated : To understand leadership attention must be paid to : (1) the traits and motives of the leader as a man, (2) images that selected publics hold of him and their motives for following him, (3) the features of the role that he plays as a leader, (4) the institutional context in which he and his followers may be involved . (p . 405) Furthermore, leadership can be described by reference to two core functions. One function is providing direction; the other is exercising influence. Whatever else leaders do, they provide direction and exercise influence. This does not imply oversimplification. Each of these two leadership functions can be carried out in different ways, and the various modes of practice linked to the functions distinguish many models of leadership. In carrying out these two functions, leaders act in environments marked variously by stability and change. These conditions interact in complementary relationships. While stability is often associated with resistance and maintenance of the status quo, it is in fact difficult for leaders and other educators to leap forward from a wobbly foundation. To be more precise, it is stability and improvement that have this symbiotic relationship. Leaping forward from a wobbly foundation may well produce change, but not change of the sort that most of us value falling flat on your face is the image that comes to mind. Wobbly foundations and unwise leaping help to explain why the blizzard of changes adopted by our schools over the past half century have had little effect on the success of our students. School reform efforts have been most successful in those schools that have needed them least Elmore (1995). These have been schools with well-established processes and capacities in place, providing foundations on which to build in contrast to those schools, the ones most often of concern to reformers, short on essential infrastructure. In understanding these concept in a clarification of leadership means leadership is all about organizational improvement; more specifically, it is about establishing agreed-upon and worthwhile directions for the organization in question, and doing whatever it takes to prod and support people to move in those directions. Our general definition of leadership highlights these points: it is about direction and influence. Stability is the goal of what is often called management. Improvement is the goal of leadership. There are as many definitions of leadership as there are theorists. Theorists no longer explain leadership in terms of the individual or the group. They believe that the characteristics of the individual and the demands of the situation interact in such a manner as to permit one, or perhaps a few, persons to rise to leadership status. Principal Leadership Style Various researchers have tried to interpret school leadership in different manner. Peretomode (1991) stated the importance of Leadership in school for accomplishment of school programmes, objectives and attainment of educational goals. Cheng (1994) proposed that leadership in educational institutions compose of five major dimensions, namely: structural leadership, human leadership, political leadership, cultural leadership and educational leadership. These five dimensions describe the role and functions of school leader. However the functions of principal put a variety of demands and challenges for the principal Mestry and Grobler (2004). In an attempt to explain the requirements of a competent principal Cranston (2002) explained the skills and capacities which principals are expected to possess. Principals' competencies can be measured from various dimensions; from the perceptions of students, teachers, parents, communities and their employers. For instance, Scotti Jr. and William (1997) agreed that teachers' perceptions of their principals' leadership is one of the many variables, which affect a school's productivity. Teachers' perception of principals' leadership style and behaviour is also positively related to teachers' morale Hunter-Boykin and Evans (1995). Luo (2004) further contended that perceptions about principals as leaders by their teachers indicate an important dimension to evaluate the leaders capacities. According to him, understanding how teachers perceive their principals leadership capacities has a great significance and providing evidence for improvement of school leadership. Research has also demonstrated that teacher' perceptions of their principals' capabilities style and their working conditions will determine the organizational climate and culture of the school. Such perceptions will also impact on the performance of the school. Research on leadership in non-school contexts is frequently driven by theory referred to by one of our colleagues as adjectival leadership models.â€- A recent review of such theory identified, for example, 21 leadership approaches that have been objects of considerable theoretical and empirical development. (Yammarino, Dionne, Chun, & Dansereau, 2005). Seventeen have been especially attractive, and some of them have informed research in school contexts.( Leithwood & Duke ,1999). Here are several best example of leadership style: Contingent leadership. Encompassing research on leadership styles, leader problem solving, and reflective leadership, this two-dimensional conception of leadership explains differences in leaders'effectiveness by reference to a task or relationship style and to the situations in which leaders find themselves. To be most effective, according to this model, leaders must match their styles to their settings. Participative leadership. Addressing attention to leadership in groups, shared leadership (Pearce & Conger, 2003) and teacher leadership, (York-Barr & Duke, 2004). This model is concerned with how leaders involve others in organizational decisions. Research informed by the model has investigated autocratic, consultative, and collaborative sharing styles. Transformational and charismatic leadership. This model focuses on ways in which leaders exercise influence over their colleagues and on the nature of leader-follower relations. Both forms of leadership emphasize communicating a compelling vision, conveying high performance expectations, projecting self confidence, modeling appropriate roles, expressing confidence in followers'ability to achieve goals, and emphasizing collective purpose. (Leithwood & Jantzi, 2006). Nevertheless, leadership research also has been informed by models developed specifically for use in school- and district-level settings. Of these, the instructional leadership model is perhaps the most well known. It bears some resemblance to more general, task-oriented leadership theories. (Dorfman & House, 2004). The instructional leadership concept implies a focus on classroom practice. Often, however, specific leadership practices required to establish and maintain that focus are poorly defined. The main underlying assumption is that instruction will improve if leaders provide detailed feedback to teachers, including suggestions for change. It follows that leaders must have the time, the knowledge, and the consultative skills needed to provide teachers in all the relevant grade levels and subject areas with valid, useful advice about their instructional practices. While these assumptions have an attractive ring to them, they rest on shaky ground, at best; the evidence to date suggests that few principals have made the time and demonstrated the ability to provide high quality instructional feedback to teachers. (Nelson & Sassi ,2005). Importantly, the few well-developed models of instructional leadership posit a set of responsibilities for principals that go well beyond observing and intervening in classrooms' responsibilities touching on vision, organizational culture, and the like. (Andrews & Soder (1987), Duke (1987), and Hallinger ,2003). In addition, studies of school and principals leadership are replete with other adjectives purporting to capture something uniquely important about the object of inquiry such as learning leadership,( Reeves (2006). constructivist leadership, (Lambert et al. ,1995). and change leadership.( Wagner et al. 2006). Nonetheless, Boykin and Evans (1995) found that majority of the principals were rated as ineffective by their teachers. This reflects that there is a big discrepancy between what the principals' are and how they are perceived by the teachers. And in Hong Kong, the images of the principal in the mind of pre-service primary teachers were found to be negative. Lee, Walker and Bodycott, (2000). A study by Luo and Najjar (2007), investigated Chinese principal leadership capacities as perceived by master teachers. Unlike in many developed countries where studies on principals' competencies are available in multitude, such studies are still at its low in Malaysia. Keeping in mind the importance of role of the principal as a leader within the secondary school system, it is imperative to examine the leadership style in facilitaing change such as integrating ICT within school context. This is particularly so because of the fact that schools in this country serve for the large section of national students. Most studies in this country have focused on leadership qualities, rather than leadership style. The study therefore intends to fill this gap by investigating the perception of teachers on the leadership style of their principals in terms of facilitating change in implementing ICT utilization among teachers within school setting. Leadership Change Facilitator Style Previous research on leaders has explored traits, such as height, race, and gender. The work of Fiedler (1978) suggested that leaders' style was dependent upon contingencies; meaning that different styles are needed for different styles. Blake and Mouton (1964) wrote that how a leader leads was in two dimensions; one in task and one in relationships. It was thought that the most effective leaders had high levels in both task and people skills. The level of maturity of the followers was thought to be reflective of the leaders' success by Hersey and Blanchard (1988). Nearly all of the research on leaders and leadership models was built upon business and industry contexts. Educational organizations, namely schools, have much less to draw upon for research on leaders. What is lacking even more is the examination of leaders within the change processes. Research is rich in the areas of leadership and leaders. Debates are not difficult to find on the topics of effective leadership; what makes it, who has it, and how does one do it. An essential component to effective leadership in today's schools is the facilitation of change. How leaders implement changes can lead to either the success or the failure of any innovation. Change continues as a theme in all educational discussions. In 1992, Fullan and Miles wrote about getting reform right in schools. "We can say flatly that reform will not be achieved until these seven orientations have been incorporated into the thinking and reflected in the actions of those involved in change efforts" (p. 744). Those seven orientations are listed in Figure 2. One of the objectives of this research, like a few preceding it, is to identify the specific kinds of combinations of behaviors that principals can and should exhibit on a day-to-day basis to bring about increases in student achievement through implenting ICT utilization among teachers. Figure 1. Fullan and Miles' orientations of change. If the role of the principal is critical, then it should be possible to identify principals' actions that directly relate to increasing the academic performance of students on standardized testing. An understanding that has been developed through the work of Hall, Hord, and Griffin (1980) is the principle that not all principals are the same. "Principals view their role and priorities differently and operationally define their roles differently in terms of what they actually do each day" (Hall, Ruthoford, Hord, & Huling, 1984) All leaders have a style. That has been established in research on industrial organizational leadership, change process, and educational administration. What has not been established is that there is not an operational definition of style. Furthermore, there is not a distinction drawn between leader behavior and leader style. The terms, and more troubling, the concepts have been used interchangeably. In most studies, followers were asked to identify individual behaviors of leaders, not the leaders' behaviors in total. In 1978, Thomas conducted a study on 60 schools, looking at the role of school principals in managing diverse educational programs. As a result of this study, she identified three patterns of principal behavior, and identified them as: Director, Administrator, and Facilitator. Director principals maintained an "active interest in all aspects of the school from curriculum and teacher to budgeting and scheduling." Administrator principals were said to make decisions "in areas affecting the school as a whole," this, leaving teachers with a great deal of autonomy. Facilitator principals thought of themselves as colleagues of the faculty, and "perceived their primary role to be supporting and assisting teachers in their work." The conclusions of this study were that schools under the leadership of a Director or Facilitator principal had a greater degree of implementation in programs than did schools lead by n Administrator principal. Hall and Hord (2006, 2011) identified varying approaches to change in leadership called the Change Facilitator Styles. These are defined through the leaders' use of behaviors that the researchers call "interventions." Each style is a composite of a particular set of behaviors and views about ones role in leading change efforts and different perspectives about how to approach change and the processes that are connected to it. Principals with different styles send signals to their staff with their actions and words. The effects of these varying Change Facilitator Styles are observable in the degree and amount of success that followers (typically a staff or staff members) have in implementing and using any one change. In past studies various researchers have found that teachers have more or less success in implementing innovations depending on the Change Facilitator Style of their principal. Change Facilitator Style emerged out of change process research over the last twenty years (Hall, et al., 1984; Hall & Hord, 2006, 2011). A distinct behavioral composite is represented in each style on how principals lead implementation efforts in schools. The original research identified and defined three Change Facilitator Styles: Initiators, Managers, and Responders as identify in figure 3. Clear and strongly held vision Listens and then decides Achievement and student success Aggressively seeks resources Creative use of policies Back their teachers There is a personal side few ideas about future directions Concerned about others perceptions Lets others take the lead Delay in making decisions Decisions are made one at a time Struggles in making big decisions Most influenced by last person Down play size/significance of innovations Sees teachers as strong Controlling budgets and resources are primary considerations Rules, procedures, and policies frame their view Try to attend all meetings and events Changes are cushioned at the beginning Once start, implement quickly Implementation are to acceptable level Figure 2 : Leadership Change Failitator Style (Hall, et al., 1984) Each study in the United States (Hall & George, 1999) and other countries, including Australia (Schiller, 2003), Belgium (Vandenberghe, 1988), and Taiwan (Shieh, 1996) established the existence of the three Change Facilitator Styles and their direct relationship with teacher success in implementing new curriculum and instructional programs. In an earlier study (Hall, et al, 1982) involving teachers' Stages of Concern, Levels of Use, and Innovation Configurations (Hall & Hord, 2006, 2011), a theme developed that data had dramatically different results in different schools with what was thought to be the same implementation processes. After further examination and extensive dialogue, it was realized that it was differences in how the principals led the change efforts that appeared to explain the differences in extent of implementation success. From that, emerged the concept of Change Facilitator Styles. As obvious as it may seem, principals are not all the same. Each one views his or her role differently, has different priorities, and has a personal definition of their role. Style and behaviors must be differentiated for the purpose of Change Facilitator Style as well as this study. Style represents the overall tone and pattern of a leader's approach. "Behaviors are a leader's individual, moment to moment actions, such as talking to a teacher in the corridor, chairing a staff meeting, writing a memo, and talking on the telephone. The overall accumulated pattern and tone of these behaviors form a person's style" (Hall & Hord, 2006, p. 211-212). Over the next two decades, a number of studies were conducted related to principal Change Facilitator Style and the extent of teacher implementation success. More recently, there has been a study that explored relationships between Change Facilitator Style and student test scores. The study was based upon, and extended upon an initial study, Examining Relationships between Urban Principal Leadership and Student Learning. The original study was conducted with site-based principals of schools and the 2006 state exams from the Hartford Public School system. (Hall, et al., 2008) The studies of principals revealed three distinct Change Facilitator Styles: Initiator, Manager, and Responder. These represent three contrasting approaches to the processes of change. The Role of ICT Tools In Education ICT stands for Information and Communication Technologies and is defined ""Diverse set of Technological tools and resources used to communicate, and to create, disseminate, store and manage information (Blurton, C, 1999). ICT has become a very importance part in education and management processes. A role in facilitating the success of ICT is to massiveness absorption of knowledge as it can provide extraordinary opportunities for developing countries to improve their education systems, especially in teaching and learning foreign language. In recent years they has been wave of interest in how computers and the Internet can be fully utilize to improve the efficiency and effectiveness of education at all levels in both formal and informal settings. However, ICT is more than just technology, although the old technology such as radio, telephone and television have been given less attention in this time, therefore, they can still be trusted with a longer history as a learning tools. (Cuba, L, 1986). For example, radio and television been used for classroom learning for more than forty years. However printed material remains the cheapest and most convenient method of all and therefore it is undoubtedly being the most dominant delivery mechanism in both developed and developing countries (Potashnik, M. and J. Capper, 2002). The use of computers and the Internet is still within minimum exposure in the developing countries due to expensive cost of access and limited infrastructure. Normally, different technology is rather combining with other traditional method instead of single mechanism delivery. The United Kingdom Open University (UKOU). Established in 1969 as the first open and distance learning education institution in the world, still depends on the print-based materials equipped with radio, television. Recently introduce its online program (open.ac.uk, 2011) similarly; Indira Gandhi National Open University in India combines the use of print, audio and video recordings, radio and television, and audio technology conference in its learning and teaching process. (Ignou.ac.in, 2011) Dictionary - View detailed dictionary In addition, ICT can help those with special educational needs to obtain greater autonomy. This can help children that remain hospitalized to be connected with their classroom (Maidenhead, 2004). Plus, it encourages less performing students improve, by allowing them to perform exercises at their own pace, and increase the self-esteem of those who are not used with formal learning. Improving the quality of education and training is a critical issue, especially in the development of education. ICTs can play the role to improve the quality of education in several ways, such as, by increasing student motivation and engagement. ICTs can also facilitate the improvement of basic skills, and improve teachers training. (Haddad, Wadi D. and Jurich, Sonia, 2002), "ICT is a tool of transformation, when used appropriately, can promote the shift to learner-centered environment" Motivating to learn. Facilitating the acquisition of basic skills Enhancing teacher training. According Cabero (2001), "the learning process could be increased by the interaction and reception of information based on the flexibilization time space accounted by utilization and integration of ICT tools. The possibility of the suggested changes in the model of communication as well as teaching and learning methods used by teachers, which providing way for new scenarios that support individual or collaborative learning. Gisbert (2003) reminds us that although telecommunications network was a powerful and transmitting information and the computer becomes an important tool for teaching and learning centers, they educational potential may be minimal if not accompanied by other pedagogical action. New teaching and learning system which is set around the telematics network, offers a new perspective on traditional concepts of space and time and requires a redefinition of traditional pedagogical models, as the role of teachers and students on the one side, and reconfiguration and management of educational organizations of the most remarkable changes that must be overcome in the 21st century education field. Many authors and institutions like the European Network, ICC (2002) coincide in emphasizing the communicative possibilities and great training that ICT contain. According to them, a new pedagogical model of the organization and should exploited by teachers so that they can offer the kind, cooperative self-help and life-long learning of future citizenship. Dictionary - View detailed dictionary The utilization ICT learning procedures and tools in the educational process, obviously leads to revolutionary changes in the roles of both teachers and learners as the emergence of new teaching and learning environment and finally for new virtual training that ultimately aims to facilitate the tools and resources to support communication and interaction as well as disseminate teaching materials via the web will in order to encourage promote enhance collaboration and cooperation among participants in the learning process. On the other hand, many author such as Salinas (2003) agree on the fact that the integration of ICT in education produce a set of transformations which transform all the elements that take part in the educational process such as organizations, students, curriculum, and notably, they affect teachers' role, function and behavior . Types of ICT Tools Information and Communication Technology consists of various tools and systems that can be exploited by capable and creative teachers to improve teaching and learning situations. Lim and Tay (2003) classification of ICT tools as : 1) Informative tools - Internet, Network Virtual Drive, Intranet systems, Homepage, etc. 2) resignation devices - CD-ROM, etc. 3) Constructive tools - MS Word, PowerPoint, FrontPage, Adobe Photoshop, Lego Mindstorm, etc. 4) Communicative tools - e-mail, SMS, etc. 5) Collaborative tools - discussion boards, etc. forum The five categories of ICT tools listed above are discussed in more detail under the following headings. Informative tools are applications that provide large amounts of information in various formats such as text, graphics, sound, or video. Informative tools can be regarded as a passive repository of information (Chen & Hsu, 1999). Examples include tools and information resources of the existing multimedia encyclopedia of the Internet. The Internet is a huge electronic database, and researchers consider the Internet as the most significant ICT tools in e-learning environments. Pew Internet & American Life Project did a survey in 2002 showed them three out of five children under the age of 18 and more than 78% of children between the ages of 12 and 17 lines. Key findings from this study are found in Levin & Arafeh (2002) shows that students rely on the internet to help them do their homework. In short, students consider the Internet as a virtual textbook, reference library, virtual tutor, learn to study shortcuts and virtual study groups (McNeely, 2005). Listen Situating tools  is a system that lay the students in the environment where it involve a context and the occurrence of a situation. Examples of such systems include simulation, virtual reality and multi-user domain. Situating tools software tools such as CD-ROM. CD-ROM offers hypermedia application which gives better opportunities for teachers to enhance learning environment. Hypermedia application covers more than one of the following media such as text, audio, graphic images (still images), animation and video clips. Hypermedia applications are well integrated in the learning environment to enhance student autonomy and thinking (Cheung & Lim, 2000). A multimedia presentation topic will help students to conceptualize the ideas of the real world by integrating the theories in the practical application of real-world situations. It is to increase students' ability to use the conceptual tools of the discipline in authentic practice. (Phillips, 2004) Multimedia able to put the amazing array of resources on student and lecturer resources on teaching and student 1control. "Multimedia learning active learning to create a more dynamic, interactive, collaborative, and satisfying" (Supyian, 1996) constructive tool is a general purpose tool that can be used to manipulate information, construct their own knowledge or visualize students understanding. Construction tools such as Microsoft Word or Powerpoint has a strong impact in the educational environment and is widely used in most organizations in the form of memos, reports, letters, presentations, record routine information, giving businesses the most (McMahon, M. 1997.) In learning a second language , Microsoft Word manage to help students to make correct sentences and texts as well as modern word processors include spell checking and dictionaries and grammar checkers. Therefore, teachers can use the software to promote writing in the curriculum. PowerPoint is a presentation graphics program packaged as part of Microsoft Office for Windows or Macintosh. Although generally used for developing business presentations , it is also very advantageous in the context of increase creativity among students. While word processing program is the most common computer applications used, as a spreadsheet like Excel is just as important in teaching and learning of English. Students will be exposed to learning design and statistical data using the Excel program that can be automated through the formula. Communicative tools are systems that allow easy communication between teachers and students or between students outside the physical barrier classroom. (Chen, D., Hsu, JJF, and Hung, D. 2000) It is including e-mail, electronic bulletin boards, chat, teleconference and electronic whiteboard. Synchronous communicative tools such as chat or video conference enable real-time communication while using the tools of communicative asynchronous (eg e-mail and electronic whiteboard) is a system in which exchange of messages between people are not 'live' but somehow delayed. Communicative tool most appropriate for activities requiring more time to think before responding. Utilization of electronic mail is increasing day by day. E-mail is the most commonly used on the Internet. It is easy to use as it is a primarily text-based system and simple communication tool for teachers and students that allows students to dominate class beyond physical barrier. (Chen, D., Hsu, JJF, and Hung, D., 2000.) Collaboration tools of ICT is currently the focus of much interest and emerging as development of new tools that make online collaborative projects draw a realistic option for a distributed group work. Internet can be used for many collaborative activities such as meetings, discussions are taking place, working in the document, information dissemination, and other tasks. Interactive electronic whiteboard is not just used as tools for meeting and development, but recently became the most popular tool among teachers. whiteboard is an electronic device that interfaces with the computer where the computer image is displayed on the board that can be manipulated interactively (Weiser and Jay, 1996). This tool is increasingly popular with teachers, when used in conjunction with a computer and a video projector that produces interactive learning community. Instead of having to crowd around one or two computers, interactive whiteboard not only display the materials, but also to respond to human interaction with computer commands and orders on a touch screen. In addition, these technologies provide impulsive information sharing, constructing knowledge and stimulate personal growth. (Mona, 2004). Other collaborative tools, such as E-mail messaging, Wireless Application Protocol (WAP) and General Packet Radio Services (GPRS) embedded in micro-browser equipped mobile phones or GPRS enabled handheld computers are other ICT tools that that can link students in different geographic locations exceeding the boundaries of class. In addition, the development of mobile phone and PDA allows learners to exchange information in a short time simultaneously and asynchronously, and provides flexibility for one-one, one-to-many and many-to-many communication, especially for the online discussion forum. (Lim & Lee, 2002). In conclusion, a "learning is no longer seen as a solitary activity, but is described as taking place through social interaction with peers, mentors and experts" (Kings, 1998). Benefits of Utilizing ICT In Education It important to analyze the facts occur within the new technology and the learning process should be viewed in the perspective of learning English. Most of the simple basic use of ICT devices in the educational environment that leads to the following benefits: Increase in pupils' motivation, enthusiasm and confidence Positive association with attainment Learning possibilities expanded via collaboration, interaction and communication in the target language Potential for differentiation according to individual pupil need The utilization of ICT tools in education was increasingly felt in recent times and benefits the students with the appropriate exploitation. It facilitates latest information for user by at a click of a mouse. According to Ofsted (2002), ICT tools can perform four essential functions as follows: The speed and automatic functions of ICT allow a teacher to demostrate , explore and clarify aspects of the teaching method which enable the students learn more effectively; The capacity and coverage of ICT to assist the teachers and pupils easily access to for historical event or current formation the temporary nature of information stored, processed and presented using ICT enable simpler method as documents could be change and correced by editing software provided in the programs. the interactive way in which information is stored, processed and presented can enable teachers and students to explore the model, to communicate effectively with others and present information effectively to different audiences.. Research has shown that appropriate use of ICTs to catalyze a paradigm shift in both content and pedagogy that is the heart of education reform in the 19th century. ICT-supported education to enhance the success of the ongoing knowledge and skills that will give the students continuous learning if properly designed and implimented. Leveraging ICT in an appropriate manner enables new methods of teaching and learning, especially for students in exploring exciting ways of problem solving in the context of education. New ways of teaching and learning is supported by constructivist learning theory and paradgm shift from teacher-centered pedagogy of memorization and rote-learning to focus on student centered. (See Table 1 for a comparison of a traditional pedagogy and an emerging pedagogy enabled by ICTs)
<urn:uuid:0f62df69-46df-4569-b08f-469f0a533bda>
CC-MAIN-2016-50
https://www.ukessays.com/essays/education/the-introduction-to-the-role-of-principal-education-essay.php
s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698541321.31/warc/CC-MAIN-20161202170901-00342-ip-10-31-129-80.ec2.internal.warc.gz
en
0.952986
9,860
3.328125
3
Note: A source of great confusion, hostility, and fear, evolutionary theory gets people all riled up. Some see evolution as a challenge to their faith in God; others find comfort in evolution as an alternative to traditional religion. But there is one facet in which these warring parties generally agree—evolution implies atheism. This interpretation is powerful, unambiguous, and one that many in our contemporary society, both liberal and conservative, have learned to embrace. But despite our culture's efforts to equate evolution with atheism, it simply isn’t true. BioLogos is working to correct this cultural bias, and today we will examine one scientist who completely confounds this common but erroneous assumption about evolution and Christian belief. In my previous essay, I discussed “Darwin’s finches” and how surprisingly little Charles Darwin himself had to say about them. In fact, it was actually the British ornithologist David Lack (1910-1973) who conducted the critical research that immortalized the finches in biology textbooks and popular lore. In 1973, the eminent German zoologist Ernst Mayr wrote: Already well known among professional ornithologists, his work on the Galapagos finches gave David Lack world fame… There is no modern textbook of zoology, evolution or ecology which does not include an account of his work.1 Decades have passed since Mayr wrote these words, and David Lack’s name has largely faded from public discourse. On the other hand, the Galapagos finches have become one of the most recognized symbols of evolution in the world today. Does it really matter whether Lack or Darwin gets credit for describing the evolution of these remarkable birds? Insofar as evolutionary theory contrasted with religious belief, it makes a big difference. In a culture that is eager to equate evolution with atheism, it should come as no surprise that these birds are only known as “Darwin’s finches”. Darwin’s personal struggles and ultimate rejection of Christianity are well documented, and people are eager to link his loss of faith to his evolutionary theory. David Lack, on the other hand, began his scientific career as an agnostic, but shortly after publishing his famous book on the evolution of Galápagos finches, he converted to Christianity!2 A Christian at the forefront of evolutionary biology Lack’s Christian conversion did not mark the end of his scientific achievements, either. In fact, he continued as a prolific researcher until just weeks before he died. Among his many achievements, he was Director of the Edward Grey Institute of Field Ornithology (1945-1973), Fellow of the Royal Society, and President of both the International Ornithological Congress (1962-66) and the British Ecological Society (1964-65). His fellow scientists held him in great esteem: He was described as one of the most outstanding among world ornithologists; he was certainly this, but he was also one of the world’s leading evolutionists. All the time one saw developing his use of birds as material for the study of wider, deeper, biological problems.3 Clearly David Lack was an outstanding scientist, and his commitment to Christianity did not tarnish, hinder, or undermine his research on evolution. But we might also ask, what was Lack like as a Christian? Did he keep his faith hidden from view, afraid that it might compromise his reputation as a scientist? Ernst Mayr, who interacted with David Lack professionally and personally for nearly 40 years, had this to say: I have known only few people with such deep moral convictions as David Lack. He applied very high standards to his own work and was not inclined to condone shoddiness, superficiality and lack of sincerity in others. This did not always go well with those who preferred to compromise in favour of temporary expediency. David had been raised in an environment in which great stress was layed on moral principles and this attitude was later reinforced by his Christian faith. This explains his extraordinary unselfishness and modesty, and his great devotion to his family, to his students, to his friends, and to all the things that he lived for. The equanimity, indeed serenity, with which he faced death after his terminal cancer had been diagnosed is further evidence of the strength which his faith gave him.4 Like Asa Gray5 before him, and Francis Collins6 after, David Lack was an sincere, devout Christian, as well as a leading scientist who employed evolutionary theory to make brilliant discoveries about the natural world. Though Lack did not see any conflict between his scientific and Christian beliefs, he was sympathetic to the concerns of his fellow Christians. Therefore, ten years after publishing his masterpiece on Darwin’s Finches, Lack wrote another book entitled Evolutionary Theory and Christian Belief: The Unresolved Conflict. Originally published in 1957, this book deals with the very same science and faith questions that Christians struggle with today— topics like randomness and chance, death in nature, miracles, and evolutionary ethics. While it would be unreasonable to expect anyone to completely resolve these matters, Lack offered numerous insights both as a devout Christian and one of the world’s leading biologists. Let’s take a brief look at how Lack addressed some of these questions. Blind Chance or Divine Plan? Evolutionary theory does not invoke supernatural forces in explaining the history of life on Earth; instead, it relies on naturally-occurring processes to account for the vast diversity of life. Additionally, it explains animal behavior largely in terms of survival and reproduction, without appealing to any higher purpose of life. Taken together, does this imply that God is absent, and that our lives are ultimately meaningless? David Lack responded, Behind the criticism that Darwinism means that evolution is either random or rigidly determined lies the fear that evolution proceeds blindly, and not in accordance with a divine plan. This is another problem that really lies outside the terms of reference of biology. It is true that biologists have inferred that, because evolution occurs by natural selection, there is no divine plan; but they are being as illogical as those theologians whom they rightly criticize for inferring that, because there is a divine plan, evolution cannot be the result of natural selection.7 When rendering judgment on the ultimate meaning of life, biologists are speaking from their person beliefs, not from scientific authority. Moreover, Lack pointed out that many science enthusiasts have employed the concept of “randomness” in ambiguous and misleading ways: Mutations are random in relation to the needs of the animal, but natural selection is not. Selection, as the word implies, is the reverse of chance.8 In support of his view, Lack pointed out that convergent evolution has produced uncanny resemblances between distantly-related species across the world, notably among marsupials in Australia. Different evolutionary trajectories can lead to very similar results.9 Death in Nature After addressing concerns about the seeming “randomness” of evolution, Lack turned to another great concern, the role of death in natural selection: Various writers–some Christian and others agnostic–have been troubled about natural selection not only because it seems too random, but also because it is so unpleasant.10 Genetic mutations are generally harmful, and for evolution by natural selection to produce new forms of life, an awful lot of organisms must die. For many Christians, it is inconceivable that a loving and merciful God would allow death on such a vast scale. But Lack also pointed out that rejecting evolutionary theory doesn’t actually get rid of the problem of death. Regardless of what we think about evolution, the brute fact of mass extinction remains. Fossils of innumerable animals, plants, and microorganisms clearly demonstrate that the vast majority of species that have ever lived are now dead. It may be quite troubling for us to observe that our planet is a giant graveyard of natural history, but rejecting evolution will not change this fact. Some Christians conclude that death could not have been part of the divine plan; instead, it must be the work of the devil, or the result of human sin. But this interpretation contains an implicit assumption that death is always evil. Is this really true? David Lack offered two intriguing insights: - For a population to maintain a stable size, all births must be balanced by a corresponding number of deaths. A world in which no animals die is a world in which no animals are born. That means no reproduction, no courtship, and by implication, no singing birds—much to the dismay of ornithologists and people in love! - Some people, taking cues from Isaiah 11:6-7, suppose that in a perfect world, animals only eat plants. But in fact, plants themselves depend on the bacterial decay of dead organisms. If animals didn't die, then essential nutrients would disappear from the ground, and plants could not continue to grow. Eventually, there would be nothing left for animals to eat, and all life would cease.11 Many Christians are uncomfortable with evolutionary theory because it denies a miraculous, supernatural origin of life. They fear that if those miracles are denied, it might lead people to reject the possibility of miracles altogether, including the central feature of the Christian faith—the resurrection of Jesus from the dead. As a devout Christian, David Lack certainly affirmed the fundamental tenets of the gospel. But at the same time, he explained to his readers that invoking miracles to account for unusual features of the natural world is not particularly helpful when trying to deepen our understanding of God’s great multitude of creatures: [The biologist's] research depends on repeated observations. It need not, as popularly supposed, consist solely, or even mainly of measurements and experiments, but unless events are repeated, they cannot be assessed by science. Hence truly unique events come outside the domain of science, though biologists are not usually convinced when told they must, therefore, leave such problems as miracles to others. For one of the chief ways in which research has advanced is through the discovery of apparent exceptions to the known rules, and if further study shows the exceptions to be replicable, new regularities are revealed from which modified rules can be propounded. This method has been so successful that the biologist tends to doubt whether there are any types of irregularity, or seeming irregularity, that will not yield to it.12 But just because a scientist cannot repeat a particular event doesn’t mean it didn’t happen. Both natural history and human history contain unique events that only happened once. As we peer into the past, the difficulty of discerning fact from fiction inspires us to further investigate the mysteries that surround us. David Lack’s book Evolutionary Theory and Christian Belief was quite insightful, but his enduring achievements took place in evolutionary biology, a place where many Christians are afraid to tread. While it is significant that he himself found no contradiction between his faith and his science, perhaps the greatest testament to the compatibility between Christian faith and evolution is the life he led as a believer in both. As we saw in Ernst Mayr’s candid praise, Lack reflected the light of Christ through both his personal and his professional relationships. Today, many voices in our culture still insist that evolution is incompatible with a sincere faith in Jesus, but a careful look at history demonstrates otherwise. In the future, perhaps more people of faith will have confidence to study biology knowing that one of the most iconic symbols of evolution—the Galapagos finches—owe their fame in large part to a devout Christian named David Lack.
<urn:uuid:f67a7a9c-af1f-4c77-bca9-784e2c121072>
CC-MAIN-2016-50
http://biologos.org/blogs/archive/david-lack-evolutionary-biologist-and-devout-christian
s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698541839.36/warc/CC-MAIN-20161202170901-00150-ip-10-31-129-80.ec2.internal.warc.gz
en
0.970141
2,369
2.65625
3
You have a unique medical history. Therefore, it is essential to talk with your doctor or healthcare provider about your personal risk factors and/or experience with ear infections. By talking openly and regularly with your healthcare provider, you can take an active role in your care. General Tips for Gathering Information Here are some tips that will make it easier for you to talk to your healthcare provider: Specific Questions to Ask Your Healthcare Provider About Ear Infections About Your Risk of Developing Ear Infections About Treatment Options About Lifestyle Changes Acute otitis media (AOM). EBSCO DynaMed Plus website. Available at: http://www.dynamed.com/topics/dmp~AN~T116345/Acute-otitis-media-AOM. Updated May 17, 2016. Accessed October 4, 2016. Ear infections in children. National Institute on Deafness and Other Communication Disorders website. Available at: http://www.nidcd.nih.gov/health/hearing/pages/earinfections.aspx. Published March 2013. Accessed September 21, 2015. Middle ear infections. American Academy of Pediatrics Healthy Children website. Available at: http://www.healthychildren.org/English/health-issues/conditions/ear-nose-throat/Pages/Middle-Ear-Infections.aspx. Updated August 20, 2015. Accessed September 21, 2015. Last reviewed September 2015 by Michael Woods, MD Please be aware that this information is provided to supplement the care provided by your physician. It is neither intended nor implied to be a substitute for professional medical advice. CALL YOUR HEALTHCARE PROVIDER IMMEDIATELY IF YOU THINK YOU MAY HAVE A MEDICAL EMERGENCY. Always seek the advice of your physician or other qualified health provider prior to starting any new treatment or with any questions you may have regarding a medical condition. Copyright © 2012 EBSCO Publishing All rights reserved. What can we help you find?close ×
<urn:uuid:2f9b6e98-138d-4b17-a0d1-3e7a67dfd37b>
CC-MAIN-2016-50
http://mbhs.org/health-library?ArticleId=19359
s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698541839.36/warc/CC-MAIN-20161202170901-00150-ip-10-31-129-80.ec2.internal.warc.gz
en
0.886206
422
2.765625
3
Essay, Research Paper The Use of Deception in William Shakespeare’s Twelfth Night Deception is a key theme William Shakespeare’s Twelfth Night. The characters must use deception to obtain good things, escape bad situations, or to play cruel hilarious ticks on other people. One example of deception is when Viola clothes herself in men’s clothing in order to obtain a job under the Duke of Illyria, Orsino. During another scene Sir Andrew, Fabian, Maria, and Sir Toby Belch trick Malvolio into making a fool of himself. A third example of deception is when Feste the jester disguises himself as Sir Topas and taunts Malvolio. Each of these scenes and characters helps depict the different uses of deception. The first example of deception is Viola’s decision to dress as a man. She must do this in order to survive. Viola is a young woman who narrowly escaped a shipwreck along with her twin brother, Sebastian. Unfortunately, the twins where separated during the shipwreck and each believes the other perished. Viola has no way of survival other than to dress as a man and serve Orsino. Viola says: "For such disguise as haply shall become the form of my intent. I’ll serve this duke…for I can sing…That will help allow me very worthy his service". (Shakespeare, 54-59) While serving as a messenger between the Orsino and his love Olivia, Olivia happens to fall in love with Viola instead of the Duke. Later a captain finds Viola’s brother, Sebastian, on the shore of Illyria. They both go into town and Olivia sees Sebastian. Sebastian and Viola happen to be wearing the exact same clothes, thus making it difficult to tell the two apart. Olivia mistakenly proposes to Sebastian. Despite the fact that Sebastian has never met Olivia, he accepts the marriage. After the Duke discovers Viola’s gender, he falls in love with her and they wed. A second example of deception is the cruel trick that Sir Andrew, Fabian, Maria, and Sir Toby Belch play on Malvolio. Maria, Olivia’s "lady-in-waiting", writes a note in her mistress’s handwriting saying that Olivia falls for men who wear high yellow stockings and smile all the time. Sir Toby says: "He shall think by the letter that thou wilt drop that they come from my niece, and that she’s in love with him." (Shakespeare, 157) The conspirators then place the note in Maria’s garden, a place where Malvolio surely will find it. They do this to Malvolio because he had ruined their rambunctious fun the night before. Malvolio finds the letter and reads it: "…cast thy humble slough, and appear fresh. Be opposite with kinsman, surly with servants…Remember who commended thy yellow stockings, and wished to see thee ever cross-gartered ". (Shakespeare, 139-145) Later, Malvolio confronts Olivia and she thinks he is insane. Malvolio gets put in a cage and becomes isolated for his behavior. Far a third and final example of deception, Feste disguises himself as Sir Topas to further annoy Malvolio. Maria asks Feste to dress up in a gown and hat and put on a long beard, to disguise himself as Sir Topas. She asks him to do this because she wants to see Malvolio further tormented. Feste, while disguised, asks Malvolio what he thinks of Pythagoras. When Malvolio responds, from his prison, that he disagrees with the beliefs of Pythagoras, Feste says that he will remain caged forever. Malvolio then desperetly begs Feste to free him and tries to convince him that he is sane. Malvolio says: "…there was never man so notoriously abused! I am well in my wits, fool, as though art." (Shakespeare, 87-88). Feste eventually has pity for the mistreated servant and sets him free. Deception pervades William Shakespeare’s Twelfth Night. One example involves Viola dressing up as a man. A second example involves the conspiracy of Maria, Sir Toby, Sir Andrew, and Fabian to make a fool of Olivia’s servant Malvolio. The third example involves tormenting Malvolio purely for enjoyment. Deception is used in the play to work into good situations, avoid difficult situations, and to play abusive yet humorous jokes on other characters in the play. Shakespeare, William. Twelfth Night. New York: Harcourt, Brace & World, Inc., 1968.
<urn:uuid:2854a1c8-06ad-4d71-ba27-a6f4874e1cd5>
CC-MAIN-2016-50
http://mirznanii.com/a/70460/the-uses-of-deception-in-twelth-night
s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698541839.36/warc/CC-MAIN-20161202170901-00150-ip-10-31-129-80.ec2.internal.warc.gz
en
0.940746
1,000
3.484375
3
One of the most common communicable diseases that can be encountered in hospital admissions is Shingles or called as herpes zoster. Herpes zoster is a contagious type of disease that involves the peripheral nervous system as caused by reactivation of the varicella-zoster virus. This disease is characterized with blisters which as very painful. The immunosuppressed or had previous exposure with chicken pox can experience this. The virus’ action can travel to spinal and cranial sensory ganglia and the posterior gray matter of the spinal cord. Upon contact with the blisters, the person who did not experience chicken pox will develop chicken pox instead of shingles. People can be diagnosed with shingles after careful physical assessment. Laboratory studies may also reveal an increase in white blood cells as the body tries to combat this viral infection. - Neurologic pain described as painful, tingling or burning sensation - Body weakness - Burning sensation of the blister sites - Febrile episodes - Skin vesicles along the peripheral sensory nerves, the blisters actually outline the nerves - Classic location: along the trunk, thorax or face - Lack of appetite - Joint pains - Previous exposure with chicken pox - Commonly appeared on age group of older than 60 years old - Those whose immune system is weakened due to underlying illness or medications - Admission to an isolation room is advised. - Pharmacologic intervention involves the following: analgesics, corticosteroids, acetic acid or white petrolatum and antiviral agents such as Acyclovir, famciclovir and valacyclovir. - Cold baths and lotions must be started. - Pregnant women should not go near the person with shingles as it may be infectious. - Give antihistamines in order to prevent itching. - Monitor the vital signs. - Observe proper measures in containing the infectious agent. - Observe proper personal protective gear as well as routine hand washing before and after the procedures. - Encourage complete compliance with the medications being ordered. - Teach the patient as well as the folks about proper disposal of materials which are in contact with the patient. - Listen to the ideas and perception of the patient about being isolated, the prognosis as well as recovery from the present status. - Teach the patient about the possibility of having post herpetic neuralgia or pain on the sites of shingles which may be present for months or even years. Proper medications can be given in order to lessen the pain.
<urn:uuid:51c1d37c-4f95-40b4-a1dc-e17e72e5eab2>
CC-MAIN-2016-50
http://rnspeak.com/medical-and-surgical-nursing/shingles-nursing-management/
s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698541839.36/warc/CC-MAIN-20161202170901-00150-ip-10-31-129-80.ec2.internal.warc.gz
en
0.92382
535
3.78125
4
In the United States, the election process occurs in two steps: 1. Nomination, in which the field of candidates is narrowed 2. General election, the regularly scheduled election where voters make the final choice of officeholder Chapter 7, Section Nonpartisan Primary Candidates are not identified by party labels Runoff Primary If a required majority is not met, the two people with the most votes run again Closed Primary Only declared party members can vote. Types of Direct Primaries Open Primary Any qualified voter can take part. Blanket Primary Qualified voters can vote for any candidate, regardless of party Chapter 7, Section Candidates must gather a required number of voters’ signatures to get on the ballot by means of petition. Minor party and independent candidates are usually required by State law to be nominated by petition. Petition is often used at the local level to nominate for school posts and municipal offices. Chapter 7, Section Chapter 7, Section Congress has the power to set the time, place, and manner of congressional and presidential elections. Congress has chosen the first Tuesday after the first Monday in November of every even-numbered year for congressional elections, with the presidential election being held the same day every fourth year. States determine the details of the election of thousands of State and local officials. Most States provide for absentee voting, for voters who are unable to get to their regular polling places on election day. Some States within the last few years have started to allow voting a few days before election day to increase voter participation. Elections are primarily regulated by State law, but there are some overreaching federal regulations. Precincts A precinct is a voting district. Precincts are the smallest geographic units used to carry out elections. A precinct election board supervises the voting process in each precinct. Polling Places A polling place is where the voters who live in a precinct go to vote. It is located in or near each precinct. Polling places are supposed to be located conveniently for voters. Chapter 7, Section Voting was initially done orally. It was considered “manly” to speak out your vote without fear of reprisal. Paper ballots began to be used in the mid-1800s. At first, people provided their own ballots. Then, political machines began to take advantage of the flexibility of the process to intimidate, buy, or manufacture votes. In the late 1800s, ballot reforms cleaned up ballot fraud by supplying standardized, accurate ballots and mandating that voting be secret. Chapter 7, Section History of the Ballot Chapter 7, Section Electronic vote counting has been in use since the 1960s. Punch-card ballots are often used to cast votes. Vote-by-mail elections have come into use in recent years. Online voting is a trend that may be encountered in the near future. Chapter 7, Section Small contributors Wealthy supporters Nonparty groups such as PACs Temporary fund-raising organizations Candidates Government subsidies Private and Public Sources of Campaign Money Early campaign regulations were created in 1907, but feebly enforced. The Federal Election Campaign Act (FECA) of 1971 was passed to replaced the former, ineffective legislation. The FECA Amendments of 1974 were passed in response to the Watergate scandal. Buckley v. Valeo invalidated some of the measures in the FECA Amendments of Most significantly, it also stipulated that several of the limits that the 1974 amendments placed on spending only apply to candidates who accept campaign money from the government, not those who raise money independently. The FECA Amendments of 1976 were passed in response to Buckley v. Valeo. Chapter 7, Section The Federal Election Commission (FEC) enforces: the timely disclosure of campaign finance information limits on campaign contributions limits on campaign expenditures provisions for public funding of presidential campaigns Chapter 7, Section “More loophole than law…” —Lyndon Johnson Soft money —money given to State and local party organizations for “party-building activities” that is filtered to presidential or congressional campaigns. $500 million was given to campaigns in this way in Independent campaign spending —a person unrelated and unconnected to a candidate or party can spend as much money as they want to benefit or work against candidates. Issue ads —take a stand on certain issues in order to criticize or support a certain candidate without actually mentioning that person’s name. Chapter 7, Section
<urn:uuid:1f9416c1-87a1-4c0c-bdf9-54ffa2787d7d>
CC-MAIN-2016-50
http://slideplayer.com/slide/3594560/
s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698541839.36/warc/CC-MAIN-20161202170901-00150-ip-10-31-129-80.ec2.internal.warc.gz
en
0.96405
885
3.78125
4
the Arts & Humanities with C&IT University of Durham, Courtyard Building, 10am-4.30pm This workshop principally focused on the evaluation of digital resources and on collaborative online learning here for links to workshop resources and handouts | Concepts and Databases: managing your bibliography with EndNote 11th May 1999, Oxford University Computing Services This workshop provided an overview of bibliographical issues: what is bibliography; how you should create a bibliography; sources of bibliographic information; conventions and elements of style; an overview of on-line bibliographic databases and an overview of bibliographic packages. Special attention in hands-on sessions was given to EndNote 3.0.1 | Click here to view resources from the workshop | Art - Digital Culture in the 21st Century. A Colloquium. Oxford University Union. 21st April, 1999. CTI Textual Studies were co-organisers of this successful event. Some proceedings of the event can be located at http://info.ox.ac.uk/ctitext/beyond/. 'Show and Tell' event, Oxford University Computing Services, 18 March 1999 - Frances Condron, CTI Textual Studies' Project Officer, presented a 'Web Poster' about the Assisting Small group Teaching through Electronic Resources (ASTER) project at this event which showcased some of the developments at Oxford which are using new technology to enhance learning, teaching & scholarship. To view the poster, click Textual Studies have run the following workshops: Using information technology to teach literary studies Queen's University Belfast, 13 May 1998 - This workshop was organised by CTI Textual Studies as part of the CTI's Quality Learning with Technology series in Northern Ireland. The CTI Centre for Textual Studies held two half-day workshops on 13 May 1998, which looked at two different aspects of the use of technology in teaching literary studies. The workshops were as follows: Using the Internet in Teaching and Learning Literary Studies. This workshop included case study examples of the use of email and web in teaching literature, with an explanation and discussion of the technologies and their various merits. The hands-on session guided participants through the process of locating and evaluating web resources for use in teaching and Locating and Using Electronic Texts. This workshop examined the issues and practicalities of making use of electronic texts in teaching and learning. Presentations covered existing sources of electronic texts; evaluating their quality; an overview and demonstration of selected text analysis tools; and examples of using electronic texts in teaching and learning. - Textual Studies and the World Wide Web: Using the Internet to Enhance Teaching and Learning in Literature (English and Non-English), Theology, Philosophy, Classics, Film and Drama Studies University of Newcastle, 11 March 1998 - This workshop was organised by Netskills in association with CTI Textual This was a practical workshop for specialists in literature (English and Non-English), theology, philosophy, classics, film and drama studies who required an introduction to networked information retrieval and to creating their own material on the World Wide Web. The morning and afternoon consisted of a mixture of topics and practical exercises. Included was a session on resource retrieval using - Teaching European Literature and Culture with C&IT (Communication and Information Technologies): A One-Day Conference University of Oxford, 18 March 1998 - This one-day conference provided a forum for academics using C&IT in their teaching and research to share expertise with their colleagues. Formal papers were followed by a panel session with an opportunity for discussion and dialogue on topics such as What are the implications for the future of scholarship? Where does the digital medium fit into the study of literature and culture? Virtual environment - real learning? - Themes addressed included: - Teaching literature and culture with the World Wide Web; Digital environments in teaching; Technology and the study of the text; Electronic editions and - The timetable is still available. - Multimedia Shakespeare to Teach Performance and A Half-Day Workshop University of Oxford, 30 March 1998 - The Open University/BBC Shakespeare Multimedia Research Project is developing interactive educational tools about Shakespeare in performance. This half day workshop, run in conjunction with the CTI Centre for Textual Studies, introduced participants to the work of the Project and in particular demonstrated the pilot CD-ROM 'King Lear in Performance'. Participants will had the opportunity to see how new technologies can enable students to bring together text, image, and idea to present their own interpretation of Shakespeare's plays. Sessions included discussion of: mediating performance, packaging expert opinion, imaginative resolution of copyright problems, how Shakespeare can subsidize the arts, and a glimpse at future prospects. - The Impact of Communications and Information Technology on Learning and Teaching in the Humanities University of St Andrews, 6 November 1997 - This one-day workshop was intended for Scottish academic teaching and support staff working in the Humanities. It aimed to raise awareness of the impact IT can have on learning and teaching in the Humanities. The day consisted of presentations from the CTI Centre for Textual Studies (University of Oxford) including using electronic texts and the Internet for teaching; case studies from humanities staff at St Andrews; hands-on demonstration of resources including Chadwyck-Healey's Literature Online (LION) database; and group discussion on the practical implications of Dearing's recommendations concerning IT in learning and teaching. - Developing a Cheap and Cheerful Electronic Library: a Workshop for the Non-Technical (19 June 1997) - This workshop, run in association with the On-Demand Publishing in the Humanities Project, reported on progress in the "On Demand Publishing in the Humanities" project, funded by JISC as part of the eLib programme, and carried out by Liverpool John Moores University. The aim of the project is to create a "cheap and cheerful" WWW-based model for networking texts in an academic environment. - Open Workshop: Multimedia Resources (28 April 1997) - This Open Workshop gave participants a chance to explore and evaluate a range of multimedia resources (CD-ROM and networked) which may be appropriate for use as teaching resources. Subjects covered included English literature, film studies, and theology. - Computer-Assisted Film and Drama Studies, St Anne's College, Oxford (17 March 1997) - This one-day conference introduced the use of computers to the study and teaching of film and dramatic performance. It provided an overview and discussion of the applications, resources, and projects currently available together with a small exhibition of digital resources. The programme with hyperlinks is still available. - Using Internet Tools to Build a Virtual Classroom (21 February 1997) - This workshop, jointly with the JTAP-funded project, 'Virtual Seminars for Teaching Literature' discussed the use of tools and applications currently available which might usefully be employed in the classroom (ranging from email discussion lists through to MOO/MUD environments). There is a Web page associated with the day and also a selection of the proceedings - Strategies for Studying Textual Sources (7 February 1997) - This workshop, run in association with the CTI Centre for History, Archaeology & Art History, explored a variety of approaches to textual sources. Included was discussion and demonstrations of preparing and encoding electronic texts, managing text corpora, text searching and analysis. The workshop was aimed at both historians and literary scholars interested in manipulating texts for research and teaching purposes. - Open Workshop: Text Analysis Tools (28 - This was the first in a series of Open Workshop which are designed to provide academics with hands-on experience of a range of applications. There were opportunities to test applications using printed tutorials. Tools at this Open Worshop included WordCruncher for Windows, WordSmith Tools, MonoConc for Windows, concordancers for Macintosh, and demonstrations of OCR software for scanning texts. - Electronic Resources for the Humanities (2 Feb /22 Mar 1996) - This workshop, run in association with te Networked Resources in the Humanities project (funded by the British Library), introduced a range of electronic resources, networked and CD-ROM-based. Participants were also introduced to Project Electra, an electronic scholarly resource for women's writings in English from 1780-1830. - Creating World Wide Web Pages for Teaching (28 Mar 1996) - This workshop was designed for academics who had some experience of browsing the World Wide Web but who may not have considered its use for teaching. Discussion included: the appropriate and inappropriate uses of the Web in teaching, and the numerous ways in which the Web interface may be used (including the Isaac Rosenberg tutorial). The workshop also included a practical aspect introducing the digitization of non-textual materials such as images and audio and the guided creation of a demonstration page using HTML. - Computers and the Teaching of Theology (29 Mar 1996) - This workshop introduced the use of computers in the study and teaching of theology, providing an overview of the applications and resources currently available, ranging from full-text databases, biblical analysis software, to 'theology in action' on the Internet.
<urn:uuid:310eae64-d4b2-4347-b1ec-4f2236f042ed>
CC-MAIN-2016-50
http://users.ox.ac.uk/~ctitext2/service/workshop/index.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698541839.36/warc/CC-MAIN-20161202170901-00150-ip-10-31-129-80.ec2.internal.warc.gz
en
0.898532
2,021
2.546875
3
Surprising Spiral Structure Spotted by ALMA - New observations reveal the secrets of a dying star Astronomers using the Atacama Large Millimeter/submillimeter Array (ALMA) have discovered a totally unexpected spiral structure in the material around the old star R Sculptoris. This is the first time that such a structure, along with an outer spherical shell, has been found around a red giant star. It is also the first time that astronomers could get full three-dimensional information about such a spiral. The strange shape was probably created by a hidden companion star orbiting the red giant. This work is one of the first ALMA early science results to be published and it appears in the journal Nature this week. A team using the Atacama Large Millimeter/submillimeter Array (ALMA), the most powerful millimetre/submillimetre telescope in the world, has discovered a surprising spiral structure in the gas around the red giant star R Sculptoris. This means that there is probably a previously unseen companion star orbiting the star. The astronomers were also surprised to find that far more material than expected had been ejected by the red giant. “We’ve seen shells around this kind of star before, but this is the first time we’ve ever seen a spiral of material coming out from a star, together with a surrounding shell,” says the lead author on the paper presenting the results, Matthias Maercker (ESO and Argelander Institute for Astronomy, University of Bonn, Germany). Because they blow out large amounts of material, red giants like R Sculptoris are major contributors to the dust and gas that provide the bulk of the raw materials for the formation of future generations of stars, planetary systems and subsequently for life. Even in the Early Science phase, when the new observations were made, ALMA greatly outperformed other submillimetre observatories. Earlier observations had clearly shown a spherical shell around R Sculptoris, but neither the spiral structure nor a companion was found. "When we observed the star with ALMA, not even half its antennas were in place. It's really exciting to imagine what the full ALMA array will be able to do once it's completed in 2013," adds Wouter Vlemmings (Chalmers University of Technology, Sweden), a co-author of the study. Late in their lives, stars with masses up to eight times that of the Sun become red giants and lose a large amount of their mass in a dense stellar wind. During the red giant stage stars also periodically undergo thermal pulses. These are short-lived phases of explosive helium burning in a shell around the stellar core. A thermal pulse leads to material being blown off the surface of the star at a much higher rate, resulting in the formation of a large shell of dust and gas around the star. After the pulse the rate at which the star loses mass falls again to its normal value. Thermal pulses occur approximately every 10 000 to 50 000 years, and last only a few hundred years. The new observations of R Sculptoris show that it suffered a thermal pulse event about 1800 years ago that lasted for about 200 years. The companion star shaped the wind from R Sculptoris into a spiral structure. “By taking advantage of the power of ALMA to see fine details, we can understand much better what happens to the star before, during and after the thermal pulse, by studying how the shell and the spiral structure are shaped,” says Maercker. “We always expected ALMA to provide us with a new view of the Universe, but to be discovering unexpected new things already, with one of the first sets of observations is truly exciting.” In order to describe the observed structure around R Sculptoris, the team of astronomers has also performed computer simulations to follow the evolution of a binary system. These models fit the new ALMA observations very well. "It’s a real challenge to describe theoretically all the observed details coming from ALMA, but our computer models show that we really are on the right track. ALMA is giving us new insight into what's happening in these stars and what might happen to the Sun in a few billion years from now," says Shazrene Mohamed (Argelander Institute for Astronomy, Bonn, Germany and South African Astronomical Observatory), a co-author of the study. “In the near future, observations of stars like R Sculptoris with ALMA will help us to understand how the elements we are made up of reached places like the Earth. They also give us a hint of what our own star's far future might be like,” concludes Matthias Maercker. Source : European Southern Observatory
<urn:uuid:e6dd5f1c-d196-452c-832c-b67e13d63b81>
CC-MAIN-2016-50
http://www.asdnews.com/news-45468/Surprising_Spiral_Structure_Spotted_by_ALMA.htm
s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698541839.36/warc/CC-MAIN-20161202170901-00150-ip-10-31-129-80.ec2.internal.warc.gz
en
0.949615
980
3.1875
3
Gemini Planet Imager's first-light image of the light scattered by a disk of dust orbiting the young star HR4796A. This narrow ring is thought to be dust from asteroids or comets left behind by planet formation; some scientists have theorized that the sharp edge of the ring is defined by an unseen planet. The left image shows normal light, including both the dust ring and the residual light from the central star scattered by turbulence in the Earth's atmosphere. The right image shows only polarized light. Leftover starlight is unpolarized and hence removed from this image. The light from the back edge of the disk is strongly polarized as it scatters towards us. Lawrence Livermore National Laboratory
<urn:uuid:4443ae51-3b28-4cc7-a953-3488e20b1d09>
CC-MAIN-2016-50
http://www.astronomy.com/sitefiles/resources/image.aspx?item=%7BF18E54FC-A35A-41C9-ADE7-9BB07584FD79%7D
s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698541839.36/warc/CC-MAIN-20161202170901-00150-ip-10-31-129-80.ec2.internal.warc.gz
en
0.918242
145
3.53125
4
King Lear Act 3, Scene 2 The storm begins to roar in this scene, and Lear enters the stage with the Fool. He has no royal procession behind him anymore. Gone are the benefits of a stately official. Matching the storm's angry voice with his own, Lear calls on the higher powers to bring down full revenge against his two unappreciative daughters. In a softer voice, he asks the same higher powers to take note of his pitiful state. Kent enters, and asks who is there. The Fool replies: "Marry, here's grace and a cod-piece; that's a wise man and a fool." Act 3, Scene 2, lines 40-41 The underlying joke being that the Fool is the wise man, and Lear is the fool. Kent begs Lear to seek shelter and get out of the storm, but Lear refuses. He needs to cry out against his enemies. He says he is "more sinned against than sinning" (line 60). Finally, Lear feels badly that he has dragged the Fool with him into the horrible storm, so he leaves and takes refuge in a haven discovered by Kent. The Fool, now clearly the wise man, says: "Then shall the realm of Albion come to great confusion" (lines 91-92). Albion is another name for England, which the Fool notes will now itself suffer the turn of Fortune's wheel.
<urn:uuid:0e8e64b8-5949-4bd7-8f8a-7121887a391e>
CC-MAIN-2016-50
http://www.bookrags.com/notes/kl/part11.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698541839.36/warc/CC-MAIN-20161202170901-00150-ip-10-31-129-80.ec2.internal.warc.gz
en
0.974826
286
2.65625
3
Good dental hygiene is crucial for kids with braces. You've seen how excited your child has been anticipating that happy day in the future when the braces finally come off. To make sure his smile is everything he's been hoping for, it's important to keep his teeth clean and ensure that he develops good brushing habits while still wearing braces. If your child will be in braces for any length of time, you can prevent any long-term issues, such as cavities under the braces or discoloration around where the braces are bonded to the teeth, by establishing the following routines: Rinsing and Brushing Three to four times per day, have your child rinse his mouth with water to loosen food that might be caught in the braces, then brush thoroughly. You can learn more about brushing techniques in the Colgate Oral Care resources, but it is important to brush regularly with braces because food can easily be lodged in and behind the braces, creating pockets of potential decay. Each night before bed, have your child rinse with a fluoride rinse after brushing to help keep the teeth strong and healthy. Once per day, you should help your child floss. Flossing with braces can be difficult, but you can use many flossing options that will help ensure the gums stay healthy. Flossing helps to loosen food debris and plaque at and under the gum line that would otherwise harden into tartar. It can also help reach the nooks and crannies in the teeth that might be difficult to reach with a toothbrush. Every six months, take your child to his regular dentist for a cleaning and a checkup. His dentist can point out areas that need more attention, help make sure you're keeping his teeth healthy, and clean in and around the braces. Often, your dentist and dental hygienist can suggest helpful tools or ideas to keep your child's teeth healthy while the braces are on. Good Dental Hygiene Away from Home You can help your child keep his teeth clean when he is home, but when he is at school or traveling, there are other challenges. Send a travel toothbrush and toothpaste to school with your child so that he can get in the habit of stopping at the restroom each day after lunch to rinse and brush. When you travel, make sure to make time for good dental care on the road.
<urn:uuid:a1c356fa-aea8-4daa-9ec8-6e5b0c0f9c9f>
CC-MAIN-2016-50
http://www.colgate.com/en/us/oc/oral-health/cosmetic-dentistry/early-orthodontics/article/good-dental-hygiene-is-critical-for-kids-with-braces-0213
s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698541839.36/warc/CC-MAIN-20161202170901-00150-ip-10-31-129-80.ec2.internal.warc.gz
en
0.965655
480
2.921875
3
The resource has been added to your collection learning to form your alphabet and numbers This resource was reviewed using the Curriki Review rubric and received an overall Curriki Review System rating of 2, as of 2009-01-12. Sticky wicks come in a multicolored pacakge available at most teacher stores. They are thin pieces of yarn coated in wax that will stick to charts, pages of a book, and laminated items and can be removed easily. Teachers and students can use them to find words in a story and circle them or underline parts of a story, etc. What is sticky wick?
<urn:uuid:2c1d1578-7862-42ce-b3ed-d37ce0f0f6cd>
CC-MAIN-2016-50
http://www.curriki.org/oer/Alphabet-and-Numbers/
s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698541839.36/warc/CC-MAIN-20161202170901-00150-ip-10-31-129-80.ec2.internal.warc.gz
en
0.967041
132
3.109375
3
The Goose Oglethorpe County was named for a pond of at least fifty acres located near a small stream that connects to the Broad River of Georgia. Tradition claims that the pond was named for the wild geese that gathered there during the winter. Goose Pond has always retained occupants, but its heyday as a prosperous plantation and political community began to decline during the nineteenth century. The site was originally part of the 1773 "New Purchase" or "Ceded Lands" obtained from Creek and Cherokee Indians. It became part of Wilkes County when the legislature created that political division in 1777. An early reference to the area mentions a William Candler, Richmond County, who in October 1773 was to receive 100 acres starting at an abandoned field above the Goose Pond Creek if he brought settlers onto the land within a nine-month period. Records do not indicate that Candler occupied his holdings. A few immigrants settled in the Goose Pond area before 1780, including North Carolinian and Revolutionary patriot Elijah Clarke, as well as Holman and John Freeman of Virginia. But the American Revolution (1775-83) disrupted settlement in the Goose Pond, as for all of Wilkes County. A later arrival, George Mathews of Augusta County, Virginia, spearheaded settlement immediately after that conflict. Mathews gained familiarity with Wilkes County when serving in Georgia during the last years of the Revolution. He petitioned the General Assembly for numerous grants of land and in 1783 purchased a disputed title to an 800-acre tract south of the Broad River and west of the Long Creek, known as the Goose Pond. His homesite later became known as the Mattox farm. Mathews soil and economic opportunities on the Georgia frontier with friends and neighbors in Virginia. Many decided to relocate their families to the Goose Pond as part of the Virginia migration to the Wilkes frontier during the 1780s and 1790s. Among the Virginia emigrants were the Taliaferro, McGeehee, Harvie, Johnson, Marks, Meriwether, and Lewis families. They formed a cohesive community based upon Virginia cultural practices as well as marital, kinship, and business ties that extended to Petersburg, Georgia, and the Edgefield District of South Carolina. In 1793 the Goose Pond community became part of newly formed Oglethorpe County. Its residents actively influenced economic, religious, and political developments of the state. Most Virginia settlers established an economy of tobacco plantations and grain production. They helped introduce a wider practice of slavery to the Georgia frontier and intensified that practice as cotton production gained popularity during the early 1800s. So vigorously did residents cultivate these crops that by 1827 most of the original pond had been drained for agriculture. Planters and farmers of the Goose Pond community created an extensive market for their crops, making contacts in Augusta and in Charleston, South Carolina. Methodist revival during the early 1800s, when such itinerant preachers as Bishop Francis Asbury spoke in the community and stayed as a guest of the James Marks family. The first camp meeting in the area was in 1801 or 1802. Goose Pond also produced prominent state and national leaders. Some of the more influential residents or individuals closely associated with that community included Meriwether Lewis, who lived there as an adolescent, George Mathews, Benjamin Taliaferro, George R. Gilmer, General David Meriwether, William Wyatt Bibb, and William Harris Crawford. Media Gallery: Goose Pond
<urn:uuid:8825c43a-f17d-4449-aa3d-8d7a3bb525c9>
CC-MAIN-2016-50
http://www.georgiaencyclopedia.org/articles/history-archaeology/goose-pond
s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698541839.36/warc/CC-MAIN-20161202170901-00150-ip-10-31-129-80.ec2.internal.warc.gz
en
0.969547
704
3.578125
4
After more than one decade of researches on association rule mining, efficient and scalable techniques for the discovery of relevant association rules from large high-dimensional datasets are now available. Most initial studies have focused on the development of theoretical frameworks and efficient algorithms and data structures for association rule mining. However, many applications of association rules to data from different domains have shown that techniques for filtering irrelevant and useless association rules are required to simplify their interpretation by the end-user. Solutions proposed to address this problem can be classified in four main trends: constraint-based mining, interestingness measures, association rule structure analysis, and condensed representations. This chapter focuses on condensed representations that are characterized in the frequent closed itemset framework to expose their advantages and drawbacks. Association Rule Mining In order to improve the extraction efficiency, most algorithms for mining association rules operate on binary data represented in a transactional or binary format. This also enables the treatment of mixed data types, resulting from the integration of multiple data sources for example, with the same algorithm. The transactional and binary representations of the example dataset D, used as a support in the rest of the chapter, are shown in Table 1. In the transactional or enumeration format represented in Table 1(a) each object, called transaction or data line, contains a list of items. In the binary format represented in Table 1(b) each object2 is a bit vector and each bit indicates if the object contains the corresponding item or not.
<urn:uuid:fd0e4548-77ec-404a-b206-142ef7872da6>
CC-MAIN-2016-50
http://www.igi-global.com/chapter/frequent-closed-itemsets-based-condensed/8446
s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698541839.36/warc/CC-MAIN-20161202170901-00150-ip-10-31-129-80.ec2.internal.warc.gz
en
0.913886
296
2.609375
3
In March 2010, the Democratic Party of Japan government pushed through the Diet a law to exempt tuition for high school students irrespective of their families’ annual income. The law was based on the idea that society as a whole should support students, who will build the future Japan, whether they are from rich families or from poor families. Under the system, public high schools do not collect tuition from students and about ¥120,000 is provided annually to each student studying at private high schools — the equivalent of public high school tuition. But on Aug. 27, the Liberal Democratic Party and Komeito, which form the current ruling coalition, agreed to introduce an income cap for the tuition exemption and support system. From April 2014, high school students whose families annually earn ¥9.1 million or more will not be able to receive benefits from the system. This is a bad decision. It will destroy an education policy that has taken root after three years and is supported by parents and educators. It will also cause trouble for local governments and schools, forcing them to change by-laws and computer programs, and to collect tuition from parents who do not wish to cooperate. It is not far-fetched to say that the two parties just want to change the system because it was a legacy of the DPJ government. In the campaign for the July Upper House election, the LDP said that the current tuition exemption and support system constitutes pork barrel. But its thinking runs counter to the idea that society as a whole should support children irrespective of their families’ financial status. If the party thinks that the system violates the ideal of equality, it should raise the taxes paid by wealthy families rather than exclude their children from the tuition exemption and support system. The decision by the LDP and Komeito also runs counter to the relevant provisions of the International Covenant on Economic, Social and Cultural Rights. The multilateral treaty calls for the gradual introduction of free education for junior and senior high schools, and colleges and universities. Japan for a long time did not accept the provisions. But Japan finally accepted the provisions in September 2012, 33 years after it ratified the treaty. It is clear that the decision by the LDP and Komeito represents a retreat from the ideal of the provisions. According to a 2013 report issued by the Organization for Economic Cooperation and Development, Japan’s public outlays for educational organizations accounted for only 3.6 percent of its gross domestic product in 2010, lower than the average 5.4 percent for OECD member countries and the lowest among 30 member countries whose data are mutually comparable. Japan occupied the lowest position for three consecutive years. The LDP and Komeito plan to use the money saved by the introduction of an income cap to increase tuition support for private school students and to give grants for students from poor families. Instead of this approach, they should seriously consider increasing the education budget itself.
<urn:uuid:e328caaa-a987-4808-84a1-ab5511d01b28>
CC-MAIN-2016-50
http://www.japantimes.co.jp/opinion/2013/09/15/editorials/keep-tuition-exemption/
s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698541839.36/warc/CC-MAIN-20161202170901-00150-ip-10-31-129-80.ec2.internal.warc.gz
en
0.963311
587
2.671875
3
Larvae of the small white cabbage butterfly are a pest in agricultural settings. This caterpillar species feeds from plants in the cabbage family, which include many crops such as cabbage, broccoli, Brussel sprouts etc. Rearing of the insects takes place on cabbage plants in the greenhouse. At least two cages are needed for the rearing of Pieris rapae. One for the larvae and the other to contain the adults, the butterflies. In order to investigate the role of plant hormones and toxic plant chemicals in resistance to this insect pest, we demonstrate two experiments. First, determination of the role of jasmonic acid (JA - a plant hormone often indicated in resistance to insects) in resistance to the chewing insect Pieris rapae. Caterpillar growth can be compared on wild-type and mutant plants impaired in production of JA. This experiment is considered "No Choice", because larvae are forced to subsist on a single plant which synthesizes or is deficient in JA. Second, we demonstrate an experiment that investigates the role of glucosinolates, which are used as oviposition (egg-laying) signals. Here, we use WT and mutant Arabidopsis impaired in glucosinolate production in a "Choice" experiment in which female butterflies are allowed to choose to lay their eggs on plants of either genotype. This video demonstrates the experimental setup for both assays as well as representative results. 26 Related JoVE Articles! A Practical Guide to Phylogenetics for Nonexperts Institutions: The George Washington University. Many researchers, across incredibly diverse foci, are applying phylogenetics to their research question(s). However, many researchers are new to this topic and so it presents inherent problems. Here we compile a practical introduction to phylogenetics for nonexperts. We outline in a step-by-step manner, a pipeline for generating reliable phylogenies from gene sequence datasets. We begin with a user-guide for similarity search tools via online interfaces as well as local executables. Next, we explore programs for generating multiple sequence alignments followed by protocols for using software to determine best-fit models of evolution. We then outline protocols for reconstructing phylogenetic relationships via maximum likelihood and Bayesian criteria and finally describe tools for visualizing phylogenetic trees. While this is not by any means an exhaustive description of phylogenetic approaches, it does provide the reader with practical starting information on key software applications commonly utilized by phylogeneticists. The vision for this article would be that it could serve as a practical training tool for researchers embarking on phylogenetic studies and also serve as an educational resource that could be incorporated into a classroom or teaching-lab. Basic Protocol, Issue 84, phylogenetics, multiple sequence alignments, phylogenetic tree, BLAST executables, basic local alignment search tool, Bayesian models Design and Operation of a Continuous 13C and 15N Labeling Chamber for Uniform or Differential, Metabolic and Structural, Plant Isotope Labeling Institutions: Colorado State University, USDA-ARS, Colorado State University. Tracing rare stable isotopes from plant material through the ecosystem provides the most sensitive information about ecosystem processes; from CO2 fluxes and soil organic matter formation to small-scale stable-isotope biomarker probing. Coupling multiple stable isotopes such as 13 C with 15 O or 2 H has the potential to reveal even more information about complex stoichiometric relationships during biogeochemical transformations. Isotope labeled plant material has been used in various studies of litter decomposition and soil organic matter formation1-4 . From these and other studies, however, it has become apparent that structural components of plant material behave differently than metabolic components (i.e . leachable low molecular weight compounds) in terms of microbial utilization and long-term carbon storage5-7 . The ability to study structural and metabolic components separately provides a powerful new tool for advancing the forefront of ecosystem biogeochemical studies. Here we describe a method for producing 13 C and 15 N labeled plant material that is either uniformly labeled throughout the plant or differentially labeled in structural and metabolic plant components. Here, we present the construction and operation of a continuous 13 C and 15 N labeling chamber that can be modified to meet various research needs. Uniformly labeled plant material is produced by continuous labeling from seedling to harvest, while differential labeling is achieved by removing the growing plants from the chamber weeks prior to harvest. Representative results from growing Andropogon gerardii Kaw demonstrate the system's ability to efficiently label plant material at the targeted levels. Through this method we have produced plant material with a 4.4 atom%13 C and 6.7 atom%15 N uniform plant label, or material that is differentially labeled by up to 1.29 atom%13 C and 0.56 atom%15 N in its metabolic and structural components (hot water extractable and hot water residual components, respectively). Challenges lie in maintaining proper temperature, humidity, CO2 concentration, and light levels in an airtight 13 atmosphere for successful plant production. This chamber description represents a useful research tool to effectively produce uniformly or differentially multi-isotope labeled plant material for use in experiments on ecosystem biogeochemical cycling. Environmental Sciences, Issue 83, 13C, 15N, plant, stable isotope labeling, Andropogon gerardii, metabolic compounds, structural compounds, hot water extraction Optimization and Utilization of Agrobacterium-mediated Transient Protein Production in Nicotiana Institutions: Fraunhofer USA Center for Molecular Biotechnology. -mediated transient protein production in plants is a promising approach to produce vaccine antigens and therapeutic proteins within a short period of time. However, this technology is only just beginning to be applied to large-scale production as many technological obstacles to scale up are now being overcome. Here, we demonstrate a simple and reproducible method for industrial-scale transient protein production based on vacuum infiltration of Nicotiana plants with Agrobacteria carrying launch vectors. Optimization of Agrobacterium cultivation in AB medium allows direct dilution of the bacterial culture in Milli-Q water, simplifying the infiltration process. Among three tested species of Nicotiana , N. excelsiana × N. excelsior ) was selected as the most promising host due to the ease of infiltration, high level of reporter protein production, and about two-fold higher biomass production under controlled environmental conditions. Induction of Agrobacterium harboring pBID4-GFP (Tobacco mosaic virus -based) using chemicals such as acetosyringone and monosaccharide had no effect on the protein production level. Infiltrating plant under 50 to 100 mbar for 30 or 60 sec resulted in about 95% infiltration of plant leaf tissues. Infiltration with Agrobacterium laboratory strain GV3101 showed the highest protein production compared to Agrobacteria laboratory strains LBA4404 and C58C1 and wild-type Agrobacteria strains at6, at10, at77 and A4. Co-expression of a viral RNA silencing suppressor, p23 or p19, in N. benthamiana resulted in earlier accumulation and increased production (15-25%) of target protein (influenza virus hemagglutinin). Plant Biology, Issue 86, Agroinfiltration, Nicotiana benthamiana, transient protein production, plant-based expression, viral vector, Agrobacteria Characterization of Complex Systems Using the Design of Experiments Approach: Transient Protein Expression in Tobacco as a Case Study Institutions: RWTH Aachen University, Fraunhofer Gesellschaft. Plants provide multiple benefits for the production of biopharmaceuticals including low costs, scalability, and safety. Transient expression offers the additional advantage of short development and production times, but expression levels can vary significantly between batches thus giving rise to regulatory concerns in the context of good manufacturing practice. We used a design of experiments (DoE) approach to determine the impact of major factors such as regulatory elements in the expression construct, plant growth and development parameters, and the incubation conditions during expression, on the variability of expression between batches. We tested plants expressing a model anti-HIV monoclonal antibody (2G12) and a fluorescent marker protein (DsRed). We discuss the rationale for selecting certain properties of the model and identify its potential limitations. The general approach can easily be transferred to other problems because the principles of the model are broadly applicable: knowledge-based parameter selection, complexity reduction by splitting the initial problem into smaller modules, software-guided setup of optimal experiment combinations and step-wise design augmentation. Therefore, the methodology is not only useful for characterizing protein expression in plants but also for the investigation of other complex systems lacking a mechanistic description. The predictive equations describing the interconnectivity between parameters can be used to establish mechanistic models for other complex systems. Bioengineering, Issue 83, design of experiments (DoE), transient protein expression, plant-derived biopharmaceuticals, promoter, 5'UTR, fluorescent reporter protein, model building, incubation conditions, monoclonal antibody A Technique to Screen American Beech for Resistance to the Beech Scale Insect (Cryptococcus fagisuga Lind.) Institutions: US Forest Service. Beech bark disease (BBD) results in high levels of initial mortality, leaving behind survivor trees that are greatly weakened and deformed. The disease is initiated by feeding activities of the invasive beech scale insect, Cryptococcus fagisuga , which creates entry points for infection by one of the Neonectria species of fungus. Without scale infestation, there is little opportunity for fungal infection. Using scale eggs to artificially infest healthy trees in heavily BBD impacted stands demonstrated that these trees were resistant to the scale insect portion of the disease complex1 . Here we present a protocol that we have developed, based on the artificial infestation technique by Houston2 , which can be used to screen for scale-resistant trees in the field and in smaller potted seedlings and grafts. The identification of scale-resistant trees is an important component of management of BBD through tree improvement programs and silvicultural manipulation. Environmental Sciences, Issue 87, Forestry, Insects, Disease Resistance, American beech, Fagus grandifolia, beech scale, Cryptococcus fagisuga, resistance, screen, bioassay Experimental Protocol for Manipulating Plant-induced Soil Heterogeneity Institutions: Case Western Reserve University. Coexistence theory has often treated environmental heterogeneity as being independent of the community composition; however biotic feedbacks such as plant-soil feedbacks (PSF) have large effects on plant performance, and create environmental heterogeneity that depends on the community composition. Understanding the importance of PSF for plant community assembly necessitates understanding of the role of heterogeneity in PSF, in addition to mean PSF effects. Here, we describe a protocol for manipulating plant-induced soil heterogeneity. Two example experiments are presented: (1) a field experiment with a 6-patch grid of soils to measure plant population responses and (2) a greenhouse experiment with 2-patch soils to measure individual plant responses. Soils can be collected from the zone of root influence (soils from the rhizosphere and directly adjacent to the rhizosphere) of plants in the field from conspecific and heterospecific plant species. Replicate collections are used to avoid pseudoreplicating soil samples. These soils are then placed into separate patches for heterogeneous treatments or mixed for a homogenized treatment. Care should be taken to ensure that heterogeneous and homogenized treatments experience the same degree of soil disturbance. Plants can then be placed in these soil treatments to determine the effect of plant-induced soil heterogeneity on plant performance. We demonstrate that plant-induced heterogeneity results in different outcomes than predicted by traditional coexistence models, perhaps because of the dynamic nature of these feedbacks. Theory that incorporates environmental heterogeneity influenced by the assembling community and additional empirical work is needed to determine when heterogeneity intrinsic to the assembling community will result in different assembly outcomes compared with heterogeneity extrinsic to the community composition. Environmental Sciences, Issue 85, Coexistence, community assembly, environmental drivers, plant-soil feedback, soil heterogeneity, soil microbial communities, soil patch From Voxels to Knowledge: A Practical Guide to the Segmentation of Complex Electron Microscopy 3D-Data Institutions: Lawrence Berkeley National Laboratory, Lawrence Berkeley National Laboratory, Lawrence Berkeley National Laboratory. Modern 3D electron microscopy approaches have recently allowed unprecedented insight into the 3D ultrastructural organization of cells and tissues, enabling the visualization of large macromolecular machines, such as adhesion complexes, as well as higher-order structures, such as the cytoskeleton and cellular organelles in their respective cell and tissue context. Given the inherent complexity of cellular volumes, it is essential to first extract the features of interest in order to allow visualization, quantification, and therefore comprehension of their 3D organization. Each data set is defined by distinct characteristics, e.g. , signal-to-noise ratio, crispness (sharpness) of the data, heterogeneity of its features, crowdedness of features, presence or absence of characteristic shapes that allow for easy identification, and the percentage of the entire volume that a specific region of interest occupies. All these characteristics need to be considered when deciding on which approach to take for segmentation. The six different 3D ultrastructural data sets presented were obtained by three different imaging approaches: resin embedded stained electron tomography, focused ion beam- and serial block face- scanning electron microscopy (FIB-SEM, SBF-SEM) of mildly stained and heavily stained samples, respectively. For these data sets, four different segmentation approaches have been applied: (1) fully manual model building followed solely by visualization of the model, (2) manual tracing segmentation of the data followed by surface rendering, (3) semi-automated approaches followed by surface rendering, or (4) automated custom-designed segmentation algorithms followed by surface rendering and quantitative analysis. Depending on the combination of data set characteristics, it was found that typically one of these four categorical approaches outperforms the others, but depending on the exact sequence of criteria, more than one approach may be successful. Based on these data, we propose a triage scheme that categorizes both objective data set characteristics and subjective personal criteria for the analysis of the different data sets. Bioengineering, Issue 90, 3D electron microscopy, feature extraction, segmentation, image analysis, reconstruction, manual tracing, thresholding The Infiltration-centrifugation Technique for Extraction of Apoplastic Fluid from Plant Leaves Using Phaseolus vulgaris as an Example Institutions: University of Oxford, University of the Basque Country (UPV/EHU), University of Exeter. The apoplast is a distinct extracellular compartment in plant tissues that lies outside the plasma membrane and includes the cell wall. The apoplastic compartment of plant leaves is the site of several important biological processes, including cell wall formation, cellular nutrient and water uptake and export, plant-endophyte interactions and defence responses to pathogens. The infiltration-centrifugation method is well established as a robust technique for the analysis of the soluble apoplast composition of various plant species. The fluid obtained by this method is commonly known as apoplast washing fluid (AWF). The following protocol describes an optimized vacuum infiltration and centrifugation method for AWF extraction from Phaseolus vulgaris (French bean) cv. Tendergreen leaves. The limitations of this method and the optimization of the protocol for other plant species are discussed. Recovered AWF can be used in a wide range of downstream experiments that seek to characterize the composition of the apoplast and how it varies in response to plant species and genotype, plant development and environmental conditions, or to determine how microorganisms grow in apoplast fluid and respond to changes in its composition. Plant Biology, Issue 94, Apoplast, apoplast washing fluid, plant leaves, infiltration-centrifugation, plant metabolism, metabolomics, gas chromatography-mass spectrometry Physical, Chemical and Biological Characterization of Six Biochars Produced for the Remediation of Contaminated Sites Institutions: Royal Military College of Canada, Queen's University. The physical and chemical properties of biochar vary based on feedstock sources and production conditions, making it possible to engineer biochars with specific functions (e.g. carbon sequestration, soil quality improvements, or contaminant sorption). In 2013, the International Biochar Initiative (IBI) made publically available their Standardized Product Definition and Product Testing Guidelines (Version 1.1) which set standards for physical and chemical characteristics for biochar. Six biochars made from three different feedstocks and at two temperatures were analyzed for characteristics related to their use as a soil amendment. The protocol describes analyses of the feedstocks and biochars and includes: cation exchange capacity (CEC), specific surface area (SSA), organic carbon (OC) and moisture percentage, pH, particle size distribution, and proximate and ultimate analysis. Also described in the protocol are the analyses of the feedstocks and biochars for contaminants including polycyclic aromatic hydrocarbons (PAHs), polychlorinated biphenyls (PCBs), metals and mercury as well as nutrients (phosphorous, nitrite and nitrate and ammonium as nitrogen). The protocol also includes the biological testing procedures, earthworm avoidance and germination assays. Based on the quality assurance / quality control (QA/QC) results of blanks, duplicates, standards and reference materials, all methods were determined adequate for use with biochar and feedstock materials. All biochars and feedstocks were well within the criterion set by the IBI and there were little differences among biochars, except in the case of the biochar produced from construction waste materials. This biochar (referred to as Old biochar) was determined to have elevated levels of arsenic, chromium, copper, and lead, and failed the earthworm avoidance and germination assays. Based on these results, Old biochar would not be appropriate for use as a soil amendment for carbon sequestration, substrate quality improvements or remediation. Environmental Sciences, Issue 93, biochar, characterization, carbon sequestration, remediation, International Biochar Initiative (IBI), soil amendment A Technical Perspective in Modern Tree-ring Research - How to Overcome Dendroecological and Wood Anatomical Challenges Institutions: Swiss Federal Research Institute WSL. Dendroecological research uses information stored in tree rings to understand how single trees and even entire forest ecosystems responded to environmental changes and to finally reconstruct such changes. This is done by analyzing growth variations back in time and correlating various plant-specific parameters to (for example) temperature records. Integrating wood anatomical parameters in these analyses would strengthen reconstructions, even down to intra-annual resolution. We therefore present a protocol on how to sample, prepare, and analyze wooden specimen for common macroscopic analyses, but also for subsequent microscopic analyses. Furthermore we introduce a potential solution for analyzing digital images generated from common small and large specimens to support time-series analyses. The protocol presents the basic steps as they currently can be used. Beyond this, there is an ongoing need for the improvement of existing techniques, and development of new techniques, to record and quantify past and ongoing environmental processes. Traditional wood anatomical research needs to be expanded to include ecological information to this field of research. This would support dendro-scientists who intend to analyze new parameters and develop new methodologies to understand the short and long term effects of specific environmental factors on the anatomy of woody plants. Environmental Sciences, Issue 97, Cell parameters, dendroecology, image analysis, micro sectioning, microtomes, sample preparation, wood anatomy Semi-High Throughput Screening for Potential Drought-tolerance in Lettuce (Lactuca sativa) Germplasm Collections Institutions: United States Department of Agriculture. This protocol describes a method by which a large collection of the leafy green vegetable lettuce (Lactuca sativa L.) germplasm was screened for likely drought-tolerance traits. Fresh water availability for agricultural use is a growing concern across the United States as well as many regions of the world. Short-term drought events along with regulatory intervention in the regulation of water availability coupled with the looming threat of long-term climate shifts that may lead to reduced precipitation in many important agricultural regions has increased the need to hasten the development of crops adapted for improved water use efficiency in order to maintain or expand production in the coming years. This protocol is not meant as a step-by-step guide to identifying at either the physiological or molecular level drought-tolerance traits in lettuce, but rather is a method developed and refined through the screening of thousands of different lettuce varieties. The nature of this screen is based in part on the streamlined measurements focusing on only three water-stress indicators: leaf relative water content, wilt, and differential plant growth following drought-stress. The purpose of rapidly screening a large germplasm collection is to narrow the candidate pool to a point in which more intensive physiological, molecular, and genetic methods can be applied to identify specific drought-tolerant traits in either the lab or field. Candidates can also be directly incorporated into breeding programs as a source of drought-tolerance traits. Environmental Sciences, Issue 98, Lettuce, Lactuca sativa, drought, water-stress, abiotic-stress, relative water content High-throughput Fluorometric Measurement of Potential Soil Extracellular Enzyme Activities Institutions: Colorado State University, Oak Ridge National Laboratory, University of Colorado. Microbes in soils and other environments produce extracellular enzymes to depolymerize and hydrolyze organic macromolecules so that they can be assimilated for energy and nutrients. Measuring soil microbial enzyme activity is crucial in understanding soil ecosystem functional dynamics. The general concept of the fluorescence enzyme assay is that synthetic C-, N-, or P-rich substrates bound with a fluorescent dye are added to soil samples. When intact, the labeled substrates do not fluoresce. Enzyme activity is measured as the increase in fluorescence as the fluorescent dyes are cleaved from their substrates, which allows them to fluoresce. Enzyme measurements can be expressed in units of molarity or activity. To perform this assay, soil slurries are prepared by combining soil with a pH buffer. The pH buffer (typically a 50 mM sodium acetate or 50 mM Tris buffer), is chosen for the buffer's particular acid dissociation constant (pKa) to best match the soil sample pH. The soil slurries are inoculated with a nonlimiting amount of fluorescently labeled (i.e. C-, N-, or P-rich) substrate. Using soil slurries in the assay serves to minimize limitations on enzyme and substrate diffusion. Therefore, this assay controls for differences in substrate limitation, diffusion rates, and soil pH conditions; thus detecting potential enzyme activity rates as a function of the difference in enzyme concentrations (per sample). Fluorescence enzyme assays are typically more sensitive than spectrophotometric (i.e. colorimetric) assays, but can suffer from interference caused by impurities and the instability of many fluorescent compounds when exposed to light; so caution is required when handling fluorescent substrates. Likewise, this method only assesses potential enzyme activities under laboratory conditions when substrates are not limiting. Caution should be used when interpreting the data representing cross-site comparisons with differing temperatures or soil types, as in situ soil type and temperature can influence enzyme kinetics. Environmental Sciences, Issue 81, Ecological and Environmental Phenomena, Environment, Biochemistry, Environmental Microbiology, Soil Microbiology, Ecology, Eukaryota, Archaea, Bacteria, Soil extracellular enzyme activities (EEAs), fluorometric enzyme assays, substrate degradation, 4-methylumbelliferone (MUB), 7-amino-4-methylcoumarin (MUC), enzyme temperature kinetics, soil Designing Silk-silk Protein Alloy Materials for Biomedical Applications Institutions: Rowan University, Rowan University, Cooper Medical School of Rowan University, Rowan University. Fibrous proteins display different sequences and structures that have been used for various applications in biomedical fields such as biosensors, nanomedicine, tissue regeneration, and drug delivery. Designing materials based on the molecular-scale interactions between these proteins will help generate new multifunctional protein alloy biomaterials with tunable properties. Such alloy material systems also provide advantages in comparison to traditional synthetic polymers due to the materials biodegradability, biocompatibility, and tenability in the body. This article used the protein blends of wild tussah silk (Antheraea pernyi ) and domestic mulberry silk (Bombyx mori ) as an example to provide useful protocols regarding these topics, including how to predict protein-protein interactions by computational methods, how to produce protein alloy solutions, how to verify alloy systems by thermal analysis, and how to fabricate variable alloy materials including optical materials with diffraction gratings, electric materials with circuits coatings, and pharmaceutical materials for drug release and delivery. These methods can provide important information for designing the next generation multifunctional biomaterials based on different protein alloys. Bioengineering, Issue 90, protein alloys, biomaterials, biomedical, silk blends, computational simulation, implantable electronic devices Technique for Studying Arthropod and Microbial Communities within Tree Tissues Institutions: Northern Arizona University, Acoustic Ecology Institute. Phloem tissues of pine are habitats for many thousands of organisms. Arthropods and microbes use phloem and cambium tissues to seek mates, lay eggs, rear young, feed, or hide from natural enemies or harsh environmental conditions outside of the tree. Organisms that persist within the phloem habitat are difficult to observe given their location under bark. We provide a technique to preserve intact phloem and prepare it for experimentation with invertebrates and microorganisms. The apparatus is called a ‘phloem sandwich’ and allows for the introduction and observation of arthropods, microbes, and other organisms. This technique has resulted in a better understanding of the feeding behaviors, life-history traits, reproduction, development, and interactions of organisms within tree phloem. The strengths of this technique include the use of inexpensive materials, variability in sandwich size, flexibility to re-open the sandwich or introduce multiple organisms through drilled holes, and the preservation and maintenance of phloem integrity. The phloem sandwich is an excellent educational tool for scientific discovery in both K-12 science courses and university research laboratories. Environmental Sciences, Issue 93, phloem sandwich, pine, bark beetles, mites, acoustics, phloem Testing the Physiological Barriers to Viral Transmission in Aphids Using Microinjection Institutions: Cornell University, Cornell University. Potato loafroll virus (PLRV), from the family Luteoviridae infects solanaceous plants. It is transmitted by aphids, primarily, the green peach aphid. When an uninfected aphid feeds on an infected plant it contracts the virus through the plant phloem. Once ingested, the virus must pass from the insect gut to the hemolymph (the insect blood ) and then must pass through the salivary gland, in order to be transmitted back to a new plant. An aphid may take up different viruses when munching on a plant, however only a small fraction will pass through the gut and salivary gland, the two main barriers for transmission to infect more plants. In the lab, we use physalis plants to study PLRV transmission. In this host, symptoms are characterized by stunting and interveinal chlorosis (yellowing of the leaves between the veins with the veins remaining green). The video that we present demonstrates a method for performing aphid microinjection on insects that do not vector PLVR viruses and tests whether the gut is preventing viral transmission. The video that we present demonstrates a method for performing Aphid microinjection on insects that do not vector PLVR viruses and tests whether the gut or salivary gland is preventing viral transmission. Plant Biology, Issue 15, Annual Review, Aphids, Plant Virus, Potato Leaf Roll Virus, Microinjection Technique Testing Nicotine Tolerance in Aphids Using an Artificial Diet Experiment Institutions: Cornell University. Plants may upregulate the production of many different seconday metabolites in response to insect feeding. One of these metabolites, nicotine, is well know to have insecticidal properties. One response of tobacco plants to herbivory, or being gnawed upon by insects, is to increase the production of this neurotoxic alkaloid. Here, we will demonstrate how to set up an experiment to address this question of whether a tobacco-adapted strain of the green peach aphid, Myzus persicae, can tolerate higher levels of nicotine than the a strain of this insect that does not infest tobacco in the field. Plant Biology, Issue 15, Annual Review, Nicotine, Aphids, Plant Feeding Resistance, Tobacco Use of Arabidopsis eceriferum Mutants to Explore Plant Cuticle Biosynthesis Institutions: University of British Columbia - UBC, University of British Columbia - UBC. The plant cuticle is a waxy outer covering on plants that has a primary role in water conservation, but is also an important barrier against the entry of pathogenic microorganisms. The cuticle is made up of a tough crosslinked polymer called "cutin" and a protective wax layer that seals the plant surface. The waxy layer of the cuticle is obvious on many plants, appearing as a shiny film on the ivy leaf or as a dusty outer covering on the surface of a grape or a cabbage leaf thanks to light scattering crystals present in the wax. Because the cuticle is an essential adaptation of plants to a terrestrial environment, understanding the genes involved in plant cuticle formation has applications in both agriculture and forestry. Today, we'll show the analysis of plant cuticle mutants identified by forward and reverse genetics approaches. Plant Biology, Issue 16, Annual Review, Cuticle, Arabidopsis, Eceriferum Mutants, Cryso-SEM, Gas Chromatography Non-radioactive in situ Hybridization Protocol Applicable for Norway Spruce and a Range of Plant Species Institutions: Uppsala University, Swedish University of Agricultural Sciences. The high-throughput expression analysis technologies available today give scientists an overflow of expression profiles but their resolution in terms of tissue specific expression is limited because of problems in dissecting individual tissues. Expression data needs to be confirmed and complemented with expression patterns using e.g. in situ hybridization, a technique used to localize cell specific mRNA expression. The in situ hybridization method is laborious, time-consuming and often requires extensive optimization depending on species and tissue. In situ experiments are relatively more difficult to perform in woody species such as the conifer Norway spruce (Picea abies ). Here we present a modified DIG in situ hybridization protocol, which is fast and applicable on a wide range of plant species including P. abies . With just a few adjustments, including altered RNase treatment and proteinase K concentration, we could use the protocol to study tissue specific expression of homologous genes in male reproductive organs of one gymnosperm and two angiosperm species; P. abies, Arabidopsis thaliana and Brassica napus . The protocol worked equally well for the species and genes studied. AtAP3 were observed in second and third whorl floral organs in A. thaliana and B. napus and DAL13 in microsporophylls of male cones from P. abies . For P. abies the proteinase K concentration, used to permeablize the tissues, had to be increased to 3 g/ml instead of 1 g/ml, possibly due to more compact tissues and higher levels of phenolics and polysaccharides. For all species the RNase treatment was removed due to reduced signal strength without a corresponding increase in specificity. By comparing tissue specific expression patterns of homologous genes from both flowering plants and a coniferous tree we demonstrate that the DIG in situ protocol presented here, with only minute adjustments, can be applied to a wide range of plant species. Hence, the protocol avoids both extensive species specific optimization and the laborious use of radioactively labeled probes in favor of DIG labeled probes. We have chosen to illustrate the technically demanding steps of the protocol in our film. Anna Karlgren and Jenny Carlsson contributed equally to this study. Corresponding authors: Anna Karlgren at Anna.Karlgren@ebc.uu.se and Jens F. Sundström at Jens.Sundstrom@vbsg.slu.se Plant Biology, Issue 26, RNA, expression analysis, Norway spruce, Arabidopsis, rapeseed, conifers Environmentally Induced Heritable Changes in Flax Institutions: Case Western Reserve University. Some flax varieties respond to nutrient stress by modifying their genome and these modifications can be inherited through many generations. Also associated with these genomic changes are heritable phenotypic variations 1,2 . The flax variety Stormont Cirrus (Pl) when grown under three different nutrient conditions can either remain inducible (under the control conditions), or become stably modified to either the large or small genotroph by growth under high or low nutrient conditions respectively. The lines resulting from the initial growth under each of these conditions appear to grow better when grown under the same conditions in subsequent generations, notably the Pl line grows best under the control treatment indicating that the plants growing under both the high and low nutrients are under stress. One of the genomic changes that are associated with the induction of heritable changes is the appearance of an insertion element (LIS-1) 3, 4 while the plants are growing under the nutrient stress. With respect to this insertion event, the flax variety Stormont Cirrus (Pl) when grown under three different nutrient conditions can either remain unchanged (under the control conditions), have the insertion appear in all the plants (under low nutrients) and have this transmitted to the next generation, or have the insertion (or parts of it) appear but not be transmitted through generations (under high nutrients) 4 . The frequency of the appearance of this insertion indicates that it is under positive selection, which is also consistent with the growth response in subsequent generations. Leaves or meristems harvested at various stages of growth are used for DNA and RNA isolation. The RNA is used to identify variation in expression associated with the various growth environments and/or t he presence/absence of LIS-1. The isolated DNA is used to identify those plants in which the insertion has occurred. Plant Biology, Issue 47, Flax, genome variation, environmental stress, small RNAs, altered gene expression A Faster, High Resolution, mtPA-GFP-based Mitochondrial Fusion Assay Acquiring Kinetic Data of Multiple Cells in Parallel Using Confocal Microscopy Institutions: Tufts School of Medicine, Wake Forest Baptist Medical Center, Boston University Medical Center. Mitochondrial fusion plays an essential role in mitochondrial calcium homeostasis, bioenergetics, autophagy and quality control. Fusion is quantified in living cells by photo-conversion of matrix targeted photoactivatable GFP (mtPAGFP) in a subset of mitochondria. The rate at which the photoconverted molecules equilibrate across the entire mitochondrial population is used as a measure of fusion activity. Thus far measurements were performed using a single cell time lapse approach, quantifying the equilibration in one cell over an hour. Here, we scale up and automate a previously published live cell method based on using mtPAGFP and a low concentration of TMRE (15 nm). This method involves photoactivating a small portion of the mitochondrial network, collecting highly resolved stacks of confocal sections every 15 min for 1 hour, and quantifying the change in signal intensity. Depending on several factors such as ease of finding PAGFP expressing cells, and the signal of the photoactivated regions, it is possible to collect around 10 cells within the 15 min intervals. This provides a significant improvement in the time efficiency of this assay while maintaining the highly resolved subcellular quantification as well as the kinetic parameters necessary to capture the detail of mitochondrial behavior in its native cytoarchitectural environment. Mitochondrial dynamics play a role in many cellular processes including respiration, calcium regulation, and apoptosis1,2,3,13 . The structure of the mitochondrial network affects the function of mitochondria, and the way they interact with the rest of the cell. Undergoing constant division and fusion, mitochondrial networks attain various shapes ranging from highly fused networks, to being more fragmented. Interestingly, Alzheimer's disease, Parkinson's disease, Charcot Marie Tooth 2A, and dominant optic atrophy have been correlated with altered mitochondrial morphology, namely fragmented networks4,10,13 . Often times, upon fragmentation, mitochondria become depolarized, and upon accumulation this leads to impaired cell function18 . Mitochondrial fission has been shown to signal a cell to progress toward apoptosis. It can also provide a mechanism by which to separate depolarized and inactive mitochondria to keep the bulk of the network robust14 . Fusion of mitochondria, on the other hand, leads to sharing of matrix proteins, solutes, mtDNA and the electrochemical gradient, and also seems to prevent progression to apoptosis9 . How fission and fusion of mitochondria affects cell homeostasis and ultimately the functioning of the organism needs further understanding, and therefore the continuous development and optimization of how to gather information on these phenomena is necessary. Existing mitochondrial fusion assays have revealed various insights into mitochondrial physiology, each having its own advantages. The hybrid PEG fusion assay7 , mixes two populations of differently labeled cells (mtRFP and mtYFP), and analyzes the amount of mixing and colocalization of fluorophores in fused, multinucleated, cells. Although this method has yielded valuable information, not all cell types can fuse, and the conditions under which fusion is stimulated involves the use of toxic drugs that likely affect the normal fusion process. More recently, a cell free technique has been devised, using isolated mitochondria to observe fusion events based on a luciferase assay1,5 . Two human cell lines are targeted with either the amino or a carboxy terminal part of Renilla luciferase along with a leucine zipper to ensure dimerization upon mixing. Mitochondria are isolated from each cell line, and fused. The fusion reaction can occur without the cytosol under physiological conditions in the presence of energy, appropriate temperature and inner mitochondrial membrane potential. Interestingly, the cytosol was found to modulate the extent of fusion, demonstrating that cell signaling regulates the fusion process 4,5 . This assay will be very useful for high throughput screening to identify components of the fusion machinery and also pharmacological compounds that may affect mitochondrial dynamics. However, more detailed whole cell mitochondrial assays will be needed to complement this in vitro assay to observe these events within a cellular environment. A technique for monitoring whole-cell mitochondrial dynamics has been in use for some time and is based on a mitochondrially-targeted photoactivatable GFP (mtPAGFP)6,11 . Upon expression of the mtPAGFP, a small portion of the mitochondrial network is photoactivated (10-20%), and the spread of the signal to the rest of the mitochondrial network is recorded every 15 minutes for 1 hour using time lapse confocal imaging. Each fusion event leads to a dilution of signal intensity, enabling quantification of the fusion rate. Although fusion and fission are continuously occurring in cells, this technique only monitors fusion as fission does not lead to a dilution of the PAGFP signal6 . Co-labeling with low levels of TMRE (7-15 nM in INS1 cells) allows quantification of the membrane potential of mitochondria. When mitochondria are hyperpolarized they uptake more TMRE, and when they depolarize they lose the TMRE dye. Mitochondria that depolarize no longer have a sufficient membrane potential and tend not to fuse as efficiently if at all. Therefore, active fusing mitochondria can be tracked with these low levels of TMRE9,15 . Accumulation of depolarized mitochondria that lack a TMRE signal may be a sign of phototoxicity or cell death. Higher concentrations of TMRE render mitochondria very sensitive to laser light, and therefore great care must be taken to avoid overlabeling with TMRE. If the effect of depolarization of mitochondria is the topic of interest, a technique using slightly higher levels of TMRE and more intense laser light can be used to depolarize mitochondria in a controlled fashion (Mitra and Lippincott-Schwartz, 2010). To ensure that toxicity due to TMRE is not an issue, we suggest exposing loaded cells (3-15 nM TMRE) to the imaging parameters that will be used in the assay (perhaps 7 stacks of 6 optical sections in a row), and assessing cell health after 2 hours. If the mitochondria appear too fragmented and cells are dying, other mitochondrial markers, such as dsRED or Mitotracker red could be used instead of TMRE. The mtPAGFP method has revealed details about mitochondrial network behavior that could not be visualized using other methods. For example, we now know that mitochondrial fusion can be full or transient, where matrix content can mix without changing the overall network morphology. Additionally, we know that the probability of fusion is independent of contact duration and organelle dimension, is influenced by organelle motility, membrane potential and history of previous fusion activity8,15,16,17 In this manuscript, we describe a methodology for scaling up the previously published protocol using mtPAGFP and 15nM TMRE8 in order to examine multiple cells at a time and improve the time efficiency of data collection without sacrificing the subcellular resolution. This has been made possible by the use of an automated microscope stage, and programmable image acquisition software. Zen software from Zeiss allows the user to mark and track several designated cells expressing mtPAGFP. Each of these cells can be photoactivated in a particular region of interest, and stacks of confocal slices can be monitored for mtPAGFP signal as well as TMRE at specified intervals. Other confocal systems could be used to perform this protocol provided there is an automated stage that is programmable, an incubator with CO2 , and a means by which to photoactivate the PAGFP; either a multiphoton laser, or a 405 nm diode laser. Molecular Biology, Issue 65, Genetics, Cellular Biology, Physics, confocal microscopy, mitochondria, fusion, TMRE, mtPAGFP, INS1, mitochondrial dynamics, mitochondrial morphology, mitochondrial network Measurement of Leaf Hydraulic Conductance and Stomatal Conductance and Their Responses to Irradiance and Dehydration Using the Evaporative Flux Method (EFM) Institutions: University of California, Los Angeles . Water is a key resource, and the plant water transport system sets limits on maximum growth and drought tolerance. When plants open their stomata to achieve a high stomatal conductance (gs ) to capture CO2 for photosynthesis, water is lost by transpiration1,2 . Water evaporating from the airspaces is replaced from cell walls, in turn drawing water from the xylem of leaf veins, in turn drawing from xylem in the stems and roots. As water is pulled through the system, it experiences hydraulic resistance, creating tension throughout the system and a low leaf water potential (Ψleaf ). The leaf itself is a critical bottleneck in the whole plant system, accounting for on average 30% of the plant hydraulic resistance3 . Leaf hydraulic conductance (Kleaf = 1/ leaf hydraulic resistance) is the ratio of the water flow rate to the water potential gradient across the leaf, and summarizes the behavior of a complex system: water moves through the petiole and through several orders of veins, exits into the bundle sheath and passes through or around mesophyll cells before evaporating into the airspace and being transpired from the stomata. Kleaf is of strong interest as an important physiological trait to compare species, quantifying the effectiveness of the leaf structure and physiology for water transport, and a key variable to investigate for its relationship to variation in structure (e.g. , in leaf venation architecture) and its impacts on photosynthetic gas exchange. Further, Kleaf responds strongly to the internal and external leaf environment3 can increase dramatically with irradiance apparently due to changes in the expression and activation of aquaporins, the proteins involved in water transport through membranes4 , and Kleaf declines strongly during drought, due to cavitation and/or collapse of xylem conduits, and/or loss of permeability in the extra-xylem tissues due to mesophyll and bundle sheath cell shrinkage or aquaporin deactivation5-10 . Because Kleaf can constrain gs and photosynthetic rate across species in well watered conditions and during drought, and thus limit whole-plant performance they may possibly determine species distributions especially as droughts increase in frequency and severity11-14 We present a simple method for simultaneous determination of Kleaf on excised leaves. A transpiring leaf is connected by its petiole to tubing running to a water source on a balance. The loss of water from the balance is recorded to calculate the flow rate through the leaf. When steady state transpiration (E , mmol • m-2 ) is reached, gs is determined by dividing by vapor pressure deficit, and Kleaf by dividing by the water potential driving force determined using a pressure chamber (Kleaf This method can be used to assess Kleaf responses to different irradiances and the vulnerability of Kleaf Plant Biology, Issue 70, Molecular Biology, Physiology, Ecology, Biology, Botany, Leaf traits, hydraulics, stomata, transpiration, xylem, conductance, leaf hydraulic conductance, resistance, evaporative flux method, whole plant Linking Predation Risk, Herbivore Physiological Stress and Microbial Decomposition of Plant Litter Institutions: Yale University, Virginia Tech, The Hebrew University of Jerusalem. The quantity and quality of detritus entering the soil determines the rate of decomposition by microbial communities as well as recycle rates of nitrogen (N) and carbon (C) sequestration1,2 . Plant litter comprises the majority of detritus3 , and so it is assumed that decomposition is only marginally influenced by biomass inputs from animals such as herbivores and carnivores4,5 . However, carnivores may influence microbial decomposition of plant litter via a chain of interactions in which predation risk alters the physiology of their herbivore prey that in turn alters soil microbial functioning when the herbivore carcasses are decomposed6 . A physiological stress response by herbivores to the risk of predation can change the C:N elemental composition of herbivore biomass7,8,9 because stress from predation risk increases herbivore basal energy demands that in nutrient-limited systems forces herbivores to shift their consumption from N-rich resources to support growth and reproduction to C-rich carbohydrate resources to support heightened metabolism6 . Herbivores have limited ability to store excess nutrients, so stressed herbivores excrete N as they increase carbohydrate-C consumption7 . Ultimately, prey stressed by predation risk increase their body C:N ratio7,10 , making them poorer quality resources for the soil microbial pool likely due to lower availability of labile N for microbial enzyme production6 . Thus, decomposition of carcasses of stressed herbivores has a priming effect on the functioning of microbial communities that decreases subsequent ability to of microbes to decompose plant litter6,10,11 We present the methodology to evaluate linkages between predation risk and litter decomposition by soil microbes. We describe how to: induce stress in herbivores from predation risk; measure those stress responses, and measure the consequences on microbial decomposition. We use insights from a model grassland ecosystem comprising the hunting spider predator (Pisuarina mira ), a dominant grasshopper herbivore (Melanoplus femurrubrum ),and a variety of grass and forb plants9 Environmental Sciences, Issue 73, Microbiology, Plant Biology, Entomology, Organisms, Investigative Techniques, Biological Phenomena, Chemical Phenomena, Metabolic Phenomena, Microbiological Phenomena, Earth Resources and Remote Sensing, Life Sciences (General), Litter Decomposition, Ecological Stoichiometry, Physiological Stress and Ecosystem Function, Predation Risk, Soil Respiration, Carbon Sequestration, Soil Science, respiration, spider, grasshoper, model system Extracellularly Identifying Motor Neurons for a Muscle Motor Pool in Aplysia californica Institutions: Case Western Reserve University , Case Western Reserve University , Case Western Reserve University . In animals with large identified neurons (e.g. mollusks), analysis of motor pools is done using intracellular techniques1,2,3,4 . Recently, we developed a technique to extracellularly stimulate and record individual neurons in Aplysia californica5 . We now describe a protocol for using this technique to uniquely identify and characterize motor neurons within a motor pool. This extracellular technique has advantages. First, extracellular electrodes can stimulate and record neurons through the sheath5 , so it does not need to be removed. Thus, neurons will be healthier in extracellular experiments than in intracellular ones. Second, if ganglia are rotated by appropriate pinning of the sheath, extracellular electrodes can access neurons on both sides of the ganglion, which makes it easier and more efficient to identify multiple neurons in the same preparation. Third, extracellular electrodes do not need to penetrate cells, and thus can be easily moved back and forth among neurons, causing less damage to them. This is especially useful when one tries to record multiple neurons during repeating motor patterns that may only persist for minutes. Fourth, extracellular electrodes are more flexible than intracellular ones during muscle movements. Intracellular electrodes may pull out and damage neurons during muscle contractions. In contrast, since extracellular electrodes are gently pressed onto the sheath above neurons, they usually stay above the same neuron during muscle contractions, and thus can be used in more intact preparations. To uniquely identify motor neurons for a motor pool (in particular, the I1/I3 muscle in Aplysia ) using extracellular electrodes, one can use features that do not require intracellular measurements as criteria: soma size and location, axonal projection, and muscle innervation4,6,7 . For the particular motor pool used to illustrate the technique, we recorded from buccal nerves 2 and 3 to measure axonal projections, and measured the contraction forces of the I1/I3 muscle to determine the pattern of muscle innervation for the individual motor neurons. We demonstrate the complete process of first identifying motor neurons using muscle innervation, then characterizing their timing during motor patterns, creating a simplified diagnostic method for rapid identification. The simplified and more rapid diagnostic method is superior for more intact preparations, e.g. in the suspended buccal mass preparation8 or in vivo9 . This process can also be applied in other motor pools10,11,12 or in other animal systems2,3,13,14 Neuroscience, Issue 73, Physiology, Biomedical Engineering, Anatomy, Behavior, Neurobiology, Animal, Neurosciences, Neurophysiology, Electrophysiology, Aplysia, Aplysia californica, California sea slug, invertebrate, feeding, buccal mass, ganglia, motor neurons, neurons, extracellular stimulation and recordings, extracellular electrodes, animal model Measuring Cation Transport by Na,K- and H,K-ATPase in Xenopus Oocytes by Atomic Absorption Spectrophotometry: An Alternative to Radioisotope Assays Institutions: Technical University of Berlin, Oregon Health & Science University. Whereas cation transport by the electrogenic membrane transporter Na+ -ATPase can be measured by electrophysiology, the electroneutrally operating gastric H+ -ATPase is more difficult to investigate. Many transport assays utilize radioisotopes to achieve a sufficient signal-to-noise ratio, however, the necessary security measures impose severe restrictions regarding human exposure or assay design. Furthermore, ion transport across cell membranes is critically influenced by the membrane potential, which is not straightforwardly controlled in cell culture or in proteoliposome preparations. Here, we make use of the outstanding sensitivity of atomic absorption spectrophotometry (AAS) towards trace amounts of chemical elements to measure Rb+ transport by Na+ - or gastric H+ -ATPase in single cells. Using Xenopus oocytes as expression system, we determine the amount of Rb+ ) transported into the cells by measuring samples of single-oocyte homogenates in an AAS device equipped with a transversely heated graphite atomizer (THGA) furnace, which is loaded from an autosampler. Since the background of unspecific Rb+ uptake into control oocytes or during application of ATPase-specific inhibitors is very small, it is possible to implement complex kinetic assay schemes involving a large number of experimental conditions simultaneously, or to compare the transport capacity and kinetics of site-specifically mutated transporters with high precision. Furthermore, since cation uptake is determined on single cells, the flux experiments can be carried out in combination with two-electrode voltage-clamping (TEVC) to achieve accurate control of the membrane potential and current. This allowed e.g. to quantitatively determine the 3Na+ transport stoichiometry of the Na+ -ATPase and enabled for the first time to investigate the voltage dependence of cation transport by the electroneutrally operating gastric H+ -ATPase. In principle, the assay is not limited to K+ -transporting membrane proteins, but it may work equally well to address the activity of heavy or transition metal transporters, or uptake of chemical elements by endocytotic processes. Biochemistry, Issue 72, Chemistry, Biophysics, Bioengineering, Physiology, Molecular Biology, electrochemical processes, physical chemistry, spectrophotometry (application), spectroscopic chemical analysis (application), life sciences, temperature effects (biological, animal and plant), Life Sciences (General), Na+,K+-ATPase, H+,K+-ATPase, Cation Uptake, P-type ATPases, Atomic Absorption Spectrophotometry (AAS), Two-Electrode Voltage-Clamp, Xenopus Oocytes, Rb+ Flux, Transversely Heated Graphite Atomizer (THGA) Furnace, electrophysiology, animal model Fabrication of Nano-engineered Transparent Conducting Oxides by Pulsed Laser Deposition Institutions: Politecnico di Milano, Instituto Italiano di Tecnologia. Nanosecond Pulsed Laser Deposition (PLD) in the presence of a background gas allows the deposition of metal oxides with tunable morphology, structure, density and stoichiometry by a proper control of the plasma plume expansion dynamics. Such versatility can be exploited to produce nanostructured films from compact and dense to nanoporous characterized by a hierarchical assembly of nano-sized clusters. In particular we describe the detailed methodology to fabricate two types of Al-doped ZnO (AZO) films as transparent electrodes in photovoltaic devices: 1) at low O2 pressure, compact films with electrical conductivity and optical transparency close to the state of the art transparent conducting oxides (TCO) can be deposited at room temperature, to be compatible with thermally sensitive materials such as polymers used in organic photovoltaics (OPVs); 2) highly light scattering hierarchical structures resembling a forest of nano-trees are produced at higher pressures. Such structures show high Haze factor (>80%) and may be exploited to enhance the light trapping capability. The method here described for AZO films can be applied to other metal oxides relevant for technological applications such as TiO2 Materials Science, Issue 72, Physics, Nanotechnology, Nanoengineering, Oxides, thin films, thin film theory, deposition and growth, Pulsed laser Deposition (PLD), Transparent conducting oxides (TCO), Hierarchically organized Nanostructured oxides, Al doped ZnO (AZO) films, enhanced light scattering capability, gases, deposition, nanoporus, nanoparticles, Van der Pauw, scanning electron microscopy, SEM The Use of High-resolution Infrared Thermography (HRIT) for the Study of Ice Nucleation and Ice Propagation in Plants Institutions: Agricultural Research Service (USDA-ARS), Kearneysville, WV, University of Innsbruck, University of Saskatechewan. Freezing events that occur when plants are actively growing can be a lethal event, particularly if the plant has no freezing tolerance. Such frost events often have devastating effects on agricultural production and can also play an important role in shaping community structure in natural populations of plants, especially in alpine, sub-arctic, and arctic ecosystems. Therefore, a better understanding of the freezing process in plants can play an important role in the development of methods of frost protection and understanding mechanisms of freeze avoidance. Here, we describe a protocol to visualize the freezing process in plants using high-resolution infrared thermography (HRIT). The use of this technology allows one to determine the primary sites of ice formation in plants, how ice propagates, and the presence of ice barriers. Furthermore, it allows one to examine the role of extrinsic and intrinsic nucleators in determining the temperature at which plants freeze and evaluate the ability of various compounds to either affect the freezing process or increase freezing tolerance. The use of HRIT allows one to visualize the many adaptations that have evolved in plants, which directly or indirectly impact the freezing process and ultimately enables plants to survive frost events. Environmental Sciences, Issue 99, Freeze avoidance, supercooling, ice nucleation active bacteria, frost tolerance, ice crystallization, antifreeze proteins, intrinsic nucleation, extrinsic nucleation, heterogeneous nucleation, homogeneous nucleation, differential thermal analysis
<urn:uuid:d9780007-7e8a-4372-a33b-abc4a292dd9b>
CC-MAIN-2016-50
http://www.jove.com/visualize/abstract/25664764/biogeographic-patterns-structural-traits-cnp-stoichiometry-tree-twigs
s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698541839.36/warc/CC-MAIN-20161202170901-00150-ip-10-31-129-80.ec2.internal.warc.gz
en
0.898661
12,510
3.28125
3
The Original Oxygen Transfer Fish Transport Bag Don't be fooled by other bags claiming to do what Kordon's Breathing Bags do - there are no "Second Generation Breathing Bags" that are produced from the same or improved material or that can claim the same Oxygen Transfer Rate as genuine Kordon Breathing Bags. A TOTALLY UNIQUE NEW WAY TO SHIP FISH: Breathing Bags are a completely new approach to the shipping of live fishes, as well as aquatic invertebrates and aquatic plants, in plastic bags. The special plastic film used in the Breathing Bags generates the constant transfer of carbon dioxide out of the water in the bag through the walls of the bag, and the absorption of oxygen from the atmosphere through the bag walls into the water in the bag. This provides a constant source of fresh oxygen that fish and other aquatic specimens use to breathe. Kordon ® Breathing Bags™ represent a new approach to the problems of shipping live fishes and other aquatic animals and aquatic plants, including over long distances or for extended time periods. The product development staff at Kordon, teamed with plastics chemical engineers, have taken a technology first developed in space/military research and refined it to produce the bags being offered today. HOW THE UNIQUE BREATHING BAG FILM WORKS: The Breathing Bags are constructed of a special film that has a micro-porosity that allows the transfer of simple and complex gas molecules through the plastic wall of the bag. Carbon dioxide and oxygen in particular, as well as other gases - are constantly passing through the plastic bag via the micro-porosity. In other words - the plastic has gaps so small that water molecules cannot pass through - yet gas molecules can move freely. This provides a true "breathing" bag in place of a non-porous "barrier" bag as is used in traditional plastic polyethylene bags. As long as there is a breathable atmosphere outside the Breathing Bag, the fish or animals inside will not run out of oxygen. Carbon dioxide exits the bags at 4 times the rate oxygen enters the bags, thereby constantly purging the water of toxic carbon dioxide, and allowing oxygen to replace it in the water. Kordon has shipped millions of bags around the world (termed "Sachets") containing living foods (tubifex worms, brine shrimp, daphnia, glass worms, etc.) for aquarium fishes using the Breathing Bag technology. Hundreds of thousands of Breathing Bags have been used successfully to ship fishes, coral reef animals, and aquatic plants. OLD TRADITIONAL SHIPPING METHOD: Prior to the invention of Breathing Bags, the only plastic bags available for shipping fishes and aquatic invertebrates were made of polyethylene and had created a non-porous "solid-film barrier bag". There was no porosity-mechanism to allow the passage of gasses through the bag wall. When using these "barrier" bags any oxygen required to sustain the life of the fish or other aquatic life inside the bag must - of necessity - be added as a gas inside the bag prior to sealing. This process has many problems. 1 - High concentrations of oxygen can cause flammable conditions. 2 - The presence of oxygen gas inside the bag takes up a lot of valuable shipping space. 3 - Once the supplied oxygen is used up there is no more available - fish can quite literally drown in traditional old-fashioned "barrier-bags". 4 - A bag partially full of water with the rest filled with oxygen allows the contents to slosh during transport, stressing and possibly injuring fishes. 5 - Toxic carbon dioxide from the fishes' breathing builds up in the water, displacing the oxygen. The oxygenated air in the bags may not be satisfactory for fishes' breathing, because (particularly from sources in underdeveloped countries), the bottled oxygen may be contaminated. IT IS TIME FOR A CHANGE: Time in the bag has always been the cause of losses in shipping live fish. With old, traditional air-chamber bagging methods there has always been a short time span between bagging the fish and getting safely to their destination. You typically have a short time span allowed before the air in the bags is depleted of all available oxygen and the fish begin to deteriorate. Long distance fish shipping using the old methods has always required periodic opening of the bags and adding of new oxygen. With Kordon Breathing Bags fish have been sealed into bags and sent on long transfer trips that have lasted for 7 to 10 days with no re-bagging and no addition of oxygen. The fish shipments have repeatedly arrived at their destination with very low losses and healthy, non-stressed fish. The continual flow ow carbon dioxide out of the bag and oxygen into the bag allows for safer shipping no matter how far the distance. In comparison - using the Kordon Breathing Bags allows for no sloshing and no stress. The Breathing Bags are sealed with as little air inside as possible. Ideally only water touches the inner surface of the bag. No air chamber of added oxygen means no slosh-zone and turbulant travel for the fish inside. You can test this by laying a filled Breathing Bag on a flat surface and allowing the fish to settle down. Picking up one edge of the bag - you can roll it until it is totally reversed - upside down - yet the fish inside will not move at all. No sloshing, no jiggling... no stress. Less stress equals less losses or injuries during shipping or transfer of live fish. Item No. 50201 - Breathing Bag™ 7" x 14" (full case is 2000 pieces) Item No. 50202 - Breathing Bag™ 9" x 16" (full case is 2000 pieces) Item No. 50203 - Breathing Bag™ 12" x 19" (full case is 1250 pieces)
<urn:uuid:11aadb34-dc1c-41eb-a53e-d62ead518a4e>
CC-MAIN-2016-50
http://www.kordon.com/kordon/products/aquarium-pond-accessories-2/breathing-bags
s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698541839.36/warc/CC-MAIN-20161202170901-00150-ip-10-31-129-80.ec2.internal.warc.gz
en
0.925366
1,231
2.65625
3
As if there weren't already enough reasons to be creeped out in the dark, scientists exploring a remote cave network in Venezuela have discovered a new species of cricket that swims instead of jumps and has an appetite for flesh, according to the BBC. Scientists, who were exploring the caves with a BBC/Discovery Channel/Terra Mater TV film crew for an upcoming documentary, were able to film the bizarre new species as it was being discovered. At one point the cricket nearly ripped off a chunk of its handler's thumb. Assuming there aren't any larger, scarier carnivores still lurking elsewhere in the shadows of the cave, it is believed that this cricket is the apex predator in its environment. The one trait that makes this cricket particularly unique, though, is its ability to swim. "[It's] the most unbelievable thing I've ever seen," said biologist and presenter, Dr. George McGavin. "It swims underwater and uses its front legs as a proper breaststroke and its hind legs kicking out. It was just amazing." It also appears to have evolved specialized palps for ultra-sensitive tasting in its dark environment. Most species of troglobites, or cave-dwelling animals, have evolved to live without eyes, instead relying on their senses of taste, hearing and touch (or occasionally some other specialized sense). The cricket was one of three new species discovered on the expedition. Scientists also found a cave catfish that had evolved large sensitive organs on the front of its head to help it navigate in the dark. The foreboding cave environment had also caused the fish's skin to become pale, and leave it with only remnants of eyes. Thirdly, they discovered a new species of harvestmen — a type of arachnid that includes the daddy-longlegs — that had lost its eyes entirely. "If we'd had the time there would have been other [discoveries] there," said McGavin. "You can't really as a biologist, put into words how it feels to see something, to film something that's never been named." Caves have become hot spots for new species discoveries in recent years, as scientists have learned to appreciate how these isolated environments can cause rapid speciation. Organisms that originally colonize cave environments typically become isolated from their ancestral populations on the surface. The harsh environment, combined with in-breeding, can select for obscure adaptations in short order. The cricket, which is so new that scientists have yet to name it, was found two miles into the cave network. That's a long way from the surface, and far away from any other species of cricket. That's probably good news, though. This is one creature you wouldn't want lurking in your swimming pool. Related animal story on MNN: 10 creatures that thrive in caves
<urn:uuid:4f4fca01-7251-4a84-975d-2798855cfc8d>
CC-MAIN-2016-50
http://www.mnn.com/earth-matters/animals/stories/swimming-flesh-eating-cricket-discovered-in-south-american-cave
s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698541839.36/warc/CC-MAIN-20161202170901-00150-ip-10-31-129-80.ec2.internal.warc.gz
en
0.981637
580
3.078125
3
Note: Sample below may appear distorted but all corresponding word document files contain proper formattingExcerpt from Term Paper: The development of the prototyping methodology The Benefits of using Prototyping today The evolution of Rapid Prototyping The creation and development of three banking websites using prototypes Prototyping for banking related GUI sing mobile phones for banking Banking systems using ATMs and ADCs Prototyping in the Banking Field What is Prototyping? The Web defines prototyping as the term that is used to describe the process by which physical mock ups or models are made up out of the proposed designs. In the days before the wide usage of the computer aided technology, prototyping was done using traditional models. Today however, prototyping is done using three dimensional computer models. This method is definitely more efficient as well as quicker than the traditional methods. The computer-aided prototyping is also referred to as 'Rapid Prototyping'. (Fundamentals of Graphics Communication, 3/e) Sometimes, certain partial aspects of the program are created using this method, and this helps the user to understand the problems or virtues of the program before it is implemented anywhere. Users are offered a chance to find out and correct the mistakes if any in the program by the prototype that has been created. (Chapter 14 - Programming and Languages) One of the most complex problems being faced by the businesses of today, including that of Banking, is the re-designing and the integration of the existing business processes that have been in use up until now. The development of the prototyping methodology: One of the most useful methodologies that these businesses are finding extremely useful and practical is the method of re-designing a business process design named 'Business Process Re-engineering Methodology' that explains in great detail how the existing design can be changed and re-modeled to suit the newer business processes and therefore the newer requirements. This new design is referred to as the 'Product-Based Development Design', and the innate strength of this method lies in the fact that the method of 'prototyping' is used extensively throughout the procedures described. Prototyping is mooted as the method by which the end-user can use his inputs for the purpose of authenticating and validating the process designs described. It is a fact that all over the world, all the various businesses are being either fine-tuned or re-designed or re-engineered or value added or right sized or re-aligned, in the name of Business Process Re-engineering or what is also known as 'BPR' to dramatically improve the existing business. From the time of the creation of the BPR, when it was solely used for improving and optimizing all the various business processes within one single organization, to a wider usage wherein various business processes across several different enterprises are re-designed using the methods prescribed by the BPR. A BPR generally tackles a two-fold challenge: a technical challenge, which is in other words the problem of developing a new process design that would improve in a dramatic way the existing process design, and the second challenge being that of the socio cultural changes that are an inevitable result of the severe organizational changes that may cause the persons involved in the organization to react bitterly towards the changes. The focusing on the socio cultural changes to be brought about and the management of the project is seen as important in the BPR method rather than the method of trying to introduce technical changes. Another area of focus in the BPR method is the utilization of the anecdotal and descriptive points-of-view, rather than the prescriptive and quantitative view generally used by other methods of re-designing. The basic idea of creating a method that would solve technical problems in the technical BPR challenge is the Product-Based Development Design, a method whose innate strength lies in using prototyping. (Using Prototyping in the Product-Driven design of Business Processes) The Benefits of using Prototyping today: Prototyping is the method that has been used for many years, probably from the early eighties onwards, as the method by which problems if any can be disposed of or 'flushed away', and user acceptance in the field of information systems development can be made very real and possible. From that time onwards, prototyping has been accepted as the one good method to be used in all fields of the development of interactive systems. One example where prototyping has been used successfully to improve the business is when it was used in 'Rapid Application Development'. What are the benefits of using the method of prototyping to develop and support the various business processes of today? These benefits can be seen as being two-fold. Primarily, the first steps or the 'process steps' as they are known, that describe the initial steps that are taken in the development of a new business process, are taken by a human being who is completely thorough and familiar with all the information technology that is required by him in order to fulfill the various steps involved in the process, or these steps are fully automated with absolutely no human interference, thereby minimizing the chances for the creation of any mistakes whatsoever. One example of the benefits of the usage of this process is demonstrated by the BPR process inculcated in a Dutch Social Insurance Company, wherein the first eighteen steps of the required twenty-four steps needed to carry out the initial processes of the new design were completely automated. The different specifications for the purpose of the execution of the steps must be taken from the various objectives prescribed in the process steps that they are to support; otherwise, the entire system would be termed as less-than-adequate. This is where the importance of developing and utilizing a prototyping design is seen clearly. Design processes would be protected by the elimination of design errors or insufficiencies, if any, and the subsequent efforts in design development would become more efficient and quick. Another reason for the development and usage of a prototype in a BPR process is that when end users are confronted with the basic design even before the design has actually been implemented; the opinion of the end user can be obtained without further expenses. (Using Prototyping in the Product-Driven design of Business Processes) When seen in the context of 'change management', this is very important since it not only saves costs but also demonstrates the viability of the design being created and developed. The end user therefore has the opportunity to actually use the design, and they are requested to give their feedback, and this helps the organization to see the pros and cons of the design before vast amounts of funds are used in its implementation, after which it may not be of nay great benefit; it may even fail miserably. When the end user uses the design, the company knows the value of the design and then the management can either go ahead with the necessary changes if any inculcated within the design, or shelve the design. The end user, however, has to be made aware of the fact that though the new process design is different from the design that they have been using all the time before the development of the new one, the purpose behind both remains the same, that is, the purpose of delivering the very same products as it was doing earlier. (Using Prototyping in the Product-Driven design of Business Processes) Therefore it can be stated that when PBD or Product-Based Development is used for the purpose of developing and implementing changes using the system of prototyping, the system of PBD is in itself the translation or changing of an innate manufacturing idea to be adaptable to the world of administrative processes such as banking, government, and insurance and so on. For example, when the design of an informational product like a mortgaging loan or an insurance permit needs to be changed and modified, its basic structure is decomposed and dismantled into a number of informational elements, which are in turn used to arrive at a process design. The actual information elements of a product may be inherently related to each other in different ways. For example, when a consumer loan is considered, the primary fact to be considered is whether the loan can be granted to the applicant, and if so, the amount of the loan, and the other conditions of granting a loan to an individual must also be evaluated. Therefore, when the granting decision is taken as being one of the elements of the administrative process, then the reason for the application of the loan, as well as the credentials of the applicant must be taken into consideration. This information will be used to determine the value of the granting decision of the loan. PDB in this instance prescribes the application of essential logic to the representation of the facts of the case. Therefore it can be said that the inter-dependencies between all the involved elements and the logic are determine the various specifications of the product that is involved. This method can be used very successfully…[continue] "Prototyping In The Banking Field" (2004, December 08) Retrieved December 5, 2016, from http://www.paperdue.com/essay/prototyping-in-the-banking-field-59095 "Prototyping In The Banking Field" 08 December 2004. Web.5 December. 2016. <http://www.paperdue.com/essay/prototyping-in-the-banking-field-59095> "Prototyping In The Banking Field", 08 December 2004, Accessed.5 December. 2016, http://www.paperdue.com/essay/prototyping-in-the-banking-field-59095 The last century has seen an increase in the level of international purchases which has been supported by the developments in transportation and technology. Goods can move faster than before with developments in logistics. The negotiation and forming contracts for purchase with companies and communicate with potential suppliers in distant countries is also easier than in the past with the internet and tools such as video conferencing and emails. Hyundaicard's Marketing Strategy: Case Study Write a full case analysis: HyundaiCard's Marketing Strategy Hyundaicard's marketing strategy General overview of Hyundaicard Current marketing strategy assessment Financial analysis Strategic alternatives Assessment of the strategic alternatives Implementation plan Exhibits Segmentation criteria Qualitative ranking of alternatives Market share of credit card companies Preference of payment Hyundaicard financial statements This essay is a case study for Hyundaicard marketing strategy. Hyundaicard is a company situated in Korea's credit card industry. It forms the basis of the case study because of its' Benchmarking Keyloggers for Gathering Digital Evidence on Personal Computers Keyloggers refers to the hardware or software programs, which examine keyboard and mouse activity on a computer in a secretive manner so that the owner of the computer is not aware that their actions are monitored. The keyloggers accumulate the recorded keystrokes for later recovery or remotely convey it to the person employing them. Keyloggers aimed to serve as spyware and currently
<urn:uuid:228005b0-b97c-4ae8-91a5-36214684bfbc>
CC-MAIN-2016-50
http://www.paperdue.com/essay/prototyping-in-the-banking-field-59095
s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698541839.36/warc/CC-MAIN-20161202170901-00150-ip-10-31-129-80.ec2.internal.warc.gz
en
0.950287
2,256
3.046875
3
You are here WASHINGTON - The obesity epidemic in America may soon overtake tobacco as the leading cause of preventable deaths, Surgeon General David Satcher warned today. He looked to schools, the fast-food industry and communities to eat healthier and exercise more. Around 300,000 people a year die from illnesses directly caused or worsened by being overweight. Some 60 percent of adults are overweight or obese, as are nearly 13 percent of children, rates that have steadily risen over the past decade. Satcher called for a national attack on obesity like the one federal health officials declared on smoking. He recommends daily physical education for every grade in schools, healthier food choices in schools, safer playing areas for children, more physical activity provided by employers, and healthier portion sizes. In addition, inner cities are encouraged to study fast-food marketing practices and find ways to offer affordable fruits and vegetables. The National Restaurant Association rejected as "simplistic" the idea that fast-food joints cause obesity, and the National Soft Drink Association urged more focus on Satcher's exercise recommendations, calling vending machines in schools adequately regulated. The Agriculture Department has targeted childhood obesity as a major concern and will take some action, though just what hasn't been decided, said Ron Vogel of the special nutrition program. Officials are helping schools to improve lunch nutrition. While USDA has authority to restrict use of vending machines only if they are in cafeterias, it is considering whether to seek broader authority.
<urn:uuid:89d6b5fe-e56d-4f2e-b966-9b45bed79700>
CC-MAIN-2016-50
http://www.progressivegrocer.com/node/66699
s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698541839.36/warc/CC-MAIN-20161202170901-00150-ip-10-31-129-80.ec2.internal.warc.gz
en
0.952074
297
2.765625
3
The place of Project Based Learning and Computer Games in the learning of our students. This year in Year 8 Religion and Philosophy, we are working on a teaching and learning project exploring the benefits of game based learning. Through this project, students will explore religious beliefs from multiple perspectives, with a view to developing a higher order understanding of different world religions. Project based learning is a way of teaching and learning that means that students are immersed in a project over a period of time to allow them to not only gain knowledge of the topic, but to also deepen their understanding and engage in higher order, critical thinking processes. In 2008, Dr Helen Farley at the University of Queensland carried out research on the use of an Internet-based Multi-User Virtual Environment (MUVE) for the teaching and learning of first year undergraduate Religious Studies students. As I read through her research, I could see links with many of our curriculum and MYP goals and could see benefits from our Year 8 students at Somerset College embarking on a similar project. Not only would students be learning about different religious beliefs through a project based approach, but it would also allow students to explore these beliefs through a first person perspective in a virtual world. The past term has been spent in preparation for the unit, exploring the process and the educational theory behind 'games based learning'. I have had the opportunity to discuss the project with academics involved in this field from the University of Southern Queensland, Griffith University and Bond University. I am very grateful for the time and support they have offered for this project. The project will take two parts. In Term Three, students will undertake a project exploring a particular world religion. Once they have explored the beliefs, rituals and practices of one religion, they will prepare to enter the second phase of the project. In this phase they will construct an avatar (a character in the 'game') that represents a follower of the religion they have studied. Then they will enter the game and interact with other avatars from their class. The aim is for them to explore other religious communities that are in the game and find out through discussion about different religions. Obviously this will be a closed, carefully monitored 'world', or 'island' as they are called, with only our students allowed on to the island. The students will keep a blog of their experiences in the game, with challenges to promote higher order, critical thinking. As we get closer to the start date, more information will be provided to Year 8 parents. It promises to be an exciting Semester 2! « Back to Index
<urn:uuid:e6bec587-6b6d-4f1e-bb64-2758e94e2aae>
CC-MAIN-2016-50
http://www.somerset.qld.edu.au/News/Senior/Teaching-and-Learning-Projects-4/
s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698541839.36/warc/CC-MAIN-20161202170901-00150-ip-10-31-129-80.ec2.internal.warc.gz
en
0.958594
521
2.984375
3
What happens when you introduce invasive burrow dwelling rodents to an island populated by burrow nesting birds? Turns out it’s not so great for the birds. A recent study of Atlantic petrels (Pterodroma incerta) has documented the unexpectedly severe impact of the invasive house mouse (Mus musculus) on hatchling survival. The Atlantic petrel is listed as endangered by the International Union for Conservation of Nature, and is very likely now limited in range to a single island, Gough Island, in the South Atlantic Ocean. Previous populations on another island were extirpated (driven to local extinction) by the invasive black rat (Rattus rattus). Gough Island is free of rats, but population demographic models of the last population of Atlantic petrels shows a shrinking population size. So what’s causing it? Filming the nest burrows of Atlantic petrel and other ground nesting sea birds has revealed the culprit: Invasive predators are often most devastating in an island context. Because islands are limited in space and resource availability, island fauna often evolve without major predation pressure. When a predator from the mainland invades, the island species don’t know how to defend themselves, and they have nowhere to run. There may be up to 2 million mice on small Gough Island alone (only 65 km2), and when food becomes scarce in the winter, they turn to their conveniently burrowing seabird neighbors. But we aren’t talking about your average house mouse, here. These mice are capable of eating very large seabirds; these petrel chicks weigh more than half a kilogram, and on the same island, mice also prey on albatross chicks weighing more than 10 kilograms. The mice on Gough Island reportedly reach up to half again as large as an average house mouse – up to 10 inches in length, including the tail. This may be an interesting example of an invasive animal demonstrating island gigantism. Generally, smaller mice are better able to avoid predators. But on Gough Island, there are no significant mouse predators, so there’s no advantage to being small. But there might be an advantage for larger size, if larger mice are better able to hunt large seabird chicks, and therefore better able to get a meal in the winter, and leave more progeny behind. Natural selection acts on invasive species, just like anything else, making them better adapted, and able to take advantage of, their novel habitats. R. M. Wanless, et al. (2012). Predation of Atlantic Petrel chicks by house mice on Gough Island Animal Conservation DOI: 10.1111/j.1469-1795.2012.00534.x
<urn:uuid:57c343d3-d466-4124-9b7e-1f9241396e97>
CC-MAIN-2016-50
https://alienplantation.wordpress.com/2012/07/19/insert-rodents-of-unusual-size-joke-here/
s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698541839.36/warc/CC-MAIN-20161202170901-00150-ip-10-31-129-80.ec2.internal.warc.gz
en
0.92487
564
3.8125
4
Recently, This American Life did an episode on how easy it is to overdose on acetaminophen (also called paracetamol in countries outside of the US). You can listen to the entire episode in the link below: What I find interesting is why this drug is so dangerous. In the liver, there are a group of enzymes that oxidize many of the drugs and organic substances that enter your body. In fact, there are genes for 57 various types of these enzymes. These are called Cytochrome P450s. Whenever you see warning labels like “don’t eat grapefruit with this drug,” it is usually because of the way the liver cytochromes are processing these compounds. So, what happens when you take acetaminophen? Most of it is processed to a non-harmful compound. However, 5% of this drug is processed by a liver cytochrome known as CYP2E1. This 5% becomes a toxic compound (NAPQI). Generally, the liver can take care of this substance through a reaction with a compound known as glutathione. However, in cases of overdose, there is not enough glutathione to convert the bad NAPQI, and it hangs out in the liver and reacts with cellular membranes. Essentially, your liver cells are slowly killed.
<urn:uuid:fa8a4ef4-51f4-4fb3-ab01-dfcd8fe0bd3e>
CC-MAIN-2016-50
https://scientificfemanomaly.com/2013/09/24/the-dangers-of-acetaminophen/
s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698541839.36/warc/CC-MAIN-20161202170901-00150-ip-10-31-129-80.ec2.internal.warc.gz
en
0.966536
276
2.96875
3
It's a familiar scene. The friend or relative gives a present to the small child. The parent, holding the young child, says "Say thank you for the gift." "Thank you," the child says, parrot-like. Cute. but is it the child really learning to express sincere gratitude? Not necessarily. Requiring children to display manners and courtesies that they are too young to understand gives the appearance of good manners, but the child may not develop the sincere concern for others that can make those manners a lifelong habit. Children learn more deeply from example and encouragement than from being told what to do. Forcing the routine formalities of good manners onto children who don't understand the words or actions may eventually mean sacrificing a truer, slower development of the child in favor of quicker, less sincere results. Rather than encouraging meaningless mimicry, parents should teach their children manners by example. Decide the courtesies you wish your children to display and then teach those courtesies by showing them to your child. Some parents automatically do this. When their baby hands them the pacifier, they are likely to say "thank you" when accepting it from the child. Such a child is likely to also voice "thank you" when he or she is old enough to talk. Parents should think of themselves as role models for their children in the world of manners. In the long run, children are more likely to do what their parents do, than what their parents tell them to do. Children will imitate polite behavior their parents use, but it takes time. Learning acceptable and polite behavior is a long process for children; it is part of the longer process of growing up. In addition to guiding their children with good example, parents should also remember to reinforce their children's good behavior with compliments. At least once a day, when the child has said 'please' or shown you some other polite consideration, hug the child and show how pleased you are with him or her. This kind of recognition makes a strong impression on children and will make them feel more positive about themselves and the behaviors they are learning. To insure the development of sincere good manners in children, parents should be careful about how they correct bad manners and negative behavior. Children should be corrected without being ridiculed or suppressed. When a child does something upsetting, confront the child honestly with how you feel about such behavior. For example, a parent might say, "I'm unhappy that you're banging your dish on the table. That noise hurts my head. Please keep your dish on the table so I can hear what everyone is saying." Children, by their very youth and inexperience, may not be aware that making excessive noise at the table bothers other people. They may need to have such behavior pointed out to them before they can change it. Children should also be held responsible for their behavior. If a child tips over her glass of milk, the mother may hand her a rag and say, "Please wipe up the milk. I'll help with this part over here and you do the puddle near you." Learning such responsibility is an important foundation for later good manners and concern for other people's time and feelings. And even the best-behaved children have to be allowed time and space in which they can freely express their feelings. Children have upsets and problems just like adults. Parents can expect their children to relapse occasionally into less desirable manners. At that point, it is important for parents to deal with the real issue at hand, which may be a problem in school or with a friend, or an illness coming on, rather than the issue of good manners. Teaching real respect and consideration for others can begin with parents' sincere concern for the child's view of the problem. Try to talk it through with the child, and when the emotion has been expressed, help the child work out ways to deal with the problem. Then the child may be able to return to his or her former good manners. At what age should a child begin learning manners and the courtesies that should be a part of daily life? If children are learning from example, as they should, those examples should begin even when the child is still in the crib. Good manners should be a part of life, not something we put on for company, or something we expect from children but not ourselves. Source: Suzanne West, Department of Human Development and Family Studies, New York State College of Human Ecology, Cornell University. Parent Pages was developed by Cornell Cooperative Extension of Suffolk County. HD 28 Last updated October 16, 2015
<urn:uuid:bcb9b41a-f0ac-4b94-be44-94966bd6cfa7>
CC-MAIN-2016-50
http://ccetompkins.org:80/family/parent-pages/challenging-behaviors/learning-good-manners-by-example
s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698542112.77/warc/CC-MAIN-20161202170902-00470-ip-10-31-129-80.ec2.internal.warc.gz
en
0.967499
932
3.609375
4
That pest was the Emerald Ash Borer (EAB) Agrilus planipennis. County Extension Hotlines and Professional Arborists began to get calls about Ash trees that were dropping leaves and dying for no apparent reason. I had firsthand knowledge of this situation as I had two Green Ash trees in my yard that were displaying the same symptoms. Dieback began at the top of the canopy and worked its way downward. Entomologists from Michigan State University and the U.S. Forest Service were called in and finally identified the problem. An immediate quarantine of six southeastern Michigan counties was implemented with hopes of stopping the spread of the EAB. It was initially thought that this insect was only capable of flying a half-mile per year. I was fortunate or maybe unfortunate enough to be heavily involved with Michigan State University Extension Service through my volunteer work with the Master Gardeners. Educational programs were quickly put together to instruct the public about this pest and encourage them not to move firewood outside of the quarantine zones. We were speeding the word via, newspapers, television, radio--anyway we could. Sadly it was too little too late, experts were amazed at the speed at which the EAB was moving. It was moving not only by flight, but by folks hauling infested firewood to their cottages. It was also hitching rides on nursery trees and logs for lumber. Many of the subdivisions built in the 1960s and '70s had been landscaped exclusively with ash trees. They make great street trees, nice shape and fast-growing. By 2005 most of these trees had succumbed and these subdivisions looked like war zones with dead trees for block after block. When the EAB was initially discovered, most governmental and university sources believved it was was 100% fatal. The only control was to cut the tree down and "chip" it to ensure the bugs were dead. Later they discovered that the EAB could survive a "chipper". However progress was being made quickly in controlling the EAB. In 2004 soil drenches of imidacloprid began to show promise of eradicating the EAB. As of today this product seems to be very effective at controlling this pest. There are also products available that are injected directly into the tree that are quite effective. The EAB has infected Ash trees in 14 states and Canada. After arriving in Michigan, it has traveled as far west as Missouri, Iowa, and Minnesota, as far south as Kentucky, Tennessee and Virginia, and east to New York and Pennsylvania, including all of the states in between. If you have an ash that begins to drop leaves or wilts or if you see a tree with these symptoms in a park or other area, let someone know. These trees can be saved. You now have a defense that those of us in Michigan didn't have back in 2002. Of the millions of ashes that grew in Michigan I estimate that 95% have been lost to the EAB. Don't let that happen to your trees. My thanks to Dr. David Roberts of Michigan State University (The EAB GURU) for of this information. Dr. Roberts spearheaded much of the research in control of this pest.
<urn:uuid:2ed2becb-fe60-4007-a4c5-7cedf5aa47e1>
CC-MAIN-2016-50
http://davesgarden.com/guides/articles/view/3184/
s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698542112.77/warc/CC-MAIN-20161202170902-00470-ip-10-31-129-80.ec2.internal.warc.gz
en
0.984965
659
2.734375
3
Page 2 of 3 Although faceprinting seems like a reasonable method there are more sophisticated techniques available. The oddly-named “eigenface” analysis works by trying to find sets of standard faces. “Eigen” is German for “characteristic” and there is a mathematical technique for finding the smallest number of combinations of features which best represent any data you care to give it. It works by looking for combinations of features that differ the most across the sample provided. The combination of measurements that varies the most is called the first eigenfeature and it’s the one that most separates the faces. It is clearly the one to use if you are trying to match a new face to the collection. The feature that the faces vary on the most after the first one has been taken into account is the second eigenfeature and it’s the next best to use. You can carry on in this way automatically generating a set of eigenfeatures until you have achieved the classification accuracy you need or until you run out of computing power. A set of eigenfaces – that they don’t look recognisable by a human is one of the drawbacks of the system A face reduced to its eigenfeatures is called an eigenface. The method may be powerful but it does tend to work in ways that aren’t intuitive. We like to think of faces as particular combinations of big ears, narrow eyes and so on but eigenfeatures are defined mathematically and don’t always correspond to anything a human understands. In most cases this doesn’t matter but when you are shown a photo of an eigenface don’t expect to recognise it! In practice an eigenface recognition system has many of the same stages as a faceprint system. A scene is scanned until a face shape is located. It is then rescanned at higher resolution and the face image extracted. Standard facial features are then identified – eyes, nose and mouth – and are used to warp the image to a standard form. For example, the face is rotated to make the eyes level. Once the face is in standard form it is analysed into its eigenfeatures and matched in the database of eigenfaces. The good news is that once you know what the eigenfeatures are it’s quick and easy to analyse a face. The bad news is that it takes a lot of computer power to work out what the eigenfeatures should be and to create a database of eigenfaces. Currently eigenface recognition is still under development and there are many variations on the basic scheme. The best eigenface analysis seems to offer the highest recognition accuracy and the lowest false alarm rate. In addition it also seems to be harder to fool using a disguise or changing hairstyles. A third, although less practical alternative at the moment, is to use a neural network to learn what people look like. If you want to know about this approach in detail see: Neural Networks. In principle it should be easy to recognise faces using a neural network. All you have to do is show the network the faces over and over again until they are recognised correctly. Once trained the neural network will, in theory, recognise the people used to train it. A neural network mimics the way the brain works and learns to recognise faces by being shown them repeatedly. The problem with a neural network is that it takes a long time to learn even a small set of faces. It takes thousands of training sessions to achieve even a reasonable recognition rate. What is more, if you want to add another face it has to be retrained!
<urn:uuid:db9b66cd-81d5-4fb5-8c6f-558af110a491>
CC-MAIN-2016-50
http://i-programmer.info/babbages-bag/1091-face-recognition.html?start=1
s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698542112.77/warc/CC-MAIN-20161202170902-00470-ip-10-31-129-80.ec2.internal.warc.gz
en
0.942406
751
3.484375
3
Trading Around the World - eBook Bundle Created as a supplement to existing middle- school world geography and world history courses, the 5 units in this guide introduce students to basic economic concepts. - Economic Survival: Resources, Production, and Scarcity - Working and Living Together: The Importance of Trade - Gross Domestic Product: Measuring the Income of Nations - Productivity: The Key to Increasing the Wealth of Nations - Economic Systems: How Nations Organize Their Economies Each unit includes instruction materials, group activities, and individual projects.
<urn:uuid:fc90a158-d1c3-4bac-80c1-9bf4601d85fd>
CC-MAIN-2016-50
http://store.councilforeconed.org/products/trading-around-the-world
s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698542112.77/warc/CC-MAIN-20161202170902-00470-ip-10-31-129-80.ec2.internal.warc.gz
en
0.760394
113
3.875
4
- Historic Sites A Village Disappeared On the sixtieth anniversary of Pearl Harbor, the granddaughter of a Japanese detainee recalls the community he lost and the fight he waged in the Supreme Court to win back the right to earn a living November/December 2001 | Volume 52, Issue 8 To the casual visitor, terminal island in Los Angeles Harbor is no more than a complex of dull warehouses and empty lots. The waterfront may feature a lonely boat or two and the streets suffer the occasional rumbling tractor trailer, but few people come here, adding to the gloom of this industrial neighborhood. I see a very different place. I imagine a bustling main street with lively shops. 1 see people hurrying to their jobs and children playing in the schoolyard. I hear the voices of my family as they discuss their daily activities. Seventy-years ago, Terminal Island was the site of a Japanese fishing village and the home of my grandfather. Then, almost overnight in 1942., it was abandoned. Born in 1888 in a village north of Tokyo, my grandfather, Torao Takahashi, was the only child of a once-aristocratic samurai family. The Meiji Restoration of just two decades earlier had finished off the feudal organization and isolationism of the Tokugawa Era and had opened Japan’s doors to the West. Curious about life beyond the Land of the Rising Sun, students sought education abroad, and workers pursued prosperity in the fields of Hawaii and California. My grandfather left his hometown in 1907. Arriving in Seattle but soon making his way down to Northern California, his first goal was to learn English. He enrolled in a public high school while working as a “schoolboy” for a white American family in San Jose. The job was thought a demeaning one for males, the equivalent of being a maid in Japan but the only work young Japanese men could get that allowed time for study. For several years, he continued his education, managing to acquire considerable fluency in English. In 1914 my grandfather arrived in Wilmington, a community in Los Angeles, and not long afterward he crossed the channel to Terminal Island. During this same period, he briefly returned to Japan to claim his family-arranged bride, Natsu Arai, my grandmother. In six years—1918 to 1924—they had six children, including my father, Kenichi. Terminal Island is not a natural landform; it’s a human invention that was created when the city of Los Angeles built its deep-water port around the turn of the century. Several small islands in San Pedro Bay formed its foundation. Today, when people speak of the vanished community of Terminal Island—the place I envision when I go there—they are referring to the Japanese neighborhood of East San Pedro, on the island’s west end. Another community, simply called Terminal, grew up farther east, where an eclectic mixture of immigrants—Sicilians, Slovenians, Portuguese, Mexicans, Filipinos, and Japanese—lived side by side working the fishing, lumber, and shipping industries. But East San Pedro, my grandfather’s home, was almost too percent Japanese and dependent exclusively on fishing. The first Japanese immigrants in the area were abalone divers operating off nearby White Point around 1900. This was lucrative work, but it proved short-lived as catches dwindled and the state of California discouraged further expansion of the industry. Before long, the abalone hunters turned to sardines, tuna, and other fish. East San Pedro was a Japanese village on American soil. At its peak in the 19305, about 3,000 first- and second-generation Japanese lived there, outnumbering all the other immigrant groups on Terminal Island. Unlike other Japanese communities in the United States, the isolated residents here maintained their native identity with significant success. They ate Japanese foods and celebrated Japanese holidays. Children spoke English within the walls of their public-school classrooms but immediately slipped back into Japanese once outside. “We used to catch hell from the schoolteachers for speaking Japanese,” remembers my father. Baseball, zoot suits, and jitterbugging eventually found their place among the second generation, or nisei, but such influences from the larger non-Japanese society were limited compared with the impact they had on the mainland. East San Pedro was a fishing village in every respect. Even the streets were named accordingly—Albacore, Bass, Barracuda, Tuna—and the main wharf lay adjacent to Fish Harbor. Some men fished alone, trolling rock cod, halibut, and other catches for the fresh seafood market, but most worked for the canneries. Early in the last century, A. P. Halfhill discovered that steaming albacore tuna produced pale flesh similar to that of poultry, and he coined the phrase “chicken of the sea.” Halfhill, Frank Van Camp (who took up the Chicken of the Sea brand), and other pioneer fish packers set up canneries in cities all along the California coast to meet the increasing consumer demand for canned tuna and sardines. Most of Terminal Island’s canneries lined Fish Harbor, employing men as fishers and women as cannery workers. For both it was a 24-hour on-call operation: The fish dictated the fishers’ schedules, while the incoming boats determined the cannery workers’ shifts. Each cannery had a unique whistle call that blew the moment one of its boats pulled in. From decks often squirming with the catch, the crew immediately unloaded their tonnage for processing. Mothers regularly arose in the dead of night, leaving their sleeping children in bed and hustling to their jobs in the processing lines.
<urn:uuid:058b8927-826c-4d5b-b30b-dc612b086d36>
CC-MAIN-2016-50
http://www.americanheritage.com/content/village-disappeared
s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698542112.77/warc/CC-MAIN-20161202170902-00470-ip-10-31-129-80.ec2.internal.warc.gz
en
0.969546
1,184
3.390625
3
Imagine a day when the crops planted by the farmer down the road aren’t grown to supply food, but to produce electricity. This scenario is still largely unimaginable in the United States, but it’s been a reality for more than a decade in Europe, where new technology provides both a growing source of clean, renewable energy and financial stability to family farmers. The energy is called biogas, and it’s based on the principals of decomposition and fermentation. Biogas systems, also called methane digesters, transform organic materials such as manure, crops, or food waste into methane, a fuel much like natural gas. Methane can be used to generate electricity and provide heat, and even a moderately sized biogas system can create enough electricity to pay for itself relatively quickly by selling power back to the grid. In Germany, Austria, and Italy, it’s estimated that more than 4,000 biogas systems are in operation on farms of various sizes, mini-power plants that produce electricity around the clock. Some farmers who once ran dairy operations have even sold off their milk cows and now produce crops expressly for the purpose of making electricity. For five days in early June, a ten-person delegation from upstate New York and Vermont visited six farms in Austria, which is a European leader in renewable energy. The trip was organized by MWK Biogas North America Corp., the US affiliate of a German designer of state-of-the-art biogas systems. In Europe, MWK and its subsidiaries boast annual sales of between 80- and 100-million Euros annually. The Americans’ aim was to observe how this cutting-edge technology is being applied to create efficient and cost-effective green energy systems, and consider how that technology can be imported to the US. “I had to go to Austria to see how prehistoric the United States is,” says Albert Floyd, a community leader who owns the general store in Randolph Center, Vermont. Like others on the trip, he returned amazed by the widespread and sophisticated applications of all types of alternative energy technology in Austria, and by its beautiful farms and pristine countryside, untouched by suburbia. “This country is run by the oil and automobile industries,” Floyd says of the US. “We have our heads in the sand.” The object of the delegation’s scrutiny was the newest generation of methane digesters designed by MWK. These biogas systems are comprised of a series of two to six interconnected, sealed tanks that break down organic materials in the absence of air. The technology mimics the process that takes place inside the four chambers of a cow’s stomach. Cows and other ruminants, such as sheep and goats, are able to break down the cellulose in grass and other plants to produce nutrients. In nature, methane is a by-product of the process, as anyone who’s stood near a gassy cow knows. In the biogas business, that gaseous substance is the end product. According to those who have seen biogas systems at work, methane digesters are clean and durable, and emit only an inoffensive, earthy odor. The methane is stored and used like natural gas as a fuel to create electricity. Co-generated heat, which accounts for half the energy embodied in methane, is captured to heat buildings or water, or for other purposes. The solid by-product of the process can be sold as organic fertilizer. Biogas systems use big bladders to store methane, preventing its release into the atmosphere. Methane is 23 times more potent than carbon dioxide as a greenhouse gas, and some farmers are able to gain compensation for preventing the release of methane through biogas production, which is carbon-neutral. Sylke Chesterfield of Rensselaer, New York, organized the tour. Chesterfield describes herself as an “affiliated agent” for MWK. Her introduction to the firm was fortuitous. In 2004, Chesterfield, who was born and raised in Germany and had been self-employed in business consulting and public policy for more than a decade, decided to try her hand as a translator. Her first client was a group of US businesspeople who were pursuing a partnership with MWK’s chief designer and CEO, Matthais Wackerbauer. The original business deal fell through, but Chesterfield, convinced of the brilliance of Wackerbauer’s biogas designs, continued to work with him to introduce his technology into the North American market. Chesterfield brings the boundless enthusiasm of a convert to her position as so-called “chief bottlewasher” for the American venture. “It was the first time I could be passionate about something,” she says, allowing that she couldn’t find any “down side” to Wackerbauer’s systems to check her interest. With no official title and, as of June, no contract, Chesterfield has worked from her Rensselaer home to facilitate MWK’s North American start-up for a year, “based on my conviction,” she says, “that the technology is the best thing since sliced bread, and that the projects in development will actually come to fruition.”
<urn:uuid:7d5a3e38-3ac4-4746-8ef6-d468844b7eaf>
CC-MAIN-2016-50
http://www.chronogram.com/hudsonvalley/rural-electrification/Content?oid=2167386
s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698542112.77/warc/CC-MAIN-20161202170902-00470-ip-10-31-129-80.ec2.internal.warc.gz
en
0.965297
1,100
3.34375
3
At the G20 summit in September, China, the host country, and the US will present a peer review of each other’s fossil fuel subsidies, becoming the first two countries to do so under the G20. Following years of feet-dragging on the question of subsidy reform, the review hints at progress on the creation of a new shared governance model. The review, countries hope, will lead to clear actions, taken on a voluntary, bottom-up basis; and to the adoption of a practical and effective model of bilateral cooperation within a multilateral framework. In China, this is the first time the government has formed a cross-departmental, cross-sector group of experts to systematically examine the state of fossil fuel subsidies, undertaken with the aim of building a long-term domestic energy and subsidy strategy. For both China and the US, it is also an opportunity to use the international stage to force domestic policy change. Numerous researchers have analysed the scale of fossil fuel subsidies, how to reform them, and the impact of those reforms. But those outcomes have largely been neglected. In the aftermath of the global financial crash in 2008 the benefits of ending fossil fuel subsidies became more apparent. In economic terms, governments could save spending. In social terms, getting rid of price supports would end a system that tends to benefit those with higher incomes; and environmentally, it would discourage unnecessary consumption of fossil fuels and, as a result, help tackle a range of environmental problems. The time was right, and at the September 2009 summit in Pittsburgh G20 leaders committed to “phase out and rationalise, over the medium term, inefficient fossil fuel subsidies”. Two months later APEC leaders made a similar promise when they met in Singapore. But in the three years since, the G20 has failed to introduce the tangible measures needed to turn high-level political commitments into action. The only mechanism available has been a non-binding, voluntary self-reporting system. Following Pittsburgh, finance ministers were instructed to develop a voluntary peer review process to push fossil fuel reforms forward. In 2013, a methodology was published by which G20 members could evaluate the policies and policy outcomes of other nations. The results of those evaluations would, in theory, inform future G20 decisions. In 2013, the US made China an offer: for the two nations to complete the G20’s first round of fossil fuel subsidy peer review jointly. China quickly agreed and during US vice-president Joseph Biden’s visit to China, in late 2013, the two sides announced the peer review would go ahead. This speed was possible due to the two countries’ existing dialogues outside the G20 framework, on the economy, energy and climate issues. The understanding and trust built up through the China-US Strategic and Economic Dialogue was particularly important. The review provides actual experience of how to use the “inventory method” to analyse fossil fuel subsidies, particularly when dealing with cross-subsidies; and of how to use non-G20 cooperative mechanisms, such as the China-US Strategic and Economic Dialogue, to further other processes. While it is not yet possible to comment on the review’s outcome, pending its completion, G20 members can still learn something from the process. Timeline: China and the US work on fossil fuels - July 2014 Terms of reference for the peer review are agreed upon May 2015 Members of the peer review group are confirmed 2014 - 2015 Teams of Chinese and US experts decide on definitions, scope and methodology of the peer review process Early 2016 The Chinese review is completed May 2016 The US review is expected September 2016 Outcomes to be submitted at the G20 meeting in Hangzhou Bilateral peer reviews of fossil fuel subsidies are not a G20 invention – APEC members New Zealand and Peru have already successfully completed the process. By looking at the outcomes of that review (submitted by Peru in November 2014 and New Zealand in September 2015) we can predict possible outcomes for the China-US review. First, expenditure on fossil fuel subsidies is not the focus of the process; the key is the identification and reform of subsidies which are inefficient (despite the greater media attention paid to the former). For example, Peru offers tax breaks for fossil fuel producers, and some consumer subsidies, in its Amazon region. The peer review with New Zealand assessed the scale of these subsidies but spent more time analysing the impact these have. It found that while the aim of the subsidies is to stimulate economic development in the Amazon, 14 separate tax exemptions resulted in the loss of government income equivalent to 0.46 percent of GDP, while economic growth in the region is still the country’s lowest. The subsidies failed to attract investment or create jobs. In light of this, it was recommended that these subsidies be cut. However, the report also found that the monthly issuing of vouchers for natural gas to low-income groups – a consumer subsidy - was an effective policy that could be expanded nationwide. Second, the peer review model accounts for different circumstances of each nation when considering the starting point for reform, and the definition and scope of subsidies. In Peru, the fuel price stabilisation mechanism was identified as causing wasteful consumption; the peer review process suggested using other economic policies, such as interest rates, instead. While in New Zealand, a wider range of policies were considered, eight in total, including supply-side subsidies and the support given to fossil fuel research and development projects. Such differing scopes are permissible under the existing peer review framework. The policy and reform suggestions resulting from the process will reflect, to a degree, the trends in ongoing fossil fuel subsidy debates. One aspect of this is that discussions must solicit the opinions of stakeholders and the reform process must be more transparent. For example, the peer review experts praised public disclosure measures taken through New Zealand’s support for oil industry research and development, and of the natural gas subsidies for lowest-income populations in Peru. China and the US are at different stages of development, have varying degrees of openness regarding government finances, and a sharply contrasting make-up of their energy structure. Their reports will reflect these differences and the outcome of the peer review process may well include a “common but differentiated” programme for reform.
<urn:uuid:8d470a2a-1510-4363-a66a-3b29a596e56d>
CC-MAIN-2016-50
http://www.eco-business.com/news/us-and-china-ready-peer-reviews-of-fossil-fuel-subsidies/
s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698542112.77/warc/CC-MAIN-20161202170902-00470-ip-10-31-129-80.ec2.internal.warc.gz
en
0.943735
1,293
2.78125
3
The stages of labor are commonly broken down into three main phases. This is misleading, however, in that the first stage is comprised of three sub-phases and is what most people identify as labor. Stage I consists of early labor, active labor and transition. Stage II is the pushing stage and Stage III is the birth of the placenta. It is common practice to define the phases of Stage I by the degree of cervix dilation and length of each labor contraction. However, these are truly poor indicators of how a labor will progress. This is because dilation doesn't usually occur in a completely linear fashion. In other words, your cervix may go from 1 to 4 centimeters dilated, skipping all the degrees in between. Your contractions may resemble those seen in transition while you are only 3 centimeters dilated and you could be holding that baby only minutes later. Always remember that the focus should be on you as a person, the mother as a whole, and not just your cervix. It's the same principle that applies when a nurse or doctor relies too heavily on machines when one glance at a mother's ashen skin and ragged breathing should tell them that something is seriously wrong. Only after she or the baby goes into distress do they realize that the machines were malfunctioning. In the case of a home birth, most mothers do not submit to cervical checks since they provide little useful information and are poor predictors of labor progression. In these cases, the care provider more accurately relies on powers of observation to judge progress through the stages of labor. Some mothers will even push their babies out never having known their degree of cervical dilation. Instead, they wait for an uncontrollable urge to push to tell them its time rather than an arbitrary cervical check. Early labour takes up the majority of the birthing experience. It is characterized by contractions that are regular but may not be very close together or last very long. The contractions may be 10 minutes apart and last only 30-45 seconds. This is the most comfortable of the stages of labor, easing the body into the process. In this phase, dilation is to a maximum of 4 centimeters. Active labor is more intense with longer, stronger, more intense contractions that may be 3-5 minutes apart and last up to 60 seconds. This is the beginning of the serious phase, where relaxation comes into play and the birth companion's role becomes more prominent. Dilation is usually from 5-7 centimeters. Transition is by far the most challenging, although the shortest, phase of birthing. This can cause overwhelming sensations which might falter your focus. This is the phase usually depicted in mainstream media. These contractions are stronger and longer and finish dilating the cervix. They usually last 90-120 seconds with breaks of about a minute or two in between. Generally this phase only last for 30 minutes to 2 hours. A time distortion may also be experienced in this phase that makes it seem to pass more quickly and may make this period difficult to remember clearly after the birth. Experiencing grogginess or a mental fog are also common. Nausea can also set in as well as involuntary painless shaking from the intensity. Women are especially vulnerable to suggestion at this time, which can be used to enhance or to hinder the birth. The pushing stage, the second phase of labor, begins once 10 centimeters has been reached. This will end with the much-anticipated birth of the baby. This stage can last a few minutes or several hours. In a natural birth, the pushing phase is typically much shorter than in a medicated one. Women commonly report this as the most empowering part of the birth experience and as the most motivating and comfortable. Pushing is usually much more manageable than transition. The pushing contractions are of a different variety than those previously experienced. The body will push independently of intentional effort. This is the purpose of the uterine contraction, to first fully dilate and efface the cervix, and then to expel the baby from the uterus. True "pushing" is rarely required. The most effective course of action is to let your body guide your efforts by not pushing until an overwhelming urge is felt. Reaching 10 centimeters dilation alone does not necessarily mean the body is ready to push. The baby may not yet be in the best position or the tissues may not yet have had enough time to gently stretch on their own. Pushing too soon wastes energy and can lead to complications such as fetal distress, malpositioning, pulled ligaments, perineal tearing and forceps or vacuum extraction. If a lull in contractions is experienced, simply letting the baby drift down on its own is advisable. This preserves energy for you and the baby. It also makes for a slow, controlled delivery with less chance of tearing and can eliminate the sometimes-reported "ring of fire" when perineal tissues stretch rapidly to accommodate the baby's head as it crowns. When the baby reaches your arms, the final of the stages of labor, the placenta delivery, often receives little attention. It begins with the birth of the baby and ends with the arrival of the placenta. On average, it takes roughly 20 minutes for the placenta to detach from the uterine wall, although it can safely be longer. The placenta will detach from the uterine wall and then be expelled through the birth canal. The care provider will determine when the placenta is ready to detach by a small gush of blood or a lengthening of the cord. is not enough to just recognize the stages of labor: mothers must know exactly how to handle each stage as it comes. By becoming familiar with the unique of each stage, anxiety about giving birth can be eliminated so that when labour begins, you will be fully prepared to face step along the way to holding your newborn baby. Page Last Modified by Catherine Beier, MS, CBE Choose 7 week, 12 week, or Self- Paced online childbirth classes available wherever and whenever you need them. Vanessa's natural birth story shows that when birth is left alone to proceed as it should, it waits for no one - not even doctors or midwives. Create a free pregnancy ticker to post on your blog, website, Facebook profile or favorite social media...
<urn:uuid:908d75e1-07a2-4dfd-bcc6-e3e25823dd6b>
CC-MAIN-2016-50
http://www.givingbirthnaturally.com/stages-of-labor.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698542112.77/warc/CC-MAIN-20161202170902-00470-ip-10-31-129-80.ec2.internal.warc.gz
en
0.955781
1,300
3.1875
3
If you’re planning to visit the swine barn at county fair or state fair, health officials say you should be careful to avoid potential exposure to swine influenza, especially if you have risk factors that make you more vulnerable. The West Central Tribune reports the U.S. Centers for Disease Control and Prevention say that although it’s rare, swine influenza viruses sometimes spread from pigs to people and vice versa. Last yearMinnesotasaw at least three cases of suspected swine flu among individuals who had been exposed to pigs at the State Fair. The culprit in theMinnesotacases was a variant of the H2N1 virus, which was responsible for a pandemic outbreak of flu in 2009-10. A bigger concern for the CDC, however, has been the H3N2v virus, which is thought to spread more easily from pigs to humans than other types of swine flu. CDC officials reported Monday that 14 cases of H3N2v have occurred this year, 13 inIndianaand one inOhio. Last year 309 cases of H3N2v infection were found in 12 states. This variant of the influenza virus was first identified among pigs in theU.S.in 2010. Those at high risk of influenza-related complications should avoid the swine barns at the fairgrounds. High-risk factors include being younger than 5, older than 65, pregnant or living with a chronic condition such as diabetes or asthma. The CDC has some prevention advice for the rest of the population as well: Don’t take food or drink into areas where pigs are being exhibited, and avoid close contact with pigs that look or act ill. Hands should be washed with soap and running water before and after exposure to pigs. Owners and handlers are advised to watch their pigs for any signs of illness and to use gloves and masks when handling a pig suspected of being sick. Individuals with flu-like symptoms also should avoid contact with pigs for at least a week after the onset of symptoms.
<urn:uuid:9874118c-09e4-43e4-a362-716e79e94192>
CC-MAIN-2016-50
http://www.kduz.com/2013/07/30/swine-flu-caution/
s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698542112.77/warc/CC-MAIN-20161202170902-00470-ip-10-31-129-80.ec2.internal.warc.gz
en
0.968657
422
3.109375
3
Torso (A Study for Ariane without Arms) - Auguste Rodin (French, Paris 1840–1917 Meudon) - modeled ca. 1900–5 or earlier - Without base, 7 x 11-5/8 x 4-1/4 in. (17.8 x 29.5 x 10.8 cm); H. (with base) 9-1/4 in. (23.5 cm.); D. (with base) 5-1/4 in. (13.3 cm.) - Credit Line: - Gift of the sculptor, 1912 - Accession Number: In his studies made of plaster, wax, and terracotta, Rodin often fought the burden of narrative to concentrate instead on some problem connected with the increasingly deep assaults that his sculpture tended to make on the human form. Sometimes distortions were due to accidents in the studio that triggered Rodin's imagination. Some were the results of the sculptor's efforts to render the physical effects of the extremes of old age, emotional stress or violent physical activity. Others, such as this torso, intended as a study for a marble tomb figure, are true fragments resulting from a working method peculiar to Rodin: the deliberate breaking apart of sculptures in order to reassemble the parts in new ways. The scars left by the removal of the head, arms, and legs, and the separation of the torso from its original base are permanently preserved in this terracotta torso.
<urn:uuid:b464b085-410d-4c63-8e7a-6fb273fa5d4b>
CC-MAIN-2016-50
http://www.metmuseum.org/art/collection/search/191863?exhibitionId=&oid=&aid=01ae2766-7ea4-4f44-98b7-12504749eb8f&tab=audvid
s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698542112.77/warc/CC-MAIN-20161202170902-00470-ip-10-31-129-80.ec2.internal.warc.gz
en
0.940602
306
3.25
3
The University of New Hampshire has created a "Bias-Free Language Guide," which it says "is not a means to censor but ... presents practical revisions in our common usage that can ... break barriers relating to diversity." It includes a long list of "problematic/outdated" words, along with a "preferred" alternative. A search suggests the guide was posted in late May, but it's just today grabbing headlines. Why the interest? For one, as Jonathan Chait puts it at New York, the guide "indicates that the list of terms that can give offense has grown quite long indeed." A sampling: Preferred: people of advanced age, old people* Problematic/Outdated: older people, elders, seniors, senior citizen *Old people has been reclaimed by some older activists who believe the standard wording of old people lacks the stigma of the term “advanced age”. Old people also halts the euphemizing of age. Euphemizing automatically positions age as a negative. Preferred: person who lacks advantages that others have, low economic status related to a person’s education, occupation and income Problematic: poor person, person from the ghetto Note: Some people choose to live a life that is not connected to the consumer world of material possessions. They do not identify as “poor”. Preferred: person of material wealth Being rich gets conflated with a sort of omnipotence; hence, immunity from customs and the law. People without material wealth could be wealthy or rich of spirit, kindness, etc. Preferred: people of size Problematic/Outdated: obese*, overweight people "Obese" is the medicalization of size, and "overweight" is arbitrary; for example, standards differ from one culture to another. Note: "Fat", a historically derogatory term, is increasingly being reclaimed by people of size and their allies, yet for some, it is a term that comes from pain. Preferred: US citizen or Resident of the US Note: North Americans often use “American” which usually, depending on the context, fails to recognize South America Preferred: First-year students Preferred: Other Sex Problematic/Outdated: Opposite Sex See the guide in full here; and yes, it includes this.
<urn:uuid:fb46910d-f773-4d49-b083-a7b41c74e061>
CC-MAIN-2016-50
http://www.newser.com/story/210543/university-of-nh-dont-use-words-like-rich-american.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698542112.77/warc/CC-MAIN-20161202170902-00470-ip-10-31-129-80.ec2.internal.warc.gz
en
0.940871
487
2.578125
3
Some interesting celestial events to simulate in Guide The following list describes some events easily simulated using Guide. This just lists a few events that have recently had my attention; if you have found others that might be of interest, please e-mail me. View from geostationary orbit The view from geostationary orbit is basically that which you would have if there was a mountain 35800 kilometers high, at the earth's equator; and you were sitting on top of that mountain. Therefore, if you go into the Settings... Location menu in Guide, and set your latitude to be zero degrees and your longitude to put yourself over a desired point on the equator, and set your altitude to be 35800 km, you can see the world as it appears from geostationary orbit. From this viewpoint, the earth goes through a full set of phases once a day, and eclipses the Sun near the equinoxes. The moon moves in a retrograde direction for much of the day, and sometimes is eclipsed by the earth. If you have the vector features and/or the grid turned on for the Earth, it will be very apparent that quite a lot less than half the earth is visible from this viewpoint; you're still close enough to the earth for perspective to matter (which is why the people at the South Pole reseach station can't get satellite TV.) View from Earth-Moon Lagrange points Guide is still a little limited when it comes to showing you viewpoints from arbitrary positions in the solar system. But you can see what the solar system would look like as seen from all five Earth-Moon Lagrange points. (Lagrange points rotate with the Earth-Moon system, and are also called "stationary" points. They exist for several pairs of objects in the solar system. Two of the points are stable; they are called "Trojan points". The other three are not stable over long periods of time.) For the Earth-Moon system, the five points are as follows: L1 = point about 92,000 km above the moon, toward the earth; L2 = point about 92,000 km above the moon, on the side away from the earth; L3 = point about 400,000 km above the earth, opposite the moon (a sort of "anti-Moon" point); L4 = a Trojan point, 400,000 km from both earth and moon, orbiting 60 degrees ahead of the moon L5 = a Trojan point, 400,000 km from both earth and moon, orbiting 60 degrees behind the moon L4 (Rotates counterclockwise as seen / \ from above the North Pole) / \ / \ / \ / \ L3----------(E)------L1-(M)-L2 \ / \ / \ / \ / \ / L5 L4 and L5 are stable. Jupiter has collected a lot of asteroids at these points; Mars has collected at least one known asteroid, (5261) Eureka; and several of Saturn's moons have "co-orbital" moons, in their same orbit but 60 degrees ahead or behind in the orbit. There have been claims of dust clouds seen at the Earth-Moon L4 and L5 points, but nothing definite. The SOHO satellite has been deliberately placed at the Earth-Sun L1 point. The method for setting your viewpoint in Guide to one of these five points works because the Moon rotates synchronously in its orbit (is "tide locked"). That lets you do the following tricks. For all of them, you must go into the Location dialog and set your home planet to be the Moon, and your latitude to be zero (on the Lunar equator.) It would be nice if staying at the L2 point for the Earth-Sun system would keep the Earth between you and the Sun; then you could put a probe there and have it get down close to 3 degrees K, something that would be wonderful for some applications. But this won't work, partly because the earth is much denser than the Sun; you would only get an "annular" eclipse. Similar tricks will get you to the Lagrangian points of almost every satellite in the solar system. Unfortunately, it won't get you to the Lagrangian points for the planets (say, the L1 point for the Earth and Sun, where the SOHO satellite stays), because the earth rotates relative to the Sun. Molniya satellite motion Jari Suomela pointed this one out to me. The Molniya-type orbit is used by several Russian satellites. An object in such an orbit has a 12-hour period, and an inclination to the Earth's orbit of about 65 degrees. The orbit is very eccentric, with a perigee of about 1000 km and an apogee of about 39000 km. The orbit is selected so that, if you're in Northern Russia, the satellite will appear to zip up over the horizon, rise very high in the sky at apogee, appear to stay almost motionless for several hours, then drift away and streak back below the horizon. So you basically have a "part-time" geostationary satellite. You have to have several (at least three) in the orbit, to provide continuous coverage (one takes over just as the previous one is heading back toward perigee). But Russia has a lot of territory at far northern latitudes, where "normal" geostationary satellites are just not a reasonable prospect; they would be hugging the southern horizon. To see this in Guide, set your lat/lon to be around E 50, N 60 (central Russia), and hit Alt-N. This results in a view of the zenith, full-sky (180 degrees), with the horizon surrounding the chart. Now go into "Settings... TLE=" and select MOLNIYA.TLE. It may not appear; if so, download and save MOLNIYA.TLE (about 4 KBytes) to your Guide directory. Go into "Data Shown" and turn Satellites to On. Personally, I'd turn satellite labels off, too; they're apt to get pretty cluttered-looking. Now start up the Animation dialog and set the "Horizon" radio button, and start animating with, say, a 15-minute step. The pattern of Molniya motion will become very obvious. If the screen flickers, go under "Display" and make sure that "Direct to Screen" is not checked. Jari is in Finland, and has done some observing of Molniyas. Like geosynchs, they have the advantage that Dobsonian owners can observe them without having to worry much about tracking problems. One need not be especially close to the USSR to see them, either. After all, these objects are in 12-hour orbits; if they "hover" over, say, E 50, N 60 on one orbit, then they will wind up over W 130, N 60 on their next orbit. Observers in Canada will find that these objects are quite nicely placed. By the way, it can also be interesting to observe Iridium satellite orbits as seen from the Moon. Set your "home planet" to be the moon, look back at the earth, and load up a set of Iridium elements. (You can get current Iridium elements, and Molniya elements, and quite a few others, at this Web site. You'll also see that there is an example set provided with Guide... this will eventually be out of date, but can still give you the general idea of what the Iridium system looks like.) If you animate with a step size of a minute or so, you can see the pattern in the Iridium "satellite constellation". Over half the earth, satellites are travelling north to south. On the other half, they run south to north. They fall into "tracks"; satellites in adjacent tracks are staggered, so you get a network of equilateral triangles. Presumably, the designers of this configuration wanted to use a minimal number of satellites. Also, for any long-path call, the call would have to be passed from one satellite to the next; keeping the number of "lateral passes" needed to a minimum would be a goal. Also, you would want to be sure that all parts of the world see at least one Iridium satellite, within decent range, all the time. The Earth also rises (as seen from the moon) It's often said that, because the moon always keeps one face to the earth, that the Earth hangs in one place in the sky as seen from the moon. If you were standing at a point near the lunar limb, the earth would appear fixed at the horizon, never rising and never setting. This is mostly true, except that the moon doesn't keep just one face toward us; it librates ("wobbles") quite a bit. And correspondingly, you can see the Earth wobble a little bit around its fixed position in the lunar sky. To see this, fire up Guide, and (under the Settings menu) click on the "Location" option. Set your home planet to be 'Luna', and your longitude to be W 90 or E 90 (latitude is irrelevant for this... but either longitude will put you on the lunar limb.) Do a "Go to... Planet" and look back at the earth. You may want to turn on the Inversion Dialog and set "Alt/Az up", so you get a nicely level horizon. (Also, it helps to go into the Backgrounds dialog and turn on the "filled ground" and horizon objects. The resulting barns, houses, and streetlights are admittedly incongruous for a lunar scene, but they do give you a frame of reference.) Start up the Animation dialog, and click on the "horizon" radio button. Set the animation rate to about one day, and start animating. Over the course of a lunar month, you'll see the earth wobble in a small circle, which will cause it to rise and set. You'll also see it go through a full set of phases, which will be opposite to those seen from earth ("Full moon" coincides with "New Earth", and "New moon" with "Full Earth"). By the way, similar stunts can be tried with all other satellites in Guide, looking back at their primaries. But except for Saturn's moon Japetus, they're in such nearly-circular orbits that the librations are quite small. Japetus is apparently far enough from Saturn that its orbit hasn't gotten quite circularized yet (check back a few billion years from now). Sun 'stands still' from Mercury This is an odd piece of solar system trivia, which can be shown in Guide now that views from the surfaces of other planets are supported. Usually, when tidal forces "lock" an object's rotation, one face of the object faces the primary (much as one face of the moon points to the earth, and the Galilean moons of Jupiter all have one face pointing toward Jupiter, and so on). The object rotates in the same time it takes to orbit the planet. If you're on the surface of the moon, for example, the Earth stays (basically) fixed in the sky. For quite a while, it was thought that Mercury did the same thing with the Sun. Based on what little could be seen in the way of features, it looked as if one face of Mercury was in perpetual sunlight (and presumably the hottest place in the solar system) and one face in perpetual shadow (and presumably the coldest place in the solar system... even colder than Pluto, probably.) Then in the mid-1960s, radio observations showed that Mercury does rotate, though quite slowly: once every 57 days. For every two trips around the sun, Mercury rotates three times (with respect to the stars; once with respect to the sun. It's a bit like the situation with the Moon, which rotates once every 27.5 days relative to the stars, but not at all with respect to us.) This 3:2 ratio for Mercury seems to be quite exact. This is a little puzzling, given that the usual ratio is 1:1, but there is a way to show that this behavior is perfectly reasonable. To do so, fire up Guide, and (under the Settings menu) click on the "Location" option. Set your home planet to be Mercury, and your lat/lon to be N0, W0. This puts you right on top of Mercury's equatorial bulge; you'll see soon why that viewpoint is important. Now use "Go To... Horizon Menu", and click on "Zenith". Turn on the Animation dialog, and select a step size of one or two days. Click on the "Horizon" radio button; that way, when you animate, the viewpoint will stay fixed on the zenith. Zoom out to about level 2 or so. Now start animating. Eventually, the Sun will come into your field of view (it may take a while, since the Sun reaches the zenith once every 176 days.) As you'll see, it zips across toward the center of the screen. But then it slows down... drags to a stop... backs up about half a degree... drags to a stop again... and goes forward! Which leads to two questions: (a) why does this happen? and (b) how does it relate to that 2:3 ratio? The answer to (a) involves Mercury's extremely elliptical orbit. On average, Mercury's rate of travel around the sun (once every 88 days) is a third slower than its rate of spin around its axis (once every 59 days). But when it gets close to perihelion, it speeds up in its orbit, courtesy of Kepler's Second Law. And for a few brief, glorious days, the angular rate of motion around the sun catches up and even slightly surpasses the angular rate of spin... so the sun stands still in the sky, then backs up a little, much as the earth stands still in the sky (all the time) from the moon. Then it gets past perihelion, and the angular rate of motion around the sun drops off, and the party's over. (If you click on "Make Ephemeris", and make an ephemeris showing the distance to the sun and its alt/az from this viewpoint over time, you'll see that perihelion always coincides with the sun being at the zenith or nadir.) Which brings us to (b). Long ago, Mercury (like our moon) probably spun quite rapidly. Like the moon, tidal drag slowed it down over millions of years. So it slowed down from rotating, say, once a day, to rotating once a week, to once a month... until it hit that magic 2:3 ratio, rotating once every 59 days. At every perihelion, the equatorial bulge pointed through the sun, much as the moon's equatorial bulge points through the earth. That rate represents a "local energy minimum", like a marble at the bottom of a bowl. So instead of continuing to slow down until it got to a 1:1 ratio (like the earth and moon, and like almost every other satellite in the solar system), it stayed in the 2:3 ratio. And now, for a little speculation. It's fair to assume that this may also explain why Mercury's orbit remains as eccentric as it is. Most planets have nearly circular orbits now, except for Mercury and Pluto. But the same forces that keep Mercury's rotation locked at the 2:3 ratio would also tend to lock the eccentricity at this value. Given suitable mathematical effort (which I may pursue someday), you could probably show that, for a 2:3 ratio, Mercury's actual orbital eccentricity of about .25 is a sort of "optimal value". This could also be generalized a bit. For higher eccentricities, there ought to be, say, an "optimal" 2:5 ratio and a 1:2 ratio. And had Mercury's orbital eccentricity been lower, there might have been an "optimal" 3:4 ratio. (You can't keep pushing that forever, though. If you went on to finding the 4:5 case and the 5:6 case, I think you would eventually find a point where the rotation is so close to 1:1 that there is no "minimum" point, and the object keeps getting tidally braked until it looks like our moon.) If anyone writes up a doctoral thesis on "Variations in Tidal Braking" or something like that, please send me a copy. Mutual planetary events The August 1992 issue of Sky & Telescope (p 208) has an interesting article on a mutual planetary occultation. On the night of 12 Sep 1170, Mars appeared to transit Jupiter. The event was observed and reported by the monk Gervase of Canterbury. An image of the event is available on this Web site. Recently, I became curious about other mutual planetary events, and wrote software to find all conjunctions of all planets with one another. A very few of these (perhaps once each century) will be close enough to appear as an occultation. The new bitmapped features in Guide make these events particularly attractive to simulate. You can click here for a list of all mutual planetary events from -1000 to +6000, but here are some of the more "interesting" events. (All dates before 1582 are Julian calendar, not Gregorian.) 30 Jan -9, 12:33 UT: Mars transits Saturn. 3 Jan 1613, 17:39 UT: Jupiter occults Neptune. The Sky & Telescope article has some comments on this event. It occurred just at the time that Galileo was observing Jupiter's satellites; Galileo did show Neptune in sketches made on 28 Dec 1612 and on 28 Jan 1613... but he identified it as a star. 8 Dec 1253, 9:15 UT: A crescent Mercury transits Saturn, as seen from the Southern hemisphere. (Usually, when Mercury or Venus transit a planet, they are nearly full; not only do they spend more time near full than near a crescent phase, they also are closer to the ecliptic when on the far side of the sun.) 25 Aug 1278, 14:18 UT: Mars occults Neptune. 28 Dec -423, 10:50 UT. Saturn and Jupiter are about 1.5 arcminutes apart. Between the years -1000 to +3000, this is the closest they get to one another. (I had hoped to get a very close pairing, or a mutual occultation; displayed in Guide, it would make a really good screen shot for my ads in Sky & Telescope. But there wasn't even a close enough pairing to make for a good ad.) So large a distance may seem a little peculiar; it happens for several reasons. The most important is that these are the slowest-moving naked-eye planets, and their conjunctions are rare, at roughly 20-year intervals. (After 20 years, Jupiter has completed 5/3 orbits, and Saturn 2/3 of an orbit; you could say that Jupiter has "gained a lap" on Saturn. Sometimes you get a cluster of three conjunctions at this point, because of the motion of the Earth; but generally, you just get one. For example, there was a cluster of three conjunctions in 1980-1981; but the next conjunction, on 28 May 2000, will be a single one.) Anyway, the natural result is that, even over a 4000-year span, conjunctions are few, and the number of opportunities for a mutual event (or very close conjunction) are limited. On the bright side, we will get a conjunction to within a mere 6 arcminutes on 21 Dec 2020, with Jupiter and Saturn in the evening sky. It should be possible to see both planets in the same telescopic field of view, with some of their attending moons. Three shadows on Jupiter When I added the display of shadows cast by objects on one another (the Earth and moon on one another during eclipses, mutual events of the satellites of Jupiter and Saturn, and shadows cast by satellites on Jupiter and Saturn), I remembered a story I read as a child (Farmer in the Sky, by Robert A. Heinlein) in which at one point, all four Galilean satellites are lined up and casting shadows on Jupiter. (In the book, this results in severe tides, which cause a quake on Ganymede.) I got curious as to whether such an event might ever occur, and searched years between 1860 and 2020. I found no events where there were four shadows at once... had I applied reason instead of a brute-force computer search, I would have seen why immediately. As Jean Meeus points out in his new book, Mathematical Astronomy Morsels, the inner three moons have related periods, and only two can cast shadows at any given time. But you can get a three-shadow event: Callisto, plus two inner moons. My program found several of these. For example, if you set Guide's time to 11 Nov 1997, 4:30 UT, and look toward Jupiter. Io, Ganymede, and Callisto are casting shadows. Those of Ganymede and Io overlap slightly, and if you zoom in far enough, you'll see that Io is partly eclipsed by Ganymede. At the start of the shadow crossings of Io and Ganymede, Io's shadow chases that of Ganymede; toward the end of the event, just as the shadows are about to leave the disk of Jupiter, the shadow of Io catches up and slides under the shadow of Ganymede. ( Click here to see a view of the situation at this point.) Due to "light lag", you're seeing this event about 40 minutes after it "really" happened. If you set your "time" in Guide to 3:50 UT and your home planet to Callisto, and look toward Jupiter at about level 4 (20-degree field of view), you'll see it "as it happens". This event was actually visible from the southwestern US; Jupiter will be 152 degrees away from the Sun, in the evening sky. Several people observed it and collected CCD images as it progressed. This event is sufficiently "dramatic" that I decided it would make a good screen shot for my ads in Sky & Telescope. See the June 1997 issue, page 109. Just for the record, between 1975 and early 2015 there are 7 "triple shadows". Times are for the beginning of the events: 23 Jan 1985, 22:44 UT 30 Jan 1997, 5:26 UT 11 Nov 1997, 3:35 UT 28 Mar 2004, 8:00 UT 12 Oct 2013, 4:31 UT 3 Jun 2014, 19:02 UT 24 Jan 2015, 6:27 UT A further aside: it seems this resonance between the inner three moons is not just a mathematical curiosity. It leads to tidal stresses that have caused them (in varying degrees) to remain geologically active. Callisto, which is not part of this resonance, is also fairly dead geologically, with an extremely old, heavily cratered surface. Heinlein's guess, back in 1949, that the inter-satellite tides might cause quakes was more accurate than he knew. Earth and partially eclipsed Moon Set your "home planet" to Mercury, and the time to 21 Jan 2000, 6:00 UT, and look toward the Earth. You will have to zoom in very far (about level 14 or 15) to see much detail. As the title says, you'll see a partially eclipsed moon, close to the Earth. Trojan asteroids as seen from above Go to level 3 (45 degree field of view) and go to Epsilon Doradus (or someplace nearby; the idea is to be looking in the direction of the south pole of the ecliptic). Set your "home planet" to be the Sun, and hit Ctrl-F5; this is an undocumented feature for shifting your point of view. When you hit Ctrl-F5, it will ask you to enter a distance in AU. Enter a distance of 20 here. Guide will redraw, showing the inner solar system and a few bright asteroids as seen from a viewpoint well above the plane of the ecliptic. Next, you want to switch from "some bright asteroids" to "all asteroids". To do this, go into the Data Shown menu and turn Asteroids ON. Also, it's a good idea to turn asteroid labels off; they tend to get in the way in this particular case. This time, Guide will take some time redrawing the screen. But you will quite clearly see two short arcs, one 60 degrees in front of Jupiter and one 60 degrees behind it, illustrating the "shapes" of the Trojan nodes. Obviously, these objects are held very close to the same distance from the sun as Jupiter, but they have some freedom to move forward and backward within the node. To return to a more "normal" view, hit Ctrl-F5 again, enter a distance of 0, set your home planet back to Earth, and turn the asteroids to OFF or AUTO or wherever you normally have them.
<urn:uuid:2a079520-9e91-4dca-9447-2ebb36d74daa>
CC-MAIN-2016-50
http://www.projectpluto.com/interest.htm
s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698542112.77/warc/CC-MAIN-20161202170902-00470-ip-10-31-129-80.ec2.internal.warc.gz
en
0.946824
5,313
2.734375
3
In marshes somewhere south of Lake Helen Blazes, the St. Johns River is formed. It slowly meanders north one of only a few rivers in North America that flow in that direction. Near the end of its 300-mile journey it turns east, passes in a broad swath through the city of Jacksonville, and then merges into the Atlantic Ocean. In addition to scenic beauty, it is a valuable resource that provides a way of living to many Florida residents. As Florida's natural attractions lure more people there is more activity in and around the river but, fortunately, people who care work constantly to protect it and it has friends in Tallahassee. On Monday the 4th St. Johns River Caucus was held in the Senate. The caucus was started by state Sen. John Thrasher to give legislators whose districts are touched by the river a venue to monitor the river's status. One of the major steps in improving the river came 40 years ago when the city of Jacksonville spent $150 million to cut off sewage outfalls into the river. In earlier times it was acceptable to put sewage in the river because the river was able to assimilate it, but Jacksonville officials decided to end that practice when the Clean Water Act was passed, unlike other cities that continued to dawdle. Although point-source pollution has been eliminated, there is still runoff from agriculture and other sources. But it is a manageable problem, despite occasional outbreaks of hysteria from Big Environment. In some cases, liberal solutions are worse than the problem. Georgia-Pacific began 20 years ago trying to help the river by building a pipeline that would carry wastewater from its paper mill in Palatka into the river. Little-brained columnists and reporters often described this as an effort to dump pollution into the river. Treated wastewater from the plant has been going into the river for nearly 60 years. It is emptied into Rice Creek, which is beside the plant and runs a short distance into the river. Furthermore, the discharge met all environmental standards until regulators raised the standards a few years ago. Suddenly, what had been legal discharge became pollution. Georgia-Pacific spent $200 million to clean its wastewater, but still needed the pipeline. Water from Rice Creek oozes into the river and tends to remain near the shore. The pipeline pumps the wastewater into the main body of the river where it is diluted quickly to a level that meets the standards. The net effect is akin to dumping a teacup of wastewater into an Olympic-sized swimming pool. Truly, the solution to pollution is dilution. Although slow, the St. Johns empties nearly a billion gallons a day into the ocean. Yet, even after regulators and the courts had approved the $30 million project, various special interests continued trying to block construction. Fortunately they failed. Had they succeeded in shutting down the plant and putting 1,000 people out of work, they would have considered it a victory because in their view minnows are more important than humans. Liberals never can get it through their skulls that people are part of the environment. Lloyd Brown was in the newspaper business nearly 50 years, beginning as a copy boy and retiring as editorial page editor of the Florida Times-Union in Jacksonville. After retirement he served as speech writer for Florida Gov. Jeb Bush.
<urn:uuid:e471ea97-1270-44c8-8520-306a69da83e1>
CC-MAIN-2016-50
http://www.sunshinestatenews.com/story/making-minnows-happy-important-special-interests
s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698542112.77/warc/CC-MAIN-20161202170902-00470-ip-10-31-129-80.ec2.internal.warc.gz
en
0.976237
687
2.890625
3
See Where Stuff Comes From with SourceMap You have certainly heard that buying local is "greener". You have probably also heard counter-arguments: a product made more efficiently but shipped some distance may beat out a local product. But all that talk is merely theoretical if you don't know where your stuff comes from anyhow. And in the new global economy, the "made in" tag on a product does not tell half the story. What is the discerning consumer to do? Imagine a future in which pointing a PDA at a product bar code returns an instant readout of product source and environmental footprint to inform the buyer's decision. This future could be reality with SourceMap. Designed as a "collective tool for transparency and sustainability," SourceMap aims to be the Wiki of visualizing supply chains. SourceMap is a project of Media Labs, a division of MIT. Developers have adapted the Google Earth geotagging capabilities to the purpose of charting the components that go into products. After two years in develpment, the site is live in beta, and a SourceMap pilot project is underway in Scotland, where businesses can input data on sourcing and supply to share with customers. SourceMap hopes to show that a marketing and social networking advantage justifies the effort of businesses to create transparent reports. You Can Help SourceMap Track Stuff Going into the live phase as an open source project, SourceMap needs volunteers. The site will be only as good as its content, and since most people do not know where stuff comes from, SourceMap is looking in particular for - Product Designers - Supply Chain and Inventory Management experts - Life Cycle Assessment specialists - Information Visualization experts - Web Designers - Web developers - Mobile developers. If you are interested in such a project, you are probably also the type of person using Firefox or Safari. If not, be warned: SourceMap is beta, and not yet optimized for Internet Explorer. Which is probably just as well, because the site needs some time in the hands of the tech-savvy before it is ready for prime time. How SourceMap Works If you just want to browse where stuff comes from, take a look at a typical laptop computer or an iPod. If you want to contribute to creating a SourceMap, check out the parts catalog or transport catalog. Each SourceMap combines parts and transport to fully characterize the final product. Each part and mode of transport has an average CO2 emission factor associated with it. What is Next for SourceMap Currently, there are not many products defined in SourceMap. We are guessing a lot of people will start playing with the site, creating maps like Borjiz's homegrown tomatoes, tied with thread from Taiwan. Or Katherine's raw granola, with groats from China, cinnamon from Sri Lanka and honey from Oregon. This will result in lots of maps that are dead ends or downright useless. Moderators are urgently needed, and a system to identify useful product maps from stuff "under construction" would be helpful. Nonetheless, we find that the SourceMap experiment is highly promising. Open source may be a good way to solve the issue of constantly changing supply chain information...if the quality of the information input by site users can be verified and controlled. SourceMap certainly answers the question Jaymi asks about emissions software: "Does it all matter if consumers don't know?" More on Supply Chain Impacts: Karma Konsum (German) Supply Chain Emissions Software a Silver Bullet or Slippery Slope? HP Steps Up IT Industry Transparency, Releases Supply Chain Emissions Data Life Cycle Perspective On California Solar Photovoltaic Supply Chain
<urn:uuid:3cd0eccc-308d-4444-b944-5275f7456ee9>
CC-MAIN-2016-50
http://www.treehugger.com/clean-technology/see-where-stuff-comes-from-with-sourcemap.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698542112.77/warc/CC-MAIN-20161202170902-00470-ip-10-31-129-80.ec2.internal.warc.gz
en
0.921318
755
2.671875
3
Learn something new every day More Info... by email After abnormal mammogram test results are returned, a few other tests will be conducted in order to confirm a diagnosis. An abnormal mammogram does not necessarily mean that something is wrong, though many women may fear the worst. Physicians will order a second mammogram test if the original films taken were blurry or unclear. A fine needle aspiration, core needle biopsy, or surgical biopsy may be completed if the physician detects a mass in the breast tissue. The physician may also do an ultrasound to visualize any masses. An abnormal mammogram may be a result of blurry scans, or a physician could order a set of diagnostic mammograms that focus on one area of the breast which was of concern to the doctor. The test is performed by asking the woman to stand in front of a machine and place her breast on a clear plastic sheet. There is another clear plastic sheet above the woman, which lowers down and compresses the breast. The woman must stand as still as possible, while an x-ray machine takes a black and white image of the breast tissue. If the results of the original or diagnostic x-ray shows suspicious masses or tissue, a physician may order a fine needle aspiration. The long, thin needle is inserted through the breast into the mass or lump. A physician will extract fluid from the mass and test the cells for abnormalities. Core-needle biopsies use a larger needle with a hollow center. The needle is inserted numerous times in order to abstract enough breast tissue to examine. Tissue from the mass is examined for cancerous cells, but there is no need for stitches after the procedure. A surgical biopsy will be performed to remove a section or the entire lump contained within the breast tissue. The woman will be sedated during the procedure, which requires the surgeon to make a cut in the breast tissue to remove the lump and a margin of tissue surrounding the mass. Breast tissue will be examined for abnormal cells, and the surrounding tissue will be tested to determine if the cells have spread beyond the location of the tumor. This surgical procedure could affect the look and feel of the woman’s breast, depending on the size and location of the mass. Abnormal mammogram results may be further tested through the use of ultrasound technology. Depending on the woman’s original films, the physician may order an ultrasound to be able to see the lumps and determine if the masses are solid or if the tumors or cysts are filled with fluid. The ultrasound produces sound waves at a high level, and theses waves bounce off of the breast tissue and create a sharp image of the mass. The physician may order a fine needle aspiration, core-needle biopsy, or surgical biopsy after seeing the images produced during the ultrasound procedure. I was called back for more films at my first mammogram. Talk about scary! Turns out, the radiologist wanted to focus on an area where I knew I'd had a little fibrocystic nodule pop up before, and that's exactly what it was. Usually, the radiologist will order more films for a suspicious mammogram, and then if he or she suspects something else, will frequently send the woman to get an ultrasound that day. Most hospitals have an ultrasound lab, so they can have the procedure done right away. This is critical because, once an abnormality has been spotted, the faster the woman receives a diagnosis and begins treatment, the better the prognosis One of our editors will review your suggestion and make changes if warranted. Note that depending on the number of suggestions we receive, this can take anywhere from a few hours to a few days. Thank you for helping to improve wiseGEEK!
<urn:uuid:cb09a7c9-8a60-480b-a85f-118acce2c982>
CC-MAIN-2016-50
http://www.wisegeek.com/what-happens-after-an-abnormal-mammogram.htm
s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698542112.77/warc/CC-MAIN-20161202170902-00470-ip-10-31-129-80.ec2.internal.warc.gz
en
0.948131
762
2.765625
3
"Great Expectations", "To Kill a Mocking Bird", and "Romeo and Juliet" are very complex and sophisticated pieces of literature. Each piece of literature has a key character in it. In "Great Expectations" there is Pip, a common boy who gets the chance to become a proper man. In "To Kill a Mocking Bird", Jem is the son of a lawyer who is defending a black man. And in Romeo and Juliet, Juliet is a young girl who thinks she has found true love. Each one of these characters has gained understanding of themselves as a result of confusion or prejudice. Pip is a character that learns a great amount by his own confusion. Throughout Great Expectations Pip believes he is in love with Estella, an appealing, proper girl. He believes he likes Estella because Estella's guardian kept telling Pip how pretty Estella is and how he should adore her. As Pip grew older this stuck to him and his goal in life became marry Estella. However, little did Pip know Estella would never be interested in him because Miss Havisham, Estella's guardian, brought up Estella never to love or show affection. When Pip got older he went back home to where Estella lived, so he could win her heart over. But, that is when he figured out he could never have Estella because she is a cold hearted woman. From this experience, Pip learned never to set your mind on one thing. He also learned to stay open minded. Lastly, Pip learned to be ready for whatever the world throws at him. Jem is a character from the book To Kill a Mocking Bird who learns a lot from prejudice. Jem lives in the south during a really racist time in American history.
<urn:uuid:62cba855-8f01-4686-9413-7c909939de0a>
CC-MAIN-2016-50
http://www.writework.com/essay/great-expectations-kill-mocking-bird-and-romeo-and-juliet
s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698542112.77/warc/CC-MAIN-20161202170902-00470-ip-10-31-129-80.ec2.internal.warc.gz
en
0.977855
360
3.5625
4
Chapter Six: Pedigree Analysis and Applications Pedigrees with autosomal dominant traits will show affected males and females . X-linked recessive traits will affect males predominantly and will be passed from an . Explain how a comparison of concordance in monozygotic and dizygotic . Dominant vs. recessive genes - Understanding Genetics A middle school student from New York June 4, 2004. Homolleleic: A patient has two recessive mutant genes for the locus which have the exact same mutation. Compare to heteroallelic. Incomplete dominance: . Dominant & Recessive Genes DOMINANT AND RECESSIVE CHARACTERISTICS. Characteristics in the left- hand column dominate over those characteristics listed in the right-hand column. Dominant and Recessive Traits in Humans May 18, 2010 . There are many dominant and recessive traits in humans that are exhibited . that more number of men are colorblind as compared to women.
<urn:uuid:ec6b5893-81c5-4789-8fdf-07f7adecd33a>
CC-MAIN-2016-50
http://ylaneqijuz.xlx.pl/compare-dominant-and-recessive-traits.php
s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698542112.77/warc/CC-MAIN-20161202170902-00470-ip-10-31-129-80.ec2.internal.warc.gz
en
0.806585
208
3.8125
4
Describe the assumption 'Internal Mental Processes Internal Mental Processes: - Attention: Focusing one thing at the same time or multiple things. - Memory: Ability to encode,store and recall information - Perception: Taking in and interpreting information from our environment - Language: Using words and images to comunicate and understand what is being said - Problem Solving: Manipulating information to reach a conclusion. Cognitive Deficiences: When an individual does not think or plan sufficently leading to negative behaviours. eg depressed people will give up and not try again. Cognitive Distortions: When the cognitive system inaccurately proccesses information meaning the indvidual sees it as what it isnt. the information has been distorted. In an anorexic patient they see themselves as overweight when they are severely underweight. Describe the assumption 'Computer Analogy' The human mind is likened to a computer. In a computer the information is put in through a key board, encoded and then stored in a specific file. The output would be opening or printing that particular file. In our mind the information is inputed via our senses and then stored in memory and then the output is in the form of behaviour. Processes is known as Input, storage and retrieval. Example: In an exam the teacher gives info (input), it is then stored in memory (storage) and is later used to answer exam questions (retrieval) Describe the assumption 'Schemas' Are a collection of ideas about things. They start off simple and get more complicated as we experience new things. Help us get a sense and understanding of the world around us. Example: Using the ride the bike schema to learn how to drive. Formation of relationships Internal Mental Proccesses: - Attention: Pay attention to the person - Memory: Remembering past relationships - Perception: The way we percieve an individual is important. (Looks) - Language: Communicating with the person - Problem Solving: Considering whether the relationship is worth forming. Social Exchange Theory- Explains how internal mental processes are important Cost-Benefit Analysis- Attracted to those who offer more benefits and less costs. Motivated to minimise the costs and the relationship will last longer if there are more benefits. The Comparison Level- How people feel in a relationship depends on what they expect to get out of a relationship in costs and benefits. Comparison Level for alternatives- Is there an alternative relationship that meets expectations. High comparison level for alternatives: very likely to meet someone who reaches expectations. Low comparison level for alternvatives: not very likely to meet someone who reaches expectations Main Components of CBT Main Aim: Help an indvidual to identify and irrational thoughts and replace these with more rational ways of thinking. - Case Conceptualisation: Understand CBT, Create a list of problems using self-report technique, set inital goals and treatment plan. - Skills acquisition and application: Work on intervention techniques including new skills, set goals and targets and refine intervention techniques. - Ending and Follow Up: Final assessment, Discuss ending treatment and maintence of changes, End treatment- when therapist and client both agree, Top up sessions can take place 3 or 6 months after treatment. - Self-report questionnaire such as Beck's depression inventory to indicate how the client is feeling and how this impacrs everyday life. Will complete several times during treatment. - Identify Self-Defeating beliefs. Once irrational beliefs have been uncovered through questioning the client will be asked to practice optimistic statements. These changes happen over time leading to a change in their dysfunctional behaviour. Main Components of CBT Part 2 Skills Acquistion and Application - Challenge irrational thoughts. - Asks them reasons why they have these thoughts - Helps client see that their claim is irrational and encourages them to make their thoughts more positive. - Relaxation technique depending on preferences. - Breathing techniques focusing on muscle relexation. - Taught to identify tension in certain muslce groups and the difference between tense and relaxed muscles. Questioning aids this. - Therapist acts as a guide. Create an image based on their fear and imagine the opposite. Questioning also aids this. Evaluation of CBT: Effectiveness and Ethical Issue - Derubeis (2005): 3 groups of participants who had depression. Group 1: CBT, Group 2: Anti-Depressants and Group 3: Placebo. After 8 weeks 43% of CBT had improved whereas 50% of the drugs had improved compared to 25% of the placebo group.Must be viewed with caution as it happened after 8 weeks. - Hensley et al (2004): States that CBT has a low relapse rate compared to those who take anti-depressants. - Is not suitable for all clients. Clients who have CBT need to be motivated and that it means CBT will not be effective. It involves a number of weekly sessions. - Therapist is in a position power over the client. The equality becomes more obvious throughout the treatment. as they are working together to establish goals. - Teacher and pupil relationship leading to an imbalance power. Client is actively involved in their treatment and can help them more positve. - Homework for example can be more motivational and lead to a sense of achievement. Loftus and Palmer 1974: Experiment One Aim: Effect leading questions ahve on eyewitnesses' ablity to recall information. Method and Procedure: - 45 American College Students - 7 traffic accidents ranging from 5-30 seconds. - Filled in a questionnaire asking what they had seen and specific questions. - Critical Question: How fast were the cars going when they ________ into each other. - Smashed: 40.8mph - Collided: 39.3 mph - Bumped: 38.1 mph - Hit: 34.0 mph - Contacted: 31.8 mph Conclusion: Changing a single word can affect a witnesses answer. Not only are people poor judges of speed but recall if greatly influenced by the wording of the question. Explanation: Reponse Bias- unsure whether to say 30 or 40mph. Leading question changes the persons memory so they see if more severe than it actually was. Loftus and Palmer 1974: Experiment Two. Aim: Whether leading questions distort eyewitness memory of an event. Method and Procedure: - 150 American College Students. - 1 short film of a car collision. - Completed a questionnaire asking them to describe the accident and answer a series of questions. - Critical question was the speed the cars where going when they __________into each other. - Group 1: Smashed. Group 2: Hit and there was a control group who were not asked about the speed. There were 50 in each group. - 1 week later the particpants returned and asked another 10 questions about the accident. - Question: Did you see any broken glass. Answered with yes or no. - Smashed: 16. Hit 7 and Control 6. Conclusion: Leading question has an effect not only on speed estimates but also information recalled a week later. Memories can be changed through leading questionnaires. Evaluation of Lofus and Palmer Part 1 - Lab experiment was used in both studies. - Allowed the IV to be manipulated (The Verb) and measure the DV (answers to the critical questions). - High level of control= internal validity - Measuring what they set out to measure. - Lab experiment often lacks ecological validty- unable to generalise to the world. - In courts questions would not be done through questionnaire as answers given could lead to the prosecuton of a indvidual. - Shock and emotional impact can not be repeated. - Knew they were in a experiment. - Suspected they would be asked to watch film clips and answer questions. Altered answers to fit with what the experimenter wants from them. Evaluation of Loftus and Palmer Part 2 - Did not know the aim of the experiment or the other conditions. - Not aware of what the experimenter was looking for. - Wa needed to minimise the effects of demand characteristics. Protection from harm: - The clips were staged. - Particpants still could have experienced psychological harm and had a strong emotional response to the clips. - Eyewitness testimony is used in courts and can be key pieces of information that the jury uses to prosecute a person. - Innacuarate information can lead to innocent people going to jail. - Shows our memory is reconstuctive even though we may not be aware of it.
<urn:uuid:855c8d1f-06c0-4fab-a480-48b0618088fb>
CC-MAIN-2016-50
https://getrevising.co.uk/revision-cards/cognitive-approach-4
s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698542112.77/warc/CC-MAIN-20161202170902-00470-ip-10-31-129-80.ec2.internal.warc.gz
en
0.929038
1,834
3.90625
4
Trisodium Citrate Anhydrous Excipient (pharmacologically inactive substance) What is it? Trisodium citrate anhydrous (C6H5O7Na3) is the tribasic sodium salt of citric acid. Anhydrous means that the water has been removed from the molecule. It has a sour taste similar to citric acid, and is salty as well. It is often used as a food preservative, and as a flavoring in the food industry. In the pharmaceutical industry it is used to control pH. It may be used as an alkalizing agent, buffering agent, emulsifier, or sequestering agent. The anhydrous form is used for purposes such as water sensitive dry blends and instant beverages, surfactants, fragrances as well as in tablets and OTC products. The anhydrous form can provide particular benefit in dry products where a long shelf life is required. According to the FDA Select Committee on Generally Recognized as Safe (GRAS) food substances, citrate salts, including sodium citrate salts, are generally regarded as safe when used in normal quantities. Dave RH. Overview of pharmaceutical excipients used in tablets and capsules. Drug Topics (online). Advanstar. 10/24/2008 http://drugtopics.modernmedicine.com/drugtopics/Top+News/Overview-of-pharmaceutical-excipients-used-in-tabl/ArticleStandard/Article/detail/561047. Accessed 08/19/2011 Jungbunzlauer Suisse AG. Trisodium Citrate Anhydrous. http://www.jungbunzlauer.com/products-applications/products/citrics/trisodium-citrate-anhydrous/general-information.html Accessed March 26, 2012.
<urn:uuid:a3ab5bd1-2dd2-40cc-b0fe-2e0b21d9aaff>
CC-MAIN-2016-50
https://www.drugs.com/inactive/trisodium-citrate-anhydrous-490.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698542112.77/warc/CC-MAIN-20161202170902-00470-ip-10-31-129-80.ec2.internal.warc.gz
en
0.875247
390
2.59375
3
1 Answer | Add Yours The MMPI and the Rorschach test are quite contrasting assessment tools whose main commonality is that they both aim to uncover any deviation or dysfunction in the mental processes of clients in order to determine any kind of potential personality issues. While both tests have been around for more than six decades and are still considered valid and useful, they are given to different populations and for different purposes of usage. The Rorschach test, created by Hermann Rorschach in 1921, is a projective test which displays 10 different inkblot patterns in random order. These patterns are presented to the patient, who will use his or her own schema to decide what the inkblot represents. This way, a random stimulus will evoke patients to speak up and rationalize their thoughts. The Rorschach technique is mainly intended for patients that suffer from psychotic spells or behavior such as in the case of schizophrenic patients; in fact, this population was the intended target for Rorschach. Nowadays Rorschach is used for trauma victims, small children, emotionally-disturbed adolescents and to determine psychotic thoughts in criminals, suspects of crime, and people who deny having psychotic behaviors but do have them. The MMPI was developed by Hathaway and McKinley in 1939. After many changes in terms of variability, validity, participants and methodology, the MMPI is now known as the MMPI-2, or the MMPI-A, which is used for adolescents (hence, the letter A), OR THE MMPI2-RF (Restructured Form, published by Pearson). In the MMPI there are 10 of the originally designed scales of measurements. Each scale has a number of items that ask specific questions about the individual. The answers given will determine whether the patient has any level of neurosis or psychosis on specific areas of their lives. These scales include: - Hs- Hypochondriasis, or tendency to feel symptoms with no illness - D- Depression - Hy- Hysteria or the constant worry about being helpless - Pd- Psychopathic Deviate, or antisocial, aggressive and socially inept behavior - MF- Male/Female, or whether the patient typifies his or her gender. - Pa- Paranoid behavior - Pt- Psychastenia, or obsessive/OCD behaviors - Sc- Schizophrenia, or fantastic, odd thoughts and antisocial behavior. - Ma- Hypomania or high excitability, hyperactivity and - So- Social introversion or social anxiety as in agoraphobia. Therefore, the most salient differences between the two tests are the fact that the MMPI has a significantly higher number of active items, that the patient is asked specific questions, that there are 10 different levels of psychotic behavior that aims to be uncovered. Rorschach is a projective test, which shows the patient 10 different pictures to which the patient will respond from based on their own schema and level of ability to comprehend and understand. We’ve answered 318,944 questions. We can answer yours, too.Ask a question
<urn:uuid:eb8629ef-d4d7-40cd-9858-28a40494ef0b>
CC-MAIN-2016-50
https://www.enotes.com/homework-help/how-does-minnesota-multiphasic-personality-377035
s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698542112.77/warc/CC-MAIN-20161202170902-00470-ip-10-31-129-80.ec2.internal.warc.gz
en
0.936499
656
3.5
4
How Did The Superbug Infect Someone? Antibiotic Resistance Is A Threat Antibiotic-resistant bacteria poses a serious health risk; for years, medical professionals warned about the dangers of overusing antibiotics. The threat didn't feel quite so real until now: in April, doctors diagnosed the first United States-based infection that doesn't respond to the strongest available antibiotics. How did the superbug infect a Pennsylvania woman? It's unclear what precisely led to her illness, but scientists do believe they understand how the bacteria made the jump to humans. The origin of the woman's infection is currently unknown, according to The Washington Post. The Centers for Disease Control and Prevention is leading the investigation, working with the woman, her family, and her associates to source the exposure and detect any spread. Dr. Yohei Doi of the University of Pittsburgh told The Washington Post that since Chinese farmers often give Colistin to farm animals, it's possible the resistance started there. When humans eat meat containing the antibiotic-resistant microbes, it's possible for them to contract it, too. Unfortunately, researchers just found the bacteria in United States livestock, CNN reported: a pig intestine tested positive. At first, the infected woman appeared to have a urinary tract infection, according to Penn Live. When doctors tested her urine, they discovered mcr-1 in the sample. A form of E-coli, mcr-1 doesn't respond to colistin, an antibiotic generally reserved for worst-case scenarios. Other antibiotics that can be used to treat mcr-1 exist, Philly.com reported; in this case, infection wasn't a death sentence. The danger is in the potential for the mcr-1 bacteria to assist other bacteria in developing colistin resistance. CDC director Tom Frieden said that such a development could mean "the end of the road" for antibiotics, possibly resulting in increasing numbers of unanswerable infections. Some scientists see the widespread antibiotic resistance as an inevitability, prompting Frieden to call for new drugs, according to CNN. In a foreboding statement, he said, "The medicine cabinet is empty for some patients. It is the end of the road unless we act urgently." For now, medical professionals urge people not to panic. Dr. Neil Fishman of the University of Pennsylvania Health System told the Pittsburgh Post-Gazette that the emergence of antibiotic-resistant bacteria is dangerous but "not a death star." As researchers work on developing new drugs, it's essential for doctors to cut down on unnecessary antibiotic prescriptions. Mcr-1 may not be reason for immediate public concern, but many doctors agree that it's essential for the international medical community to rapidly pursue a solution to the worst-case scenario: the spread of a truly unbeatable superbug.
<urn:uuid:d60b7a14-b0d4-44b0-9f49-110826e5da34>
CC-MAIN-2016-50
https://www.romper.com/p/how-did-the-superbug-infect-someone-antibiotic-resistance-is-a-threat-11435
s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698542112.77/warc/CC-MAIN-20161202170902-00470-ip-10-31-129-80.ec2.internal.warc.gz
en
0.938423
567
3.28125
3
It takes 9 mph to out run a fly. Any less, and the pesky like creatures appear to slipstream, hover, land and generally do all that fussing and buzzing that flies do. They apparently see us, in all our sweaty glory, as large, two-wheeled, moving piles of dog waste. If you pause at the top of climb to admire the view, there they are. Stop to change a flat, and there’s a hat load of them. It’s as if you’re loitering in a fly hatchery. But the moment you set off again, gain a bit of momentum, and hit the magic 9 mph, that’s it, they’re blown away, unable to keep up with your blistering turn of speed. Try it some time. So we did a bit of scientific research, and yes it appears that the average speed of a common fly is just shy of 5mph. So a top-flight, lycra-clad fly – pardon the image – may peak out at 7.5 mph. Just for the record, the drawing above shows the average speeds for some more of our flying friends. Oh, and you read that right at the bottom there. A Horsefly is positively supersonic – with an average speed of 90.1mph! Don’t believe us? Look it up. Thanks to 18 Miles Per Hour for this Universal Truth.
<urn:uuid:7a41f69f-169c-4ba1-b4e1-6b7637acde36>
CC-MAIN-2016-50
http://aushiker.com/bicycle-art-universal-truth-of-cycling-19/
s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698542665.72/warc/CC-MAIN-20161202170902-00278-ip-10-31-129-80.ec2.internal.warc.gz
en
0.945031
305
2.703125
3
The report says that to stabilise greenhouse-gas concentrations at 550 parts per million (a level most scientists think safeish) would require a price of $20-50 per tonne of carbon by 2020-30. That The IPCC’s economic models reckon, on average, that if the world adopted such a price the global economy would be 1.3% smaller than it otherwise would have been by 2050; or, put another way, global economic growth would be 0.1% a year lower than it otherwise would have been.That seems quite reasonable and worth the cost. The report looks at how much CO2 emissions could be reduced by 2030 with various CO2 emission prices. At $0 per ton, emissions can be reduced by approximately 6 tons or 8.5%, at $20 13 tons (19.5%), at $50 19.5 tons (29%) and at $100 23.5 tons (34.5%). To put this price in perspective, each dollar per ton translates to 1¢ per gallon of gasoline (as a gallon emits 20 lbs of CO2), .1¢ per kWh of coal powered electricity (as each kWh produces 2.1 lbs of CO2) or .065¢ per kWh of natural gas powered electricity (1.3 lbs of CO2 per kWh). A $20 price would translate to 20¢ per gallon of gasoline, 1¢ per kWh of coal electricity and .65¢ per kWh of natural gas electricity. The report also looks at what sector the savings would come from with various prices as seen in the chart (click for larger version). 1) The largest potential savings are in buildings. 2) Transportation savings account for only 8% to 14% of total savings. 3) Raising the price of carbon emissions has little impact on reducing emissions from transportation and buildings, but it has a large impact on industry, agriculture and forestry. Based on this report, I think it makes sense to have a carbon tax of $20-$50 a ton that is gradually implemented over many years. A cap and trade system could also be used if a tax is not possible politically. The Washington Post and Green Car Congress also had writeups on the IPCC's findings.
<urn:uuid:31472a01-fb36-439c-9fa5-0495bb9e09d9>
CC-MAIN-2016-50
http://fatknowledge.blogspot.com/2007/05/ipcc-calculates-cost-of-global-warming.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698542665.72/warc/CC-MAIN-20161202170902-00278-ip-10-31-129-80.ec2.internal.warc.gz
en
0.964235
455
3.328125
3
1. Why are the children huddled close together in the beginning of the prologue? To hear the rest of the tale. 2. When the old man is searching for the right words in the prologue what does he do? Sips his wine. 3. What does the old man tell the children they now know when he is talking to them in the prologue? How the vampire came to be. 4. What place did the scholar and shifter of shapes come to through the Dance of the Gods? 5. What reason is it said that the warrior came to join the brother and the friend? To fight to save the worlds. This section contains 5,897 words (approx. 20 pages at 300 words per page)
<urn:uuid:4ff645b0-a98c-43f7-94f1-afeb223ebc5e>
CC-MAIN-2016-50
http://www.bookrags.com/lessonplan/dance-of-the-gods/shortanswerkey.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698542665.72/warc/CC-MAIN-20161202170902-00278-ip-10-31-129-80.ec2.internal.warc.gz
en
0.948329
160
2.578125
3
The resource has been added to your collection It's easy to make spelling fun. Using a variety of activities from sorting to using words in context will help deepen students' understanding of word patterns. Not Rated Yet. Navigate to This External Web Link: Our Terms of Service and Privacy Policies have changed. By logging in, you agree to our updated Terms and Policies.
<urn:uuid:59d73f13-3b50-4518-b3c1-3d77c637a3ec>
CC-MAIN-2016-50
http://www.curriki.org/oer/Sort-Hunt-Write-A-Weekly-Spelling-Program/
s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698542665.72/warc/CC-MAIN-20161202170902-00278-ip-10-31-129-80.ec2.internal.warc.gz
en
0.894994
77
3.078125
3
Old Town in Dixie County, Florida — The American South (South Atlantic) Erected 1961 by Florida Board of Parks and Historic Memorials. (Marker Number F-55.) Location. 29° 36.111′ N, 82° 58.977′ W. Marker is in Old Town, Florida, in Dixie County. Marker is at the intersection of U.S. 19 and Florida Highway 349, on the right when traveling west on U.S. 19. Click for map. Marker is in this post office area: Old Town FL 32680, United States of America. Other nearby markers. At least 5 other markers are within 10 miles of this marker, measured as the crow flies. The History of Fort Fanning (approx. 2.9 miles away); Fanning Springs Bridge (approx. 3 miles away); Triumph the Church and Kingdom of God in Christ (approx. 9.1 miles away); Fletcher Community (approx. 9.3 miles away); Putnam Lodge (approx. 9.9 miles away). Categories. • Antebellum South, US • Native Americans • Settlements & Settlers • Credits. This page originally submitted on , by Julie Szabo of Oldsmar, Florida. This page has been viewed 1,714 times since then and 100 times this year. Photos: 1, 2. submitted on , by Julie Szabo of Oldsmar, Florida. • Kevin W. was the editor who published this page. This page was last revised on June 16, 2016.
<urn:uuid:7197580a-1416-4304-a1e0-981652c9b77c>
CC-MAIN-2016-50
http://www.hmdb.org/marker.asp?marker=17712
s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698542665.72/warc/CC-MAIN-20161202170902-00278-ip-10-31-129-80.ec2.internal.warc.gz
en
0.924969
325
2.515625
3
One World One Ocean, a multi-year educational campaign led by Academy Award-nominated MacGillivray Freeman Films, is using their recently-announced 2012 online video series to draw attention to the difficulties facing oceans and marine life. The campaign's video series, which will examine topics such as the sustainable seafood movement and the intersection of oceans and the arts, forms a part of OWOO's efforts to "showcase and celebrate the importance of the world's oceans, with the goal of catalyzing a movement to protect it," according to a press release. Their videos are also reflective of OWOO's goal of highlighting the need for urgent action to protect the world's oceans. Dr. Sylvia Earle, the campaign's principal science advisor, said in a press release in October, "The world's oceans are in trouble, but the good news is there is still time to save them. Our actions toward the ocean in the next 10 years will define the next 10,000." A 2011 report from the International Programme on the State of the Ocean found that a mass extinction "unlike anything human history has ever seen" is imminent if "current actions contributing to a multifaceted degradation of the world's oceans aren't curbed." Last year, MacGillivray Freeman Films also partnered with World Wildlife Fund and Coca-Cola to produce a film about the fate of Arctic polar bears. The Morning Email helps you start your workday with everything you need to know: breaking news, entertainment and a dash of fun. Learn more
<urn:uuid:f29e2b2f-9090-4d7a-ae2e-3ca78934e8de>
CC-MAIN-2016-50
http://www.huffingtonpost.com/2012/02/06/one-world-one-ocean-campaign_n_1257415.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698542665.72/warc/CC-MAIN-20161202170902-00278-ip-10-31-129-80.ec2.internal.warc.gz
en
0.941312
314
2.53125
3
When folks think of national parks, images of Yellowstone (or Jellystone, for that matter) and Yosemite come quickly to mind. These are the quintessential parks, offering campgrounds, trails for hiking, stunning roadside scenery, and places to unload your picnic basket. (Ain’t that right, Boo Boo?) But the National Park Service oversees many different kinds of sites—from National Monuments to National Seashores. In fact, there are more than 14 different designations for units in the park service. Some of these have the same facilities you expect from a national park. Others don’t. We wanted to share a few of our favorite places and perhaps shed some light on some unexpected destinations—each perfect for your next family road trip. Sit back while I take you on a tour of some of my favorites. These are sites that have historic or scientific importance. Some are pretty humble, preserving an archaeological site or two and the neighboring area. Others serve to protect unique geology, which can spread over miles. Castillo de San Marcos National Monument (St. Augustine, Fla.) – St. Augustine is all about history. The settlement was founded in 1565 by the Spanish, and it is the “oldest continuously occupied European-established city and port in the continental United States.” Central to this history is the Castillo de San Marcos, which is maintained by the park service. This fort was built in 1695 to protect Spanish interests from privateers and the land-grabbing British. Visitors can wander the walls, peek inside the fortifications to old prison cells and soldiers’ barracks. Florissant Fossil Beds National Monument (Florissant, Colo.) – Just the other side of Pikes Peak from Colorado Springs, the Florissant Fossil Beds are an easy drive from Colorado’s southern Front Range communities. Most families come to see the petrified redwood stumps, some 14 feet in diameter. Hundreds of prehistoric insects and plants have left behind their fossilized remains here. In addition, the monument is a great place for watching wildlife or going on a hike (more than 14 miles of trails!). Marianas Trench Marine National Monument (Northern Mariana Islands, Pacific Ocean) – Okay, you won’t be able to road trip to this one, but it’s so unique it needed to be mentioned. This underwater national monument is found east of the Philippines. It covers nearly 100,000 square miles and includes Challenger Deep (the deepest known ocean depth in the world). National Historic Sites For history buffs, the national historic sites do a great job of preserving a particular piece of American history and presenting how that piece affects the whole. Frederick Law Olmsted National Historic Site (Brookline, Massachusetts) – Probably best known for designing New York’s Central Park, Olmsted was also the landscape architect for the Chicago’s World’s Columbian Exposition. This was his home base, and his importance as the founder of landscape architecture and his impact on America’s parks is put center stage. John Muir National Historic Site (Martinez, California) – On the other side of the country you find the park preserves the home of another great figure in history. This one was less interested in creating scenery as preserving it. Yosemite National Park owes its pristine condition to the efforts of John Muir, the naturalist/writer who in many ways launched the conservation movement in America. National Battlefields & Military Parks Now that we’re officially in the midst of the Civil War sesquicentennial, national battlefields and military parks will be the scenes of much hoopla. Places like Gettysburg and Shiloh will see thousands more visitors than usual. Gettysburg National Military Park (Gettysburg, Pennsylvania) – Hailed as the turning point in the Civil War, 51,000 men died at Gettysburg as Lee’s push north was halted by Union troops. The park service goes all out with interpretive talks and great descriptive displays. Certainly a worthy stop for parents looking to expose their kids to a little American history. All on the Great Lakes, the country’s national lakeshores preserve some of the most beautiful geology on the planet, from the dunes of Lake Michigan to the limestone cliffs of Lake Superior. As a Michigander, I am partial to the lakeshores. Pictured Rocks National Lakeshore (Upper Peninsula, Mich.) – Found on the northern shore of Michigan’ Upper Peninsula, this is one of those spots that remains a hidden gem. There are many ways to explore the park. I think camping here makes for a great trip, but there’s plenty for the day-tripper. The best way to quickly apprehend these stunning limestone cliff is to head to Munising and schedule a tour with Pictured Rock Cruises. Sleeping Bear Dunes National Lakeshore (Northwestern Lower Peninsula, Mich.) – The Sleeping Bear Dunes are close enough to a number of Lake Michigan vacation spots that many visitor just visit for the day. If that’s the plan, be sure to drive the Pierce Stocking Scenic Drive (find information and directions at the visitor center in Empire). Kids love the dune climb, and the view of Lake Michigan from the top is one of the most breathtaking you’ll ever come across. Like the national lakeshores, these protect stretches of undeveloped shoreline on the nation’s oceans. Padre Island National Seashore (near Corpus Christi, Texas) – At 113 miles long, this barrier island is touted as the longest undeveloped barrier island in the world. There are, of course, miles of pristine beach, but facing west toward the Laguna Madre, visitors find a world of wildlife to explore. Great camping, fishing, paddling, etc. The NPS’s national trails are the most elusive of the park’s offerings. While they may stretch through a number of states, people often overlook them altogether. The North Country National Scenic Trail (New York, Pennsylvania, Ohio, Michigan, Wisconsin, Minnesota, and North Dakota) – Stretching across seven states, beginning in the Adirondack’s of New York, passing through the Ohio River Valley, along the shore of Lake Superior, and out to the Western Plains, the NCNST is one of the newer parks. The trail is still being developed in parts, and few people even know it exists. Check out the site for the North Country Trail Association to find sections to hike. There’s something for every family, whether you’re looking for a short afternoon hike or a multi-day backpack outing. Santa Fe National Historic Trail (Missouri, Kansas, Colorado, Oklahoma, and New Mexico) – Like the Lewis and Clark National Historic Trail, this one isn’t about hiking. The trail traces the east-west passage of thousands of pioneers who made their way to the frontier via the Santa Fe Trail. The route parallels rivers and streams, meandering around tougher terrain, and you will find modern road builders found the path a good one to emulate. Along the way there are a ton of stops: forts to visit, plaques to read, old wagon ruts to inspect. This is the kind of park that can be the road trip.
<urn:uuid:3fbbe026-8324-4a8e-a0d1-7511091c7671>
CC-MAIN-2016-50
http://www.roadtripsforfamilies.com/our-national-parks-theres-more-than-you-think/
s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698542665.72/warc/CC-MAIN-20161202170902-00278-ip-10-31-129-80.ec2.internal.warc.gz
en
0.920538
1,533
2.90625
3
One person can make a difference Submitted by Edgar on Tue, 2014-02-18 15:04 Sometimes it only takes one person and one thing to make a difference in the lives of thousands. Thus is the story of a young woman from Minnesota and her love for books, which is from the online article: “The 13-Year-Old Who Is Championing World Literacy, a Million Books at a Time.” “Freedom begins in the mind, and one 13-year-old girl aims to liberate children the world over with a book campaign to advocate world literacy. When she was only eight, Maria Keller took it upon herself to begin collecting books for kids. She founded her own nonprofit, Read Indeed, and this month surpassed her goal of sending over a million books to children in various parts of the globe. Rate this article:
<urn:uuid:e6a95941-c850-4f68-a70f-2718b068cfcf>
CC-MAIN-2016-50
http://www.thejenatimes.net/articles/2014/02/18/one-person-can-make-difference
s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698542665.72/warc/CC-MAIN-20161202170902-00278-ip-10-31-129-80.ec2.internal.warc.gz
en
0.957182
176
2.78125
3
The following is a review of Isabel Paterson’s The God of the Machine, a 1943 book arguing for free market capitalism. I wrote this for a college course called Modern Political Thought: The year was 1943. Hitler’s Germany was in the midst of all-out war with Stalin’s Russia and Franklin Roosevelt’s United States. Isabel Paterson, a Canadian-American author, published The God of the Machine, which has become one of the more influential libertarian works of the twentieth century. Paterson was a radical individualist. Hitler and Stalin were avowed collectivists, and the well-known human suffering in Russia and Germany during their reigns was too great to be ignored. Roosevelt’s New Deal represented by far the greatest economic intervention in American history. The tendencies of world powers toward collectivism were Paterson’s main focuses, but societal attitudes also concerned her. Many believed that the war economy was healthy, and some even believed that Germany and Russia had gotten it right, increasing their powers by collectivization. John Maynard Keynes’ interventionism was emerging as the new textbook standard for economic theory. Radical sloganeers were advancing such ideas as “property is theft” and “capitalism means war.” Paterson addressed all of these developments. Her book is an overview of the logic and history behind her answer to the great question that still stands before the political actor today: which interest should be the focus of our political attention, society’s or the individual’s? If not for its analyses of ancient, modern, and contemporary histories, The God of the Machine may have been criticized as a knee-jerk reactionary critique of world leaders’ current collectivist policies; Friedrich Hayek’s The Road to Serfdom was widely brushed aside by the political establishment as reactionary. While the Nobel Laureate’s criticism of Keynesian economic theory was much more influential, Paterson published hers earlier, and she conveys the same message, that government cannot spend an economy back to health. The individualist political philosophy was first described by John Locke, and there are clear similarities between Locke’s philosophy and Paterson’s. Obviously both are individualists. Both believe that government exists to protect the natural rights of life, liberty and property, and that those rights are gifts to man from God. With respect to property, there are differences between the two thinkers. For Locke, ownership of objects in nature is initiated when people mix their labor with those objects. Paterson, contrarily, claims that ownership exists because of the physical laws of space and time. She explains this in a matter-of-fact manner, stating “two bodies cannot occupy the same space at the same time” (180). Elaborating upon this obvious statement, which at first sight appears irrelevant to the matter at hand, she points out that no one would farm if his land could be used–without restraint–by anyone who stumbled upon it, nor would any family build a dwelling, if every man were permitted to come in and go out of it as he pleased; the man farms and builds for his own private purposes (180). David Hume’s legitimate criticism of social contract theory was precisely that it was a theory. It was based on thought experimentation, and had no historical evidence. Locke was a social contract theorist, and Paterson accepts his theory, but she is not a social contract theorist; she is a social contract historian. She makes little reference to an imaginary social contract, as Locke did, because, unlike Locke, she can point to a historical social contract, the United States Constitution. The modern liberal, socialist, utilitarian, and utopian thinkers came after John Locke. The father of classical liberalism was long dead before any opportunity to rebuff their arguments presented itself. In The God of the Machine, Paterson plucks Locke’s intellectual sword from the grave and carries it into battle against the likes of Keynes, Jeremy Bentham, John Stuart Mill, Karl Marx and Pierre-Joseph Proudhon. Before that battle can be understood, however, it is necessary to explore Paterson’s social philosophy. Paterson uses a metaphor extensively throughout the book, comparing society to an electrical circuit. She perceives individual free will as the “dynamo” in society; in the metaphor, free association and exchange is the “electricity” of the circuit. She argues that all progress comes from individual action. Only individuals can think–groups cannot–and “in human affairs, all that endures is what men think” (18). Paterson’s high potential energy circuit is closed and circulating maximum energy in a free enterprise system, when men are left to think and act however they wish. It is static under a totalitarian system, when nearly every action must be commanded or permitted (78). Government intervention into the market is represented by a “leak” in the otherwise complete high potential energy circuit. A free enterprise society, then, will gain more and more prosperity and power, while a totalitarian society will tend to lose both. The society in which government is most limited will be the most powerful society, in production and in war (61). Societies that are more powerful and prosperous become that way because they devise political systems that allow the greatest freedom of human action (13). Their circuits of energy are least broken. Paterson borrows heavily from Herbert Spencer’s ideas. She replaces Spencer’s “social organism” with her “high potential energy circuit,” and does so with favorable results. Spencer laboriously pursues metaphors between government types and various biological organisms, flying over the heads of readers possessing even above average biological understanding. Paterson clarifies Spencer’s message, by using a metaphor the average person can understand, a simple electrical circuit, and she simplifies his message by condensing it. Paterson draws her “Society of Contract” and “Society of Status” from Spencer verbatim. The society of contract recognizes the divinely given freedom and responsibility of each individual. In the society of contract, “society consists of individuals in voluntary association. The rights of any person are limited only by the equal rights of another person” (41). The society status, on the other hand, institutes privilege. In Paterson’s mind, instituting a privileged status for anyone in society will lead to a class division between rulers and subjects. She believes the society of status works against nature. “The logic of status,” she says, “ignores physical fact. The vital functions of a living creature do not wait upon permission; and unless a person is already able to act of his own motion, he cannot obey a command” (42). Paterson says that her ideal societal relationships are best exemplified in what is called today’s middle class, which is not a class at all, but a classless society of contract (49). Paterson bolsters her argument with the historical example of ancient Roman civilization. She claims that Rome failed because it was a society of status, and the bureaucracy, the privileged class, became too big (and I am unqualified to argue this point with her). Too much energy was diverted from production into the bureaucracy, so that almost no energy was making it all the way around the circuit. When the productive class could no longer support the bureaucracy, the bureaucracy came down on the productive class and attempted a planned economy. Prices were fixed and the currency was debased (39). Roman civilization was torn apart. Paterson writes, “Men who had formerly been productive escaped to the woods and mountains as outlaws, because they must starve if they went on working” (40). Paterson says that the founding of the United States was the first and only time a society of contract was ever attempted. The famous principle of the Declaration of Independence, that all men are endowed by their Creator with the inalienable right to life, had never previously been used as a basis of political structure (41). The United States was an experiment in liberty. Paterson points out that in the United States, for the first time, freedom was recognized as an indivisible whole; to speak of various “freedoms” was to revert to European terminology (68). The proof of the society of contract’s worth was the unprecedented power and prosperity of the United States. Paterson derides European social philosophy as “mechanistic,” saying that it forgets that each individual naturally has freedom and responsibility, and it essentially reduces people to automatons. She blames this on the arrogance of “academic planners” and the lust for power of self-described humanitarians. (145-147) Paterson’s objection to “academic planners” returns us to the aforementioned intellectual battle between Paterson and thinkers like Keynes, Bentham, Mill, Marx, and Proudhon. She says that John Stuart Mill, under the banner of liberty, in fact sacrificed it to society, saying that it was only justifiable insofar as it “served the collective good.” “Then,” writes Paterson, “if a plausible argument can be put forward that it does not–and such an argument will seem plausible because there is no collective good–obviously slavery must be right” (150). Paterson views Bentham in much the same light, as another prominent philosopher who sold out liberty to the collective good. Bentham is famous for attempting to devise a political system according to the principle of “the greatest good for the greatest number of people.” Paterson says that this “is a vicious phrase; for there is no unit of good which by addition or multiplication can make up a sum of good to be divided by the number of persons. Jeremy Bentham, having adopted the phrase, spent the rest of his life trying to extract some meaning from his own words. He meandered into almost incredible imbecilities, without ever perceiving why they couldn’t mean anything” (90). Paterson calls Karl Marx a fool for thinking his utopian idea was an accurate prediction of the future (155). She says Marx was a “parasitic pedant, shiftless and dishonest, he wanted to put in a claim on ‘society’ solely as a consumer” (96). His theory of class war, she says, is “utter nonsense.” Elaborating, she says, “it is physically impossible for ‘labor’ and ‘capital’ to engage in war on each other. Capital is property; labor is men” (97). She also criticizes Marx’s dialectical materialism, claiming that it “reduces verbal expression to literal nonsense” (96). Paterson compares the phrase “dictatorship of the proletariat” to the phrase “roundness of a triangle” (96). Keynes famously prescribed increasing government employment as a remedy for recessions. Because recessions come with unemployment and slumping consumer demand, the theory goes that government can augment demand and employment by hiring more people, who will be consumers, multiplying demand. In criticizing Keynes, Paterson employs reductio ad absurdum. She brings up the example of paying a man to stand on the beach and throw pebbles into the ocean, arguing, “it would be just the same as if he were in a ‘government job,’ or on the dole; the producers have to supply his subsistence with no return, thus preventing the normal increase of jobs” (192). Paterson says that Proudhon is responsible for “perhaps the most senseless phrase ever coined even by a collectivist” (179). She is referring to Proudhon’s famous slogan, “property is theft.” Clearly this statement is non-sensical, because theft presupposes property (179). The slogan follows in the footsteps of Jean-Jacques Rousseau, who may well have agreed with its spirit, if not its words. Both Rousseau and Proudhon saw property as an unnatural institution, and the source of inequality and unfairness. Paterson contends otherwise, asserting that unfairness and inequality are unavoidable in any system, and that sacrificing property rights for the sake of fairness is foolish (200). She explains, “The incidental hazard of a free society, which is that of nature, that some individuals may be temporarily unable to command a livelihood, is the permanent condition of every man living in a collective society. In giving up freedom, the individual gets nothing in return, and gives up every chance or hope of ever getting anything” (200). Paterson criticizes collectivists by analyzing their language and showing its errors. She frequently uses “nonsense” as a descriptor of their rhetoric. There is a tinge of hypocrisy in her critique, because she does not hold herself to the same exacting standards. Proudhon’s “property is theft” is “senseless” to Paterson, but Paterson herself, in no uncertain terms, asserts that “profit is production,” which is evidently “senseless” to anyone with an understanding of economics (221). Even accepting Paterson’s political principles and her criticisms of the collectivists, there remains a very important question: What is the alternative? What political system does Paterson suggest? Her ideal society is the “the private property, free enterprise society of contract,” but in The God of the Machine, the political apparatus responsible for protecting property and enforcing contracts is difficult to pin down. The absence of a comprehensive, alternative political system may be the most prominent weakness of her argument. Paterson thinks the very idea of political “leadership” is a threat to civilization, because every free man must lead his own affairs (80). She echoes classical liberals in saying that, ideally, government is a necessary evil. Paterson explains, “since human beings will sometimes lie, shirk, break promises, fail to improve their faculties, act imprudently, seize by violence the goods of others, and even kill one another in anger or greed, government might be defined as the police organization” (69). Her ideal system seems to be liberty with a police man, a system that completes her high potential energy circuit for the machine of society, maximizing the creative use of human energy. It requires equal protection of the laws, with privileged status for no “type” of person, be they impoverished, wealthy, numerous, or within government. Paterson never even posits a method of determining who will make up the “police organization” that is government. Some aspects of Paterson’s political system are clear. She dislikes passports, or any other national identification (45). She thinks “democracy inevitably lapses into tyranny” (16). She favors a metal currency, saying the economist who advocates fiat money is “below the mental level of savages” because he has “forgotten how to apply number” (202). She rejects compulsory public education as “the complete model of the totalitarian state” (258). She also rejects licensing and regulation, which are impediments to free association (50). However, Paterson’s political structure remains enigmatic. As long as every individual is treated equally by the law, their natural rights are protected, and contracts are enforced, it does not concern her who governs, or how they are chosen. Paterson, Isabel (1943). The God of the Machine. New Brunswick, NJ: Transaction Publishers. ISBN: 1560006668 Filed under: Personal, Politics | Tagged: bentham, capitalism, economics, freedom, herbert spencer, history, hitler, Isabel Paterson, john locke, keynes, Marx, mill, Proudhon, rome, roosevelt, Rousseau | 2 Comments »
<urn:uuid:9dd4a74e-1de0-417a-bd33-9642533559cf>
CC-MAIN-2016-50
https://fearistyranny.wordpress.com/
s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698542665.72/warc/CC-MAIN-20161202170902-00278-ip-10-31-129-80.ec2.internal.warc.gz
en
0.957216
3,327
2.953125
3
reflection and refraction When a wave hits a boundary between one medium and another some of it is reflected. angle of reflection is the same as anfle of incidence reflection happens when there is a change in densities. Refraction waves travel at different speeds in different densities when a light ray passes from air to glass the ray slows down and is bend towards the normal. when a light ray passes from glass to air it speeds up and bends away from the normal virtual and real images Real image when light from an object comes together to for an image on a screen. Virtual image when light rays diverge so light from object looks like it has come from a different place. A coverging lens is convex (bulges outwards) parallel rays of light converge to a focus axis: line passing through middle of lens focal point: where rays come together each lens has a focal point in front and behind Drawing a ray diagram for a converging lens draw ray from top of object to lens parallel to axis of lens draw ray from top of object passing through middle of lens draw refracted ray passing through fcoal point mark where rays meet reapeat for a point on bottom of object distance from lens to focal point clamp lens at one end of track and white card at other end set up near a window focused on a distant object move card until image is focused measure distance from lens to image distance from lens affects image clamp lens at one end of track and white card at other end with object on other side of lens to card move object until image is focused measure distance from object to lens. OBJECT AT 2F: REAL UPSIDE DOWN IMAGE SAME SIZE AS OBJECT AND AT 2F OBJECT BETWEEN F AND 2F : REAL UPSIDE DOWN IMAGE BIGGER THAN OBJECT AND BEYOND 2F OBJECT FURTHER THAN F: VIRTUAL IMAGE RIGHT AY UP BIGGER THAN OBJECT AND ON SAME SIDE AS LENS a refracting telescope has an eyepiece lens and objective lens objective lens coverges rays of light to form a real image at the focal point of objective lens the light from real image enters eyepiece lens which spreads rays out so they leave at a wider angleand fill more of your retrina making image look magnified. incident ray parallel to axis will pass tgrough focal point when its reflected. incident ray passing thrrough focal point will be parallel to axis when its reflected. refracting telescopes part two converging lenses draw ray from top of object to mirror parallel to axis draw ray from object to mirror passing through focal point. draw reflected ray passing through focal point. draw reflected ray parallel to axis of mirror mark where rays meet. reflecting telescope collects parallel rays of light from space the larger mirror reflects rays onto a smaller mirror in front of large mirrors focal point . the small mirror reflects rays of light through hole in second mirror a real image is formed behind mirror an objective eyepiece lens is used to magnify image
<urn:uuid:ab8bbd27-6db4-4a7f-a961-5e34e39207b8>
CC-MAIN-2016-50
https://getrevising.co.uk/revision-cards/lenses_5
s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698542665.72/warc/CC-MAIN-20161202170902-00278-ip-10-31-129-80.ec2.internal.warc.gz
en
0.883215
648
4.34375
4
“The way to really grasp Shakespeare is to get up and do it,” says Mary Hartman. “Technology affords us the opportunity to make it easier, but at heart it’s just getting up and doing it.” Hartman is director of education for Shakespeare & Company, which is producing a multimedia companion guide for Macbeth to help high school teachers teach the play. In February Shakespeare & Company videotaped a group of thirty-one students practicing exercises and rehearsing scenes. The footage will be integrated into a multimedia guide, scheduled for release next year in DVD, videotape, and print formats. Selections will also be available on the Internet free of charge. The study guide, being produced with NEH support, is designed to appeal to the teacher navigating Shakespeare with a class for the first time, or for the teacher seeking a new approach. Along with footage of students acting out key scenes, the guide contains printable assignments and technique instructions, annotated text, a synopsis, character summaries, and historical background. “We interviewed the students extensively during the videotaping,” says technical consultant Pam Johnson. “They talked about their own lives in relation to the events that happened in Shakespeare’s plays. Many, if not all of them, have their own tragedies, dilemmas about relationships, or axes to grind over what’s the right thing to do. When they’re struggling with these things, they don’t have the word to put to these experiences, but Shakespeare does.” Modeled on the actor-managed troupe of Elizabethan times, Shakespeare & Company trains professional actors and performs plays as well as educating students. Based in Lenox, Massachusetts, Shakespeare & Company has led performance workshops in local schools for decades. Several dozen area schools of all levels have played host to these events, some as brief as a day, others lasting nine weeks. During the past fifteen years, NEH has supported summer performance institutes for teachers at Shakespeare & Company. The company is now looking to digital media and the Internet as a way to reach an even greater number of secondary school teachers. The company is working in collaboration with five high school teachers from Massachusetts and New Haven, Connecticut, to develop and fine-tune Macbeth in Action. They are adapting techniques the company uses professionally to rehearse and learn lines, and drawing up lesson plans with the teachers’ recommendations. Together they have formulated three distinct approaches, or lesson plan blueprints. One approach focuses on the play’s characters, encouraging teachers not to begin discussion with Act I, scene 1, but rather to concentrate on each character, one by one, contrasting passages that reveal the character’s makeup. In another curriculum, the emphasis is on preparing students to move from the page to the stage. Performance exercises are provided, along with a ninety-minute cut of the text. According to Pam Johnson, such a cut is in keeping with the Shakespearean tradition. She says that in the time of Elizabeth I, a full four-hour play was rarely, if ever, performed. The third approach is more flexible. It provides recommendations and techniques for each scene, moving through the play in chronological order and combining techniques so that a teacher who is comfortable with the material will have a choice of exercises. The students who participated in the filming in February came from western Massachusetts and Chatham, New York, just over the state line from Lenox. They ran through exercises, choreographed fight scenes, and gave their thoughts on-camera in a short segment about the play’s violence. In one filming session a student rehearsing as Macbeth was told to leave the stage and go through the motions of killing Duncan, the king of Scotland, then return to the room and do the scene. He re-entered the stage, shaking. With the camera rolling, he was asked how he felt. “I didn’t know how to kill him,” the seventeen-year-old said. “He was my king, my friend, a guest in my house, how could I kill him? Could I kill him in a gentle way? I felt like so much less of a person coming in to do the rest of the scene.” It is a tragic history play, but some of Shakespeare & Company’s teaching techniques look more like gym class: students line up and run forward, one by one, each shouting a single line of Shakespeare before running away. In another drill students pass a ball around as they call out lines from key speeches. “The heart of what we’re doing is play,” Hartman says. “It helps to get your feet wet in a nonthreatening and playful way. We pull out forty or fifty words from a play, toss the students a ball, and get them to speak out some of Shakespeare’s words to each other with each pass of the ball. It’s suddenly almost spontaneous poetry.” By freeing the lines from the page, educators emphasize the performance aspect of the script, and also give students to the opportunity to develop their own interpretations of the scenes in question. Glosses, or short explanatory notes in the margins of a text, were first written into Shakespeare’s works as a way to aid understanding or to suggest an interpretation to a particular scene or speech. Rather than teach a text riddled with additions by anonymous authors, which some educators find unreliable, misleading, and less pure than the original text, the teachers collaborating with Shakespeare & Company have put the onus on the performers. “We question students on the motivations and expectations of a character. And we make it clear that our questions are not definitive,” says Hartman. “We try to take opportunities to illustrate the plasticity of interpretation.” The company comes equipped to teach key scenes from a play, but instructors strive to be resources for the students. Some may need a synopsis to lay the foundation, or an introduction to commonly held interpretations of Shakespeare before they perform their own variation. Others need to spend time on the logistics of a scene--where to stand, when to enter--and then graduate to harder questions of character motivation and affectation. “Rather than a resource with all the answers, we are developing a resource that gives students the opportunity to discover their own answers,” Hartman says. “Some students will get confused at first. They’re used to the traditional education model, which both presupposes and provides a right answer. Our way can be frightening at first. But then, once they begin to play in this realm and discover their own answers, the payoff is enormous and genuine learning takes place.” Hartman notes that whereas actors traditionally operate within the framework of the director’s vision, Shakespeare & Company believes that the raw material of the plays has enough richness to maintain its roots while fostering a multitude of interpretations. Teachers are challenging students to study and explain their own performances, and to analyze why they have delivered a line in a certain way. Today’s teachers are often faced with the notions students have already formed about plays before they have even read them. The 1996 movie Romeo and Juliet has colored their ideas, according to Mimi Paquette, a high school teacher in Holden, Massachusetts. “The kids who were at a low level could not take themselves out of the world of that film,” she says. “When we got to reading the text, some kids were hung up, wondering when Romeo’s going to jump in that cool swimming pool to make a speech to Juliet, like he did in the movie.” Springfield Central High School teacher Michael Cremonini, who has trained with Shakespeare & Company and is helping to develop the multimedia guide, assigned Macbeth to one of his classes. Even before they finished, they were anxious to act out the play, he says. The class decided on the key scenes to act out, and began to cast the parts. One student did not want to participate. When Cremonini asked him which part he would most like to perform, the student confessed he was a poor reader and said he wanted to play one of the grooms, tricked into a drunken stupor by Lady Macbeth, sleeping outside of King Duncan’s bedchamber. Cremonini agreed. The appointed day arrived, and the student was in costume. As the scene was proceeding, with Macbeth creeping past to kill Duncan, the student snored away--loudly enough to obscure the dialog, Cremonini said, and he had to be hushed. Still the student thanked his teacher for letting him participate in his own way. Students can be inventive at avoiding opportunities to speak an awkward tongue in front of peers. Somebody, they say, has to run the camera or take notes. The stubborn ones can be drawn in, Hartman asserts, if the teacher shows a willingness to encourage them. When planning the performance of Macduff and Lenox’s arrival at Macbeth’s gate in Act II Scene 3, one student volunteered service as the creaky gate. “It’s a little silly, but by saying yes to that, suddenly that student understands clearly that their ideas are valued,” Hartman says. “The next idea might actually be about the character. Say no and the student may never volunteer another idea again.” A female student was similarly inclined to minimize her role in the class’s dramatization of scenes from Hamlet. “She said, ‘I don’t want to do anything” and we said, ‘can you do nothing right here?” says Hartman. Cast as Ophelia, Hamlet’s unrequited love, she brooded in silence while her classmates surrounded her and performed her thoughts. “She was doing exactly what she requested, but suddenly she was right in the center and enjoying herself, taking her role very seriously.” Cremonini taught Shakespeare to his own class, over their protests. “They said they shouldn’t have to read it because they weren’t going to college anyway,” Cremonini said. But the class persevered and continued to read and act out the play. The class went on to develop an interest in reading Shakespeare’s sonnets. “One student said to me that if he could read and understand Shakespeare, he could read anything.” Paquette allows that there is a language barrier, but says it can be surpassed. “The language has changed, but the emotion and passion haven’t changed. When I teach Shakespeare, we start by playing with language and conquering fear of language,” she says. “Eventually kids can see it’s about basic human emotions, only the pronouns and verb endings are different. “When they start playing with words, they begin to realize how much richer their language can be to describe human emotion. They start using the language, and listen intently to their classmates as they read. These are not just honors but struggling students. One kid who could barely read a page did an honors-level project on Hamlet. The kids in my remedial class keep asking me, ‘Who gets to read this book? Why are the honors kids reading this?’” One of Paquette’s other pupils, sixteen-year-old Patrice LaHair, was one of those honors students. With a background in theater and previous experience reading Shakespeare, she says she responded well to Paquette’s activities. Others in the class needed more time, she says. “The way you’re taught something affects how you like it,” Patrice says. “Some people really don’t like Shakespeare because they don’t want to think enough to read between the lines. Miss Paquette would have something else you could get involved in if you didn’t like reading.” The entire class learned an Elizabethan dance. This year, LaHair finds herself studying Shakespeare with a different teacher, but this time the presentation is more traditional. The class has read the text of Othello and listened to an audiotape version, but there is no standing on chairs shouting lines, like last year. “Now,” she says, “I detest the classroom part of it but still love the story.”
<urn:uuid:3cd29d84-fbf7-4258-b4eb-e3fddcb53329>
CC-MAIN-2016-50
https://www.neh.gov/humanities/2003/julyaugust/feature/shakespeare-out-loud
s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698542665.72/warc/CC-MAIN-20161202170902-00278-ip-10-31-129-80.ec2.internal.warc.gz
en
0.966543
2,607
2.8125
3
ANIMATED ATLAS: THE EARLY COLONIES Animated maps with overlaid illustrations and captions help students visualize the where, when, and why of North America's early colonies: the Spanish in Florida and the West; the French in Canada; the English at Roanoke and Jamestown; the Pilgrims at Plymouth; the Puritans at Boston, Hartford, and New Haven; Roger Williams in Rhode Island; the Dutch in New Netherland; New Sweden; and Catholics in Maryland. Europe's religious intolerance is seen as a major motivation behind many of these colonies, while landforms and water routes are seen to influence site selections. Grades 4–10. Color. 22 minutes. SVE. ©2003. "Through the use of color, various overlays, and easily recogized icons, the program creatively points out how a number of factors...combined to shape the initial development of the United States...A solid addition for a U.S. history unit on colonial America..." "Animated maps of North America, South America, and Europe effectively illustrate and explain early development of the colonies and relationships between the countries."—Booklist "...this video provides a great deal of interesting information."—School Library Journal
<urn:uuid:10203f43-2ce5-45bc-a6c4-0268fe6298e8>
CC-MAIN-2016-50
http://catalog.socialstudies.com/c/product.web?nocache@0+s@eCBhW8_vUD.Q.+record@TF36457+Title@ANIMATED%20ATLAS%3A%20THE%20EARLY%20COLONIES
s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698542972.9/warc/CC-MAIN-20161202170902-00086-ip-10-31-129-80.ec2.internal.warc.gz
en
0.896691
255
4.09375
4
Eid Al-Adha, known as the ‘Feast of the Sacrifice’, is one of the most significant festivals on the Muslim calendar and lasts for four days. The holiday marks the end of the Hajj Pilgrimage and serves as a day to remember the Islamic profit Ibrahim, and his willingness to sacrifice of his son, Ismail (Ishmael) as an act of submission to Allah, before Allah intervened and gave Ibrahim a lamb to slaughter in the place of his son. On this day, Muslims in countries around the world start the day with prayer and spend time with family, offer gifts and often give to charity. It is customary for Muslim families to honor Allah by sacrificing a sheep or goat and sharing the meat amongst family members. Feast of the Sacrifice Oct. 26 Photo Brief: Baltimore Halloween Brew-ha-ha, Islam’s Feast of the Sacrifice, zombies in Berlin, Halloween at the zoo, pregnant pigs in a cage Baltimore Bike Party, Graphic animal slaughter images from the first day of Islam’s Eid al-Adha celebration, or “Feast of the Sacrifice,” a polar bear and white lion play with pumpkins in a Russian zoo, European pigs will soon have more room in their cages, zombies from the Middle Ages emerge from the Berlin Dungeon and more in today’s daily brief.
<urn:uuid:47d60eea-b9e5-42a5-afa0-48777429202b>
CC-MAIN-2016-50
http://darkroom.baltimoresun.com/tag/feast-of-the-sacrifice/
s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698542972.9/warc/CC-MAIN-20161202170902-00086-ip-10-31-129-80.ec2.internal.warc.gz
en
0.948565
281
2.734375
3
Friday, June 06, 2008 Virtual Disney World Google Earth now offers a 3D virtual tour of Disney World in Florida. It's possible to explore four of the Disney theme parks and some of the hotels as well. Why is this significant? Now it's possible to preview Disney World with students who benefit from previewing. For some kids, the ability to preview helps promote positive behaviors and helps ease transitions. Kids on the Autism Spectrum or kids with cognitive disabilities may benefit from preparation beforehand. And exploring Disney World virtually may help families anticipate challenges or strategize their visit to make the experience pleasurable for all. Share this information with your student's families so they can explore this feature especially if they are planning a Disney trip in the near future. What do you think? Do you think this is a feature that can benefit families?
<urn:uuid:9b6fd82b-e3e9-426e-9af6-ef84b6413d65>
CC-MAIN-2016-50
http://teachingeverystudent.blogspot.com/2008/06/virtual-disney-world.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698542972.9/warc/CC-MAIN-20161202170902-00086-ip-10-31-129-80.ec2.internal.warc.gz
en
0.925889
171
2.53125
3
Wednesday, June 16, 2010 One day, I was observing in his classroom and witnessed the emotional breakdown. He had to write his spelling words in cursive and one word included the "os" cursive combination as in "most." That one was particularly troublesome and really set him off. He just couldn't get it to look right. (Try it. It's a difficult combination, especially if you struggle with cursive). Later that day, we worked together using a computer. I had a chance to show him some of the fonts built into Microsoft Word and asked him to choose a font that looked good to him. As we tried all the various font choices, he zeroed in on a cursive font and said,"That's the one!" We also customized the size of the font and talked about how he could set it as the default font on his home computer. A month later, I got a call from his mother. She told me that her son now came home from school, went to the computer and willingly completed his homework on his own, using the cursive font. She was ecstatic; the battles over writing assignments were over. Sometimes, it's the solution that's right in front of our faces. And this one didn't cost anything. Friday, June 11, 2010 What does this mean for us? We know that too often, paper creates the disability for many students. In a non-paper environment, their disability disappears. An invaluable document detailing typical activities using paper and paperless alternatives exists for all to use as a resource. There are great ideas and abundant resources here. It's good teaching and it's Universal Design - embedding UDL principles proactively into instruction. While I applaud the effort that started the paperless trend, I encourage you to join the paperless bandwagon for your students, not just for the environment. Wednesday, June 09, 2010 “I could not live without my iPhone” How many times have you heard someone say that? Or something similar about a gadget, whether it’s an Apple device or otherwise? Well, for me, that's pretty much true. Welcome to the world of autism and assistive technology." This is fantastic. I loved reading how technology helps Jamie Knight achieve greater independence. Technology makes a difference. It is about the tools and this article reinforces the importance of showing our students different tools so they can develop their own toolbelt for life beyond school.
<urn:uuid:b02dbf04-4c62-40bf-9b6b-e8bd4be3cb9f>
CC-MAIN-2016-50
http://teachingeverystudent.blogspot.com/2010_06_01_archive.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698542972.9/warc/CC-MAIN-20161202170902-00086-ip-10-31-129-80.ec2.internal.warc.gz
en
0.982109
505
2.671875
3
This time it’s in the Dallas Morning News and the article is called Video Games Encourage Teens to Check Out Libraries. The good news: We learn that the Forth Worth Public Library is creating a room dedicated to gaming. Can’t wait to learn more about that! The bad news: Yet another newspaper story that lets someone (this time a professor at the University of Maryland) get away with sweeping generalizations about gaming. Melanie Killen claims, “a vast majority of the games have negative content and the consequences can be destructive, including increased impulsivity, aggressive behavior and shorter attention spans,” without providing any proof at all. Whether that’s her fault or the newspaper’s, let’s just nip this in the bud right now in case you encounter this argument at your own library. First of all, 85% of the games sold in 2006 were rated E (for Everyone), E+10 (ages 10 and up), or T (for Teen). That means only 15% of video games sold in 2006 where rated for adults, so that’s hardly a “vast majority.” Only 4 of the top 20 games sold in 2006 were rated M (Mature) (PDF). That would be 1/5, which means the “vast majority” of games sold were actually appropriate for kids and teenagers. Second of all, let’s define what we mean by “destructive” and “aggressive behavior,” because as video games have become more popular, youth violence has actually dropped, despite those stories that grab all the headlines. Third, “impulsivity” and “shorter attention spans” can be attributed to many things, not just video games. If I’m not mistaken, these arguments were made against television forty years ago, so it’s not like this is something new and it’s not like you can blame video games as the master evil behind these problems. In fact, one wonders if shorter, less complex newspaper stories that fail to provide facts or links for further information or, you know, evidence/data/research might contribute to that trend, too. What’s really ironic is that Killen is later quoted as saying, ” ‘There is a concern in our society about the preparation of the next workforce in terms of reading and math and science skills,’ she said. ‘We should be doing everything we can to facilitate that, and I think that allowing video games to go in libraries is a bad signal.’ ” If you run into this misguided assumption yourself, you can point folks to this report or this report or this report (PDF), which discuss how gaming can help with exactly those things. The worst part? They cite a figure for the number of libraries offering console or PC gaming programs that is flat out wrong, all the more curious since the summary of the survey is available online (PDF). Had they bothered to point to it from the article, they might have gotten it right. Sadly, the DMN doesn’t allow comments or trackbacks, so their readers will never know just how wrong the paper got this story. Luckily, the rest of us do.
<urn:uuid:5c3c1b71-f4e1-4fc1-a076-417a94446e10>
CC-MAIN-2016-50
http://theshiftedlibrarian.com/archives/2007/10/15/another-article-about-gaming-and-libraries-same-old-story.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698542972.9/warc/CC-MAIN-20161202170902-00086-ip-10-31-129-80.ec2.internal.warc.gz
en
0.952297
677
2.609375
3
ANNAPOLIS, Md. — Maryland’s Department of Natural Resources is urging state residents to use caution as bears emerge from their winter hibernation. DNR officials say natural foods are scarce in the early spring and bears often seek human sources of food. State officials say residents can help keep black bears wild by cleaning or removing any outdoor items that may contain or smell like food. That includes locking garbage in a bear-proof trash container, or keeping garbage inside until the day of pick-up. Residents should also rinse trash containers with ammonia to eliminate food odors. Outdoor grills should also be cleaned of food residue or kept inside. And birdfeeders should be taken down between April and November. Follow WNEW on Twitter. (© Copyright 2013 The Associated Press. All Rights Reserved. This material may not be published, broadcast, rewritten or redistributed.)
<urn:uuid:c05c75af-5283-4a4a-8aec-b28ec46060a4>
CC-MAIN-2016-50
http://washington.cbslocal.com/2013/04/16/its-bear-season-in-maryland-lock-up-your-trash/
s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698542972.9/warc/CC-MAIN-20161202170902-00086-ip-10-31-129-80.ec2.internal.warc.gz
en
0.932311
179
2.6875
3
#1 Ranked Children’s Hospital by U.S. News & World Report MyPatients provides referring primary care providers with secure access to their patients’ information. Boston Children's has launched the world's 1st program dedicated to offering hand transplants to children who qualify. Innovation insider is a semi-monthly e-newsletter analyzes innovations at Boston Children’s, other academic medical centers and from industry. Read the latest blog by a Boston Children's doctor, clinician or staff member. There are many ways you can help children and their families get the care they need. A trisomy and a monosomy are types of numerical chromosome abnormalities that can cause certain birth defects. Normally, people are born with 23 chromosome pairs, or 46 chromosomes, in each cell — one inherited from the mother and one from the father. A numerical chromosome abnormality can cause each cell to have 45 or 47 chromosomes in each cell. What are trisomies? The term "trisomy" is used to describe the presence of an extra chromosome — or three instead of the usual pair. For example, trisomy 21 or Down syndrome occurs when a baby is born with three #21 chromosomes. In trisomy 18, there are three copies of chromosome #18 in every cell of the body, rather than the usual pair. What are monosomies? The term "monosomy" is used to describe the absence of one member of a pair of chromosomes. Therefore, there are 45 chromosomes in each cell of the body instead of the usual 46. Monosomy X, or Turner syndrome, occurs when a baby is born with only one X sex chromosome, rather than the usual pair (either two Xs or one X and one Y sex chromosome). We are grateful to have been ranked #1 on U.S. News & World Report's list of the best children's hospitals in the nation for the third year in a row, an honor we could not have achieved without the patients and families who inspire us to do our very best for them. Thanks to you, Boston Children's is a place where we can write the greatest children's stories ever told.”
<urn:uuid:f50ec4d0-2a7d-40fa-81b8-fa3256ba0235>
CC-MAIN-2016-50
http://www.childrenshospital.org/conditions-and-treatments/conditions/trisomies-and-monosomies
s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698542972.9/warc/CC-MAIN-20161202170902-00086-ip-10-31-129-80.ec2.internal.warc.gz
en
0.940616
458
2.625
3
Eyam, famous as the Plague village, is in a beautiful setting, 800 feet above sea level, lying in the heart of the Derbyshire Peak District. Eyam owes its location to the ready availabity of water. The rain water that accumulates in the hills to the north of the village, issues out of a series of springs down the one mile length of the village. In 1588, 12 sets of stone troughs were built at convienient places in Eyam and the water was conducted to the troughs by pipes, thus making Eyam one of the first villages in the country to have a public water system. In 1665 the Plague was raging in London. A taylor from Eyam by the name of George Viccars ordered some cloth from the capital and it arrived damp and had to be laid out to dry. This released the plague carrying fleas and within days, Viccars fell ill and died. Several of his neighbours also died and some families began to panic and fled the area. William Mompesson, the rector, supported by Thomas Stanley, a former incumbent, feared that this would spread the disease over a wider area and asked villagers to quarantine themselves. Food and medical supplies were left at various points on the village boundary. Eyam church was closed and services were held in Cucklett Delf, a valley nearby where a Plague Commemorative Service is still held annually. There were no funerals and families buried their own dead near their homes. At nearby Riley a Mrs Hancock buried her husband and 6 children in a space of 8 days. The Riley graves, as they are known, are still there. The Plague ended in October 1666 and had claimed 260 lives in an 18 month period. Some of the cottages now carry a commemorative plaque. An authentic history of those fearful months is vividly told in the two floors of Eyam Museum which can be found near the coach park. The museum also looks at other aspects of village life in Eyam. Eyam has been involved with various industries over the centuries, including lead mining, limestone quarrying, agriculture, and silk and cotton production. These have all brought some prosperity to the village though sometimes at a cost of human misery. Eyam Hall has been the home of thw Wright family for over 300 years. It is a wonderful 17th century Manor House which contains an impressive hall and a tapestry room. Here thre is also a cafe and gift shop as well as the Eyam Hall Crafts Centre which is housed in the farm buildings. The centre has a number of specialized craft units. The Parish Church of St Lawerence partly dates from the 12th century and probably stands on a Saxon foundation. The North Aisle was doubled in width in 1868. The South Aisle was enlarged and a porch added in 1882. Various other restoration work was carried out in the 19th century. The church contains a chair that was used by the Rev William Mompesson, a Jacobean pulpit, a Plague register, a saxon font and a fine set of 6 bells, the oldest of which dates back to 1628. The churchyard contains various interesting tombstones and monuments including ones to Thomas Stanley and Catherine Mompesson, the wife of William. She had stayed in the village with her husband and died of the plague in its later stages. It also contains a magnificent Celtic Cross, one of the finest in the country and probably a wayside preaching cross from the 8th century. Eyam is a vibrant, active community. Some events that take place are Well Dressing, a Plague Commemorative Service, annual carnival and sheep roast and an annual village show. Oher places of interest nearby Find local accommodation at Derbyshire and Peak District Accommodation and Peak District Accommodation For Peak District information try Peak District National Park More photographs of Eyam including Mompesson Well at Eyam Photographs
<urn:uuid:2fadfdbf-5f83-4eb5-b358-30c5425c897a>
CC-MAIN-2016-50
http://www.derbyshireuk.net/eyam.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698542972.9/warc/CC-MAIN-20161202170902-00086-ip-10-31-129-80.ec2.internal.warc.gz
en
0.982477
815
3.046875
3
Maintaining a healthy weight 'reduces breast cancer risk' 1st September 2009 Women can reduce their breast cancer risk by maintaining a healthy body weight and cutting back on alcohol, new research has shown. A review of research commissioned by the World Cancer Research Fund (WCRF) and carried out by researchers at Imperial College London showed that maintaining a healthy weight, physical activity and breastfeeding children can all contribute to lowering risk. The study, which is not set to be published until later this year, was based on a review of 954 research projects. Professor Martin Wiseman, medical and scientific adviser for the WCRF, explained that the new data provides 'the clearest picture we have ever had on how lifestyle affects a woman's risk'. He added: 'We estimate over 40 per cent of breast cancer cases in the UK could be prevented just by making these relatively straightforward changes.' Last month the WCRF urged parents not to put processed meat into their children's sandwiches, saying it can increase the risk of developing cancer later in life.
<urn:uuid:1d6dc950-37be-4d4c-bc69-aa89c6e32472>
CC-MAIN-2016-50
http://www.foyles.co.uk/news/maintaining-a-healthy-weight-reduces-breast-cancer-risk
s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698542972.9/warc/CC-MAIN-20161202170902-00086-ip-10-31-129-80.ec2.internal.warc.gz
en
0.963082
217
2.921875
3
Did you know that Hitler took cocaine? That Stalin robbed a bank? That Charlie Chaplin's corpse was filched and held to ransom? Giles Milton is a master of historical narrative: in his characteristically engaging prose, Fascinating Footnotes From History details one hundred of the quirkiest historical nuggets; eye-stretching stories that read like fiction but are one hundred per cent fact. There is Hiroo Onoda, the lone Japanese soldier still fighting the Second World War in 1974; Agatha Christie, who mysteriously disappeared for eleven days in 1926; and Werner Franz, a cabin boy on the Hindenburg who lived to tell the tale when it was engulfed in flames in 1937. Fascinating Footnotes From History also answers who ate the last dodo, who really killed Rasputin and why Sergeant Stubby had four legs. Peopled with a gallery of spies, cannibals, adventurers and slaves, and spanning twenty centuries and six continents, Giles Milton's impeccably researched footnotes shed light on the most infamous stories and most flamboyant characters (and animals) from history. 'Compact, engaging narratives... who needs fiction when you can unearth fabulous true tales like these?' Washington Post In America, the footnotes are published in two volumes: When Hitler Took Cocaine and Lenin Lost His Brain and When Churchill Slaughtered Sheep and Stalin Robbed a Bank. 'Entertaining to a fault and well researched', says Paste Magazine, 'the book reads like a champagne cyclone, extra brut and deliriously fast, brimming with strangeness and just enough pedagogy to be educational yet still entertaining'.
<urn:uuid:e9d0d95c-ba4c-4d15-9de7-f6f8f8b7659f>
CC-MAIN-2016-50
http://www.gilesmilton.com/just-published
s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698542972.9/warc/CC-MAIN-20161202170902-00086-ip-10-31-129-80.ec2.internal.warc.gz
en
0.922209
339
2.515625
3
The rate equation is a general formula that relates the rate of a chemical reaction to the concentration of the reacting species. Each individual reaction has its own rate equation that must be determined by experiment. Rate = k[A]x[B]y Relationship between rate and concentration It may be clearly seen that there is some unspecified relationship between the rate of a chemical reaction and the concentration of the reactants. As a reaction progresses the concentrations of the reactants decreases and so does the rate. Look at the typical graphs below: reactant concentration against time reaction rate against time The question is, exactly what is the relationship between a specific reactant concentration and the rate? This relationship can only be established by actually performing experiments to obtain data about the rate of the reaction when the concentrations of the reactants are changed (at constant temperature). General relationships between variables All relationships between two variables can be represented by an equation. This would be the equation for the graph obtained showing how the two factors vary with one another. There are only three logical possibilities for a relationship between two variables, for example A and B. - 1 No relationship at all - Change in A does not affect B - 2 A directly proportional relationship - Change in A causes a corresponding change in B - 3 A relationship that is not directly proportional - Change in A causes a non-linear change in B dependency of rate on concentration All of these possible relationships can be covered by the simple mathematical formula A = Bx Possibility 1: No relationship at all In this case the value of x is zero and the equation becomes A = B0 Any number raised to the power of zero = 1 so the equation reduces to A = 1, in other words there is no change in the value of B when A changes. Possibility 2: Direct proportionality In this case the value of x = 1 and the equation reduces to A = B1 Any number raised to the power of 1 is that number itself (eg: 21 = 2, 31 = 3, etc). In other words when A changes this causes a change by exactly the same factor in B. For example, if the value of A doubles then the value of B must double also. Possibility 3: Some other relationship In this case the value for x is some other number apart from 0 or 1. The value of x depends on the manner in which B changes with change in A. For the time being let's leave it as the unknown value 'x'. Possible relationships between A and B The rate expression Having established that the relationship between rate and concentration can be expressed mathematically, let's consider a reaction in which two hypothetical reactants A and B react together to produce a product 'C'. |A + B C| The rate of the reaction is dependent on (proportional to) the concentration of A (expressed using square brackets [A]) raised to some unknown power 'x', but it is also dependent on reactant B concentration, [B] raise to a different power 'y' |Reaction rate = constant1 x [A]x and Reaction rate = constant2 x [B]y| Combining these equations this becomes: |Reaction rate = constant (combined from constants 1 and 2) x [A]x x [B]y| This is known as the rate equation: Where: k is the rate constant, x and y are the orders of the reaction with respect to the concentrations of A and B respectively. Solving the rate equation The rate equation can only be solved through experiment. For a two component reaction, A + B, the procedure is as follows: 1. A series of experiments are performed, keeping the reactant concentration, A, constant, but changing the concentration of B. The value of [A]x must therefore be constant throughout the experiments and can be combined with the rate constant, k to give the equation: |Rate = constant x [B]y| 1. The effect of the change in concentration of B can now be seen on the rate. For example, if the concentration of B is doubled and the rate also doubles, then this means that the value of y = 1; If there is no effect on the rate when [B] is changed, then y = 0; if the rate increases by a factor of 4 when the concentration of B doubles, then the value of y = 1. 3. More experiments are now carried out keeping [B] constant while varying [A]. The value of [B]y is now constant and the rate equation becomes: |Rate = constant x [A]x| 4. The effect of the change in concentration of A can now be seen on the rate. For example, if the concentration of A is doubled and the rate also doubles, then this means that the value of x = 1; If there is no effect on the rate when [A] is changed, then x = 0; if the rate increases by a factor of 4 when the concentration of A doubles, then the value of x = 1. 5. Now that values for x and y, the orders of the reaction, have been found, they can be used in any of the experiments to find the value of k, the rate constant, from any of the experimental data. |Rate = k [A]x[B]y| Example: For the reaction, X(g) + Y(g) Z(g) the following kinetic data was obtained: Calculate the initial rate of the reaction in Exp. 4. Inspection of Experiments 1 and 2 show that the concentration of Y remains constant therefore any effect on the rate is due to change in [X]. BUT as can be seen, when [X] doubles the rate stays the same. This means that change in [X] has no effect on the rate. Therefore: Inspection of Experiments 1 and 3 show that the concentration of X remains constant therefore any effect on the rate is due to change in [Y]. AND as can be seen, when [Y] doubles the rate also doubles. This means that change in [Y] causes the same change in the rate. Therefore: Combining these two equations gives: As [X] does not affect the rate, we need only look at the change in [Y]. From experiments 3 to 4, [Y] doubles therefore the rate should double as well. Therefore new rate in experiment 4 = 1.48 x 10-2 mol/min
<urn:uuid:d46b081e-6085-4b90-989f-23dae0231e65>
CC-MAIN-2016-50
http://www.ibchem.com/IB16/06.31.htm
s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698542972.9/warc/CC-MAIN-20161202170902-00086-ip-10-31-129-80.ec2.internal.warc.gz
en
0.934849
1,362
4.03125
4
Moscow (Dec. 9) Liev Kliatchko, Russian Jewish journalist, and for thirty years before the War known as the “king of reporters of the Russian press”, died here today. He was sixty years old. His revelations, particularly when he was the diplomatic correspondent of the newspaper, Recht, organ of the leader of the Kadets (Constitutional Democrats),. Professor Paul Miliukov, frequently resulted in the resignation of cabinet ministers. For twenty-five years the Czarist secret police watched his every step, repeatedly attempting to expel him from Petrograd, on the ground that he was a Jew. Under the Czarist regime, Jews were forbidden to live in large cities, unless they had special permission. But Kliatchko, on the strength of his influential connections, always ignored the expulsion orders. He even ignored the order of expulsion issued by the commander of the Petrograd military district in 1916. After the Revolution and the subsequent seizure of power by the Bolsheviki, Kliatchko published two volumes of reminiscences and of episodes in Czarist history which had never been revealed. Included in the volumes were stories of various anti-Jewish incidents in the time of the Czar. When the Bolshevik government proclaimed the NEP, the new economic policy, which removed many of the restrictions previously imposed on all trade, Kliatchko became a publisher and conducted a successful business. Later the Russian government once more tightened its hold on Russian industry, liquidating private trade. Kliatchko died in great need after a prolonged illness.
<urn:uuid:5f596683-3d4e-46b2-b261-8013e3a49a7c>
CC-MAIN-2016-50
http://www.jta.org/1933/12/11/archive/kliatchko-russian-king-of-reporters-dies-a-pauper
s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698542972.9/warc/CC-MAIN-20161202170902-00086-ip-10-31-129-80.ec2.internal.warc.gz
en
0.978722
328
2.53125
3
Why People Get Sick By Dr. Deepak Chopra, MD (Courtesy of Dr. Deepak Chopra and Intentblog.com) Most people assume that germs and genes cause disease. The germ theory has brought us a long way, and genetic theory promises to take us even further. But there is still a mystery surrounding why certain people get sick while others don't. For example, studies show that if cold virus is placed directly into a person's nose, the chance of getting a cold is about 1 in 8; being exposed to chill, damp, or a draft doesn't increase these odds. Also, when the Black Death wiped out a third of Europe's population in the 14th century, no one knows why the other two-thirds, who were certainly exposed, didn't die. (Photo: Jeremiah Sullivan) Every day each of us inhales or ingests enough germs to cause a variety of diseases we never contract. Some sort of "control by the host" seems to be at work. This refers to the body's ability to live with disease-causing agents without getting sick. Germs aren't the only factor. Statistics show that severely ill people often wait until a significant date has passed, such as Christmas or their birthday, before suddenly dying. Studies going back to the Korean War showed that young soldiers in their early twenties had serious blockage of their coronary arteries, yet the disease doesn't show up until middle age. Not everyone exposed to HIV contracts the virus, and in a few rare instances, those with AIDS have reversed their viral status form positive t negative. Why, then, would you or I get sick when someone else equally at risk doesn't? The best way to get sick is to suffer from as many of the following conditions as possible: --Unsanitary conditions: massive exposure to germs remains a major factor --Being poor: poverty degrades life on all fronts, including health. --High stress: physical and psychological stress damage the immune system. --Depression and anxiety: untreated psychological disorders weaken resistance to a wide range of diseases, perhaps even cancer --Lack of coping mechanisms: stress by itself is a negative factor, but the inability to bounce back form it is more important. --Lack of control, victimization: all stresses become much worse if you feel that you have no control over your own life. --Inertia, sedentary lifestyle: if you are inactive and have no outside interests, you chance of getting sick rises sharply --Feeling alone and unloved: emotional deprivation is as unhealthy as deprivation of good food. --Sudden loss: the sudden loss of a job or spouse, a reversal in finances, or finding yourself in the midst of a war or natural disaster all constitute a state of loss and lead to higher risk of getting sick. --Growing old: once considered a major cause of illness, aging is now known not to be a direct cause. Being healthy into your eighties should be your expectation, but if you neglect yourself in old age, the body becomes vastly more susceptible to disease. None of these factors comes as a huge surprise, since public health officials have drummed into us that most illness in modern society is a "lifestyle disease" born of stress, lack of exercise, and other factors external to germs. But I think most people still assume that being fat, for example, is worse for you than stress, which certainly isn't the case. Outside of diabetes and joint problems, it's hard to find a serious link between moderate overweight and any disorder, while stress and its offshoots are major risks. they exaggerate the effect of aging. Yet in the absence of high blood pressure and artery disease, most people will live a very long time, probably in good health until they contract their final illness. (I've covered a dozen other common beliefs, both true and false, in earlier posts recently.) But the mystery of who specifically gets sick remains unsolved, in part because there are subtle factors that few experts have adequately examined. --Some people get sick because they expect to. --Some people get sick, or sicker, after they are diagnosed with a disease. --Disease brings certain benefits, known as "secondary gain," that make it positive. The classic example is a child who pretends to be sick in order to get more love and attention, but adults find secondary gains of their own, such as not having to take responsibility for their lives or finding an escape from a situation they can't cope with. --Some people get sick because they want to give up, or even die. --Some people have nothing better to do than to get sick. Time Magazine heralded Deepak Chopra as one of the 100 heroes and icons of the century, and credited him as "the poet-prophet of alternative medicine." Entertainment Weekly described Deepak Chopra as "Hollywood�s man of the moment, one of publishing�s best-selling and most prolific self-help authors." He is the author of more than 40 books and more than 100 audio, video and CD-Rom titles. He has been published on every continent, and in dozens of languages and his worldwide book sales exceed twenty million copies. Over a dozen of his books have landed on the New York Times Best-seller list. Toastmaster International recognized him as one of the top five outstanding speakers in the world. Through his over two decades of work since leaving his medical practice, Deepak continues to revolutionize common wisdom about the crucial connection between body, mind, spirit, and healing. His mission of "bridging the technological miracles of the west with the wisdom of the east" remains his thrust and provides the basis for his recognition as one of India�s historically greatest ambassadors to the west. Chopra has been a keynote speaker at several academic institutions including Harvard Medical School, Harvard Business School, Harvard Divinity School, Kellogg School of Management, Stanford Business School and Wharton. Disclaimer: The views and opinions expressed in these columns are solely those of the writers and do not necessarily represent those of the editor/publisher. All Material © Copyright Kavita Chhibber and respective authors. E-mail this article
<urn:uuid:095df172-a6f5-40eb-8a55-0c65085417be>
CC-MAIN-2016-50
http://www.kavitachhibber.com/main/main.jsp?id=health-Apr2006
s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698542972.9/warc/CC-MAIN-20161202170902-00086-ip-10-31-129-80.ec2.internal.warc.gz
en
0.966726
1,283
3.21875
3
How to: Speaker sensitivity Speaker sensitivity—many times erroneously referred to as speaker efficiency—is used to determine the amount of power necessary to drive or operate a loudspeaker. It is a measurement of the amount of sound output derived from a speaker with one watt of power input from an amplifier. Sensitivity is usually measured with a microphone connected to a sound level meter placed one meter in front of the speaker. The resultant number is expressed in dB. Most speakers are actually very inefficient; only about 1% of the electrical energy sent by an amplifier to a typical home loudspeaker is converted to acoustic energy. The remainder is converted to heat, mostly in the voice coil and magnet assembly. The main reason for this is the difficulty of achieving proper impedance matching between the acoustic impedance of the drive unit and that of the air into which it is radiating. The efficiency of loudspeaker drivers varies with frequency as well. For instance, the output of a woofer driver decreases as the input frequency decreases. Horn loaded speakers—such as Klipsch products—can have a sensitivity approaching 110 dB at 2.83 volts (1 watt at 8 ohms) at 1 meter. This is a hundred times the output of a loudspeaker rated at 90 dB sensitivity, which would be excellent for a traditional radiating cone type. The key advantage to efficient speakers is that they require less power to drive them; they also generate less heat and generally can boast a longer component life.
<urn:uuid:7f202709-805c-4875-9cd5-d3ecc6e8e1f8>
CC-MAIN-2016-50
http://www.klipsch.com/education/speaker-sensitivity
s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698542972.9/warc/CC-MAIN-20161202170902-00086-ip-10-31-129-80.ec2.internal.warc.gz
en
0.948241
299
3.703125
4
21 апреля 2009 | Автор: Admin | Рубрика: Художественные книги » Мемуары. Биографии | Комментариев: 0 Helen Meller, "Patrick Geddes: Social Evolutionist and City Planner" Publisher: Routledge | 1994 | ISBN 0415103932 | PDF | 384 pages | 6.74 MB This original study examines the influences and ideas of Patrick Geddes, one of the most original thinkers of his time. Geddes pioneered a sociological approach to the study of urbanization, and focused on the intimate link between spatial form and social processes. His work and passion have become an inspiration to all those who seek to understand the city and modern life. Patrick Geddes reveals the perspectives which Geddes created on modern life, education and city development. It shows that his ideas remain stimulating and pertinent to contemporary concerns on these issues. To start download click HERE: No another mirrors, please! >>>
<urn:uuid:0bb9e266-de66-456e-b521-f804267723a0>
CC-MAIN-2016-50
http://www.knigka.su/hud_lit/memuar/93671-Patrick_Geddes_Social_Evolutionist_and_City_Planner.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698542972.9/warc/CC-MAIN-20161202170902-00086-ip-10-31-129-80.ec2.internal.warc.gz
en
0.759225
259
2.609375
3
Dr. Karen G. Cloninger Next week, February 12-18, is National Heart Failure Awareness Week so this article will focus on information about heart failure that you hopefully will find helpful. Heart failure is a common condition that usually develops slowly as the heart becomes stiff or weak such that it has to work harder to fill with blood or pump blood through the body. Heart failure affects nearly 5 million people in the United States with over 400,000 new cases diagnosed each year. Heart failure does NOT mean that the heart has stopped or is about to stop and is not the same as a heart attack. However some people develop heart failure after their heart has been damaged by a heart attack. Other causes of heart failure include high blood pressure, diabetes, heart valve problems, infection of the heart muscle, drinking too much alcohol for a long period of time, and sometimes no specific cause can be identified. A recent study showed that there are four lifestyle practices that can lower your risk of developing heart failure. Maintaining a normal body weight (body mass index — BMI — of less than 25), consuming vegetables at least three times a week, abstaining from smoking and moderate to high physical activity can significantly lower the risk of heart failure. Symptoms of heart failure include shortness of breath, trouble breathing when lying down, swelling of feet or ankles, and fatigue. If you have any of these symptoms, only a physician can determine whether you have heart failure or some other reason to explain your symptoms. There are a number of tests that help determine whether someone has heart failure and if so, the reason for it. Chest x-rays and blood tests can be a starting point. One of the most important tests to evaluate the heart’s function is an echocardiogram. This test can measure the ejection fraction or EF, which is a measure of the pumping function of the heart. The EF of a healthy heart is 50 percent or greater. If you have been diagnosed with heart failure, there are things that you can do to help keep your heart failure in the best control possible which will help reduce your symptoms and reduce your need for hospitalization. Limiting your intake of salt and sodium can help decrease fluid retention which can worsen heart failure. It is important that you weigh daily and notify your health care provider if you gain more than 2 pounds in one day or 4 pounds in a week. This could be a sign that you are retaining fluid. It is important to exercise at levels recommended by your physician to help you stay as fit as possible. There are combinations of medications that have been proven to improve symptoms, keep heart failure from getting worse and prolong your life. It is extremely important to take these as prescribed and,if you think you are having side effects, notify your physician immediately. For the past year, CMC Lincoln has offered a heart failure class once a month. These are designed especially for individuals with heart failure and their families. They are FREE and everyone is welcome. The classes are held at CMC Lincoln Medical Office Building in the Oak Classroom on the first floor at 6PM. The February 21 class is CPR (cardiopulmonary resuscitation) for Family and Friends. This is not a certification class but will review Basic CPR. The March 20 class will discuss nutrition and heart failure. If you have questions regarding the classes or wish to register, please contact CMC Lincoln Education Department at 980-212-6055. There is also a wonderful website, www.abouthf.com, that has a set education modules designed for patients with heart failure. Dr. Karen G. Cloninger works with the Sanger Heart and Vascular Institute at Carolinas Medical Center-Lincolnton.
<urn:uuid:82b63d17-fb87-4117-946c-540188894a20>
CC-MAIN-2016-50
http://www.lincolntimesnews.com/2012/02/10/being-aware-of-heart-failure-causes-and-symptoms/
s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698542972.9/warc/CC-MAIN-20161202170902-00086-ip-10-31-129-80.ec2.internal.warc.gz
en
0.945142
787
3.359375
3
In a healthy adult, each of the 20 to 30 trillion red blood cells in the bloodstream contains more than 250 million molecules of hemoglobin. This vital, iron-containing protein transports oxygen to the body tissues. An abnormally low hemoglobin level results in anemia. Typical symptoms include fatigue and shortness of breath. If the anemia is mild or develops over a long period, noticeable symptoms may be mild or absent. But sudden or severe anemia can lead to heart failure and could be fatal. A low hemoglobin can result from decreased production or increased destruction of red blood cells, bone marrow problems and blood loss, among others. Decreased production of red blood cells (RBCs) or the hemoglobin molecule itself, or both commonly leads to a low hemoglobin (HGB) level. Nutritional deficiencies are most frequently to blame, with iron deficiency the most likely culprit. Because iron is an essential component of the HGB molecule, insufficient intake leads to inadequate HGB and RBC production. A deficiency of vitamin B12, vitamin B6 and folate, or a combination of these nutritional deficiencies, also commonly leads to reduced HGB and RBC production. HGB and RBC production occurs in the bone marrow. Therefore, bone marrow disorders can lead to a low hemoglobin. Aplastic anemia, for example, shuts down the bone marrow. This condition can occur due to radiation exposure, treatment with certain chemotherapy drugs and certain viral infections. In some cases, the cause of aplastic anemia is unknown. Other conditions that affect the bone marrow’s ability to produce RBCs include leukemia, lymphoma, multiple myeloma and other blood cancers. Certain hormonal disorders -- including reduced function of the thyroid, adrenal or pituitary gland -- and Inherited bone marrow disorders, such as Fanconi anemia and Diamond-Blackfan anemia, are can also cause reduced HGB related to decreased RBC production. Increased RBC Destruction RBCs typically circulate in the blood stream for approximately 3 month before being removed and replaced. Increased or premature RBC destruction can lead to decreased HGB and anemia if the bone marrow cannot compensate for the accelerated rate of loss. Hereditary hemoglobin disorders, such as sickle cell disease and thalassemia, are examples of this mechanism of anemia. With these conditions, the HGB molecules are abnormal, which causes the RBCs to be misshapen or inflexible. These abnormalities target them for early destruction. Other RBC abnormalities unrelated to the structure of the HGB molecule can also lead increased destruction. Examples include malaria, a parasitic infection of the RBCs, and hereditary spherocytosis, a genetic condition that affects the shape of the RBCs. Autoimmune hemolytic anemia is another condition that leads to reduced HGB due to increased RBC destruction. With this condition, the immune system tags its own RBCs for destruction due to mistakenly identifying them as foreign. Reduced RBC survival might occur in people with a mechanical heart valve, which can damage the RBCs as they flow through the heart. People with a grossly enlarged spleen, which can occur with cirrhosis and other disorders, often experience a low HGB due to early removal of the RBCs from the circulation. Anemia of Chronic Disease Long-term medical conditions can adversely affect the interplay of the bone marrow and the rest of the body, leading to a condition called anemia of chronic disease. This condition is the second most common form of anemia worldwide, and occurs due to a combination of decreased RBC production and, in some cases, reduced RBC survival. Anemia of chronic disease most commonly occurs in people with chronic infections, such as hepatitis C and HIV/AIDS, or a long-term inflammatory condition, such as rheumatoid arthritis and systemic lupus erythematosus. However, the condition also occurs with noninfectious, noninflammatory chronic conditions, including cancer, diabetes, chronic kidney disease, heart failure, chronic obstructive pulmonary disease (COPD) and alcoholic liver disease. Several mechanisms, alone or in combination, can contribute to anemia of chronic disease, including: -- decreased production of the hormone erythropoietin (EPO), which stimulates the bone marrow to produce RBCs -- blunted bone marrow response to EPO -- sequestration of iron in immune system cells, making the mineral unavailable for HGB and RBC production -- reduced RBC survival, for unknown causes Anemia Due to Blood Loss A low HGB can be due to sudden or slow, chronic blood loss. Sudden, massive blood loss typically occurs due to traumatic injury or a medical condition, such as a ruptured blood vessel or tubal pregnancy. Severe hemorrhaging can quickly lead to life-threatening anemia and shock that requires prompt blood transfusions and fluid replacement. Chronic blood loss is more common and can occur with a variety of medical conditions. Examples include heavy menstrual periods, internal hemorrhoids and diverticulitis. People with chronic blood loss typically develop an iron deficiency, but this is due to iron loss through bleeding rather than a nutritional deficiency. Reviewed and revised by: Tina M. St. John, M.D.
<urn:uuid:8901b1f6-77ec-4157-859a-419e8964cbbe>
CC-MAIN-2016-50
http://www.livestrong.com/article/125674-reasons-low-hemoglobin-count/
s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698542972.9/warc/CC-MAIN-20161202170902-00086-ip-10-31-129-80.ec2.internal.warc.gz
en
0.924131
1,089
3.59375
4
|The unit of imaginary numbers and is generally designated by the letter i Many laws which are true for real numbers are true for imaginary numbers as well. Thus Matlab recognizes the letters i and j as the imaginary number . A complex number 3 + 10i may be input as 3 + 10i or 3 + 10*i in Matlab (make sure not to use i as a variable). In the complex number + bi, a is called the real part (in Matlab, real(3+5i) = 3) and b is the coefficient of the imaginary part (in Matlab, imag(4-9i) = -9). When a = 0, the number is called a pure imaginary. If b = 0, the number is only the real number a. Thus, complex numbers include all real numbers and all pure imaginary of a complex a + bi is a - bi. In Matlab, conj(2 - 8i) = 2 + 8i. To add (or subtract) two numbers, add (or subtract) the real parts and the parts separately. For example: (a+bi) + (c-di) = (a+c)+(b-d)i In Matlab, it's very easy to do it: >> a = 3-5i 3.0000 - 5.0000i >> b = -9+3i -9.0000 + 3.0000i >> a + b -6.0000 - 2.0000i >> a - b 12.0000 - 8.0000i two numbers, treat them as ordinary binomials and by -1. To divide two complex nrs., multiply the numerator and denominator of the fraction by the conjugate of the denominator, replacing again i2 Don't worry, in Matlab it's still very easy (assuming same a and b as above): -0.4667 + 0.4000i coordinate axes, the complex nr. a+bi is represented by the point whose coordinates are (a,b). We can plot complex nrs. this easy way. To plot numbers (3+2i), can write the following code in Matlab. Plotting Complex Numbers: % Enter each coordinate (x,y) separated by commas % Each point is marked by a blue circle ('bo') plot(3,2,'bo', -2,5,'bo', -1,-1,'bo') % You can define the limits of the plot [xmin xmax ymin ymax] axis([-3 4 -2 6]) % Add some labels to explain the plot xlabel('x (real axis)') ylabel('y (imaginary axis)') And you get: Polar Form of Complex Nrs. In the figure below, . Then x + yi is the rectangular form is the polar form same complex nr. The distance positive and is called the absolute value or modulus of the complex number. The angle is called the argument or amplitude of the In Matlab, we can effortlessly know the modulus and angle (in radians) of any number, by using the 'abs' and 'angle' instructions. a = 3-4i magnitude = abs(a) ang = angle(a) 3.0000 - 4.0000i De Moivre's Theorem power of is This relation is known as the De Moivre's theorem and is true for any real value of the exponent. If the exponent is 1/n, It's a good idea if you make up some exercises to test the validity of Roots of Complex Numbers in Polar Form If k is any integer, . Any number (real or complex) has n distinct nth roots, except zero. To obtain the n nth roots of the complex number x + yi, or, let take on the successive values 0, 1, 2, ..., n-1 in the above formula. Again, it's a good idea if you create some exercises in Matlab to test the validity of this affirmation. Working with complex or imaginary numbers is easy with Matlab. 'Complex Numbers' to Numbers' to 'Matlab Examples'
<urn:uuid:fefc681f-7b92-40f9-88b4-41436c342f78>
CC-MAIN-2016-50
http://www.matrixlab-examples.com/complex-numbers.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698542972.9/warc/CC-MAIN-20161202170902-00086-ip-10-31-129-80.ec2.internal.warc.gz
en
0.763054
957
4.34375
4
Tax Reform's Third Rail: Mortgage Interest Table of Contents Tax Rates, Interest Rates and Housing Prices If the deduction increases the value of housing and eliminating it would reduce that value, these effects should have appeared in the past whenever tax rate changes increased or decreased the value of the deduction. "Historically, declining tax rates have been good for homeowners." Of course the value of the mortgage interest deduction falls when tax rates fall. A $10,000 interest deduction that saves a taxpayer $3,000 per year if he or she is in the 30 percent tax bracket saves only $2,000 if the tax rate falls to 20 percent. Yet historically, declining tax rates have been good for homeowners. Tax rates and housing prices. During the 1970s, the inflation that pushed taxpayers into higher tax brackets increased the value of the mortgage interest deduction to homeowners. Thus it should have caused housing prices to rise. By the same logic, the sharp reduction in tax rates during the 1980s should have caused housing prices to fall.3 In both cases, the reverse happened. Figure I looks at the inflation-adjusted median price for new homes and the average marginal tax rate. It shows that rising marginal tax rates did not raise the real price of new homes. The real median new home price was actually less in 1982 than it was in 1973. By contrast, housing prices shot up when the Reagan tax cut became fully effective in 1983 and continued to rise even after the 1986 tax reform dropped tax rates further. Interest rates and housing prices. One can argue that housing prices were flat in the 1970s and rose in the 1980s despite the tax changes because changes in interest rates overwhelmed the tax effects. The home mortgage interest rate rose from an average of 7.96 percent in 1973 to 15.14 percent in 1982. Rates fell thereafter, bottomed in 1986 at 6.39 percent and rose to 8.8 percent by 1989. Thus much of the rise in home prices in the 1980s took place while interest rates were rising. Nevertheless, the interest rate is an important factor in setting prices. The lower the interest rate, the more house one can afford on the same income. However, market interest rates are to a large extent set by tax rates. The higher the tax rate, the higher the interest rate must be for lenders to get the same aftertax return.
<urn:uuid:80456305-c7e0-465d-8397-ac6da96ecee7>
CC-MAIN-2016-50
http://www.ncpa.org/pub/bg139?pg=2
s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698542972.9/warc/CC-MAIN-20161202170902-00086-ip-10-31-129-80.ec2.internal.warc.gz
en
0.971105
481
2.640625
3
Photo copyright Tim Nicholson. Taken on Big Brother in the Egyptian Red Sea. Napoleon Wrasse (also known as Maori or Humphead Wrasse) Cheilinus undulatus Back to the Brothers Gallery... The Napoleon Wrasse is found throughout the warm waters of the Red Sea, the Indian and Pacific Oceans. An extremely large fish, it grows to over 2 m and weighs up to 191 kg (420 lb or 30 stone) Napoleon Wrasse are particularly vulnerable to fishing, as they grow slowly, mature late and are uncommon. They are traded on the live reef food fish market, which serves luxury restaurants in, amongst others, Hong Kong, China, and Singapore. There is evidence of decline throughout its range, but particularly in Southeast Asia. Historical information shows Cheilinus undulatus was common in the 1950s and 1960s, and that declines have coincided with increased fishing activity. The wrasse spawns in aggregations that can easily be targeted by fishers and hence are particularly vulnerable to overfishing at the times and places at which reproduction occurs. It has been well-documented that spawning aggregations in several other reef fish species are particularly vulnerable to being overfished. To compound its problems, the species changes sex from female to male, which, if a fishery selects for larger fish, may make it even more vulnerable to over-fishing. It is estimated that less than 1% reach maturity as males. However, the most preferred market size for this fish as food is 'plate-sized' – between about 30-60 cm total length. Plate-sized fish are typically sexually immature since sexual maturity occurs at about 50 cm. This means that large numbers of sexually immature fish are removed from the wild for the live reef food fish trade. If young fish are being removed before they can produce the next generation, how can populations replace themselves and recover from fishing? Humphead Wrasses Awareness Campaign
<urn:uuid:9ebd49cc-c009-4a31-b11b-5a73abdf9f48>
CC-MAIN-2016-50
http://www.scubatravel.co.uk/bro_Humphead_wrasse_26_03a.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698542972.9/warc/CC-MAIN-20161202170902-00086-ip-10-31-129-80.ec2.internal.warc.gz
en
0.956392
403
2.578125
3
|Department of Public Information • News and Media Division • New York| World Environment Day’s Focus on Biodiversity Critical in Spurring Public Conservation Actions, Says Director of UN Forests Forum The focus on biodiversity for tomorrow’s celebration of the 2010 World Environment Day can spur public action to sustain the world’s forests, says Jan McAlpine, Director of the United Nations Forum on Forests Secretariat. “The theme of this year’s World Environment Day,‘Many Species. One Planet. One Future,’ crystalized the approach that the world must take in sustainably managing the world’s forests. More than 1.6 billion people depend on forests for their livelihoods, according to World Bank estimates. Forests cover 31 per cent of the total global land area, or just over 4 billion hectares, according to 2010 data from the Food and Agriculture Organization (FAO). At the same time close to 80 per cent of the world’s terrestrial biodiversity resides in forest habitats. The health of our forests, the health of our environment, and human well-being are thus closely linked. Ms. McAlpine said that international events like World Environment Day, the International Year of Biodiversity 2010, and the International Year of Forests 2011 provide a unique global platform to celebrate actions worldwide and their vital role as agents for change in realizing the vision of a greener, more equitable, sustainable future. “Global recognition of the role of forests is growing,” she said. “There is greater awareness of the benefits forests provide in stabilizing climate change, protecting biodiversity and in the livelihoods of billions.” She went on to say that forests were an integral part of human life. “It is our responsibility to take action for forests, for people, and for all forest-dependant species. In this regard, I would like to commend the Convention on Biological Diversity and ConventiontoCombatDesertification, and the United Nations Environment Programme (UNEP) for their valuable work in working together as one [United Nations] to help accomplish the objective of this year's World Environment Day.” About the United Nations Forum on Forests The Forum on Forests was established in 2000 with the main objective of promoting the management, conservation and sustainable development of all types of forests, and strengthening long-term political commitment to that end. The Forum is the only functional commission of the Economic and Social Council with universal membership of all 192 Member States of the United Nations. In 2007, the Forum adopted the landmark Non-Legally Binding Agreement on All Types of Forests, which provides a platform for international cooperation and national action to reduce deforestation, prevent forest degradation, promote sustainable livelihoods and reduce poverty for all forest-dependent peoples. Substantive support for the Forum’s deliberations is provided by its Secretariat, which also serves as the United Nations focal point on all forest policy issues. The Forum Secretariat has been mandated by the General Assembly as the focal point for implementation of the International Year of Forests, 2011 (IYF 2011), in collaboration with Governments, the Collaborative Partnership on Forests, and other forest-relevant organizations around the world. The Forum’s Secretariat is located in United Nations Headquarters in New York. For more information, please contact Dan Shepard, United Nations Department of Public Information, e-mail: firstname.lastname@example.org; or Mita Sen, United Nations Forum on Forests Secretariat, e-mail: email@example.com. * *** *
<urn:uuid:7c1b9324-d4d8-4eaa-ab70-f954f376dcdd>
CC-MAIN-2016-50
http://www.un.org/press/en/2010/envdev1145.doc.htm
s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698542972.9/warc/CC-MAIN-20161202170902-00086-ip-10-31-129-80.ec2.internal.warc.gz
en
0.911663
737
2.8125
3
Oil slick threatens nesting birds, spawning fish Louisiana: The massive oil spill bearing down on Louisiana`s fragile coast wetlands comes at the worst time for untold numbers of nesting birds and spawning fish whose young are most vulnerable to the toxic sludge. Nearly all migratory birds in the Western Hemisphere stop over in the marshes surrounding the mouth of the Mississippi river and tens of thousands are currently guarding eggs laid along the shores. There are brown pelicans - which were only recently removed from the endangered species list. There are terns, and gulls, and herons, and egrets, ducks and sparrows. If they get coated in oil they can die in a matter of days or even hours. And since they fish close to the nests, they can also carry the oil back to their young. The timing will make it harder to reach and rescue oiled birds because of the risk of trampling the eggs, said Jay Holcomb, director of International Bird Rescue Research Center. "We`ve had times when we`ve had to leave oiled birds because it would kill more birds to get them," said Holcomb, one of a handful of specialists who have set up a triage center in Fort Jackson, Louisiana to clean and treat rescued birds. The rich delta surrounding the mouth of the Mississippi River also provides prime spawning grounds for the fish, shrimp, crab and oysters which support a 2.4 billion dollar a year commercial and recreational fishing industry and supply a large chunk of the nation`s wild catch. And the oil is toxic to larvae. The very topography which makes the bogs, marshes and swamps so appealing to wildlife makes it incredibly difficult to protect from an oil spill. High tides and high winds can push the oil deep into the wetlands, which are accessible only by boat and offer few footholds for rescue workers and plenty of places for the frightened animals to hide. Holcomb spent six months in Alaska treating birds oiled in the Exxon Valdez disaster. About 1,600 were rescued. At least 500,000 died. He`s hoping it won`t be that bad this time. It could, in fact, be much worse. Nobody knows when the oil will stop gushing from a deep water well cracked open after an explosion sank an offshore oil platform run by British Petroleum on April 22. The massive slick has spread to 3,500 square miles (9,000 square kilometers) - about the size of Puerto Rico - and an estimated 210,000 gallons are leaking into the Gulf of Mexico every day. The coasts of Texas, Louisiana, Mississippi, Alabama and Florida are threatened and the first wave of oil is expected to inundate the fragile wetlands south of New Orleans. Miles of boom barriers have been placed to protect three of the most sensitive wildlife refuges which are home to about 34,000 nesting birds. More from India More from World More from Sports More from Entertaiment - There will be earthquake if I speak on demonetisation: Rahul Gandhi - Opposition observes 'black day' to mark one month of demonetisation - Zee Media Exclusive: A reality check on public opinion on notes ban! - Panel discussion on the decision of demonetisation - Watch shocking visuals - Speeding car rams pedestrians in Kolkata, 3 killed - Mongolia seeks India's financial aid, Chinese official media says it's 'insane' - SC asks govt if demonetisation was confidential; to judge if policy unconstitutional - Reliance Jio effect: Airtel offers free unlimited local and STD calls with 4G data at Rs 145 - OMG! Man punches Kangaroo in face to save dog – Watch video - Disruption of Parliament not acceptable, for God's sake, do your job: President Pranab Mukherjee to MPs
<urn:uuid:de69d23c-1a73-4a05-8490-80840da10d34>
CC-MAIN-2016-50
http://zeenews.india.com/news/eco-news/oil-slick-threatens-nesting-birds-spawning-fish_623748.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698542972.9/warc/CC-MAIN-20161202170902-00086-ip-10-31-129-80.ec2.internal.warc.gz
en
0.941765
798
3.046875
3
Earth Day is a day that is intended to inspire awareness and appreciation for the Earth's natural environment. During Earth Day, the world encourages everyone to turn off all unwanted lights. Earth Day was founded by uss Gaylord Nelson as an environmental teach-in first held on April 22, 1970. U Thant, the Secretary-General of United Nations at that time, has recognized it. While the first Earth Day focused on the United States, an organization launched by Denis Hayes, who was the original national coordinator in 1970, took it international in 1990 and organized events in 141 nations. It is celebrated in more than 175 countries. Its name and concept was pioneered by John McConnell in 1969 at UNESCO conference in San Francisco. References[change | change source]
<urn:uuid:9716eef2-7fbf-43df-a0c9-cfb2b79570d4>
CC-MAIN-2016-50
https://simple.wikipedia.org/wiki/Earth_Day
s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698542972.9/warc/CC-MAIN-20161202170902-00086-ip-10-31-129-80.ec2.internal.warc.gz
en
0.964053
152
3.234375
3
Misconception:Three or more women forming a zimun (responsive introduction to Grace after Meals), especially in the presence of one or two men, is the product of late 20th century feminism and has no basis in traditional halachah. When such a zimun is formed, the men present should leave. Fact: Women participating in a meal in which there is a men’s zimun are obligated to participate. If there is no male zimun of ten and there are three or more women, they may form their own zimun. If there is no male zimun at all, many authorities obligate the women to form a zimun. Background: The Talmud records a tanaitic statement (Brachot 45b) that women form a zimun amongst themselves and it (Arachin 3a) includes women in the statement that “all are obligated in zimun.” In order to reconcile these statements with a seemingly contradictory one in Brachot (45a) that says that women may not have a zimun said over them, three approaches were adopted by the commentators. Tosafot views a woman’s participation in any zimun as optional. The Beit Yosef (author of the Shulchan Aruch) and Shulchan Aruch Harav obligate women in zimun when a men’s zimun exists, but regard an independent women’s zimun as optional. The Rosh, Rokeach and Gra not only obligate women when a zimun of men exists but obligate three women who ate together to form a women’s zimun. Since the Shulchan Aruch requires women to participate in an existing men’s zimun, at a meal where there is a zimun of men, the obligation of zimun devolves on all present and the women are equally obligated to respond and may not bentch on their own later. Rav Shmuel Halevi Vozner (Shut Shevet Levi 1:OC:38) post facto defends the converse. He justifies men who leave to go to a rebbe’s table where they then bentch, thus leaving the women at home without a zimun. His justification is based on an otherwise rejected position of Rav Hai Gaon that it is the end of the meal that established the zimun obligation. This would not help in the case where all finish together and the women simply abstain from the zimun. Women’s participation in zimun is thus already recorded in the Talmud. There is no halachic opinion that prohibits women from forming a zimun, and many opinions require it. In addition, if three or more women form a zimun there is no reason why one or two men who are present cannot remain and answer to this zimun. This is according to Rabbi Shlomo Zalman Auerbach as quoted by his nephew in Halichot Beisah. This opinion is also found in Toras HaYoledes by Rav Yitzchok Silberstein [son-in-law of Rav Elyashiv] and Dr. Moshe Rothschild, 1987, translated by S. Ludmir, 1989, page 403. Rabbi Shlomo Pick (Mail-Jewish 18:77) reports that he personally asked Rav Auerbach, who praised the practice, and Rav Elyashiv, who told him to stay and answer. Rabbi Aryeh Frimer (Mail-Jewish 21:59) quotes his brother Dov as having discussed this with Rav Aharon Lichtenstein who agreed, and stated that such was also his father-in-law’s (Rav Yosef Ber Soloveitchik) opinion. The above all hold that the men should answer with the usual response. Rabbi Dovid Cohen (of Gvul Ya’avetz, Brooklyn) and Rav Dovid Feinstein (MTJ) say that the men should answer as “outsiders” by responding “Baruch u’mevorach Shmo tamid le’olam va’ed.” It thus seems that a great deal of halachic firepower supports the men staying and answering. The idea that men must leave is cited without support in Rav Ellionson’s Ha’Isha V’Hamitzvot. The proper introduction for the women’s zimun should probably be “Gevirotai nevarech… Bireshut (imi morati,) gevirotai nevarech.…” Since the men are not part of the zimun, there is no need to get their formal permission. 1. A more complete discussion of this topic can be found in Ari Z. Zivotofsky and Naomi T.S. Zivotofsky, “What’s right with women and zimun,” Judaism, Fall 1993, 42(4)454-464. See also the responsum by Rabbi Yehuda Herzl Henkin in Bnei Banim, Volume 3, number 1 (5758). 2. Some women prefer the more informal “chaverotai” instead of “gevirotai.” Reprinted from JEWISH ACTION Magazine, Fall 5760/1999 issue
<urn:uuid:0508de62-04b2-451b-bfd2-c3e883562570>
CC-MAIN-2016-50
https://www.ou.org/torah/machshava/tzarich-iyun/tzarich_iyun_womens_zimun/
s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698542972.9/warc/CC-MAIN-20161202170902-00086-ip-10-31-129-80.ec2.internal.warc.gz
en
0.939382
1,138
2.921875
3
One way to make an ice sculpture is to fill balloons of various shapes and sizes with water and allow them to freeze after arranging them in creative designs, then once the water is completely frozen, peel away the balloons to reveal the ice sculpture. It is important to include a few grains of sand in each of the balloons before adding the water. The sand acts as a nucleus for the water and ensures that the water freezes throughout.Continue Reading A fun variation on an ice sculpture is to add food coloring to the water before putting the water in the balloon. Food coloring can also be added to the outside of the ice after the balloons are removed to color just the outside instead of completely through the ice. Some more intricate ice sculptures include Christmas lights underneath the ice to create a beautiful effect when the lights are turned on at night. For a smaller ice sculpture, use flexible silicone molds and fill them with water. Place the molds in the freezer. Once the water is frozen, remove the silicone mold. Several molded pieces can be fastened together by sprinkling ice shavings and water between two flat surfaces of ice.Learn more about Fine Art
<urn:uuid:ffcf4e75-45d1-41de-8c30-b46edb8ca5c1>
CC-MAIN-2016-50
https://www.reference.com/art-literature/make-ice-sculpture-37b81a359967c337
s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698542972.9/warc/CC-MAIN-20161202170902-00086-ip-10-31-129-80.ec2.internal.warc.gz
en
0.954401
231
2.875
3
Pile Field Support Prefabricated Plants Floated into Placeby Elmer Isaak, (F.ASCE), Pres.; URS/Madigan Praeger Inc., New York, N.Y., Sidney Johnson, Partner; Morrisey-Johnson Cons. Engrs., New York, N.Y., Serial Information: Civil Engineering—ASCE, 1980, Vol. 50, Issue 1, Pg. 61-63 Document Type: Feature article Abstract: A unique pile support system, combined with earth dikes and superflooding inside an earth drydock to found floating, prefabricated plant structures on the piles, is estimated to have saved some $6,000,000 and about two years time in setting up a $300,000,000, 750-ton per day pulp mill in a remote section of Brazil. A conventional drydock would have been too expensive to build and would have needed an additional 2 years of construction time. An earthfill drydock was used in place of a conventional drydock, and the design employed the barge hulls themselves to distribute loading to the foundation soils through the piling. The key to this approach was a pile support system that would allow for the differential contact between structure and piles, thus possible pile overloading, as the hulls were being lowered. The complementary part of the scheme involved raising the barges some 7 m above their flotation level in the river, positioning them over the pre-driven pile field, then dewatering to lower them into place. Subject Headings: Prefabrication | Pile groups | Docks | Brazil Services: Buy this book/Buy this article Return to search
<urn:uuid:95d8d821-5e6c-46db-994a-12b2d03565d5>
CC-MAIN-2016-50
http://cedb.asce.org/CEDBsearch/record.jsp?dockey=0030344
s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698544678.42/warc/CC-MAIN-20161202170904-00406-ip-10-31-129-80.ec2.internal.warc.gz
en
0.927154
353
2.796875
3