NUCLEAR MAGNETIC RESONANCE SPECTROSCOPY — M. Sudha, M.Pharm. 1
OUTLINE OF PRESENTATION: Introduction
Chemical shift
Factors influencing chemical shift
Factors influencing coupling constant
Spin–spin decoupling
Proton exchange reaction 2
NMR Spectroscopy – Intro: The study of spin changes at the nuclear level when radio-frequency energy is absorbed in the presence of a magnetic field.
Measures the absorption of EM radiation in the radiofrequency region, 4 MHz to 750 MHz (wavelengths 75 m to 0.4 m)
Most commonly done on 1H and 13C.
MRI 3
THEORY & PRINCIPLE: Nuclei of atoms with an odd atomic number or an odd mass number have a nuclear spin, or angular momentum.
The total angular momentum depends on the spin quantum number (I).
Because nuclei are positively charged, their spin induces a magnetic field.
When a magnetic field is applied to atomic nuclei, the magnetic fields of the nuclei align themselves either parallel or anti-parallel to the applied magnetic field. 4
THEORY & PRINCIPLE contnd… 5
Magnetic moments and energy states for a nucleus with a spin quantum number of +1/2. 6
Slide 7: THEORY & PRINCIPLE contnd…
ΔE = γhB₀ / 2π
The energy difference ΔE between the two spin states depends on the strength of the applied magnetic field B₀.
h - Planck's constant (6.6262 × 10⁻²⁷ erg s)
γ - nuclear constant or gyromagnetic ratio, a constant for each nucleus (26,753 rad s⁻¹ gauss⁻¹ for ¹H and 6,728 rad s⁻¹ gauss⁻¹ for ¹³C) 7
PRINCIPLE: When energy in the form of radiofrequency radiation is applied and the applied frequency equals the precessional frequency, absorption of energy occurs and an NMR signal is recorded.
The nuclei are said to be in resonance, and the energy they emit when flipping from the high to the low energy state can be measured. 8
Slide 9: PRINCIPLE OF NMR 9
Slide 10:
ΔE = hν
ν = γB₀ / 2π
ΔE = γhB₀ / 2π
ω = 2πν
ω = γB₀
μ = γhI / 2π
ν - electromagnetic frequency (in the radio-frequency range)
ω - precessional frequency
μ - magnetic dipole moment
I - spin quantum number
ΔE - energy difference
h - Planck's constant
γ - gyromagnetic ratio
B₀ - applied magnetic field 10
Slide 11: THEORY & PRINCIPLE contnd… (E = hν)
RELATIONSHIP BETWEEN APPLIED MAGNETIC FIELD & RADIOFREQUENCY:
1.41 T → 60 MHz
2.35 T → 100 MHz
4.7 T → 200 MHz
7.05 T → 300 MHz 11
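To make the relationship above concrete, here is a short Python sketch (not part of the original presentation; the gyromagnetic ratio value is the one quoted on the earlier slide) that evaluates ν = γB₀/2π for ¹H at the listed field strengths:

```python
# Illustrative sketch only: reproduce the field/frequency table above
# from the resonance condition  nu = gamma * B0 / (2 * pi).
import math

GAMMA_1H = 2.6753e8  # gyromagnetic ratio of 1H, rad s^-1 T^-1 (= 26,753 rad s^-1 gauss^-1)

def larmor_frequency_mhz(b0_tesla):
    """Precessional (resonance) frequency in MHz for an applied field in tesla."""
    return GAMMA_1H * b0_tesla / (2 * math.pi) / 1e6

for b0 in (1.41, 2.35, 4.7, 7.05):
    print(f"B0 = {b0:5.2f} T  ->  {larmor_frequency_mhz(b0):6.1f} MHz")
# prints approximately 60, 100, 200 and 300 MHz, matching the table above
```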
Slide 12: NMR spectra of acrylonitrile at 60, 100 and 220 MHz 12
Slide 13: RELAXATION PROCESSES
SPIN–LATTICE RELAXATION (longitudinal): The components of the lattice field can interact with nuclei in the higher energy state and cause them to lose energy, returning to the lower state.
SPIN–SPIN RELAXATION (transverse): The interaction between neighbouring nuclei with identical precessional frequencies but differing magnetic quantum states; a nucleus in the lower energy level is excited while the excited nucleus relaxes to the lower energy state.
There is no net change in the populations of the energy states, but the average lifetime of a nucleus in the excited state decreases, shortening the corresponding (transverse, T2) relaxation time. 13
NMR Spectrum: A plot of the absorption of radiation versus applied magnetic field strength is called an NMR spectrum. The number of signals shows how many different kinds of protons are present.
The intensity of the signal shows the number of protons of each kind.
The location of the signals shows how shielded or deshielded the proton is.
Signal splitting shows the number of protons on adjacent atoms. 14
Combined 13C and 1H Spectra 15
Slide 16: Diamagnetic shielding: aromatic protons, δ 7–8; vinyl protons, δ 5–6; acetylenic protons, δ 2.5; aldehyde proton, δ 9–10. 16
CHEMICAL SHIFT (δ): The variation of the nuclear magnetic resonance frequencies of the same kind of nucleus, due to variations in the electron distribution.
δ (ppm) = (ω − ω_ref) / ω_ref × 10⁶
equivalently, chemical shift (δ) = absorption frequency relative to TMS (Hz) / spectrometer frequency (MHz) 17
Tetramethylsilane (TMS): gives only one peak in the NMR spectrum, owing to the high electron density around the hydrogens in TMS. Almost all the H peaks of organic compounds appear to the left of the TMS peak. 18
Slide 19: The effective magnetic field at the nucleus can be expressed in terms of the externally applied field B₀ by the expression B = B₀(1 − σ), where σ is called the shielding factor or screening factor. The factor σ is small, typically 10⁻⁵ for protons and <10⁻³ for other nuclei.
When a signal is found at a higher chemical shift, it is said to be downfield, at low field, or paramagnetic; conversely, a lower chemical shift is called a diamagnetic shift, and is upfield. 19
Slide 20: 20 Slide 21: 21
FACTORS INFLUENCING CHEMICAL SHIFT: Both 1H and 13C chemical shifts are related to the following major factors:
Depends on Hydrogen bonding
Depends on adjacent group
Depends on carbon group attached
Depends on hybridization
Depends on anisotropy 22
HYDROGEN BONDING: Molecules with hydrogen bonding have higher chemical shifts and absorb radiation at low field.
That is due to the decrease of electron density around the nucleus. 23
ADJACENT GROUP: For protons on a carbon attached to an electronegative atom or group X (Cl, F, Br, I), the chemical shift increases with the electronegativity of X. This is due to the inductive effect on the shielding of the protons and is apparent in the methyl halides. 24
CARBON GROUP ATTACHED 25
HYBRIDIZATION 26
ANISOTROPY: Protons on an aromatic ring appear at very low field (δ 7.27), due to the aromatic ring current. 27
Slide 28: SPIN–SPIN COUPLING. With one spin there is no splitting; with two spins, Ha and Hb see each other (a splitting of a few Hz). When the magnetic field of Hb adds to the applied field, the Ha signal appears at a lower applied field; when the magnetic field of Hb subtracts from the applied field, the Ha signal appears at a higher applied field. The interactions between the spins of neighbouring nuclei in a molecule may cause the splitting of the lines in the NMR spectrum.
The N + 1 Rule: If a signal is split by N equivalent protons, it is split into N + 1 peaks.
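As a quick illustration of the rule (this sketch is not from the original slides), the N + 1 peaks of a multiplet have relative intensities given by Pascal's triangle, which follows directly from counting the spin states of the N neighbours:

```python
# Hypothetical helper: a signal split by n equivalent protons shows
# n + 1 peaks whose relative intensities are binomial coefficients.
from math import comb

def multiplet(n):
    """Relative intensities of the n + 1 peaks for n equivalent neighbouring protons."""
    return [comb(n, k) for k in range(n + 1)]

print(multiplet(1))  # [1, 1]        doublet (1 adjacent proton)
print(multiplet(2))  # [1, 2, 1]     triplet (2 adjacent protons)
print(multiplet(3))  # [1, 3, 3, 1]  quartet
```

The doublet and triplet of 1,1,2-tribromoethane in the example below follow this pattern.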
Example: 29 Slide 30: 30
1,1,2-Tribromoethane: doublet, 1 adjacent proton; triplet, 2 adjacent protons. 31
COUPLING CONSTANT (J): The distance between the peaks of a multiplet.
Measured in Hz
Geminal coupling:
H-C-H, two sigma bonds
Depends on bond angle
Vicinal coupling:
H-C-C-H, three sigma bonds
Depends on dihedral angle 32
VALUES FOR COUPLING CONSTANTS 33
FACTORS INFLUENCING COUPLING CONSTANT
Geminal coupling constant:
Increasing bond angle - more +ve J
Electronegative substituent - more +ve J
Neighbouring pi bonds - more -ve J
Vicinal coupling constant:
Increasing dihedral angle - more +ve J
Electronegative substituent - less +ve J
J decreases with bond angle 34
SPIN–SPIN DECOUPLING: Irradiation of protons or groups of equivalent protons with sufficiently intense radio-frequency energy to eliminate completely the observed coupling to the neighbouring protons.
It is the process of removing spin–spin splitting between the spins. 35
Slide 36: Types of spin–spin decoupling in 13C NMR: broad-band decoupling and off-resonance decoupling.
PROTON EXCHANGE REACTION (PROTON TRANSFER): Describes the fact that, in a given period of time, a single -OH proton may attach to a number of different ethyl alcohol molecules.
The rate in pure ethyl alcohol is slow, but it increases in the presence of acidic or basic impurities. If the rate is very slow, the expected multiplicity of the hydroxyl group is observed. If it is rapid, a single sharp signal is observed. It causes spin decoupling. 37 Slide 38: 38
A decision is a choice from among several alternatives (options), made by the decision-maker to achieve some objective in a given situation. Business decisions are those made in the process of conducting business to achieve its objectives in a given environment. Managerial decision-making is a control point for every managerial activity, be it planning, organizing, staffing, directing, controlling or communicating. Decision-making is the art of reasoned and judicious choice among many alternatives. Once a decision is taken, it implies commitment of resources.
Business managers have to take a variety of decisions. Some are routine, and others are long-term implementation decisions. Thus managerial decisions are grouped as:
(a) Strategic decision
(b) Tactical decision
(c) Operational decision
1. Strategic Decision: these are major decisions that influence the whole or a major part of the organization. Such decisions contribute directly to the achievement of the common goals of the organization and have a long-range effect upon it.
Generally, a strategic decision is unstructured, and thus a manager has to apply his business judgment, evaluation and intuition to the definition of the problem. These decisions are based on partial knowledge of environmental factors which are uncertain and dynamic; therefore such decisions are taken at the higher levels of management.
2. Tactical Decision: tactical decisions relate to the implementation of strategic decisions. They are directed towards developing divisional plans, structuring workflows, establishing distribution channels, and acquiring resources such as men, materials and money. These decisions are taken at the middle level of management.
3. Operational Decision: operational decisions relate to the day-to-day operations of the enterprise. They have a short-term horizon and are repetitive in nature. These decisions are based on facts regarding the events and do not require much business judgment. Operational decisions are taken at the lower levels of management.
Business decision-making is sequential in nature. In business, decisions are not isolated events; each of them has a relation to some other decision or situation. A decision may appear as a 'snap' decision, but it is made only after a long chain of developments and a series of related earlier decisions.
The decision-making process is a complex process in the higher hierarchy of management. The complexity is the result of many factors, such as the inter-relationships among the decision-makers, job responsibilities, questions of feasibility, codes of morals and ethics, and the probable impact on the business.
The personal values of the decision-maker play a major role in decision-making. A decision otherwise very sound on business principles and economic rationality may be rejected on the basis of personal values, which would be defeated if such a decision were implemented. The culture, the discipline and the individual commitment to goals will decide the process and success of the decision.
The decision-making process requires creativity, imagination and a deep understanding of human behavior. The process covers a number of tangible and intangible factors affecting the decision. It also requires foresight to predict the post-decision implications and a willingness to face those implications. All decisions solve a 'problem', but over a period of time they give rise to a number of other 'problems'.
The purpose of an information system in an organization is to support the decision-making process. Managers must be aware of problems before decisions can be made. A problem exists when the real situation is different from the expected one. After the problem has been identified, the cause of its existence must be identified, and then a solution to the problem has to be found. The decision-making process can be divided into three main phases:
(a) Intelligence: searching the environment for conditions calling for decisions. This phase consists of determining that a problem exists.
(b) Design: during this phase, a set of alternative solutions is generated and tested for feasibility.
(c) Choice: in this phase, the decision-maker selects one of the solutions identified in the design phase.
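These three phases can be read as a simple control flow. The sketch below is purely illustrative (the function names, feasibility test and scores are invented here, not taken from the text), but it shows how intelligence, design and choice chain together:

```python
# Illustrative only: Simon's intelligence -> design -> choice sequence.

def intelligence(actual, expected):
    # A problem exists when the real situation differs from the expected one.
    return actual != expected

def design(candidates, is_feasible):
    # Generate a set of alternative solutions and keep the feasible ones.
    return [c for c in candidates if is_feasible(c)]

def choice(alternatives, score):
    # Select the alternative that best satisfies the decision criterion.
    return max(alternatives, key=score, default=None)

candidates = ["cut costs", "raise output", "change product mix"]
if intelligence(actual=80, expected=100):  # e.g. sales below target
    feasible = design(candidates, lambda c: c != "raise output")
    decision = choice(feasible, {"cut costs": 0.7, "change product mix": 0.9}.get)
    print(decision)  # -> "change product mix"
```

If no alternative survives, the process loops back to an earlier phase, as the text notes next.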
Thus, the decision process follows the sequence from intelligence to design and from design to choice. It is possible to go back from one phase to another, and the whole process may be repeated. It is very important to distinguish between programmed and non-programmed decisions.
If a decision can be based on a rule, a method or even guidelines, it is said to be a programmed decision. The effectiveness of the rule can be analyzed, and the rule can be reviewed and modified from time to time for improvement. Programmed decision-making can be delegated to the lower levels of management.
A decision which cannot be made by using a rule or a model is a non-programmed decision. Such decisions are infrequent, but the stakes are usually larger; therefore, they cannot be delegated to the lower level. The MIS in the non-programmed decision situation can help to some extent in identifying the problem and giving the relevant information to handle the specific decision-making situation. The MIS, in other words, can develop support systems for non-programmed decision-making situations. Advertising budgets, new product decisions and similar problems illustrate the non-programmed type of decision that cannot be automated.
The major reason for distinguishing these two types of decisions is to arrive at some classification of decision-making methods in order to improve decision-making.
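A programmed decision can literally be written down as executable logic. The fragment below is a hypothetical example (no such rule appears in the text) of a rule-based inventory reorder decision of the kind that can be delegated or automated:

```python
# Illustrative programmed decision: a rule-based inventory reorder policy.
REORDER_LEVEL = 50   # reorder when stock falls below this many units
REORDER_QTY = 200    # quantity to order each time

def reorder_decision(stock_on_hand):
    """Programmed (rule-based) decision: how many units to order, if any."""
    return REORDER_QTY if stock_on_hand < REORDER_LEVEL else 0

print(reorder_decision(35))   # 200 -> place an order
print(reorder_decision(120))  # 0   -> do nothing
```

A non-programmed decision, by contrast, has no such rule to encode, which is why it stays with the manager.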
TYPES OF DECISIONS
The types of decisions are based on the degree of knowledge about the outcomes or the events yet to take place. If the manager has full and precise knowledge of the events or outcomes which are to occur, then decision-making is not a problem: full knowledge is a situation of certainty. If he has partial or probabilistic knowledge, then it is decision-making under risk. If the manager does not have any knowledge whatsoever, then it is decision-making under uncertainty.
A good MIS tries to convert a decision-making situation under uncertainty to a situation under risk, and further to certainty. Decision-making in operational management is a situation of certainty. This is mainly because the manager in this field has full knowledge of the environment and has a predetermined decision alternative for choice or selection.
Decision-making at the middle management level is of the risk type. This is because of the difficulty in forecasting an event with 100 percent accuracy and the limited scope of generating the decision alternatives.
At the top management level, it is a situation of total uncertainty on account of insufficient knowledge of the external environment and the difficulty in forecasting business growth on a long-term basis.
A good MIS design gives adequate support to all three levels of management.
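As a concrete (and entirely invented) illustration of decision-making under risk, where the manager has only probabilistic knowledge of outcomes, each alternative can be ranked by its expected value:

```python
# Illustrative numbers only: choosing under risk by expected payoff.
alternatives = {
    "launch new product": [(120, 0.4), (-30, 0.6)],  # (payoff, probability) pairs
    "expand plant":       [(60, 0.7), (10, 0.3)],
}

def expected_value(outcomes):
    return sum(payoff * p for payoff, p in outcomes)

best = max(alternatives, key=lambda name: expected_value(alternatives[name]))
print(best, expected_value(alternatives[best]))  # -> expand plant 45.0
```

Under certainty the probabilities collapse to 1; under uncertainty no probabilities are available at all, which is why the MIS aims to move a decision situation from uncertainty toward risk and certainty.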
TYPES OF DECISION-MAKING SYSTEMS
The decision-making systems can be classified in a number of ways. There are two types of systems, based on the manager's knowledge about the environment. If the manager operates in a known environment, then it is a closed decision-making system. The conditions of the closed decision-making system are:
I. The manager has a known set of decision alternatives and knows their outcomes fully in terms of value, if implemented.
II. The manager has a model, a method or a rule whereby the decision alternatives can be generated, tested and ranked for selection.
III. The manager can choose one of them, based on some goal or objective criteria.
A few examples are: a product-mix problem, an examination system to declare pass or fail, or the acceptance of fixed deposits.
If the manager operates in an environment not known to him, then the decision-making system is termed an open decision-making system. The conditions of this system, in contrast to the closed one, are:
I. The manager does not know all the decision alternatives
II. The outcome of the decision is also not known fully. The knowledge of the outcome may be a probabilistic one.
III. No method, rule or model is available to study and finalize one decision among the set of decision alternatives.
IV. It is difficult to decide an objective or a goal; therefore, the manager resorts to the decision where his aspirations or desires are best met.
Deciding on the possible product diversification lines, the pricing of a new product, and the plant location, are some decision-making situations which fall in the category of the open decision-making system.
The MIS tries to convert every open system to a closed decision-making system by providing information support for the best decision. With this information support the manager knows more and more about the environment and the outcomes, and is able to generate the decision alternatives, test them and select one of them. A good MIS achieves this.
HERBERT SIMON MODEL
Decision-making is a process which the decision-maker uses to arrive at a decision. The core of this process is described by Herbert Simon in a model. He describes the model in three phases, as outlined below:
I. Intelligence: raw data is collected, processed and examined; this identifies a problem calling for a decision.
In the intelligence phase, the MIS collects the data. The data is scanned, examined, checked and edited. Further, the data is sorted and merged with other data and computations are made, summarized and presented. In this process, the attention of the manager is drawn to all problem situations by highlighting the significant differences between the actual and the expected, the budgeted or the targeted.
In the design phase, the manager develops a model of the problem situation on which he can generate and test different decision alternatives; he then moves into the phase of selection, called choice.
In the phase of choice, the manager evolves selection criteria such as maximum profit, least cost, minimum wastage, least time taken and highest utility. The criterion is applied to the various decision alternatives and the one which satisfies the most is selected.
In these phases, if the manager fails to reach a decision, he starts the process all over again. An ideal MIS is supposed to make a decision for the manager.
An example would illustrate further the use of the Simon model in the MIS. For instance, a manager finds, on collection and analysis of the data, that the manufacturing plant is underutilized and the products being sold are not contributing to profits as desired. The problem identified, therefore, is to find a product mix for the plant whereby the plant is fully utilized within the raw material and market constraints, and profit is maximized. The manager, having identified this as a problem of optimization, now examines the use of a linear programming (LP) model. The model is used to evolve various decision alternatives. However, selection is made first on the basis of feasibility and then on the basis of maximum profit.
The product mix so given is examined by the management committee. It is observed that the market constraints were not realistic in some cases and that the present plant capacity can be enhanced to improve profit. The same model is used again to test the revised position. Therefore, additional data is collected and an analysis is made to find out whether the average 20 percent utilization of the capacity can be increased. Market research for some products is made, and it is found that some constraints need to be removed or reduced. Based on the revised data, the linear programming model is used again and a better optimum solution is obtained.
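A minimal sketch of such a product-mix formulation, using linear programming through SciPy, is shown below. The coefficients (per-unit profits, machine-hours and material limits) are invented for illustration and are not the figures from the example above:

```python
# Illustrative product-mix LP: maximize profit subject to plant capacity
# and raw-material constraints (all numbers are made up).
from scipy.optimize import linprog

c = [-30, -20]            # linprog minimizes, so per-unit profits are negated
A_ub = [[2, 1],           # machine-hours used per unit of products 1 and 2
        [1, 3]]           # raw material used per unit of products 1 and 2
b_ub = [100, 90]          # available machine-hours and raw material

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x, -res.fun)    # optimal mix (42.0, 16.0) and maximum profit 1580.0
```

Relaxing a constraint (for example, raising an entry of b_ub after the market research described above) and re-solving is exactly the iterative use of the model that the paragraph describes.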
MIS AND DECISION-MAKING
It is necessary to understand the concepts of decision-making as they are relevant to the design of the MIS. The Simon model provides a conceptual design of the MIS and decision-making wherein the designer has to design the system in such a way that the problem is identified in precise terms. That means the data gathered for analysis should be such that it provides diagnostics and also provides a path to bring the problem to the surface.
In the design phase of the model, the designer is to ensure that the system provides models for decision-making. These models should provide for the generation of decision alternatives, test them and pave the way for the selection of one of them. In the choice phase, the designer must help select the criteria for choosing one alternative among the many.
The concept of programmed decision-making is the finest tool available to the MIS designer, whereby he can transfer decision-making from a decision-maker to the MIS and still retain the responsibility and accountability with the decision-maker or the manager. In the case of non-programmed decisions, the MIS should provide decision support systems that offer a generalized model of decision-making.
The concept of decision-making systems, such as the closed and the open systems, helps the designer in providing design flexibility. Closed systems are deterministic and rule-based; therefore, the design needs only limited flexibility, while in an open system the design should be flexible enough to cope with the changes required from time to time.
The methods of decision-making can be used directly in the MIS, provided the method to be applied has been decided. A number of decision-making problems call for optimization, and operational models are available which can be made a part of the system. The optimization models may be static or dynamic, and both can be used in the MIS. Some of the problems call for a competitive analysis, such as payoff analysis. In these problems, the MIS can provide the analysis based on the gains, the regrets and the utility.
The concept of the organizational and behavioral aspects of decision-making provides an insight to the designer for handling the organizational culture and constraints in the MIS. The concepts of the rationality of a business decision, the risk awareness of managers and the tendency to avoid uncertainty make the designer conscious of human limitations and prompt him to provide support in the MIS to handle them. The reliance on organizational learning makes the designer aware of its role and prompts him to provide channels in the MIS to make the learning process more efficient.
The relevance of the decision-making concepts is significant in MIS design. The significance arises out of the complexity of decision-making, the human factors in decision-making, the organizational and behavioral aspects, and the uncertain environments. The MIS design addressing these significant factors turns out to be the best design.
Edmund Ruffin initially dismissed the work of John Taylor (Arator), but the work of Humphry Davy (Elements of Agricultural Chemistry) intrigued him. In a test on his own plantation he dramatically increased crop yields by adding marl (a mixture of clays, calcium and magnesium carbonates, and remnants of shells, used as fertilizer for lime-deficient soils) from the tidewater area. He then combined elements of Davy's and Taylor's work to increase crop yields by nearly 50% and reduce soil erosion.
In 1823 he was elected a Virginia senator, resigning after serving three years of a four-year term. During the Election of 1824 Ruffin expressed displeasure with each of the candidates running (Henry Clay, John Quincy Adams, Andrew Jackson, John C. Calhoun and William Crawford) for expanding the power of the government by advocating tariffs, internal improvements, or federal banks. Unhappy with political life, he returned to farming and writing, publishing a book on the value of marl in 1832, then starting the Farmer's Register in 1833.
He began using his magazine to express political viewpoints (for example, Ruffin maintained that the movement to popular election was never intended by the founding fathers), sometimes alienating both politicians and farmers. In 1840 he supported Whig William Henry Harrison and John Tyler, who was a friend. Edmund Ruffin founded a separate magazine, Southern Magazine and Monthly Review in January, 1841, but both fell victim to recession and political intrigue.
James H. Hammond brought Edmund Ruffin to South Carolina, where the Virginian preached the benefits of technological advancement and searched for marl beds. While in South Carolina Ruffin developed two model plantations, at Red Cliff and Silver Bluff, that practiced, among other things, crop rotation (note: This crop rotation was akin to Charles Townsend's 18th century theory, not George Washington Carver's revolutionary concept some forty years later). Ruffin's push for advanced agricultural techniques actually reduced the South's need for slaves.
Edmund Ruffin moved into the forefront of the Southern nationalistic movement following the death of his friend John C. Calhoun in 1850, although he was not a national politician. Ruffin was a writer with a dramatic flair who turned to paper and pencil and an evangelistic speaking style to make his pro-slavery case. In Debow's review, Agricultural, commercial, industrial progress and resources, Ruffin was a frequent contributor, both for his agricultural expertise and his pro-slavery sentiment.
Before the Nashville Convention Ruffin championed the cause of an independent South, along with other Virginia extremists like M. R. H. Garnett, James Mason and Beverly Tucker. Virginia would be the only Upper South state to send delegates to the convention.
Following the Compromise of 1850 his radical talk abated and he returned to advocating the advancement of the introduction of new technologies and old fashion learning to increase yields on Virginia farms. He advocated the use of marl, better plowing techniques to reduce soil runoff and crop rotation. For the depleted tobacco farms of southeastern Virginia he helped introduce guano, lime, bone and superphosates to replenish the soil.
By 1856, though, his writings had returned to advocacy of an independent southern nation and a defense of slavery:
Slavery is not intrinsically right, it is only circumstantially right under a set state of circumstances. The right rule is freedom, but slavery is an exception to that rule; and if right, right as all exceptions are, according to the circumstances which surround it.
Unlike many of the Southern extremists, Ruffin was realistic. When the Lecompton Constitution came up before Congress, Ruffin admitted the document was the work of a minority and the slavery clause would be "repealed within a year." At the time, pro-Union feelings were running high in Virginia - former Whigs H. H. Stuart and John M. Botts had been organizing Unionists in the state under the guidance of Alexander Stephens. It was during this time that Ruffin became convinced that the South could not depend on the Democratic party to protect its "rights."
In 1858 Ruffin founded the "League of United Southerners," which backed the concept of an independent southern nation. William Yancey, another fire-eater, is sometimes given credit as a "co-founder" - this is wrong. Even Yancey referred to it as "Ruffin's League." In 1859, Ruffin rushed to Harper's Ferry when talk of additional revolt arose. He took 15 of the pikes that Brown intended to use to arm the slaves and sent them to the Southern governors with the label "Sample of the favors designed for us by our Northern brethren."
In June, 1860 Edmund Ruffin published a futuristic novel, Anticipations of the Future, to Serve as Lesson for the Present Time, correctly predicting Abraham Lincoln winning the election of 1860, followed by Republican William Seward in 1864. The potential reelection of Seward in 1868 brings secession, then a war that takes place in Virginia. The North enlists "Negro armies," and violence racks Northern cities before a truce leaves an independent South. At the end of the book Ruffin offers a second outcome. The South secedes immediately and "the great cities of Boston, New York and Philadelphia... (are) sacked and burnt, and their wealthiest inhabitants massacred, by their own destitute, vicious and desperate population..."
Convinced that Lincoln would win the election, Ruffin began a heavy schedule of pro-Secession speeches in October, 1860, including a noted speech in the South Carolina statehouse in Columbia that included the following statement:
I have studied the question now before the country for years. It has been the one great idea of my life. The defense of the South, I verily believe, is only to be secured through the lead of South Carolina. Old as I am, I have come here to join her in that lead. I wish Virginia was as ready as South Carolina, but, unfortunately, she is not. But the first drop of blood spilled on the soil of South Carolina will bring Virginia and every other Southern State to her side.
As war approached, Ruffin returned to Charleston, this time to serve in the army at the age of 67. When asked what unit he belonged to, he responded, "The one with a vacancy." He was added to South Carolina's Palmetto Guards and is generally believed to have fired the first shot at Fort Sumter, although this is questioned. From the Official Records, P. G. T. Beauregard said:
The venerable and gallant Edmund Ruffin, of Virginia, was at the Iron battery, and fired many guns, undergoing every fatigue and sharing the hardships at the battery with the youngest of the Palmettoes.
Captain G. B. Cuthbert, who was in direct command of the Palmetto Guards on Morris Island, spelled out Edmund Ruffin's role in the battle:
The mortar battery at Cummings Point opened fire on Fort Sumter in its turn, after the signal shell from Fort Johnson, having been preceded by the mortar batteries on Sullivan’s Island and the mortar battery of the Marion Artillery.
Cuthbert, in a later report at the battle of Manassas, makes the following statement:
Many of the soldiers threw their arms into the creek, and everything indicated the greatest possible panic. The venerable Edmund Ruffin, who fired the first gun at Fort Sumter, who, as a volunteer in the Palmetto Guard, shared the fatigues and dangers of the retreat from Fairfax Court-House, and gallantly fought through the day at Manassas fired the first gun at the retreating column of the enemy, which resulted in this extraordinary capture.
The historic discrepancy comes over the "signal shot" from Johnson Island and from Cuthbert's statement, "...having been preceded by the mortar batteries on Sullivan’s Island and the mortar battery of the Marion Artillery." South Carolina had an interest in Ruffin firing the first shot since he was from Virginia and they had not yet seceded. So, was Ruffin's the first shot or not? You decide.
After the battle was over, in a surrender negotiated by fellow fire-eater Louis Trezevant Wigfall, Edmund Ruffin led the Palmetto Guards into the fort as color-bearer. Local papers rang the praise of Virginia's son claiming, "That ball fired at Sumter by Edmund Ruffin will do more for the cause of secession in Virginia than volumes of stump speeches." The gun Ruffin fired has been known as the secession gun ever since.
After returning to Richmond, Ruffin addressed the Virginia congress, one of two speakers rallying the body to vote for secession. While he did not get what he wanted (Virginia voted to hold a popular vote on the secession document), his powerful sermon-like speech did sway votes for secession.
As tensions began to build in western Virginia and elsewhere, Ruffin rejoined the Palmetto Guard at Fairfax Courthouse. He sent the following to Jefferson Davis in the new capital of Richmond, Virginia:
RICHMOND, VA., May 16, 1861.
For salvation of our cause come immediately and assume military command.
Ruffin did participate in the withdrawal from Fairfax Courthouse as Irvin McDowell advanced. During the battle of First Bull Run - First Manassas he is credited with firing the gun that turned the Union retreat into a stampede, but Ruffin quickly ended his practice of going into battle. He returned to Danville, Virginia. With the Surrender at Appomattox, Northern forces began occupying the South.
The Union Army destroyed his property at Coggin’s Point and his beloved estate on the Pamulkey River, Marlbourne. On Saturday, June 17th, 1865, Ruffin ate breakfast, visited with some guests, then went upstairs and committed suicide using his gun and a forked stick. His suicide note said, "I cannot survive the liberties of my country."
His son, Edmund Ruffin, Jr., returned to Marlbourne following his father's death where he grew oats, wheat and corn and built a new house. In 1866 he began raising cotton.
Reminiscences of forts Sumter and Moultrie in 1860-'61, by Abner Doubleday.
ELIZABETH II: Long May She Reign
A Selected List of References
Compiled by Josephus Nelson, Reference Specialist, Humanities
and Social Sciences Division
On the occasion of the fortieth anniversary of the Queen's
accession to the throne.
"Thy choicest gifts in store,
On her be pleased to pour;
Long may she reign:
May she defend our laws,
And ever give us cause
To sing with heart and voice,
God save the Queen."
Succeeding her father, George VI, on February 6, 1952,
Elizabeth II, at the meeting of the Accession Council on February
8, 1952, declared: "I shall always work as my father did ... to
uphold constitutional government and to advance the happiness and
prosperity of my peoples," and "I pray that God will help me to
discharge worthily this heavy task." Forty years later, most will
concede that she has lived up to her earlier declaration. Not
only has she faithfully carried out her constitutional duties,
but she has made the monarchy an accepted and popular
institution. The high level of interest in the royal family and
in royal events is an indicator of the special place that the
Queen and her family hold in modern life.
Following is a selection of books which tell the story of
Queen Elizabeth's reign. The Queen's personality and role as
sovereign are examined minutely, and the evolution of the British
monarchy during her reign is amply demonstrated.
The Queen : a Penguin special. -- Harmondsworth, Eng. ; New
York : Penguin Books, 1977. -- 185 p., leaves of plates
A series of essays written in honor of the Queen's 1977 silver
jubilee. Her constitutional status, royal relatives, right of
succession, and personal style are some of the topics addressed.
Albert, Harold A.
The Queen and the arts. -- London : W.H. Allen, 1963. -- xi,
177 p. : ill., (some col.), port.
Describes the Queen's interest in and patronage of the arts and
her efforts to conserve and add to the royal collections.
Millar, Oliver, Sir, 1923-
The Queen's pictures / Oliver Millar. -- 1st American ed. --
New York : Macmillan, 1977. -- 240 p. leaves of plates
: ill. (some col.)
Includes bibliographical references and index.
This story of the amassing of the royal collections, from the
time of the Tudors to the present reign, includes an account of
Queen Elizabeth's stewardship of these collections.
Duncan, Andrew, 1940-
The Queen's year : the reality of monarchy. -- [1st ed.]. --
Garden City, N.Y. : Doubleday, 1970. -- viii 345 p. :
geneal. tables, ports.
Attempts to determine how "an integral yet archaic aspect of
British society like monarchy operates," by describing a year
(1968) in the life of the Queen.
Sovereign : Elizabeth II and the Windsor dynasty / Roland
Flamini. -- New York : Delacorte Press, 1991. -- viii 440 p.
A current assessment of Elizabeth II in which the author argues
that she has initiated change in the monarchy--adapting it to the
new expectations of her subjects, but doing so without changing
any of the essentials.
Howard, Philip, 1933-
The British monarchy in the twentieth century / [by] Philip
Howard. -- London : Hamilton, 1977. -- 208 p., xvi p. of
plates : ill. (some col.), facsim., geneal. table, ports.
Bibliography: p. 204.
Presents a thorough examination of the place of the monarchy in
British life. The Queen's constitutional role, ties to the
Commonwealth, social influence, family, and finances are reviewed
at length with the author concluding that the monarchy "though
illogical but functionally useful" will continue to thrive for
many more years.
James, Paul, 1958-
At home with the Royal Family / Paul James and Peter
Russell. -- 1st U.S. ed. -- New York : Harper & Row, c1986.
-- 246 p. : ill.
Allows a glimpse behind the scenes into the day-to-day running
of the royal household.
Majesty : Elizabeth II and the House of Windsor / Robert
Lacey. -- New York : Harcourt Brace Jovanovich, c1977. --
xxxii, 349 p., leaves of plates : ill.
Bibliography: p. 313-317.
Addresses the first twenty-five years of the Queen's reign and
notes that the lessons learned from the abdication of Edward VIII
have greatly influenced her style as monarch.
How the Queen reigns : an authentic study of the Queen's
personality and life work. -- Rev. ed. -- London : Pan
Books, [1961, c1959]. -- 381 p. : ill.
This is an early attempt to describe the Queen's personality and
to give an account of her multifaceted role as soverign.
The Queen and her royal relations : a who's who of the royal
families of Europe. -- London : R. Hart-Davis, . -- 47
p. : coats of arms, geneal. tables.
Montague-Smith, Patrick W.
The 'Country life' book of the royal silver jubilee / [by]
Patrick Montague-Smith. -- [London] : [Hamlyn for] Country
Life Books, . -- 176 p. : ill. (some col.), geneal.
table, ports (some col.)
Celebrates the royal silver jubilee by examining Queen
Elizabeth's work as sovereign.
The Queen / Ann Morrow. -- 1st U.S. ed. -- New York : W.
Morrow, 1983. -- 254 p., p. of plates : ill.
Through interviews and from close observation on many royal
tours, Ann Morrow attempts to define the Queen's character.
Morrah, Dermot, 1896-
The work of the Queen. -- London : W. Kimber, -- 191
p. : ill.
Discusses the duties and responsibilities of the monarch.
Packard, Jerrold M.
The Queen & her court : a guide to the British monarchy
today / Jerrold M. Packard. -- New York : Scribner, c1981. --
234 p., leaves of plates : ill.
Bibliography: p. 222-225.
The Queen, members of the royal family, royal homes, jewels and
ceremonies, the court, and the peerage are discussed in this guide
to the monarchy.
Pearson, John, 1930-
The selling of the Royal Family : the mystique of the
British monarchy / John Pearson. -- New York : Simon and
Schuster, c1986. -- 350 p., p. of plates : ill.
Suggests that the popularity of the royal family is the result
of the use of sophisticated marketing techniques. "The fact is
that the British Royal Family is the most successful PR operation
in the world, reinventing itself generation after generation,
holding the public's interest with its ceremony, personalities . . ."
Caricatures and Portraits
Miller, Harry Tatlock.
Undoubted Queen / [compiled by Harry Tatlock Miller]. --
Garden City, N.Y. : Doubleday, . -- 252 p. : chiefly ill.
The Queen's early reign is revealed in this pictorial record.
Strong, Roy C.
Cecil Beaton : the royal portraits / Roy Strong. -- New York
: Simon and Schuster, c1988. -- 227 p. : ill. (some col.)
Bibliography: p. 225.
This essay, complemented by many well known photographs, reveals
the manner in which Cecil Beaton, royal photographer, helped to
shape the modern image of the House of Windsor.
"We are amused" : the cartoonists' view of royalty / edited
and introduced by Peter Grosvenor ; foreword by H.R.H. The
Prince of Wales. -- London : Bodley Head, 1978. -- 126
p. : chiefly ill.
Includes cartoons from an exhibition at the London Press Club,
Selects cartoons which point out the foibles of the royal family.
Harris, John, 1931-
Buckingham Palace and its treasures / [by] John Harris,
Geoffrey de Bellaique [and] Oliver Miller ; introd. by John
Russell ; photography by Lionel Bell, Kerry Dundas [and]
Sidney Newbury. -- New York : Viking Press, . -- 320
p. : 297 ill. (79 col.), plans, ports. -- (A studio book)
This is an account of the history and contents of Buckingham
Palace, the Queen's administrative center.
The history & treasures of Windsor Castle / Robin Mackworth-
Young. -- 96 p. : ill. (some col.)
Traces the development of Windsor, one of the Queen's principal
homes, from its beginnings as a military outpost to its place as
royal treasure house and seat of the Windsor dynasty.
Martin, Kingsley, 1897-
The Crown and the Establishment. -- [1st ed.] --
Harmondsworth : Penguin in association with Hutchinson,
1965. -- 192 p.
In this critical study of the British monarchy, the author
assails the royal house for its role in serving merely as "head
of a social class and a vanishing economic order," rather than
acting as a symbol of the "new England that waits to be born."
The enchanted glass : Britain and its monarchy / Tom Nairn.
-- London : Radius, 1988. -- 402 p.
Includes bibliographical references (p. 393-402).
Contends that the monarchy is outmoded, and that republicanism
would serve Great Britain better.
Rites and Ceremonies
Barker, Brian, O.B.E.
When the Queen was crowned / Brian Barker. -- 1st American
ed. -- New York : D. McKay Co., 1976.
Bibliography: p. -218.
Firsthand account, written by a senior civil servant, describes
the planning and execution of the Queen's coronation.
Brooke-Little, J.P. (John Philip)
Royal ceremonies of state / [by] John Brooke-Little. --
Feltham : Country Life Books ; London ; New York :
distributed by Hamlyn Pub. Group, 1980. -- 144 p. : ill.
(some col.), coats of arms, facsims., ports. (some col.)
Describes the colorful ceremony and pageantry associated with
the Queen and the royal family.
Church of England. Liturgy and ritual. Coronation service.
The music with the form and order of the service to be
performed at the coronation of Her Most Excellent Majesty
Queen Elizabeth II in the Abbey Church of Westminster on
Tuesday the 2nd day of June, 1953. -- [Official ed.]. --
London : Novello, 1953. -- vi, 183 p.
Fisher, Geoffrey Francis, Abp. of Canterbury, 1887-
I here present unto you . . . Addresses interpreting the
coronation of Her Majesty Queen Elizabeth II, given on
various occasions by His Grace the Lord Archbishop of
Canterbury, Primate of All England. -- London : S.P.C.K.,
1953. -- 45 p.
A Form of prayer & of thanksgiving to Almighty God on the
occasion of the silver jubilee of the accession of Our
Sovereign Lady Queen Elizabeth the Second. -- London :
Cambridge University Press : Eyre and Spottiswoode : Oxford
University Press, . -- 14 p.
"Published with the approval of the Archbishops of Canterbury
and York, the Cardinal Archbishop of Westminster and the
Moderator of the Free Church Federal Council."
Includes bibliographical references.
Mr. Greaves, economist, lecturer, and author of numerous articles and books, served with the U. S. House of Representatives Committee on Education and Labor during the preparation and passage of the 1947 revisions of the National Labor Relations Act, popularly known as the Taft-Hartley Act.
Unemployment can be a dreadful condition. The inability to find a needed job is a heart-rending experience for anyone. For those with young children to feed and clothe, it is a terrifying predicament. It gnaws at and destroys the spirit and self-confidence of even the strongest souls. With nerves on edge, family harmony too often flies out the window.
In addition to the deep mental anguish, there are also physical and financial losses. An adult’s health, as well as his spirit, may suffer irreparably. A child’s growth may be permanently stunted. The loss of the family car can reduce both the hope and the possibility of getting another job. The foreclosure of a mortgage on the family home can liquidate the savings of a lifetime. In short, a prolonged period of unemployment can wreck a person’s life.
Then, too, the unemployed are not the only sufferers. With millions of able-bodied persons searching for a source of income or twiddling their thumbs in frustrated idleness, the potential quantity of goods and services available in the market place is greatly reduced. This means higher prices and lower living standards for everyone. Government programs to provide a floor for the unemployed also mean higher taxes and/or still higher prices as a result of the political creation and distribution of unearned dollars. Actually, mass unemployment and its aftermath is probably the greatest single driving force behind our politically sponsored inflation.
So solving the problem of mass unemployment is a major task of our time. Before we can solve it, we must locate the root cause. There was no unemployment at Plymouth or Jamestown. There was no mass unemployment during this country’s first hundred years of existence. What is different today?
Not a Free Market
One major difference is that there is no longer a free market in jobs and wage rates. There are now laws on the statute books that grant certain groups of workers the privilege of demanding and getting higher wages than they could and would earn in a free market. The unemployed are no longer permitted to compete and thus reduce the higher than free market wage rates of the privileged few. So those shut out from the higher paying jobs must compete for work and drive down the wage rates in unorganized occupations. Then, they face the floor decreed by minimum wage laws which often prevent employment at these reduced market wage rates.
Employers cannot long pay workers the legal minimum wage rate if consumers cannot or will not buy the resulting goods and services at prices that cover costs. As a result, millions are now legally prevented from taking either high-paying jobs or low-paying jobs. The free market in jobs and wage rates has been legally destroyed.
It should thus be evident that the remedy for mass unemployment is to repeal the laws which prevent people from competing for the higher paying jobs or taking the lower paying jobs—lower paying, until workers acquire the skill and experience needed to climb the ladder to higher incomes.
Historian Clarence B. Carson has written a small book, Organized Against Whom?, which tells some of the story of how we strayed from the free market path for jobs and wage rates. It is an ugly story vividly describing the coercion and violence employed by many in the labor union movement in their effort to convince the electorate that they are entitled to special privileges and immunities. They have successfully convinced many that labor unions are the protectors of downtrodden poorly paid workers who are supposedly at the mercy of greedy all-powerful employers who rob them of their rightful earnings.
Today, thanks to socialist and labor union propaganda, there is little understanding of the fact that employers are merely middlemen operating in a heavily taxed and very competitive market place. Actually, employers have very little to say about wage rates. Employers are compelled by market forces to pay employees in accordance with the value that consumers place on the production of their marginal employees, the last hired. If employers pay higher wage rates than they get back from consumers, they suffer losses and sooner or later cease to be employers. If employers seek to increase their profits by paying lower than market wage rates, competitors soon bid away their employees. Thus, the free market competition of employers is the salvation of workers looking for higher wages.
The Voluntary Way
In a free society, labor unions, like other organizations, would be voluntary groups trying to advance the interests of their members. They would abide by the laws and seek no special privileges or immunities. Unions that offered employers the most competent and reliable workers, who were willing to work for competitive free market wage rates, would grow and prosper. Labor unions that offered incompetent workers, insisted on featherbedding, or other unnecessary or costly conditions and demanded higher wage rates than competent non-union members would willingly accept would soon fade away. Certainly, in a free society no group should or would resort to violence, coercion or special privileges to obtain what it seeks.
The free market operates according to the Golden Rule. The higher values one contributes to the market place, as valued by consumers, the more one receives in return. Free market operations are always voluntary transactions by which all parties exchange something they have for something on which they place a higher value. Goods and services thus continually move to persons who place a higher value on them. Barring human error or the use of force or fraud, all parties gain from all such transactions. The prevention of the use of force or fraud is a prime function of government.
Dr. Carson tells us how many labor unions now operate, with the help of laws and court decisions, coercing employers to join with them to grant them a monopoly of certain jobs. Such unions are thus able to shut out the competition of competent applicants for those jobs. Then, by demanding still higher wage rates, some unions further reduce production and employment by pricing some of their own members, those with low seniority, out of their high paying jobs. In short, labor unionism, as now practiced, is not only the enemy of employers, investors and consumers, but it is primarily the enemy of competent job seekers who, as a result of union action, must remain underpaid or unemployed.
Unions Gain Monopoly Status
Today, we live in an economy of political privileges with all kinds of lobbies trying to get for their members what they consider their “fair share” of the political largesse. Un questionably, labor unions have been one of the first and strongest of these political pressure groups. As Dr. Carson narrates, they won their first great political victory in 1914, when they persuaded Congress to decree: “That the labor of a human being is not a commodity or article of commerce.” Congress has great powers, but it did not by this legislation alter the fact that labor is one of the factors of production traded in the market place.
With this law on the books, union leaders waged a propaganda campaign demanding that government help them raise wage rates above those of the free market, which they maintain, falsely, are set too low by the whims of all-powerful employers. Their propaganda campaign was accompanied with strikes and violence that disturbed the entire nation and contributed to the mass unemployment of the depression period that started in 1929.
As a result of this propaganda and the show of force, Congress and the courts were persuaded in the 1930s to grant these labor union advocates of self-serving coercion most of the special privileges and immunities they sought. Now, we have the results. Employers as a breed are becoming scarce. So are investors willing to place their savings in new or expanded production facilities. The combined result is that the ranks of the unemployed are now reckoned in the millions. Mass unemployment has even caught up with many of the legally privileged union members. The economic laws of the market cannot long be circumvented without eventually producing undesirable consequences.
As Dr. Carson tells us, our constitutionally chosen government has empowered the labor unions to accomplish all this. He may be a bit harder on the unions than they deserve. There can be no excuse for their resort to violence and coercion. However, they can hardly be blamed for taking advantage of the special privileges and immunities from prosecution that Congress and the courts have conferred on them. In taking advantage of existing laws, they are doing no more than many college kids, lots of old folks and millions of persons in between. Of course, that does not make it right or permanently possible. Neither Congress nor the courts have any power to repeal the laws of economics. They could make us all millionaires, but only by destroying the value of the dollar. A price must be paid for every interference with the inexorable laws of economics.
A Story of Special Privilege
It would seem we are fast losing the freedom for which our Founding Fathers pledged their lives, their fortunes and their sacred honor. As Dr. Carson writes: “The thrust of the American Revolution was in the direction of removing special privileges and legal supports from groups and organizations.” For decades now the courts have supported Congressional grants of “special privileges and legal supports” on a wholesale basis. As Carson writes, this has been “a fundamental departure from the principles of good government,” not to mention the principles of sound economics.
Our government has permitted, encouraged and even underwritten the power of labor unions to coerce all other elements of our society to bend to their will. This small book tells much of the story of how this came about. In doing so, it exposes many of the errors in the popular fallacies, the acceptance of which has permitted labor unions to attain their present position of power. This story is one with which every American should be familiar.
The book is not without its faults and contradictions. Some are only the result of an unfortunate choice of words. For example, lawlessness is referred to as the “state of nature.” Or, “An ancient union complaint could certainly be disposed of if governments neither recognized, gave status to, taxed, or otherwise noticed private organizations, except as they might disturb the peace.” That would mean no legal recognition or taxation of corporations or any other private organizations. In effect, it would repeal the First Amendment. For no press or religious organizations would have any status or right to be recognized in court. Or when Carson writes, “Congress is empowered to make laws regulating commerce.” The Constitution carefully limited that power to “interstate commerce,” and that is what it meant until the Supreme Court, in 1937, ignored the key word “interstate” in a 5 to 4 decision which upheld the National Labor Relations Act, popularly known as the Wagner Act.
There are some unfortunate contradictions in the book, as when we read, “Let me confess at the outset that I do not know what labor unions are.” Then the author proceeds in chapter after chapter to tell what they are and what they do. At another point we read, “Violence is not essential to unionism.” That is true, of course, if they operate within the rules and ethics of a free society. However, the thesis of this book is that labor unions are organized against society in general and against other workers in particular. As the author describes so well, they have for years pursued their policies by resorting to violence and coercion. For decades now the government has given its support to their anti- social actions—actions that impede not only full employment and prosperity but also the legitimate activities of many governmental entities.
Criticism might be made of such statements about labor unions as, “They are not economic organizations,” and “Nor is the labor union primarily a political organization.” If economics is the science of human actions to attain selected goals, then boycotts, strikes and stopping others from working in pursuit of union goals are certainly economic actions. This book presents many incidents illustrating how labor unions have used both economic and political means to attain their present position of power.
Perhaps this reviewer’s greatest disagreement is with the author’s assertion that “Labor unions are religious, or religion-like organizations and, as I say, once this is grasped they come into focus. Their immediate goals are ethical in character; their ultimate goals are religious. Their economic claims are ethical in character.” The latter might be so if they sought their legitimate ends by ethical means. However, there is nothing ethical or religious about the use of coercion, be it legal or illegal.
As for labor unions being religious, many economically ignorant labor union members and Congressmen undoubtedly swallow the propaganda and follow the wishes of the union bosses with a “religious” faith and fervor. We may live “in the age of the divine right of majorities,” as the author rightly states, but the fact that labor unions are “supported by compulsory tithes and taxes” does not make them religious or “established churches.”
Religion pertains to the supernatural—metaphysics. Except for the fact that reason tells us there must have been a Creator, religions deal with matters which cannot be logically proved or disproved. Religions are concerned with the irrational aspects of human life. Consequently, honest people, who are both sane and intelligent, can and do differ on religious matters. The aims and actions of labor unions are certainly neither heavenly nor irrational. They are earthy and concrete. Labor unions seek more for their members. There is nothing wrong with that objective when it is pursued by ethical means—by voluntary agreements for the mutual benefit of all parties. However, as Dr. Carson has so vividly pointed out, our present problems have arisen from the use of violence, coercion and special privileges which are neither ethical nor particularly metaphysical.
The mass media, which are largely manned and edited by labor union members, constantly present a one-sided favorable picture of union policies, privileges and activities. The public needs to know more about the antisocial effects of the prerogatives exercised by labor unions. This book strips away much of the veneer that covers the unfortunate deification of labor union activities, activities which, if committed by individuals or other organizations, would be properly labeled as crimes. We need more books which, like this one, expose the root cause of mass unemployment, a major blight not only on economic peace and prosperity but also on the pursuit of human happiness. | <urn:uuid:9a4fbf96-d3ff-4de0-8daa-9d0cc0bfaa57> | CC-MAIN-2015-35 | http://fee.org/freeman/on-labor-unions | s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440645294817.54/warc/CC-MAIN-20150827031454-00335-ip-10-171-96-226.ec2.internal.warc.gz | en | 0.968294 | 2,995 | 3.15625 | 3 |
If there’s consensus on anything in education, it’s this: Tests are awful. At best, they’re a necessary evil, something that teachers, parents and students all hate, but that we tolerate in the interest of assessing how well our students are learning.
But maybe we’ve been thinking about tests all wrong. Research shows that tests can actually be powerful tools for learning, but only if teachers use them right.
The medical student
I met up with Michael Young on the campus of Georgia Regents University’s medical school in Augusta, Georgia, where he’s a student. Young has neatly combed hair, a quiet voice, and an intensely earnest manner. Every time I saw him over the course of two days, he was wearing a clean, pressed, white lab coat.
Young is in his third year now, but he had a rocky start to medical school. He nearly flunked out, and he says that was because he was studying wrong. In fact, most of us study wrong. Researchers say we could learn more, learn it faster, and remember it better if we didn’t make the kinds of mistakes Young was making.
Young decided he wanted to be a doctor while working as a counselor at a drug abuse clinic. He was impressed with how much more the doctors were able to do to help the patients who came in to the facility. So he approached the nearest university, Columbus State, to ask about premed programs.
“They kind of laughed at me,” he says. “They said, ‘Well, we don’t really have a premed program and besides, people from Columbus State don’t go to medical school.’”
Young persisted and finally convinced the school to put together a modified premed program for him. It was a bit haphazard, he says now.
“Since they didn’t have a Bio 2 class, I ended up taking fishing to meet that requirement,” he remembers. “So we went out to the river and waded around, and we actually dissected a fish one time. So that’s something.”
Young got through it and managed a decent score on the medical school entrance exams. He was thrilled when he received an acceptance letter from Georgia Regents. But when he got there, he found he was not prepared for med school.
Young’s first medical school class was biochemistry. From the first day, he felt like he was drowning. He didn’t understand any of the notations, the formulas, even the vocabulary the professor used.
Halfway into the semester, the class took its first test.
“I studied from the time I woke up in the morning until the time I went to bed, 16 hours a day,” Young says. “I took my notes, read them. When I finished, read them again. When I finished, read them again. And I said, ‘Okay, I’ve got this. I memorized it in the book. I know this. I’m just going to do great.'”
This is exactly how most students study. Rereading is by far the most common study strategy in higher education; a recent survey found that 83.6 percent of university students choose it. That’s probably because rereading feels good. Once you’ve reread a chapter a couple of times, you really feel like you’ve got it, and in the short term, you do. Research shows that people can remember stuff they just read really well.
But it turns out that rereading is almost totally ineffective for long-term remembering, even remembering that has to happen just a day later. Which Young found out on that first biochem test.
“I get to the test, and I just have no idea,” he says.
Everything that Young had been so sure was lodged in his head just vaporized. It was as if he hadn’t studied at all. He ended up getting a 65, a failing grade.
The same thing happened on his next couple of tests, in all of his classes. Young’s first semester grades were terrible.
But instead of giving up or dropping out, Young decided to approach his problem scientifically.
“I asked myself the question: am I doing everything right?” he says. “I knew that the answer wasn’t studying more, because I knew there was no way to study more than what I did. And I thought, ‘Well, maybe I’m just not studying the right way.'”
So Young went to his computer and started searching for answers. He started by typing ‘How to improve memory’ into his browser window. Predictably, this turned up a bunch of junk, and so he started poking around in the academic research. And that’s how he came across Henry Roediger.
The testing enthusiast
Roediger — who goes by Roddy Roediger — is a psychology professor at Washington University in St. Louis, the head of the school’s Memory Lab, and a memory obsessive who’s been studying how and why people remember things for four decades.
About 20 years ago, Roediger was running an experiment on how images help people remember. He separated his subjects into three groups and asked each group to try to memorize 60 pictures. The first group just studied the pictures for 20 minutes. The second studied them for most of that time, but was asked to recall the pictures once during the session. But Roediger tested the third group on the pictures three times over the 20 minutes.
“Seven minutes, they recalled what they could, little break, seven more minutes, take their sheet away, do it again,” he says. “Another seven minutes, [they were] bored out of their minds. They thought, ‘This is awful.'”
But when Roediger tested the three groups on the pictures a week later, there were huge differences in how much they each remembered. The first group, which had just studied the whole time, remembered 16 of the 60 pictures. The second group did a little better. But the third group, the ones he had driven crazy by testing them over and over, did great. They remembered 32 pictures — twice as many as the first group.
This phenomenon — testing yourself on an idea or concept to help you remember it — is called the “testing effect” or “retrieval practice.” People have known about the idea for centuries. Sir Francis Bacon mentioned it, as did the psychologist William James. In 350 BCE, Aristotle wrote that “exercise in repeatedly recalling a thing strengthens the memory.”
But the testing effect had been mostly overlooked in recent years.
“What psychologists interested in learning and memory have always emphasized is the acquisition part. The taking [information] in and getting it into memory,” Roediger says.
Laypeople — and even experts — tend to think of human memory as a box to be packed with information.
“What people neglected and didn’t think about was the getting it out part,” Roediger says. “We don’t get information into memory just to have it sit there. We get it in to be able to use it later. … And the actual act of retrieving the information over and over, that’s what makes it retrievable when you need it.”
Why does retrieval, or quizzing, slow forgetting and help us remember?
“It’s a good question, and we don’t know the answer to it,” says Roediger’s colleague Mark McDaniel.
One theory is that the act of retrieving information from the vastness of our memory systems poses a challenge to the brain, and each retrieval practices that act, in effect greasing the wheels of memory.
Another theory is that information goes into our brains attached to context. The texture of the book page that we flip as we read; the hum of the air conditioner in the background; the taste of the chips we’re snacking on as we study: these all become part of a stored memory.
“Memory is dynamic, and it keeps changing,” says McDaniel. “And retrieval helps it change.”
Every time a memory is retrieved, it becomes connected to new sensations and contexts.
“The more things you have it connected to, the easier it is to pull it out, because you have lots of different ideas that can lead you to that particular material,” McDaniel says. “And the things you retrieve get more accessible later on, and the things you don’t retrieve get pushed into the background and become harder to retrieve next time.”
Coming across the research from Roediger’s lab completely changed how Georgia medical student Michael Young thought about studying.
Roediger and Young struck up a correspondence. They exchanged lengthy messages.
“I’m sure I bugged him so much,” Young says. “I was emailing him constantly.”
The first thing Roediger told Michael: Stop rereading. Instead, start testing yourself. Read a chapter, look up from your book, try to recall what information you just took in. Make up little quizzes for yourself.
That kind of studying was harder for Young. He says it felt uncomfortable, inefficient. It made his head hurt.
“This is a difficult way to study,” admits Mark McDaniel. “I think most people want learning to be easy and effortless. They want a magic bullet for it. And learning is not easy and effortless. It takes work, and it takes effort and time and dedication.”
But the more Young studied using retrieval, the more things came together for him. His grades improved. Soon, he was making A’s. And he’s become something of a legendary tutor around Georgia Regents, spreading the gospel of study skills and the testing effect.
Sydney Baranovitz came to Young for tutoring after failing her first neuroanatomy test, and she credits him with saving her medical career. “I didn’t know how to study,” she says. “I had the ability; I just didn’t know what to focus on. The issue with learning is, no one ever sits down and teaches you [how to study].”
Mark McDaniel agrees. “One of the gaps or problems in the educational system is that no one ever helps a student figure out how to learn, and yet that’s the primary challenge a student is faced with. You’ve got to assist them with how to do that. And that’s where I think we’re failing somewhat.”
More tests, not fewer
It’s not just that many students are never taught how to study. It’s also that many classes, especially in higher education, are set up to encourage bad study habits.
I met Andrew Sobel in his office on the campus of Washington University in St. Louis. Sobel is a professor of international studies, and he used to teach a freshman introduction to political science class. He structured it in the traditional way, with daily lectures, a midterm exam and a final.
Then he heard Roddy Roediger give a presentation on the testing effect, and Sobel realized that his students were studying in exactly the wrong way, by rereading their notes the night before his two exams.
A vastly better model, Sobel thought, would be one where he essentially forced his students to retrieve knowledge over and over again throughout the course.
So, every semester, instead of two exams, he started giving his students nine quizzes. All these little tests would count for a grade, but they would also, Sobel hoped, be a tool for learning.
At first, Sobel says, his students hated the quizzes. But he was shocked when he realized that by the end of the semester, his students were writing answers to his questions that were comparable to those of his upper division students.
“That had never happened before,” Sobel says. “And so the only thing that can explain that, the only thing that varied in there was the testing structure.”
Sobel has tried to talk to his colleagues about the results he was seeing with quizzing, but he says most of them aren’t interested in switching from a few exams to multiple quizzes.
“University faculty are considered very smart, but are also very conservative,” he says. “We don’t like to change our ways.”
“I’ve always said there was kind of a conspiracy between students and faculty,” says Roddy Roediger. “Faculty hate making up and grading tests. Students hate taking them. So we pretend they’re not very important, and we don’t give them. … [Our lab] is arguing for more testing, not less — not standardized tests, but tests that help kids learn.”
You can read more about Roediger’s work in his new book, Make It Stick: The Science of Successful Learning. | <urn:uuid:e6b43f27-b757-4bf5-b003-eb7d9138df48> | CC-MAIN-2015-35 | http://www.americanradioworks.org/segments/learning-to-love-tests/ | s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440644065341.3/warc/CC-MAIN-20150827025425-00046-ip-10-171-96-226.ec2.internal.warc.gz | en | 0.97781 | 2,825 | 2.765625 | 3 |
Authored by Julia Anne Matheson and Anna Balishina Naydonov
Do you know that the terms "aspirin," "escalator," "thermos," and "zipper" were once protectable trademarks? And do you know that FRISBEE is yet another mark that may soon join the list of trademarks that have become generic? What causes protectable marks to fall into the public domain? To a large degree, FRISBEE and other similarly-situated marks are victims of their own success. In each case, these products have been so novel and so successful that consumers have come to identify the thing itself by the brand name (is escalator a brand of moving stairs, or moving stairs themselves?). In some cases the trademark owner has failed to provide the public with an easy generic name for the product itself -- a factor that all but ensures misuse of the brand. And a lack of adequate enforcement on the part of trademark owners can accelerate the process and make genericness irreversible.
The Spectrum of Distinctiveness
It is often said that not all trademarks are created equal. Trademark law in the United States recognizes that some marks are stronger than others and are, therefore, worthy of greater protection. The degree of strength is measured based on where a particular mark falls along the spectrum of distinctiveness. At one end of the spectrum are fanciful and arbitrary marks that enjoy the highest degree of protection because they are immediately perceived by consumers as source-identifiers. Fanciful marks are neologisms created to identify particular goods or services, the most famous examples being EXXON and KODAK. Arbitrary marks are terms that have common meaning in the language but are used in an arbitrary manner on the goods or services—for example, the mark APPLE for computers, or CAMEL for cigarettes. Suggestive marks, as one court noted, "shed some light on the characteristics of the goods, but so applied they involve an element of incongruity" and require imagination on the part of consumers to connect the name with particular features of the goods or services. Examples of suggestive marks include IVORY for soap or SPARKLE for fine jewelry. Fanciful, arbitrary, and suggestive marks are all immediately protectable. Further along the spectrum are descriptive marks, which do just that—they describe certain characteristics of the goods or services. Examples of descriptive marks are XPRESSO for a coffee shop or E-COUTURE for an online designer clothing store. Descriptive marks are only protectable upon a showing of acquired secondary meaning—namely, that consumers have come to associate the mark with the source of the product rather than with the product itself or a particular feature of the product. At the other end of the spectrum are generic terms, which serve as common terms identifying the goods or services in connection with which they are used (such as the word "tea" for a beverage made of tea leaves) and are unprotectable under any circumstances -- even if they have acquired secondary meaning.
Interestingly, even the strongest fanciful mark can eventually become generic if the rights in the mark are not properly enforced. An example that comes to mind is the mark XEROX that, at one point, was actively used by the public to identify photocopies and the process of making photocopies. It took titanic efforts on the part of attorneys and marketing professionals to re-educate U.S. consumers about the trademark significance of the term XEROX and to resurrect the mark's distinctiveness. BANDAID has suffered similarly. Other marks have not been so lucky and have inevitably fallen into the public domain . . . Will the mark FRISBEE be another victim of genericness?
FRISBEE: Not a Toy War
Toy maker Wham-O, Inc. ("Wham-O") currently owns a federal registration for the mark FRISBEE for "toy flying saucers for toss games" that dates back to 1959. Since the 1960s, Wham-O has also been the owner of the registrations for the marks SLIP'N SLIDE for water slides and HULA HOOP for plastic hoops. A competing toy manufacturer, Manley Toys, Ltd. ("Manley"), with whom Wham-O has been engaged in bitter legal battles for a number of years, recently filed a petition to cancel the mark FRISBEE on the ground that the mark has become a generic term for identifying flying disks.1 On the same day, Manley also filed petitions to cancel the marks SLIP'N SLIDE2 and HULA HOOP3 on genericness grounds.
In November 2008, Wham-O brought a declaratory judgment action in the United States District Court for the Northern District of California seeking a court ruling that its federally registered marks FRISBEE, SLIP'N SLIDE, and HULA HOOP have not become generic.4 The complaint alleges that Wham-O has exclusively used the marks for over forty years and that "[t]oys sold under the marks are among the most popular ever sold, with sales in the hundreds of millions of units."5 Wham-O accuses Manley of engaging in "a wide-range scheme to destroy Wham-O's business and the enormous goodwill associated with many of its famous brands."6 The complaint also provides the history of Wham-O and Manley Toys' rocky relationship, including an award of $6 million in actual and exemplary damages for the intentional infringement and dilution of Wham-O's SLIP'N SLIDE design marks and a permanent injunction against the use of Wham-O's federally-registered FRISBEE and SUPERBALL marks by Manley.7
On August 13, 2009, the court threw out Wham-O's declaratory judgment action on the grounds of lack of federal subject matter jurisdiction and failure to present a justiciable case or controversy.8 Specifically, the court noted that it lacked original jurisdiction over the case because "the courts do not have 'jurisdiction under the Declaratory Judgment Act to determine the validity of [a] trademark where there is no issue of infringement.'"9 The court went on to state that there was no justiciable case or controversy before it because "[t]here is no evidence of a threat of litigation or liability hanging over Wham-O as a result of any activity on Defendants' part."10 Dismissing the case, the court held, would "preserve the status quo" because Wham-O is allowed to continue to use the marks and Manley is restrained from doing so until the TTAB renders a decision in the cancellation proceeding before it.11
Wham-O has appealed the decision of the district court to the United States Court of Appeals for the Ninth Circuit. And unless the appellate court disagrees with the decision of the district court, the TTAB will get to determine whether the marks FRISBEE, SLIP'N SLIDE, and HULA HOOP are free for competitors and the public to use as generic identifiers for flying disks, water slides, and plastic hoop toys.
At least with respect to the FRISBEE mark, Wham-O is facing an uphill battle in seeking to keep its trademark alive. Flying disks sold under the FRISBEE mark have enjoyed immense success. The products have been so successful that competitors and, most importantly, the public have started using the mark FRISBEE as a common name for flying disks. The opening sentence in a Wikipedia article about "Flying Disks" states: "Flying disks (commonly called Frisbees) are disk-shaped objects . . ." This sentence alone exemplifies that the FRISBEE mark is in a lot of trouble. The word "commonly" and the term "Frisbees"—a trademark used in a plural form with no ® sign next to it or even a single mention of Wham-O—sound like a public verdict of genericness.
So what went wrong for the FRISBEE mark? Numerous legal actions against Manley show that Wham-O did enforce its rights at least against the most egregious infringers. Sometimes, however, the inevitable path to genericness starts on a much smaller scale: with college students on campuses across the country using the word FRISBEE as a common term to identify flying disks that they like to toss around or with online articles or blog posts using the mark FRISBEE as a generic identifier for certain types of toys. And, as the law of nature dictates, eventually the quantity of improper use transforms into quality. Everyday use of a registered mark as a generic term by the public can become so widespread and devastating to the trademark's distinctiveness that any enforcement activities start looking like an attempt to fight a fire with a squirt gun.
Keep a Tight Grip on Your Mark
The Wham-O case suggests that even though a trademark owner is not under an obligation to prosecute every de minimis infringement or police every instance of misuse of its mark, sometimes going only after the most notorious infringers is not enough. Educating the public on how to properly use your registered trademark is equally important. The controversy around the FRISBEE mark is a good reminder for all trademark owners that correctly using the mark in advertising and promotional materials as well as educating the public on what use is improper can help avoid problems in the future.
Among the basic rules for proper usage of trademarks promulgated by the International Trademark Association ("INTA") are the following:
- Make sure that the mark stands out from the rest of the text by capitalizing the first letter (e.g., Exxon) or using all uppercase letters (e.g., COCA-COLA);
- Remember that trademarks are adjectives and should always be followed by a generic term (e.g., "APPLE computers");
- Never pluralize the mark or use it in a possessive form (e.g., "two Frisbees" or "Frisbee's color" are incorrect usages of the mark);
- Never use the mark as a noun (e.g., "use a KLEENEX tissue" instead of "use a Kleenex");
- Never use the mark as a verb (e.g., "make photocopies of the book" instead of "Xerox the book");
- If a mark is improperly used in third party materials (such as media articles), promptly ask the author to correct the improper use.12
This might seem like a lot of rules to keep in mind and consistently follow, but, as a matter of practice (and finances), it is far easier to prevent a mark from becoming generic than to try to resurrect a mark that has already fallen victim to genericness.
Will the FRISBEE mark rise from the ashes like a phoenix or will toy manufacturers across the country gain the right to use the word freely as applied to their own Frisbees?
To be continued.
1 Manley Toys, Ltd. v. Wham-O, Inc., Cancellation No. 92049734 (petition for cancellation filed on July 30, 2008).
2 Manley Toys, Ltd. v. Wham-O, Inc., Cancellation No. 92049646 (petition for cancellation filed on July 30, 2008).
3 Manley Toys, Ltd. v. Wham-O, Inc., Cancellation No. 92049760 (petition for cancellation filed on July 30, 2008).
4 Wham-O, Inc. v. Manley Toys, Ltd., CV 08-07830 (C.D. Cal. Nov. 25, 2008).
5 Id. at *1.
8 Wham-O, Inc. v. Manley Toys, Ltd., No. CV 08-07830 CBM (SSx), Order granting defendants' motion to dismiss, denying defendant's motions to strike, and denying defendants' requests for judicial notice (C.D. Cal. August 13, 2009).
9 Id. at *6 (citing Homemakers, Inc. v. Chicago Home for the Friendless, 169 USPQ 262, 263 (7th Cir. 1971) (per curiam)).
10 Id. at *7.
11 Id. at *8.
12 http://www.inta.org/index.php?option=com_content&task=view&id=108&Itemid=129&getcontent=1 (link no longer active as of 5/1/2012).
Welcome to Research Highlights 2008, the annual report of the World Bank’s research department, the Development Research Group (DECRG) in the Development Economics Senior Vice Presidency.
What a year! The food, fuel, and financial crises of 2008 naturally dominated attention, including the Bank’s research department. Our researchers have responded in assessing the causes of the crises, the likely impacts on poverty and human development, and appropriate policy responses.
The obvious first step was to take stock of what we know from past work. In fact past crises have been much studied by DECRG’s researchers, in many hundreds of papers going back to 1990. As the world enters what is clearly a truly major financial crisis, it is of interest to look at some of the main lessons that can be drawn from our past research.
The existence of financial crises does not change our assessment that, on balance, financial development and globalization are good for poverty reduction in the longer term. However, this positive long-run relationship can coexist with a negative short-run relationship through financial fragility. This can reflect fundamental distortions that build up for a long time, largely hidden from view, before a macro shock reveals the underlying vulnerabilities. But financial crises can also strike economies with relatively sound institutions and generally good policies.
Arguably, greater openness in areas such as trade and migration helps countries deal with domestic shocks, but may well increase vulnerability to external shocks. Globalization has probably facilitated contagion of the 2008 financial crisis, although some economies and some people are likely to be more vulnerable than others.
Even an economy-wide crisis can have diverse, heterogeneous impacts that warn against simple generalizations, and also point to the need for a flexible social policy response. It should not be presumed that the poorest will be hit hardest; indeed, some of the same (undesirable) factors that have kept a significant share of the developing world’s population in deep and persistent poverty—including a lack of connectivity to markets, and consequent lack of opportunity for economic advancement—will protect them to some degree from the crisis. However, significant welfare impacts can be expected, notably in countries, and regions within countries, that have benefited from market-oriented development.
Poverty is very likely to be higher as a result of the financial crisis, although by how much will depend on the extent of the aggregate economic contraction and the rise in inequality (if any). DECRG has been providing regular assessments of the likely poverty impacts; our latest estimates suggest that in 2009 alone the crisis will trap an extra 40-50 million people in extreme poverty. However, an aggregate poverty measure cannot tell the whole story. There are likely to be both gainers and losers at any level of living, including among the poor. And there may well be adverse impacts on important non-income dimensions of welfare, including the nutrition and schooling of children.
Even a short-lived crisis can have longer-term impacts for some of those affected, most notably through the nutrition and schooling of children in poor families. And deficient crisis responses can lay the seeds of longer-term vulnerability to crises. The extent to which these adverse outcomes materialize will depend in part on the policies adopted by developing-country governments. The record of past policy responses to crises contains both successes and failures.
The lessons for policy from our past research on crises span financial sector policies, macroeconomic stabilization, external trade policies, education and health care, and social protection. A number of specific lessons emerge, but there is only space here to point to some generic lessons. The paper on “Lessons from World Bank Research on Financial Crises” goes into more detail on those lessons.1
The generic lessons include the importance of an early response. The fiscal cost of interventions can be quite large, but the cost of inaction can be even larger. Other generic lessons include the importance of understanding incentives in the design of policy responses, the importance of spending composition in designing a fiscal stimulus or adjustment program, and the importance of sound information on what is happening on the ground as the crisis unfolds.
However, if there is one lesson that stands out it is that the short-term responses to a crisis cannot ignore longer-term implications for development in all its dimensions. The macroeconomic stabilization response must be consistent with restoring the growth process and (hence) the pace of poverty reduction. Financial sector policies need to balance (understandable) concerns about the fragility of the banking system with the need for sound longer-term financial institutions. The paper by Asli Demirgüç-Kunt and Luis Servén, “Are All the Sacred Cows Dead? Implications of the Financial Crisis for Macro and Financial Policies,” focuses on financial sector policies, where the challenge ahead is to align private incentives with public interest without taxing or subsidizing private risk-taking.2
Some broadly similar issues of information and incentives underlie our discussions of social policy responses, which must provide rapid income support to those in most need—giving highest priority to the poorest among those affected—while preserving the key physical and human assets of poor people and their communities. Difficult choices will be faced in addressing the (inevitable) tradeoffs between rapid crisis response and these longer-term development goals.
Social protection will figure prominently in the crisis responses of developing countries. Many governments and citizens are asking what can be done to help protect the poorest. There is a compelling case for believing that the composition of public spending and taxation should change in favor of the poor, although the evidence on past performance is not encouraging; too often it is spending on the non-poor that is protected. A recently popular class of transfer programs requires the children of the recipient family to demonstrate adequate school attendance (and health care in some versions).
Our research has provided evidence from impact evaluations that such Conditional Cash Transfer (CCT) programs bring non-negligible benefits to poor households—in terms of both current incomes and future incomes, through higher investments in child schooling and health care. The 2008 Policy Research Report, "Conditional Cash Transfers: Reducing Present and Future Poverty," documents the evidence from past evaluative research on CCTs, and points to important lessons for the ongoing efforts to introduce and scale up these programs, as part of the efforts of governments to respond to the crisis.3
Our past research has covered other important policies for social protection, including workfare programs. In 2008 we completed an overview of our past research on the range of social protection programs that can help protect the poor in a crisis, “Bailing Out the World’s Poorest,” pointing to both successes and failures, and emphasizing the need for care in thinking about incentives in policy design, the role played by political economy, and the importance of flexibility in adapting to the settings faced, informed by rigorous monitoring and evaluation.4
Having taken stock of what we have learned, we have proceeded to start filling some of the obvious gaps in our knowledge about the crises that emerged so visibly in 2008. By re-deploying the department’s own resources, we rapidly instigated a series of ten or so small research projects in 2008, as well as a number of other projects financed by external resources.
Among the topics being studied are the following: international capital flows and portfolio allocations, including studying the behaviors of institutional investors; international transmission mechanisms of the sub-prime crisis, including through trade and migration; understanding stock market reactions in times of crises; and assessing the likely impacts on poverty and human development, including impacts on child welfare and schooling.
There are huge information challenges in a crisis—even to know what exactly is happening on the ground in a timely way, let alone to figure out what is the best policy response in specific circumstances. Since data are at the core of almost everything DECRG’s researchers do, we have also been active in exploring new ways of monitoring what is happening and evaluating interventions. New high-frequency data sources are being explored actively—data sources that have only become possible with advances in information technology, including (of course) the internet.
Just as the crisis brings new economic opportunities in its wake, with gainers as well as losers, it creates research opportunities. For example, the crisis caused havoc in one of our research projects on the role of management skill in firm performance in India, but the researchers concerned were also able to seize the opportunity to learn about the potential role of better management in helping firms protect themselves from the crisis.
Crisis responses cannot ignore longer-term impacts. The faster the developing world gets back on track toward a sustainable path of poverty reduction, the better. Nor can we ignore research on our longer-term development goals at a time of crisis. Research continues on our core areas of long-run growth, distributional change and poverty reduction, climate change, pollution, energy, finance for development, private sector development, trade reform, migration, governance and delivering better schooling and health care services.
Our research also continued on measuring and monitoring progress against poverty in the world. In 2008 we completed a major update of our estimates of the extent of absolute poverty in the developing world, and we found that the incidence of poverty was greater than our past estimates had suggested, although we still find that there has been substantial long-term progress against extreme poverty in the developing world as a whole, though certainly not in all regions.5
The lack of progress in Sub-Saharan Africa over 1981-2005 is notable and worrying, though prior to the crisis there were some signs that this might be reversing. Our current expectations are that the crisis will essentially stall that progress over 2009-2010. In 2008 we also instigated a major long-term initiative to improve poverty data for Sub-Saharan Africa, thanks to substantial financial support from the Gates Foundation.
This year also saw the publication of two new books on trade distortions in agriculture in Latin America and Europe’s transition economies.6 Strikingly, these studies find that the direct taxation of export-oriented agriculture, which was so common 20 years ago, has largely vanished. In its place, we have seen greater protection of import-competing agriculture.
One of the unusual, and possibly unique, features of DECRG is the fact that we span such a wide range of development issues. Research is conducted both within and across the six teams, in collaboration with researchers in other parts of the Bank, with colleagues in universities and research institutions throughout the world, and collaborators in almost all the developing countries in which the department’s work is focused. In 2008, the department’s country-specific research spanned 45 developing countries (on top of cross-country comparative work).
Introduction to the 2008 edition of Highlights
We have greatly shortened our highlights report over past years. The full online Research Highlights 2008 report (available at http://econ.worldbank.org/research/highlights2008) provides a complete list of publications by team in calendar year 2008, which included 21 books, 161 journal articles, 69 book chapters, well over 176 working papers (that will be published in due course), and 19 new and updated datasets.
The full report also gives details on the group’s outreach efforts, which included 10 web articles, 15 web briefs, blog entries, and 9 conferences organized or co-organized by staff with other institutions. And, of course, staff gave well over 500 presentations at seminars and conferences throughout the year.
I very much hope that you enjoy reading this edition of Research Highlights. Please tell us what you think about the issues raised in these pages, and bring up topics you think need more research. In the end, it is the active interaction with development thinkers and practitioners that will continue to assure that research at the World Bank remains relevant to our shared goals of achieving inclusive and sustainable economic development.
1. Development Research Group. 2008. “Lessons from World Bank Research on Financial Crises.” Policy Research Working Paper 4779, World Bank, Washington, DC.
2. Demirgüç-Kunt, Asli, and Luis Servén. 2008. “Are All the Sacred Cows Dead? Implications of the Financial Crisis for Macro and Financial Policies.” Policy Research Working Paper 4807, World Bank, Washington, DC.
3. Fiszbein, Ariel, and Norbert Schady. 2009. Conditional Cash Transfers: Reducing Present & Future Poverty. Washington, DC: World Bank.
4. Ravallion, Martin. 2008. “Bailing Out the World’s Poorest.” Policy Research Working Paper 4763, World Bank, Washington, DC.
5. Chen, Shaohua, and Martin Ravallion. 2008. “The Developing World is Poorer than We Thought, But No Less Successful in the Fight against Poverty.” Policy Research Working Paper 4703, World Bank, Washington, DC.
6. Anderson, Kym, and Alberto Valdes. 2008. Distortions to Agricultural Incentives in Latin America. Washington, DC: World Bank; Anderson, Kym, and Johan Swinnen. 2008. Distortions to Agricultural Incentives in Europe's Transition Economies. Washington, DC: World Bank. | <urn:uuid:303ab3c1-9462-4f2e-bcf5-550713257e06> | CC-MAIN-2015-35 | http://econ.worldbank.org/WBSITE/EXTERNAL/EXTDEC/EXTRESEARCH/0,,contentMDK:22103931~pagePK:64165401~piPK:64165026~theSitePK:469382~isCURL:Y~isCURL:Y~isCURL:Y~isCURL:Y~isCURL:Y~isCURL:Y,00.html | s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440645396463.95/warc/CC-MAIN-20150827031636-00161-ip-10-171-96-226.ec2.internal.warc.gz | en | 0.937839 | 2,722 | 2.578125 | 3 |
Genetically Modified Organisms (GMOs) are some of the world’s most controversial technologies. Transatlantic disputes arising from the sharp regulatory differences between the two major frameworks — those of the United States and the European Union — have affected research, investment and planting decisions worldwide.
Factors enabling the global expansion of GMOs include heavy investment, favourable international prices and the expanding role of transnational companies. These conditions, however, do not appear to apply to Spain, where a lack of information, transparency and independent studies, together with weak social mobilization, has left the field open to constant lobbying by GM companies.
The EU’s legal framework
The GMO situation in Spain cannot be understood without first grasping the EU’s legal framework for their authorization. Considered restrictive by many, the procedure has nonetheless enabled their rapid spread. It is based on Directive 2001/18/EC, transposed into Spanish legislation by Law 9/2003, and two Regulations: 1829/2003 and 1830/2003.
Given the diversity of national, regional, and local conditions under which European farmers work, the European Commission considers that coexistence measures to avoid the unintended presence of GMOs in conventional and organic crops should be developed and implemented by the Member States, and Article 26a of Directive 2001/18/EC entitles them to do so.
However, in 2012 the EU Court of Justice ruled (Case C-36/11) that Member States cannot subject the cultivation of GMOs to an additional national authorization procedure if already authorized via relevant EU legislation. The ruling follows previous case law of the Court that sanctioned France for inappropriate transposition of GM EU rules (Cases C-419/03 and C-121/07), thus setting clear limits to Article 26a.
Opt-outs and concerns
Presently, several Member States have called for a clause that would allow them to opt out of GM cultivation. Some of these Member States have already banned the cultivation of GMOs on the basis of a safeguard clause (Article 23 of Directive 2001/18/EC) or emergency measures intended to address new information about risks emerging after an authorization has been granted (Article 34 of Regulation 1829/2003). In several cases, however, the European Food Safety Authority (EFSA) did not consider the bans to be scientifically justified, as was initially the case for France. Other countries, such as Austria, Poland, Hungary, Greece, Luxembourg and Germany, have been successful in their bans.
Interestingly, in 2009, 12 of the 21 members of the EFSA GMO panel had a conflict of interest as defined by the Organisation for Economic Co-operation and Development: their involvement with the biotech industry positioned them to potentially exploit their official capacity for corporate or personal benefit. Since then, only one conflicted member has left the panel.
Despite claims to the contrary, another conflict of interest appears to have driven the retraction of an independent scientific study by a French research group led by Gilles-Éric Séralini, which described harmful effects in rats fed with Monsanto’s GM maize. The retraction from the journal Food and Chemical Toxicology came after Richard Goodman, who had worked for Monsanto from 1997 to 2004, joined the journal’s Editorial Board.
The article was retracted on the argument that the small sample size and the rat strain selected were sufficient reasons to question the finality of the conclusions. However, as the European Network of Scientists countered, among other arguments, inconclusiveness of research results is not among the grounds for retraction in the guidelines for scientific publishing set out by the Committee on Publication Ethics, of which the journal Food and Chemical Toxicology is a member.
France banned Monsanto’s corn line MON810 based on this study and previous scientific findings, but the ban was initially challenged by the EFSA. This line, like other Bt maize, is considered insect resistant because it is engineered to produce a toxin from Bacillus thuringiensis, a bacterium pathogenic to insects of the order Lepidoptera, including the European Corn Borer. Based on documentation submitted by France, the EFSA affirmed in 2012 that “…there [was] no specific scientific evidence, in terms of risk to human and animal health or the environment, that would support the notification of an emergency measure [...] and invalidate its previous risk assessments of maize MON810”. Despite the EFSA’s and the State Council’s rejection of the ban, the French government decided to maintain it.
Understanding the case of Spain
Despite scientific findings and increasing concerns, and in great contrast with France’s position, Spain has had the highest adoption rate of Bt maize in the EU since the crop was first introduced there in 1998. In 2012, over 120 thousand hectares of Bt maize were cultivated — 19.5 percent more than the previous year — representing 90 percent of GM crops in the EU.
So why do countries that share a common European legal framework, as well as similar climate and soil conditions, have diametrically opposed views on this issue?
A survey was conducted for the European Commission in 2005 in three of Spain’s leading Bt maize-growing provinces. While results do report higher yields, the study shows statistical significance in only one province, and all Bt maize produced was actually sold for feed manufacturing.
The survey also found that 30 percent of farmers still applied insecticides even when the treatment was ineffective. Although the study praises this percentage as an achievement, data must be contrasted with recent findings on the use of pesticides: in the US, the fifteen-year period following the 1996 introduction of Roundup Ready crops saw herbicide use rise by about seven percent. This percentage, however, would be much higher if “insecticides applied” counted the Bt produced by transgenic plants, as many entomologists propose.
Interestingly, when asked about their reasons for adopting Bt maize, farmers stated “lowering the risk of maize borer damage”, “obtaining higher yields”, and “better quality of the harvest”, though there is no scientific basis for believing that the technology provides any of those results. Moreover, factors including soil type, irrigation intensity, weather conditions or ecological integrity were not analyzed in the study, all of which have direct impact on the three responses provided.
The roots of Spain’s GM openness
In 1998, the Spanish government authorized two varieties of Bt maize 176 for the first time, entrusting the biomonitoring process to the same companies that had created those varieties. The change of government in 2004, from right-wing to more centre-oriented, made it possible for the protests coming from civil society to be heard, and a representative from the environmental sector was admitted to the National Commission on Biosafety.
During European Council meetings, the Spanish government shifted from “pro-GMO” responses to “abstention” in most cases. Interestingly, though, Member States’ individual votes are not accessible to the public. Fortunately, votes are recorded by NGOs, revealing the inconsistency between the country’s neutral position in the EU and its policies at the national level: the same government approved 14 new varieties of maize MON810 in July 2005, bringing the total number to 40 at that time. Although Bt 176 varieties were removed from the country’s list of authorized GMOs in 2005, today the total number of approved GM commercial varieties is 116.
The difference between the GM maize cultivation rates of France and Spain stems, on the one hand, from weak mobilization of social organizations and insufficient public debate in Spain and, on the other, from the Spanish government’s support of GM companies.
The weakness of social mobilization compared with the French case may originate in the different sociological composition and history of the French and Spanish countrysides. Unlike Spain, France experienced a phenomenon of neo-ruralisation after May 1968, which led to the formation of strong agricultural unions devoted to protecting the rights of small-scale farmers (e.g., Confédération Paysanne).
At the same time, the Spanish rural population is not as receptive to alternatives, such as organic agriculture, as the French. Although a number of regions have declared themselves GMO-free zones, which highlights the disconnect between national government and local populations, the productivist approach still remains unchallenged, and this includes GMOs.
Furthermore, cables released by WikiLeaks revealed that, according to Monsanto, the French government would have contravened WTO regulations by banning MON810, and that the company would seek compensation. The Spanish government possibly saw this as a retaliatory threat.
In addition, these cables showed that US diplomats were working directly for GM companies like Monsanto to help ensure Spain’s continued adoption of GM crops. “In response to recent urgent requests by [Spanish rural affairs ministry] Josep Puxeu and Monsanto, post requests renewed US government support of Spain’s science-based agricultural biotechnology position through high-level US government intervention.”
It also emerged that Spain and the US worked closely to persuade the EU not to strengthen biotechnology laws. In one cable, the embassy in Madrid writes: “If Spain falls, the rest of Europe will follow.”
Learning from Spain
Lack of transparent requirements and procedures for GMO adoption and development remains an issue both at national and European levels. While registers of approved GMOs at both levels remain, in principle, open to the public, information appears limited and highly technical.
The lack of information provided to farmers about long-term effects on health and the environment is no coincidence either. The conclusions the above-mentioned survey presents as positive rest on an implicit acceptance that the information provided to farmers remains insufficient, at least in the areas it covers.
There is also an overall lack of independent scientific studies. Instead of being undertaken by the CSIC (Spain’s leading public scientific research institute), safety studies are first carried out by the companies themselves, based on 90-day tests that are clearly insufficient compared with the two-year period required for pharmaceutical products. Governmental or European agencies might conduct further research, but these often pick scientific evidence at their convenience and include members with conflicts of interest.
Furthermore, weak social mobilization and the shutting-out of NGO and critical civil society voices have played a major role in the rapid spread of GMOs. Additionally, Spain has been made deliberately dependent on European agricultural subsidies, leaving local communities particularly vulnerable to supranational decisions and with very little or no voice in those fora.
The connivance of political powers with the private sector, already known before the WikiLeaks revelations, was further evidenced by sudden changes of policy by ministers of agriculture before and after their election. The politicization of the issue cannot be avoided, but information, transparency and social participation must be fostered in order to prevent misinformed decisions by farmers, private benefit-oriented political decisions and the silencing of social sectors that are key to this process.
By Will Schroeder
How do you answer the question “Do you speak American?” Many people might answer that yes, they speak English. But English is not the only language spoken in this country. Twenty-eight million people speak Spanish, and more than 2.8 million of them do not speak English at all. Among Americans who speak English at home--82 percent--there is variation in pronunciation, vocabulary, and grammar, from Maine watermen and Louisiana Cajuns to southern Californians.
A new documentary, Do You Speak American?, explores the country’s linguistic diversity. “Everyone has something to say about language,” says Susan Mills, the film’s executive producer.
The three-part documentary and an accompanying DVD, both supported by NEH, will debut this fall. An educational outreach program will bring the study of language to middle- and secondary-school students. Video images on a Web-enabled DVD are linked to a website, so that users will be able to access continually updated information. The web component is intended to prevent the static versions of the project--video images and the companion book to the series, written by Robert MacNeil--from becoming dated too quickly. Joan Friedenberg, the developer and producer of the film’s digital ancillaries, says that the DVD will allow student viewers to explore “links between language and culture in a way that today’s television--and today’s Internet--by themselves cannot accomplish.”
Do You Speak American? is written and hosted by MacNeil, and directed, produced, and co-written by Bill Cran. The documentary takes on the stuff of sociolinguistics--the discipline that studies less the mechanics of language than the social ramifications of language--and asks questions about power, education, and access to resources.
There is no one agreed-upon best way to speak American. Many factors contribute to the way a person speaks: regional origin or affinity, ethnicity, social or economic class, level of education. The components of linguistic identity do not always match up neatly. Some people switch from one dialect to another. “You do have to be bilingual in this country,” says Los Angeles disc jockey Steve Harvey, a speaker of Black English. For him and many other Americans, speaking one way at home and another at work is part of living on the ethnic margins of American society.
Others put on an accent for entertainment purposes. Country singer Cody James comes from Oregon, but sings with a Southern accent, he says, not just to sell records, but because “it’s real comfortable.”
“Talkin’ country has now become the informal way to speak American,” says MacNeil. Inland Southern English is the largest dialect group in America and still growing, says John Fought, a linguist who studies the “New South phenomenon,” the rising vogue for Southern ways and country talk. It is spoken from the Piedmont east of the Appalachians to the Ohio River watershed, across the Mississippi, through Texas into the Southwest, and throughout the Sunbelt. Along with pronunciation and vocabulary, double modals--“Zack had ought to give your hat back” and “I might could visit you”--and constructions such as “fixin’ to” are among its characteristics.
Do You Speak American? features audio clips of presidential inaugurations, illustrating the salty Texan expressions of President Lyndon Johnson, President Jimmy Carter’s Georgian inflections, and President Bill Clinton’s Arkansas twang--all three spoke forms of Inland Southern English publicly, even though the dialect has long been considered non-standard. Southern comedian Jeff Foxworthy makes his living by making Southerners and others laugh at the way Southerners talk. His style of comedy deals head-on with the prejudices people hold about speakers of Southern dialects. When asked by MacNeil, “Do you think Northern people think Southerners are stupid because of the way they talk?” Foxworthy replies, “Yes, I think so, and I think Southerners really don’t care that Northern people think that.”
“We like our food spicy and we like our language spicy, too,” says Texan journalist Molly Ivins in an on-camera interview. She believes that language in the Lone Star state has a “lunatic quality of exaggeration” that encourages people to invent new metaphors all the time: such as the Texanisms “I’m happy enough to be twins” and “meaner than a skillet full of rattlesnakes.”
“There was a popular intellectual theory about twenty years ago that the whole country is talking more and more alike, with the interstate highways and Howard Johnson restaurants,” says Ivins. “But what’s amazing is not the fragility of those cultures, but their hardihood. I’m amazed by the tenacity with which custom and dialect endures.”
Dialects are the seedbed in which language grows and expands. Texan English, which has incorporated elements of Plantation Southern, Appalachian, German, Polish, and Czech into its distinctive sound and vocabulary, has coined terms such as “wrangler,” “maverick,” “rustler,” and “chuck wagon,” as well as the phrases “stiff upper lip” and “hot under the collar”--all of which have been adopted into standard English. “Bronco,” “stampede,” “corral,” “lasso,” and “vamoose,” Texan terms that bear the influence of Mexican Spanish, have also made their way across the country and into popular speech.
“When people talk about how they speak, they are talking about who they are, and what it means to live where they live,” says linguist Barbara Johnstone. She studies Pittsburghese, the dialect of her native city, and its characteristic features such as “yins”--the plural form of “you”--which she says harks back to Scottish and Irish immigrants and is still in use in Belfast today.
“People’s fierce pride in their own speech is a measure of the importance of place,” MacNeil says.
Dozens of varieties of English flourish regardless of considerations of formality and propriety, yet there still exists a debate among scholars, literati, educators, and politicians about whose English deserves to be represented as correct. American English may be spoken around the globe, but it is not actually the official language of the United States: no law has ever been passed to declare it as such.
The documentary identifies the linguistic camps that stake out academic, cultural, or social ground in deciding how to approach language in the absence of a legal standard. Broadly defined, there are two sides: the descriptivists and the prescriptivists. Descriptivists record language but make no value judgments about it, in an attempt to document how people actually speak. Prescriptivists, on the other hand, such as some editors and English teachers, consider deviations from the norm substandard.
Prescriptivists believe that language should not change, and decry popular influences on English in America. William Safire, who writes the column “On Language” for the New York Times magazine, scolds public figures for contorting the meaning of words, and has rebuked the 2000 Census for its misuse of commas and misleading phrasing. “The United States Census 2000 says: ‘Please use a black or blue pen.’ I have a blue pen that writes with black ink; I suppose that's O.K. But I also have a black pen that writes with red ink; is that impermissible?”
Traditionally, written language has been the guardian of formality. But many English-language newspapers in America are tending more and more toward what they see as a more informal and accessible tone. One “language watchdog” MacNeil interviews is the assistant managing editor of the Columbus Dispatch, Kirk Arnott. He says the standards of correctness he maintains have to do with helping reporters avoid word misuse in the narrowest sense. Arnott makes sure that “bemused” is used according to its dictionary definition--it means “perplexed,” and not “amused”--and that “nonplussed” is used to mean “bewildered,” not “unperturbed.”
“We should be as conversational as we can be, because we should be as accessible as we can be,” Arnott says. “I certainly don’t want it to sound as if the paper were edited by a schoolmarm, but still, someone has to keep language from slipping into the abyss.”
But both sides of the debate recognize one necessary feature of human language: it does constantly change. Safire concedes that semantic meaning shifts with time and technology--in a recent column he gave the word “blog” a nod of approval, explaining that the term refers to personal web logs. But many prescriptivists wish to prevent just this sort of change, and refuse to admit neologisms. When asked about the state of American English, theater critic John Simon describes it as “unhealthy, poor, sad.”
In contrast, some descriptivist linguists see a positive change in the way the public views language. “One of the most important trends today is that we are coming to celebrate and recognize dialect differences as part of our national cultural heritage, instead of stamping them out,” says Walt Wolfram, a sociolinguist at North Carolina State University.
“In a country full of quirky accents, there seems to be one accent everyone agrees is ‘normal,’” MacNeil says in the film. “For most Americans, the Midland accent, from the Northern family of dialects, is the yardstick of the most normal or correct English.” Midland English is spoken in a zone that reaches from Ohio, Michigan, and Northern Indiana, to Wisconsin and Pennsylvania.
While the Great Lakes region dialect used to be considered the broadcast standard, it is now changing away from that, says William Labov. He and other linguists are keeping an eye on what they call the Northern Vowel Shift, a phenomenon in the industrial inland near the Great Lakes. In the urban areas of Buffalo, Chicago, Cleveland, Detroit, Flint, Gary, Syracuse, Rochester, Rockford, and Toledo, vowels are trading places. Words such as “stuck” are pronounced so that speakers of other dialects hear “stack”; and “stalk” sounds like “stock.” The shift is spreading in Milwaukee, Pittsburgh, Columbus, and Indianapolis.
Labov believes that the vowel shift is the most fundamental language revolution to occur in English in a thousand years. It has already affected thirty-four million people.
These changes suggest that American speech is not becoming more homogeneous. “This is the most surprising result of our research,” says Labov. “While local dialects of small communities may be receding, the larger regional patterns are becoming more different from each other.” He says that the dialects of New York, Philadelphia, Detroit, Chicago, Saint Louis, Dallas, and Los Angeles are more different from each other today than they were just fifty years ago.
“Movies, television, and the radio industry help spread new language,” MacNeil says.
Movies that have become cult classics, such as the 1995 Clueless, help spread California teen phrases such as the skeptical “as if,” and “whatever,” and slang terms: “flava,” “money,” and “smooth,” to mean “good,” and “random,” “heinous,” and “sucks” for “bad.”
Contemporary teenagers MacNeil interviews in Irvine, California, offer him words such as “uber,” to describe intensity, and “tight” for something good.
Teen talk is a way of establishing peer groups and moving away from the family and into a tribe of friends, says Winnie Holzman, the writer and creator of the television series My So-Called Life. “There’s almost nothing more personal than how you express yourself,” she says.
Do You Speak American? discusses white teens’ use of instant-messaging slang, which the viewer discovers is heavily influenced by Black English, or what many linguists now call African American Vernacular English. “Ima,” “das kool,” and “sup wit u” mean “I’m going to,” “that’s cool,” and “what’s up with you?” In instant messaging, the medium is text, but the register--the tone and the vocabulary associated with a particular context--is informal.
“Everything follows the streets in America,” says one member of the Athletic Mike League, a Detroit hip-hop crew. In a dialect in which “bad” sometimes means “good,” these rappers attend university and alternate between formal English and their street variety.
At the same time as they identify with one community through language, they realize they must learn another variety to prosper economically or socially. Geneva Smitherman, a sociolinguist and consultant to the documentary, calls this “linguistic push-pull.” Do You Speak American? examines the case of three African American mothers in Ann Arbor, Michigan, who brought a lawsuit in federal court in 1978 because they believed their children were being discriminated against in school for speaking African American Vernacular English. The ruling in favor of the plaintiffs in The Martin Luther King Jr. Elementary School Children v. The Michigan Board of Education et al. affirmed that the children were being denied equal access to education. “For the first time, you had a federal judge acknowledge formally that African American Vernacular English represented a significant linguistic barrier to academic achievement and success,” says John Baugh, professor of linguistics at Stanford University.
At the same time, “the more we badmouth black speech, the more we are fascinated by it,” MacNeil contends. He says that at least as far back as the turn of the twentieth century, when ragtime and jazz were gaining popularity, white America has been interested in black cultural forms. Today hip-hop, with its reliance on African American Vernacular English, influences the speech of white teens, and not just in the instant messaging chat room.
Cliff Nass, co-director of the Social Responses to Communication Technologies Project at Stanford University, creates computer technology that simulates different regional or ethnic accents. He says that many people attach linguistic expectations to a person’s appearance. If you look African American, most people assume you will speak with African American Vernacular English characteristics--and if you speak otherwise, “That mismatch can lead to mistrust.”
The documentary ends with a note on the voice-activated technology Nass is developing. Such innovations will allow drivers to navigate or dial phone calls without touching a button. If computer voice recognition becomes widely available, Americans may of necessity have to learn a standard way of speaking--otherwise their cars might not function. There may be limits to the technology, Nass says. “Do we want to have a cacophony of voices in the home? It’s not clear that people are going to want to have long conversations with their toasters or refrigerators--we have to design around that problem.” | <urn:uuid:eb01265a-556c-49ea-8e55-2909cdfa8a46> | CC-MAIN-2015-35 | http://www.neh.gov/print/12781 | s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440644065341.3/warc/CC-MAIN-20150827025425-00047-ip-10-171-96-226.ec2.internal.warc.gz | en | 0.954507 | 3,352 | 2.515625 | 3 |
Paul DePasquale, ed. Natives and Settlers Now and Then: Historical Issues and Current Perspectives on Treaties and Land Claims in Canada. Edmonton: University of Alberta Press, 2006. 220 pp. $49.94 (paper), ISBN 978-0-88864-462-6.
Greg Johnson. Sacred Claims: Repatriation and Living Tradition. Charlottesville: University of Virginia Press, 2007. xi + 208 pp. $55.00 (cloth), ISBN 978-0-8139-2661-2; $19.50 (paper), ISBN 978-0-8139-2662-9.
Reviewed by Christine Boston
Published on H-AmIndian (February, 2009)
Commissioned by Patrick G. Bottiger
Sacred Lands, Sacred Claims
Natives and Settlers Now and Then, edited by Paul DePasquale, and Sacred Claims, by Greg Johnson, address similar themes: repatriation of native lands or material remains and identities. Focusing on native land claims in Canada, the contributors to Natives and Settlers examine land claims and treaties from a primarily historic perspective, although they also delve into current repercussions of treaties. Such an approach resembles Aboriginal Land Claims in Canada: A Regional Perspective (1992), edited by Ken Coates, who dissected complex Canadian land claim issues. Complementing Natives and Settlers, Sacred Claims focuses on the Native American Graves Protection and Repatriation Act (more commonly referred to as NAGPRA) with a particular emphasis on repatriation efforts and claims by a native Hawaiian group. While there have been several books dedicated to the subject of NAGPRA, this one provides a unique angle by concentrating on Hawaiian material.
DePasquale introduces Natives and Settlers by explaining the origins of the volume, which began as a series of conferences centering on native issues and the interactions between native and European peoples. This edited collection expands on those themes by incorporating a native perspective. Such an approach allows the contributors to better evaluate the factors shaping present day relationships between natives and nonnatives in Canada. Jonathan Hart, for example, argues that both groups need a greater understanding of their past as well as mutual respect to address current misunderstandings. This collection is meant to be a source for educating the public to reach these goals.
Illustrating historical perspectives on treaties and treaty making in Canada, Natives and Settlers is a useful resource for understanding processes that have played an important part in land claims across the country. The authors draw from a range of sources including oral traditions, personal anecdotes, archival records, modern laws, original treaties, and published works to create a well-rounded and well-researched volume. This range of sources produces a blend of information that unites both the cultural and the academic, creating an enriching volume that is practical and educational.
Sharon Venne builds on these issues in her essay “Treaties Made in Good Faith." Venne explains how the British took advantage of the Cree during the original treaty making process by exploiting linguistic and cultural misunderstandings between both groups. Patricia Seed discusses some of the same issues but through an international perspective. Her essay, “Three Treaty Nations Compared: Economic and Political Consequences for Indigenous Peoples in Canada, the United States, and New Zealand,” investigates the treaty making processes between three nations and their indigenous peoples to illustrate the mechanisms put in place that caused the loss of native lands. It also explores how one group of New Zealand natives was able to restore their land rights based on shrewd treating making skills. Frank Tough and Erin McGregor explore the colonial land scrip system in “'The Rights to the Land May Be Transferred': Archival records as Colonial Text--A Narrative of Metis Scrip.” The authors rely on archival documents to trace the land claims by one Metis man, in particular, and to show how the Canadian government manufactured a system that never officially granted lands to claimants. Finally, Harold Cardinal's contribution explores themes of nation building and native identity in Canada in “Nation-Building: Reflections of a Nihiyow (Cree).” Cardinal's essay is an appropriate closing to this volume because it connects the other contributions by evaluating how the past influenced modern native identity.
Two additional sections follow Cardinal's essay--a detailed "Question and Discussion" section and a dedication to Cardinal. The "Question and Discussion" section, which is a transcription drawn from a session at the conference, clarifies many of the issues brought up in the papers presented both at the conferences and in the collection. Readers may find it a handy reference if they have any unanswered questions. The dedication to Cardinal, who passed away before the publication of the volume, is a testament to his personal accomplishments for native communities, and includes dedications from each of the volume's contributors.
While the focus of this book is historical, it can be used in various academic settings, including but not limited to political science, anthropology, native studies, and Canadian studies. The various topics explored in this book create this broad appeal; for instance, the authors discuss such diverse topics as Canadian and International law, identity, and culture. The wide academic appeal is also based on the various backgrounds of authors who contributed to the volume. Each of these authors conveys their own unique perspective as well as their disciplinary specialty. Venne is a lawyer active in representing natives in land claims, Cardinal was a political leader who dedicated his life to native issues, and the remainder of the authors are specialists of Aboriginal studies from various academic departments (English, literature, anthropology, native studies, and history).
This book pairs nicely with Johnson's Sacred Claims, a monograph based on research Johnson conducted during his doctoral studies, which focused on how NAGPRA has aided the processes that create and recreate Native American and native Hawaiian identity and culture. Johnson’s approach regarding NAGPRA fits well with other books on the topic, such as The Future of the Past: Archaeologists, Native Americans and Repatriation (2001), edited by Tamara L. Bray, and Skull Wars: Kennewick Man, Archaeology, and the Battle for Native American Identity (2000) by David Hurst Thomas and Sarah Colley. The ease in which Johnson relates NAGPRA to cultural identity makes the book unique.
Johnson’s study begins with an examination of the terms "identity" and "culture." He points to the problematic nature of viewing them as static and unchanging, and argues that in reality they are both dynamic and evolving. Johnson focuses on his interactions with one Hawaiian group, the Hui Mālama, and their struggles with NAGPRA for the repatriation of Hawaiian artifacts. He explains the mechanisms underlying the formation of NAGPRA and provides an in-depth analysis of the legislation. Johnson's book holds a lot of promise for those wishing to understand repatriation and NAGPRA. He discusses the background of the Hawaiian repatriation movement and its origins. He reiterates several times the difficult process surrounding repatriation by showing the conflicting views on repatriation from native Hawaiians, the various interpretations of NAGPRA, and the motivations of each group involved in the repatriation process. Johnson dovetails the multifactorial nature of the movement with a specific examination of various repatriation claims of native Hawaiians and Native Americans.
Johnson achieves his goal of showing the evolving nature of culture and tradition by illustrating how the meanings of words and associations with material objects change to fit the needs of people. He examines a mainland Native American claim--the Ute tribe's claim on Anasazi cultural materials--to make this point. According to Johnson, the Ute have no historical or cultural claim to the Anasazi remains that are situated on their reservation lands. As time progressed from their original placement on these lands and as the Ute culture changed, they raised a claim for the Anasazi artifacts. Johnson explains that this claim is based on a need to fulfill a loss of cultural identity and tradition, and that the Anasazi artifacts are viewed as a way of fulfilling this perceived loss. Johnson’s use of the Ute example solidifies his argument that culture and tradition are ever changing entities. He further illustrates this point throughout the book in chronicling the disputes between the various native Hawaiian groups involved in the Bishop Museum case of repatriating native Hawaiian human remains to the Hui Mālama.
Johnson attempts to balance the book by presenting a myriad of perspectives on repatriation from the native groups and claimants, museums, archaeologists, and collectors who have possession of the cultural artifacts. Johnson's role in his research, however, leaves him unable to obtain personal interviews and interactions with museum personnel, archaeologists, and collectors. He tries to make up for this by providing as much information as he can regarding their motivations and perspectives through news reports, press releases, and other sources. Furthermore, although Johnson claims that he will look at both native Hawaiian and Native American claims, he presents mainly native Hawaiian claims. When he focuses on Native American claims, he examines solely the contentious issue of the Anasazi remains, which has been well documented in Grave Injustice: The American Indian Movement and NAGPRA (2002), edited by Kathleen Fine-Dare. However, unlike other sources that concentrate on this issue from the Hopi and Navajo perspective, Johnson discusses the other claimants who have become involved in this repatriation claim, with a particular focus on the Ute tribe.
On a positive note, the level of detail provided by Johnson, such as information on NAGPRA Review Committee meetings, gives the reader the opportunity to understand the complex issues discussed during these meetings. This attention to detail is a credit to Johnson and reflects the care and dedication that he puts into his work. This attention to detail makes this book a great resource in many ways. While it is not as holistic as one may hope, the book's themes are general enough that they can be applied to various situations. Plus, the concentration on Hawaii makes this an excellent source for individuals wishing to study the complexities of both the native Hawaiian culture and their repatriation issues. Furthermore, Johnson's exploration of the meanings of and changes to NAGPRA has created an invaluable resource to anyone wishing further understanding of NAGPRA and the repatriation processes. Similar to Natives and Settlers, this book is accessible to a wide audience. Based on the themes Johnson discusses, Sacred Claims will be useful for scholars of anthropology, religious studies, sociology, and political science.
Despite the different content of the two volumes, these two books are united under one main theme: native/nonnative interactions. These books confront this theme in different ways: Natives and Settlers takes a historical approach by examining the history behind native and nonnative treaties, while Sacred Claims looks at the modern NAGPRA and how it is producing new interactions between native and nonnatives. Both volumes showcase the struggles between natives and nonnatives while they attempt to renegotiate identity, space, and culture, and both point to how such negotiations have led to cultural misunderstandings, unfair stereotypes, and other persistent conflicts. These volumes provide readers with the resources necessary to begin resolving intercultural conflict.
If there is additional discussion of this review, you may access it through the list discussion logs at: http://h-net.msu.edu/cgi-bin/logbrowse.pl.
Christine Boston. Review of DePasquale, Paul, ed, Natives and Settlers Now and Then: Historical Issues and Current Perspectives on Treaties and Land Claims in Canada and
Johnson, Greg, Sacred Claims: Repatriation and Living Tradition.
H-AmIndian, H-Net Reviews.
|This work is licensed under a Creative Commons Attribution-Noncommercial-No Derivative Works 3.0 United States License.| | <urn:uuid:06e755fc-76a0-4c4b-b523-2248ccf31e39> | CC-MAIN-2015-35 | http://www.h-net.org/reviews/showrev.php?id=23547 | s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440645257890.57/warc/CC-MAIN-20150827031417-00161-ip-10-171-96-226.ec2.internal.warc.gz | en | 0.940235 | 2,443 | 2.59375 | 3 |
Hiawatha and the Pearl-Feather (IX)
On the shores of Gitche Gumee,
Of the shining Big-Sea-Water,
Stood Nokomis, the old woman,
Pointing with her finger westward,
5O'er the water pointing westward,
To the purple clouds of sunset.
Fiercely the red sun descending
Burned his way along the heavens,
Set the sky on fire behind him,
10As war-parties, when retreating,
Burn the prairies on their war-trail;
And the moon, the Night-sun, eastward,
Suddenly starting from his ambush,
Followed fast those bloody footprints,
15Followed in that fiery war-trail,
With its glare upon his features.
And Nokomis, the old woman,
Pointing with her finger westward,
Spake these words to Hiawatha:
20"Yonder dwells the great Pearl-Feather,
Megissogwon, the Magician,
Manito of Wealth and Wampum,
Guarded by his fiery serpents,
Guarded by the black pitch-water.
25You can see his fiery serpents,
The Kenabeek, the great serpents,
Coiling, playing in the water;
You can see the black pitch-water
Stretching far away beyond them,
30To the purple clouds of sunset!
"He it was who slew my father,
By his wicked wiles and cunning,
When he from the moon descended,
When he came on earth to seek me.
35He, the mightiest of Magicians,
Sends the fever from the marshes,
Sends the pestilential vapors,
Sends the poisonous exhalations,
Sends the white fog from the fen-lands,
40Sends disease and death among us!
"Take your bow, O Hiawatha,
Take your arrows, jasper-headed,
Take your war-club, Puggawaugun,
And your mittens, Minjekahwun,
45And your birch-canoe for sailing,
And the oil of Mishe-Nahma,
So to smear its sides, that swiftly
You may pass the black pitch-water;
Slay this merciless magician,
50Save the people from the fever
That he breathes across the fen-lands,
And avenge my father's murder!"
Straightway then my Hiawatha
Armed himself with all his war-gear,
55Launched his birch-canoe for sailing;
With his palm its sides he patted,
Said with glee, "Cheemaun, my darling,
O my Birch-canoe! leap forward,
Where you see the fiery serpents,
60Where you see the black pitch-water!"
Forward leaped Cheemaun exulting,
And the noble Hiawatha
Sang his war-song wild and woful,
And above him the war-eagle,
65The Keneu, the great war-eagle,
Master of all fowls with feathers,
Screamed and hurtled through the heavens.
Soon he reached the fiery serpents,
The Kenabeek, the great serpents,
70Lying huge upon the water,
Sparkling, rippling in the water,
Lying coiled across the passage,
With their blazing crests uplifted,
Breathing fiery fogs and vapors,
75So that none could pass beyond them.
But the fearless Hiawatha
Cried aloud, and spake in this wise,
"Let me pass my way, Kenabeek,
Let me go upon my journey!"
80And they answered, hissing fiercely,
With their fiery breath made answer:
"Back, go back! O Shaugodaya!
Back to old Nokomis, Faint-heart!"
Then the angry Hiawatha
85Raised his mighty bow of ash-tree,
Seized his arrows, jasper-headed,
Shot them fast among the serpents;
Every twanging of the bow-string
Was a war-cry and a death-cry,
90Every whizzing of an arrow
Was a death-song of Kenabeek.
Weltering in the bloody water,
Dead lay all the fiery serpents,
And among them Hiawatha
95Harmless sailed, and cried exulting:
"Onward, O Cheemaun, my darling!
Onward to the black pitch-water!"
Then he took the oil of Nahma,
And the bows and sides anointed,
100Smeared them well with oil, that swiftly
He might pass the black pitch-water.
All night long he sailed upon it,
Sailed upon that sluggish water,
Covered with its mould of ages,
105Black with rotting water-rushes,
Rank with flags and leaves of lilies,
Stagnant, lifeless, dreary, dismal,
Lighted by the shimmering moonlight,
And by will-o'-the-wisps illumined,
110Fires by ghosts of dead men kindled,
In their weary night-encampments.
All the air was white with moonlight,
All the water black with shadow,
And around him the Suggema,
115The mosquito, sang his war-song,
And the fire-flies, Wah-wah-taysee,
Waved their torches to mislead him;
And the bull-frog, the Dahinda,
Thrust his head into the moonlight,
120Fixed his yellow eyes upon him,
Sobbed and sank beneath the surface;
And anon a thousand whistles,
Answered over all the fen-lands,
And the heron, the Shuh-shuh-gah,
125Far off on the reedy margin,
Heralded the hero's coming.
Westward thus fared Hiawatha,
Toward the realm of Megissogwon,
Toward the land of the Pearl-Feather,
130Till the level moon stared at him
In his face stared pale and haggard,
Till the sun was hot behind him,
Till it burned upon his shoulders,
And before him on the upland
135He could see the Shining Wigwam
Of the Manito of Wampum,
Of the mightiest of Magicians.
Then once more Cheemaun he patted,
To his birch-canoe said, "Onward!"
140And it stirred in all its fibres,
And with one great bound of triumph
Leaped across the water-lilies,
Leaped through tangled flags and rushes,
And upon the beach beyond them
145Dry-shod landed Hiawatha.
Straight he took his bow of ash-tree,
On the sand one end he rested,
With his knee he pressed the middle,
Stretched the faithful bow-string tighter,
150Took an arrow, jasperheaded,
Shot it at the Shining Wigwam,
Sent it singing as a herald,
As a bearer of his message,
Of his challenge loud and lofty:
155"Come forth from your lodge, Pearl-Feather!
Hiawatha waits your coming!"
Straightway from the Shining Wigwam
Came the mighty Megissogwon,
Tall of stature, broad of shoulder,
160Dark and terrible in aspect,
Clad from head to foot in wampum,
Armed with all his warlike weapons,
Painted like the sky of morning,
Streaked with crimson, blue, and yellow,
165Crested with great eagle-feathers,
Streaming upward, streaming outward.
"Well I know you, Hiawatha!"
Cried he in a voice of thunder,
In a tone of loud derision.
170"Hasten back, O Shaugodaya!
Hasten back among the women,
Back to old Nokomis, Faint-heart!
I will slay you as you stand there,
As of old I slew her father!"
175 But my Hiawatha answered,
Nothing daunted, fearing nothing:
"Big words do not smite like war-clubs,
Boastful breath is not a bow-string,
Taunts are not so sharp as arrows,
180Deeds are better things than words are,
Actions mightier than boastings!"
Then began the greatest battle
That the sun had ever looked on,
That the war-birds ever witnessed.
185All a Summer's day it lasted,
From the sunrise to the sunset;
For the shafts of Hiawatha
Harmless hit the shirt of wampum,
Harmless fell the blows he dealt it
190With his mittens, Minjekahwun,
Harmless fell the heavy war-club;
It could dash the rocks asunder,
But it could not break the meshes
Of that magic shirt of wampum.
195 Till at sunset Hiawatha,
Leaning on his bow of ash-tree,
Wounded, weary, and desponding,
With his mighty war-club broken,
With his mittens torn and tattered,
200And three useless arrows only,
Paused to rest beneath a pine-tree,
From whose branches trailed the mosses,
And whose trunk was coated over
With the Dead-man's Moccasin-leather,
205With the fungus white and yellow.
Suddenly from the boughs above him
Sang the Mama, the woodpecker:
"Aim your arrows, Hiawatha,
At the head of Megissogwon,
210Strike the tuft of hair upon it,
At their roots the long black tresses;
There alone can he be wounded!"
Winged with feathers, tipped with jasper,
Swift flew Hiawatha's arrow,
215Just as Megissogwon, stooping,
Raised a heavy stone to throw it.
Full upon the crown it struck him,
At the roots of his long tresses,
And he reeled and staggered forward,
220Plunging like a wounded bison,
Yes, like Pezhekee, the bison,
When the snow is on the prairie.
Swifter flew the second arrow,
In the pathway of the other,
225Piercing deeper than the other,
Wounding sorer than the other;
And the knees of Megissogwon
Shook like windy reeds beneath him,
Bent and trembled like the rushes.
230 But the third and latest arrow
Swiftest flew, and wounded sorest,
And the mighty Megissogwon
Saw the fiery eyes of Pauguk,
Saw the eyes of Death glare at him,
235Heard his voice call in the darkness;
At the feet of Hiawatha
Lifeless lay the great Pearl-Feather,
Lay the mightiest of Magicians.
Then the grateful Hiawatha
240Called the Mama, the woodpecker,
From his perch among the branches
Of the melancholy pine-tree,
And, in honor of his service,
Stained with blood the tuft of feathers
245On the little head of Mama;
Even to this day he wears it,
Wears the tuft of crimson feathers,
As a symbol of his service.
Then he stripped the shirt of wampum
250From the back of Megissogwon,
As a trophy of the battle,
As a signal of his conquest.
On the shore he left the body,
Half on land and half in water,
255In the sand his feet were buried,
And his face was in the water.
And above him, wheeled and clamored
The Keneu, the great war-eagle,
Sailing round in narrower circles,
260Hovering nearer, nearer, nearer.
From the wigwam Hiawatha
Bore the wealth of Megissogwon,
All his wealth of skins and wampum,
Furs of bison and of beaver,
265Furs of sable and of ermine,
Wampum belts and strings and pouches,
Quivers wrought with beads of wampum,
Filled with arrows, silver-headed.
Homeward then he sailed exulting,
270Homeward through the black pitch-water,
Homeward through the weltering serpents,
With the trophies of the battle,
With a shout and song of triumph.
On the shore stood old Nokomis,
275On the shore stood Chibiabos,
And the very strong man, Kwasind,
Waiting for the hero's coming,
Listening to his songs of triumph.
And the people of the village
280Welcomed him with songs and dances,
Made a joyous feast, and shouted:
'Honor be to Hiawatha!
He has slain the great Pearl-Feather,
Slain the mightiest of Magicians,
285Him, who sent the fiery fever,
Sent the white fog from the fen-lands,
Sent disease and death among us!"
Ever dear to Hiawatha
Was the memory of Mama!
290And in token of his friendship,
As a mark of his remembrance,
He adorned and decked his pipe-stem
With the crimson tuft of feathers,
With the blood-red crest of Mama.
295But the wealth of Megissogwon,
All the trophies of the battle,
He divided with his people,
Shared it equally among them. | <urn:uuid:c32a900f-c68a-464a-9b7d-0599c6a1d2f4> | CC-MAIN-2015-35 | http://www.kalliope.org/da/digt.pl?longdid=longfellow1999062909 | s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440645264370.66/warc/CC-MAIN-20150827031424-00090-ip-10-171-96-226.ec2.internal.warc.gz | en | 0.916953 | 3,039 | 2.703125 | 3 |
Alternate names: Leszno [Pol], Lissa [Ger], Leshno. 51°51' N, 16°35' E, Lissa/Leszno was the major town (Kreisstadt) in the W Prussian province of Posen. 1900 Jewish population: 1,206. Yizkor: Geschichte Der Juden in Lissa (, 1904). Lissa (between 1800 and 1918 [Ger] also called Polnisch Lissa or Polish Leszno) is a town in central Poland with 63,955 habitants in 2008 in the southern part of the Greater Poland Voivodeship since 1999 and previously the capital of the Leszno Voivodeship (1975-1998). travel story describing the synagogue, now a museum. The history of Jews in Leszno dates from the second half of 16th century when Leszczyń obtained permission to settle from Andrej, the former owner. They gradually dominated the economic force of the city in 1626; and Bogusław V Rafal Leszczyńscy gave the Jews a special privilege governing relations between the Jewish community, and municipalities and to owners of the castle. Under this privilege, the Jews were a distinct population of the district and permitted to construct the first cemetery and synagogue. Rapidly, the Jewish community became one of the largest in the Republic and distinguished in economic and cultural activities. Their trade reached Turkey, Persia, China and Russia and the most important fairs: Leipzig and Frankfurt, and Jaroslaw, and Brody. Many scholars studying with the Leszno rabbis held rabbinates in Italy, England, the Netherlands, Germany and France. The yeshiva brought hundreds of students from the Poland, Prussia, Czech lands, and Moravia. In the 18th century, this community had over Wielkopolskie communities in Kalisz, Poznan and Krotoszyn. From 1714 to 1764, the local kahał represented Wielkopolska at the Jewish Sejm of Four Lands - Waad Arba Arcot - the highest Jewish authority in Poland. The collapse of the Polish State in the late 18th century and the change in administration resulted in an imbalance to the foundations of the life of Jews in Leszno. Diverse political and economic issues and continuing immigration in the 19th century diminished the size and significance of the municipality. In the early 20th century, its only several hundred people were the smallest population in its almost two hundred history. The last Leszno Jews alive in the Holocaust were deported to the ghetto in Grodzisk Mazowiecki, and then to Warsaw, dying in the gas chambers of Treblinka. Today, all that remains of Jewish Leszno is a synagogue, a few buildings, and the Jewish cemetery. The synagogue in the central section of the Jewish street, Narutowicza 31, is considered the largest and probably oldest in Wielkopolska region. The first synagogue was established about 1626, a wooden building or half-timbered that caught fire many times and was destroyed in the great fires that devastated Leszno. Today, the synagogue is an impressive 18th century Baroque façade from 1905, designed by Wroclaw architects, Richard and Paul Ehrlich, in the style and the spirit of the Vienna Secession. The internal layout of the monumental and modern synagogue included the women's gallery and all walls except the east occupied by a large arcade joining the body of the tower. A higher bimah with an altar for the Torah and organ gallery despite the Orthodox nature of the town, he arranged a thoroughly modern synagogue. The prayer hall had a bima, the aron ha-kodesh from inner amphitheater and a pulpit to women equipped with good central heating, ventilation and beautifully decorated with plant and Art Nouveau elements and a Mogen David. 
In addition, German was used in sermons as in the progressive synagogues meaning that Leszno's synagogue and Orthodox environment had no equal until its use stopped in 1939. After WWII, for many years served as the "stop bath". In 1956 part of the tower dropped its helmet, then lost the inside ferroconcrete on two floors. In 1991, after entry in the register of historic buildings in 1992, the synagogue taken over by the Regional Museum in Leszno, intended for permanent collections and storage for display. Finally, it was put into service in 2006 as an art gallery. For many years, unresolved issues surrounded the synagogue Leszczyńskiej, a building considered the oldest preserved synagogue in the Wielkopolska region. The importance of the temple is not only related to its age, but also to the fact that the synagogue was home to the most important Jewish community in Wielkopolska in the 18th and first half of the 19th century when Leszno became the center of Jewish communities in Great Poland, primacy for the municipality of Poznan both in terms of financial and organizational issues. When the old Jewish synagogue burned in the 19th century, new buildings replaced it. Leszno synagogue survived due to its good technical construction. Renovated in 1905, the temple survived WWII and another fifty years of neglect, awaiting listing onthe register of historic buildings in 1991 to revive it. In 1993, renovation of the building to adapt it to the museum for two millions gold for repair work was approved. In 1999, the renovation was delayed by a claim of ownership from the Jewish Community in Wroclaw based of 1997 legislation about return of former Jewish property to the rightful owners. Finally, Sejmik Wielkopolska decided compensate to the Wroclaw Jewish community that then turned over the building to the city. Sejmik Wielkopolska granted funds on November 28, 2004 to refurbish of the synagogue by the end of June 2005, for development of new documentation, obtaining necessary licenses, and carrying out the renovation within the time limit. Delayed, In 2006, the budget of the Region approved was 550 000 PLN for continuation of work and purchase of the building. Afterward came initial installations to modernize and transform the synagogue building, showing shows the Wielkopolska government's appreciation for the importance of Leszczynska museum facility. While administratively part of the Regional Museum in Leszno, the synagogue will enable better display of collections and serves as a tourist attraction to Wielkopolska since the synagogue in Leszno is only one of three similar facilities in the country - Tykocinie, Leczna and Krakow. The preserved building structure will include virtually every Leszna cultural event for the entire region. The facade of the synagogue has become undoubtedly the most representative building of Old Leszczyńskiej. Other buildings that once served important functions for the local Jewish community such as the house of prayer, the Universal Building Schools for the Jewish al. Z. Krasinski, and the mortuary house at Al. Jana Pawła II make an interesting historical tourist trail. synagogue. school and synagogue. rabbinate [May 2009]
The Jewish cemetery in Leszno established in 1626, originally was located outside urban areas at today's ul. Jana Pawła II number 14 and was operated by the Jewish community until 1939. The cemetery was completely devastated by the Nazis during WWII. Thousands of destroyed gravestones were used as rubble for the construction of roads. After 1945, the cemetery was neglected. In the 1970s, the cemetery retained only a small fragment of its size and two buildings: the gravedigger residence and the mortuary house from the end of the 19th century. For the last many years, it was used for electroplating. In 1992, a mortuary house was entered in the register of historic buildings and from 1993-2004 housed the Department Judaistyczny District Museum in Leszno. Then the museum held about four hundred gravestones/matzevot from Jewish cemeteries in Leszno (352), Borku Wlkp (15) and Rydzyny (2). The oldest of gravestone found is from 1700, more than thirty gravestones date from the 18th century, and the others from the 19th and 19th centuries. The Regional Museum in Leszno for many years made considerable effort to preserve historic Leszno buildings of Jewish culture to ensure the tangible heritage left by the Jewish people in Leszna. Several years ago, creation of the Judaist branch in the building of the burial house to promote learning about Jewish culture and art has been recognized by the Ministry of Culture and Arts. In 1993, the museum facility was awarded "The Most Interesting Museum Event of 1993 " for the renovated and adapted burial house museum. A torah scroll temporary exhibition was organized by the museum. In 2005, "Nasi Bracia Starsi" - "Our Elder Brothers" exhibited paintings, drawings, and graphic art from the collections of the Jewish Historical Institute in Warsaw. In 2004, prints and paintings of Jewish themes by Lila Fijałkowska, a graduate of the Moscow Academy of Fine Arts, exhibited. "Wielkopolscy Rabbis-- In the Circle of the Jewish tradition" examined the achievement of Jewish thought in the field of philosophical investigations as well as theological and historical elements. Photos. [May 2009]
The Jewish cemetery in Leszno, Poland was visited on 24 July 1997. The former caretaker's house is now part of a regional museum system devoted to the history of the area's former Jewish population. The grounds were well kept. An effort is underway to expand the museum, to develop at lE a portion of the cemetery as a memorial garden. Several gravestones have been located and pieces of a number of others have been collected. Leszno was the birthplace in 1740 of Haym Salomon who immigrated to New York in 1772. He subsequently joined the Sons of Liberty and played a vital part in the success of the Revolutionary War working closely with Robert Morris, the Minister of Finance, in raising funds for the war effort. A commemorative stamp was issued in Salomon's memory in 1975. A statute of Haym Salomon with George Washington and Robert Morris has been placed in Herald Square, Wacker Drive in Chicago. During our visit to the museum, two very cooperative attendants were present but the curator was not working. They arranged for us to meet him in the nearby town of Wschowa where he lived. He led us to another Jewish cemetery in an isolated rural area close to a nearby village. (This latter restored cemetery was in relatively good shape except for vegetation.) The curator of the Leszno museum is Dariusz Czwojdrak, who prepared the 29 Oct 1991 survey of the Leszno cemetery that appears next. Mr. Czwojdrak asked for help in locating descendants of those interred in this cemetery to obtain permission to use the land as art of the museum. In the meantime, the museum is looking for additional articles for its collection. Articles concerning Haym Salomon are also lacking. The museum address for Mr.Czwojdrak is: Dariusz Czwojdrak, Muzeum Okregowe, Dzial Judaistyczny, ul. Estkowskiego 2, 64-100 Leszno, Poland. Prepared and names sent by Scott Clark, Professor of Environmental Health, University of Cincinnati, PO Box 670056, Cincinnati, Ohio 45267-0056, tel: (513)-558-1749, fax: (513)-558-2722,
LESZNO: US Commission No. POCE000320
Alternate name: Lissa in German. Leszno is in Leszno woj at 51º51 16º35, 69 km from Poznan and 96 km from Wroclaw. Cemetery location: ul. E. Estkowskiego. Present town population is 25,000-100,000 with no Jews.
The earliest known Jewish community was 16th century. 1921 Jewish population was 299 (1.8%). Elia Margolies, Rabbi Abraham Lissa, Rabbi Jacob Lissa, Rabbi Akiba Eiger, Rafal Kosch, Dr. Leo Baeck, Hirsch Kalischer, and Ludwik Kalisch lived here. The Conservative and Progressive/Reform Jewish cemetery was established in 17th century with last burial 1939. Buried here are Rabbi Izaak ben R. Schalom, Rabbi Izaak ben R. Mose Gerson, and Dawid Tewle. Wschowa in 1759 (19 km away), Swieciechowa (6 km away), and Zaborowo (2 km away) used this cemetery. The isolated urban flat land has no sign or marker. Reached by turning directly off a public road, access is open to all with no wall or gate. The size of the cemetery was 2.7 ha but no longer existed. Residential buildings now occupy its land. Stones that were moved are in the district museum in Leszno (4 pieces). About 30 pieces are incorporated into roads. Tombstones date from 18th-19th century. The sandstone flat shaped stones, finely smoothed and inscribed stones, or flat stones with carved relief decoration have Hebrew and German inscriptions. No known mass graves. The municipality owns the cemetery property used for residential buildings and storage. Properties adjacent are recreational and residential. The cemetery boundaries are smaller than in 1939 due to housing development. Private visitors rarely visit. The cemetery was vandalized during WWII. There is no maintenance or care. A pre-burial house, a gravedigger's house, and residential buildings are within the limits of the cemetery. Security, erosion, and incompatible nearby and planned development are moderate threats.
Dariusz Czwojdrak, ul. Lipowa 22a/4, 67-400 Wschowa visited site and completed survey 29 Oct 1991. No interviews.
|Last Updated on Friday, 12 June 2009 01:12| | <urn:uuid:396b78b2-712d-4ac1-8f50-8da72031f466> | CC-MAIN-2015-35 | http://www.iajgsjewishcemeteryproject.org/poland/leszno.html | s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440644062760.2/warc/CC-MAIN-20150827025422-00219-ip-10-171-96-226.ec2.internal.warc.gz | en | 0.954353 | 2,938 | 3.125 | 3 |
This submission, drawn from recent Human Rights Watch research, focuses on four areas of concern regarding Canada’s human rights record: violence against indigenous women and girls, counterterrorism, abuses related to extractive industries, and the use of cluster munitions. It also examines Canada’s adherence to commitments made in response to its first Universal Periodic Review (UPR) in 2009.
II. Violence Against Indigenous Women and Girls
Serious concerns persist regarding Canada’s response to the widespread violence against indigenous women and girls in the country. During Canada’s first UPR, the government stated that it was committed to “identifying the causes of violence against Aboriginal women and developing appropriate responses in consultation with Aboriginal and civil society organizations” and accepted many detailed recommendations in this regard. However, since that review, the government has taken steps that call into question its commitment to meaningfully engage with indigenous communities on the issue of police accountability for responding to such violence.
In 2010, the government ceased funding the Sisters in Spirit (SIS) initiative of the Native Women’s Association of Canada (NWAC), which had collected data showing that nationally 582 indigenous women and girls had gone missing or been found murdered in Canada, with 39 percent of the disappearances and deaths occurring since 2000. The government has since committed to funding NWAC’s “Evidence to Action,” a follow-up to SIS programming, for three years. However, the statistical monitoring of cases of missing and murdered indigenous women and girls that was a part of the SIS database initiative–the only database of its kind in Canada–is now to be assumed by the National Police Support Centre for Missing Persons, run by the Royal Canadian Mounted Police (RCMP), Canada’s national police force, with government funds allocated to improve the Canadian Police Information Centre database on missing persons. Police forces in Canada are not mandated to collect ethnicity data; thus there is currently no precedent for the routine collection of such data in RCMP missing persons databases. While the center is still in development and the database will not be fully operational until 2013, it will lack the independence and focus of the NWAC data initiative.
Human Rights Watch is currently conducting research into police treatment of indigenous women and girls in northern British Columbia. As of 2010, the SIS initiative had documented approximately 160 cases of missing and murdered women in British Columbia, considerably more than any other province or territory in Canada. The province also had the highest unsolved rate of murders of indigenous women and girls. An ongoing provincial inquiry into missing and murdered women floundered when many of the nongovernmental organizations (NGOs) granted standing were unable to participate in the inquiry due to the lack of provincial government funding for counsel. A number of community groups representing the interests of the missing and murdered indigenous women have since refused to engage with the inquiry, citing concerns about the exclusionary and discriminatory nature of the process. The resignation of the inquiry’s first appointed independent counsel for indigenous interests over the lack of attention to indigenous communities’ concerns–including entrenched discrimination, poverty, and economic and social inequalities that contribute to indigenous women’s exposure to violence–has further undermined the inquiry’s legitimacy.
While some attention has been paid to addressing open cases involving missing and murdered women, including through the RCMP’s Project E-Pana, families continue to express frustration with delays and inaction by the police in response to the disappearances and deaths. These problems extend to the policing of violence against women generally. According to reports received by Human Rights Watch from domestic violence survivors and community organizations in northern British Columbia, calls to the police are frequently met with skepticism and victim-blaming.
In addition to examining police failure to investigate violence committed by others, Human Rights Watch is investigating the abuse of indigenous women and girls in RCMP custody, including reports of physical assaults, sexual harassment, and poor conditions of detention. Complaints of police misconduct are themselves investigated by police. Although a civilian complaints commission monitors the processing of public complaints against the RCMP and external police teams investigate the more serious allegations, current practice does not provide the accountability of an independent civilian mechanism. Furthermore, fear of retaliation obstructs access to existing complaint mechanisms, particularly for women and girls who live in small communities, are street-involved, or have had multiple contacts with the criminal justice system. Notably, the Canadian government failed to provide complete information in response to a request from the Committee on the Rights of the Child (CRC) for the number of reported cases of abuse and maltreatment of children occurring during their arrest and detention. A recent class action lawsuit brought by more than 150 current or former RCMP officers alleging gender-based discrimination and sexual harassment within the national police force raises added concerns about discrimination within police operations.
III. Counterterrorism

Since Canadian citizen Omar Khadr was first detained by US forces in Afghanistan in July 2002 at the age of 15, Canada has failed to protect his rights and in fact has been complicit in the violation of his rights under both international and Canadian law. Throughout his detention, he has never been afforded the special protections provided for detained child soldiers under international law, including segregation from the adult population. Despite a commitment to favorably consider an application by Khadr for transfer from the US detention facility in Guantanamo Bay to Canada, the Canadian government delayed approving his transfer, finally repatriating him on September 29, 2012, almost a full year after he became eligible under the terms of his plea agreement. International human rights law ensures the right of a national to return to his or her country. While this right cannot be invoked to avoid lawful punishment, Khadr had pleaded guilty to alleged crimes in a military commission and requested transfer to Canada to serve out the remainder of his sentence, not to be released. During his detention by the US, Canada also failed to ensure Khadr was provided with rehabilitative services appropriate for a former child soldier.
Canada has also maintained problematic policies regarding the use of intelligence information obtained through torture. Canada has refused to disallow reliance on information obtained from governments with known records of human rights violations, including records of extracting information through torture or other ill-treatment. The Canadian Security Intelligence Service (CSIS), the RCMP, and the Canada Border Services Agency have all received a ministerial-level directive permitting, in some circumstances, the use of information obtained by other states through torture, or the sharing of information that may result in torture or other ill-treatment. One prominent instance of the use of evidence possibly obtained through torture is the detention of Mohamed Mahjoub on a “security certificate.” Mahjoub was first detained in June 2000 on suspicion of involvement with the Vanguards of Conquest, a faction of al-Jihad al-Islamiya, in Egypt. In September 2012, former public safety minister Stockwell Day testified that, at the time he signed Mahjoub’s security certificate in February 2008, he had received a memo from CSIS Director Jim Judd warning that it was “difficult, if not impossible” to determine whether the information justifying Mahjoub’s detention had been derived from torture, as it had been shared with CSIS by governments with a reputation for using abusive interrogation.
IV. Extractive Industries
Canada is the most important hub of the global mining industry and is home to some 75 percent of the world’s mining and exploration companies. The global reach of Canada’s mining industry is of central importance to the Canadian economy and to the economies of many developing countries. But mining can be an incredibly destructive industry if not carried out responsibly and with adequate government oversight–with profound negative impacts on mining-affected communities. Many Canadian mining firms work overseas in countries whose governments cannot or will not effectively regulate the human rights practices of multinational companies operating on their soil.
Currently, Canada’s government does nothing to monitor, let alone regulate, the human rights practices of Canadian firms when they operate in other countries. This has led to numerous abuses and has stained Canada's international reputation. For instance, Human Rights Watch research uncovered evidence of serious abuses including gang rape by private security personnel employed at the Porgera gold mine in Papua New Guinea—a project majority owned and solely operated by Canadian firm Barrick Gold, the world’s largest gold mining company. The company responded by committing to serious action to remedy the situation—but the disturbing fact remains that in the absence of a strong government role, the company failed to detect or act upon these abuses itself. The Canadian government currently has no mandate to even investigate such situations, and no way of knowing how many similar problems Canadian firms might be involved in around the world.
Human Rights Watch has consistently argued that the best way for Canada’s government to play a responsible role with respect to these industries would be to exercise greater oversight and regulation of Canadian companies’ human rights practices abroad. A 2007 roundtable process that included representatives of both industry and civil society arrived at a series of useful recommendations about the way forward on this issue, including the creation of an ombudsman’s office to investigate allegations of abuse. None of the key roundtable recommendations were implemented by the government, and industry support for them has waned considerably. A modest legislative effort to empower Canada’s government to monitor the human rights practices of Canadian extractives companies operating abroad was defeated in parliament in 2010. The mining industry lobbied heavily against that bill, which Canada’s government also opposed.
V. Ratification and Implementation of the Convention on Cluster Munitions
The Canadian parliament is in the process of debating proposed national legislation (Bill S-10) designed to implement the 2008 Convention on Cluster Munitions, allowing Canada to ratify the treaty. While Human Rights Watch encourages Canada to become a full state party to the instrument banning cluster munitions, Bill S-10 raises many concerns. As written, several provisions would fail to achieve, or even run counter to, the convention’s goal of eliminating cluster munitions and the human suffering they cause.
While the bill has numerous shortcomings, most disturbing is that it creates large loopholes to the convention’s absolute prohibitions during joint military operations with states not party to the convention. For example, it would allow Canadian forces to direct or authorize the use of cluster munitions by states not party, expressly request the use of cluster munitions in certain situations, and themselves use the weapons while on secondment to allies who have not joined the convention. Bill S-10 also could be understood to allow stockpiling of cluster munitions in and transit of them through Canadian territory, and it fails explicitly to prohibit investment in the production of cluster munitions. Proponents of the bill argue that exceptions are allowed under article 21 of the Convention on Cluster Munitions, which addresses joint military operations. That provision, however, should be read as a clarification that military operations with states not party are permitted rather than as a qualification to the convention’s absolute ban during such operations.
VI. Recommendations to the Canadian Government
On violence against indigenous women and girls, Canada should:
- Develop and implement a national action plan to address violence against indigenous women and girls that addresses the structural roots of the violence, as well as the accountability of government bodies charged with preventing and responding to violence.
- Collect and publish accurate and comprehensive disaggregated data that includes an ethnicity variable data on violence against indigenous women and girls in cooperation with indigenous community organizations.
- Ensure police accountability for thorough investigations of violence against indigenous women and girls through enhanced oversight of such cases from the time that a disappearance or an act of violence is reported.
- Expand training for police officers to counter racism and sexism in the treatment of indigenous women and girls in custody and to improve police response to violence against women and girls within indigenous communities.
- Establish independent civilian investigations of reported incidents of serious police misconduct.
On its treatment of child soldiers, Canada should:
- Provide rehabilitation services to Omar Khadr while still in detention and upon release, as well as provide services to assist in his reintegration into Canadian society.
- Investigate and hold accountable those responsible for failing to protect Khadr’s rights under the Canadian Charter and international law, and establish policies to prevent any similar denial of rights to child soldiers who may be captured by foreign forces in the future.
On the use of information derived from torture or other ill-treatment, Canada should:
- Disallow reliance on any information believed to be obtained by Canada or any foreign country that was likely derived from torture or other ill-treatment.
- Refuse to share information with other states where there is reason to believe that information may lead to torture or other ill-treatment.
On the extractives industry, Canada should:
- As an urgent priority, establish an ombudsman’s office or some other mechanism to monitor the human rights conduct of Canadian oil, mining, and gas companies operating abroad and to investigate credible allegations of human rights abuse.
- Introduce legislation to implement the full range of recommendations from the 2007 National Roundtables on Corporate Social Responsibility and the Canadian Extractive Industry in Developing Countries.
On cluster munitions, Canada should:
- Ratify and implement the Convention on Cluster Munitions.
In order to implement the Convention on Cluster Munitions, pass national legislation that:
- Prohibits explicitly and absolutely the use, production, transfer, and stockpiling of cluster munitions, and assistance with those activities under all circumstances, including joint military operations.
- Prohibits foreign stockpiling and transit of cluster munitions.
- Prohibits investment in the production of cluster munitions. | <urn:uuid:19a96c4f-054e-46ea-b4f4-a54b9cda6332> | CC-MAIN-2015-35 | http://www.hrw.org/news/2012/10/10/un-human-rights-council-hrws-submission-canadas-universal-periodic-review | s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440645330816.69/warc/CC-MAIN-20150827031530-00102-ip-10-171-96-226.ec2.internal.warc.gz | en | 0.957651 | 2,765 | 2.640625 | 3 |
Earlier in this chapter (in the section called “Strategies for Repository Deployment”), we looked at some of the important decisions that should be made before creating and configuring your Subversion repository. Now, we finally get to get our hands dirty! In this section, we'll see how to actually create a Subversion repository and configure it to perform custom actions when special repository events occur.
Subversion repository creation is an incredibly simple task. The svnadmin utility that comes with Subversion provides a subcommand (svnadmin create) for doing just that.
$ # Create a repository $ svnadmin create /var/svn/repos $
Assuming that the parent directory
/var/svn exists and that you have
sufficient permissions to modify that directory, the previous
command creates a new repository in the directory
/var/svn/repos, and with the default
filesystem data store (FSFS). You can explicitly choose the
filesystem type using the
which accepts as a parameter either
$ # Create an FSFS-backed repository $ svnadmin create --fs-type fsfs /var/svn/repos $
# Create a Berkeley-DB-backed repository $ svnadmin create --fs-type bdb /var/svn/repos $
After running this simple command, you have a Subversion repository. Depending on how users will access this new repository, you might need to fiddle with its filesystem permissions. But since basic system administration is rather outside the scope of this text, we'll leave further exploration of that topic as an exercise to the reader.
The path argument to svnadmin is just
a regular filesystem path and not a URL like the
svn client program uses when referring to
repositories. Both svnadmin and
svnlook are considered server-side
utilities—they are used on the machine where the
repository resides to examine or modify aspects of the
repository, and are in fact unable to perform tasks across a
network. A common mistake made by Subversion newcomers is
trying to pass URLs (even “local”
file:// ones) to these two programs.
Present in the
db/ subdirectory of
your repository is the implementation of the versioned
filesystem. Your new repository's versioned filesystem begins
life at revision 0, which is defined to consist of nothing but
the top-level root (
Initially, revision 0 also has a single revision property,
svn:date, set to the time at which the
repository was created.
Now that you have a repository, it's time to customize it.
While some parts of a Subversion repository—such as the configuration files and hook scripts—are meant to be examined and modified manually, you shouldn't (and shouldn't need to) tamper with the other parts of the repository “by hand.” The svnadmin tool should be sufficient for any changes necessary to your repository, or you can look to third-party tools (such as Berkeley DB's tool suite) for tweaking relevant subsections of the repository. Do not attempt manual manipulation of your version control history by poking and prodding around in your repository's data store files!
A hook is a program triggered by some repository event, such as the creation of a new revision or the modification of an unversioned property. Some hooks (the so-called “pre hooks”) run in advance of a repository operation and provide a means by which to both report what is about to happen and prevent it from happening at all. Other hooks (the “post hooks”) run after the completion of a repository event and are useful for performing tasks that examine—but don't modify—the repository. Each hook is handed enough information to tell what that event is (or was), the specific repository changes proposed (or completed), and the username of the person who triggered the event.
hooks subdirectory is, by
default, filled with templates for various repository
$ ls repos/hooks/ post-commit.tmpl post-unlock.tmpl pre-revprop-change.tmpl post-lock.tmpl pre-commit.tmpl pre-unlock.tmpl post-revprop-change.tmpl pre-lock.tmpl start-commit.tmpl $
There is one template for each hook that the Subversion
repository supports; by examining the contents of those
template scripts, you can see what triggers each script
to run and what data is passed to that script. Also present
in many of these templates are examples of how one might use
that script, in conjunction with other Subversion-supplied
programs, to perform common useful tasks. To actually install
a working hook, you need only place some executable program or
script into the
which can be executed as the name (such as
post-commit) of the hook.
On Unix platforms, this means supplying a script or
program (which could be a shell script, a Python program, a
compiled C binary, or any number of other things) named
exactly like the name of the hook. Of course, the template
files are present for more than just informational
purposes—the easiest way to install a hook on Unix
platforms is to simply copy the appropriate template file to a
new file that lacks the
customize the hook's contents, and ensure that the script is
executable. Windows, however, uses file extensions to
determine whether a program is executable, so you would
need to supply a program whose basename is the name of the
hook and whose extension is one of the special extensions
recognized by Windows for executable programs, such as
.exe for programs and
.bat for batch files.
For security reasons, the Subversion repository executes
hook programs with an empty environment—that is, no
environment variables are set at all, not even
under Windows). Because of this, many administrators
are baffled when their hook program runs fine by hand, but
doesn't work when run by Subversion. Be sure to explicitly
set any necessary environment variables in your hook program
and/or use absolute paths to programs.
Subversion executes hooks as the same user who owns the process that is accessing the Subversion repository. In most cases, the repository is being accessed via a Subversion server, so this user is the same user as whom the server runs on the system. The hooks themselves will need to be configured with OS-level permissions that allow that user to execute them. Also, this means that any programs or files (including the Subversion repository) accessed directly or indirectly by the hook will be accessed as the same user. In other words, be alert to potential permission-related problems that could prevent the hook from performing the tasks it is designed to perform.
There are several hooks implemented by the Subversion repository, and you can get details about each of them in the section called “Repository Hooks” in Chapter 9, Subversion Complete Reference. As a repository administrator, you'll need to decide which hooks you wish to implement (by way of providing an appropriately named and permissioned hook program), and how. When you make this decision, keep in mind the big picture of how your repository is deployed. For example, if you are using server configuration to determine which users are permitted to commit changes to your repository, you don't need to do this sort of access control via the hook system.
There is no shortage of Subversion hook programs and scripts that are freely available either from the Subversion community itself or elsewhere. These scripts cover a wide range of utility—basic access control, policy adherence checking, issue tracker integration, email- or syndication-based commit notification, and beyond. Or, if you wish to write your own, see Chapter 8, Embedding Subversion.
While hook scripts can do almost
anything, there is one dimension in which hook script
authors should show restraint: do not
modify a commit transaction using hook scripts. While it
might be tempting to use hook scripts to automatically
correct errors, shortcomings, or policy violations present
in the files being committed, doing so can cause problems.
Subversion keeps client-side caches of certain bits of
repository data, and if you change a commit transaction in
this way, those caches become indetectably stale. This
inconsistency can lead to surprising and unexpected
behavior. Instead of modifying the transaction, you should
simply validate the transaction in the
pre-commit hook and reject the commit
if it does not meet the desired requirements. As a
bonus, your users will learn the value of careful,
compliance-minded work habits.
A Berkeley DB environment is an encapsulation of one or more databases, logfiles, region files, and configuration files. The Berkeley DB environment has its own set of default configuration values for things such as the number of database locks allowed to be taken out at any given time, the maximum size of the journaling logfiles, and so on. Subversion's filesystem logic additionally chooses default values for some of the Berkeley DB configuration options. However, sometimes your particular repository, with its unique collection of data and access patterns, might require a different set of configuration option values.
The producers of Berkeley DB understand that different
applications and database environments have different
requirements, so they have provided a mechanism for overriding
at runtime many of the configuration values for the Berkeley
DB environment. BDB checks for the presence of a file named
DB_CONFIG in the environment directory
(namely, the repository's
subdirectory), and parses the options found in that file.
Subversion itself creates this file when it creates the rest
of the repository. The file initially contains some default
options, as well as pointers to the Berkeley DB online
documentation so that you can read about what those options do. Of
course, you are free to add any of the supported Berkeley DB
options to your
DB_CONFIG file. Just be
aware that while Subversion never attempts to read or
interpret the contents of the file and makes no direct use of
the option settings in it, you'll want to avoid any
configuration changes that may cause Berkeley DB to behave in
a fashion that is at odds with what Subversion might expect.
Also, changes made to
take effect until you recover the database environment (using | <urn:uuid:2b08f2db-2c62-4fc8-b341-ed6d97e54690> | CC-MAIN-2015-35 | https://www.visualsvn.com/support/svnbook/reposadmin/create/ | s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440644064445.47/warc/CC-MAIN-20150827025424-00340-ip-10-171-96-226.ec2.internal.warc.gz | en | 0.890817 | 2,199 | 2.59375 | 3 |
Apicetomy – tooth root operation
Reviewed by Mr Vassilios Papalois, consultant transplant and general surgeon
What is it?
The white shiny part of each tooth is called the crown. The part that fits into its socket in your jaw is called the root. The deepest part of the root is called the apex. The apex of your tooth has an infection with germs in it. The infection may have ended up forming a cyst that possibly has pus in it.
A cyst is a little pocket with some liquid in it. It is about half an inch (1.2cm) across, in the jaw bone, near the apex of the tooth. This is called a dental cyst.
The cyst and infection is cleaned out through a small opening in the gum and bone. The opening is then closed up. This operation is called an apicectomy.
When you have a local anaesthetic, the area of the operation is numbed with an anaesthetic injection. You will feel that something is happening in the area of the operation, but will not feel any pain. You will also be awake and conscious. The sedation is sometimes added to the local anaesthetic to help you relax and allow you to go through the operation. If you are sedated, you will be awake during the operation, but will not be aware of what is going on.
Finally, if you have a general anaesthetic you will be completely asleep during the operation and you will not feel any pain. The decision about the type of anaesthetic will be discussed between you and your surgeon.
Although most apicectomies can be done safely and comfortably just with local anaesthetic, it is better to have some sedation or a general anaesthetic if it is anticipated that the operation will be difficult.
A small cut will be made into the gum over the infected apex or the cyst. The surgeon will drill or chisel into the jaw bone down to the apex. All the infected material, any cyst, and a small part of the root will be taken out. Some of the infected tissue will be sent to the laboratory to be tested for germs. The end of the root may be sealed off, using a special filler material, to fill the space that has been made.
The cut in the gum is then closed with stitches. These are usually special stitches that melt away in 7 to 10 days. Sometimes the surgeon uses stitches that need to be taken out after about two weeks.
Your operation can be done as a day case. This means that you come into hospital on the day of your operation, and go home the same day.
If you have the operation under sedation or under a general anaesthetic you might need to stay in the hospital overnight to make sure that the effect of the sedative medication or the anaesthetic has gone completely.
If you leave things as they are, the infection will get worse and may form an abscess (a collection of infected fluid or pus). It may spread to the roots of other teeth.
If there is a cyst, this will get bigger. It may seriously weaken your jaw. There is too much infected apex to clear by drilling through from the crown of your tooth into the root.
Fillings put in by your dentist may not have sealed the root properly. The root may no longer be hollow enough for drilling, due to your age. Or, there is too much of a cyst to clear this way. Your tooth could be taken out to get rid of the infection. You would then need a false tooth to fill in the space.
Drugs and medicines will not help at this stage to control the infection or to shrink the cyst.
Before the operation
If you know that you have problems with your blood pressure, your heart, or your lungs, ask your family doctor to check that these are under control.
Check you have a relative or friend who can come with you to the hospital and take you home.
Sort out any tablets, medicines, inhalers that you are using. Keep them in their original boxes and packets. Bring them to the hospital with you.
On the ward, you will be checked for past illnesses and will have special tests to make sure that you are well prepared and that you can have the operation as safely as possible.
Please tell the doctors and nurses of any allergies to tablets, medicines or dressings. You will have the operation explained to you and will be asked to fill in an operation consent form.
Before you sign the consent form, make sure that you fully understand all the information that was given to you regarding your health problems, the possible and proposed treatments and any potential risks. Feel free to ask more questions if things are not entirely clear.
Any tissues that are removed during the operation will be sent for tests to help plan the appropriate treatment. Any remaining tissue that is left over after the tests will be discarded.
Before the operation and as part of the consent process, you may be asked to give permission for any ’left over’ pieces to be used for medical research that have been approved by the hospital. It is entirely up to you to allow this or not.
Many hospitals now run special preadmission clinics, where you visit a week or so before the operation, where these checks will be made.
After – in hospital
After the operation, you will be taken on a trolley to the recovery ward for a few minutes. After your anaesthetic has worn off, the nurse from the ward will take you back to your ward.
If you have had a general anaesthetic, although you will be conscious a few minutes after the operation ends, you are unlikely to remember anything until you are back in your bed on the ward. The same thing happens with sedation but to a lesser degree.
Some patients feel a bit sick after the operation, but this passes off quickly.
You may be given oxygen from a face mask for a few hours if you have had any chest problems in the past, you are a smoker, or obese.
A general anaesthetic will make you slow, clumsy and forgetful for about 24 hours. Again, the same, but to a lesser degree, happens with sedation. The nurses will help you with everything you need until you are able to do things for yourself. Do not make important decisions, drive a car, use machinery, or even boil a kettle during this time.
The mouth will feel bruised and swollen. The jaw will be slightly stiff, usually with some discomfort. The gum with the stitches will swell a little, with slight bruising of the skin.
You will be given painkilling tablets to help with any discomfort. The swelling, bruising and stiffness of the jaw will gradually disappear over a week to 10 days.
You will be able to drink two to three hours after the operation. Avoid eating until any sickness has passed, and after the feeling has come back to your mouth and tongue. Before you leave the ward, you may be given an appointment to come back to the dental outpatient clinic to see the surgeon. This will be about two weeks after the operation.
The surgeon will check that the wound has healed. He will take out any stitches if needed. He will make sure that the infection and any cyst have settled down. He will have the report from the laboratory about the tissue from the apex and any cyst. You may have a further X-ray of your teeth.
You may need to visit the outpatient clinic again for further checks.
After – at home
Take two painkiller tablets every six hours to control any pain or discomfort.
Chewing may be painful on your tooth and gum for three or four days. So you should eat a softer diet and avoid very 'spicy' or 'vinegary' foods. You need to keep the mouth cleaner than normal to prevent infection of your wounds.
Gently brush your teeth with ordinary toothpaste three times a day. Follow this with a warm salt water mouth bath. This is a pinch of salt to half a pint of warm water. Hold a mouthful for one minute on each side of the mouth. Then follow the salt mouth with the antiseptic mouthwash for one minute.
You may have aches and twinges in your teeth for a month or two. These will settle down gradually.
You will be fit to go back to work the second day after your operation. You will be fit to drive 24 hours after the operation. Avoid strenuous sports and swimming until the gum has fully healed in a month or so.
If you have this operation under general anaesthetic, there is a very small risk of complications related to your heart and lungs. The same is true for sedation but to a lesser degree.
The tests that you will have before the operation will make sure that you can have the operation in the safest possible way and will bring the risk for such complications very close to zero.
If you follow the advice given above, you are unlikely to have any problems.
Complications are rare. Some slight bleeding is normal for a day or two after this operation. If the bleeding is heavy and carries on for more than an hour, phone the hospital or your GP for advice. They will tell you how to bite on a small pack of gauze for 20 minutes or so to stop the bleeding.
Rarely patients need to come back to hospital for treatment of bleeding. If you experience increasing pain at the area of the operation, you feel that is getting more swollen and you have a temperature, it most probably means that the area of the operation is infected. This happens relatively rarely and taking antibiotics tablets for a week or two usually solves the problem.
In a very small number of patients the infection can be serous and lead to a collection of infected fluid or pus (abscess) at the area of the operation. In this situation you will need another operation to drain the infected fluid or pus.
If you develop an abscess it is sometimes possible for the infection to spread in the blood. If this happens, you may need to stay in hospital and have intravenous antibiotics (through a vein in your arm).
Sometimes, there is some numbness around the gum after any anaesthetic has worn off. This may be caused by bruising around or damage to small nerves near the tooth root. Usually the feeling comes back in a day or so. Rarely, it takes six weeks or more.
In about 9 out of 10 cases, the infection and any cyst heal up. In the other cases, the tooth has to come out, or, rarely, another apicectomy is needed.
If you have any queries or problems, please ask the doctors or nurses.
Based on a text by Surgery Door
Last updated 06.07.2009 | <urn:uuid:ee08cc17-c0ad-4616-8c83-1a847ed156b4> | CC-MAIN-2015-35 | http://www.netdoctor.co.uk/surgical-procedures/apicetomy-tooth-root-operation.htm | s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440644066017.21/warc/CC-MAIN-20150827025426-00218-ip-10-171-96-226.ec2.internal.warc.gz | en | 0.961327 | 2,214 | 3.15625 | 3 |
2007 Annual Report
The soil conditioning index is useful in predicting soil quality in cotton production systems. Various models are being developed and utilized by scientists and government agencies to quantify the potential for carbon storage in soil. However, testing of models is needed to verify their accuracy and reliability. A collaboration among scientists from the USDA-Agricultural Research Service in Watkinsville Georgia, USDA-Natural Resources Conservation Service, Auburn University, and Texas A&M University tested the performance of a highly technical environmental model (EPIC v. 3060) against a simple predictive model currently used by the USDA-Natural Resources Conservation Service to quantify soil management systems impact on soil quality. Several cotton management systems were evaluated at three locations (Blackland Prairie in Texas, Southern Coastal Plain in South Carolina, and Southern Piedmont in Georgia). Both models predicted low soil quality with conventional tillage production of cotton without a cover crop (traditional management), but higher soil quality with no-tillage management of cotton with winter cover crop and/or rotation with other high-residue producing crops. Although both models can be used by land managers and policy makers to evaluate soil quality on the 7 million acres of cotton in the southeastern USA, there is still an urgent need to collect field-based measurements of soil quality to fully validate and refine these tools. This project is contributing to the Soil Resource Management National Program (NP202) in Problem Area 3 (Soil carbon measurement, dynamics, and management) and Problem Area 5 (Adoption and implementation of soil and water conservation practices and systems). It also will contribute to the Global Change National Program (NP204) in Component 1 (Carbon cycle and carbon storage).
Aeration of well-drained grasslands captures more rainfall and reduces phosphorus losses. Phosphorus (P) losses from organic and inorganic (manures) fertilizers applied to pastures can contribute to eutrophication of surface waters. Management practices are needed to reduce losses of P from fertilized pastures. Scientists from USDA-ARS, J. Phil Campbell Sr., Natural Resource Conservation Service and the Univ. of Georgia worked together to determine the effectiveness of mechanical aeration to reduce overland flow, runoff and nutrient losses that contaminate surface waters. At the plot scale using simulated rainfall, aeration improved retention of rainfall and reduced nutrient losses for two fertilizer sources (inorganic fertilizer and broiler litter). At the field scale, aeration reduced runoff and dissolved P losses on well-drained soils but exacerbated P losses on poorly drained soils. Results from this study are being utilized by the Georgia P-Index Workgroup to calibrate the Georgia P-Index management coefficients for determination of site vulnerability to P losses to surface waters. The Workgroup is made up representatives from the USDA-NRCS, USDA-ARS, the University of Georgia, and the Georgia Dept. of Agriculture. This project contributes to the Soil Resource Management National Program (NP202) in Problem Area 4 (Nutrient management for crop production and environmental protection) and to the Water Resource Management National Program (NP201) in Problem Area 6 (Water quality protection systems).
Stream-side pastures with 45% or more forage cover minimize nutrient and sediment transport. Losses of phosphorus (P) and nitrogen (N) to streams and rivers from adjoining pastures contribute to eutrophication of surface water bodies. Collaborators from USDA-ARS, J. Phil Campbell Sr., Natural Resource Conservation Service, University of Georgia, and North Carolina State University used simulated rainfall studies on stream-side grassland fields fertilized with inorganic fertilizers or manures to identify management systems that retain nutrients for use by plants and animals rather than transporting them to surface waters in sediment and run-off. Inorganic fertilizer lost more total P than broiler litter while the opposite was true for N. These results support the use of different nutrient-source weighting factors in risk assessment tools such as state P-Indices, which are used in nutrient management programs. Stream-side vegetative cover of 45% or more was found to be an effective management strategy to reduce sediment, P and N losses from deposited cattle feces and urine. The results of this study are being utilized by the Georgia P-Index Workgroup to support and modify source coefficients of the Georgia P-Index for the determination of site vulnerability to P losses. They are also being utilized by the North Carolina Cooperative Extension Service to develop new best management practices (BMPs). This project contributes to the Soil Resource Management National Program (NP202) in Problem Area 4 (Nutrient management for crop production and environmental protection) and to the Water Resource Management National Program (NP201) in Problem Area 6 (Water quality protection systems).
Conservation tillage system reduces nutrient losses in run-off. Losses of phosphorus (P) and nitrogen (N) to streams and rivers from adjoining cropland contribute to eutrophication of surface water bodies. ARS scientists at J. Phil Campbell Sr. Natural Resource Conservation Center, Watkinsville, GA and Southeast Watershed Research Unit, Tifton, GA, in cooperation with University of Georgia scientists simulated rainfall at a constant- and variable-intensity on loamy sand soils managed under conservation (strip-tillage) and conventional tillage. They found that constant-intensity rainfall simulations may over estimate the amount of dissolved nutrients lost to the environment in runoff. Conservation tillage resulted in more losses of dissolved P and N than conventional tillage treatments but conservation tillage systems lost 71% less total N and 67% less total P in runoff than conventional-tillage systems. This information can be used by State Cooperative Extension Systems, USDA-NRCS, environmental consultants, and agricultural producers to promote adoption of conservation tillage as a means of improving water quality, as well as for reducing soil erosion. This project contributes to the Soil Resource Management National Program (NP202) in Problem Area 4 (Nutrient management for crop production and environmental protection) and to the Water Resource Management National Program (NP201) in Problem Area 6 (Water quality protection systems).
Spatial variation in soil organic carbon and crop yield predicted with process-based model. Computer simulation models can be useful tools to predict changes in crop yields and environmental consequences from soil management practices. However, these models need to be checked or validated against data from long-term field experiments in order to have confidence in model predictions and improve their usefulness. A collaboration among scientists from the USDA-Agricultural Research Service in Watkinsville Georgia and Auburn Alabama, Auburn University, USDA-Natural Resources Conservation Service in Temple Texas, and Joint Global Change Research Institute in College Park Maryland tested the performance of a highly technical environmental model (EPIC v. 3060) against five years of crop yield and soil data collected from a corn–cotton rotation in central Alabama. The cropping system had additional variables of dairy bedding manure and conventional and conservation tillage systems. The model accounted for 88% of the variation in corn grain and cotton lint yields during the five years. Model predictions were sensitive to landscape position. Predictions of soil organic carbon at the end of five years of the different management schemes were very reasonable, although distribution with depth and within various fractions of organic matter were not wholly adequate. This research demonstrated that EPIC modeling has challenges to overcome, but could be a reasonably accurate tool to predict yield and environmental consequences for the greater than 10 million acres of corn and cotton land in the southeastern USA. This project is contributing to the Soil Resource Management National Program (NP202) in Problem Area 3 (Soil carbon measurement, dynamics, and management). It also will contribute to the Global Change National Program (NP204) in Component 1 (Carbon cycle and carbon storage).
Modeling and remote sensing used to predict the effects of conservation management on soil organic matter in West Africa. In the drought-prone Sudan-Sahelian zone of West Africa, agricultural operations are based on relatively low-output systems, which maintain production at subsistence levels. It is getting more difficult to sustain the required food supply for its people, because of land degradation from soil erosion and nutrient mining. Scientists from the USDA Agricultural Research Service in Beltsville MD and Watkinsville GA collaborated with scientists from the University of Hawaii and Institute for Rural Economy in Mali to evaluate management systems for improving soil quality and carbon sequestration. Based on land-use classification, climate variables, soil texture, in-situ soil carbon concentrations and crop growth characteristics, the EPIC-Century model was used to project the amounts of soil carbon sequestered for the region. Under continuous conventional cultivation with minimal fertilization and no residue management, the soil top layer was continuously lost due to erosion. The combination of modeling with land use classification was used to calculate that a modest, but significantly positive amount of carbon could be sequestered with ridge tillage, increased application of fertilizers, and residue management. These findings have important implications for building soil fertility, improving human livelihoods, and sequestering atmospheric carbon throughout West Africa. This project is contributing to the Soil Resource Management National Program (NP202) in Problem Area 3 (Soil carbon measurement, dynamics, and management) and Problem Area 5 (Adoption and implementation of soil and water conservation practices and systems). It also will contribute to the Global Change National Program (NP204) in Component 1 (Carbon cycle and carbon storage).
Soil bacterial populations are altered by tall fescue-endophyte associations. With the concern over carbon dioxide emission and its connection with global warming, the United States and other nations have been interested in identifying means to enhance the removal of carbon dioxide from the atmosphere. Tall fescue with endophyte infection has been shown to enhance soil organic carbon accumulation compared to tall fescue pastures without this fungal infection. Previous research indicated that the effect on soil organic carbon from the endophyte infection may have been related to toxic compounds produced by the fungus by altering the functional capability of soil bacteria involved in decomposing plant material. Scientists at the USDA Agricultural Research Service in Watkinsville GA conducted an experiment to directly determine if endophyte infection of tall fescue altered the population and diversity of soil bacteria. Endophyte-infected tall fescue decreased the population of four bacterial groups that are involved in decomposition of plant materials, compared with uninfected tall fescue. This study has identified important groups of bacteria that were affected by endophyte-infected tall fescue and has contributed to a better understanding of the potential mechanisms for enhanced soil organic carbon sequestration. This research will be of keen interest to scientists and government agencies dealing with global warming issues, greenhouse gases, and management of agricultural activities. This project is contributing to the Soil Resource Management National Program (NP202)in Problem Area 1 (Understanding and managing soil biology and rhizosphere ecology) and Problem Area 3 (Soil carbon measurement, dynamics, and management).
Moderate grazing pressure can ensure high productivity and avoidance of pasture decline. Bermudagrass is a typical pasture grass in the southeastern USA that can be grazed by beef cattle during the summer. Despite considerable research on cattle performance from bermudagrass, a gap exists in how low and high grazing pressure might affect cattle stocking rate, performance, and production over a number of years. Scientists at the USDA Agricultural Research Service in Watkinsville GA conducted a 5-year grazing study to investigate dynamics in cattle performance and production. During the first couple of years, cattle stocking rate and cattle gain were greater under high than under low grazing pressure. However by the end of five years, stocking rate and cattle gain had become similar, suggesting that high grazing pressure had reduced pasture productivity as a result of changes in plant community composition and surface soil condition. How grazing animals can alter pasture productivity and economic return needs to be a consideration in long-term management strategies on the 46 million acres of pastureland in the southeastern USA. This research will benefit: (a) science- by improving grazing land ecological theory, (b) producers- by improving productivity, and (c) the environment- by reducing land degradation. This project is contributing to the Pasture, Forage, Turf and Rangeland Systems National Program (NP215) in Component 4 (Grazing management: Livestock production and the environment).
Abrahamson Beese, D.A., Norfleet, M.L., Causarano, H.J., Williams, J.R., Shaw, J.H., Franzluebbers, A.J. 2007. Effectiveness of the soil conditioning index as a carbon management tool in the southeastern USA based on comparison with EPIC. Journal of Soil and Water Conservation 62:94-102.
Butler, D.M., Franklin, D.H., Ranells, N.N., Poore, M.H., Green, Jr., J.T. 2006. Ground cover impacts on sediemt and phosphorus export from manured riparian pastures. Journal of Environmental Quality. 35:2178-2185.
Butler, D.M., Ranells, N.N., Franklin, D.H., Poore, M.H., Green, Jr., J.T. 2007. Ground cover impacts on nitrogen export from manured riparian buffers.Journal of Environmental Quality. 36:155-162.
Doraiswamy, P.C., McCarty, G.W., Hunt Jr, E.R., Yost, R.S., Doumbia, M., Franzluebbers, A.J. 2006. Modeling soil carbon sequestration in agricultural lands of Mali. Agricultural Systems. doi:10.1016/j.agsy.2005.09.011.
Franklin, D.H., West, L.T., Radcliffe, D.E., Hendrix, P.F. 2007. Characteristics and genesis of prefrential flow paths in a Piedmont ultisol. Soil Science Society of America Journal. 71:752-758.
Franzluebbers, A.J., Brock, B.G. 2007. Surface-soil responses to silage cropping intensity on a typic kanhapludult in the Piedmont of North Carolina. International Journal of Soil and Tillage Research. 93:126-137.
Causarano, H.J., Shaw, J.N., Franzluebbers, A.J., Reeves, D.W., Raper, R.L., Balkcom, K.S., Norfleet, M.L., Izaurralde, R.C. 2007. Simulating field-scale soil organic carbon dynamics using EPIC. Soil Science Society of America Journal. 71:1174-1185.
Potter, T.L., Truman, C.C., Bosch, D.D., Strickland, T.C., Franklin, D.H., Bednarz, C.W., Webster, T.M. 2006. Combined Effects of Constant Versus Variable Intensity Simulated Rainfall and Reduced Tillage Management on Cotton Preemergence Herbicide Runoff. Journal of Environmental Quality. 35:1894-1902.
Stuedemann, J.A., Franzluebbers, A.J. 2006. Cattle performance and production when grazing bermudagrass at two forage mass levels in the southern Piedmont. Journal of Animal Science. 85(5):1340-1350.
Franklin, D.H., Cabrera, M.L., West, L.T., Calvert, V.H., Rema, J.A. 2007. Field scale, paired watershed study: aeration to reduce runoff and phosphorus losses from grass lands fertilized with broiler litter. Journal of Environmental Quality. 36:208-215.
Franklin, D.H., Truman, C.C., Potter, T.L., Bosch, D.D., Strickland, T.C., Bendnarz, C.W. 2007. Nitrogen and phosphorus runoff losses from variable and constant intensity rainfall simulations on loamy sand under conventional and strip tillage systems. Journal of Environmental Quality. 36:846-854. | <urn:uuid:d4337dbf-8cfd-4517-8546-df4cfc987da9> | CC-MAIN-2015-35 | http://www.ars.usda.gov/research/projects/projects.htm?ACCN_NO=411336&showpars=true&fy=2007 | s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440644068749.35/warc/CC-MAIN-20150827025428-00102-ip-10-171-96-226.ec2.internal.warc.gz | en | 0.901686 | 3,343 | 2.59375 | 3 |
- Year Published: 1915
- Language: English
- Country of Origin: United States of America
- Source: Barnum, R. (1915). Squinty, the comical pig. New York: Barse and Hopkins.
- Flesch–Kincaid Level: 3.5
- Word Count: 1,508
Barnum, R. (1915). Chapter 6: “Squinty on a Journey”. Squinty, the Comical Pig (Lit2Go Edition). Retrieved August 30, 2015, from
Barnum, Richard. "Chapter 6: “Squinty on a Journey”." Squinty, the Comical Pig. Lit2Go Edition. 1915. Web. <>. August 30, 2015.
Richard Barnum, "Chapter 6: “Squinty on a Journey”," Squinty, the Comical Pig, Lit2Go Edition, (1915), accessed August 30, 2015,.
“Mamma, did you hear what they were saying about Squinty?” asked Wuff-Wuff, as the boy and the two men walked away from the pig pen.
“Oh, yes, I heard,” said Mrs. Pig. “I shall be sorry to lose Squinty, but then we pigs have to go out and take our places in this world. We cannot always stay at home in the pen.”
“Yes, that is so,” spoke Mr. Pig. “But Squinty is rather young and small to start out. However, it may all be for the best. Now, Squinty, you had better keep yourself nice and clean, so as to be ready to go on a journey.”
“What’s a journey?” asked the comical little pig, squinting his eye up at the papa pig.
“A journey is going away from home,” answered Mr. Pig.
“And does it mean having adventures?” asked Squinty, flopping his ears backward and forward.
“Yes, you may have some adventures,” replied his mother. “Oh dear, Squinty! I wish you didn’t have to go and leave us. But still, it may be all for your good.”
“We might hide him under the straw,” suggested Wuff-Wuff. “Then that boy could not find him when he comes to put him in a box, and take him away.”
“No, that would never do,” said Mr. Pig. “The farmer is stronger and smarter than we are. He would find Squinty, no matter where we hid him. It is better to let him do as he pleases, and take Squinty away, though we shall all miss him.”
“Oh dear!” cried Curly Tail, for she liked her little brother very much, and she loved to see him look at her with his funny, squinting eye. “Do you want to go, Squinty?”
“Well, I don’t want to leave you all,” answered the comical little pig, “but I shall be glad to go on a journey, and have adventures. I hope I don’t get lost again, though.”
“I guess the boy won’t let you get lost,” spoke Mr. Pig. “He looks as though he would be kind and good to you.”
The pig family did not know when Squinty would be taken away from them, and all they could do was to wait. While they were doing this they ate and slept as they always did. Squinty, several times, looked at the hole under the pen, by which he had once gotten out. He felt sure he could again push his way through, and run away. But he did not do it.
“No, I will wait and let the boy take me away,” thought Squinty.
Several times after this the boy and his sisters came to look down into the pig pen. The pigs could tell, by the talk of the children, that they were brother and sisters. And they had come to the farm to spend their summer vacation, when there was no school.
“That’s the pig I am going to take home with me,” the boy would say to his sisters, pointing to Squinty.
“How can you tell which one is yours?” asked one of the little girls.
“I can tell by his funny squint,” the boy would answer. “He always makes me want to laugh.”
“Well, I am glad I am of some use in this world,” thought Squinty, who could understand nearly all that the boy and his sisters said. “It is something just to be jolly.”
“I wouldn’t want a pig,” said the other girl. “They grunt and squeal and are not clean. I’d rather have a rabbit.”
“Pigs are so clean!” cried the boy. “Squinty is as clean as a rabbit!”
Only that day Squinty had rolled over and over in the mud, but he had had a bath from the hose, so he was clean now. And he made up his mind that if the boy took him he would never again get in the mud and become covered with dirt.
“I will keep myself clean and jolly,” thought Squinty.
A few days after this Squinty heard the noise of hammering and sawing wood outside the pig pen.
“The farmer must be building another barn,” said Mr. Pig, for he and his family could not see outside the pen. “Yes, he must be building another barn, for once before we heard the sounds of hammering and sawing, and then a new barn was built.”
But that was not what it was this time.
Soon the sounds stopped, and the farmer and the boy came and looked down into the pig pen.
“Now you are sure you want that squinty one?” the farmer asked the boy. “Some of the others are bigger and better.”
“No, I want the squinty one,” the boy said. “He is so comical, he makes me laugh.”
“All right,” answered the farmer. “I’ll get him for you, now that you have the crate all made to carry him home in on the cars.”
Over into the pig pen jumped the farmer. He made a grab for Squinty and caught him.
“Squee! Squee! Squee!” squealed Squinty, for he had never been squeezed so tightly before.
“Oh, I’m not going to hurt you,” said the farmer, kindly.
“Squinty, be quiet,” ordered his papa, in the pig language. “Behave yourself. You are going on a journey, and will be all right.”
Then Squinty stopped squealing, as the farmer climbed out of the pen with him.
“At last I am going on a journey, and I may have many adventures,” thought the little pig. “Good-by!” he called to his papa and mamma and brothers and sisters, left behind in the pen. “Good-by!”
“Good-by!” they all grunted and squealed. “Be a good pig,” said his mamma.
“Be a brave pig,” said his papa.
“And—and come back and see us, sometime,” sniffled little Curly Tail, for she loved Squinty very much indeed.
“I’ll come back!” said the comical little pig. But he did not know how much was to happen before he saw his pen again.
“There you go—into the box with you!” cried the farmer, as he dropped Squinty into a wooden box the boy had made for his pet, with a hammer, saw and nails.
Squinty found himself dropped down on a bed of clean straw. In front of him, behind him, and on either side of him were wooden slats—the sides of the box. Squinty could look out, but the slats were as close together as those in a chicken coop, and the little pig could not get out.
He did not want to, however, for he had made up his mind that he was going to be a good pig, and go with the boy who had bought him for a pet from the farmer.
Over the top of the box was nailed a cover with a handle to it, and by this handle the pig in the little cage could be easily carried.
“There you are!” exclaimed the farmer. “Now he’ll be all right until you get him home.”
“And, when I do, I’ll put him in a nice big pen, and feed him well,” said the boy. Squinty smacked his lips at that, for he was hungry even now.
“Oh, have you caged him up? Isn’t he cute!” exclaimed one of the boy’s sisters. “I’ll give him the core of my apple,” and she thrust it in through the slats of the box. Squinty was very glad, indeed, to get the apple core, and he soon ate it up.
“Come on!” cried the boy’s father. “Is the pig nailed up? We must go for the train!”
“I wonder what the train is,” thought Squinty. He was soon to know. The boy lifted him up, cage and all, and put him into the wagon that was to go to the depot. Squinty knew what a wagon was and horses, for he had seen them many times.
Then away they started. Squinty gave a loud squeal, which was his last good-by to the other pigs in the pen, and then the wagon rattled away along the road.
Squinty had started on his journey. | <urn:uuid:9644cd98-3214-4416-bde0-16c86b032254> | CC-MAIN-2015-35 | http://etc.usf.edu/lit2go/204/squinty-the-comical-pig/4482/chapter-6-squinty-on-a-journey/ | s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440644065318.20/warc/CC-MAIN-20150827025425-00102-ip-10-171-96-226.ec2.internal.warc.gz | en | 0.988974 | 2,275 | 3.03125 | 3 |
“Can I have another bowl of cereal?”
The shy boy was one who regularly asked for seconds at breakfast, and I suspected he rarely ate dinner at his house. He was a camper at my small YMCA day camp in Northeast Georgia. We struggle financially, as do our campers’ parents, and the fees we charge only cover staffing and program costs.
Camp is a place that kids can go to have fun, make friends, challenge themselves, and gain confidence and independence. For some kids, though, camp is also a place where they literally get fed during the summer months. Thanks to the Child Nutrition Act, we are able to provide our campers with both breakfast and lunch; therefore, I was able to give the quiet boy a second bowl of cereal and fill his growling belly.
For over five years, our camp has participated in the Summer Food Service Program, which is an offshoot of the Child Nutrition Act. It is a mirror of the program that provides meals at public schools. The program pays for each meal served at a set reimbursement rate for eligible kids.
Originally signed into federal law in 1966, the Child Nutrition Act was created to help “safeguard the health and nutrition of the nation’s children.” An amendment in 1968 added summer camps, along with day care centers, to the law. Now, churches and other nonprofits take part in handing out meals to kids through this program. Sometimes the program is as simple as a sack lunch in a park to which the child walks, while other programs are like ours — fully functioning day camps that offer meals as part of the program day.
In Georgia, the Summer Food Service Program is administered by Bright from the Start, part of the Department of Early Care and Learning, which oversees childcare centers and the state’s pre-K program. Working with two government agencies brings a copious amount of paperwork and regulation, but the extra work is worth it. For a budget-strapped camp, it offers the chance to provide meals for your campers that you might not otherwise be able to afford. For your campers, it ensures they will get two nutritious meals a day — something they may not get regularly at home. And for your campers’ parents, it alleviates the need to get their children fed and a lunch packed before heading out the door to work and camp.
What to Know
How Does It Work?
Once you have been approved to sponsor a program, the person in charge of overseeing it will have to make sure the staff has been properly trained, the menu meets the nutritional requirements, and the food service is set up correctly.
We use the local school system to provide our meals, based on a menu we choose. In the spring, I meet with the nutritionist and we work out a menu that includes the proper serving amounts of grains, proteins, fruits, and vegetables required by the overseeing agency. Breakfast ranges from cereal with fruit to specially packaged pancakes that can be microwaved individually. For lunch, we serve hamburgers, pizza, or turkey and cheese sandwiches, all accompanied by fresh fruit and milk.
Field trips have to be carefully planned to ensure that lunch is somewhat portable. We do all our field trips on Fridays, and we plan ahead to provide sandwiches on those days, along with whole-grain chips and easy-to-eat vegetables like carrot sticks. It’s never a good idea to go to a waterpark when your lunch for the day is ravioli!
School system employees prepare lunch every day and deliver it to all three of our sites. Since breakfast is either a cold item or a meal that can be microwaved, they also deliver the next day’s breakfast. Your staff will have to check that the food is in good condition and at the right temperatures before signing the delivery ticket.
Meals can be served as a unit (like a sack lunch) or cafeteria style in a system called “offer versus serve.” At our camp, we use the “offer” method, which means staff members offer each item to each child, and the child can accept it or not. They can only turn down so many items. The program’s rules also dictate that every child must be offered milk. Of course, like every camp, we have our own rules about how much a child must eat minimally to be ready for the afternoon activities. In other words: “no fruit, no pool.”
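For readers who find it easier to see rules as logic, here is a minimal sketch of the “offer” idea in Python. It is purely illustrative: the list of components and the one-item decline limit are assumptions invented for this example, not the actual federal requirements, which your state agency will spell out in training.

```python
# A toy illustration (NOT the official USDA rule set) of "offer versus
# serve": every component is offered, milk is always among the offerings,
# and a child may decline only a limited number of items. The limit of one
# declined item below is a made-up example; check your agency's real rules.

MEAL_COMPONENTS = ["grain", "protein", "fruit", "vegetable", "milk"]
MAX_DECLINED = 1  # hypothetical limit on how many items a child may refuse

def meal_is_claimable(items_taken):
    """Return True if a tray served under these toy rules can be claimed."""
    declined = [c for c in MEAL_COMPONENTS if c not in items_taken]
    return len(declined) <= MAX_DECLINED

print(meal_is_claimable({"grain", "protein", "fruit", "milk"}))  # True
print(meal_is_claimable({"grain", "milk"}))                      # False
```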
Furthermore, we use a “share table,” where kids can leave unopened food items. Other kids wanting seconds of just that item, like crackers or milk, can take it from the share table, which reduces food waste.
Training staff is critical because they are your frontline folks making sure all the rules are met properly. Every three years, representatives of the state agency running the program will visit your site, much the same way ACA standards visitors come to assess compliance with accreditation standards. The visitor checks to make sure food is being served properly, menus are followed, paperwork is filled out correctly, and antidiscrimination tools are in place. And just like with ACA, if your staff is not performing correctly, there are consequences, such as meals not being counted or the need for additional training. At worst, you could be financially liable for meals already reimbursed if the visitor feels you haven’t been claiming meals properly all summer.
When kids come through the line, the site director makes a tick mark on a form so that every child is properly counted. Each part of the meal must be measured out accurately so the child gets the correct amount and no food is wasted. A certain number of second meals are also allowed to be claimed, and these are noted on the form as well. At the end of the month, the person in charge adds up all the meals that have been served and files for reimbursement electronically.
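For the budget-minded, here is a minimal back-of-the-envelope sketch in Python of how those daily tick marks roll up into a monthly claim. All of the tallies are hypothetical, and the $2.75 lunch rate and $4.00 meal cost simply echo the example figures given under “Things to Remember” below; actual reimbursement rates are set by the federal government and change from year to year.

```python
# A back-of-the-envelope sketch of how daily tallies roll up into a monthly
# claim. Every number here is hypothetical and for illustration only.

BREAKFAST_RATE = 2.00  # hypothetical reimbursement per breakfast served
LUNCH_RATE = 2.75      # hypothetical reimbursement per lunch served
LUNCH_COST = 4.00      # hypothetical price your vendor charges per lunch

# Daily counts from the site director's tick-mark sheets (one entry per day).
breakfasts_served = [52, 55, 49, 58, 54]
lunches_served = [60, 62, 57, 64, 61]

claim = (sum(breakfasts_served) * BREAKFAST_RATE
         + sum(lunches_served) * LUNCH_RATE)

# The gap the camp absorbs whenever a meal costs more than the reimbursement.
lunch_shortfall = sum(lunches_served) * (LUNCH_COST - LUNCH_RATE)

print(f"Reimbursement claim for the month: ${claim:,.2f}")
print(f"Out-of-pocket lunch shortfall:     ${lunch_shortfall:,.2f}")
```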
One part of the program that is essential to get right is communication with your food vendor, to make sure you are getting the proper number of meals. You only get reimbursed for meals served, so if you order too many, you will be paying for meals the kids are not eating. Additionally, the food service vendor will need to know if the food is arriving at the proper temperature and, most importantly, if the kids like it. It doesn’t do any good to offer nutritious food if kids won’t eat it. We find each summer that we have to make adjustments to the menu, such as changing the kind of fruit we offer or switching from chicken sandwiches to sloppy joes because of taste preferences.
Steps to Follow
How Do You Get Involved?
- Contact the agency that administers the nutrition program in your state and find out what their application process is. In Georgia, new applicants must undergo a two-day training session that outlines the program goals, the application process, administering the program, and how to file a claim for reimbursement. To be fully prepared, it is best to begin the process in December or January.
- After training, the program contact fills out the application online. This is a rather lengthy application and includes proof of need, a budget, and a management plan. The management plan details exactly who will be responsible for ordering the food, serving the meals, and administering the program. It also goes over how records will be kept and where they will be stored. The budget, too, is quite detailed, noting how much you expect to spend on staffing, kitchen equipment, and food. Allow a few weeks to get it all done correctly. The agency overseeing the nutrition program will have to approve the application, and then you are ready to move into the nitty-gritty of running the food program.
- Meet with food vendors or caterers to make sure they understand the meal requirements. A brief conversation at a local health fair with the school nutritionist led to the partnership between the school system and our camp. Our school system had already taken part in the food program independently the year before, so the nutritionist was already well-versed in the details of the requirements and even had suggestions for bettering our menu. She was prepared to train her staff on their part of the food program requirements.
- Train staff on properly serving and documenting meals. It is best to train all your staff in meal service, just in case your site director is out. We found this out the hard way when our unannounced visit just happened to be on a day when the trained staff member was on vacation. The person running the meal that day didn’t properly document the meal delivery or service, and we faced quite a mountain of paperwork correcting the problem. Plus, we had to redo our training in the middle of a busy summer.
- Set up a system to maintain your paperwork; then, most fun of all, feed the kids!
Things to Remember
- Only kids that meet the income requirements are eligible. For some camps, this may mean getting income eligibility statements from each family. My camp is located in an area where all the schools are Title I schools, so that makes all our children and any counselor under eighteen eligible. (Title I is a federally funded educational program established to provide extra resources to schools and school districts with the highest concentration of poverty.)
- The program is a reimbursement program, which means the camp will have to pay the cost up front. You will also have to budget correctly because the reimbursement rates are set by the federal government; if you serve a meal that costs you $4.00, you will only get reimbursed $2.75.
- There will be some costs in running a food program that are not covered by the reimbursement, so you will need to budget for this. Some of these costs include staffing, kitchen equipment, and the administrator’s time.
- The key to success is having one person in charge of running the program and maintaining the paperwork. Usually, a detail-oriented person who loves checklists and three-ring binders is perfect for this! If you are a summer camp serving kids facing poverty issues, the Summer Food Service Program is a great option for you. It takes federal dollars and allows you to use them to better your neighborhood. It allows you to feed the kids in your care so they are never hungry at camp. And, when kids are full and happy, they can focus on what camp is really for . . . fun and friends!
Get Involved in Your Area
States manage child nutrition programs. To participate or learn more, contact your state agency. Information is at www.fns.usda.gov/cnd/Contacts/StateDirectory.htm . Visit the federal Summer Food Service Program homepage at www.summerfood.usda.gov .
Robin Dake is the CEO of the Toccoa-Stephens County YMCA in Northeast Georgia and has been in camping for almost twenty years. She holds a journalism degree from the University of Georgia.
Originally published in the 2013 March/April Camping Magazine. | <urn:uuid:fa5d0138-0be2-4614-84a1-09cb78d4f152> | CC-MAIN-2015-35 | http://www.acacamps.org/print/33722 | s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440645396463.95/warc/CC-MAIN-20150827031636-00160-ip-10-171-96-226.ec2.internal.warc.gz | en | 0.961476 | 2,281 | 2.546875 | 3 |
Retention index guide
|MassFinder 4: Manual|
MassFinder's Retention Index Guide explains the concept of retention indices in gas chromatography and its application to identify compounds, particularly in combination with mass spectrometry (GC/MS).
This guide is meant as a concise, system-independent introduction, while the tutorial lesson Analysing your own data gives a detailed step-by-step explanation on how to use the retention index feature of MassFinder with your own data files.
Abstract: (1) Retention indices are relative retention times normalised to closely eluting n-alkanes. Retention indices are system independent and long-term reproducible, even after many years and in different laboratories around the world. (2) Identifying peaks by library searches should not solely focus on mass spectral similarity, but also include retention indices in order to optimise the quality and reliablitity of library hits. Many isomeric compounds like sesquiterpene hydrocarbons can only be identified if taking both mass spectra and retention indices into account.
We assume that you are familiar with modern gas chromatography as it is employed in most analytical laboratories nowadays and can be described as high-resolution capillary gas-liquid chromatography. Typically, polysiloxanes are used as stationary phase, and hydrogen or helium is used as mobile phase.
Gas chromatography is an analytical method to separate and identity compounds. Separation occurs due to different gas/liquid equilibrium constants which in turn depend on the polarity and volatility of the analytes. The gas chromatographic retention time can be used as a property to characterise the compound, because under constant chromatographic conditions the retention time of a compound is reproducible.
Identification based on retention times relies on knowing which compound elutes at a certain retention time, i.e. you compare the observed retention time of your sample with a table of previously recorded retention times. Thus, reference compounds of all possible constituents have to be measured under exactly identical chromatographic conditions. Naturally, different compounds might coincidentally co-elute at the same retention time, which somewhat limits the scope of gas chromatography and gives rise to the need of more enhanced analytical methods like GC/MS.
The major problem of the retention time based approach of identifying compounds is the necessity of maintaining exactly identical chromatographic conditions. A subtle temperature difference of 1 °C, a slightly increased carrier gas pressure, or a few seconds of delay when starting the acquisition may cause retention time deviations larger then the retention time range of several possible constituents.
Further, it is not possible with the required degree of accuracy to compare the retention times of one GC with another system, neither in the same laboratory nor worldwide. In most cases, system maintenance like shortening the column or installing a new column will change the retention times and require all reference retention times to be measured again. Routinely, references and samples are run in the same sequence shortly after each other.
In summary, retention times are valuable information to characterise and identify compounds, but they are poorly reproducible long-term or between different systems and should only be relied on when measuring reference and sample under identical conditions and shortly after each other.
Relative Retention Times
One basic approach to overcome these limitations was to calculate relative retention times, i.e. dividing the retention time of your compound by the retention time of an internal standard. Thus, slight variations of temperature can be compensated (because they have equal effect on both compounds) and relative retention times are to a certain degree even comparable between different systems. However, initial acquisition delays or entirely different temperature programs can not be compensated with this method. Further, the larger the retention time difference between internal standard and target compound, the less accurate is this kind of compensation, and the more disturbances may occur after the first compound eluted, thus affecting only one of both substances, which in turn can not be compensated by this method.
The Concept of Retention Indices
These limitations can mostly be resolved by calculating relative retention times based on two internal standards, one shortly eluting before and the other shortly eluting after the target compound. Thus, quite a large number of standards is necessary to cover the complete time range and the use of inert n-alkanes is established for this purpose. The difference between the retention times of two consecutive n alkanes is divided in 100 parts and the so-called retention index of an n-alkane itself is defined as 100 n.
Retention indices are retention times normalised to adjacently eluting n-alkanes.
The range of the employed n-alkanes has to cover the expected retention time range of all possible target compounds. For monoterpenes and sesquiterpenes a range of C8 to C20 is usually suitable, oxygenated sesquiterpenes and diterpenes can require up to C26.
Definition of the retention index
RIx = 100 n0 + 100 (RTx – RTn0) / (RTn1 – RTn0)
with x the name of the target compound n0 n-alkane Cn0H2n0+2 directly eluting before x n1 n-alkane Cn1H2n1+2 directly eluting after x RT retention time (in any unit such as minutes, seconds, or scans) RI retention index (pure number without unit)
Examples of retention indices RI(n-decane) = 1000 RI(n-undecane) = 1100 RI(x) = 1050, for any x that elutes exactly in the middle between n-decane and n-undecane
Application of Retention Indices
Retention indices are established worldwide and used by a large number of scientist and laboratories. You may think of retention indices as a kind of natural property of a compound, which is in a complicated way related to the underlying gas/liquid equilibrium. Naturally, the RI is dependent on the kind of stationary phase and different stationary phases give rise to different RI of the same compound. Thus, the type of stationary phase should always be given when reporting retention indices. Beside this limitation, RI are system-independent, reliable and reproducible.
Retention indices are independent from
- delay of acquisition (absolute shift of time axis) has no influence on RI
- unit of time measurement (min, seconds, scans) has no influence on RI
- carrier gas pressure and flow rates have no influence if held constant during one measurement
- column length, column diameter and stationary film thickness have no influence on RI
- pre-columns have no or neglectible influence on the RI
- different isothermal temperatures have neglectible on RI
- different linear temperature ramps have neglectible on RI
Typical temperature programs composed of isothermal sections and linear temperature ramps give only slight to medium RI variations, particularly near the inflection points of the temperature profile. Best reproducibility is obtained if the temperature program consists of only one continuous temperature ramp. An inital isothermal period does no harm if the start of the temperature ramp is earlier then the retention time of the first actually required n-alkane standard. Likewise a final heating-off section does no harm if it starts later than the retention time of the last required n-alkane standard.
Again, different stationary phases can give entirely different retention indices. Using the same stationary phase is of utter importance for the successful employment of retention indices.
Reproducibility of retention indices
Generally, retention indices are reproducible on the same system with deviations equal or less than ± 2 RI, given that alkane standard measurements take place as required (see next chapter).
Generally, retention indices are reproducible between typical systems with deviations equal or less than ± 5 RI.
In the case of MassFinder's Terpenoids Library typical system means using a normal carrier gas velocity with respect to column length, a moderately fast continuous temperature gradient, and achieving overall good chromatographic resolution. Any commercially available polysiloxane column compatible with DB-1 (100% polydimethylsiloxane) should afford retention indices of approximately ± 5 RI or better when compared with our library's reference values. The more polar column DB-5 (95% polydimethylsiloxane-5%polydiphenylsiloxane) usually affords retention indices deviating less than ±10 RI for unpolar compounds and less than ±20 RI for polar compounds.
Limitations of retention index reproducibility
Retention indices might differ from reference values in cases such as
- overloading effects (visible as peak fronting or detector saturation)
- column activity (contaminated column, reactive parts, polar interactions)
- interconverting, reactive, or decomposing compounds (highly temperature dependent!)
Generally, all effects that might influence retention times under otherwise constant conditions will potentially influence retention indices as well. All processes that disturb the gas chromatographic separation or reduces or modifies column selectivity may cause significant deviations of retention indices.
Practical aspects of using retention indices
Internal vs. External Standards
Typical procedures for employment of retention indices does not use n-alkanes as internal standards. Routinely, all n alkanes necessary to cover your desired time range are mixed together in equal amounts in a single sample. This mixture is measured under your standard chromatographic conditions to obtain the retention times of all relevant n alkanes and the result table is called alkane pattern. There are commercial standard solutions available with n-alkanes of various ranges. The alkane pattern can usually be measured with a single injection.
As stated at the beginning of this document, retention times are valuable if identical chromatographic conditions are strictly maintained. A modern gas chromatograph reproduces retention times over several days or even weeks with satisfying accuracy and the alkane pattern has only to be measured again after system maintenance such as column shortening or pressure readjustment. Quality control regulations may require daily measurement of the alkane pattern, a single injection that increases reliability and validity of your GC data. Of course, the alkane pattern is different for each GC and the pattern has to be acquired for each system.
In summary, you may combine the advantages of short-term retention time reproducibility by separating the alkane pattern from the actual samples, and also profit from the use of retention indices due to inter-system and long-term compatibility. Even many years later and in a different laboratory you may reproduce your old retention indices while your retention times have no meaning anymore.
Should for whatever reason your application require to use the reference n-alkanes as internal standards, we recommend to restrict yourself to those n-alkanes absolutely necessary to calculate the retention indices of your analytes, i.e. those n-alkanes directly before, in between, and after the retention times of all primary target compounds. Thus, your run time will not be prolonged unnecessarily and the chromatogram will not be cluttered with useless peaks.
Using n-alkanes as internal standards is rarely necessary. Try to work with alkane patterns as external standard.
Are Other Reference Patterns Possible?
Naturally, the concept of retention indices requires a grid of several standard compounds distributed more or less equally over the whole possible retention time range. While n-alkanes are internationally established and you will easily find reference values based on n-alkanes, it is possible to use other compounds as standards. You just have to accept that the comparability is significantly reduced and you should have good reasons to deviate from the standard approach, e.g. a completely different stationary phase or very high elution temperatures with difficult to obtain n-alkanes. In such cases you can define your own set of standard compounds. We recommend choosing substances that are reasonably inert and highly temperature stable, whose retention times are equally distributed over your relevant time range and which time distance to each other is sufficiently short. Further, it should be very likely that the standard compounds will be available for many years to come and you need to document their identity exactly.
Retention Indices and Mass Spectrometry (GC/MS)
The challenge of identifying and assigning GC/MS peaks
A set of chemically different substances may give highly similar mass spectra, thus rendering it almost impossible to unambiguously distinguish the compounds from each other. The mass spectra shown below demonstrate this issue with three sesquiterpene hydrocarbons exhibiting pretty similar fragmentation patterns.
Introducing the second dimension...
In GC/MS technique the "GC" and the "MS" part are two independent experiments.
Usually substances with identical chromatographic retention times exhibit different mass spectra, or in other words, co-eluting peaks could only by pure chance give rise to identical mass spectra. Thus, mathematically spoken, the experimental values for retention times and mass spectra are different dimensions and independent properties. The chromatographic separation thus offers a very simple and powerful experimental value, the retention time or retention index, which can be used to distinguish compounds with identical mass spectra. Naturally, many compounds may give the same retention time, but usually these compounds exhibit different mass spectra. Only extremely rarely, two substances have by pure chance both identical retention times and identical mass spectra.
In summary, using retention indices in GC/MS does significantly increase the reliability of peak identifications by providing a second, independent experimental value. If you ignore retention times or retention indices in GC/MS and solely rely on mass spectral similarity, you waste precious information of an experiment already successfully performed.
Analysing your own data
Calculating retention indices from retention times is a tedious job and needs to be automated in order to reliably and rapidly employ retention indices. The GC/MS software MassFinder does support retention indices inherently and makes the usage of retention indices as well as comparing mass spectra and retention indices with libraries entries highly convenient.
Further reading on to set up MassFinder: Analysing your own data | <urn:uuid:a4b13ee0-1ddf-4fc5-af74-646aa3b55a79> | CC-MAIN-2015-35 | http://massfinder.com/wiki/Retention_index_guide | s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440645339704.86/warc/CC-MAIN-20150827031539-00043-ip-10-171-96-226.ec2.internal.warc.gz | en | 0.909674 | 2,878 | 2.71875 | 3 |
Twelve thousand years ago the first man came to Florida, having migrated over the Asian land bridge. Broken pottery, arrowheads, spearpoints and other artifacts have been found. At the time these people inhabited the area, the Halifax River was just a shallow, fresh-water stream.
The Timucuan Indians made this area their home in the early 1500’s and were one of six main tribes occupying Florida when the Spaniards made their first visit. The local tribes lived in fortified villages along the Tomoka and Halifax Rivers. French explorer Jacques LeMoyne wrote of tawny, muscular people who were accomplished craftsmen in many ways. They were experts in weaponry, clay pottery, jewelry, and clothing -- made mostly of deerskin and moss. Physical fitness was a prized attribute of the Timucuan people. Training sessions in the form of "games’ were common tribal activities. They were also excellent fishermen, hunters, and warriors.
The primary settlement was called Nocoroco and is thought to have been located where Tomoka State Park is today. Spanish captain DePrado documented this village in the late 1500’s in writings to the King of Spain. In the early 1600’s Alvaro Mexia was sent on an exploring expedition down
Florida’s northeast coast and created a map which shows Nocoroco on a peninsula between two rivers.
Within 200 years after DePrado’s expedition, the Timucuans entirely disappeared from the east coast of Florida, due to susceptibility to diseases brought by the Spaniards, emigration, raids from the Yamassee Indians, and the British raids from North Carolina sped their demise.
At the end of the Seven Years War in Europe, Spain ceded Florida to the British in exchange for Cuba. It was not until Florida became a British colony that pioneer settlers come to the area. The British government gave many land grants to its subjects, including 20,000 acres to Richard Oswald in 1766. Mount Oswald was a rice and indigo plantation encompassing what is now Tomoka State Park. Indigo was big business during Oswald’s time and was sent back to England for use as a dye for cloth and paints as well as a bluing agent for laundry. Naturalized indigo can still be found in the area today.
While under British occupation, Florida’s first highway, The King’s Road, was constructed. It covered 106 miles from St. Augustine to New Smyrna Beach and 48 miles from Palatka to Amelia Island.
When the British left in 1785, the meager beginnings of a plantation capital fell into ruin and did not flourish again until the Spanish land grants of the early 1800’s brought planters from the Bahamas. Spain was in possession of Florida from 1783 to 1821 when it became a United States Territory.
James and George Anderson, benefactors of the Spanish land grants, came to Ormond and settled an area that had been a British plantation of earlier times – Mount Oswald. The Dumettes had also settled in Ormond, taking the land grant that included Rosetta Plantation, former holding of the Moultrie family during British occupation. North of the Anderson's (Cobb’s Corner) and Dumettes was the Damietta, the cotton and indigo plantation of the Ormond family. Captain James Ormond I received a 2,000 acre land grant for his Damietta Plantation. Ormond was killed in 1817 by a runaway slave and the family moved back to Scotland.
James Ormond II returned to Damietta with his wife and four children, including James Ormond III in 1820. When James Ormond II died in 1829, his family abandoned Damietta. He is buried about four miles north of Tomoka State Park. James Ormond III would return many years later.
The second Seminole Indian War erupted in 1835, over hunting and fishing grounds and the freedom of movement of the Indians. There was a tremendous uprising against the planters and a massive evacuation to St. Augustine took place. The plantations fell victim to fiery raids. Bulow Ville, one of the most glamorous and wealthy of all the plantations, became a military outpost until the Indians came too close, too often. Soon Bulow Ville became a victim of the strife and only ruins remain today. When the war ended in 1842, the sugar and cotton plantations along the Halifax and Tomoka Rivers were destroyed – never to be restored.
Another economic interest was developing in this area at the outbreak of the Seminole War – ship building. Florida’s live oak trees were used to build military and commerce ships, but trade came to a dramatic decline when wooden warships were replaced by the ironclad ships of the Civil War. However, these timberland owners retained an interest in the area by selling the land to pioneer settlers and keeping the timber rights.
It was one of these sales, made in the 1870’s to men employed by the Corbin Lock Company of New Britain, Connecticut, that brought families to Ormond searching for the perfect orange groves. They bought the Henry Yonge grant on the west banks of the Halifax and, remembering home, named the village New Britain.
During the late 1800s, the area to the south also caught the attention and imagination of wealthy northern tycoons who found the land favorable for investment. One such mogul, Mathias Day, founding father of what was then called Daytona, built Daytona’s first hotel, the Palmetto House, in 1874. On the 26th of July in 1876 the first town meeting of Daytona took place. At this meeting, the town was "officially" named and incorporated and held its first election naming a Mayor (Rev. Dr. L. D. Houston), a Common Council of seven, a Clerk and a Marshall. The name Daytona came to be by honoring the founder of the settlement, Mr. Mathias Day of Mansfield, Ohio.
Other settlements were also being established near at hand during this same period. There were two colonies across the Halifax River, one known as Seabreeze and one by the name of Ormond Beach. These three towns would eventually join forces by voting on January 4, 1926 to incorporate into one city known as Ormond Beach.
In the late 1800’s, brothers John and Andrew Bostrom came to Ormond to homestead. Land sold for about $2 per acre at the time and Andrew built one of the finest residences in this area at the time and named it Bosarve (on what is now Riverside Drive). Their two sisters joined them, keeping an open house for travelers.
John Anderson emigrated to Florida from Maine during this period. He, too, settled on the east side of the Halifax River. His first home, Trappers Lodge, was located in the "wilds" of the peninsula. Later he built a plantation on the Halifax River and named it Santa Lucia after a popular Italian melody. Other early pioneers brought by the Corbin Lock Company included the McNary family and the Dix sisters. These two families were highly involved in the early politics of New Britain. In fact, it was at the Dix sisters’ home on April 22, 1880 that a meeting of the citizens took place to decide if the town should be incorporated.
John Anderson, Andrew Bostrom and James Ormond III became friends during this period. In fact, James Ormond III had recently visited Bosarve Plantation and this visit is said to have been instrumental in the town being named because John Anderson and Andrew Bostrom convincingly swayed the group to name it Ormond in honor of the James Ormond family. The name Ormond was adopted then and there – and so was the banana tree as the City’s emblem.
Travel to and from Ormond was limited to Old Kings Road on the mainland and the Savannah Trail on the peninsula. Crossing rivers along the route was accomplished by ferry or sailboat until the coming of the St. Johns and Halifax Railroad in 1886. In 1887 the first bridge across the Halifax River was built in Ormond, opening up the east coast and stimulating its growth potential. With the bridge and the impending arrival of the St. Johns and Halifax Railroad, the time was right for development. George Penfield (age 14) won the competition for the design of the 75 room Ormond Hotel and golf course. Being far-sighted individuals, John Anderson and J.D. Price bought part of the Bostrom peninsula homestead and built the first wing of the Ormond Hotel. The community celebrated the opening – and New Year’s Eve – on January 1, 1888. A great success, the hotel drew giants of American industry from the cold, Northern winters and their wealth made anything possible. Ransom Olds and Alexander Winton were two of the first racers on the hard packed sand - dead heating down the beach at 57 MPH. Forerunners in the community, Anderson and Price organized the first auto races on the beach.
The later years of the 19th century proved to be a time of growth for Florida, like the rest of the country with the industrial revolution and all that came with it – thanks to industrialists like Henry Flagler. Pouncing on the potential of becoming a railroad giant, he purchased all the existing "little" railroad lines and coordinated a rail system from Amelia Island to Key West. His policy was to incorporate hotels along his line. The Hotel Ormond was enlarged to accommodate 600 guests and became one of his fashionable resorts, especially with winter guests.
The Hotel Ormond management coordinated activities and events and a variety of entertainment for their guests both on and off the premises. Anderson and Price, the former hotel proprietors began a new business of providing tours of the area in chauffeur-driven steam automobiles. They were also tour guides on the Tomoka River Cruise departing from the hotel and serving picnic lunches at their river cabin. The hot lunches and ice-cream were delivered to the cabin overland from the hotel.
Down the road, Commodore Charles Bourgoyne built a community center in Ormond Beach in the early 1900s and organized concerts along the riverfront actively promoting the town's events to travelers.
Anderson and Price were instrumental in the development of Ormond’s Birthplace of Speed reputation. In 1902 they hired W.J. Morgan to promote racing on the beach. The first speed trial was run on the beach in that year. The beach proved to be the ideal race course and over the years a number of famous drivers tested their courage in this new-found sport of auto racing on Ormond’s beaches. By 1904, the Florida East Coast Automobile Association boasted 200 members with names like Vanderbilt, Flagler, Astor, and Gould among them.
During these early racing days, the gray-shingled Ormond Garage was built to accommodate race cars. In this garage the race cars were assembled, modified, serviced and even prayed over. Some of the drivers slept with their cars or in tents outside the garage. It is said that Henry Ford had to sleep on the beach during his first visit to Ormond because he couldn’t afford a room in the hotel. Ormond’s beach race course also produced some records. On January 27, 1906, driving a "steamer," Demegeot (known as the Speed King) reached 122.44 miles per hour drag racing down the beach. The mixture of speed and sand brought new excitement to Ormond.
During the "roaring twenties" prohibition created another area of interest for the locals. The coastline was a perpetual bootlegger warehouse. Local residents living or visiting the beach at Ormond could watch the signal lights from bootleggers at sea. When prohibition officers pursued a boat loaded with rum runners, the liquor was thrown overboard and the locals picked it up off the beach.
Of course, a look at Ormond’s history would not be complete without mentioning one of the most famous residents: John D. Rockefeller. Mr. Rockefeller stated that he would live to be 100 years old. Determined to accomplish this, he became a "health nut" before it was fashionable. He sent his employees to find the most pollution-free place to spend his winters in retirement. They chose Ormond.
In 1914 John D. Rockefeller arrived at the Ormond Hotel and rented an entire floor for himself and his staff. After four winter seasons at the hotel, he purchased the home built by Reverend Harwood Huntington, whose wife was the daughter of the creator of the Pullman Train Car Company, supposedly due to a dispute Rockefeller had with hotel employees. The Casements his winter cottage, was located only a few hundred yards to the south of the Ormond Hotel.
Through the years, Ormond residents became accustomed to have the "world’s richest man" as a neighbor. Visitors to see Mr. Rockefeller in Ormond included such popular personalities of the day as the Prince of Wales, Henry Ford and Will Rogers – to name just a few! Each winter he held the annual Rockefeller Christmas Party at the Casements. He invited his Ormond friends to sit around the tree, share gifts and holiday cheer.
Although it was believed that Rockefeller would live to see 100 years, he died in 1937 at the age of 97 while sleeping in the Casements, his home for over 19 years. After his death, his family put the house up for sale. Rockefeller himself might have been lost to Ormond, but the pride and prestige of his time here was not lost.
In the meantime, Ormond Beach’s reputation as a fashionable winter resort center began to decline. By the outbreak of World War II, the wealthy were vacationing in Palm Beach or Jeckyll Island.
In 1947, the National Association of Stock Car Auto Racing was founded in Ormond Beach. Motorsports gained new ground in 1959 with the opening of the Daytona International Speedway, which has become the World Center of Racing.
By 1970, the Ormond Hotel’s ownership had been changed three times. With new management came new roles. The Casements, sold by the Rockefellers in 1939, also passed from owner to owner numerous times within the next forty years becoming a girls preparatory school and a home for the elderly. In 1959 the property was purchased by the Ormond Hotel Corporation with plans for development. Unfortunately, that never materialized. In 1973 the Casements was purchased by the City of Ormond Beach. Today, after its restoration, it serves the City as a cultural and community center.
Not all of the structures discussed previously have survived. The Bosarve, in later years known as the San Souci Hotel, on Riverside Drive is gone as is the old Coquina Hotel and the Ormond Hotel. Some have remained, however, such as the home of the Dix sisters, the Ormond Beach Women’s Club and the Melrose House.
Ormond Beach celebrated its centennial in 1980 with a pageant and special events. In little over 100 years since the original settlement of New Britain, the City has grown from about 900 acres to more than 15,000 – and is still growing, now spreading north to the Flagler County line and west of Interstate 95.
Recently, Ormond Beach has been changing the blue-collar image it earned from its influx of bikers and spring-breakers. And since the storms of 2004, the rate of change has accelerated dramatically, the mom & pop motels of a long-gone era, now being leveled to make way for super luxury oceanfront condos and the steady influx of baby-boomers. | <urn:uuid:6e309be6-2a6f-4a64-ace7-07f12459683f> | CC-MAIN-2015-35 | http://www.castlesinflorida.com/1242572.html | s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440645396463.95/warc/CC-MAIN-20150827031636-00160-ip-10-171-96-226.ec2.internal.warc.gz | en | 0.977814 | 3,229 | 3.5625 | 4 |
Problem: you have a bunch of gorillas coming over for a party and you have no idea what music they like. Solution: apparently there is none. That’s because, at least according to this study, gorillas have individual responses to different kinds of music. Here, researchers observed three gorillas (Koga, Sydney, and Lily) listening to rainforest sounds (natural), Chopin (classical), or Muse (rock). Although all the gorillas changed behaviors when listening to the rainforest sounds, Koga oriented toward the speakers playing Muse 40% of the time, while Sydney did it 10%, and Lily never did (graph below). Maybe next time they should try Gorillaz.
“Several studies have demonstrated that auditory enrichment can reduce stereotypic behaviors in captive animals. The purpose of this study was to determine the relative effectiveness of three different types of auditory enrichment-naturalistic sounds, classical music, and rock music-in reducing stereotypic behavior displayed by Western lowland gorillas (Gorilla gorilla gorilla). Read More
When a friend has a stomach bug and you hold her hair back while she blows chunks, are you at risk for inhaling aerosolized virus? Well, that’s exactly what these scientists wondered. But who wants to spend months hanging out at the hospital in the hopes that someone with a stomach bug walks in and lets you measure how many viral particles get aerosolized when they puke? Let’s just go ahead and say (or hope) no one. So, to answer the question, these scientists built a vomit machine–that even included a face–to replicate what happens to the chunks that get blown when we hurl. But what to put in the vomit machine? Why, artificial vomit of course! (And virus. Don’t forget the virus.) Finally, to measure the amount of aerosolized virus, they collected air samples from a plexiglass box that surrounded the “face” of the vomit machine. The result of these shenanigans? Well, lets just say the harder they puke, the worse your chances are.
“Human noroviruses (NoV) are the leading cause of acute gastroenteritis worldwide. Epidemiological studies of outbreaks have suggested that vomiting facilitates transmission of human NoV, but there have been no laboratory-based studies characterizing the degree of NoV release during a vomiting event. The purpose of this work was to demonstrate that virus aerosolization occurs in a simulated vomiting event Read More
The saying goes “to each his own,” and that definitely holds true for fetishes. This paper describes a person with “eproctophilia”, which is the term for when someone is sexually aroused by flatulence. The first half of the article is included below. Warning–it’s a bit of a wild ride!
“Olfactophilia (also known as osmolagnia, osphresiolagnia, and ozolagnia) is a paraphilia where an individual derives sexual pleasure from smells and odors (Aggrawal, 2009). Given the large body of research on olfaction, it is not surprising that, in some cases, there should be an association with sexual behavior. As Bieber (1959) noted, smell is a powerful sexual stimulus. Furthermore, the erotic focus is most likely to relate to body odors of a sexual partner, including genital odors.
One subtype of olfactophilia is eproctophilia. This is a paraphilia in which people are sexually aroused by flatulence (Aggrawal, 2009). Therefore, eproctophiles are said to spend an abnormal amount of time thinking about farting and flatulence and have recurring intense sexual urges and fantasies involving farting and flatulence (Griffiths, 2012a). To date, there has been no academic or clinical research into eproctophilia. Therefore, the following account presents a brief case study of an eproctophile and given a pseudonym (Brad). Brad gave full consent for his case to be written up on the understanding that he could not be identified and that he was guaranteed full anonymity and confidentiality. Read More
If you think gazing into someone else’s eyes for a long time becomes uncomfortable rather quickly, imagine if you were a subject in this study, and were asked to stare into a stranger’s eyes for ten whole minutes. Turns out that it’s a lot more than just awkward. They actually started to experience hallucinations, likely brought on by “a dissociative state induced by sensory deprivation.” So there you have it: look deeeeep into my eyes… at your own risk!
“Interpersonal gazing in dyads, when the two individuals in the dyad stare at each other in the eyes, is investigated in 20 healthy young individuals at low illumination for 10-min. Results indicate dissociative symptoms, dysmorphic face perceptions, and hallucination-like strange-face apparitions. Dissociative symptoms and face dysmorphia were correlated. Strange-face apparitions were non-correlated with dissociation and dysmorphia. These results indicate that dissociative symptoms and hallucinatory phenomena during interpersonal-gazing under low illumination can involve different processes. Read More
Unlike many other languages, most English words are not innately gendered. But apparently things aren’t so simple when it comes to numbers. The authors of this study have spent several years studying whether people perceive numbers as having genders, and whether this perception differs between men and women. Here, they asked college students to rate the masculinity and femininity of different numbers shown on a computer. They found that odd numbers tended to be perceived as male (as well as having the characteristics of being “independent and strong”), while even numbers were perceived as female (and “friendly and soft”). Interestingly, zero was classified as neither male nor female, and women tended to see numbers as more gendered than men. Sorry, lady in the photo — time to put the 9 down. That number is not for you!
“Do numbers have gender? Wilkie and Bodenhausen (2012) examined this issue in a series of experiments on perceived gender. They examined the perceived gender of baby faces and foreign names. Arbitrary numbers presented with these faces and names influenced their perceived gender. Specifically, odd numbers connoted masculinity, while even numbers connoted femininity. In two new studies (total N = 315), we further examined the gendering of numbers. The first study examined explicit ratings of 1-digit numbers. We confirmed that odd numbers seemed masculine while even numbers seemed feminine. Although both men and women showed this pattern, it was more pronounced among women. Read More
As a society, we are happily ensconced in the internet era. And we’re sure that you, oh wonderful blog readers, are among the first to use the internet to find information about candidates come election time. And by and large, we assume the internet search engines we use to find that information are unbiased. But what if they aren’t? Could the order of search results skew our perceptions of possible candidates? Well, this paper explores that very scenario. The result? Let’s just say that we’re happy that Google’s motto is “don’t be evil.”
“Internet search rankings have a significant impact on consumer choices, mainly because users trust and choose higher-ranked results more than lower-ranked results. Given the apparent power of search rankings, we asked whether they could be manipulated to alter the preferences of undecided voters in democratic elections. Read More
Tired of slathering on sunscreen every time you want to spend some time outside? Try eating chocolate instead! Chocolate naturally contains very high levels of antioxidants (flavanols), but these are mostly lost during conventional chocolate processing. In this study, the researchers tested whether simply eating high-flavonol chocolate could help protect people’s skin from the effects of the sun (measured here by “minimal erythema dose” [MED], or the amount of UV exposure needed to produce a sunburn). Surprisingly, after the subjects ate 20 g of high-flavanol chocolate daily for 12 weeks, their MEDs doubled compared to a control group who ate conventional low-flavanol chocolates. So go ahead, bring your chocolate to the beach … but maybe think twice about the bikini?
Cocoa beans fresh from the tree are exceptionally rich in flavanols. Unfortunately, during conventional chocolate making, this high antioxidant capacity is greatly reduced due to manufacturing processes.
To evaluate the photoprotective potential of chocolate consumption, comparing a conventional dark chocolate to a specially produced chocolate with preserved high flavanol (HF) levels.
A double-blind in vivo study in 30 healthy subjects was conducted. Fifteen subjects each were randomly assigned to either a HF or low flavanol (LF) chocolate group and consumed a 20 g portion of their allocated chocolate daily. The minimal erythema dose (MED) was assessed at baseline and after 12 weeks under standardized conditions. Read More
As we’ve reported previously, heat is really bad for sperm — to the extent that a polyester scrotum sling is actually an effective form of contraception. So it’s probably not surprising that the opposite might be true: that not wearing underwear at all might be a fertility booster. Enter this author, who argues that traditional Scottish kilts (and perhaps other traditional skirt-like garments) may have originally arisen because they reduce scrotal temperature and thus increase fertility. Any volunteers out there want to help test this hypothesis?
“BACKGROUND AND AIMS:
There are anecdotal reports that men who wear (Scottish) kilts have better sperm quality and better fertility. But how much is true? Total sperm count and sperm concentration reflect semen quality and male reproductive potential. It has been proven that changes in the scrotal temperature affect spermatogenesis. We can at least affirm that clothing increases the scrotal temperature to an abnormal level that may have a negative effect on spermatogenesis. Thus, it seems plausible that men should wear skirts and avoid trousers, at least during the period during which they plan to conceive children. Read More
Maybe it’s because of the various traumatic ways I lost my baby teeth, but whatever the reason, teeth feature prominently in my nightmares. And now I have yet another vision to add to the bag of horror: “intranasal teeth” (literally, teeth inside the nose). Apparently (and horrifyingly), it’s not unheard of to have teeth buried deep inside one’s nose. There are a number of ways this can come about, but typically it happens to children. And no wonder–the image to the left shows what the front of a child’s face looks like when the adult teeth are about to grow in; from nose to chin, it’s pretty much ALL teeth. Not surprisingly, some of these teeth can get a bit lost and grow into the nasal cavity. Other children fall, lose a tooth, and get it stuck in their nose. Like this one:
“BACKGROUND: Intranasal teeth are uncommon. Causes include trauma, infection, anatomical malformations and genetic factors. Read More
You might think that after centuries of breeding, racehorses have reached their peak speeds. And previous studies supported that. But not this one! According to this study, which used “a much larger dataset covering the full range of race distances and accounting for variation in factors such as ground softness,” racehorses have gotten faster over the past 150 years or so, an improvement evident even in the past 15 years. Holy Secretariat!
“Previous studies have concluded that thoroughbred racehorse speed is improving very slowly, if at all, despite heritable variation for performance and putatively intensive selective breeding. Read More | <urn:uuid:3fb0c693-0d26-4017-b90b-03680bea3838> | CC-MAIN-2015-35 | https://blogs.discovermagazine.com/seriouslyscience/ | s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440645315227.83/warc/CC-MAIN-20150827031515-00221-ip-10-171-96-226.ec2.internal.warc.gz | en | 0.946554 | 2,491 | 3.21875 | 3 |
Appendix C. Trematode taxonomy operational procedures for Table 1 prevalence data.
When assessing the overall data presented in Table 1, there are some taxonomic issues that might affect our reported prevalence for certain trematode species. In addition, for some of the species we report, questions have been raised in the literature regarding their distinctions as separate species. Due to the taxonomic issues described in the species below, their prevalences as reported in the Literature columns of Table 1 may not accurately reflect their true prevalences in nature because species identifications were not standard across all studies in the literature. The most notable of these issues are described below.
First, two trematode species, Cercaria parvicaudata and Renicola roscovita, are typically distinguished based on the color of their sporocysts, which are "orange" for Cercaria parvicaudata and "cream" for R. roscovita (James 1968a, Stunkard 1971); this can obviously be highly subjective (Galaktionov and Skirnisson 2000). Furthermore, these two species have been debated in the literature as to their status as separate species (e.g., Stunkard 1950, Galaktionov and Skirnisson 2000), and some authors have lumped them as Renicola spp. (e.g., Granovitch et al. 2000), referred to Cercaria parvicaudata as Renicola parvicaudata (Lauckner 1980), or described Cercaria parvicaudata as a synonym of R. roscovita (Pohley 1976). For our study, we have used James’s Littorina sp. trematode taxonomic key (1968a)—primarily the distinction based upon sporocyst color—in order to distinguish the two species, which Stunkard (1950) reported as the only distinct characteristic: "except for the difference in color of the sporocysts, the two species [Cercaria parvicaudata and R. roscovita] are almost identical." However, color distinctions are subjective and may not remain universal from researcher to researcher, but because we have used this technique for distinguishing species across populations in our field surveys and because Cercaria parvicaudata and Renicola roscovita have been described in Europe and North America in both the field and literature, it should have little affect on our overall species richness counts and comparisons between the regions.
Second, the two Himasthla species, H. elongata and H. littorinae, can also be difficult to distinguish morphologically (Galaktionov and Skirnisson 2000) and have sometimes been lumped in the literature as Himasthla spp. (e.g., Matthews et al. 1985, Galaktionov and Bustnes 1995, Mouritsen et al. 1999). For our study, we have distinguished these species using James' Littorina sp. trematode taxonomic key (1968a) and descriptions by Stunkard (1966, 1983). In James' key, H. elongata was reported as H. leptosoma, which was a misidentification. H. leptosoma is a different species and uses the snail, Hydrobia ulvae, as its first-intermediate host (Galaktionov and Skirnisson 2000). Therefore, what James (1968a) reported as H. leptosoma in his key is actually the species H. elongata, and we have used this key in distinguishing H. elongata and H. littorinae.
Second, the "pygmaeus" microphallid group is a group of four species (Microphallus pirifomis, M. pygmaeus, M. pseudopygmaeus, and M. triangulatus) that are morphologically similar due to their close phylogenetic relationships (Galaktionov et al. 2004) and also due to their infection life cycle, which uses the snail as both a first- and second-intermediate host. When the microphallid species metacercariae mature within their snail hosts (Galaktionov et al. 2004), they often become difficult to distinguish to species level, and there is still debate regarding the morphological differentiation of some of these microphallid species (Galaktionov and Skirnisson 2000) though recent molecular evidence has shown M. piriformis and M. pygmaeus to be genetically distinct while M. pseudopygmaeus and M. triangulatus form a species complex (Galaktionov et al. 2004). In our investigations, we primarily observed these trematodes in their metacercarial state (we only observed cercariae on one or two occasions); and therefore, we could not be confident in distinguishing the four microphallids of the "pygmaeus" group and so have lumped them in both Europe and North America. In the literature, these four microphallids have also been lumped either as Microphallus spp. or as microphallids of the “pygmaeus” group (Granovitch 1992, Galaktionov and Bustnes 1995, Saville et al. 1997, Galaktionov and Skirnisson 2000) and prior to the understanding that the “pygmaeus” group was four species, they were described just as Microphallus pygmaeus (the initially described microphallid (Galaktionov and Skirnisson 2000)). In sum, the highly similar morphological details of these trematode species make it very unlikely that authors throughout the years would have applied a consistent standard to differentiate these species correctly, which is why we have chosen to lump the species in our investigation.
Overall, we are confident in our species identifications, barring these individual cases described above where there has been debate or ambiguousness in the literature. In such cases, we have relied on morphological characters found across sources or in the case of the "pygmaeus" group of species, we have lumped them for reasons described in the previous section. As we have applied these criteria in both Europe and North America, there should be little effect on overall trematode species richness counts and comparisons between the regions and therefore should not greatly impact our conclusions based upon the patterns we observed in species richness between the regions.
Galaktionov, K., and J. Bustnes. 1995. Species composition and prevalence of seabird trematode larvae in periwinkles at two littoral sites in North-Norway. Sarsia 80:187191.
Galaktionov, K., and K. Skirnisson. 2000. Digeneans from intertidal molluscs of SW Ireland. Systematic Parasitology 47:87101.
Galaktionov, K., S. A. Bulat, I. A. Alekhina, D. H. Saville, S. M. Fitzpatrick, and S. W. B. Irwin. 2004. An investigation of evolutionary relationships within "pygmaeus" group microphallids (Trematoda: Microphallidae) using genetic analysis and scanning electron microscopy. Journal of Helminthology 78: 231236.
Granovitch, A. 1992. The effect of trematode infection on the population structure of Littorina saxatilis (Olivi) in the White Sea. Pages 255-263 in J. Grahame, P. Mill, D. Reid, editors. Proceedings of the Third International Symposium on Littorinid Biology. The Malacological Society of London, London, UK.
Granovitch, A., S. Sergievsky, and I. Sokolova. 2000. Spatial and temporal variation of trematode infection in coexisting populations of intertidal gastropods Littorina saxatilis and L. obtusata in the White Sea. Diseases of Aquatic Organisms 41:5364.
James, B. 1968a. The distribution and keys of species in the family Littorinidae and of their digenean parasites, in the region of Dale, Pembrokeshire. Field Studies 2: 615650.
James, B. 1969. The Digenea of the intertidal prosobranch, Littorina saxatilis (Olivi). Z. Zool. Syst. Evol. Fursch 7:273316.
Lauckner, G. 1980. Diseases of Mollusca: Gastropoda. Pages 311424 in O. Kinne, editor. Diseases of Marine Animals. Biologische Anstalt Helgoland, Hamburg, Germany.
Lauckner, G. 1985. 3. Diseases of Aves (Marine Birds). Pages 627637 in O. Kinne, editor. Diseases of Marine Animals. Biologische Anstalt Helgoland, Hamburg, Federal Republic of Germany.
Matthews, P., W. Montgomery, and R. Hanna. 1985. Infestation of littorinids by larval Digenea around a small fishing port. Parasitology 90:277287.
Mouritsen, K., A. Gorbushin, and K. Jensen. 1999. Influence of trematode infections on in situ growth rates of Littorina littorea. Journal of the Marine Biological Association of the United Kingdom 79:425430.
Pohley, W. 1976. Relationships among three species of Littorina and their larval Digenea. Marine Biology 37:179186.
Sannia, A., and B. James. 1977. The Digenea in marine molluscs from Eyjafjordur, North Iceland. Ophelia 16:97109.
Saville, D., K. Galaktionov, S. Irwin, and I. Malkova. 1997. Morphological comparison and identification of metacercariae in the 'pygmaeus' group of microphallids, parasites of seabirds in western palaearctic regions. Journal of Helminthology 71:167174.
Stunkard, H. 1950. Further observations on Cercariae parvicaudata Stunkard and Shaw, 1931. Biological Bulletin 99:136142.
Stunkard, H. 1966. The morphology and life history of the digenetic trematode, Himasthla littorinae sp. n. (Echinostomatidae). Journal of Parasitology 52:367372.
Stunkard, H. 1971. Revue critique renicolid trematodes (Digenea) from the renal tubules of birds. Annales de Parasitologie (Paris) 46:109118.
Stunkard, H. 1983. The marine cercariae of the Woods Hole, Massachusetts region, a review and a revision. Biological Bulletin 164:143162. | <urn:uuid:9ccad519-46d1-499a-ac9f-da98da5e75f6> | CC-MAIN-2015-35 | http://esapubs.org/archive/ecol/E089/064/appendix-C.htm | s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440645257890.57/warc/CC-MAIN-20150827031417-00161-ip-10-171-96-226.ec2.internal.warc.gz | en | 0.855003 | 2,344 | 2.921875 | 3 |
Global Warming - Misconceptions
Global warming is the warming of the planet above the temperature expected from recent experience. It is a concern because the global average temperature currently appears to be rising far faster than ever before, and it is thought that the activities of the human population over the last 150 years or so may be the cause.
Global warming is just a theory and scientists disagree about whether it is happening and whether it is anything to do with human activity.
There is very strong agreement in the scientific community that global warming is happening and that it is a result of man's activities in releasing carbon into the atmosphere that had been locked away in fossil fuels for millions of years.
The evidence that the earth is warming is overwhelming
There will always be people who argue the opposite to any idea and they are vital to avoid blind adherence to what may turn out to be incorrect and to encourage further investigation and refinement of ideas.
There comes a point however where an alternative idea becomes something only considered by the occasional maverick (and don't mavericks love the idea of being a maverick?) and then just becomes something that isn't considered seriously any more.
There is also the fact that many of the articles that doubt global warming are sponsored by the oil and related industries.
Many who doubt mans influence on global warming by releasing carbon dioxide from fossil fuels are driven by a dislike of taxes that are being raised by politicians as a result. Arguments frequently start with not wanting to pay tax and end up with deciding global warming is not happening.
97% of climate scientists agree that climate warming trends over the last 100 years are most likely the result of human activities.
The amount of carbon dioxide hasn't risen enough and there is only a tiny amount of it anyway - it can't possibly make any difference.The amount of atmospheric CO2 has gone from 0.0284% to 0.04% (or 400 parts per million, ppm) between 1832 and 2015. That's a 0.0116% increase in the amount of CO2 in the atmosphere, or an approx. 40% rise in the amount of CO2 compared to pre-industrial levels.
The amount fluctuates slightly over the course of a year.
In 2012 it first reached 0.04% in the Arctic, in 2013 it reached
this in Mauna Loa, Hawaii where systematic measurements of carbon
dioxide first started in 1958. In May 2015 it reached 0.o4%
or 400 parts per million as a global average for the first time.
Half of this rise from the 1832 starting point has happened
since 1980. Current
That part of the atmosphere known as the stratosphere is 50km thick, the atmosphere goes beyond this to 690km, the edge of the "thermosphere" and beyond, but this will do for now.
00.04% of this is a layer works out at 20m thick if all the
CO2 was collected in a single layer (that's over
To try and put it in some kind of context, compare this to the ozone layer. The ozone above our heads protects us from ultra violet rays from the sun, without it life on earth as we know it would be impossible (the immediate effects would be appalling sunburn on overcast days). If the ozone above our heads were collected together in a continuous layer it would be about 3mm thick (1/8th of an inch). Hard to believe? but true, by comparison 0.04% at 20m is a huge thick duvet.
Global warming can't be true as the temperature in some places is stable or falling.
Global warming models differ as to the exact effects of where and to what extent temperatures will change, but they all have one thing in common - that the effect of global warming is not uniform.
Different parts of the world react in different ways and one of them is that in the early stages (where we are now) the temperature in some parts of the world will actually fall or at least remain stable which is what is being observed in East Antarctica for instance. Eventually as warming advances, then most or all parts of the world will rise in temperature, though again - not at a uniform rate.
Global warming is a good thing as I don't like the cold.Well maybe from a egocentric point of view it could be a good thing. I live in the UK and many projections show that this will increasingly become a more predictably warm, more attractive and more pleasant place to live.
The areas that benefit however will be greatly outweighed by the parts of the earth that become too hot and dry to inhabit. Coastal towns and cities, including some of the worlds greatest cities may be flooded and so have to be abandoned or build extensive flood defence barriers, if that is possible.
Hundreds of millions of people could suffer from global warming, the winners will be a small proportion by comparison.
Volcanoes contribute most of the CO2 released into the atmosphere each year.Less than 1% of annual CO2 emissions come from volcanoes.
"Comparison of CO2 emissions from volcanoes vs. human activities.
Scientists have calculated that volcanoes emit between about 130-230 million tonnes (145-255 million tons) of CO2 into the atmosphere every year (Gerlach, 1999, 1992). This estimate includes both subaerial and submarine volcanoes, about in equal amounts.
Emissions of CO2 by human activities, including fossil fuel burning, cement production, and gas flaring, amount to about 22 billion tonnes per year (24 billion tons) [ ( Marland, et al., 1998) - The reference gives the amount of released carbon (C), rather than CO2.].
Human activities release more than 150 times the amount of CO2 emitted by volcanoes."
The ozone hole causes global warming.The ozone hole is the result of a loss of ozone in the upper levels of the stratosphere which reduces the ability of the atmosphere to absorb harmful ultra-violet light from the sun. This is caused mainly by a group of chemicals called CFC's.
The hole in the ozone layer is a different problem with different causes to global warming, although a result of the ozone hole has been to cause an amount of cooling in Antarctica which has balanced warming to some degree.
If you do the sums, it just doesn't add up, we don't produce enough carbon dioxide to make any kind of difference.
The atmosphere consists of a VAST amount
of air, so even billions of tons of CO22 doesn't
push the actual portion up that much. Unfortunately though
CO2 is incredibly effective at being a greenhouse
gas and so tiny amounts make big differences. The figure in
the air is around 0.04%. There's also the fact that the
oceans have been absorbing a lot of the CO2 produced,
so all of that extra produced is not actually in the atmosphere.
The amount of carbon dioxide in the air has increased in the last 150 years, It has been doing so since the industrial revolution and is currently outside of any observed natural cycle:
The above diagram is used courtesy of Robert A. Rohde Global warming Art
How much carbon dioxide is released by a car?
1g of octane (the chemical name for the main
component of petrol or gasoline) takes 3.5g of oxygen to burn it
fully. This is 1 molecule of octane to 12.5 molecules of oxygen.
The complete combustion (burning) of 1g of octane produces just over 3g of CO2 and 1.42g of water, 8
carbon dioxides and 9 water molecules.
This is the counter-intuitive part in that 1g of a tangible substance (petrol - octane - an easily visible liquid) is producing more than 3g of CO2 an invisible, odorless, colorless gas. But the science all adds up correctly.
This means that for a petrol (gasoline) car, the mass of CO2 emitted is 3 times the mass of petrol burnt.
Worth thinking about next time you're standing by the pump as all the fuel goes flooding into the tank, 50L of petrol weighs about 37kg, this is the equivalent of 111kg of carbon dioxide being released per tank full.
The extra water vapor created by kettles, showers, baths (swimming and bathing types), steam jet cleaning, car washes, cooling towers, industry, angry teachers etc. contribute to global warming.The amount of water vapor the atmosphere can hold is largely a function of the temperature of the air, if water vapor is kicked out into the atmosphere but the temperature isn't right, the physics won't work and it will condense out, carbon dioxide doesn't work like that.
Water vapor may add to the effects of global warming by adding to the earth's heat retentive blanket in a positive feedback effect. More carbon dioxide means warmer temperatures, means more water vapor in the air, means warmer temperatures. Water vapor alone is not the reason for global warming and certainly not the starting point.
Carbon dioxide released from fizzy drinks released is a contributing factor to Global Warming.The good news is that nearly all industrial carbon dioxide is reclaimed from processes that release it in the first place rather than generate it specifically for another use - so it is nearly always recycled.
Of course that does mean that
it has been released, but it's cutting down on the amount
a little at least.
The carbon dioxide that fizzes your drinks up today could have come from burning fuel, yeast that were busy making beer or some chemical process that releases it as a by-product.
Exercise produces carbon dioxide, so if I don't do any I'm helping the planetWe breathe out carbon in carbon dioxide as a by-product of respiration, the more we respire the more we produce. The carbon comes into our body in our food and the oxygen comes in by breathing.
This carbon in the food we eat is a part of the natural carbon cycle whereby plants and animals are balanced. It does not contribute to global warming as it will be recycled when it is taken in by a plant for photosynthesis via carbon dioxide in the air.
The problem with carbon dioxide and global warming comes from carbon released from fossil fuels where the carbon has lain locked up in a carbon-sink for many millions of years and is now released into the atmosphere. If we didn't burn fossil fuels (or do other processes such as making cement) then mankind would pretty much carbon-neutral - as we were before the industrial revolution.
Picture credits, copyright pictures used by permission: Petermann Glacier - NASA Goddard Space Flight Center, used under Creative Commons 2.0 license | <urn:uuid:6852c249-9670-48ce-954c-66946704cb0f> | CC-MAIN-2015-35 | http://www.coolantarctica.com/Antarctica%20fact%20file/science/global_warming3.php | s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440645167592.45/warc/CC-MAIN-20150827031247-00039-ip-10-171-96-226.ec2.internal.warc.gz | en | 0.951192 | 2,221 | 2.90625 | 3 |
CPSC100: Practical Computer Fluency (F'05)
Week 2 Lecture Notes: Computer Fundamentals and Operating Systems
- Despite the variety of computer types, models, and technologies, all are made up of four primary components:
input devices or connections, a central processing unit or processor, storage devices, and output devices or connections.
The following diagram illustrates the relationships between these components.
Data flows in the directions of the arrows.
Not all computers are connected to a network as illustrated.
- If you were to compare this to the typical human, the input would be the 5 senses, the output would mostly be muscle contractions,
the storage is a person's memory, and the CPU is the brain.
The "program" that runs on the brain CPU might be called your mind, your personality, or even your soul.
- Input devices provide data from the outside world to the processor:
keyboards, mice, joysticks, microphones, digital cameras, scanners, and so forth, including various sensors for embedded computers that
measure temperature, speed, throttle position, or whatever.
- Output devices allow the processor to communicate the results of its work to the outside world:
monitors, printers, speakers, LED and LCD displays, and all sorts of different servos and actuators for embedded computers that
change valve positions, angle rudders, pump the brakes--the list is endless.
- Storage devices are used by the processor to temporarily or permanently store data so that it can be retrieved at a later time.
"Volatile" storage loses its contents when the computer power is turned off (the human analogy would be your own memories).
The most common example of this type of storage is Random Access Memory, or RAM, or just "memory".
"Persistent", or "non-volatile", storage does not lose its contents when the power is turned off:
hard disks, floppy disks, CD-ROMs, DVDs, and so forth
(the human analogy would be books, cave drawings, and oral histories passed from generation to generation).
- The Central Processing Unit, or just "processor" for short, is the engine or brain of a computer.
The processor executes a series of instructions that gather data from the input devices, occasionally store intermediate
results using the storage devices, and then produce final results suitable for the output devices.
- The diagram above also shows a network of some sort (perhaps the Internet, or maybe a cell phone network).
The computer's interface with a network involves both input and output devices, although these may be physically incorporated into a
single piece of hardware (e.g. a network interface card, or NIC). This is because the computer both sends messages out to a destination
computer somewhere on the network, and also listens for messages that arrive as input to the computer.
- All of the arrows on the diagram are typically realized by cables of some sort, or they may be wireless (radio or infrared) signals,
or they may be "wires" built in to the computer's main circuit board (sets of these wires, or "traces", are often called "busses").
There are many, many types of these cables, each with its own type of connector:
coaxial, serial, parallel, USB, FireWire, ribbon, twisted-pair, Ethernet, and so on.
The Central Processing Unit
- The Central Processing Unit (CPU) is the brains of the computer.
It's only about the size of a fingernail, but is encased in a large plastic enclosure with many electrical
pin connectors sticking out. Recent CPUs also have large heat sinks and sometimes even little fans to keep
- CPUs are purely digital devices. This means that they only understand discrete numerical values.
In fact, the only numerical values a CPU understands is zero and one: a numbering system called "binary".
Ultimately, everything we do with a computer must be translated into those binary values.
- The instructions that CPUs understand are very simple: fetch a value from memory, perform simple
calculations, store a value in memory, and compare two values and jump to another instruction depending
upon the outcome. Other than these jumps, the CPU simply executes each instruction in turn, one by one.
- The CPU is hooked to a crystal clock that ticks very quickly. Each time the clock ticks, the CPU
performs one of these very simple operations (often it takes multiple ticks to perform one operation, but
never less than one).
- These ticks are produced at a certain frequency.
Apple ][+ era computers (The Mostek 6502 CPU) had a frequency of about 1.5MHz (1.5 million ticks per second).
If you look very closely at the internal video display used by the Terminator in the first movie, it
appears that Cyberdyne Systems Model 101 is built using a Mostek 6502 CPU.
- The most recent CPU chips from Intel run at a frequency of over 2GHz
(more than 1,000 times that of the first Apples).
But frequency alone shouldn't be used as criteria for comparing the speed of CPUs, particularly if the chips
are from different product lines or from different manufacturers.
- In 1964, Gordon Moore (who co-founded Intel in '68) announced that the number of transistors on
computer chips appeared to be doubling every 18 months. The trend has continued to this very day.
It also loosely translates to a doubling of CPU speed every 18 months.
- Software is that part of the computer that you cannot kick.
- You may think of software as the recipe that a CPU must follow in order to perform a task.
- Computers are very literal: they will only do exactly what you tell them to do--even if you really meant
to tell them to do something else (this is what we call a bug).
- Computers are also deterministic: given the same input, they will always produce the same output.
This is a good thing, because otherwise it would be impossible to program computers to do the same thing
twice (although quantum theory heralds the day when computers don't behave deterministically).
- A program is just a list of instructions to the CPU, that when executed, turn into what we think of as software.
A program implements an algorithm (a mathematical formula or set of instructions) in the "machine language" of the CPU.
The Boot Process
- Before any computer can perform useful work, it must first initialize itself using a process known as bootstrapping,
or simply the boot process.
When power is first applied to a computer, it is in a random state: all of the volatile storage devices are filled with
To move from this chaotic state to an ordered state involves a bit of effort (entropy must be reversed).
Most PCs follow a similar set of steps to bootstrap themselves, as outlined in the following points.
Compare this to your own experience of first waking in a hotel room while on vacation: you need to do some work to:
remember where you are and why, get up, get dressed, shower, and so forth.
You can't really perform useful work until all of that has been done.
- A special chip in the computer holds what is called the BIOS (Basic Input/Output System),
essentially just the instructions needed by the PC to get things started.
The first thing the BIOS does is execute the POST (Power On Self Test) which checks the video card
(so that any errors can be displayed to the user--if the video card doesn't work, then the BIOS beeps a few times
to indicate the problem).
- The BIOS then checks that the other major devices are connected and functional:
keyboard, mouse, internal busses (circuit board "wires" connecting critical components).
- The next step is to verify that the RAM (volatile storage) in the computer works.
The BIOS writes to each location in memory and then reads the same data back to make sure that the memory chips are
This step can take a few seconds, depending upon how much memory is installed.
- The BIOS also checks the main storage devices connected to the system, including floppy drive(s), CD-ROM drive(s),
and hard drive(s).
- The BIOS then starts looking for an operating system to load. It usually checks the floppy first, then the CD-ROM,
and finally the hard drives.
As soon as it finds something promising, it loads the first "chunk" of data from the storage device into memory and tells
the processor to execute the instructions found in that chunk. The BIOS's work is now done.
- The first chunk begins to execute. It should contain just enough instructions to load the next chunk stored on the
disk into memory and execute those instructions.
There may be many such steps, but the end result is to load the operating system into memory and start it running.
- The operating system (OS) first initializes itself, and then loads any special device drivers (little programs that
allow the OS to communicate with specific types of hardware).
On Windows, little dots or a progress bar are displayed while this is going on.
- Once the drivers are loaded, the OS runs any programs that are supposed to run every time the computer is booted up
(e.g. virus scans).
- The operating system then runs a program that allows a user to log on to the system with a username and password.
Some operating systems skip this step entirely if they're not meant to be used by more than one person.
- The operating system then runs a program called a "shell" and loads any of your user preferences (colours and so forth).
This is the program that displays the graphical desktop in Windows or Macintosh (and others), or a text-based prompt in
other operating systems.
This shell program runs the entire time you use the computer and allows you to interact with other programs
(e.g. word processors, web browsers, calculators).
- When sold as shrink-wrapped products, we know operating systems by such names as Microsoft Windows, Apple Macintosh,
UNIX (and its variants Linux, Solaris, AIX, HP-UX, etc.), OS/2, and even ones you may never have heard of: OS/390, OS/400,
Be, CP/M, VMS, TOPS, MVS, ITS, etc.
- But those products include a whole lot more than just the operating system program: web browsers, disk utilities,
graphical user interface shells, file managers, calendars, e-mail programs, and so forth.
- An operating system (OS) is a program that we never interact with directly.
The operating system has two main responsibilities:
- Manage hardware and software resources so that programs may use these resources without conflict.
- Provide a consistent and hardware-independent programming interface for software development.
- The first of these responsibilities is most important for operating systems that can run more than one program at a time
(almost any OS with which you might be familiar). These programs are completely unaware that other programs are also running.
Suppose that two programs that are currently running each wish to display something on the screen, save a file to the hard disk,
and play a sound on the speakers. If they both tried those simultaneously, they would likely overwrite each other's efforts: producing
a garbled screen display, a corrupted file containing interspersed bits written by both programs,
and some random noise from the speakers.
The operating system's job in such a scenario is to isolate each program from the hardware devices by playing a "traffic cop" role.
The programs are still allowed to use the hardware, but they always do so through the intermediating OS, which prevents the programs from
interfering with each other (perhaps by isolating the two programs' display into separate windows, by placing the saved files in two
different locations, and by letting only one program at a time use the speakers).
The operating system does this by essentially restricting each running program to its own "virtual computer". If a program ever tries
to break out of this virtual environment (usually because of a programming defect, or "bug"), the operating system can detect this
condition and halt the offending program. You may get an error message (perhaps a General Protection Fault), and you will lose any unsaved
data in that application, but the OS should be able to restrict the damage to only that virtual computer; all of the other programs
should continue running normally.
- Because the OS must be an intermediary between programs and the computer hardware, this leads directly to the second responsibility:
The OS hides the specifics of any particular piece of hardware, proving only a standard way of accessing hardware of that nature. This
means that application programs (word processors, spreadsheets, games, etc.) do not have to be written for specific hardware, only for
specific operating systems. In the days of yore, this was not the case, and WordPerfect (the champion word processor of its day) had to
include hundreds of printer definition files for every conceivable printer make and model on the market. Today, the operating system
loads just one such device driver for a particular printer (or any other device) and all programs simply ask the operating system to
print something, none the wiser about the model of printer connected to the computer.
- How does a single computer run many programs simultaneously? The operating system manages this by letting each program in turn use
the processor for a short period of time (microseconds or nanoseconds). The switch is so fast that we don't notice that only one program
is really running at any one time. | <urn:uuid:1c156d72-1d3c-4bde-932e-a02944331561> | CC-MAIN-2015-35 | http://college.yukondude.com/2005_09_cpsc100/html/note.php?note=02%5ELecture_Notes%5EComputer_Fundamentals_and_Operating_Systems.tpl | s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440645176794.50/warc/CC-MAIN-20150827031256-00339-ip-10-171-96-226.ec2.internal.warc.gz | en | 0.925706 | 2,879 | 3.796875 | 4 |
By Daniel Frezza
The action of The Winter’s Tale is often seen as progressing from winter's death to spring's rebirth. Shortly after the play begins Camillo says "I think this coming summer the King of Sicilia means to pay Bohemia the visitation which he justly owes him." (1.1.5; Arden Shakespeare Third Series, ed. John Pitcher [London: Methuen Drama, 2010]) The line implies that the season is not summer. “A sad tale's best for winter” says young prince Mamillius as he plays with queen Hermione’s attendants in act 2 (2.1.25) Hence the common view that the first part of the play happens in winter. But a literal reading of several lines doesn’t reveal Shakespeare’s complex interplay of seasonal imagery and what we may call the action’s “emotional temperature.” A deeper appreciation of the play may be attained by viewing the action as encompassing all seasons, often commingled.
What temperature, what season does Leontes’s sudden, rash jealousy evoke in your mind? Probably not winter. “Too hot, too hot,” (1.2.108) he mutters, describing what he thinks he sees in the relationship between his wife Hermione and Polixenes, his dearest friend. That these words also describe Leontes’s emotional state is suggested by the fragmented syntax and fevered imagery of his following speech to Mamillius—especially the passage “Affection? Thy intention stabs the centre” (1.2.138–146). Earlier in the scene, Polixenes describes the happy boyhood he and Leontes shared: “We were as twinned lambs that did frisk i’ th’ sun” (1.2.67). Thus, well before we hear Mamillius’s line about winter in act 2, Shakespeare gives us images of gentle summer and of blasting heat.
The time elapsed between acts 1 and 2 is only the few hours needed for Polixenes and Camillo to escape Leontes’s murderous rage. “A sad tale's best for winter” does imply that the season is winter. The tale we are watching is indeed a sad one. But Mamillius is a child; a child’s view of time is limited. As Edith Sitwell notes, “He could not foresee the spring” (Edith Sitwell, A Notebook on William Shakespeare [Boston: Beacon Press, 1961], 204). Against this winter reference Shakespeare opposes a powerful, if indirect, sun reference. Even when he’s most misguided, Leontes seeks the truth from Apollo’s oracle. (2.1.183–7) Apollo, we remember, is the sun god.
Emotional heat and references to fire dominate act 2, scene 3. Hermione has delivered the child Leontes believes Polixenes fathered. The queen’s friend Paulina brings the baby girl to Leontes in an attempt to persuade him to acknowledge the child is truly his. Leontes tries to silence her, but Paulina’s hot temper proves a match for his. Six times Leontes threatens burning Hermione, the baby, or Paulina. (2.3.8; 94; 113; 133; 140; 155) Later he yields and orders the baby to be abandoned in the wild.
At the start of Hermione’s trial Leontes’s mood is one of grief (3.2.1) but his anger soon erupts. Hermione’s tone is in marked contrast to Leontes’s passionate, irrational accusations. In defending her honor she too is impassioned but ordered and controlled. Shakespeare explains Hermione’s cool rationality: “The Emperor of Russia was my father; / O that he were alive, and here beholding . . . / The flatness of my misery; yet with eyes / Of pity, not revenge!” (3.2.117–20). Twice earlier in the scene we heard that Hermione is the daughter of a king. This time Hermione adds a detail—her cold homeland—that is unnecessary in terms of plot but significant, particularly in her rejection of revenge, in moderating the emotional temperature. Her defiance of Leontes’s accusations avoids the heat of Paulina’s earlier confrontation and is the more powerful for it. Leontes rejects the oracle’s confirmation of Hermione’s innocence. In swift succession Mamillius’s death is announced, Hermione swoons and is carried off, and Paulina announces her death. Finally realizing his errors, Leontes is plunged into emotional winter.
The scene shifts to Bohemia where the infant is abandoned in the wilderness as ordered. She is immediately found by the Old Shepherd along with gold and a note that she is to be called Perdita (3.3.32). Sixteen years pass, as a Chorus in the person of Time tells us (4.1.5–6).
Act 4, scenes 3 and 4 abound in seasonal references. Autolycus, a thief and con man who lives by adapting to all situations/seasons makes his first entrance singing, and his song contains specific references to three seasons: “daffodils,” “winter's pale,” and “summer songs” (4.3.1–12). His reference to “tumbling in the hay” may suggest autumn when hay is in the barn or summer when hay is cut. The great pastoral scene (4.4) celebrates the recently completed sheep shearing which, in Elizabethan England, occurred in summer. (Shakespeare’s England: Life in Elizabethan and Jacobean Times. R. E. Pritchard, Ed. [Gloustershire: Sutton Publishing, 1999], 80). Perdita confirms that it is mid-summer: “the year growing ancient, / Not yet on summer’s death nor on the birth / Of trembling winter” (4.4.79–81), she says as she gives rosemary and rue to two strangers (Polixenes and Camillo in disguise) who appear at the feast. These flowers, she notes, “Keep seeming and savor all the winter long” (4.4.75). Polixenes replies “well you fit our ages / With flowers of winter” (4.4.78). A moment later she gives them lavender, mint, marjoram—“flowers / Of middle summer” (4.4.106–7). Next Perdita tells her lover Doricles (actually Florizel, Polixenes’s son) “I would I had some flowers o’ th’ spring that might / Become your time of day” (4.4.113). Before concluding “O, these I lack,” she describes seven spring flowers. Thus Shakespeare establishes that the season is mid-summer and simultaneously invokes memories of winter and spring.
The action returns to Sicilia in act 5. The first scene’s sorrowful mood and multiple references to the deaths of Hermione, Mamillius, Antigonus, and the infant convey a penitential chill. Additionally, the likelihood that Leontes will die without an heir suggests winter’s sterility. The scene plays out Paulina’s earlier rebuke to Leontes: “Ten thousand years together, naked, fasting, / Upon a barren mountain, and still winter / In storm perpetual, could not move the gods / To look that way thou wert” (3.2.208–11).
Paulina briefly moderates the chill. After extracting from Leontes an oath that he will not remarry except by her leave, she hints that she might find him a wife: “she shall be such / As, walk'd your first queen's ghost”; but no; he shall not remarry until his “first queen's again in breath” (5.1.84). A teasing impossibility! Is this not like those final days of winter when it seems spring will never come? But, of course, it does come. A servant enters with news of Florizel’s arrival with his princess—“the most peerless piece of earth / That e'er the sun shone bright on” (5.1.93). Leontes greets the young couple: “Welcome hither, / As is the spring to the earth” (5.1.150).
The restoration to Leontes of his friends, daughter, and wife in the final two scenes may indeed be considered an analogy of spring’s rebirth. Yet Camillo’s reference to sixteen winters and summers (5.3.51–52) implies that the play’s ending transcends any specific season and embraces the entire cycle of seasons and of life. References to death and life and to time’s passage recur throughout the final scene. Hermione’s supposed statue looks older than Leontes remembers her. Hermione and Perdita have been “preserv’d”—not reborn—and are now restored to their rightful places, suggesting a storing up until ripeness has been attained. Her work done, Paulina will live secluded, lamenting her lost husband until her own end (5.3.13–5). Leontes draws Paulina back into the group of “precious winners all” by betrothing her and Camillo, and the play concludes with marriages and one remarriage of three generations. Renewal is not just for the young, but for all. | <urn:uuid:aa5a2612-3ed8-4f5a-b27a-8b74e3781bb1> | CC-MAIN-2015-35 | http://www.bard.org/a-story-for-all-seasons | s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440645396463.95/warc/CC-MAIN-20150827031636-00160-ip-10-171-96-226.ec2.internal.warc.gz | en | 0.922792 | 2,067 | 2.953125 | 3 |
- the Indus River was to be controlled exclusively by Pakistan, despite its origination in southwestern China and passage through the Indian area of JK before entering the Pakistan area of JK and then Pakistan proper;
- the Jhelum River was to be controlled exclusively by Pakistan, despite its origination in the Indian area of JK and passage through the Pakistan area of JK before entering Pakistan proper and eventually merging with the Chenab River;
- the Chenab River was to be controlled exclusively by Pakistan, despite its origination in Himachal Pradesh (the northern-most province of India, outside of JK) and passage through the Indian area of JK before entering Pakistan and, eventually, joining the Indus;
- the Sutlej, Beas (tributary to the Sutlej), and Ravi, all Eastern Rivers and relatively minor in flow volume, were to be controlled exclusively by India, with a one-time financial compensation paid to Pakistan for the Indian consumptive use of those waters before all three rivers pass into Pakistan and eventually join the Chenab River.
Given a Treaty that seems to be constructed heavily in Pakistan's favor as the junior riparian state (having been partitioned from greater India in 1947), does it seem reasonable to allow India as the senior riparian some use of the rivers that originate in or pass through its own territories? At the time of the Treaty, India maintained six existing hydropower projects on Western Rivers, only one with more than 1 MW of generation, and was already constructing eight more with three larger than 10 MW, the largest with 15 MW of power generation. Annexure D to the Treaty allowed these projects to continue as planned or already constructed and for new run-of-the-river generations plants to be constructed under specified parameters, and according to storage rules specified in Annexure E, on the mainstem Western Rivers in India's territory. On tributaries to the mainstem rivers, India had different specifications for constructing new run-of-the-river generation facilities, and on irrigation canals that originate on a mainstem Western River still other specifications. In general, these run-of-the-river facilities constituted small amounts of true storage and non-consumptive overall use.
Storage allowances stipulated in the Treaty's Annexure E are strict, limiting India to the further construction of only 1.25 million acre-feet (MAF) of aggregate general storage capacity but with none allowed on the mainstems of the Jhelum and Chenab Rivers, 1.6 MAF of aggregate storage capacity for power generation but with none allowed on the mainstem of the Jhelum River, 0.75 MAF of additional capacity for flood control or other non-consumptive or domestic use but only on tributaries of the Jhelum River, and any storage determined necessary for flood control on the mainstem of the Jhelum River as long as the floodwaters are released as soon as possible after the flood event. By river, the numbers add up to 0.4 MAF of total storage on the Indus River, 1.5 MAF on tributaries to the Jhelum River but only flood control as necessary on its mainstem reaches, and 1.7 MAF on the Chenab River and its tributaries. Out of nearly 120 MAF in normal annual flow that exit the Himalaya - Karakoram Range through JK, India was allowed to retain only the smaller Eastern Rivers and approximately 3% of the Western Rivers. There are plenty of other details listed in the Treaty Annexures, including the minimum information that must be provided by India to Pakistan regarding all constructed works on the Western Rivers.
However well-regulated the sharing of Indus and tributary waters seems to be on the basis of the Treaty, differences and disputes have inevitably arisen since 1960. Most recently and publicly, India and Pakistan have pledged to improve relations overall, though the sides differ on what that really means for the agenda of renewed talks. With so much focus from the U.S. on support to Pakistan in the course of the Afghanistan conflict, India has been unwilling to resume ''composite dialogue'' that, despite leaving territorial claims in JK unchanged, has allowed India advancement on seemingly more important issues like bilateral trade and cross-border trust-building. India's stumbling block seems on the outside to be Pakistan's approach to reducing militant activity. In the meantime, Pakistan has recently lodged complaints against India regarding one operating hydropower project at Baglihar Dam, completed on the Chenab River in 2008, and numerous proposed hydropower projects in Indian JK. India has plans for these new projects on its frontier, while Pakistan suffers drought and diminishing water supplies for its burgeoning urban population and vast agricultural needs in Pakistani Punjab and the Indus delta, one of the largest irrigated areas in the world. Should Pakistan bring these complaints to the World Bank under the provisions stipulated in the Indus Waters Treaty, arbitration results would be legally binding for both countries, though how long it would take to reach a decision on one or more complaints, separately or in aggregate, is anyone's guess. Should Pakistan resort to measures outside of the Treaty, or bring up the possibility of renegotiating the Treaty in order to resolve the more modern issues, many feel that Pakistan would most surely lose the significant water concessions that the Treaty provides as well as anything else involved in such a confrontation.
Meanwhile, officials in the U.S. are projecting seriously mixed messages on relations between India and Pakistan. Within the past week, the Undersecretary of Defense for Policy stated that American interests in Pakistan extend ''beyond Washington’s security interests in the region to wide-ranging areas including support for Islamabad’s key energy and water requirements.'' This is in direct contradiction to a sequence of diplomatic events around this most recent World Water Day: on 22 March 2010, Secretary of State Hillary Clinton delivered her remarks on the purposes of American foreign aid in the water sector:
''Access to reliable supplies of clean water is a matter of human security. It’s also a matter of national security. And that’s why President Obama and I recognize that water issues are integral to the success of many of our major foreign policy initiatives...Seems earnest enough, especially as Congress slowly works out the kinks on an update and expansion to the original Paul Simon Water for the Poor Act of 2005 that just expired.
In the United States, water represents one of the great diplomatic and development opportunities of our time. It’s not every day you find an issue where effective diplomacy and development will allow you to save millions of lives, feed the hungry, empower women, advance our national security interests, protect the environment, and demonstrate to billions of people that the United States cares...
Water is actually a test case for preventive diplomacy. Historically, many long-term global challenges – including water – have been left to fester for years until they grew so serious that they could no longer be ignored. If we can rally the world to address the water issue now, we can take early corrective action, and get ahead of the challenges that await us. And in doing so, we can establish a positive precedent for early action to address other serious issues of global concern.''
Barely two weeks after World Water Day, however, the Wall Street Journal reported that ''Secretary of State Hillary Clinton has signaled that Washington isn't interested in mediating on water issues, which are covered by a bilateral treaty.'' The Times of India quoted Secretary Clinton directly:
''We're well aware that there is a 50-year-old agreement between Pakistan and India concerning water... Where there is an agreement...with mediation techniques, arbitration built in, it would seem sensible to look to what already exists to try to resolve any of the bilateral problems between India and Pakistan... Let's see what we do to protect our aquifers. Let's see what we do to be more efficient in the use of our water. Let's see what we do to capture more rainwater; how do we actually use less of it to produce more crops? We think we have some ideas with our experts that we want to sit down and talk with your experts about and see where that goes''First of all, Madam Secretary, you have contradicted yourself. Twice. On World Water Day you said that the U.S. ''cares,'' then when Kashmir was an explicit aspect of the issue you said that the U.S. didn't want to get involved, and then you offered expert technical assistance to Pakistan in a direct interview with their diplomatic delegation. While I certainly agree that an exchange of technical knowledge will help build capacity in Pakistan, and might even teach us a thing or two about resource management in our own country - remember, they've been at it about 5,000 years longer than we have - that's the Lexus resolution, while we would be leaving the Olive Tree (Kashmir conflict, Pakistani militants) to it's own devices (thanks again, Tom Friedman). Setbacks in ethnic and territorial issues can quickly and easily unravel any technical and technological progress in water management and food security. The U.S. needs to form and stick to a clearer message in our approach to Pakistan's issues and helping them with their priorities, not just our own.
Second, the Indus Waters Treaty is held in force by the World Bank and the sheer will of the signatories, but it is entirely possible that the terms of the Treaty are outdated and need revisiting. Third, the U.S. cooperates directly with India on numerous issues and now provides significant aid (monetary and otherwise) to Pakistan. And finally, it is the U.S. that nominates the President of the World Bank and holds a plurality of the votes, with the ability to block any opposing super-majority.
In the Kashmir issue and the responsible development of the region's resources, especially water, the U.S. has leverage, vested interests, and a call for help to be answered - means, motive and opportunity if we've ever seen it. As the JK region remains at the origin of these cross-border water issues, our American commitment to self-determination and the spread of democracy comes into question when we refuse the opportunity for diplomacy between India and Pakistan, as well as the people of JK themselves who see their land and resources, including water, held in trust by the very neighbors who administer their human rights. Helping to solve the Indus waters issue, and possibly resolve the conflict over JK, may be a tough problem to wrap your brain around, but it's not as if our efforts to do so will destabilize the region any further. | <urn:uuid:43994935-e784-46e3-843f-db51b22ef9ff> | CC-MAIN-2015-35 | http://hydro-logic.blogspot.com/2010/05/south-asian-tri-axis-part-2-indus.html | s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440645396463.95/warc/CC-MAIN-20150827031636-00159-ip-10-171-96-226.ec2.internal.warc.gz | en | 0.95765 | 2,191 | 3.234375 | 3 |
Usability will first be discussed according to the three primary principles set forth by Jenny Preece in her book Online Communities: Designing Usability, Supporting Sociability (2001). These three principles are users, tasks, and software. First the the users needs will be explicated. Second, the tasks that the site enables will be explicated. Third, the software used to enable these tasks will be explicated. Then finally, the site will be critiqued to see both how well it follows the usability guidelines set forth in Chapter 9 of Preece's book and how well the supported tasks meet user needs in general.
Without a proper evaluation of user needs, it is difficult to build a site with decent usability. Consequently, a thorough analysis of user needs must be completed before critiquing the site. The target audience of 43 Things is anyone who has a goal who has access to the internet. This includes users of every possibly combination of age, sex, culture, experience, etc. Consequently, when evaluating how well this site meets user needs, the site will have to be both accessible and easy to use.
43 Things also must support user needs related to the purpose of the site. We will now explicate some of those needs. It appears that the developers of the site had user needs in mind when developing the site. From the main statement of purpose of the site, it can be argued that 43 Things is designed with the following three primary user needs in mind:
Enabling users to list up to 43 of their goals. They explain that writing down one's goals can be helpful for:
- Clarifying existing goals
- Prioritizing goals
- Discovering new goals.
Enabling users to view other users goals. They explain that viewing other peoples goals will help users by allowing them to:
- Discover a shared goal.
- Inspire a new goal.
- Get help answering the question, "What do I want to do with my life?"
- A user can get psychological satisfaction from the fact that other people are working towards the same goal.
- A user can get psychological satisfaction from the fact that other people are working towards goals that he or she has already completed.
Enabling users to share their progress reaching their goals because:
- Sharing your story with other people pursuing the same goal will help others along.
- Sharing that you completed the task with someone pursuing the same goal will help them along.
- Keeping a record of ones personal progress towards a goal can make task completion easier.
- Shared empathy with other users working towards the same goal can be psychologically satisfying.
- Seeing how other people are working to complete their goals can give a user ideas for how to complete the goal themselves.
- By discussing the process of completing a task with others, a user can get both emotional and practical support needed to encourage task completion.
This section will list the tasks supported in 43 Things. Currently, all computer mediated communication that is supported is asynchronous. The following is an inventory of all of the supported tasks this reviewer could discover. They are grouped loosely into primary functional tasks and the subtasks which support those tasks:
- Allow users to find other peoples goals.
- Allow users to adapt the same goals as others.
- Allow users to create original goals.
- Allow users to "give up" on completing a goal.
- Allow users to see what other goals users with a matching goal are pursuing.
- Allow users to view a visual representation of the relative popularity of a random subset of goals.
- Allow users to perform keyword searches on goals.
- Allow users to view a list of the most recently "cheered" goals.
- Allow users to view goals of users from a specific city.
- Allow users to remove a goal along with all entries and comments made about that goal (irreversible).
- Allow users to reorder their list of goals.
- Allow users to opt-in to viewing goals that contain "mature content".
- Allow users to invite other users to complete a goal.
- Allow users to invite other users to complete a goal and be your teammate towards completing that goal.
- Allow users to find and mark similarities between how they state their goals and how other people state their goals ("Report a very similar goal").
- Allow users to ask questions about completing their goals to those who have already completed the goal.
- Allow users to tag their goals.
- Allow users to view how other people have tagged their goals.
- Allow users to browse goals by tag.
- Allow users to view a visual representation of the relative popularity of the most popular tags.
- Allow users to perform keword searchs by tag.
- Allow users to search for and view flickr and del.icio.us tags alongside 43 Things tags.
- Points users to Technorati tags.
- Allow users to view other peoples progress towards their goals.
- Allow users to comment on other peoples progress towards their goals.
- Allow users to "cheer" other peoples entries on their progress.
- Allow users to list 43 goals they have completed.
- Allow users to create entries about goals they have completed.
- Allow users to view other peoples entries about goals they completed.
- Allow users to comment on other peoples entries about goals they have completed.
- Allow users to "cheer" other peoples entries on their completed goals.
- Allow users to state whether a completed goal was "worth doing" or not.
- Allow users who have completed a goal to assist those who are working towards that goal.
- Allow users to get updates on the most popular goals.
- Allow users to get updates on the most recent goals.
- Allow users to get updates on the most recent entries about a specific goal.
- Allow users to get updates on the most recent entries about all their personal goals.
- Allow users to get updates on the most recent comments on their entries.
- Allow users to get updates on all of their own recent activity changing goals, writing entries and writing comments.
- Allow users to get updates on all of their own and their teammates recent activity.
- Allow users to get updates on the most recent activity of any other user.
- Allow users to list 43 original or shared ideas for improving 43 Things.
- Allow users to create entries about their ideas for improving 43 Things.
- Allow users to view other peoples ideas for improving 43 Things.
- Allow users to comment on other peoples entries about ideas for improving 43 Things.
- Allow users to "cheer" other peoples ideas for improving 43 Things on their completed goals.
- Allow users to vote on whether an idea has been implemented.
- Allow users who have completed a goal to assist those who are working towards that goal.
- In certain instances, users are directed to email the administrators for assistance.
- Allow users to post comments to the Robot Co-op Blog.
- Allow users to post comments to the goals of the Robot Co-op team
- Allow users to add a link to a personal site from their profile.
- Allow users to incorporate the functionalities of 43 Places and 43 People into their 43 Things user experience.
- Allow users to connect flickr images to goals.
- Allow users to have their entries on 43 Places show up as postings on the users blog (currently works with Blogger, WordPress, Movable Type, Live Journal, and Type Pad).
- Allow users to place their list on their personal site.
- Allow power users flexibility for how their blog and 43 Things interact using this API.
- Allow users to view a visual representation of the relative popularity of the most popular places.
- Allow users to upload pictures to the 43 Things server.
- Allow users to create a profile.
Tasks primarily supporting users in listing their goals:
Tasks primarily supporting a user towards the completion of his or her goals:
Tasks primarily related to the tagging of goals:
Tasks primarily enabling users to check other users progress towards goals:
Tasks primarily enabling users to list their completed goals:
Tasks that primarily allow users to monitor changes in the site:
Tasks that allow users to offer feedback to the site administrators:
Tasks that primarily enable the integration of 43 Things with other sites and services:
In the FAQ, it is explains that the backend of 43 Things is run using FreeBSD (operating system), Ruby (object-oriented scripting language), and MySql (database). This all comes together using the Ruby on Rails framework. It is beyond this reviewer's technical expertise to review these choices. However, it is worth noting that the Robot Co-op collaborated with and received praise from 37 Signals, the Ruby on Rails developers, in the development of this project.
Preece discusses software in a somewhat different way. She explicates different types of "software" according to the genre of the technique that a given tool falls into. For example, the integration of email, discussion lists, or blogs into a site would be considered a software choice. While 43 Things is built on its own, some key techniques can be determined.
Similarities to a blogUsers post entries and comments in the same way as in a blog. This mechanism is very barebones and simple for what it is in that it allows users to perform only those tasks relevant to 43 Things. For example, trackback functionality is not included because task discussion is supposed to exist within the 43 Things environment. However, because the posting and commenting tools were designed with blogging in mind, it is possible to connect one's 43 Things entries with one's blog entries, or to develop further integrated services, using the previously mentioned 43 Things Web Service API or the tools developed specifically to allow users to post to their blogs from 43 Things.
TaggingOne of the more original aspects of 43 Things is its use of tagging to develop folksonomies for both finding new goals and connecting similar goals. Their use of tagging was designed to resemble the established models developed by Technorati and del.icio.us and later flickr. Furthermore, early attempts were made to integrate 43 Things tags with these sites. One technique that was both borrowed and then adapted from these sites was the use of tag clouds to display not only tags, but also goals. In fact, goals themselves can be considered a specialized type of tagging, making 43 Things a social networking site based almost entirely off the concept of tagging.
RSSRSS feeds are offered extensively throughout 43 Things. This enables users to perform the monitoring task listed above. RSS can also be used to monitor the company site/blog. This again conforms to Dan Gilmour's suggestions from We The Media (2004). The use of RSS will be discussed further in the critique below.
Preece explains that a system with good usability, "supports rapid learning, high skill retention, low error rates, and high productivity (Preece, p. 276)." Preece proceeds to list a number of guidelines for good usability. 43 Things conforms well to most items on her list. One area that the site performs very well on is consistency. Practically every function is based on the user making a list of 43 things. For each list, people are able to add entries, cheer, and comment in the exact same fashion. Because of this, once a user learns how to perform the core tasks of making a list, adding entries, commenting, and cheering, they are well equipped to handle most other tasks.
One area where the site performs very poorly is its failure to have a central area to provide support and documentation about the site. This reviewer was unable to find either an "about us" section or a "help" section. The FAQ was very limited. While the site explained to users how to perform complicated peripheral tasks such as integrating 43 Things' entries into ones' blog, it failed to explain simple core tasks like cheering and teammates. The only place it offers information about cheers is the "Edit Your Account" page. This page is very limited. So limited in fact that, under normal circumstances, this reviewer would never have gone their unless he wanted to close his account. It took this reviewer close to 20 minutes to find out what cheers were and how to use them. Worse yet, was teammates. This reviewer only discovered this concept BY ACCIDENT while reading someone's comment about his teammates. It then took the reviewer another 20 minutes to figure out what this was. The answer was first found in an entry to a goal about getting teammates where someone explained the process to another confused soul. The only significant mention of teammates is when a user clicks on the "Invite people to do this" link. Even their, it is only an inobtrusive checkbox ("Invite to do this with you as a team") that this reviewer missed. Furthermore, there was no explanation of what it meant. These two significant documentation problems made it very obvious that there was no central help section. Under normal circumstances this reviewer would have never gone through the trouble to find out these definitions.
One other blatant usability problem was language options. On the home page, there is sometimes what appears to be an advertisement to 43 Things in another language. However, there is no place to set language as an option or to view what languages are available. This is a major problem for accessibility. Similarly, this makes it difficult to know whether the site supports other character sets. This and the above issues make one wonder what other tasks and options are allowed, but hidden?
There are a few other areas where accessibility may be a problem. That RSS is the only way to receive updates on changes could be a technical barrier to many users as this technology has yet to be widely adopted. If email notifications were also offered, this would increase the number of users who would use this feature. The site does point users to a place to find out more about RSS. However adopting RSS requires a user to either download third-party software or register at a third-party site. However, given the sites similarities to the blog format and the developers focus on making their site compatible with blogs, it can be assumed that the developers are targeting bloggers as the primary users. Bloggers are early adopters of RSS, so this decision is good for them. Bloggers are also early adopters of tagging making them a likely user group.
There is what looks like a big search box in the center of the home page. It is clearly labeled as a box to create a goal, and it adds whatever is entered into the box to one's list. The first time using the page, this reviewer confidently typed a term in, hoping to find similar goals. Instead, an original goal was created. There is an actual search box at the top of each screen. However, this reviewer still accidentally types search terms into the create-a-goal box because of its prominence on the screen. When running searches on the site, it is very obvious that multiple wordings of common goals are listed. Given the ease of finding existing goals using the search bar, it is possible that these redundancies exist because it is easier to create new goals than to find existing ones. This is a problem because redundancies prevent critical masses from forming around individual goals.
The above issues aside, 43 Things is mostly very user friendly. Users are frequently greeted with offers for more information about topics such as RSS and tags. Responsiveness to user feedback also appears to be very good. That users are encouraged to list 43 things about the site that they have problems with is a creative and unique way to encourage user feedback and monitor popular requests. A number of entries from the Robot Co-op blog also indicate that user suggestions have been integrated into the page. In fact, that so many services are available for bloggers and people using other tagging sites is largely a response to user requests. Unfortunately, some less experienced users' needs may be getting ignored because of their minority status in the 43 Things user community.
This discussion has focused solely on the usability of the existing features of the site. In the next section, additional features will be suggested that may improve the sociability of the site. Now that usability issues have been discussed, we will examine how well 43 things has been designed to foster sociability. | <urn:uuid:7fc3fa29-5dd8-4696-a6cc-f8decd5aa62f> | CC-MAIN-2015-35 | http://www.mchabib.com/43things/usability.html | s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440644065318.20/warc/CC-MAIN-20150827025425-00106-ip-10-171-96-226.ec2.internal.warc.gz | en | 0.946978 | 3,350 | 3.015625 | 3 |
The Apostolic Fathers - Justin Martyr - Irenæus.
This, then, is our starting point. The baptism of the New Testament is the baptism of believers. Our next inquiry will be, How the post-Apostolic Church thought and acted on this subject?
Christian baptism, as instituted by the Saviour, and practiced by the Apostles, was the immersion of believers in water, in the name of the Father, and of the Son, and of the Holy Ghost. It was the declaration of their adhesion to Christ, and the symbol of their renunciation of sin. It was in every case the act of a free agent, and thus it harmonized with the spiritual nature of Christianity. All this is now generally admitted.
The next inquiry is, Did the usages of the period immediately succeeding the Apostolic accord with these views? Or did they indicate any change or any departure from them?
Here it is necessary to interpose a caution. Apostolic example has the force of authority. It is the inspired exposition of the law. Not so the example of the primitive churches as they are called, that is, as they existed after the Apostolic age. The plainness of the Christian ceremonial offended those who were fond of pomp and show, and the equality of the Christian brotherhood offended those who loved power. Hence corruptions crept in. They were anticipated and foretold by the Apostles. And hence the necessity of distinguishing between Divine law and human tradition. We have no power to change the law, or to make any addition to it. The assumption of such power in primitive times was a fatal error, the evil consequences of which are felt to this day. Instead of adhering strictly to the Scripture rule, men dealt with Christianity as they dealt with systems of philosophy. They treated it as if it were susceptible of improvement, and might be accommodated to circumstances. They took the liberty to engraft on it certain peculiarities of Judaism, and even of Paganism. They multiplied forms to the sore detriment of the spirit and the life.
It has been customary to appeal to the opinions and practices of the churches of the first three centuries after the Apostles. In the controversy with the Church of Rome it is an available argument to this extent, that it takes from that Church the plea of antiquity, since it proves that Romanism, as such, did not exist in the above-mentioned period. Yet it cannot be denied that the first steps towards Romanism were then taken. Professing Christians soon abandoned the high ground of Scripture, and took pleasure in vain deceit and will-worship. In this they are not examples for our imitation. We must go further back - to the Book itself - to the recorded enactments of the Divine Lawgiver; and our object will be to ascertain how far, and by whom, the Saviour's will has been regarded.
This can only be accomplished by consulting the writers of the times now under consideration. The Apostolic Fathers first claim attention. They are: Barnabas, Hermas, Clement of Rome, Ignatius, and Polycarp. To these some add Papias, a few fragments only of whose writings have been preserved by Eusebius, the ecclesiastical historian. They contain no reference to the subject now before us.
The writings ascribed to Barnabas and Hermas were probably composed in the second century, by some weak-minded Christians, who fathered their own poor effusions on the coadjutor of the Apostle Paul, and the brother mentioned by him in his epistle to the Romans (chap. 16:14). But though they are not genuine books, they may be regarded as witnesses to the religious views entertained by the Christians of those times. In the work ascribed to Barnabas, we find the following passage: "We descend into the water laden with sins and corruption, and ascend bearing fruit, having in the heart the fear [towards God], and in the spirit the hope towards Jesus."1 There are several references to baptism in the writings bearing the name of Hermas, some of them exceedingly fanciful, but there is not the slightest allusion to infant baptism; he speaks repeatedly of descending into the water, and ascending out of it, evidently alluding to immersion.
Let us pass on to Clement of Rome. He was bishop or pastor of the Church in Rome, and died about the year 100. His epistle to the Corinthians is a precious gem. Baptism is not mentioned in it. A second epistle to the Corinthians is attributed to him, but without sufficient grounds. There is one sentence referring to baptism. It is as follows: "If we do not keep the baptism pure and undefiled, with what confidence shall we enter the kingdom of God?"2
Ignatius comes next. He was pastor at Antioch in Syria, and suffered martyrdom by exposure to wild beasts at Rome, A.D. 116. Several letters were written by him, which have come down to us in an interpolated state. There are a few allusions to baptism. He refers twice to the baptism of our Saviour by John. He tells the Smyrneans that the ordinance should not be administered without the bishop.3 In writing to Polycarp he uses this military phraseology: "Let your baptism continue as a shield, faith as a helmet, love as a spear."4 This is all.
Polycarp suffered martyrdom by fire at Smyrna, A.D. 167. An epistle to the Philippians is attributed to him. It does not allude either to baptism or to the Lord's Supper.
Justin Martyr was a philosophic Christian. He was put to death at Rome, A.D. 166. In his first Apology, addressed to the Emperor Marcus Aurelius, he gives the following account of baptism as practiced in his days: "As many as are persuaded and believe that what we teach is true, and undertake to conform their lives to our doctrine, are instructed to fast and pray, and entreat from God the remission of their past sins, we fasting and praying together with them. They are then conducted by us to a place where there is water, and are regenerated in the same manner in which we were ourselves regenerated. For they are then washed in the name of God the Father and Lord of the Universe, and of our Saviour Jesus Christ, and of the Holy Spirit."5 Observe the manner in which he speaks of baptism. The candidates are those who are persuaded and believe; and the ordinance is administered, not by sprinkling, but by the washing of immersion. Semisch, the learned biographer of Justin, says, "Whenever Justin refers to baptism, adults appear as the objects to whom the sacred rite is administered. Of infant baptism he knows nothing."
Irenæus became bishop of Lyons in France, A.D. 177, and died A.D. 202. He mentions baptism several times, and seemingly connects it with regeneration, as Justin had done before him, in the passage just cited: but it is extremely doubtful whether Justin or Irenæus thought that men were regenerated in or by baptism. Their object was to show that as the convert came under new obligations and entered into new relationships, at his baptism, it was equivalent to the assumption of a new life: he was in this profession born again unto God, and publicly entered into the spiritual family. This view of the subject is confirmed by another representation given of baptism by Justin in the course of his narrative. He says, "This washing is called Illumination, because those who learn these things are enlightened in their minds."6 Baptism is not illumination, but it is so called because it is connected with an enlightened state of mind: in like manner, baptism is called Regeneration, not because it regenerates, but because it is connected with a regenerate state and a new life, profession of which is then made.
Two passages used to be quoted by Pædobaptist writers, as testimonies in favour of infant baptism. One is from Justin Martyr. He writes thus: "Many men and many women, sixty and seventy years old, who from children have been disciples of Christ, preserve their continence."7 The other is from Irenæus. These are his words: "He came to save all persons by Himself; all, I say, who are regenerated by Him unto God - infants, and children, and boys, and young men, and old men." But baptism is not mentioned in either of these passages, and modern critics have confessed that they afford no support to the Pædobaptist view. All that Justin means is, that he knew many persons who had been disciples of Christ from early life; and he expressly connects choice and knowledge with baptism, of which infants are incapable. The language used by Irenæus merely expresses, says Hagenbach (a German Pædobaptist), "the beautiful idea that Jesus was Redeemer in every stage of life, and for every stage of life; but it does not say that He became Redeemer for children by water baptism."8
We are now brought to the close of the second century. But few Christian authors had as yet appeared. Is it not remarkable, however, that in none of their writings which have been preserved is there any mention of infant baptism? If it existed, it must have been a prominent thing in the Church transactions of the period. But these Christians knew nothing of it. Neither Clement of Rome, nor Ignatius, nor Justin, nor any other author, wrote a word which would lead us to suppose that infants were baptized. There is a singular difference in this respect between the statements of these Christian fathers and the correspondence of modern Pædobaptist missionaries. Read the letters of missionaries in the Reports of Missionary Societies. How careful they are to give us full information respecting the number of children that have been baptized, and how numerous are the references to them! With what solicitude are arrangements made, and their operation watched over, with a view to the religious instruction and training of baptized children! We search the Christian writings of the first two centuries in vain for anything of this kind. That the Christians of those times gave their children the benefit of religious teaching and example is not to be doubted; but they did not baptize them till they could answer for themselves, and voluntarily assume the vows of the Christian profession.
We have now advanced two hundred years, and have not yet found infant baptism. It will come in sight soon, along with other corruptions and inventions.
1 Chap. ii.
2 Sect. 6.
3 Sect. 8.
4 Sect. 6.
5 Sect. 79.
6 Sect. 80.
7 Apol. i. sect. 18.
8 History of Doctrines, i. p. 193. Dr. Ira Chase has examined all the passages in Irenæus in which the phrase "regenerated unto God" occurs. See Bibliotheca Sacra, November, 1849.
| <urn:uuid:54b6e88d-d132-4a74-9d28-26eb6478015a> | CC-MAIN-2015-35 | http://www.reformedreader.org/history/cramp/s01ch02.htm | s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440645167592.45/warc/CC-MAIN-20150827031247-00045-ip-10-171-96-226.ec2.internal.warc.gz | en | 0.976423 | 2,298 | 2.515625 | 3 |
Media & Medicine in Modern America
Students were asked to review readings and websites and submit questions in advance, many of which Susannah Fox addressed in her remarks:
The Pew Internet Project studies the social impact of the internet. We have been tracking online life since the year 2000, when 46% of American adults had access to the internet and only 5% of homes had broadband connections.
It was a pre-Flickr, pre-YouTube, pre-Facebook online world back then. Now three-quarters of adults go online, two-thirds of U.S. households have broadband access, and the internet has become fast, mobile, and social for a lot of Americans.
In 1994, a man facing a treatment choice had to impersonate his doctor in order to gain access to a medical journal article which described the procedure. It was shocking to his doctor that he would want to know everything he could about his options before making a decision. It’s like a ghost story of the pre-internet age, but it’s true.
In December 2001 the American Medical Association put out a press release suggesting that Americans make a New Year’s resolution to “trust your physician, not a chat room” since the information found online puts “lives at risk.”
Of course most people ignored that advice and flocked online for health information, just as they ignored the advice of the recording industry and flocked to music downloading sites. Gathering and sharing information online and connecting with people of like interests became the new normal.
The internet created a way for people to pool knowledge and resources – and e-patients, often desperate to save their own lives, barged right in and set up shop.
In 2002, the Pew Internet Project asked internet users to write essays about how they connect to online health resources. We heard from people who used eBay to buy hard-to-find home medical equipment. We read about old-school bulletin boards and listserves which serve as lifelines for people with rare diseases and conditions. We heard about how people participating in clinical trials found each other online, or as one e-patient wrote, “we are lab rats tapping out messages on the bars of our cages.”
Back then the internet was a dial-up, stationary, information vending machine and people were already using it to better their health.
Now, the internet is a mobile, social, information and communications appliance that fits in your hand. 80% of adults between the ages of 18-29 go online via a mobile device, compared to just 16% of adults age 65+. When we include mobile devices in our definition of the internet user population, differences between white and African American adults disappear.
That’s my first piece of advice: Celebrate change. Embrace any new technology you find useful. Don’t be the AMA, railing against the internet because it was breaking the traditional model of “doctor knows best.”
But don’t forget about the people who don’t get it yet, who may not want to get it, particularly our elders. Honor their experience and try to understand it since we’re all headed that way.
My grandfather, who died at age 88, once told my dad that he was shocked every time he looked in the mirror. He still thought of himself as 18 years old, the year he started college. Help your elders with technology, if they need it. Or ask them to help you if they are anything like my grandmother, who died at age 96 and a half, a daily internet user. Her last words were, “Erase my email.”
And don’t forget that, some day, you might be confronted with a technological advance you aren’t ready for. There is always a new generation coming up behind you.
Clay Shirky, who is a communications professor at NYU, tells a story about a little girl who, when watching a movie at home, jumped off the couch and started rooting around in the cables behind the TV. When asked what she was doing, she replied, "Looking for the mouse."
Shirky’s conclusion is that four-year-olds know that a screen that ships without a mouse ships broken. Media that is targeted at you but doesn’t include you may not be worth sitting still for.
That is where most health care is these days, stuck in the broadcast world when it could be transformed and transformative. E-patients know that health care that is targeted at you but doesn’t include you may not be worth sitting still for. As e-patients are “looking for the mouse” in health care, I’d like to suggest that one possible answer is the concept of participatory medicine.
Participatory medicine is a cooperative model of medical care that encourages and expects active participation by all involved parties as an integral part of the full continuum of care.
Participatory medicine acknowledges that it’s not just patients who are looking for the mouse in health care. Doctors, nurses, hospital administrators, and other health care professionals are all looking for the mouse, too.
Reforming health care is too big for most people to grasp; creating spaces for participatory medicine is not. For example, e-patients are already experts at finding and sharing information online. When I talk to people about my research, I often hear about how they are worried about bad or false information being passed around online. That was a question from one of the sections, in fact. My answer: Flood the market with good information. Deputize e-patients with the best data. Make it easier for people to find and share the right information. Don’t hide the best information behind a subscription wall. Do publish in HTML or XML instead of in PDFs. Do open your site to comments or provide a way for people to get in touch with you. Do get top executives to participate, not just observe, so they can create policies which reflect the realities of today’s technology, not their memory of how it was 5 years ago or their fear of it.
My other plea is for evidence: Clinical trials to show the power and the pitfalls of online health resources.
I have two examples – the first is a clinical trial conducted in Boston showing how text messages can increase treatment adherence. The Center for Connected Health conducted a randomized trial in 2008 using text messaging to send a daily weather report and reminder to apply sunscreen. The control group did not receive any reminders – just a tube of sunscreen. Each tube had a monitor strapped onto it so every time the cap was removed, an alert was sent back to the researchers. Study participants who received text reminders applied the sunscreen an average of 56% of the time, compared with the control group, which had a mean daily adherence rate of 30%.
The text messages were nothing fancy, just a weather report and a reminder to apply sunscreen. But they increased adherence. What else can be sent via text messages? How about appointment reminders? How about air pollution alerts?
Those of you who had time to read my blog saw my post, “What’s the point of Health 2.0?” And a couple of you had questions about Darthmed’s challenge to use simple, basic interventions to change health outcomes in the U.S. instead of creating fancy, interactive tools. I understand what he’s saying and agree. I’ve also heard people say that the Dept. of Agriculture has more to do with Americans’ health than the Dept. of Health & Human Services.
How many of you have heard of Michael Pollan, who wrote In Defense of Food and Food Rules? His answer to all our diet worries is summed up in seven words:
Eat food. Not too much. Mostly plants.
The New York Times held a Seven-Word Wisdom contest. Here are my favorite entries:
Eat pie. Very good pie. Not often.
Call Mom. Let her talk. Don’t argue.
Make promises. Don’t break them. Find loopholes.
In the spirit of Michael Pollan’s diet wisdom, I think we should look at what technologies are widely available and leverage them. One of you asked, What’s the role for doctors in this new era? They are central. Eight in ten adults turn to a health professional when they need medical advice. Six in ten adults go online for health information. 85% of American adults have a cell phone.
Patients want doctors to lead them to better health, but as coaches. Health professionals could learn a lot from watching what patients are doing online. And cell phones are an incredible opportunity to reach a diverse population.
So here’s my Seven Word Wisdom:
Recruit doctors. Let e-patients lead. Go mobile.
My second example comes from PatientsLikeMe, a social network for people living with chronic conditions, a site which was also part of your suggested reading for this week.
A survey of HIV community members on PatientsLikeMe found that two-thirds of respondents said they are more knowledgeable about risks and benefits of a “treatment holiday” because of what they have learned from other users on the site. How many people know what a “treatment holiday” is? It’s the idea that you should give your body a break from your medications every once in a while, which might sound good, but has potentially dire consequences for some conditions, such as HIV and diabetes.
Seven in ten survey respondents said using PLM has increased their interest in results of tests ordered by the doctor treating their HIV. That’s because many of them are posting the test results online, comparing them to other patients to see how they are doing. These patients are under the care of health professionals, but exchanging data, insights, and information to take better care of themselves.
PatientsLikeMe is an example of a small website that provides in-depth data on a limited number of conditions. Someone in the class asked that I elaborate on the blog post I wrote about Google privileging big sites over small sites, and whether there are issues of accuracy, relevance, and efficiency. First, I need to define my role: I’m like a geologist: I study the rocks, don’t judge them. I can’t say whether Google is doing the right thing or the wrong thing by guiding people to big sites. Also, the answer is unclear because it depends on the context of the search being performed.
I attended a day-long meeting with all the federal agencies concerned with HIV and AIDS. The director of AIDS.gov noted that most people landing on their site arrived after a general search about high-risk behavior. They may have engaged in some behavior the night before which made them worry about their risk for HIV. Is AIDS.gov the right source for that person? Further, would a general-interest site like WebMD be the right choice for someone living with HIV, already on a treatment path, with advanced knowledge of their disease? Do people need consumer-strength information about their condition? Or do they need industrial-strength information, which can come from both health professionals and from peers?
Looking at Pew Internet’s national data, 41% of e-patients have read someone else’s commentary or experience about health or medical issues on an online news group, website, or blog. When I talk to health advocates, whether at the CDC or a non-profit or a dot-com, I tell them about this social life of health information and ask: How are you helping people to spread your message? There is so much opportunity for better design, more data liquidity, and just plain useful technology in the fight against misinformation.
My next report focuses on people living with chronic disease. The Pew Internet Project and the California HealthCare Foundation conducted a national telephone survey, asking about the following five chronic diseases: high blood pressure, lung conditions, heart conditions, diabetes, and cancer.
One-third of adults in the U.S. say they are living with at least one of those five conditions and one in ten say they have two or more.
The median age for adults living with two or more chronic conditions is 63, compared with the median age for the general adult population: 45. This age gap accounts for many of the differences between the two groups in terms of technology adoption and use.
Once online, however, all adults are equally likely to use the internet to gather health information.
My favorite two questions in the survey ask about the impact of online health resources. Has the internet helped or has it harmed? Because I wonder: What does it all mean if it doesn’t make a difference in someone’s life?
I invite you to think about how you can move the needle on this measurement of the internet’s impact on people’s lives. What resources can be deployed to help people living with chronic disease? How are you going to help them? In what ways can media and medicine be deployed to make a difference? | <urn:uuid:7f9e92d0-e7f6-4441-b1a7-622da99f6815> | CC-MAIN-2015-35 | http://www.pewinternet.org/2010/02/18/media-medicine-in-modern-america/ | s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440645199297.56/warc/CC-MAIN-20150827031319-00279-ip-10-171-96-226.ec2.internal.warc.gz | en | 0.959081 | 2,712 | 2.90625 | 3 |
In addition to object-oriented programming, Python also supports an approach called Aspect-Oriented Programming. Object-oriented programming focuses on the structure and behavior of individual objects. Aspect-oriented programming refines object design techniques by defining aspects which are common across a number of classes or methods.
The focus of aspect-oriented programming is consistency. Toward this end Python allows us to define “decorators” which we can apply to class definitions and method definitions and create consistency.
We have to note that decorators can easily be overused. The issue is to strike a balance between the obvious programming in the class definition and the not-obvious programming in the decorator. Generally, decorators should be transparently simple and so obvious that they hardly bear explanation.
We’ll look at what a decorator is in Semantics of Decorators.
We’ll look at some built-in decorators in Built-in Decorators.
In Defining Decorators we’ll look at defining our own decorators.
It is possible to create some rather sophisticated decorators. We’ll look at the issues surrounding this in Defining Complex Decorators.
Essentially, a decorator is a function that is applied to another function. The purpose of a decorator is to transform the function definition we wrote (the argument function) into another (more complex) function definition. When Python applies a decorator to a function definition, a new function object is returned by the decorator.
The idea of decorators is to allow us to factor out some common aspects of several functions or method functions. We can then write a simpler form of each function and have the common aspect inserted into the function by the decorator.
When we say
@theDecorator
def someFunction( anArg ):
    pass # some function body
We are doing the following:

1. Define the argument function, someFunction, with an ordinary def statement.
2. Evaluate the decorator, theDecorator, passing it the function object created by the def statement.
3. Bind the function object returned by theDecorator to the original name, someFunction.

The net effect is the same as writing someFunction = theDecorator( someFunction ).
Cross-Cutting Concerns. The aspects that make sense for decorators are aspects that are truly common. These are sometimes called cross-cutting concerns because they cut across multiple functions or multiple classes.
Generally, decorators fall into a number of common categories.
Simplifying Class Definitions. One common need is to create a method function which applies to the class-level attributes, not the instance variables of an object. For information on class-level variables, see Class Variables.
The @staticmethod decorator helps us build method functions that apply to the class, not a specific object. See Static Methods and Class Method.
Additionally, we may want to create a class function which applies to the class as a whole. To declare this kind of method function, the built-in @classmethod decorator can be used.
If you look at the Python Wiki page for decorators (http://wiki.python.org/moin/PythonDecoratorLibrary), you can find several examples of decorators that help define properties for managing attributes.
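One familiar example of an attribute-management decorator is the built-in @property decorator, which turns a method into a computed, read-only attribute. Here is a small sketch of our own, not one of the Wiki page's examples:

import math

class Circle( object ):
    def __init__( self, radius ):
        self.radius= radius
    @property
    def area( self ):
        """Computed from the radius each time it is requested."""
        return math.pi * self.radius ** 2

Given c= Circle( 2 ), we can then say c.area, with no parentheses, as if area were a simple attribute.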
Debugging. There are several popular decorators to help with debugging. Decorators can be used to automatically log function arguments, function entrance and exit. The idea is that the decorator “wraps” your method function with additional statements to record details of the method function.
One of the more interesting uses for decorators is to introduce some elements of type safety into Python. The Python Wiki page shows decorators which can provide some type checking for method functions where this is essential.
Additionally, Python borrows the concept of deprecation from Java. A deprecated function is one that will be removed in a future version of the module, class or framework. We can define a decorator that uses the Python warnings module to create warning messages when the deprecated function is used.
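Here is a minimal sketch of such a decorator. The name deprecated is our own invention, not a standard library feature.

import warnings

def deprecated( aFunc ):
    """Issue a DeprecationWarning each time the function is used."""
    def deprecatedFunc( *args, **kw ):
        warnings.warn( "%s is deprecated" % ( aFunc.__name__, ),
            DeprecationWarning, stacklevel=2 )
        return aFunc( *args, **kw )
    deprecatedFunc.__name__= aFunc.__name__
    deprecatedFunc.__doc__= aFunc.__doc__
    return deprecatedFunc

Any function marked with @deprecated will then raise a warning (not an exception) when called, which the warnings module can be configured to print, ignore or escalate.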
Handling Database Transactions. In some frameworks, like Django (http://www.djangoproject.org), decorators are used to simplify definition of database transactions. Rather than write explicit statements to begin and end a transaction, you can provide a decorator which wraps your method function with the necessary additional processing.
Authorization. Web security stands on several legs; two of those legs are authentication and authorization. Authentication is a serious problem involving transmission and validation of usernames and passwords or other credentials. It is beyond the scope of this book. Once we know who the user is, the next question is what are they authorized to do? Decorators are commonly used in web frameworks to specify the authorization required for each function.
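As an illustration only - not the API of any particular framework - an authorization aspect might look like the following sketch. It uses the decorator-with-arguments technique described below in Defining Complex Decorators, and it assumes a hypothetical currentUser() function that returns an object with a roles collection.

def requiresRole( role ):
    def concreteDecorator( aFunc ):
        def checkedFunc( *args, **kw ):
            # currentUser() is a stand-in for the framework's notion of
            # the authenticated user; it is not a built-in function.
            if role not in currentUser().roles:
                raise RuntimeError( "%s requires role %s" % ( aFunc.__name__, role ) )
            return aFunc( *args, **kw )
        checkedFunc.__name__= aFunc.__name__
        checkedFunc.__doc__= aFunc.__doc__
        return checkedFunc
    return concreteDecorator

A view function could then be marked with @requiresRole( "admin" ), keeping the security check out of the function body itself.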
Python has a few built-in decorators.
The @staticmethod decorator modifies a method function so that it does not use any self variable. The method function will not have access to a specific instance of the class.
This kind of method is part of a class, but can only be used when qualified by the class name or an instance variable.
For an example of a static method, see Static Methods and Class Method.
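As a quick illustration here (our own, not the example from that section), a static method suits a utility that belongs with a class but needs no instance data:

import math

class Angle( object ):
    """Angle utilities; all angles are kept in radians."""
    @staticmethod
    def fromDegrees( degrees ):
        """Convert degrees to radians. Note: no self parameter."""
        return math.pi * degrees / 180.0

We call it through the class, as in Angle.fromDegrees( 90 ), or through any instance; either way, no self variable is involved.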
Here’s a contrived example of using introspection to display some features of a object’s class.
import types

class SelfDocumenting( object ):
    @classmethod
    def getMethods( aClass ):
        return [ (n,v.__doc__) for n,v in aClass.__dict__.items()
            if type(v) == types.FunctionType ]
    def help( self ):
        """Part of the self-documenting framework"""
        print self.getMethods()

class SomeClass( SelfDocumenting ):
    attr= "Some class Value"
    def __init__( self ):
        """Create a new Instance"""
        self.instVar= "some instance value"
    def __str__( self ):
        """Display an instance"""
        return "%s %s" % ( self.attr, self.instVar )
Here’s an example of creating a class and calling the help method we defined. The result of the getMethods() method function is a list of tuples with method function names and docstrings.
>>> ac= SomeClass()
>>> ac.help()
[('__str__', 'Display an instance'), ('__init__', 'Create a new Instance')]
A decorator is a function which accepts a function and returns a new function. Since it’s a function, we must provide three pieces of information: the name of the decorator, a parameter, and a suite of statements that creates and returns the resulting function.
The suite of statements in a decorator will generally include a function def statement to create the new function and a return statement.
A common alternative is to include a class definition statement. If a class definition is used, that class must define a callable object by including a definition for the __call__() method and (usually) being a subclass of collections.Callable.
There are two kinds of decorators: decorators without arguments and decorators with arguments. In the first case, the operation of the decorator is very simple. In the case where the decorator accepts arguments, the definition of the decorator is rather obscure; we'll return to this in Defining Complex Decorators.
A simple decorator has the following outline:
def myDecorator( argumentFunction ):
    def resultFunction( *args, **keywords ):
        enhanced processing including a call to argumentFunction
    resultFunction.__doc__= argumentFunction.__doc__
    return resultFunction
In some cases, we may replace the result function definition with a result class definition to create a callable class.
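For example, here is a callable-class version of a tracing decorator - a sketch of ours, not part of any library. Note one caveat: an instance of a callable class is not a descriptor, so unlike a decorator function it will not automatically bind self when applied to a method of a class; it is best suited to decorating ordinary functions.

import collections

class Trace( collections.Callable ):
    """Trace entry and exit of the decorated function."""
    def __init__( self, aFunc ):
        self.func= aFunc
        self.__name__= aFunc.__name__
        self.__doc__= aFunc.__doc__
    def __call__( self, *args, **kw ):
        print "enter", self.func.__name__
        result= self.func( *args, **kw )
        print "exit", self.func.__name__
        return result

@Trace
def double( n ):
    """Double a number."""
    return 2*n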
Here’s a simple decorator that we can use for debugging. This will log function entry, exit and exceptions.
def trace( aFunc ):
    """Trace entry, exit and exceptions."""
    def loggedFunc( *args, **kw ):
        print "enter", aFunc.__name__
        try:
            result= aFunc( *args, **kw )
        except Exception, e:
            print "exception", aFunc.__name__, e
            raise
        print "exit", aFunc.__name__
        return result
    loggedFunc.__name__= aFunc.__name__
    loggedFunc.__doc__= aFunc.__doc__
    return loggedFunc
Here’s a class which uses our @trace decorator.
class MyClass( object ):
    @trace
    def __init__( self, someValue ):
        """Create a MyClass instance."""
        self.value= someValue
    @trace
    def doSomething( self, anotherValue ):
        """Update a value."""
        self.value += anotherValue
Our class definition includes two traced function definitions. Here's an example of using this class with the traced functions. When we evaluate one of the traced methods, it logs the entry and exit events for us. Additionally, our decorated function uses the original method function of the class to do the real work.
>>> mc= MyClass( 23 )
enter __init__
exit __init__
>>> mc.doSomething( 15 )
enter doSomething
exit doSomething
>>> mc.value
38
A decorator transforms an argument function definition into a result function definition. In addition to a function, we can also provide argument values to a decorator. These more complex decorators involve a two-step dance that creates an intermediate function as well as the final result function.
The first step evaluates the abstract decorator to create a concrete decorator. The second step applies the concrete decorator to the argument function. This second step is what a simple decorator does.
Assume we have some qualified decorator, for example @debug( flag ), where flag can be True to enable debugging and False to disable debugging. Assume we provide the following function definition.
debugOption= True

class MyClass( object ):
    @debug( debugOption )
    def someMethod( self, args ):
        real work
Here’s what happens when Python creates the definition of the someMethod() function.
Here’s an example of one of these more complex decorators. Note that these complex decorators work by creating and return a concrete decorators. Python then applies the concrete decorators to the argument function; this does the work of transforming the argument function to the result function.
def debug( theSetting ):
    def concreteDescriptor( aFunc ):
        if theSetting:
            def debugFunc( *args, **kw ):
                print "enter", aFunc.__name__
                return aFunc( *args, **kw )
            debugFunc.__name__= aFunc.__name__
            debugFunc.__doc__= aFunc.__doc__
            return debugFunc
        else:
            return aFunc
    return concreteDescriptor
| <urn:uuid:8685c26d-1657-4774-9045-6324aff364f7> | CC-MAIN-2015-35 | http://www.itmaybeahack.com/book/python-2.6/html/p03/p03c06_decorators.html | s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440644064538.25/warc/CC-MAIN-20150827025424-00276-ip-10-171-96-226.ec2.internal.warc.gz | en | 0.793343 | 2,212 | 3.875 | 4 |
Tests for Gynecologic Disorders
Sometimes doctors recommend screening tests (Tools of Prevention: Screening), which are tests that are done to look for disorders in people who have no symptoms. Women with gynecologic symptoms sometimes need to have diagnostic procedures done.
Two important screening tests for women are cervical cell (cytology) testing, such as the Papanicolaou (Pap) test, to check for cancer of the cervix (the lower part of the uterus) and mammography to check for breast cancer (see Breast Disorders: Mammography). Women at risk of sexually transmitted diseases should be screened for these diseases. Other screening tests are done in pregnant women (see Medical Care During Pregnancy).
Cervical cytology testing (such as the Pap test) involves collecting a sample of cells from the cervix and examining them under a microscope. There are two types of cervical cytology tests: the conventional test and the liquid-based test. Doctors collect the sample by inserting a speculum into the vagina to spread the walls of the vagina apart and using a plastic spatula (similar to a tongue depressor) to remove some cells from the surface and opening of the cervix. Then, a small bristle brush is inserted into the passageway through the cervix (cervical canal) to obtain cells from the wall of the canal. The samples are sent to a laboratory, where they are examined under a microscope for abnormal cells, which may indicate precancerous changes or, rarely, cervical cancer. Usually, the Pap test feels scratchy or crampy, but it is not painful and takes only a few seconds.
Pap tests identify 80 to 85% of cervical cancers, even very early-stage cancer. They can also detect changes in cervical cells that can lead to cancer (precancerous changes). These changes, called cervical intraepithelial neoplasia (CIN), can be treated, thus helping prevent cancer.
Pap tests are most accurate if the woman is not having her period and does not douche or use vaginal creams for at least 24 hours before the test. Experts now recommend that the first Pap test be done after a woman reaches the age of 21 years. How often the test is needed depends mainly on the woman's age and the results of previous Pap tests:
From age 21 to 30: Testing is usually done every 3 years
After age 30: Testing is done every 3 years if only a Pap test is done or every 5 years if a Pap test and a test for human papillomavirus (HPV) are done. However, women with a high risk of cervical cancer need to be tested more frequently. Such women include those who have an HIV (human immunodeficiency virus) infection, who have a weakened immune system (which may result from taking a drug or having a disorder that suppresses the immune system), or who have had abnormal Pap test results.
After age 65 or 70: Testing is no longer needed if test results have been normal for at least 3 years in a row and no result has been abnormal in the last 10 years. Pap tests should be resumed if the woman has a new sex partner or should be continued if she has several sex partners.
Women who have had their uterus completely removed (total hysterectomy) and have not had any abnormal Pap test results do not need Pap tests.
Women at risk of sexually transmitted diseases should be screened yearly for these diseases, even if they have no symptoms. High-risk women include the following:
Sexually active women aged 25 and younger
Women who are just beginning sexual activity
Women who have several sex partners
Women whose partner has had several sex partners
Women who have had a sexually transmitted disease
Women who do not consistently use a barrier contraceptive (such as a condom) and are not in a mutually monogamous relationship or are unsure whether the relationship is mutually monogamous
Women who have a vaginal discharge
For most sexually transmitted diseases, the doctor uses a swab to obtain a small amount of cervical discharge from the cervix. The sample is sent to a laboratory for analysis. Women who think they may have one of these diseases can request screening. Testing for gonorrhea and chlamydial infection can also be done using a urine specimen or a sample from inside the vagina obtained by the woman with a swab.
A doctor may consider screening women for HPV if they are 30 years old or older, if a Pap test detected abnormalities that may result from HPV infection, or if the results were not clear. HPV can cause genital warts or cervical cancer. A sample of vaginal discharge, obtained with a swab, is used for this test. Normal results of an HPV test indicate that cervical cancer and precancerous conditions are highly unlikely. For women at high risk of HPV infection, the HPV test can be done at the same time as a Pap test. If results of a Pap test and an HPV test are normal in women older than 30, neither test needs to be repeated for at least 3 years.
Occasionally, more extensive diagnostic procedures are needed.
A biopsy consists of removing a small sample of tissue for examination under a microscope. Biopsy of the vulva, vagina, cervix, or lining of the uterus can be done.
A cervical biopsy is done when a condition likely to eventually lead to cancer (precancerous condition) or cancer is suspected, usually because a Pap test result was abnormal. A biopsy of the cervix or vagina is usually done during colposcopy. During colposcopy, doctors can identify the area that looks most abnormal and take tissue samples from it. Usually, biopsy of the cervix or vagina does not require an anesthetic, although this procedure typically feels like a sharp pinch or a cramp. Taking a nonsteroidal anti-inflammatory drug (NSAID), such as ibuprofen, 20 minutes before the procedure may help relieve any discomfort during the procedure.
For biopsy of the lining of the uterus (endometrial biopsy), a speculum is used to spread the walls of the vagina, and a small metal or plastic tube is inserted through the cervix into the uterus. The tube is used to suction tissue from the uterine lining. This procedure is usually done to determine the cause of abnormal vaginal bleeding. Also, infertility specialists use this procedure to determine whether ovulation is occurring normally and whether the uterus is ready for implantation of embryos. An endometrial biopsy can be done in a doctor's office and usually does not require an anesthetic. Typically, it feels like strong menstrual cramps. Taking an NSAID, such as ibuprofen, 20 minutes before the procedure may help relieve discomfort during the procedure.
Colposcopy is often done if results of a Papanicolaou (Pap) test are abnormal. For colposcopy, a speculum is used to spread the walls of the vagina and a binocular magnifying lens (similar to that of a microscope) is used to inspect the cervix for signs of cancer. Often, a sample of tissue is removed for examination under a microscope (biopsy). Colposcopy alone (without biopsy) is painless and thus requires no anesthetic. The biopsy procedure is typically described as causing a crampy sensation and also does not require an anesthetic. The procedure usually takes 10 to 15 minutes.
Endocervical curettage consists of inserting a small, sharp, scoop-shaped instrument (curet) into the passageway through the cervix (cervical canal) to obtain tissue. The curet is used to scrape a small amount of tissue from high inside the cervical canal. A cervical biopsy (to remove a smaller piece of tissue from the surface of the cervix) is typically done at the same time. The tissue samples are examined under a microscope by a pathologist.
Endocervical curettage is done when endometrial or cervical cancer is suspected or needs to be ruled out. Usually, it is done during colposcopy and does not require an anesthetic.
For dilation and curettage (D and C), a speculum is used to spread the walls of the vagina. Then, metal rods are used to stretch open (dilate) the cervix so that a small, sharp, scoop-shaped instrument (curet) can be inserted to remove tissue from the lining of the uterus.
This procedure may be used to treat women who have had an incomplete (partial) miscarriage. D and C is sometimes used to identify abnormalities of the uterine lining when biopsy results are inconclusive, but it is no longer commonly used for this purpose because biopsies usually provide as much information and can be done in the doctor’s office. D and C is often done in a hospital. Conscious sedation (when people can breathe on their own and respond to directions but do not feel pain) or a general anesthetic may be used. However, most women do not have to stay overnight in the hospital.
For hysterosalpingography, x-rays are taken after a radiopaque dye (which can be seen on x-rays) is injected through the cervix to outline the interior of the uterus and fallopian tubes.
The procedure is often used to help determine the cause of infertility or to confirm that a sterilization procedure to block the tubes is successful. The procedure is done in a place where x-rays can be taken, such as a hospital or the radiology suite of a doctor's office. Hysterosalpingography usually causes discomfort, such as cramps. Taking an NSAID, such as ibuprofen, 20 minutes before the procedure may help relieve discomfort.
To view the interior of the uterus, doctors can insert a thin viewing tube (hysteroscope) through the vagina and cervix into the uterus. The tube is about 1/4 inch in diameter and contains cables that transmit light. Instruments used for a biopsy, electrocautery (heat), or surgery may be threaded through the tube. The site of abnormal bleeding or other abnormalities can usually be seen and can be sampled for a biopsy, sealed off using heat, or removed. This procedure may be done in a doctor's office, or it may be done in a hospital with a general anesthetic at the same time as dilation and curettage.
To directly examine the uterus, fallopian tubes, or ovaries, doctors use a viewing tube called a laparoscope. The laparoscope is attached to a thin cable containing flexible plastic or glass rods that transmit light. The laparoscope is inserted into the abdominal cavity through a small incision just below the navel. A probe is inserted through the vagina and into the uterus. The probe enables doctors to manipulate the organs for better viewing. Carbon dioxide is pumped through the laparoscope to inflate the abdomen, so that organs in the abdomen and pelvis can be seen clearly.
Often, laparoscopy is used to determine the cause of pelvic pain, infertility, and other gynecologic disorders. Instruments can be threaded through the laparoscope to do some surgical procedures, such as biopsies, sterilization procedures, and removal of an ectopic pregnancy in a fallopian tube. Additional incisions may be required if surgical procedures, such as removal of an ovarian cyst or the uterus (hysterectomy), are needed.
Laparoscopy is done in a hospital and requires an anesthetic, usually a general anesthetic. An overnight stay in the hospital is usually not required. Laparoscopy may cause abdominal pain, but normal activities can usually be resumed in 3 to 5 days, depending on the extent of the procedure that is done through the laparoscope.
In a loop electrical excision procedure (LEEP), a thin wire loop that conducts an electrical current is used to remove a piece of tissue. Typically, this piece of tissue is larger than that obtained in a biopsy of the cervix.
This procedure may be done after an abnormal Pap test result to evaluate the abnormality more accurately or to remove the abnormal tissue. LEEP requires an anesthetic (often a local one), takes about 5 to 10 minutes, and can be done in a doctor's office. Afterward, women may feel mild to moderate discomfort and have a small amount of bleeding. Taking an NSAID, such as ibuprofen, 20 minutes before the procedure may help relieve discomfort during the procedure.
For sonohysterography, fluid is placed in the uterus through a thin tube (catheter) that is inserted through the vagina and then the cervix. Then ultrasonography is done. The fluid fills and stretches (distends) the uterus so that abnormalities inside the uterus, such as polyps or fibroids, can be more easily detected. The procedure is done in a doctor's office and may require a local anesthetic. Taking an NSAID, such as ibuprofen, 20 minutes before the procedure may help relieve discomfort.
Ultrasonography uses ultrasound waves, produced at a frequency too high to be heard. The ultrasound waves are emitted by a handheld device that is placed on the abdomen or inside the vagina. The waves reflect off internal structures, and the pattern of this reflection can be displayed on a monitor.
Ultrasonography can detect an ectopic pregnancy, tumors, cysts, and other abnormalities in the internal reproductive organs (ovaries, fallopian tubes, uterus, and vagina). It is commonly done during pregnancy to determine the condition and size of the fetus, to monitor the fetus, or to guide the placement of instruments during amniocentesis or chorionic villus sampling (see Procedures: Ultrasonography). Ultrasonography is painless and has no known risks.
Drugs Mentioned In This Article
Generic name: ibuprofen. Brand names: ADVIL, MOTRIN IB.
| <urn:uuid:81ce9e55-6dda-4a11-b88b-1461ba8b6193> | CC-MAIN-2015-35 | http://www.merckmanuals.com/home/women-s-health-issues/diagnosis-of-gynecologic-disorders/tests-for-gynecologic-disorders | s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440644065318.20/warc/CC-MAIN-20150827025425-00104-ip-10-171-96-226.ec2.internal.warc.gz | en | 0.938509 | 2,910 | 3.078125 | 3 |
Written by Paul D. Race for Big Indoor Trains™
Thirty-Inch Railroading
In the early days of railroading, nobody cared if your railroad used the same track width as the next railroad over, since cars and trains were never switched between lines anyway. Big companies with enough money, time, and space to build wide smooth curves put their rails 4' 8.5" apart, a "standard gauge" measurement that may go all the way back to ancient Rome. But where there wasn't money, time, or space for huge curves and wide roadbeds, railroads were built with the rails closer together.
Narrow Gauge Railroading - Sure, the "narrow gauge" railroads couldn't handle the big engines and heavy cars of their big brothers, but they could be built for a fraction of the cost. And they were ideal for places where you literally couldn't fit a standard gauge railroad, and where small, unscheduled trains were more common than big scheduled freight and passenger trains.
In the United States, the most common narrow gauge was 36", although railroads were also built that had the rails 18", 24", 30", and 42" apart (plus other, less common variations too numerous to mention). The Denver and Rio Grande Western proved just how much you could do with 36" rails if you engineered carefully. They ran on 36" rails long after most other narrow gauge railroads had either gone out of business or converted to standard gauge. That's one reason many folks like modeling the D&RGW.
30-inch Railroading - But to truly go where no rails had gone before, some railroads cut even more corners, and built tracks with the rails 30" apart (or less). Coal mines, logging camps, rock quarries, lumber mills, steel mills, sugar plantations, and many other industries needed to get materials cheaply from point A to point B, and they didn't need any "frills" like 36"-wide track.
As the list below shows, only a handful of U.S. 30" railroads hauled commercial freight or passenger traffic. But just about every kind of industrial load was hauled on 30" railroads, throughout most of the country. And some of those railroads would also haul passengers or paid freight where there was a need.
Locomotives on 30" Railroads
Because they couldn't handle big heavy engines, most 30" railroads used the smallest engines that would pull the load they needed. Two of the most popular were designs by H. K. Porter and Ephraim Shay. Both designs could be ordered in any gauge, so they also turned up on many other gauges of track, including 24", 36", 40", and standard gauge (56.5").
Cars on 30" Railroads
Because most 30" railroads had specialized uses, most of them had custom cars. Ore cars, pulp wood cars, flat cars, gondolas, even passenger cars were built with the specific needs of this railroad in mind. One especially interesting kind of "car" was the "logging disconnect." These were actually two "trucks" (sets of four wheels) with couplers on the ends and a crossbar across the top for chaining the logs to. The "body" of the car was made up of logs chained to each truck. When the logs were delivered, the "disconnects" were simply coupled to each other for the return trip. A related construction, the "skeleton" car, was used when the logs were going to be cut to a predetermined length. It was also built of two trucks with crossbars, but the trucks were connected "permanently" with a single beam.
This "roll your own" approach to equipment makes 30" railroads especially fun to model. In fact, many modelers who designed something they thought was "outrageous" have later found a real narrow gauge railroad somewhere that actually used such a thing.
Construction on 30" Roads
Quick and inexpensive was the rule. Ties might be set in gravel, but that was far from universal - sometimes they were just laid on the soil and the builders hoped the ground would never be muddy enough to swallow the track, rails and all. Instead of expensive grading, hastily-assembled timber supports might be used for even small dips in the terrain or to repair washouts. Where a retaining wall was critical, "cribbing" with timbers was far more common than the stone or block retaining walls of the "big guys." Rickety trestles were far more common than tunnels, which cost a lot to dig. In other words, if you like railroads with "folksey, rickety, home-grown" looks, 30" railroads offer all the character you could want.
In addition, 30" railroads served sugar plantations, coal towns, logging camps, mining towns, and many other small "communities" where economics often kept buildings and facilities small, dated, and (more often than not) in need of a paint job. In other words, not only did the 30" railroads have plenty of "character," so did their surroundings.
Modeling 30" Railroads
Most model railroaders interested in narrow gauge trains have modeled railroads like the D&RGW that ran on 36" rails. But there are more opportunities for modeling 30" railroads now than there have ever been.
One thing "fun" about modeling 30" railroads was that many pieces of equipment were special-order, custom-built, or home-made, so there were a thousand real-world variations on this stuff. If you make something up that seems reasonable for your 30" railroad, there's a good chance that someone in the real world has already built such a thing.
Another bonus is that modeling 30" railroads lets you use bigger accessories and more detail than you would ordinarily use running standard gauge trains on the same track.
Indoor Modeling - (Mostly On30) - In the 1990s, interest in modeling 30" railroads indoors boomed for a surprising reason: Department 56 Christmas Village houses had become popular, and people were looking for trains to go with their ceramic towns. Bachmann, one of the world's largest model train manufacturers, came up with the ideal solution: On30 trains. What does this name mean? The "O" stands for O scale (1:48), the "n" for narrow gauge, and the "30" for the 30" track width being modeled. Conveniently, ordinary HO-gauge track (16.5mm between the rails) works out to about 30" in O scale, so On30 trains can run on widely available HO track.
The use of O scale (1:48) also allows modelers to take advantage of a host of O scale buildings, figures, and accessories for their railroads.
Some folks have modeled 30" railroads in HO or S scale (HOn30 or Sn30 respectively), but those segments of the hobby are not growing nearly as fast as the On30 segment.
Outdoor Modeling - (Mostly Large Scale) - Most garden trains run on 45mm track (about 1 3/4"). The newest narrow gauge models tend to model 36" railroads in 1:20.3 scale. To model a 30" railroad on 45mm track, your models would have to be in about 16.9:1 scale, about 17% larger than 1:20.3 models (like the Bachmann Shay) and about 25% larger than 1:22.5 models (like the LGB Mogul or Bachmann ten-wheeler).
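As a quick check of that figure: 30" is about 762mm, and 762mm divided by the 45mm track gauge is roughly 16.9, which is where the 1:16.9 (call it 1:17) proportion comes from.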
On the other hand, boiler and driver sizes were almost unique to each 30" railroad, and most 30" railroads had cars that were narrower and shorter (lengthwise) than cars on 36" railroads. So you could "borrow" most 1:20.3 ore cars. You could give your other equipment a 30" look by going up. For example, using high-sided gondolas when possible, increasing the door heights by 17% or so if possible, putting slightly taller cabs and smokestacks on 1:20.3 locomotives, and so on. More information is provided in our article Small But Mighty - 30" Power
The "adjusted" locomotive looks like it has a dinky boiler and drivers, but that's perfectly appropriate for a 30" railroad.
Ironically, Bachmann's 1:20.3 outside frame Consolidation was actually based on a real 30" locomotive, the one at Alder Gulch, Montana. But judging by the photos (they haven't given me a locomotive to try out on my railroad), it is probably 1:20.3 scale and would still need the cab and smokestack to be a little taller to look quite right for 1:17.
Locations of 30" RailroadsThis list shows the 30" railroads that folks are pretty sure were really built and operated for some period of time. Many more never got off the drawing board, or were never taken seriously enough to keep a detailed record. I would guess that today, we probably only have records of about 20% of the 30" railroads that actually ran. But that's great news for the modeler, because, you can stage a Porter or Shay locomotive and a few small cars on any kind of "heavy" industry in pretty much any part of the country and have a pretty good chance of modeling something that really existed.
To see a much more detailed list of historical 30" railroads, visit the World-wide 30" Gauge Railways and Railroads page.
Commercial Traffic 30" Railroads We're Pretty Sure AboutVery few 30" railways were open for public transportation of people or goods (although some industrial railroads hauled passengers and commercial freight on an occasional basis). Commercial 30" railroads we're reasonably sure about included:
States with 30" Logging and Timber Processing Railroads
Mexican 30" Railroads - Several of the 30" railroads in Mexico hauled much of the same kinds of freight and passengers as the 36" railroads in the U.S. Those railroads needed bigger and faster locomotives than the Shays and Porters that dominated U.S. 30" railroads. But they had to make some interesting compromises to fit "conventional" locomotives onto 30" rails.
As an example, the Outside Frame 2-6-2 (Prairie-type) locomotive shown to the right fit a bigger-than-average boiler over 30" rails by having the frame outside of the drivers, not inside as was normal. The drivers' counterweights, however, were outside the frame, making the locomotive look like an eggbeater going down the tracks. It's no coincidence that the Outside Frame Consolidation shown above was also built for a Mexican railroad: Ferrocarril Mexicano. A photo of a Mexican-owned Outside Frame Mikado (2-8-2) is shown on Bruce Pryor's page "Narrow Gauge From Off the Beaten Path."
The moral of this story is that if you want to boldly go where no 30"-gauge modeler has gone before, the 30" railroads of Mexico offer a wealth of inspiration. Some fascinating descriptions and some very unusual photos, are available at the following link.
Note: Big Indoor Trains(tm), Big Train Store(tm)m Family Garden Trains(tm), Big Christmas Trains(tm), and Garden Train Store(tm) are trademarks of Breakthrough Communications(tm) (www.btcomm.com). All information, data, text, and illustrations on this web site are Copyright (c) 1999, 2000, 2001, 2002, 2003, 2004, 2005, 2006, 2007, 2008 by Paul D. Race.
Reuse or republication without prior written permission is specifically
For more information, contact us. | <urn:uuid:805d6837-ec5e-49f0-978c-f009f9b5a762> | CC-MAIN-2015-35 | http://www.bigindoortrains.com/primer/narrow_gauge_railroads/30in_rwys/30in_rwys.htm | s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440644059993.5/warc/CC-MAIN-20150827025419-00343-ip-10-171-96-226.ec2.internal.warc.gz | en | 0.974099 | 2,402 | 3.1875 | 3 |
Coretan Pelajar Artikel Pendidikan
This essay aims to overlook the implementation of English curriculum document at primary school level currently used by the educational ministry. According to the document, English in the primary schools is only an optional subject as part of local contents. As the consequence, the curriculum development of English should follow the same guideline of others local contents curricula which strongly emphasize the students' needs as well as the interest of the community. The examination of the curriculum and its implementation shows the wide gap between the expectation and the reality. Therefore, both aspects need to be considered in order to find more realistic approaches.
The latest curriculum in Indonesian primary schools known as the 2004 Curriculum Framework was developed after a dramatic change in the governmental system in 2001 from centralised to decentralised government. This decentralization program is also called regional autonomy (otonomi daerah). Thus, the release of the 2004 curriculum framework has given a wider autonomy to regional governments to determine their educational policies. In order to implement this autonomy, the current Indonesian education system has adopted the competency-based teaching approach (Kurikulum Berbasis Kompetensi). In regard to this approach, Schneck (1978) argued that "competence-based education has much in common with such approaches to learning as performance-based instruction, mastery learning and individualized instruction. It is outcome-based and is adaptive to the changing needs of students, teachers and community." (Scheck, 1978, p vi cited in Richards, 2001 p. ).
To illustrate how such changing needs have been accommodated and implemented in the 2004 curriculum framework, this essay aims to examine the curriculum of English in the Indonesian primary schools. It is important to note that the notion 'curriculum' stated in this essay refers to White's curriculum definition or what Nunan called as a plan as this term is currently used by the Educational ministry (see also White, 1988 and Nunan, 1988). This essay is organised into two sections. The first section focuses on the examination of the related documents of English curriculum particularly with respect to the target learners, teachers, the institution, the community and the society, and their needs and interests. The second section investigates the extent to which the implementation of this curriculum has met those criteria. In addition, this essay also evaluates the possible mismatch between the curriculum documents and the real needs and interests of the learners and the teachers as well as provides some suggestions concerning this issue.
What is in the document?
The establishment of English in Indonesian primary schools curriculum was marked by the release of the renewed curriculum popularly known as the 1994 Curriculum. In this curriculum framework, English was placed as one alternative subject as a part of two local content (Muatan Lokal) subjects that need to be taught in the primary schools. As it was optional, schools may decide either to include or exclude English in their subjects list. In 2004, the curriculum was reviewed again and this renewed curriculum was called the 2004 Curriculum Framework . In this new curriculum, English is once again being emphasised, but its position is still as a local content and non-compulsory.
As mandated by the legislation, the curriculum of local content subjects must be developed by the schools (school-based curriculum development) namely Curriculum at the Educational Institution Level (Kurikulum Tingkat Satuan Pendidikan or KTSP). Furthermore, the formation of this curriculum requires the collaboration between, schools, regional government and local community and should meet regional characteristics, needs and conditions. In implementing this policy, the educational ministry has set seven basic principles (see table 1) as a guideline for schools to develop their local content curriculum.
Table 1. Principles of curriculum development
1. Centralised to the potency, development, needs and interests of the students and their environment.
2. Varied and interconnected.
3. Responsive to a rapid development of science, arts and technology.
4. Relevant to students' everyday lives.
5. Integral and continuing.
6. Lifelong learning.
7. Balance between national and regional interests.
Source: Depdiknas, 2007
It is clear from these basic principles that the students' interests and needs theoretically become the major considerations in the curriculum development. Moreover, beside these principles, the educational ministry also provides a more specific guideline to be used to assist the local institutions in developing their local contents curricula. This guideline includes several criteria to be considered concerning the target students, the teachers, the institution, the community and the society and also their needs and interests (see depdiknas, 2007).
In regard to the target learners, this guideline has set at least five criteria that need to be taken into consideration in developing the local content curriculum including (i) the used material should match with the level of development of students including their current knowledge and thinking ability and the students' emotional and social development level; (ii) the learning and teaching activities should be convenient and do not add any burden to the students, e.g. by avoiding the homework; (iii) the program should also consider the current physical and psychological aspects of the students ; (iv) the material to be taught also has to be meaningful and useful for students in their everyday lives; and (v) teachers also need to involve the participation of students through their mental, physical and social activities in order to be able to select the appropriate teaching and learning strategies. In other words, in developing local content curricula, schools need to consider students' current knowledge and development level, learning difficulties, age, learning resources, and learning strategies.
The guideline has also given a specific account on respect to the teachers' needs and interests. According to the guideline, teachers should be given a wide authority in the selection of teaching methodologies, teaching resources and materials.
The institutional interests are accommodated by the government policy to place English as one of local content subjects. The educational ministry had acknowledged that the schools' capacity to teach English varied from one school to another and also among regions. The educational ministry has also recognized that not all schools have the resources to teach English, especially relating to the availability of English teachers. Therefore, schools do not require to teach English if they do not have adequate resources for it. In addition, for schools that decided to include English as a part of two local content subjects to be taught in primary schools, the educational ministry has also given the guideline for its implementation (see depdiknas, 2007). According to the guideline, if a school is not able to develop its own local content curriculum, it might seek assistance from other schools in the same region that have successfully developed the curriculum. However, if there are no reference schools in that region, a school may ask assistance from the Curriculum Development Team (Tim Pengembang Kurikulum-TPK) in that region or province.
Furthermore, the needs and interests of the community have been clearly accommodated by looking at the definition of the local contents itself. The Department of National Education has defined local contents as "the program activity that aims to develop (students) competency based on the unique needs, interests, and strengths of local regions in which its substantial could not be classified into the existing subjects" (free translation from Depdiknas 2007). Hence, the local content subjects should highlight the needs and interest of local regions. The government intention to emphasize on the interests of local community is also revealed in the special objectives of local contents set by the Department of National Education, which are: (i) for students to become more familiar with their environment and also their socio-cultural background, (ii) for students to have knowledge, ability and skills about their regions that are relevant to their needs and interests and also the surrounding community, and (iii) for students to demonstrate their attitude and behavior that exhibit their cultural values, and preserve and develop these values to support national development. To conclude, the needs and interests of local community have become the central issue in the local content program.
What is in the reality?
By looking at the curriculum documents about local contents presented in the previous discussion, it is obvious that the central government has given a full support to promote educational decentralisation which underlined the needs and interests of students as well as local regions. But in its implementation, this policy is still far from perfect. The following discussion will highlight some constrains that might affect the level of success of the implementation of English as one of the local content subjects in Indonesian primary schools.
Had students-teachers' needs and interests become the top priority of the implementation of English in the primary schools program?
Before English in the primary schools introduced in 1994, in practice, many primary schools, especially private schools had started this program. The inclusion of English subject in these schools was more like an 'icon' or symbol. Thus, schools that run this program were considered to be having a high-status. After English was formally recognized as one optional subject in the local contents in the 1994 Curriculum and again was emphasised in the 2004 Curriculum, more and more schools including state schools have followed this trend. Consequently, according to Cahyono, the implementation of English in the primary schools is still like a 'fashion' or 'prestige' so that they can be identified as the best schools (Cahyono 2008 cited in Surya 2008). This phenomenon, in fact, is one of the social, cultural & political contexts identified in the English language teaching (see Bretag, 2005). Bretag stated that "English has become the language of power and prestige" (Bretag, 2005 p. 5). As a result, this phenomenon has overshadowed the importance of students' needs and interests which are supposed to be the central issue in running this program.
Moreover, the English teaching in the primary schools has even had negative effects for students as the English teachers in those primary schools were mostly unqualified. The truth is that before 94' curriculum was introduced, the institution that was responsible for teacher training, which is Lembaga Pendidikan Tenaga Kependidikan (Teachers Training Institute) was not intended to produce primary school teachers, but the main concern was to produce high school teachers (Chandra, 2002). Furthermore, she argued that, even for high school teachers, especially for English teachers, the training program was considered as inadequate. This problem had serious impacts on students motivation, as Cahyono commented that incompetent teachers had given bad experience for students to learn language as some of them might have become 'phobia' of foreign language (Cahyono 2008 cited in Surya 2008).
In the mean time, the intention of the educational ministry to give a wider authority to the teachers in the selection of teaching methodologies, teaching resources, and materials, and also to participate in the formation of English curriculum did not gain a great deal of interests either. Based on Sutardi's interview with twenty English teachers in five major cities in Indonesia, he concluded that although these English teachers agreed to have some flexibilities in organizing the teaching material by adapting the students' needs, the majority of them were preferred to have and refer the national curriculum (see Sutardi 2005). They expected that the availability of national curriculum could answer some of their difficulties in teaching English in the primary schools, particularly concerning the difficulty of obtaining teaching materials.
Is English really what the local community needs?
Looking back at the definition of local content which was "the program activity that aims to develop (students) competency based on the unique needs, interests, and the strength of local regions in which its substantial could not be classified into the existing subjects" (Depdiknas, 2007), one important question that need to be raised is "has English really fitted this criterion?" The answer is yes, but in only a small part of the region. It is difficult to find the reasons of learning English that is associated with local community needs other than for tourism industry. For this reason, only regions that get in touch with international community such as Jakarta, Yogyakarta, Surabaya, Bali, and Medan that can be fit with this condition.
The next question is how about the other regions, do they need English? The answer could also be 'yes', but the community and students' desire to learn English in these regions could not fulfill the requirements of local content. In fact, all primary school students need to learn English if they want to compete equally with others in the globalisation era. Labeling English in the primary schools as a local content subject, in my opinion, has contributed to a wider education inequality, whereas the students who do not have the opportunity to learn English in their early age will be left behind from the others in their future education, as many scholars believed that age factor plays a major role in English acquisition (see e.g. Richards et al. 1987; Bialystok and Hakuta, 1999).
Based on the above discussion about the curriculum document of English in the primary schools and its implementation, there are several points that can be made:
(i) The curriculum document has shown the ideal aspects of educational decentralisation in regard to the regional autonomy, but its implementation has shown a serious breakdown of this policy. The major factor that contributed to the failure was the teacher factors. But, unfortunately, this issue did not receive sufficient attention in the current curriculum documents. Therefore it is important to provide more efforts concerning the teacher factors, and more importantly to include the procedures that support teachers' professional development through providing a training arrangement on the development of the curriculum, as the teachers' professionalism had become the central key affecting the breakdown. In addition to this, it is also critical to give more thought on providing more resources for the teachers in order to support the classroom teaching and learning.
(ii) Given the fact that learning English is vital for the preparation to face a global competition, all primary school students in all over the country share the same 'instrumental motivation' (For the discussion about the definition and types of motivation, see e.g. Dornyei, 2001). However, evaluating the current policy of the Indonesian government to place English as a local content subject was not relevant with both the students and community needs. Based on the current practice, only a small percentage of primary schools students had the opportunity to learn English. The disadvantages for the others who do not have this opportunity are huge, especially when they start the junior high school level where English becomes compulsory subject. For this reason, it is essential to consider English to become a compulsory subject in the primary schools in order to attain educational equality. Hence, despite a huge attention from the government to the students' interests as presented in current curriculum document, the students' motivation in learning English should be reconsidered.
(iii) The discussion also shows an enormous gap between the curriculum in the document and in the reality. In this regard, it is crucial to combine both aspects in the development of curriculum. Therefore, a further revision of the current document should take into account these two aspects.
Bialystok, E. And Hakuta, K., 1999, "Confounded age: linguistic and cognitive factors in age differences for second language acquisition", In Birdsong, D. (ed.), Second language acquisition and critical period hypothesis, Lawrence Erlbaum associates, Publishers, New Jersey, pp. 161-81
Bretag, T. 2005, The social, cultural and political contexts of Teaching English to speakers of Other Languages, School of Management, Unisa
Chandra, A. 2002, Pengajaran bahasa Inggris di sekolah dasar, viewed 16 August 2008
Departemen Pendidikan Nasional/Depdiknas 2007, Materi Sosialisasi dan Pelatihan Kurikulum Tingkat Satuan Pendidikan (KTSP), Depdiknas, Jakarta
Dornyei, Z. 2001, Teaching and researching motivation, Longman, Harlow
Nunan, D. 1988, Designing tasks for the communicative classroom. Cambridge, Cambridge University Press
Richards, J. C, 2001, Curriculum development in language teaching. Cambridge. Cambridge University Press
Richards, J.C., Plat, J., and Weber, H., 1987, Longman dictionary of applied linguistics, Longman Group, Hong Kong
Surya, 2008, Bahasa Inggris hanya muatan lokal, akibatkan siswa SD phobia | <urn:uuid:69319ab9-6151-44fd-afa0-24c4ca65024b> | CC-MAIN-2015-35 | http://dwieasmara.blogspot.com/2011/10/english-in-indonesian-primary-schools.html | s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440644065828.38/warc/CC-MAIN-20150827025425-00275-ip-10-171-96-226.ec2.internal.warc.gz | en | 0.958367 | 3,390 | 2.6875 | 3 |
Early Mars Was Frozen - But Habitable: Part II
Moffett Field - Sep 24, 2003
Early Mars was cold - very cold, says Chris McKay, a planetary scientist at the NASA Ames Research Center. But that doesn't mean it was incapable of supporting life. McKay has extensively studied life in some of the harshest environments in the world: the Antarctic dry valleys, the Arctic, and the Atacama desert.
At a meeting of the American Astronomical Society's Division of Planetary Sciences, held in September 2003 in Monterey, CA, McKay gave a plenary talk in which he discussed the evidence for a cold, but wet, early Mars. McKay compared these early Martian conditions to Antarctica's modern-day dry valleys. And he laid out a strategy for searching for evidence of the organisms that may have inhabited Mars during its first billion years. His talk is presented here in two parts; this is part two.
As I mentioned earlier, Antarctica is very cold, but the pressure is high enough to support liquid. So when the glaciers melt, the water flows down the Onyx River, and it's stable against boiling and it flows into the lake.
That's the one requirement that's not met on Mars today. The reason that you couldn't see such systems on Mars today is not because it's too cold. Cold isn't really the issue here. It's because the pressure is too low.
So the key environmental factor for making Mars a better place for life, a kinder, gentler planet, is not making it warmer. The key factor is raising the pressure up from 6 to maybe 100 millibar. [One hundred millibar is one-tenth of the pressure on Earth at sea level.] Not much higher than that would be needed.
At that pressure, liquid water could exist on a very cold Mars. Lake Vanda in Antarctica could be an analog, for example, for Gusev Crater, which Nathalie Cabrol and many others have shown is likely once to have been full of water. If you look around at the terrain around Gusev, you can see that it would have been very cold at the time. So the remnant of an ice-covered lake could be what the MER-A lander "Spirit" is going to land in.
And what might it find? Probably the best thing it might find in a place like this is a fossil.
And now I want to go back to the question that I raised originally: Why are we going to Mars? We're going to Mars to search for a second genesis of life. A second genesis of life is not something we're going to get from a fossil. Fossils are not enough.
We'd be happy to find a good fossil on Mars. I'm sure it would make the cover of Science - or Nature, depending on whether it's NASA or ESA that finds it. It would tell us that there was life on Mars. But it wouldn't tell us the nature of that life, or its relationship, if any, to life on Earth. That's the key question: We want to know not just was there life on Mars, but how does it relate to us? Are martian organisms our cousins, or do they represent a second genesis?
Well, to do that, again I return to analogs on Earth, as a way of developing a strategy for searching for a second genesis on Mars. In Siberia, in old permafrost on Earth, we find frozen bacteria. This is 3.5-million-year-old permafrost, some of the oldest permafrost on Earth. And we find viable bacteria in this permafrost.
We're developing drills that can drill in permafrost without drilling fluid. There was some work done just a few months ago up in the Arctic, drilling in permafrost with air-supported drills, using a technique that lets us demonstrate that there's no contamination getting into the drill cores. We're learning how to use a drill as a microbiological instrument.
In Antarctica, we think we have ice that may be 8 million years old. Again, in that ice, we find viable bacteria. In the oldest and coldest ice on Earth, we find organisms still preserved. So we want to apply this logic to Mars. Could something remain preserved in the permafrost on Mars for a long period of time, perhaps billions of years?
Well, what limits long-term dormancy? There are two factors. One is the second law of thermodynamics, thermal decay. But this is not that important on Mars because it is so cold there.
The other, background radiation, from natural levels of the decay of uranium, thorium and potassium, even deep below the surface, is about 0.2 rads per year on Earth and would be roughly similar on Mars. This would deliver a lethal dose, even to the most radiation-resistant organisms, in about 100 million years.
So although it's cold enough in the martian permafrost for life to be preserved, over the time period that we're interested, in there would be hundreds of lethal doses delivered to any dormant organisms trapped there. So - "It's dead, Jim."
But it's there. And that's important. Because there's a big difference between something that's dead and a fossil. If you're searching for life and you find a corpse - that's what it would be, a corpse - you can do an autopsy. You can determine whether the corpse has the same genetic biological content that we have. You can't do an autopsy on a fossil. And the permafrost on Mars is where we have the best chance of finding these frozen, dead micro-martian corpses to do an autopsy on.
We have thought for awhile that there was a permafrost on Mars, that it had deep, cold ice-cemented ground. Now we have further indications that that's the case. The Mars Odyssey neutron spectrometer results show ground ice in the polar regions, and the magnetometer results indicate that in some of those regions, the ice is very old and very stable.
So I would argue that at longitude 180 degrees west and latitude about 80 south, far enough south that you're in deep permafrost but not so far south that you're in the younger polar deposits, you could find the oldest frozen material on Mars. And the magnetic striping in the terrain there, seen by the magnetometer onboard Mars Global Surveyor, confirm that this is a region that's likely to have been undisturbed by impacts for billions of years. Because you see where there have been large impacts, like in Hellas and Argyre Basins, that the pattern of magnetic striping has been erased.
So one strategy for searching for life is to go find fossils in Gusev. But then we also need to address the question of a second genesis, which is really the big question, the question that scientifically and culturally drives astrobiology. And to do that, we need to go drill deep into this permafrost where we will hopefully find an actual Martian organism.
Now, if we find a dead organism on Mars, how are we going to tell if it was once alive? How are we going to recognize life? One is to use a tricorder. You remember in episode 26, they adjusted the tricorder so they could not just detect life, but could detect silicon life, all done within a few seconds of the show - a great device.
Another approach for detecting life is to say, "Well, we'll just know it when we see it."
But maybe we can do better. And I want to make the suggestion that there's a general principle that we can use to detect life. I call it the LEGO principle. And it's based on the rather simple observation that life is built largely from a small number of components. Life is not just a hodge-podge of stuff all thrown together. It's certain bricks, used over and over again. The basic polymers of life, the proteins, the polysaccharides, the nucleotides, DNA and RNA, are all based on these few bricks, used over and over again. The same way a LEGO city is built out of identical bricks. And this is likely a common property of biology, as well as of mass-produced children's toys, throughout the universe.
There are, for example, the 20 amino acids that are used as the LEGO blocks in building up the proteins used by on Earth. Alien organisms might have a different set of LEGO blocks, but they would have a set of LEGO blocks. They would use certain molecules over and over again. And we would see these molecules show up in unusually high numbers.
And that's, I argue, a possible way to recognize a biological organic material from a non-biological one, even if it's alien and we can't amplify it with PCR. So I'd like to propose that on a future mission to Mars we send a 10-meter drill to the polar regions, to the permafrost, to look for martian LEGO blocks. I think that's where we have the best chance of digging up some ancient organic material from martian bugs, and of finding evidence of a second genesis of life.
Email This Article
Comment On This Article
Mars at JPL
Center for Mars Exploration
Subscribe To SpaceDaily Express
Mars News and Information at MarsDaily.com
Lunar Dreams and more
Pasadena CA (JPL) Jan 09, 2006
Last week Spirit completed robotic-arm work on "El Dorado." The rover used all three of its spectrometers plus the microscopic imager for readings over the New Year's weekend.
|The content herein, unless otherwise known to be public domain, are Copyright 1995-2006 - SpaceDaily.AFP and UPI Wire Stories are copyright Agence France-Presse and United Press International. ESA PortalReports are copyright European Space Agency. All NASA sourced material is public domain. Additionalcopyrights may apply in whole or part to other bona fide parties. Advertising does not imply endorsement,agreement or approval of any opinions, statements or information provided by SpaceDaily on any Web page published or hosted by SpaceDaily. Privacy Statement| | <urn:uuid:7fbe67ef-a4a4-4765-9f65-83b287f3b237> | CC-MAIN-2015-35 | http://www.marsdaily.com/reports/Early_Mars_Was_Frozen__But_Habitable_Part_II.html | s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440644064362.36/warc/CC-MAIN-20150827025424-00047-ip-10-171-96-226.ec2.internal.warc.gz | en | 0.952386 | 2,113 | 3.484375 | 3 |
Lithuania Table of Contents
In 1995 Lithuania had an estimated population of 3,717,000, which was 44,000 fewer people than in 1992. Of the total, females were in the majority, as in most Central European countries and in Russia. The population group that has increased most quickly in Lithuania, as in many other relatively developed countries, consists of senior citizens and pensioners (those over age sixty) (see fig. 12). For example, pensioners grew in number from 546,000 to 906,000 between 1970 and 1991. This group grew from 17.3 percent of the population in 1980 to 19.5 percent in 1992. The zero-to-fifteen-year-old age-group, by comparison, diminished slightly from 25.2 percent in 1980 to 23.9 in 1992, not as a result of increased mortality but as a result of a continuing decline in the birth rate. The group of working-age people (aged sixteen to fifty-nine for men and fifteen to fifty-four for women) also decreased, from 57.5 percent to 56.6 percent. The birth rate decreased from 17.6 per 1,000 population in 1970 to 12.5 per 1,000 population in 1993 and 12.0 per 1,000 population in 1994. Mortality increased from 10.5 per 1,000 population in 1980 to 10.9 in 1991 and 12.8 in 1994. Life expectancy in 1993 was 63.3 years for males and 75.0 years for females, or an average of 69.1 years. This, too, was on the decline from the peak years of 1986-87, when the average was 72.5 years (67.9 years for males and 76.6 years for females). The decrease coincides with the worsening economic situation and the decline in the quality of health services during the postindependence economic transition.
The average Lithuanian family is still somewhat larger than families in the neighboring Baltic states, but it has been declining. The average family size shrank to 3.2 by 1989. People marry young, but their marriages are often quickly dissolved. The divorce rate has been increasing. In 1989, of 9.3 marriages per 1,000 population, there were 3.3 divorces. The highest divorce rate is among ethnic Russians and in ethnically mixed families. These statistics indicate the existence of social problems with which society has been ill equipped to deal. Churches are not allowed to intervene to address these problems, and the profession of social work is still virtually nonexistent. The postcommunist government must face the formidable task of developing a social work sector.
Under Soviet rule, especially in the last decade, one-half or more of the annual population increase resulted from immigration, primarily from Russia. But this situation has changed. More people emigrate to former Soviet republics than arrive from them, and more people leave for the West than come from there. In 1990 Lithuania's net migration loss to former Soviet republics was 6,345. Loss to the West includes Jewish emigration. Gains from the West include returning Americans and Canadians of Lithuanian descent.
Soviet industrialization brought about fast and sustained urban development. Annually, almost 1 percent of the rural population has moved to cities since the early 1950s. In 1939 only 23 percent of the population lived in cities; in 1992 the urban percentage was 69. Lithuania has five cities with a population of more than 100,000. The largest is the capital, Vilnius, established in 1321 (1994 population 584,000); Kaunas, the capital between the two world wars, founded in 1361 (1994 population 424,000); the port city of Klaipeda, established in 1252 (1994 population 205,000); the center of the electronics industry, Siauliai, founded in 1236 (1994 population 147,000); and the city of chemical and automobile parts industries, Panevezys, founded in 1548 (1994 population 132,000).
In 1994, according to official estimates, 81.1 percent of Lithuania's population consisted of ethnic Lithuanians. The remaining 18.9 percent was divided among Russians (8.5 percent), Poles (7.0 percent), Belarusians (1.5 percent), Ukrainians (1.0 percent), and others, including Jews, Latvians, Tatars, Gypsies, Germans, and Estonians (0.9 percent). Altogether, people of more than 100 nationalities live in Lithuania.
The proportion of the ethnic Lithuanian population--more than 90 percent of whom speak Lithuanian--stayed at 80 percent or a fraction higher until 1989, when it dropped slightly below 80 percent. The decrease resulted in fears that a pattern of decline would develop as a result of increasing Russian immigration, which might endanger the survival of Lithuania's culture and national identity as it did in Estonia and Latvia.
The Russian minority consists of old and new immigrants. Many Russians settled in Lithuania in the nineteenth century or in the early twentieth century, shortly after the Bolsheviks came to power in Moscow. Two-thirds of the Russian minority, however, are immigrants--or their descendants--of the Soviet era, many of whom regard Lithuania as their homeland. They usually live in larger cities. In Vilnius 20.2 percent of the population was Russian in 1989. The same year, in Klaipeda, 28.2 percent of the inhabitants were Russians; in Siauliai, 10.5 percent. Ignalina, where the nuclear power plant is located, had a Russian majority of 64.2 percent. Less than 10 percent of the population in Kaunas and the resort towns of Druskininkai, Palanga, or Neringa was Russian, however. These percentages most likely will decline slightly in the 1990s because some Russians, finding it difficult to accept that they live in a "foreign" country, are leaving Lithuania. The majority of Russians, however, have shown little inclination to leave; 88 percent of those polled in the fall of 1993 described relations between their group and the ethnic Lithuanian population as good, and more than 60 percent felt that economic conditions for people like themselves would be worse in Russia than in Lithuania.
Poles live primarily in the city of Vilnius (18.8 percent of Vilnius's population in 1989) and in three adjacent rural districts. In 1989 the ethnic Polish population in the Salcininkai district constituted 79.6 percent; in the rural district of Vilnius, it was 63.5 percent; and in the district of Trakai, it was 23.8 percent. Small Polish groups also live in a number of other localities. Since the late 1940s, the Polish presence in Lithuania has declined considerably. About 200,000 Poles left Lithuania for Poland in 1946, under an agreement signed between Warsaw and Vilnius. Afterward, the Polish percentage of Lithuania's population declined from 8.5 percent in 1959 to 7.0 percent in 1989, primarily as a result of the influx of Russians. The Polish population of eastern Lithuania is composed of inhabitants whose families settled there centuries ago, of immigrants who came from Poland in the nineteenth and early twentieth centuries when the region was part of Poland, and of many assimilated Lithuanians and Belarusians.
Jews began settling in Lithuania in the fourteenth century. In time, Vilnius and some other cities became centers of Jewish learning, and Vilnius was internationally known as the Jerusalem of the North. Between the two world wars, Jews developed an active educational and cultural life. The Jewish community, which did not experience large-scale persecution until World War II, was almost entirely liquidated during the Nazi occupation. In 1989 only 12,400 Jews were left in Lithuania, and emigration after independence had cut their number to an estimated 6,500 by 1994.
For centuries, Vilnius has been an ethnically diverse city. Historically, the city has served as a cultural center for Lithuanians, Poles, Jews, and Belorussians. In the sixteenth and early seventeenth centuries, it also was a center of Ukrainian religious and cultural life. At the turn of the century, the largest minority ethnic group was Jewish. After World War II, the largest minority ethnic group was Polish. The population of Vilnius in 1989 was 50.5 percent Lithuanian, 20.2 percent Russian, 18.8 percent Polish, and 5.3 percent Belorussian.
The Lithuanian constitution of 1992 provides guarantees of social rights that were earlier provided by the Soviet regime. The constitution puts special emphasis on the maintenance and care of the family. It expresses in detail, for example, the guarantee for working mothers to receive paid leave before and after childbirth (Article 39). The constitution provides for free public education in all state schools, including schools of higher education (Article 41). The constitution forbids forced labor (Article 48); legalizes labor unions and the right to strike (Articles 50 and 51); guarantees annual paid vacations (Article 49); and guarantees old-age and disability pensions, unemployment and sick leave compensation, and support for widows and families that have lost their head of household, as well as for others in situations as defined by law (Article 52). Finally, the constitution guarantees free medical care (Article 53).
All political groups support these guarantees--considered more or less inviolable--although it is not clear to what extent the government will be able to fund the promised services during the continuing economic transition. The amounts of support and the quality of services have declined from the modest, but always predictable, level first established in the Soviet period.
The national system of social security consists of programs of social insurance and social benefits designed to continue the benefits provided by the Soviet system. Social insurance includes old-age retirement; survivor and disability pensions; unemployment compensation; pregnancy, childbirth, and child supplements; certain welfare support; and free medical care. It is cradle-to-grave insurance. According to a 1990 law, payments cannot be lower than necessary for a "minimal" living standard. In 1990 old-age and disability pensions in Lithuania were slightly more generous than in Estonia and Latvia. The budget for the program is separate from the national and local budgets. Only military pensions and some other special pensions are paid from the national budget.
Social insurance is financed, according to a law passed in 1991, from required payments by workers and employers, from income generated by the management of state social insurance activities, and from budgetary supplements by the state if the program threatens to run a deficit. To be eligible for an old-age pension, a male worker must be at least sixty years of age and have at least a twenty-five-year record of employment. A woman must be fifty-five and have a record of twenty years of employment. This category of recipients includes not only factory and government workers but also farmers and farm workers.
A program of social benefits is financed by local governments. It includes support payments for women during pregnancy and childbirth and for expenses after the child's birth. The program features single payments for each newly born child, as well as child support for single parents or families. These latter payments continue up to age limits established by law. The state also maintains a number of orphanages, sanatoriums, and old-age homes.
In the medical field, Lithuania has sufficient facilities to fulfill the guarantee of free medical care. In 1990 the country had more than 14,700 physicians and 2,300 dentists; its ratio of forty-six physicians and dentists combined per 10,000 inhabitants compared favorably with that of most advanced countries. In addition, in 1990 Lithuania had more than 47,000 paramedical personnel, or 127 per 10,000 population and 46,200 hospital beds, or 124 beds per 10,000 population. In the medical profession, Lithuania's cardiologists are among the most advanced in the former Soviet Union. In 1987 the first heart transplant operation was performed at the cardiac surgery clinic of Vilnius University. Hundreds of kidney transplants have been performed as well. One reasonably reliable and generally used indicator of the quality of a country's health services system is infant mortality. In 1990 Lithuania's infant mortality rate of 10.3 per 1,000 population was among the lowest of the Soviet republics but higher than that of many West European countries.
Special features of Lithuania's health status are high alcoholism (191 cases per 100,000 persons), low drug abuse (3.1 cases per 100,000), and few cases of human immunodeficiency virus (HIV) infection. Reported cases of HIV in 1992 were under 100. The main causes of death are cardiovascular diseases, cancer, accidents, and respiratory diseases. In addition to alcoholism, important risk factors for disease are smoking, a diet high in saturated fat, hypertension, and environmental pollution.
Notwithstanding efficient ambulance service and emergency care, medical services and facilities in Lithuania suffer from a lack of equipment, supplies, and drugs, as well as from inertia in the operation and administration of health services. The system is mainly state owned and state run. Private medical practice, begun only in the late 1980s, has not progressed appreciably because of the economic crisis. Since 1989 the government has encouraged church groups and others to enter the field of welfare services and medicine. The best-known such group is the Roman Catholic charitable organization Caritas.
Health care expenditures increased from 3.3 percent of the gross national product (GNP--see Glossary) in 1960 to 4.9 percent of GNP in 1990, but this figure is still low by world standards. Lithuania is unable to afford investments to improve its health care infrastructure at this time. Lithuania needs humanitarian assistance from the world community in importing the most critically needed drugs and vaccines. Disease prevention needs to be emphasized, especially with regard to prenatal, pediatric, and dental care. To reduce the occurrence of prevalent risk factors, the government needs to make fundamental improvements in public education and health programs.
Lithuania's standard of living in the early 1990s was slightly below Estonia's and Latvia's but higher than in the rest of the former Soviet Union. At the end of 1992, the standard of living had declined substantially, however. Energy shortages caused severe limitations in heating apartments and providing hot water and electricity. Before the post-Soviet economic transition, Lithuanians had abundant food supplies and consumed 3,400 calories a day per capita, compared with 2,805 calories for Finns and 3,454 calories for Swedes. But an average Lithuanian had only 19.1 square meters of apartment living space (less in the cities, more in rural areas), which was much less than the 30.5 square meters Finns had in the late 1980s. Housing, moreover, had fewer amenities than in the Scandinavian countries; 75 percent of Lithuanian urban housing had running water in 1989, 62 percent had hot water, 74 percent had central heating, 70 percent had flush toilets, and 64 percent had bathing facilities. Formerly low utility rates skyrocketed in the 1990s. Rents also increased, although by the end of 1992 almost 90 percent of all state-owned housing (there was some privately owned housing under Soviet rule) had been privatized--bought from the state, mostly by those who lived there. In 1989 families were well equipped with radios and televisions (109 and 107 sets, respectively, per 100 families). Most had refrigerators (ninety-one per 100 families), and many had washing machines (seventy), bicycles (eighty-four), vacuum cleaners (sixty), sewing machines (forty-eight), and tape re-corders (forty-four). Every third family had a private automobile (thirty-six automobiles per 100 families). Detracting from the quality of life, however, was the increasing rate of violent crime, especially in the larger cities (see Crime and Law En-forcement, this ch.).
Data as of January 1995
Lithuania Table of Contents | <urn:uuid:35711c7a-2404-48b8-a0c5-a1e8e5e61f3c> | CC-MAIN-2015-35 | http://www.country-data.com/cgi-bin/query/r-8289.html | s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440645199297.56/warc/CC-MAIN-20150827031319-00281-ip-10-171-96-226.ec2.internal.warc.gz | en | 0.968624 | 3,269 | 2.703125 | 3 |
In economic circles, there has been a lot of buzz about quantitative easing of late. Basically, the U.S. Federal Reserve has lowered interest rates to near zero percent, and the fear is that these cuts will not do enough to revive lending and reflate the U.S. economy. Therefore, the Fed has decided to take more drastic measures, one of which is quantitative easing: flooding the economy with money.
This experiment is not without risks. There is the potential for very high inflation down the line if the Fed is successful. But does the Fed have a choice? It seems that it is looking at deflation or depression on the one hand and stagflation on the other. Take your pick.
But before you take sides, let me first go back a few years in history to describe exactly what quantitative easing, a policy first practiced in earnest in Japan, really is. Wikipedia has an excellent definition.
Quantitative easing was a tool of monetary policy that the Bank of Japan used to fight deflation in the early 2000s.
The BOJ had been maintaining short-term interest rates at close to their minimum attainable zero values since 1999. More recently, the BOJ has also been flooding commercial banks with excess liquidity to promote private lending, leaving commercial banks with large stocks of excess reserves, and therefore little risk of a liquidity shortage.
The BOJ accomplished this by buying much more government bonds than would be required to set the interest rate to zero. It also bought asset-backed securities, equities and extended the terms of its commercial paper purchasing operation.
In essence, the Bank of Japan found that despite lowering short-term interest rates to zero, it could not get its zombie banking sector to lend. Credit, the lifeblood of our fractional reserve banking system, was just not increasing. Therefore, the Bank of Japan began buying Japanese government bonds (JGBs) with money that it created out of thin air; that is, it bought existing assets with money that did not previously exist. Central banks can do this because they control the electronic printing presses. Now, the likes of Murray Rothbard, an Austrian School economist, call this counterfeiting. However, regardless of how you see it, this is how our monetary system works.
But this is also the definition of inflation. See my post, "What is inflation," for a primer on why inflation is not consumer price inflation but rather an increase in the supply of money. And while the Japanese economy did not get higher consumer price inflation as a result of the massive quantitative easing, this money did feed the carry trade as people borrowed in yen and invested abroad. The Japanese central bank was thus very much a factor in creating the global bubble we just experienced.
Printing money is effective because it puts more high-powered money into circulation. The aim is to increase bank reserves enough to expand the lending that those reserves support.
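To see the mechanics, consider the textbook deposit multiplier. The sketch below is a minimal illustration, not a model of actual Fed operations; the 10 percent reserve ratio and the $100 injection are assumptions chosen purely for round numbers.

```python
# Minimal sketch of textbook deposit expansion in a fractional
# reserve system. The 10% reserve ratio and the $100 injection
# are illustrative assumptions, not actual Fed parameters.

def deposit_expansion(new_reserves: float, reserve_ratio: float, rounds: int = 200) -> float:
    """Total deposits created as banks repeatedly re-lend excess reserves."""
    total_deposits = 0.0
    lendable = new_reserves          # assume the injection arrives as a deposit
    for _ in range(rounds):
        total_deposits += lendable           # each loan becomes a new deposit
        lendable *= (1.0 - reserve_ratio)    # banks hold back required reserves
    return total_deposits

if __name__ == "__main__":
    created = deposit_expansion(new_reserves=100.0, reserve_ratio=0.10)
    # Converges to new_reserves / reserve_ratio = $1,000,
    # i.e. a simple deposit multiplier of 1 / 0.10 = 10.
    print(f"Deposits created from $100 of new reserves: ${created:,.2f}")
```

Each round of re-lending is smaller than the last, so total deposits converge to the injection divided by the reserve ratio. That geometric series is why a relatively small addition of reserves can, in principle, support a much larger expansion of credit.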
And Fed Chairman Ben Bernanke knows this. He is a student of the Great Depression and of deflation, and a well-regarded economic historian. Bernanke earned the moniker "Helicopter Ben" a few years back as a result of comments he made in 2002 at the National Economists Club, before he became Fed Chairman, about using money creation to avoid deflation. Here is what he said, as quoted on the Federal Reserve's website:
As I have mentioned, some observers have concluded that when the central bank’s policy rate falls to zero–its practical minimum–monetary policy loses its ability to further stimulate aggregate demand and the economy. At a broad conceptual level, and in my view in practice as well, this conclusion is clearly mistaken. Indeed, under a fiat (that is, paper) money system, a government (in practice, the central bank in cooperation with other agencies) should always be able to generate increased nominal spending and inflation, even when the short-term nominal interest rate is at zero.
The conclusion that deflation is always reversible under a fiat money system follows from basic economic reasoning. A little parable may prove useful: Today an ounce of gold sells for $300, more or less. Now suppose that a modern alchemist solves his subject’s oldest problem by finding a way to produce unlimited amounts of new gold at essentially no cost. Moreover, his invention is widely publicized and scientifically verified, and he announces his intention to begin massive production of gold within days. What would happen to the price of gold? Presumably, the potentially unlimited supply of cheap gold would cause the market price of gold to plummet. Indeed, if the market for gold is to any degree efficient, the price of gold would collapse immediately after the announcement of the invention, before the alchemist had produced and marketed a single ounce of yellow metal.
What has this got to do with monetary policy? Like gold, U.S. dollars have value only to the extent that they are strictly limited in supply. But the U.S. government has a technology, called a printing press (or, today, its electronic equivalent), that allows it to produce as many U.S. dollars as it wishes at essentially no cost. By increasing the number of U.S. dollars in circulation, or even by credibly threatening to do so, the U.S. government can also reduce the value of a dollar in terms of goods and services, which is equivalent to raising the prices in dollars of those goods and services. We conclude that, under a paper-money system, a determined government can always generate higher spending and hence positive inflation.
Of course, the U.S. government is not going to print money and distribute it willy-nilly (although as we will see later, there are practical policies that approximate this behavior). Normally, money is injected into the economy through asset purchases by the Federal Reserve. To stimulate aggregate spending when short-term interest rates have reached zero, the Fed must expand the scale of its asset purchases or, possibly, expand the menu of assets that it buys. Alternatively, the Fed could find other ways of injecting money into the system–for example, by making low-interest-rate loans to banks or cooperating with the fiscal authorities. Each method of adding money to the economy has advantages and drawbacks, both technical and economic. One important concern in practice is that calibrating the economic effects of nonstandard means of injecting money may be difficult, given our relative lack of experience with such policies. Thus, as I have stressed already, prevention of deflation remains preferable to having to cure it. If we do fall into deflation, however, we can take comfort that the logic of the printing press example must assert itself, and sufficient injections of money will ultimately always reverse a deflation.
Translation: a central bank should always prefer to print money, or create some reasonable facsimile of printing, to prevent deflation before its onset, rather than try to deal with deflation once it has set in.
I strongly suggest you read Bernanke's remarks in their entirety. The most important statement he made was about the electronic printing press and its effectiveness in combating deflation. If we are to take him at his word, Bernanke will print money — lots of it — to avoid deflation. The key, of course, is high-powered money. Rebecca Wilder explained this quite well in a recent post of hers:
What is the difference between money and high powered money? Money is a function of two things
- The monetary base, which equals bank reserves plus currency in circulation
- The money multiplier, or how quickly the base switches hands in a fractional reserve banking system (for a discussion of money creation, see this wiki article).
The Fed is raising the monetary base through its QE policy and increasing its balance sheet (credit extended to the banking system) from $884 billion on August 28 to $2.1 trillion on November 28. The Fed simply creates new monetary base (reserves) out of thin air; hence, the printing money connotation.
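Wilder's two ingredients can be collapsed into a single identity: broad money is roughly the monetary base times the money multiplier. The sketch below uses the balance-sheet figures quoted above, but the multiplier values are hypothetical, picked only to illustrate how a collapsing multiplier can absorb even a huge base expansion.

```python
# Broad money ~= monetary base * money multiplier.
# The base figures are the ones cited above ($884B -> $2.1T);
# the multiplier values are hypothetical, for illustration only.

def broad_money(base_trillions: float, multiplier: float) -> float:
    """Rough broad money stock implied by a given base and multiplier."""
    return base_trillions * multiplier

before = broad_money(base_trillions=0.884, multiplier=9.0)  # assumed pre-crisis multiplier
after = broad_money(base_trillions=2.1, multiplier=3.8)     # assumed crisis multiplier

print(f"Before QE: ~${before:.1f} trillion of broad money")
print(f"After QE:  ~${after:.1f} trillion of broad money")
# The base more than doubles, yet broad money barely moves,
# because deleveraging banks sit on reserves instead of re-lending them.
```

In other words, quantitative easing only reflates if the new reserves are multiplied into credit. If the multiplier falls as fast as the base rises, the printing press spins without traction.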
This experiment is not working, though. Until now, the conventional wisdom was that in Japan in the 1990s and in the U.S. in the 1930s, policymakers waited too long to begin any quantitative easing. Deflation had already set in, and QE was pretty much a bust as a result. However, we are beginning to see that it was not only the delay in policy but also the natural course of deleveraging that caused credit and the money multiplier to contract.
I mentioned this in April in my post "Finding a bottom," which I quote here in its entirety because it is germane to why credit will contract:
As the writedowns at global financial institutions near $300 billion in capital lost as a result of the sub-prime crisis, the question as to when we reach a bottom is ever more urgent. Market history tells us that the severity of the bust after an economic upswing is usually related to the size of the original upswing. This mortgage and credit bubble, being the mother of all bubbles, requires a sustained and robust deleveraging to set things right. Therefore, the real economy in which we all live and breathe should feel some very significant impacts for some time to come. My hope is this process will find a bottom late next year.
We are now in the midst of a financial crisis after a large speculative bubble. As such, the real economy effects will tend to be larger than during a garden-variety recession. The problem is the deleveraging feedback loop that needs to work its way through the system. The Federal Reserve, the Bank of England and the European Central Bank are doing everything they can to make sure this process occurs without a systemic failure in the global financial system like the one suffered during the Great Depression.
De-leveraging begins when the speculatively financed investments go bust and writedowns occur. Normally, banks can handle these writedowns without having to de-leverage because they are well-capitalized. However, when banks writeoff unexpectedly large amounts of capital, this reduces their capital base and makes the bank look more leveraged and risky as a financial institution. $300 billion is a large amount. When the financial sector writes off $300 billion in equity capital in the span of one year, some institutions start to look pretty risky. In a fractional reserve banking system (in which only part of deposits are held in reserves), banks risk ruin if depositors lose trust in their stability. Therefore, it is important for institutions to re-capitalize after large writedowns.
Re-capitalising can occur in one of three ways: the banks can increase their equity capital through paid-in capital, they can increase their equity base through profits or they can de-leverage. To date, banks have been forced to issue additional equity capital (often from sovereign wealth funds) in order to maintain strong balance sheets. However, the Federal Reserve has done all it can (and more — indeed, too much more) by lowering interest rates to banks in order to increase the spread between money lent and money borrowed, which will increase bank profits. This too will help banks — albeit slowly as the banks can only profit from these spreads over time.
Nevertheless, banks will not be able to strengthen their balance sheets quickly enough through those two methods without significant deleveraging. They will need to sell assets and reduce future credit availability in order to gain the rock solid balance sheets that customers, counter-parties, and consumers will require in a more cautious economic environment.
The Feedback Loop
How much deleveraging will need to occur? That brings us back to market history, which tells us that deleveraging will be extensive given the size of the speculative run-up earlier this decade. Moreover, the feedback loop with the real economy suggests that many more writedowns are to come. As investments have soured and credit availability becomes scarce, individuals and companies have started to feel the pinch in the real economy. Layoffs have begun in earnest. As a result, consumers have cut back. This will cause more financial distress in other lending sectors of the economy. Top on the list are Alt-A Mortgages, some Prime Mortgages (especially zero percent, zero down and adjustable rate varieties), Construction and Development loans, Corporate Real Estate loans, Credit Cards, Auto Loans and High Yield Corporates. From here, the feedback loop will begin again with losses, writeoffs, and credit tightening in the new distressed sectors as well as in the previously distressed sub-prime market. The feedback loop continues with more deleveraging, layoffs, consumers tightening their belt, and reduced corporate profitability.
At some point, this whole feedback loop will end a we will find a bottom. The hope is that we can do this with a minimum of damage to the real economy, a minimum of personal financial distress and as quickly as possible. When we reach the bottom is anybody’s guess, but expect this deleveraging process to play out at least for the entirety of 2008 and through well into 2009. Let’s hope that we find a bottom then.
When we do find a bottom we should know whether Bernanke has been successful. If he fails, prices fall, the real value of debt rises and depression ensues. If Bernanke succeeds, however, the outcome is less clear. Rebecca Wilder paints a good picture of what is at stake here.
Will the Fed’s QE strategy lead to inflation? In the short-term, no. The money multiplier is falling because the economy is in a nasty recession alongside a serious credit crisis. In this environment, the surge of high powered money will not cause prices to rise.Prices normally drop in a recession (deflation) because the demand for money (the ability to purchase goods and services) falls with rising unemployment and declining income (slackening demand for goods and services). But the 2008 recession is accompanied (or partially caused) by a credit crisis that induces banks to hoard the new base as excess reserves; this adds to the deflationary pressures. If deflation were to become embedded into consumer and firm expectations, then the macroeconomy could be facing a severe problem. So for now, and until the economy emerges from its recession, QE will not lead to inflation.
But what happens when the economy rebounds? Inflation becomes a serious risk if the Fed does not extract the high powered money. If the Fed gets it wrong, or its timing is off, then the money supply will rise quickly as banks start to lend more freely, and inflation results.
In the US’ case, I see the Fed getting it wrong as a serious risk to price stability (rising inflation). American consumers are not savers and love to spend; and although some suggest that the American saving behavior has changed, the evidence is far from concrete. Unless saving rises permanently – the economy transitions to a world where consumption is less than 70% of GDP – consumers will be more than happy to swoop up the new bank lending and spend that new easy money.
Quite frankly, I am not sure the Fed can get us out of this one. Money multipliers are plummeting and credit is just not increasing. Meanwhile, the real economy is falling off a cliff, meaning more loans will sour. How does the Fed believe it can increase lending in that environment? I’m sorry — deleveraging will continue apace.
But, what if I am wrong? What if the Fed can reflate the economy? Well, then we have to worry very seriously about the huge amounts of money sloshing around the system. If things get back in gear, inflation is going to be a very big problem. We can only hope this is a problem the Fed can handle. But, for most of us, this is the problem which we would rather have.
Quantitative easing – Wikipedia
All of the buzzwords in one post: quantitative easing, inflation and printing money – News N Economics
Remarks by Governor Ben S. Bernanke Before the National Economists Club, Washington, D.C.
November 21, 2002: Deflation: Making Sure “It” Doesn’t Happen Here – Federal Reserve Board | <urn:uuid:53bd7e52-1bc6-4575-b808-4ac685ea9cdf> | CC-MAIN-2015-35 | https://www.creditwritedowns.com/2008/11/quantitative-easing-printig-money-like-mad-to-ward-off-deflation.html | s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440644062760.2/warc/CC-MAIN-20150827025422-00218-ip-10-171-96-226.ec2.internal.warc.gz | en | 0.961825 | 3,326 | 2.796875 | 3 |
HomeworldEvacuation Homeworld Evacuation YKTTW Discussion
A mass exodus from the homeworld for the purpose of escaping disaster.Description Needs Help
Earth (or another species' homeworld) has fallen victim to some disaster and is no longer inhabitable. But fear not, we managed to get some people off before it was too late. For some reason this seems to be the sole reason for interstellar colonization in most "mainstream" sci-fi. Always results in a case of Earth That Was, though in some cases of that trope the space colonies existed before the disaster.
ExamplesAnime and Manga
- Eureka Seven takes place over 10,000 years after Mankind was forced to leave the Earth due to an unintentionally harmful alien life. Somewhere along the line, all of Humanity found a new place to settle down, completely forgetting and/or unaware that they just went back to Earth and live on a new surface that was created by the aliens. The real, perfectly inhabitable Earth, lay below the surface.
- Dragon Ball Z: When the Namekian homeworld is set to explode by one of Freiza's attacks, a wish is made on the dragon balls to save their lives by evacuating all the Namekians to Earth.
- The Authority do this in one arc, evacuating the planet's entire population so as to cause less collateral damage when fighting a superpowered villain.
- WALL•E - Humanity relocates to ships like the Axiom for 700 years.
- In Titan A.E., the alien Drej vaporize Earth in the opening sequence while refugee ships try to escape.
- In Battle for Terra, Earth, Mars and Venus have all been blown up, so humans go to the eponymous planet to find a new home.
- The movie Oblivion (the new one) has humanity attempting to escape a ruined Earth that has been wrecked during an Alien Invasion war, settling a new colony on Titan. Of course, this is a big damn lie, but saying any further details is a massive spoiler.
- In the live-action Transformers film Cybertron was destroyed in the Autobot / Decepticon war and they search for the Allspark in order to rebuild it.
- In Last and First Men the Fifth Men migrate to Venus when the Moon (destabilized millions of years earlier in the Martian/Second Men war) starts to crash into the Earth. And the eighth men design the ninth to colonize Neptune when the sun expands to cover the Inner System. But eventually the sun goes nova too quickly for the Eighteenth Men to devise a means of escaping to another system, though they do manage to send out "seeds" of life that might eventually evolve into new humans.
- In the "The Homecoming Saga", a series of 5 novels by Orson Scott Card, the Earth was rendered uninhabitable by human wars, and mankind departed for Harmony, as well as at least forty other planets.
- In Walter Jon Williams' Aristoi, Earth was destroyed by Grey Goo, leading to the reorganization of society into essentially a confederation of feudal states, with each state's leader (the Aristoi in question) being the only ones allowed to use nanotechnology freely.
- Arthur C. Clarke's Rescue Party has aliens coming to Earth in order to try saving at least a few humans before the Sun goes nova. In the end, it turns out the humans built a fleet and left already.
- Robert Sheckley has an amnesiac human waking up on a starship, apparently the last survivor after a nova. The ending reveals he serves as a Neuro-Vault for the entire humanity.
- Gianni Rodary has two short stories about aliens from a threatened planet settling on Earth.
- Isaac Asimov's The Currents of Space end with a planet (not Earth) being evacuated - its sun is about to go nova.
- Happens twice in the Noon Universe:
- The premise of Space Mowgli is that Terrans intend to evacuate the Human Aliens of the planet Panta, whose sun is about to explode, to the planet Ark (named after Noah's Ark).
- In Beetle in the Anthill, the scientists speculate that the entire population of Hope (the planet Abalkin explores in the flashbacks) was evacuated to another (unknown) planet by the Wanderers.
- The Insects From Shaggai (AKA Shan) in Ramsey Campbell's Cthulhu Mythos stories. When their home planet was destroyed by a Mythos abomination, some of them fled to a succession of other planets, finally ending up on Earth.
- In the Isaac Asimov book Robots and Empire which links his Robots series to the Foundation series, a robot causes/allows a radioactive explosion which will slowly poison Earth, forcing the population to expand out into space.
- Somewhat inverted in Battlestar Galactica (Classic) and Battlestar Galactica (Reimagined), the twelve colonies of Kobol are being evacuated and searching for Earth, which is the "lost" thirteenth colony.
- A surprisingly consistent point of future history in Doctor Who foretells the mass evacuation of Earth around the thirtieth century, to avoid solar flares. The Eleventh meets the Starship UK in "The Beast Below", but it comes up in other episodes as well. In addition the fourth Doctor encounters a wheel-type space station full of sleepers in The Ark In Space, which is set 20,000 years in the future implying that it happens more than once.
- The National Geographic special Evacuation Earth has as its premise a wayward neutron star heading towards Earth and attempts to build a Generation Ship to take 250,000 people to Bernard's Star.
- In Defiance the Votan Collective came to Earth when a supernova destroyed their home system.
- On the Planet of the Week in the Stargate SG-1 episode "Lifeboat" SG-1 finds a crashed Sleeper Starship built by a human society called the Talthuns, who had evacuated as many people as possible before their planet was destroyed by a coronal mass ejection caused by a "dark star".
- In the Babylon 5 episode "The Deconstruction of Falling Stars", which flashes forward to different eras of humanity's future after the show's time frame, it is shown that humans evacuate Earth one million years in the future, before an impending mysterious artificially-induced nova explosion of Sol.
- Star Trek: The Original Series episode "The Empath". The star of the Minarva system is about to go nova. A group of highly advanced aliens known as the Vians can save the population of only one of the planets in the system. They decide to determine which planet's population will be saved by putting a member of each population through a Secret Test.
- During The Fall in Eclipse Phase millions of people uploaded their Egos to the sparsely-populated colonies in other parts of the solar system (uploading being the easiest means of space travel), while some others crowded the Space Elevators and crammed aboard ships.
- Classic Traveller, Double Adventure "The Chamax Plague/Horde". In the Back Story, the alien population of a planet was close to being wiped out by a Super-Persistent Predator species called the Chamax. They decided their only chance was to build a fleet of Sleeper Starships to carry all of the remaining aliens to other star systems.
- BIONICLE: In the Kingdom Alternate Dimension, Matoro fails to revive the Great Spirit Mata Nui, leaving the Matoran Universe in danger and prompt mass exodus onto the island of Mata Nui. Not all beings made it safely, but while many made it to the island, it is only a temporary refuge. The survivors settled quickly and also planned on how to leave the island for the stars.
- Homeworld: The Mothership was meant to be a colony ship before the Kushan even were aware of the threat to their existence, but when the Taiidan incinerated Kharak's atmosphere it became necessary to their survival as a species.
- Earth, or Lost Jerusalem as it's called, is referred to often in Xenosaga. Humans had to leave it because of a mysterious space-time disturbance. Its location has been long lost. At the end of the third game a chunk of the party goes off searching for it, and we're left wanting another sequel.
- In Outpost 2 the human race has fled from an asteroid-doomed Earth. The plot of the game revolves around the earthling colonists of a new planet and how they destroy themselves all over again.
- Another non-Earth example: The D'ni of the Myst series originated on a world called Garternay, which became uninhabitable when its sun began growing dim. Their ancestors fled into a succession of other worlds via their linking books, and have since lost all contact with their abandoned homeworld.
- In SimEarth, if the sapient civilization develops past the "nanotech age", an event called "the exodus" is triggered. All cities, regardless of tech level, are fitted with engines and take off into space. The planet is declared a preserve and left alone, possibly allowing a new sapient species to evolve. The motivation for the exodus is unclear.
- In the RTS Earth2150, this is the ultimate goal of all three factions, on account of the imminent Earth-Shattering Kaboom. Throughout the campaigns, you not only have to complete missions to cripple your rivals, but stockpile your excess resources in order to build a colony ship to carry you to another world before the countdown expires.
- In Mass Effect:
- This is a common last resort for species attempting to survive a Reaper invasion. Of particular note is a Side Quest to save the Elcor, whose homeworld is being assaulted, in Mass Effect 3. However, this only serves to delay the inevitable, because the Reapers are patient enough to spend centuries exterminating every last trace of all sapient life, no matter where they hide.
- Before the story proper, the quarian race escaped their homeworld to avoid being exterminated by the robot race they created, the geth. Worthy of note is that the geth allowed them to leave to avoid committing genocide.
- Also before the series proper, the drell were rescued from an overpopulated homeworld by the hanar, and felt indebited enough to work as their agents, laborers and assassins, of their own free will.
- In Orion's Arm Old Earth suffered a Grey Goo outbreak known as the Nanodisaster, but that's not why it was evacuated. The outbreak was nullified by an AI named GAIA and e decided humans were the worst threat to Earth so e told us to leave, before e sicced eir nanoswarms on us. E was considerate enough to build a fleet of ships first though.
- The Simpsons Treehouse of Horror has 2 ships to leave the earth with various stars. One goes to Mars, and the other to the Sun.
- A staple of the Transformers series.
- In Generations 1, the Autobot evacuate Cybertron as it has run out of Energon to support life.
- In Transformers Prime, the Autobots and Decepticons fled Cybertron when it became unsuitable to support them. | <urn:uuid:5f1d17e5-3dce-4999-b289-a9f18a299f60> | CC-MAIN-2015-35 | http://tvtropes.org/pmwiki/discussion.php?id=qcdsk9zpdx2ivje8zejlxe54&trope=HomeworldEvacuation | s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440645371566.90/warc/CC-MAIN-20150827031611-00220-ip-10-171-96-226.ec2.internal.warc.gz | en | 0.945414 | 2,362 | 2.671875 | 3 |
Arthur's Seat, Edinburgh
Millions of visitors huff and puff their way up Edinburgh's Arthur's seat and the Crags every year... but how many have actually completely missed these delightful little ruins?
St. Anthony's Chapel stands on a rocky outcrop high above St Margaret's Loch, and is a fantastic place to enjoy excellent views over North Edinburgh, Leith and the River Forth. Instead of following the path up to the summit, keep to your left and you'll find it easily.
Surprisingly little is known about the origins of St Anthony's Chapel, but it does seem very likely that the chapel was closely associated with Holyrood Abbey, which stood just a few hundred yards away to the north west. The two were linked by a well-made stone track (now heavily worn) with kerbstones that can still be seen in some places, and about three quarters of the way along this track up to the chapel is the spring and carved stone bowl known as St. Anthony's Well.
It has also been suggested that the chapel served as a sort of religious beacon, designed to be clearly visible to sea-borne pilgrims coming to Holyrood Abbey as they sailed up the River Forth.
As so little is known about the chapel, it's quite difficult to determine its exact age but the building could date back into the 1300s or beyond. Details of its demise are equally unclear, but presumably, like Holyrood Abbey itself, St. Anthony's Chapel fell into disuse and disrepair after the Reformation in 1560.
The chapel ruins are easily reachable by able walkers with good shoes, and serve as a great place to relax and enjoy some quiet moments away from the crowds of Edinburgh.
--> You can walk down to St. Margret's Loch from here, but beware that it's a very steep descent (and slippery!). We walked down here after heavy rains to find that the loch was actually starting to overflow... so we got wet feet, haha!
At the foot of Arthur's Seat is Holyrood Park... this was originally built as a hunting reserve in the 12th century. St Margaret's Loch is a shallow man-made loch to the south of Queen's Drive.
The loch was formed in 1856 as part of Prince Albert's improvement plans for the area surrounding the palace. The loch has been used as a boating pond but is now home to a very large population of ducks, geese and swans... who are - by the way - neither shy nor afraid or people, let me tell you!
There are always many tourists and local families walking around the park, and many bring some bread to feed the substantial resident avian population.
On the day we visited, we saw a mother give her 3-year old son a bag of bread. Whilst he was struggling to open the bag, the birds took notice and suddenly the entire loch was literally on the move... towards him!
Swans jumped ashore and waddles towards him... ducks came charging past the swans... and seagulls were dive-bombing the little one from above. All of this commotion was obviously too much for the boy so he just dropped the bread bag and ran away, screaming! His mother and many bystanders (including my husband and I) thought this was just too funny... after all: no human children or birds were harmed during this greedy encounter, haha!
If you are lucky enough to be able to climb Athur's Seat, which is the plug of an old volcano, you can record spectacular views of Edinburgh and the surrounding countryside. I take my visitors as high up as I can via the trusty Renault, so they can hop out and snap a few memorable shots. I would suggest that you look at atypic's page as he and his lovely wife Pascale actually climbed to the top. That man is FIT lololol
Look at the website if you want to be here virtually.
A great way to see Edinburgh is to take a walk up to Arthur's Seat (and go to the top!). Arthur's Seat and Salisbury Crag's are both situated in Holyrood Park to the east of the city (you really can't miss them).
The Salisbury Crag's sit at the front with Arthur's Seat just behind and a bit taller at 823ft aboce sea level. This is a great walk/hike to take and I really recommend it. Just remember that it will be fairly cool and windy at the top. And also remember to wear sneakers (or appropriate footwear) and take a bottle of water with you, and maybe even some lunch to eat at the top!
See my travelogue on Arthur's Seat for more information and photos.
Well the sun came out just as I was about to make my way to this hill and what a lovely evening I spent there. There is a bitumen road to the top or you can make like a goat and zig zag your way up.
I tried to walk it but ran out of time and had to come back. It is a good idea to take a water bottle with you as well as a snack and a warm coat. It get very windy!
I climbed all the way up Arthur's Seat via a path beginning near Hollyrood Castle. After close to an hour I met more and more people. There is a car park on the far side of the hill, almost at the top! Those lazy people just drive up. It's much more worth it to walk cuz you really appreciate it.
I took the long way down, along a ridge. That would have been a challenging climb!
This is Arthur's Seat, the highest protrusion rising above Edinburgh in Holyrood Park, as seen from my son's kitchen window on South Oxford Street.
For some more views, please see my Views of Edinburgh travelogue.
Arthur's Seat is a 250m height remains of a volcano right in the middle of Edinburgh and very close to the holyrood palace.
You can walk in the area and also you can see Arthur's Seat from many places in the town.
Arthur's Seat is a fun hike on an easy zig-zagging trail, but the closer you get to the top, the stronger weather you will have. We almost made it to the top, but the wind and hail and the fact that no one else was around told us that we should probably head back down. Be prepared for colder temperatures as well. In June, I wore a t-shirt, medium-weight rain coat, and a hat and was on the chilly side of comfortable. Arthur's Seat is definitely worth doing if you're in Edinburgh. I found it a magical experience.
I don't think many people go up to Arthur's Seat. This is where the Edinburgh observatory sits and where you can get a beautiful top view of the city.
I recomment to go there especially during the sunset. It rewards visitors witha panoramic and uniquely red coloured view of the city.
You can get there either by walking or by car.
I would assume that only one out of every 25 tourists actually has the motivation to make the climb up to Arthur's Seat. I believe this is a dead volcano, but nonetheless it is a large hill and the highest point in Edinburgh.
If you park your car at the Palace of Holyrood House and cross the road heading towards the hills you can climb up to Arthurs Seat. This deeply eroded remnant of a long extinct volcano is part of Holyrood Park. Arthrus Seat is 251 metres high and it gives wonderful views over the city and over the Firth of Forth with its 2 bridges. Holyrood Park was the former hunting grounds of the Scottish Monarches and it is 263 hectares of varied landscape in the busy city.
It really is a fantastic walk, you follow the footpath to near St. Anthony's Chapel (which is a ruin) and then head up the well marked path to link with the paths coming from the east side of the hill (where Dunsapie Loch is). At this point several paths come together where a new path has been constructed. From there you continue over the volcanic rock to the summit of Arthur's Seat. Just follow the people climbing up the steps or along the path you cannot miss it and although it can be quite busy it is not too much and the chat with other walkers does help you when climbing the steeper bits!!
Alternatively you can park at Dunsapie Loch and approach the summit from the east along either of the two obvious paths starting at the car park. This is an easy stroll that takes only 15 minutes to reach the summit. We walked alongside the loch on our way back to Holyrood House.
A great thing to do that is slightly off the beaten track is to take a vigorous walk to the top of Arthur's Seat. Arthur's Seat is volanic crag that erupted millions of years ago. It can get quite windy or rain at any time(which is normal in Scotland!) so take a coat or simliar. It is a very romantic place to take a picnic lunch. The views of Edinburgh and the surrounding countryside are awesome...especially the flashes of yellow in the distant rape fields. I have also seen people rock-climbing there.
Arthur's Seat and Salsbury Craggs was used as a hunting ground for the Scottish Kings. Arthur's Seat is an extinct volcano. It only takes about one to two hours to climb to the top! The view from up there is tremendous. It's a good idea to go their on your last day in Edinburgh as a last sceney, while you can relax and enjoy the panoramic views of this beautiful city.
Lucky Edinburgh - it has a mountain in the middle of the city, well okay, a hill actually but it only takes about half an hour to walk up and the views are fantastic.
More views can be had even closer to town by going up Carlton Hill, just past the East end of Princes Street.
Another good walk is from Stockbridge, along the water of Leith to Dean Village and then on to the Gallery of Modern Art Cafe (Belford Road) for coffee and cake! | <urn:uuid:450207f3-75d5-4aec-a3d1-859e01364476> | CC-MAIN-2015-35 | http://www.virtualtourist.com/travel/Europe/United_Kingdom/Scotland/Lothian/Edinburgh-313508/Off_the_Beaten_Path-Edinburgh-Arthurs_Seat-R-1.html | s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440644064445.47/warc/CC-MAIN-20150827025424-00341-ip-10-171-96-226.ec2.internal.warc.gz | en | 0.97488 | 2,120 | 2.5625 | 3 |
Obstetric complications cannot be predicted or prevented but can be managed by timely provision of life saving services. Emergency Obstetric Care is defined as a set of critical life saving functions commonly called signal functions provided by a health facility, 24 hours a day, 7 days a week. Among the obstetric complications, many can be dealt by basic EmONC services and few will need comprehensive EmONC facilities, while a majority of the newborn emergencies may also be dealt with at the basic EmONC level.
Definition of Emergency Obstetric and Newborn Care Services:
The provision of Basic EmONC services includes but is not limited to: intravenous and intra-muscular administration of drugs such as antibiotics, oxytocin and anticonvulsants; assisted vaginal delivery; manual removal of placenta; manual removal of retained products of an abortion or miscarriage; and stabilization and referral of obstetric emergencies not managed at the basic level. In terms of newborn emergencies, the required services at the basic EmONC level include management of neonatal infection, very low birth weight infants, complications of asphyxia and severe neonatal jaundice, (skills and supplies for intravenous fluid therapy, thermal care including radiant warmers, Kangaroo Mother Care, oxygen, parenteral antibiotics, intragastric feeding, oral feeding using alternative methods to breast feeding and breast feeding support.
The provision of Comprehensive EmONC services includes all of the services provided at the basic level, plus cesarean section, blood transfusion services, and newborn special care at the advanced level, such as intensive care neonatology units.
The current status of under-five child mortality is 105 per 1,000 live births. It has shown a steady albeit not rapid improvement over the years. The major causes of death in children under 5 are perinatal causes (18%), Diarrhea (18%), ARI (18%), Measles (7%), Malaria (6%) and others (32 %). Thus capacity building of health care workers in management of above conditions is planned in this activity. Ministry of Health has approved and adopted National Child Survival Strategy which clearly defines all interventions required at home, referral and facility level for promoting growth and development of a healthy child and optimal care of sick child. The recommendations of National Survival Strategy for purpose of this PC-1 have been incorporated in different components (EmONC, essential newborn care, Immunization plus, nutrition counseling and IMNCI).
New Born Care:
A recent compilation of available information on the pattern of newborn deaths and IMR in Pakistan indicate that while there has been steady decline in IMR over the last 20 years corresponding with the implementation of the “child survival” programs there has not been a proportional decline in newborn mortality.
Infections, birth asphyxia, and preterm/low birth weight account for 86 percent of newborn deaths, and most occur during the first hours and days of life. Researchers estimate that approximately 70 percent of newborn deaths could be averted through the use of proven, cost-effective interventions. These interventions are surprisingly simple, ranging from ensuring clean delivery, to treating infections with antibiotics, to promoting immediate and exclusive breastfeeding. As outlined in The Lancet Series on Newborn Survival, just strengthening family community and outreach services, including health education to improve home care practices and preventive services such as tetanus vaccination coverage, can reduce newborn deaths by up to 40 percent.
As a result of various studies it is shown that in our communities across the nation the main risk factors identified being responsible for a high mortality and morbidity during newborn/perinatal age are:
There is sufficient evidence available that introducing integrated newborn packages at clinical, outreach and community level brings down newborn mortality by 40%, 25% and 30% respectively.
Emergency newborn care:
Improved care of ill babies especially infections, complications of preterm birth and of birth asphyxia could be provided as a part of Comprehensive EmONC services in selected facilities identified in the National MNCH Program. In addition, newborn resuscitation and immediate newborn care protocols already developed under Women’s Health Project will be provided to 7000 labor rooms in public sector and 3000 maternity homes in the private sector.
Community based low cost and low tech interventions will be scaled up through LHWs and CMWs. This would include Community IMNCI, Community Case Management and Behavior Change Communication.
Integrated Management of Newborn & Childhood Illnesses (IMNCI):
The Government of Pakistan is fully committed to implementation of the IMNCI strategy in Pakistan and a pilot project is implemented since 1998. This is approach is also present in National Child Survival Strategy with emphasis on diseases causing most of the child mortality i.e. Diarrhea and ARI. Now with the experience gained from implementation of the pilot IMNCI program the decision is to increase coverage to the entire country. Out of the three components of IMNCI: the first component of improving the skills of the health workers at the facilities and the second component of improving access and referral will be covered under this PC1. The third component of community IMNCI will be covered by the National Program for Family Planning and Primary Health Care as it has the necessary workers available in the field to support implementation of the strategy.
Creation of IMNCI Task Force
The commitment to implement IMNCI will be formalized and strengthened through the Technical Advisory Group. Strategic guidelines for advocacy and program implementation will be prepared and be part of district social sector plans. The activities will be coordinated by the DPC program under supervision of Program Manager MNCH.
The steering committees of MNCH will review implementation and achievements annually.
The infrastructure at the DHQ hospitals has sufficient capacity to enable provision of EmONC services, however the state of repair of the buildings is open to question. Therefore under this program a systemic effort to repair all the labor rooms, newborn care units, Maternity and Child wards will be undertaken. All the DHQ hospitals will be provided with funds for repair and maintenance. The amount has been estimated at an average cost of Rs. 1.2 million per DHQ Rs. 1.0 million per THQ providing Comprehensive EmONC services, Rs 0.6 million per THQ, Rs. 0.4 million per RHC providing Basic EmONC services. In addition Rs 0.3 million per RHC and 0.1 million per BHU for repair of residential accommodations for female staff. This amount can be reallocated within the district by the provincial MNCH cell based on a proposal having a detailed needs assessment presented by the EDO Health in consultation with the district government. On the other hand it would not be feasible or practical to start construction of new hospitals / health facilities with these funds and as such they will only be used for strengthening the existing health infrastructure. Funds for repair of RHC are already available with Punjab under the Health sector reform program and therefore these are not duplicated.
The hospitals in Punjab, Sindh and NWFP have a majority of equipment available for MNCH and therefore only some additional equipment will be provided to these hospitals. On the other hand, Balochistan has very limited capacity at the DHQ hospitals and thus the nine hospitals will be targeted for strengthening at the regional level. The THQs hospitals will be dealt with on a case by case basis. It is also proposed to provide these hospitals with incinerators to dispose of hospital waste through the Hepatitis B program. However for chemical disposal of hospital waste the recurrent costs shall be met from the regular budget of the hospital. All hospitals will need to be equipped with laboratory support, X-ray, Blood Bank, Operation Theatre and Anesthesia. The equipment package proposed for these hospitals is given on page117. The attached list covers all the essential equipement for DHQ/THQ hospitals for comprehensive EmONC services. Similarly, it is assumed that majority of the districts and tehsils hospitals would not require a complete set of equipment as it is available from the regular provincial budget and other sources. However, a lupsum amount for equipment for comprehensive EmONC services is allocated in this PC-1 for the provinces. Moreover, Piaman project is providing comprehensive EmONC equipment in 10 DHQ’s, 10 THQ’s and 10 RHC’s in their designated districts. UNFPA is also providing all the essential equipment to DHQ, THQ and RHC’s in their own designated districts, equipment for these health facilities for comprehensive EmONC services is not costed in the MNCH PC-1..
Similarly, the hospitals will conduct a review of available equipment in comparison with the list of equipment proposed and categorize it into three parts, available, repairable and new required. This exercise should take maximally six months to complete and the detailed compilation of this information should be available at the Provincial MNCH Cell directorate within 9 months of launch of the program.
The Federal MNCH PIU will conduct a standardization exercise and finalize the specifications of the equipment and issue a call for tenders. The procurement committee will conduct the tendering process and issue rate contracts for the equipment. The supply orders will be issued by the provinces/ districts as per requirement.
The equipment will be provided under warranty and service contract will be made with the supplier to perform at least one maintenance visit every four-six months. Provision has been made for service contracts for electrical equipment.
Newborn Care units at health facilities providing Comprehensive EmONC:
Newborn care units would be added to all facilities providing comprehensive and basic emergency obstetric care services, preferably near to the labour room and maternity ward. This will be done simultaneously with the renovation/ construction under establishing 24/7 comprehensive emergency obstetric care services in the first year of the project. All the facility staff handling deliveries would be trained in essential newborn care. However, for emergency newborn care specialized units would be established with adequate staff and equipment. Staff would be given specialized training for the purpose and will be permanently deployed in the unit rather than on rotation (especially the nursing staff).All facilities providing comprehensive EmONC services will have functional newborn units. Once the district has conducted a situation analysis of availability of staff, equipment and space at the health facility, a proposal to establish the newborn units shall be sent to the provincial MNCH Directorate/Cell.
The newborn unit will require minimally the presence of a pediatrician, one MO/WMO specifically for the unit in addition to at least two staff nurses to run the unit (included in the minimum staff requirement for 24/7 EmONC services).
Establishment of newborn care units would require necessary renovation and construction work. Standard designs and specifications would be developed by an experts committee and followed in all districts and facilities. The proposed equipment and supplies for these units would be procured as required. A list of equipment is provided on page 122.
- Strengthening the THQ Hospital
The provision of comprehensive EmONC services in the hospital requires a functioning MNCH wing comprising minimally of a labor room, operation theater, labour ward, gynecology ward, and intensive care unit; in addition to a newborn intensive care unit, child intensive care unit and children ward. The THQ hospitals shall be provided with a package of equipment based on the need.
Human Resource needs:
The DHQ/THQ hospitals already have an existing structure that can be utilized to improve service provision including availability of posts, operation theaters and indoor facilities; however these are sub optimally staffed. The minimum staffing requirement for comprehensive emergency obstetric and newborn care is outlined below. This number is essential to ensure 24hours a day 7days a week services.
The package of basic EmONC services is designed for facilities with less human resource and infrastructure available but is serviced by an ambulance service and has the provision of transferring the patient to a higher level facility providing comprehensive EmONC services if required.
The provision of basic EmONC services in the hospital requires a functioning MNCH wing comprising minimally of a labor room, operation theater, labour ward, and gynecology ward. In addition at the THQs a newborn intensive care unit, child intensive care unit and children ward will be provided. The hospitals THQ/RHC shall be provided with funds for repair/ renovation of the health facility and not for new construction. The allocation will be decided along side the allocation for comprehensive EmONC facilities. Most of the RHCs have provision for 20 beds for inpatients and have an Operation Theater and X-ray. The provision of services will not require new civil works and there will be a need for some renovation at these RHCs.
Provision of equipment and supplies to meet requirement for basic EmONC package including Laboratory support and functioning of minor OT, Supplies (contraceptives, medicines, IMNCI package of medicines, Basic newborn care kit, clean delivery kits, and basic equipment)
Human Resource needs:
There is a complement of staff posts sanctioned at the THQ and RHC. The provision of Basic EmONC services at these facilities will not entail deployment of additional staff and the existing posts can be utilized with the provision of an incentive from the program.
All the RHCs shall be strengthened to provide 24/7 Basic EmONC services, for this purpose there is already a complement of staff available at the RHC and this will be supplemented to enable service provision for 24 hours through existing posts in the health system. The proposal is to place an additional WMO and LHV at these health facilities to improve the availability of staff and allow for 24 hours coverage for Basic EmONC.
It is Assumed that the BHU’S are being strenghtned under respective Health Sector Reforms in the provinces, which are already scaling up MNCH activities. BHUs are expected to be equipped through the regular health budget of the province to provide preventive obstetric care services and for this the primary ingredient is availability of LHV at the BHUs, while incentive for the LHV aready posted will be provided from the MNCH program as well as renovation of the residence of the LHV at the BHUs. These BHUs can be linked with the CMWs and LHWs to promote institution based delivery. There will be a provision of providing performance based incentives at these BHUs; the performance measures shall include the number of patients seen for Antenatal care, ANC visits per pregnant woman, Post natal visits, number of normal deliveries assisted, number of cases of complicated pregnancy referred to higher level, proportion of children immunized in the catchment area, ORS distribution, FP clients in catchments area.
Medicines, IMNCI package of medicines, Basic newborn care kit, clean delivery kits, and basic equipment) will be provided to identified BHUs. | <urn:uuid:cf3ea597-be4f-4763-bb5e-9c800bd1ceb3> | CC-MAIN-2015-35 | http://dynasoft.org/mnch/eoc.php | s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440645235537.60/warc/CC-MAIN-20150827031355-00212-ip-10-171-96-226.ec2.internal.warc.gz | en | 0.930788 | 3,047 | 2.703125 | 3 |
Frances Oldham Kelsey
|Frances Oldham Kelsey|
|Born||Frances Kathleen Oldham
July 24, 1914
Cobble Hill, British Columbia, Canada
|Died||August 7, 2015
London, Ontario, Canada
|Alma mater||Victoria College, British Columbia
University of Chicago
|Known for||Preventing thalidomide from being marketed in the United States|
|Spouse(s)||Fremont Ellis Kelsey (m. 1943, died 1966)|
Frances Kathleen Oldham Kelsey, CM (July 24, 1914 – August 7, 2015) was a Canadian pharmacologist and physician. She was most famous as the reviewer for the U.S. Food and Drug Administration (FDA) who refused to authorize thalidomide for market because she had concerns about the drug's safety. Her concerns proved to be justified when it was shown that thalidomide caused serious birth defects. Kelsey's career intersected with the passage of laws strengthening FDA oversight of pharmaceuticals. Kelsey was the second woman to be awarded the President's Award for Distinguished Federal Civilian Service by President John F. Kennedy.
Birth and education
Born in Shawnigan Lake on Vancouver Island, British Columbia, Kelsey was graduated by St. Margaret's School high school at age 15, and attended Victoria College, British Columbia (1930–1931) in Victoria, British Columbia (now University of Victoria). She then enrolled at McGill University to study pharmacology. At McGill, she received both a B.Sc.(1934) and a M.Sc.(1935) in pharmacology, and "on [a] professor's urging, wrote to EMK Geiling, M.D., a noted researcher [who] was starting up a new pharmacology department at the University of Chicago, asking for a position doing graduate work". Geiling presumed that Frances was a man and offered her the position. Kelsey accepted and began working for Geiling in 1936.
During her second year, Geiling was retained by the FDA to research unusual deaths related to elixir sulfanilamide, a sulfonamide medicine. Kelsey assisted on this research project, which showed that the 107 deaths were caused by the use of diethylene glycol as a solvent. The next year, the United States Congress passed the Federal Food, Drug, and Cosmetic Act of 1938. That same year she completed her studies and received a Ph.D. in pharmacology at the University of Chicago. Working with Geiling led to her interest in teratogens, drugs that cause congenital malformations.
Early career and marriage
Upon completing her Ph.D., Kelsey joined the University of Chicago faculty. In 1942, like many other pharmacologists, Kelsey was looking for a synthetic cure for malaria. As a result of these studies, Kelsey learned that some drugs are able to pass through the placental barrier. While there she also met fellow faculty member Dr. Fremont Ellis Kelsey, whom she married in 1943.
While on the faculty at the University of Chicago, Kelsey was awarded her M.D. during 1950. She supplemented her teaching with work as an editorial associate for the American Medical Association Journal for two years. Kelsey left the University of Chicago in 1954, decided to take a position teaching pharmacology at the University of South Dakota, and moved with her husband and two daughters to Vermillion, South Dakota, where she taught until 1957.
She became a dual-citizen of Canada and the United States in the 1950s in order to continue practicing medicine in the U.S., but retained strong ties to Canada where she continued to visit her siblings regularly until late in life.
Work at the FDA and thalidomide
In 1960, Kelsey was hired by the FDA in Washington, D.C. At that time, she "was one of only seven full-time and four young part-time physicians reviewing drugs" for the FDA. One of her first assignments at the FDA was to review an application by Richardson Merrell for the drug thalidomide (under the tradename Kevadon) as a tranquilizer and painkiller with specific indications to prescribe the drug to pregnant women for morning sickness. Even though it had already been approved in Canada and more than 20 European and African countries, she withheld approval for the drug and requested further studies. Despite pressure from thalidomide's manufacturer, Kelsey persisted in requesting additional information to explain an English study that documented a nervous system side effect.
Kelsey's insistence that the drug should be fully tested prior to approval was vindicated when the births of deformed infants in Europe were linked to thalidomide ingestion by their mothers during pregnancy. Researchers discovered that the thalidomide crossed the placental barrier and caused serious birth defects. She was hailed on the front page of The Washington Post as a heroine for averting a similar tragedy in the U.S. Morton Mintz, author of The Washington Post article, said "[Kelsey] prevented… the birth of hundreds or indeed thousands of armless and legless children." Kelsey insisted that her assistants, Oyam Jiro and Lee Geismar, as well as her FDA superiors who backed her strong stance, deserved credit as well. The narrative of Dr. Kelsey's persistence, however, was used to help pass rigorous drug approval regulation in 1962.
After Morton Mintz broke the story in July 1962, there was a substantial public outcry. The Kefauver Harris Amendment was passed unanimously by Congress in October 1962 to strengthen drug regulation. Companies were required to demonstrate the efficacy of new drugs, report adverse reactions to the FDA, and request consent from patients participating in clinical studies. The drug testing reforms required "stricter limits on the testing and distribution of new drugs" to avoid similar problems. The amendments, for the first time, also recognized that "effectiveness [should be] required to be established prior to marketing." The new laws were not without controversy.
As a result of her blocking American approval of thalidomide, Kelsey was awarded the President's Award for Distinguished Federal Civilian Service by President John F. Kennedy, becoming the second woman to receive that award. British Pathé released a film of Kennedy acknowledging Kelsey in a speech. After receiving the award, Kelsey continued her work at the FDA. There she played a key role in shaping and enforcing the 1962 amendments. She also became responsible for directing the surveillance of drug testing at the FDA.
Later life and death
Kelsey continued to work for the FDA while being recognised for her earlier work. She was still working at the FDA's Center for Drug Evaluation and Research in 1995 and was appointed deputy for scientific and medical affairs. In 1994, the Frances Kelsey Secondary School in Mill Bay, British Columbia was named in her honour. She retired in 2005.
In 2010, the FDA presented Kelsey with the first Drug Safety Excellence Award and named the annual award after her, announcing that it would be given to one FDA staff member annually. In announcing the awards, Center Director Steven K. Galson said “I am very pleased to have established the Dr. Frances O. Kelsey Drug Safety Excellence Award and to recognize the first recipients for their outstanding accomplishments in this important aspect of drug regulation.”
Kelsey turned 100 in July 2014, and shortly thereafter, in the fall of 2014, she moved from Washington, D.C., to live with her daughter in London, Ontario. In June 2015, when she was named to the Order of Canada, Mercédes Benegbi, a thalidomide victim and the head of the Thalidomide Victims Association of Canada, praised Dr. Kelsey for showing strength and courage by refusing to bend to pressure from drug company officials, and said “To us, she was always our heroine, even if what she did was in another country.”
Kelsey died in London, Ontario, on August 7, 2015 at the age of 101, less than 24 hours after Ontario’s Lieutenant-Governor, Elizabeth Dowdeswell, visited her home to present her with the insignia of Member of the Order of Canada for her role against thalidomide.
Legacy and awards
- 1962 • President's Award for Distinguished Federal Civilian Service
- 1963 • Gold Key Award from University of Chicago, Medical and Biological Sciences Alumni Association
- 2000 • Inducted into the National Women's Hall of Fame
- 2001 • Named a Virtual Mentor for the American Medical Association
- 2006 • Foremother Award from the National Research Center for Women & Families
- 2010 • Recipient of the first Dr. Frances O. Kelsey Award for Excellence and Courage in Protecting Public Health given out by the FDA
- 2012 • Honorary doctor of science degree from Vancouver Island University
- 2015 • Named to the Order of Canada
- Peritz, Ingrid (November 24, 2014), Canadian doctor averted disaster by keeping thalidomide out of the U.S., The Globe and Mail, retrieved August 7, 2015.
- "Frances Kelsey", Canada Heirloom Series (Heirloom Publishing Inc.), 986, retrieved August 15, 2009.
- Bren, Linda (March–April 2001), "Frances Oldham Kelsey: FDA Medical Reviewer Leaves Her Mark on History", FDA Consumer, archived from the original on October 20, 2006, retrieved August 15, 2009.
- "When Kelsey read Geiling's letter offering her a research assistantship and scholarship in the PhD program at Chicago, she was delighted. But there was one slight problem — one that 'tweaked her conscience a bit.' The letter began 'Dear Mr. Oldham,' Oldham being her maiden name. Kelsey asked her professor at McGill if she should wire back and explain that Frances with an 'e' is female. 'Don't be ridiculous,' he said. 'Accept the job, sign your name, put 'Miss' in brackets afterwards, and go!'" Bren (2001).
- Spiegel, Rachel, Research in the News: Thalidomide, archived from the original on August 22, 2007, retrieved August 15, 2009.
- Simpson, Joanne Cavanaugh (September 2001), "Pregnant Pause", Johns Hopkins Magazine 53 (4), retrieved April 30, 2006.
- Rouhi, Maureen (June 20, 2005), "Top Pharmaceuticals: Thalidomide", Chemical & Engineering News (American Chemical Society) 83 (25), retrieved April 30, 2006.
- "The Story Of The Laws Behind The Labels", FDA Consumer, June 1981, retrieved August 15, 2009.
- Mintz, Morton (July 15, 1962), 'Heroine' of FDA Keeps Bad Drug Off of Market, The Washington Post, p. Front Page. See also Mintz's comments from 2005 on Kelsey.
- Dr. Frances Kathleen Oldham Kelsey, National Library of Medicine, retrieved April 30, 2006.
- McFadden first=Robert (August 7, 2015), Frances Oldham Kelsey, F.D.A. Stickler Who Saved U.S. Babies From Thalidomide, Dies at 101, The New York Times.
- Frances Oldham Kelsey, Chemical Heritage Foundation, retrieved March 23, 2014.
- Prenatal fluoride: The most controversial use of the new 1962 law was a vigorous enforcement that took prenatal vitamins with fluoride off the market. Unlike thalidomide, fluoride was not patented or otherwise owned by anyone, so there was no one to defend it or pay for the newly required well-controlled study. The big money in fluoride is in fluoride deficiency (dental caries). Prenatal fluoride is still illegal. http://raygrogan2-ivil.tripod.com/answersforbabycenterposts/id1.html
- Kennedy, John F. (1962), Remarks Upon Presenting the President's Awards for Distinguished Federal Civilian Service, retrieved May 1, 2006.
- Women of the Hall – Frances Kathleen Oldham Kelsey, Ph.D., M.D., National Women’s Hall of Fame, 2000, retrieved May 1, 2006.
- "President Kennedy Calls For Stronger Drug Laws", British Pathe News, 1962
- Lyndsey Layton (September 13, 2010), "Physician to be honored for historic decision on thalidomide", The Washington Post.
- FKSS History, Frances Kelsey Secondary School, retrieved December 26, 2014.
- "Frances Kelsey, scientist - obituary". The Telegraph. August 11, 2015. Retrieved 11 August 2015.
- Harris, Gardiner (September 13, 2010), "The Public’s Quiet Savior From Harmful Medicines", The New York Times, retrieved January 4, 2011.
- Margaret A. Hamburg, M.D., Commissioner of Food and Drugs – Remarks at the Award Ceremony for Dr. Frances Kelsey.
- Barber, Jackie (November 10, 2005), "Center ceremony honors 107 individuals, 47 groups: Spring event inaugurates Frances Kelsey Drug Safety Award", News Along the Pike (FDA/Center for Drug Evaluation and Research), archived from the original on June 15, 2007, retrieved August 15, 2009.
- McElroy, Justin (July 24, 2014), Canadian scientist Frances Kelsey, who spurred FDA reforms, turns 100, Global News, retrieved July 24, 2014.
- Ingrid Peritz (July 1, 2015), "Doctor who opposed thalidomide in U.S. named to Order of Canada", The Globe and Mail, retrieved July 1, 2015.
- Bernstein, Adam; Sullivan, Patricia (August 7, 2015), Frances Oldham Kelsey, FDA scientist who kept thalidomide off U.S. market, dies at 101, The Washington Post, retrieved August 7, 2015.
- Ingrid Peritz (August 7, 2015), "Canadian doctor who kept thalidomide out of U.S. dies", The Globe and Mail, retrieved August 7, 2015.
- Gold Key Award Recipients, The University of Chicago The Medical & Biological Sciences Alumni Association, retrieved August 14, 2006.
- Geraghty, Karen (July 2001), "Profile of a Role Model – Frances Oldham Kelsey, MD, PhD", Virtual Mentor – American Medical Association Journal of Ethics 7 (7), archived from the original on September 29, 2007, retrieved August 15, 2009.
- 2006 Foremothers Awards Luncheon, National Research Center for Women & Families, retrieved August 15, 2009.
- "FDA honors one of its own". CNN blog. September 16, 2010. Retrieved August 9, 2015.
- "Honorary doctor of science degree from Vancouver Island University", Nanaimo News Bulletin (Black Press, Inc.).
|Wikimedia Commons has media related to Frances Oldham Kelsey.|
- Bren, Linda (March–April 2001), "Frances Oldham Kelsey: FDA Medical Reviewer Leaves Her Mark on History", FDA Consumer, archived from the original on October 20, 2006, retrieved August 15, 2009
- Harris, Gardiner (September 13, 2010), The Public’s Quiet Savior From Harmful Medicines, The New York Times.
- Harris, Steven B. (1992), The Right Lesson to Learn from Thalidomide.
- Kelsey, Frances O. (1993), Autobiographical Reflections (PDF). This was drawn from oral history interviews conducted in 1974, 1991, and 1992; presentation, Founder’s Day, St. Margaret’s School, Duncan, B. C., 1987; and presentation, groundbreaking, Frances Kelsey School, Mill Bay, B. C., 1993.
- Mintz, Morton (1965), The therapeutic nightmare; a report on the roles of the United States Food and Drug Administration, the American Medical Association, pharmaceutical manufacturers, and others in connection with the irrational and massive use of prescription drugs that may be worthless, injurious, or even lethal., Boston: Houghton Mifflin, LCCN 65015156. Library of Congress catalog entry.
- McFadyen, R.E. (1976), "Thalidomide in America: A Brush With Tragedy", Clio Medica 11 (2): 79–93.
- Mulliken, J. (August 10, 1962), "A Woman Doctor Who Would Not be Hurried", Life Magazine 53: 28–9, LCCN 37008367.
- Perri III, Anthony J; Hsu MD, Sylvia, "A review of thalidomide's history and current dermatological applications", Dermatology Online Journal 9 (=3): 5, retrieved August 14, 2006.
- Seidman, Lisa A.; Warren, Noreen (September 2002), "Frances Kelsey & Thalidomide in the US: A Case Study Relating to Pharmaceutical Regulations", The American Biology Teacher 64 (7): 495, doi:10.1662/0002-7685(2002)064[0495:FKTITU]2.0.CO;2, 7.
- Stamato, Linda (December 17, 2012), "Thalidomide, after fifty years: A tribute to Frances Oldham Kelsey and a call for thorough, responsible federal drug regulation and oversight", NJ Voices (NJ.com). | <urn:uuid:4796ba2d-213d-448d-977b-012e45a7db11> | CC-MAIN-2015-35 | https://en.wikipedia.org/wiki/Frances_Oldham_Kelsey | s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440644064538.25/warc/CC-MAIN-20150827025424-00281-ip-10-171-96-226.ec2.internal.warc.gz | en | 0.929995 | 3,636 | 2.671875 | 3 |
Strengthening the Pelvic Floor
As you get older, your pelvic floor muscles get weaker. Women who have had
children may also find they have weaker pelvic floor muscles.
The pelvic floor is a large hammock of muscles
stretching from side to side across the floor of the pelvis. It is attached to
your pubic bone in front, and to the the tail end of your spine behind. The
openings from your bladder, your bowels and your womb all pass through your
pelvic floor. Your muscles near your pelvis are very important to
stretch. They get tight
very quickly and can become a serious problem if not handled. The pelvic floor
muscles can be separated into lifting, opening and closing muscles. These
muscles all have a role to play in the elimination process with the opening and
closing muscles being used at the actual time of going to the toilet. The
lifting muscles support the organs in the pelvis as we move about and exert
ourselves during the day --
walking, standing, lifting, sneezing and toileting. They help keep the
rectum and bladder in the 'right place' so that we can pass urine and faeces
efficiently and without straining. They give support during
childbirth and are
important in love making. They can be damaged or weakened by:
Childbirth- Evidence suggests that
problems can start during
pregnancy and not just after
birth. Women who
have had multiple births, instrumental births (with forceps or ventouse),
severe perineal tearing or large babies (birth weight over 4kg) are at greater
risk of pelvic floor muscle damage. If you're trying to get
fit after the birth
of your baby, don't do straight-leg sit-ups and double-leg lifts. These put
severe pressure on your pelvic floor and your back.
Straining to pass stools - Chronic or
repeated straining on the toilet (associated with
lead to pelvic floor weakness and/or prolapse of the organs into the vagina or
to a rectal prolapse (the rectal lining protrudes from the anus). It is
important to teach the underlying bowel problem and good toileting habits.
Chronic coughs and sneezing- Chronic
coughing for any reason (for example,
asthma, bronchitis or
a smoker's cough) increases the risk of urinary incontinence and prolapse.
Being overweight- Being
overweight increases the
risk of leaking urine and may place greater strain on the pelvic floor.
Heavy lifting- Heavy lifting can
create pressure on the pelvic floor and ultimately lead to prolapse. Women in
certain professions such as nursing or courier services are at particular
risk. Women performing heavy
at a gym can also be at risk of straining the pelvic floor.
High impact exercise Women involved in
high impact sports such as basketball, netball or
running are at
increased risk of leaking urine. This applies to elite athletes as well.
Age Pelvic floor muscles tend to get
weaker with increasing age. Pelvic floor muscle exercises can help strengthen
them at any age.
Strong pelvic floor muscles
can help you to:
reduced risk of
'sagging' of internal organs)
Support the baby during
prepare for, and recover
and orgasmic potential; and
increased social confidence and quality
Help to stabilise and support the spine
Identifying the Pelvic Floor muscles
First try to find your pelvic floor muscles, by one of these ways:
Try to tighten your muscles around your vagina and back passage and lift
up, as if you're stopping yourself passing water and wind at the same time.
A quick way of finding the right muscles is by trying to stop the flow of
urine when you're in the toilet. Don't do this regularly because you may start
retaining urine. Once you've found the muscles, make sure you
relax and empty your bladder
If you're not sure you are exercising the right muscles, put a couple of
fingers into your vagina. You should feel a gentle squeeze when doing the
Why should I do pelvic floor muscle
The reproductive system lies with in the lower part of the abdomen and is
protected by the bony pelvic girdle. This area needs to be open and relaxed so
that energy can circulate freely through the reproductive system and conception
is unhindered. Regular movement of the pelvis brings energy,
strength to this area.
Besides, regular pelvic floor muscle exercises make the muscles that support
your pelvic organs stronger and helps you use the muscles more effectively.
Women who have a problem with urine leakage have been able to eliminate or
greatly improve this problem just by doing pelvic floor muscle exercises each
Pregnant and postpartum women who do pelvic floor muscle exercises have
significantly less urine leakage.
Exercises to work the pelvic floor
A helpful stretch to loosen you inner thighs is called the
butterfly stretch. You may be familiar with this but you should know how to do
it correctly. Sit on the floor on your butt. Stretch your legs straight out
and then bring your feet in towards your pelvis. Put the bottoms of your feet
together and pull them into your body. You can also lean over to feel a more
Another stretch is the side split. Stand up with your feet
slightly farther out to the sides. Slowly push each of them farther away from
your body. Go down as far as you can to the floor. Hold your lowest position
for a count of ten. After that, it is easier to just fall onto your butt and
then get up.
This stretch is called the Eye Of The Needle stretch. You
will feel a pull in your outer
buttocks. Lie down on your back and putt both
of your feet in the air. Put one of your legs on top of the other's thigh.
Keep the straight leg high in the air. Now grab the back on your straight leg
and pull it into your body. Repeat this with both legs.
Pelvic stretch-Sit on
the edge of a sturdy chair with feet apart and set firmly on the floor. Place
your hands on your thighs above your knees, with fingers turned in and elbows
turned out. Lean forward, bend your elbows and take your upper body weight on
your thighs. This frees the pelvis-think of it as a bowl and tip it forward at
the front "rim" (the pubic bone) and up at the back "rim" (where the sacrum
joins the spine). Open the front of the body by spreading your arms with palms
up, lifting your chest and tucking your pelvis under so that the front pelvic
"rim" rises and the back "rim" is lowered. This movement stretches the spine
and releases tension. It also tightens the lower abdominal muscles that hold
the pelvis in place. Repeat these two movements several times and practice
frequently in order to increase mobility in the pelvic area.
The most well-known pelvic
floor exercises are the
Kegels. Squeeze and draw in the muscles around
your back passage, vagina and front passage and lift up inside as if trying to
stop passing wind and urine at the same time. Try to hold the muscles strong
and tight as you count to 8. Now let them go and relax. You should have a
distinct feeling of letting go. Repeat the "Squeze, Lift and Hold" movement
and let go It is best to rest in between each lift up of the muscles. If you
can't hold for a count of 8, just hold for as long as you can. Repeat this
"Squeeze, Lift and Hold" contraction as many time as you can, up to a limit of
8-12 contractions. Try to do three sets of 8 to 12 squeezes each, with a rest
The pros of pelvic floor exercises
You can do them when sitting, standing or lying down.
You don't need any special equipment.
You can do them with or without vaginal cones.
The downside of pelvic floor exercises
You have to keep doing them for the rest of your life.
It can take up to 15 weeks before you see any difference.
If you haven't noticed a difference after three months, see your
continence adviser again to check whether you're doing them correctly or if
there's another problem.
Important Tips for pelvic floor muscle exercises
Each contraction should involve a concentrated effort to get maximum
tightening. To strengthen your pelvic floor muscles, sit comfortably and
squeeze the muscles 10-15 times in a row.
Try to contract only the pelvic muscles. (If you feel your abdomen,
thighs or buttocks tightening then relax and aim just for the pelvic muscles
by using a less intense muscle contraction. If it seems impossible not to
tighten the abdomen, thigh, or buttock muscles, then concentrate on full
relaxation and try gentle flicks? of the pelvic muscles, working the muscles to higher layers with each flick.)
Be sure to breathe while holding the
When you get used to doing pelvic floor exercises, you can try holding each
squeeze for a few seconds. Every week, you can add more squeezes, but be careful
not to overdo it, and always have a rest in between sets of squeezes.
Practice fully relaxing the muscle for at least 10 seconds between each
Experiment with contracting the muscles in many different positions
(standing upright, lying, sitting, on hands and knees, feet together, feet
Do not forget to, record your progress. You might want to keep a daily diary of whether or
not you have had a leaking accident. Over the weeks you should begin to see a
decrease in the frequency and amount of unwanted urine loss. Another way to
check your progress is to see whether or not you can slow or stop your urine
stream when you are going to the bathroom. We recommend that you try this no
more than once a week. As your pelvic muscles get stronger you will find that
you are able to stop the stream more quickly.
Dated 21 February 2014 | <urn:uuid:928aef45-17dc-4086-b93a-5706e08479ac> | CC-MAIN-2015-35 | http://www.womenfitness.net/strengthening_the_pelvic_floor.htm | s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440644064865.49/warc/CC-MAIN-20150827025424-00224-ip-10-171-96-226.ec2.internal.warc.gz | en | 0.926176 | 2,154 | 2.75 | 3 |
Moriscos (Spanish ""Little Moors") or Mouriscos (Portuguese) were Spanish Muslims who converted to Catholicism during the Reconquista of Spain. The term later became a pejorative applied those who had outwardly converted but secretly continued to practice Islam.
Muslim communities were usually granted religious freedom until the late fifteenth century. This policy changed when Christian authorities in Spain began pressuring Muslims to convert, using such methods as forced conversions, the education of Morisco children in Catholic schools, and the mobilization of the Spanish Inquisition to investigate suspected secret Muslims.
Many Moriscos, however, continued to practice Islam in secret. The continuing vitality of Islamic culture and religion among the Moriscos became a matter of concern for rulers such as Emperor Charles V of the Holy Roman Empire and Philip II of Spain. After several major waves of persecution by the Inquisition and regional expulsions ordered by the government, Philip III of Spain finally determined to expel the remaining Morisco population by decree in 1610. Migration was forced and involved hundreds of thousands of people. Most of the Moriscos made their way to lands controlled by the Ottoman Empire and North Africa. Some settled in France and a number remained in Spain as practicing Christians.
The treatment of the Moriscos by the Spanish Christians represents one of the great failures of the Christian spirit and civilization, paralleling the earlier treatment of the Jews and Marranos.
The reconquest of formerly Christian Spain and Portugal from the Muslims was accomplished over several centuries, with the last Muslim stronghold, Granada, falling in 1492. Muslim converts to Christianity were known as Moriscos, while Muslims who submitted to Christian rule but retained the Muslim faith were called Mudéjars. However, many Moriscos continued to remain crypto-Muslims, just as many Jewish conversos had secretly continued to practice Judaism.
The exact status of the Mudéjars and Moriscos depended on various capitulation pacts and later royal decrees. In Aragon (1118) and Valencia (1238) Muslims who agreed to accept Christian rule were granted freedom to practice their faith. Likewise, after the fall of the city of Granada in 1492, the Treaty of Granada guaranteed the Muslim population the right of religious freedom. However, that promise was short-lived. When Muslims reacted against peaceful conversion efforts on the part of Granada's first archbishop, Hernando de Talavera, the future Cardinal Cisneros took more forceful measures as the century drew to a close: Forced conversions, burning Islamic texts, and the prosecution of some of Granada's leading Muslims.
In response to these and other violations of the treaty, Granada's Muslim population rebelled in 1499. The revolt, which lasted until early 1501, gave the Spanish authorities an excuse to void the remaining terms in Granada's treaty of surrender. In 1501, Granada's Muslims were given the ultimatum of either converting to Christianity or leaving. Most did convert, but usually only superficially, continuing to dress, write, and speak as they had before, and to practice Islam in secret. In 1502, the ultimatums were extended to the Muslims of Castile and Leon. The Muslims of Navarre had to convert or leave by 1515, and those of Aragon by 1525. Additional restrictive legislation was introduced at the national level in 1526 and 1527 under Holy Roman Empire's Emperor Charles V. However, wealthy Moriscos were able to buy exemptions to restrictions against them.
In August 1529 the Turkish Muslim privateer Barbarossa Hayreddin attacked the Mediterranean coasts of Spain and helped some 70,000 Muslims and Moriscos escape from Andalusia in seven consecutive journeys. The sympathy of the Moriscos with such "pirates" worsened their reputation among Spanish Christians.
In 1567, Philip II of Spain issued an order requiring Moriscos throughout the kingdom to give up their Muslim names and traditional Muslim dress, and prohibited the speaking of Arabic. An edict requiring Morisco parents to surrender the education of their children to Christian priests led to an uprising in the Alpujarras from 1568 to 1571, resulting in the forced resettlement of the Moriscos of Granada, often to the kingdom of Valencia. Only a few Moriscos, those who had collaborated with the royal forces during this revolt, were permitted to remain in the city and territory of Granada. The relocation also affected the Moriscos of Castile, who were quite assimilated by that time. During this time, the Spanish Inquisition intensified its attention toward the Moriscos. From 1570, cases involving Moriscos whose conversion were suspect became predominant in the tribunals of Zaragoza, Valencia, and Granada. In the tribunal of Granada, between 1560 and 1571, 82 percent of those accused by the Inquisition were Moriscos.
In Spain's conflict with the Ottoman Empire, the Moriscos were also suspected of being a Muslim fifth column, aiding the Barbary pirates, and conspiring against Spain. Spies reported that the Ottoman Emperor Selim II (reigned 1566-1574) was planning to attack Malta and later Spain, a strategy which would allegedly involve inciting an uprising among Spanish Muslims and Moriscos. King Philip II, thus, enacted additional restrictive measures against them.
However, many of the Muslims and Moriscos had risen to positions of wealth and prominence, and wielded considerable counteracting influence. Aragonese and Valencian nobles in particular appreciated their contribution and tried to protect them from expulsion, advocating a line of patience and religious instruction. Toward the end of the sixteenth century, Morisco writers sought to challenge the perception of their culture as alien to Spain with literary works presenting a version of early Spanish history in which Arabic-speaking Spaniards played a positive and major role.
Meanwhile, some Moriscos indeed fought against Christians as corsairs based at Algiers, Cherchell, and Salé. Others became mercenaries in the service of the Moroccan sultan, crossing the Sahara, and conquering Timbuktu and the Niger Curve in 1591.
In Valencia, the Catholic preacher Juan de Ribera came to the conclusion that it would ultimately be impossible to bring the majority of Moriscos to the point of authentic conversion. Determined to persuade the king to banish them, he portrayed the Moriscos as traitors and heretics, justifying their complete expulsion as the logical conclusion of of the reconquista.
The crown ultimately agreed, deciding that the Moriscos were fundamentally untrustworthy and too troublesome to tolerate. The Moriscos were thus forcibly expelled from Spain between 1609 and 1614 by Philip III, at the instigation of the Duke of Lerma. Estimates for this second wave of expulsion have varied with some contemporary accounts setting the number at around 300,000 (about 4 percent of the Spanish population), a majority of which were expelled from what is today Aragon, Catalonia, and Valencia.
The arrangements for the expulsion of Morisco children presented Catholic Spain with a dilemma, as they had all been baptized, and consequently could not be legally transported to Muslim lands. Some authorities proposed that children should be forcibly separated from their parents, but this proved to be impractical, not to mention its moral implications. Consequently, families remained together for the most part, with the official destination of the deportees generally stated to be France. Most of these, however, soon continued on to Africa and the Ottoman Empire, with about 40,000 settling in France permanently. Those Moriscos who sincerely wished to remain Catholic were usually able to find new homes in Italy, but the overwhelming majority of Moriscos settled in Muslim-held lands.
A substantial number of Moriscos were also able to remain in Spain, camouflaged among the Christian population. Some, whose conversion to Christianity was genuine, stayed on for religious reasons, others mainly for economic reasons or as a matter of convenience. It is estimated that, in the kingdom of Granada alone, between 10,000 and 15,000 Moriscos remained after the general expulsion of 1609-10.
Miguel de Cervantes' writings, such as Don Quixote and Conversation of the Two Dogs, offered interesting views of Moriscos. In the first part of Don Quixote, which takes place before the expulsion of 1609-10, a Morisco translates a found document containing the Arabic history that Cervantes is described as "publishing."
In the second part, after the expulsion, the character Ricote is a Morisco and a good mate of Sancho Panza. He cares more about money than religion, however, and thus leaves for Germany, returning later as a false Christian pilgrim with the purpose of recovering treasure that he has buried. He admits, however, that expulsion of the Moriscos is just. His daughter, María Félix, is brought to Berbery but suffers, since she is a sincere Christian.
Morisco is sometimes applied to other historical crypto-Muslims, in places such as Norman Sicily, ninth century Crete, and other areas, along the medieval Christian-Muslim frontier.
In the racial classification of colonial Spanish America, morisco was used for a certain combination of European and African ancestry, regardless of religion, similar to the classification mulatto.
New World Encyclopedia writers and editors rewrote and completed the Wikipedia article in accordance with New World Encyclopedia standards. This article abides by terms of the Creative Commons CC-by-sa 3.0 License (CC-by-sa), which may be used and disseminated with proper attribution. Credit is due under the terms of this license that can reference both the New World Encyclopedia contributors and the selfless volunteer contributors of the Wikimedia Foundation. To cite this article click here for a list of acceptable citing formats.The history of earlier contributions by wikipedians is accessible to researchers here:
Note: Some restrictions may apply to use of individual images which are separately licensed. | <urn:uuid:855c48bc-25f8-45b2-b28e-e86e325db3de> | CC-MAIN-2015-35 | http://www.newworldencyclopedia.org/entry/Morisco | s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440645396463.95/warc/CC-MAIN-20150827031636-00157-ip-10-171-96-226.ec2.internal.warc.gz | en | 0.96579 | 2,057 | 4.1875 | 4 |
INTEGRATION OF EVOLUTIONARY FORCES
Selection leads to an increase in the average fitness of the
population. Illustrated by the following example. Recall that the average
fitness ("w bar") = wAA p2 + wAa 2pq
+ waa q2. Now consider the case where p=.2, q= .8
so f(AA)=0.04, f(Aa)=0.32 and f(aa)=0.64. If their respective fitnesses
are 1.0, 0.2 and 0.1, then w bar = .04(1)+.32(.2)+.64(.1) = .132. If selection
were to continue for a number of generations q would decrease and p would
increase. Lets say that we recalculated the average fitness when p=0.8
and q=0.2: thus f(AA)=.64, f(Aa)=.32 and f(aa)=.04 so w bar = .64(1)+.32(.2)+.04(.1)=.708.
When will the average fitness reach a maximum? when the deleterious allele
(a) is selected out of the population so the only genotype is AA, the most
fit genotype. (How would this differ if there was dominance?)
The sickle cell story with a new twist. There are actually more than
just two alleles relevant to the sickle cell story. The normal allele (A)
and the sickle allele (S) produce the genotypes traditionally considered:
AA is normal but susceptible to malaria, AS individuals are not severely
debilitated, but they are resistant to malaria, SS individuals are severely
affected and rarely survive to reproduce. A third allele (C) results in
the following genotypes and phenotypes: AC is normal but susceptible, SC
has mild anemia and CC is normal and resistant to malaria. The following
approximate fitnesses have been assigned to these genotypes:
Bantu speaking people moved into central and western Africa. A slash and burn agriculture opened up habitat for mosquitoes and along with them came Plasmodium the malarial agent. Consider a large population with the above fitness exposed to malaria for the first time. f(A) ~ 1.0 f(S) and f(C) very low, thus all S and C alleles will be in heterozygous states as AC or SC. Virtually No SS or CC genotypes will be present (the product of two very small numbers). Selection will lead to an increase of the S allele due to its higher fitness as a AS heterozygote. Expect no change in the C allele because heterozygotes have equal or lower fitness than other common genotypes. Can't "pass through" the heterozygote stage.
Selection is "short-sighted" cannot "see"
the best solution, i.e., the highest fitness state where C goes to fixation.
How do we get to the "best" condition?
Consider the effects of population structure: Bantu people establish
local breeding groups of small effective population size. This necessarily
will bring on some inbreeding which serves to increase homozygotes
and decrease heterozygotes. Some CC genotypes will be produced by the chance
effects of inbreeding and now the "best" genotype is present
in the population and selection can "see" it so the frequency
of the C allele increases, and the population would go to fixation for
The highest fitness state could not be reached by selection alone; Drift can affect the outcome of selection. This has been conceptualized as an Adaptive landscape:
Pick two allele frequencies, one for the A locus and one for the B locus.
This defines a "population" (obviously this would be done in
many dimensions for all loci in the genome; only two here for illustration).
The population will evolve by selection to the top of the nearest peak.
If a population starts with f(A) ~ f(B) ~ 0.2, this population would evolve
to the top of the peak at the lower right of the diagram. This is not the
highest peak, but selection acts to increase average fitness and can only
"see" the nearest peak. If drift due to low effective population
size rapidly shifted both f(A) and f(B) to higher frequencies, then the
population might be in the "domain of attraction" of the
highest peak in the upper right corner.
The population would stop evolving when it reached the top of the highest
peak because there is no higher peak to shift to unless the environment
changes at which point we would have to redraw the adaptive landscape.
Sewell Wright conceived of this view of evolution and believed that
this was a more accurate description of how "real" populations
evolved sine most species do have some structure to their populations and
experience drift. So there will be a shifting balance between allele
frequencies and a shifting balance between drift and selection as the causative
agents in evolution. This is the so called shifting balance theory.
Wright envisioned different stages of evolution by shifting balance:
1) Drift in local populations would shift allele frequencies to
new values and the demes may evolve up a local peak because the allele
frequency in such a deme drifted near the 'domain of attraction' of a peak.
The assumption is that without drift, a population on a 'flat' section
of the adaptive landscape would not evolve by natural selection, because
there would be no fitness variation in a flat region of the landscape.
2) intrademic selection where selection within local populations
(demes) would drive the various demes to the top of their nearest peak.
Even if several populations were at different "locations" on
the adaptive landscape, the highest peak may not be reached. One can invoke
stage 2.5 by saying that drift in local populations might move such a population's
allele frequency off one peak and into the 'domain of attraction' of an
adjacent peak with a different maximum fitness. This peak may be lower
or higher than the old one, but after several rounds of drift at least
one population may evolve to the top of the highest peak. 3) This
(these) high-fitness populations will produce many emigrants and
tend to change the other demes' allele frequencies closer to their own
as a result of gene flow; this third stage is called interdemic
selection (selection among demes); emigration rates are proportional
to the extent that the fitness of a given population is greater or less
than the average fitness of all populations. When all populations
are homogenized to the allele frequencies of maximal fitness a new balance
will be achieved and the allele frequencies will be maintained by selection
until the environment changes the adaptive landscape. (For empirical support
of this theory see Wade and Goodnight, 1991 Science vol. 253 pg.
1015-1018 and a commentary on page 973 by Crow).
Alternative way to view the shifting balance theory: consider a surface
with troughs and pits in it. Put several marbles on the surface. If marble
is near pit it falls in selection ~ gravity. Shake surface
and balls will roll up out of pits against gravity and make their
way to new pit. Shaking ~ drift. See figures 8.8-8.11, pgs.
Before discussing the shifting balance view of evolution we considered selection as if it were acting on a single locus. This is a gross oversimplification because many loci are linked along the chromosome. Who's to say that selection is acting the same way on both loci? Things get much more interesting (but more complicated) when we face the reality of linked loci. Consider the cross between the two two-locus genotypes:
The offspring can be AB/AB,
AB/ab or ab/ab. Other two locus genotypes are possible: or
. But these can only be produced in
the cross if there is recombination between the two loci. We can
thus refer to four two-locus gametes AB and ab are the coupling
gametes and aB and Ab are the repulsion gametes (another way
to think about gametes is to just refer to them as a "chromosome"
since this will reflect the linear array of whatever alleles are linked
together). The frequency of these four gametes will be determined by two
things 1) the frequencies of the respective alleles (p and q for
the A locus and a different p and q for the B locus) and 2) the
degree of linkage disequilibrium which describes whether recombination
has broken up any association between the two linked loci.
When allele frequencies all = 0.5 and all gametes are in equal frequency then f(AB) = f(ab) = f(Ab) = f(aB). But if A alleles tend to be associated (linked) to B alleles then AB gametes will be in higher frequency than expected at random. We can quantify the disequilibrium as follows:
D = [f(AB) f(ab)] - [f(Ab) f(aB)]. (Note frequencies are multiplied)
When all gametes are in equal frequency D = 0 i.e., linkage equilibrium.
When only the coupling gametes are present D = 0.25; when only the repulsion
gametes are present D = -0.25. If the frequencies of the alleles are less
than 0.5, then the maximum value for D will be less than 0.25.
Note that when allele frequencies are different from p = 0.5
= q, the maximum value of D (absolute value)will be less than 0.25.
For example if p=0.8, q=0.2 and if only the coupling gametes were in the
population then D = 0.16
A worked example: gamete frequencies in 1000 observations: 580 AB's,
140 Ab's, 60 aB's and 280 ab's. Thus f(A) allele = (520+140)/1000 = 0.66
so f(a) = .34. f(B) = (520+60)/1000 = 0.58 so f(b) = 0.42. At random we
expect the following gamete frequencies: f(AB) should be .66(.58)1000 =
383. f(Ab) should be .66(.42)1000 = 277. f(aB) should be .34(.58)1000 =
197 and f(ab) should be .34(.42)1000 = 143. These numbers of expected gametes
are clearly different from the observed gametes. We can thus calculate
the linkage disequilibrium as d = [.52(.28)] - [.14(.06)] = 0.1372. This
tells us that the A and B alleles are in linkage disequilibrium.
This disequilibrium will be broken up by recombination and the rate
of breakup will be determined by the rate of recombination (see
figure 8.2, pg. 202).
Now let's say that the A locus was under selection with A alleles favored. If the A and B loci were in linkage disequilibrium in the coupling state what would happen to the B alleles? They too would be selected for, but not because they were under selection. This is a very important phenomenon in population and evolutionary genetics called hitchhiking. It demonstrates a very important distinction we must make about selection and phenotype: we need to distinguish between selection "of" and selection "for" If the A allele is favored, there is selection for the A allele and selection of the B allele due to its linkage to the A allele (i.e., linkage between the A and B loci).
Now consider the situation where the nose length is the result of interactions
between the two loci. In the first case the interaction is additive, in
the second case there is epistasis
In the first case if we selected for long noses, we would tend to drive
the a and b alleles to high frequency. If we selected for long noses in
the second case we would tend to drive the A and b alleles to high frequency.
The important distinction between these two tables is that in the simple
two-locus additive case on the left, heterozygotes at one
locus are intermediate between the two homozygotes regardless of
the genotype at the other locus. In contrast, the table on the right shows
that the relationship between genotype and phenotype at one locus depends
on the genotype at the other, interacting locus. In a sense, one locus
is modifying the expression of the other locus. If selection were
to act in favor of nose length in the right-hand epistatic system,
the way alleles "marched to fixation" would be very different.
Now consider how linkage and epistasis can affect the response
to selection. In the second case above if there was high linkage disequilibrium
so that all we had we AB and ab chromosomes in the population (= AB or
ab gametes in the gamete pool), there would be less variation to select
on (sizes 1, 6 and 3). Now if there was recombination such that Ab and
aB chromosomes were produced, then the full range of phenotypic variation
would be exposed (up to 9) and selection would rapidly shift the mean phenotype
to longer noses and to high frequency of A and b alleles.
The general point is that loci do not act independently and their response
to selection depends critically on their linkage relationships and
their interaction with other loci. For the ecologically minded,
there are some interesting parallels between community ecology and population
genetics: there are an uncountable number of ways that the interacting
participants can interact. In community ecology one considers species in
a community; in population genetics one considers genes in the genome.
The fate of each player depends on the degree to which it is "connected"
to the other players in the system. Darwin referred to the complexity of
nature as a "tangled bank"; this is very true of the genes within
genomes within populations. | <urn:uuid:6f1d5917-88d4-424c-9e00-5621b13fed27> | CC-MAIN-2015-35 | http://biomed.brown.edu/Courses/BIO48/9.Integration.forces.HTML | s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440645396463.95/warc/CC-MAIN-20150827031636-00159-ip-10-171-96-226.ec2.internal.warc.gz | en | 0.9097 | 3,061 | 3.46875 | 3 |
Museum Acquires Unique Collection
|See pieces from the collection in the exhibition PLAY WORK BUILD.|
Blueprints Spring 2007
Volume XXV, No. 2
The National Building Museum has acquired an extraordinary collection of several thousand architectural toys, assembled over the past few decades by George Wetzel, a now-retired schoolteacher from Peotone, Illinois. Items in the collection, many of which date to the 19th century, include simple building blocks, sophisticated Erector Sets, and other kits used by children of various ages to create miniature buildings and cities. Viewed together, these toys not only trace the progress of construction technology, but also reflect and illuminate dramatic shifts in domestic culture.
Martin Moeller: How did your toy collecting get started?
George Wetzel: It began about 25 years ago, when I started to have my own family. I was buying toys for my children and soon realized that the ones available at the time were a lot different from the ones I had growing up. Today’s stuff is often a bunch of cheap junk compared to what I remember as a child. There are still sets they call “erector sets” these days, but nothing like what I had 50 years ago. There are exceptions—I continue to buy Legos, which I enjoy. They are among the few toys that I still think well of. Legos have always been good in terms of function—they challenge kids’ creativity.
Within a year or so after I started buying toys for my kids, my own dad actually turned up my old American Flyer train, and within another year my mother turned up my old Erector Set. They were so well made back then, you just didn’t throw them away. You kept them for the next generation. When I started collecting, I found that this was not unusual—handing down from generation to generation. But my kids did not show much interest in toy trains.
Gradually, I found a lot of other Baby Boomers like me, and then I realized that there was a group of collectors of toy trains. The more I found the more I rediscovered the history and variety of these things. I got the collecting bug. Gradually as my kids showed less and less interest, it turned into an addiction for me!
Moeller: But you did not have a background in architecture or engineering, right?
Wetzel: I am a retired English teacher. I took a few classes in college about architecture, but that’s not really what drew me to this. A lot of collectors of these toys are architects and engineers, but not me.
I decided to keep my collecting interest fairly broad. I asked myself, “What has not been done before? What is really unique?” So I came across architectural sets fairly early on.
Moeller: How did your philosophy of collecting develop?
Wetzel: It was a learning process—every year I came across some new toy and new piece of history. I wanted to have a representative collection of just about every type of building toy that was made. I started writing letters to people, joining collectors’ groups and clubs, and found out where you can get these things.
Most of the items initially came from other collectors. I went to trading shows, where you could buy, sell, and trade. I kept bumping into the right people. Once they realized I was a serious and studious collector, they let me into the “club.” I didn’t have a lot of money—I soon found it was kind of a rich man’s game—but I made a commitment to acquire things over a long period of time. I always made a point when I met collectors to go see their stuff first hand. Once I saw those collections, that is what pointed me in a direction that no one else had quite gone. I focused on the architectural aspect of these toys, which I didn’t see anyone else doing at the time.
Moeller: Did your attitudes toward the collection itself change over the years?
Wetzel: My objectives changed. I have been doing this about 25 years, and about every five years my direction changed a little. There was a certain point in time I remember when I decided that, on the one hand, I could devote myself to actually building layouts from these sets and spend hours away from my family. On the other hand, I could just collect and put them on a shelf and enjoy them—and keep spending time with my family. I just enjoyed the collecting aspect more than the actual building.
As I got into it, once I started seeing some of the exotic and unusual Victorian pictures on the box lids, that intrigued me even more, especially the old period graphics on the boxes—boys in knickers and girls in pinafores.
Moeller: Forgive the cliché question, but what are your favorite items in the collection?
Wetzel: I could probably find a dozen really unusual, rare, and valuable items. There’s the Bilt-E-Z, a metal set made in Chicago in the 1920s, which can be used to make a very realistic looking early skyscraper. Not only was the result authentic, but the set fit together easily—it was easy to use. Some others are so awkward and hard to build with—really not for kids at all. Examples of the Bilt-E-Z set can be found, but to find a large set that’s whole and presentable was tough.
Moeller: You have pointed out that one of the ironies of collecting is that people tend to collect unsuccessful products, because they were not produced in large numbers and are therefore rare. So what are the worst items you collected?
Wetzel: Right. I wasn’t really seeking the best or most popular. It’s the oddball sets that are rare because not many were made. Sometimes you’d see a nice picture on the box, but then you’d open it up and say, “What’s this?”
One of the oddest is the Build-a-Set, though there are lots with similar names. It was made in the early ’40s. During the war, there was a metal shortage, so this was made out of cardboard. Instead of nuts and bolts, there were little wooden pins. It’s so flimsy—the whole concept was crazy. I found two versions—one in mint condition, the other mangled, showing what happened if you actually tried to use it.
Moeller: You primarily collected sets that were used to create miniature buildings. Where does the history of such sets begin?
Wetzel: A lot of early sets had religious connotations. They were often used to build model churches. It was traditional to play with them on Sundays, but then the kids would have to put them back in the box and not play with them again until the next Sunday. Some pieces had scripture verses written right on them. So they were attempting to instill Christian values with these blocks in a hands-on, tactile way. That’s where this all started back in the 1850s.
Moeller: How did they develop from there?
Wetzel: From there through the 1890s they became more sophisticated. A key figure was Frederick Richter in Germany in the 1880s. He made building blocks that were like stone.
By the 1890s, they started putting metal pieces in the sets for roofs and floorboards, or to make bridges. Before the introduction of metal parts, roofs, in particular, were always a problem, so this was a step to another level of realism.
In 1901, Frank Hornby invented the Meccano set in England—a precursor to the Erector Set in the United States in 1913. This brought building sets to a whole other level, with wheels and other moving parts.
A.C. Gilbert, who invented the Erector Set in 1913, was a genius at several levels. Once he got into toys, he made refinements to what Hornby had done, operating on the theory that bigger is better. With Gilbert’s set, you could make bridges and towers seven or eight feet high, as well as airplanes and zeppelins and locomotives. He really brought it out of toy realm and into the model-building arena.
The high-end toys from the teens, ’20s and ’30s are highly sought after. Then came the Depression and the war, and Lincoln Logs and Tinker Toys became popular because no one could afford to make metal toys. Then in the ’50s and ’60s, it was all about plastic. I kind of feel lucky to have grown up in the ’50s, because I was part of the last generation to have a lot of really high-quality toys.
Moeller: When did you decide that it was time for the collection to find a new home in a museum?
Wetzel: Eventually, after about 15 years of collecting, I could barely walk into my attic, and that’s when I realized these things belong in a museum. That’s also when I started writing stories about my experiences as a collector.
Not much happened for some time. But then I spoke to Chase [Rynd] a couple of years ago. He came out and looked at the collection, and he was thoroughly taken by it. I was very pleased, as [the National Building Museum] is exactly the type of place where I hoped it would end up.
Moeller: How would you assess the importance of this collection for posterity?
Wetzel: When you see people pick up these items and touch them like they are their old friends, you realize the connection. These were so popular then— they were the equivalent of the video games of today. And yet, in many ways, they were the exact opposite of the video games. These toys demanded a lot of patience, a lot of creativity. You’d spend a whole evening with these things—hours and hours. The whole family could enjoy them. Now [that the collection has gone to the National Building Museum], future generations will have a chance to catch a glimpse of another way of life. | <urn:uuid:c7fd01be-b743-4036-9415-7dac32a45071> | CC-MAIN-2015-35 | http://www.nbm.org/about-us/publications/blueprints/toy-story.html | s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440644064951.43/warc/CC-MAIN-20150827025424-00158-ip-10-171-96-226.ec2.internal.warc.gz | en | 0.98281 | 2,143 | 2.59375 | 3 |
Redesigning a River: Architect Jeanne Gang
Whether she sought the position or not, Jeanne Gang has risen to the tier of American “celebrity architects,” a rarefied community of building designers whose work commands national praise and attention. Of her most famous project -- the 82-story Aqua apartment tower in her hometown of Chicago -- the Pulitzer Prize-winning architecture critic Paul Goldberger wrote in The New Yorker: “It reclaims the notion that thrilling and beautiful form can still emerge out of the realm of the practical.” The design of that skyscraper -- with its irregularly shaped balconies stacking and combining in such a way as to suggest rippling fabric, or the surface of a pond after a pebble has been tossed into it -- helped earn Gang a MacArthur “genius” grant in 2011. She was the first architect to win one in over a decade.
Gang considered a career in engineering before deciding on architecture; ultimately her focus, regardless of the project, is on the solving of problems. Those undulating balconies, for instance, aren’t simply there to make street-level gawkers stop in their tracks (and they certainly do). As gorgeous as they are, they’re serious energy savers. The built-in concrete shades help cool the apartments below them and do double-duty as protection from fierce winds that typically buffet buildings of that height. The design saves massive amounts of money and materials that would otherwise need to supplement the construction.
One of Gang’s most exciting projects, however, is still in the theoretical stages. If it ever comes to pass, the project will have a profound effect on Chicago and on three bodies of water -- the Chicago River, Lake Michigan, and the Mississippi River -- that are crucial not only to the city, but to the larger Midwest region, and indeed, the entire country.
In an admirable effort to reduce Great Lakes pollution in the late 19th century, civil engineers actually reversed the flow of the Chicago River, which originally emptied into Lake Michigan. Today, instead of stoically accepting the Chicago River’s dirty outflow, Lake Michigan releases a billion gallons of water per day into the river. Unfortunately, that simply means that the city’s pollution now runs the other way -- toward the Mississippi River watershed, where it eventually makes its way downstream to the Gulf of Mexico. Since most of the Chicago Riverfront has historically been zoned for industrial use, Chicagoans resigned themselves long ago to the fact that this waterway, which cuts through the heart of their city, is little more than an escape route for runoff and effluvia -- a fact that was grimly acknowledged in the naming of “Bubbly Creek,” a section of the river beside the city’s notorious meat-packing district that became a watery grave for livestock remains.
In her book Reverse Effect: Renewing Chicago’s Waterways, which grew out of a year-long collaboration with NRDC, Gang details how a plan to place a permanent barrier in the Chicago River -- between the Great Lakes system and the Mississippi River watershed with which it now connects -- could solve a number of problems at once. It could stop the recent invasion of Asian carp into Lake Michigan, where the ravenous fish are sure to wreak havoc if they ever fully establish themselves. The barrier could ameliorate troublesome flooding and pollution that have chronically plagued the Chicago River, which still roils with sewage and runoff after heavy rains. And it could revivify a beloved city symbol, returning the long-neglected river to citizens -- and making it, finally, into a destination that Chicagoans can use and enjoy.
Jeanne Gang spoke to me recently from the offices of Studio Gang, her architecture firm in Chicago.
How did the proposed project that you outline in Reverse Effect come into being?
I’d been working with people from NRDC for a long time. One day we did an eco-salon here at our studio, talking about all of our green projects. I got to talking with some people from NRDC about the invasive species issue: about the carp that are heading toward the Great Lakes up the Chicago River. They told me that they were studying the benefits that might come from placing a dam in the river, establishing a barrier between the Great Lakes watershed and the Mississippi River basin. My curiosity was sparked: I thought it was a great opportunity to think not only about the barrier itself, but also what the act of creating it could mean for the city and its future.
What kinds of benefits were you imagining at that point?
The issue of the invasive species is one thing, but there’s also the issue of water quality -- the fact that we’re still putting raw sewage into our waterways. Even though they’re trying to expand our city’s deep-tunnel sewer system, the capacity of that system isn’t great enough to handle the rain; it just isn’t able to keep up. With very little rainfall -- barely over half an inch -- we have a situation where we’re sending runoff directly into the river and lake. And with climate change, this will get even worse. We’ll have stronger storms in shorter amounts of time.
And then another issue was really just the feel of the riverfront. It’s post-industrial; much of it is now just sitting there, abandoned and unused.
Along with the barrier, what can be done to improve the quality of water in both the river and the lake?
One of the most important things for improving water quality would be to reduce runoff by putting in a lot more green infrastructure, so that you could absorb that much more rainwater and not just flush it into the sewer system, which then becomes overwhelmed.
Eventually, as steps like this and others started taking place, the river itself would be remediated and cleaned. Ultimately what we want to do is not keep wasting the water that’s coming out of the lake and flushing it down to the Mississippi. It’s exciting; it’s really a new way to think about the relationship between these three bodies of water. Instead of thinking of the Great Lakes with these canals coming off that are taking water out of the lakes and down to the Mississippi, we’d be capturing the water, using it, then cleaning it first with technology and further by charging it into a series of wetland lagoons, and then letting it go back to the lake. Which would be amazing, if you think about what that could mean for the quality of life in the city in the future.
How could your plan improve the quality of life for Chicagoans?
For one thing, just increasing access to the river would be huge. It’s not in the greatest, most pristine condition right now, but I think it’s really important to give people a chance to care about the river. If they can’t get to the edge, because it’s in private hands, how can they care about it? Our plan would call for reinvigorating the area of the riverfront by cleaning it up and creating these wetland lagoons, and also adding a harbor. Big boats coming off the lake into the Chicago River, toward downtown, would have a new destination; smaller boats that just wanted to row up and down could launch from there. Also, installing this barrier could create an opportunity to connect the two opposite sides of the river, two neighborhoods that have never been physically connected, in the form of a bridge.
The idea of “creating” a natural filtration system with the aid of green infrastructure like wetlands is undoubtedly a complicated one, logistically and practically speaking. But conceptually it’s quite simple. Do architects and urban planners sometimes overlook simple solutions in favor of the newest, most whiz-bang technological ones?
What people like to hear about are new inventions, new technologies -- those are what get talked about more often. The basis of all our designs at Studio Gang is: What are the easiest, cheapest, and most implementable solutions? We start with that question, and with the goal of trying to reduce basic energy use. Before you introduce mechanisms for creating renewable energy, for example, first you need to concern yourself with just getting the basic energy-use factor down, through how you design the building.
There are some great, tried-and-true solutions. Number one being just the orientation of the building -- if you’re lucky enough to have control over that. (And you don’t always have control over that in a city.) You can reduce energy load just by how you site the building, and the way that you shade from the sun or let the sun in. You have to look at the climate that you’re dealing with and then find out what the cheap -- or even free -- age-old solutions are.
That makes me think of your plan for a large residential building in Hyderabad, India, which takes advantage of local materials that have been used for thousands of years.
A thorough examination of what people have already done is always illuminating -- and exciting, actually. For Hyderabad we realized that we could use material directly from the site, this clay-like material that’s literally right there, and then use a compressed block press to make bricks that don’t require firing. They just air-dry, but they have a very high compression rate, so they’re really strong, and you could just set up a workshop right there on the site. You can significantly reduce the carbon that’s going into the building by employing a material like that.
Not to mention the carbon-reduction benefit that comes from not having to ship materials from hundreds of miles away in trucks.
For the Ford Calumet Environmental Center, a proposed community center in an industrial corner of Chicago, you showed a similar inclination to use recycled and/or locally sourced materials.
That’s a really interesting site -- it combines a much older industrial heritage with a degree of still-live industry, so there are lots of opportunities. A lot of slag -- which is a byproduct of making steel -- moves through the Calumet area, as do other heavy materials. So on the one hand it’s pretty industrial, but at the same time it also happens to be this really important habitat for migratory birds, because it has these natural river wetlands.
And thinking about those birds got us wondering: “What can we use from around here to make the building? To make the building more like a bird’s nest -- using things that are available nearby?” So all of the materials for the building were conceived as things that we could source extremely locally, like from within a four-mile radius of the building. Slag is used in the terrazzo floors; we have acoustic material made out of recycled denim, from old blue jeans!
In all the projects you describe, from the Chicago River barrier and waterfront clean-up, to Aqua, to Hyderabad, to the Ford Calumet site, I sense a pattern: simple ideas can yield big payoffs.
What I love about the idea of the river barrier, for example, is that it’s a small thing, really -- just a piece of infrastructure -- but it has such large implications: for the neighborhoods around it, the city at large, and the entire waterway. By doing something very local and even very small, you can have this great impact. I think the architects of the future are going to be much more involved with these types of problems -- not just designing one building at a time. They’re really going to have to think about how all these things are connected. | <urn:uuid:ca851000-2341-4c44-a531-b274304bb311> | CC-MAIN-2015-35 | http://archive.onearth.org/article/qa-with-architect-jeanne-gang | s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440645167592.45/warc/CC-MAIN-20150827031247-00046-ip-10-171-96-226.ec2.internal.warc.gz | en | 0.960596 | 2,442 | 2.796875 | 3 |
LONGEVITY MEME NEWSLETTER
September 08 2008
The Longevity Meme Newsletter is a weekly e-mail containing news, opinions and happenings for people interested in healthy life extension: making use of diet, lifestyle choices, technology and proven medical advances to live healthy, longer lives.
- The Importance of Autophagy
- The Debate Over How To Tackle Aging
- New SAGE Crossroads Podcasts
- Latest Healthy Life Extension Headlines
THE IMPORTANCE OF AUTOPHAGY
Autophagy is the process by which cells break down old and damaged components, such as the all-important mitochondria, the cellular power plants, thereby recycling the materials needed to build fresh components. Damaged cells cause problems for the surrounding tissue: if cells accumulate less damage, and clear the damage they do accumulate more quickly, then fewer problems result. A higher rate of autophagy should therefore lead to a longer-lived organism.
This appears to be the case. Research over the past few years shows that increased autophagy occurs in a range of diverse genetic and metabolic alterations that boost healthy life span in mammals. Calorie restriction, for example, or more recent progress in manipulating the p53 gene:
"Given the level of funding and interest in calorie restriction mimetics, I imagine that the development of autophagy-enhancing drugs will proceed in the much the same way over the next few years."
THE DEBATE OVER HOW TO TACKLE AGING
The most important scientific debate of our time is not much noticed beyond the aging research community. It is over how to devote resources to create therapies to treat aging. What strategies should be followed? The majority position is made up of researchers who believe that a long, slow, and hard path towards re-engineering our metabolism to slow aging is the only viable path. The minority position consists of researchers who see that striving to repair biochemical damage to the metabolism we have is faster and more efficient.
If we want to see our own lives significantly extended, we'd better help the minority position grow larger. The path of slowing aging is fearsomely complex, a very long haul, and unlikely to help those of us reading this today live longer in any significant way. A way to slow aging that emerges 30 years from now isn't all that helpful for someone who is middle-aged today. A way to reverse the damage of aging 30 years from now is a whole different story - especially as it looks to be no harder to achieve, and quite possibly somewhat easier:
"Every biochemical component in our metabolism is a part of many different complex evolved systems - evolution loves reuse and interacting, linked feedback systems. You can't change a thing without having to worry about profound side-effects in every connected process, and the processes important to aging are right in the middle of the engines of life.
"But the modern longevity engineers, the heretical minority in the aging research community, are not taking that path forward. Rather, they use the metabolism we have when we are young as the ideal reference model, and seek to reverse all changes away from that reference model that occur with age. No re-engineering, no worrying about how change A affects systems B, C, and D - this is a straightforward repair and restoration strategy. The objective is to restore the metabolism we know works, not create some new metabolism that must be extensively tested and understood.
"That is efficiency, and the nature of efficiency in longevity research is the most important debate within the life sciences today, for all that most people know nothing of it. The result of this debate will determine how long we all live in good health."
NEW SAGE CROSSROADS PODCASTS
A couple of new podcasts can be found at SAGE Crossroads. Links and commentary are in the following Fight Aging! post:
"Humanity faces many challenges this century. There are three important considerations that can help us distinguish between the challenges that are truly the biggest problems from those that are less pressing. The first is the magnitude of the harms in question. Second, is their certainty of happening. Last, is the likelihood that we could do something about them. Aging scores very high on all three of these issues."
The highlights and headlines from the past week follow below.
Remember - if you like this newsletter, the chances are that your friends will find it useful too. Forward it on, or post a copy to your favorite online communities. Encourage the people you know to pitch in and make a difference to the future of health and longevity!
LATEST HEALTHY LIFE EXTENSION HEADLINES
To view commentary on the latest news headlines complete with links and references, please visit the daily news section of the Longevity Meme: http://www.longevitymeme.org/news/
Aging Cast As Autophagy Disorder (September 05 2008)
Enhanced autophagy is clearly important in most - possibly all - of the demonstrated ways to extend healthy longevity in mammals. I noticed this paper today: "Many macromolecules under degradation inside lysosomes contain iron that [makes] lysosomes sensitive to oxidative stress. ... Apart from being an essential turnover process, autophagy is also a mechanism for cells to repair inflicted damage, and to survive temporary starvation. The inevitable diffusion of hydrogen peroxide into iron-rich lysosomes causes the slow oxidative formation of lipofuscin in long-lived postmitotic cells, where it finally occupies a substantial part of the volume of the lysosomal compartment. This seems to result in a misdirection of lysosomal enzymes away from [autophagic vacuoles], resulting in depressed autophagy and the accumulation of malfunctioning mitochondria and proteins with consequent cellular dysfunction. This scenario might put aging into the category of autophagy disorders."
Progress in Bypassing Mitochondrial Damage (September 05 2008)
Allotopic expression of genes normally found in mitochondrial DNA is a core portion of the Strategies for Engineered Negligible Senescence. It is the process of inserting a copy of vital mitochondrial genes into the cell nucleus, and then figuring out how to get the proteins produced by those genes back to the mitochondria where they are needed. This could eliminate the contribution of mitochondrial DNA damage to aging. A technique for doing all this is now demonstrated in rats: "We obtained a complete and long-term restoration of mitochondrial function in human fibroblasts in which the mitochondrial genes ATP6, ND1, and ND4 were mutated ... ND1 and ND4 are mutated in nearly all cases of Leber hereditary optic neuropathy (LHON). LHON is the most common mitochondrial disorder and is characterized by a loss of vision. ... They introduced the human ND4 gene with the mutation present in the majority of LHON patients into rat eyes. The treatment caused retinal ganglion cells (RGCs) to degenerate significantly when compared to those from control eyes and was associated with decreased visual performance. Importantly, reintroducing normal ND4 led to prevention of RGC loss and visual impairment, effectively rescuing the animals from impending blindness. ... These data represent the 'proof of principle' that optimized allotropic expression is effective in vivo and can be envisaged as a therapeutic approach for mtDNA-related diseases."
Reactive Carbonyl Species, ALEs, and Aging (September 04 2008)
Free radicals (such as reactive oxygen species) are increasingly generated with age - this is the end of a long chain of consequences that starts with damaged mitochondrial DNA. How do those oxidizing agents actually cause widespread harm to bodily systems? This paper gives an overview of one broad set of mechanisms, wherein step one is the creation of reactive carbonyl species (RCS) by free radicals: "Most of the biological effects of RCS [are] due to their capacity to react with cellular constituents, forming advanced lipoxidation end-products (ALEs). Compared to reactive oxygen and nitrogen species, lipid-derived RCS are stable and can diffuse within or even escape from the cell and attack targets far from the site of formation. Therefore, these soluble reactive intermediates, precursors of ALEs, are not only cytotoxic per se, but they also behave as mediators and propagators of oxidative stress and cellular and tissue damage. ... The causal role of ALEs in aging and longevity is inferred from the findings that follow: a) its accumulation with aging in several tissues and species; b) physiological interventions (dietary restriction) that increase longevity, decrease ALEs content; c) the longer the longevity of a species, the lower is the lipoxidation-derived molecular damage; and finally d) exacerbated levels of ALEs are associated with pathological states."
Update on the Longevity Science Amex Members Project (September 04 2008)
From the Methuselah Foundation blog: "I'm pleased to say that the pro-longevity science community rallied to vote the Amex Members Project submission "Undergrads Fighting Age Related Disease" into the top 25 projects by vote totals - and made it the most discussed project of all. Thank you! That discussion is still ongoing, by the way, and people unfamiliar with longevity research have questions about the project. Feel free to jump in and help answer them. What comes next? Well, between now and September 9th - less than a week away - the Members Project advisory panel will look at the projects, votes, and discussions, and announce the final 25. Those 25 projects will be voted on by Amex card holders to determine which 5 will be funded. ... So, all you generous folk who rounded up your friends and spread the word: we're going to do it all again for those with American Express cards starting on the 9th. We here at the Methuselah Foundation are looking forward to it!"
Submissions Wanted For Hourglass III (September 03 2008)
From Ouroboros: "The third installation of Hourglass, a monthly blog carnival devoted to the biology of aging, will appear on September 9th at SharpBrains. We are soliciting entries in the general subject area of aging and biogerontology: Topics of posts should have something to do with the biology of aging, broadly speaking - including fundamental research in biogerontology, age-related disease, ideas about life extension technologies, your personal experience with calorie restriction, maybe even something about the sociological implications of increased longevity. Opinions expressed are not necessarily those of the management, so feel free to subvert the dominant paradigm. If in doubt, submit anyway. Submissions should be emailed to [hourglass.host][at][gmail][dot][com]. (In the meantime, feel free to check out previous editions of the carnival, here and here. Hourglass IV will appear on October 14th at psique.)"
Another Regenerative Strategy For Hearing Loss (September 03 2008)
Following on from the gene therapy approach for age-related deafness mentioned a few days ago, here's a cell-based therapy via EurekAlert!: "hearing loss due to cochlear damage may be repaired by transplantation of human umbilical cord hematopoietic stem cells ... the team used animal models in which permanent hearing loss had been induced by intense noise, chemical toxicity or both. Cochlear regeneration was only observed in animal groups that received HSC transplants. Researchers used sensitive tracing methods to determine if the transplanted cells were capable of migrating to the cochlea and evaluated whether the cells could contribute to regenerating neurons and sensory tissue in the cochlea. ... Our findings show dramatic repair of damage with surprisingly few human-derived cells having migrated to the cochlea. A fraction of circulating HSC fused with resident cells, generating hybrids, yet the administration of HSC appeared to be correlated with tissue regeneration and repair as the cochlea in non-transplanted mice remained seriously damaged."
Metformin as Calorie Restriction Mimetic (September 02 2008)
This paper is illustrative of the thinking that leads to trying anti-diabetic drugs as calorie restriction mimetics: "Studies in mammals have led to the suggestion that hyperglycemia and hyperinsulinemia are important factors both in aging and in the development of cancer. It is possible that the life-prolonging effects of calorie restriction are due to decreasing IGF-1 levels. A search of pharmacological modulators of insulin/IGF-1 signaling pathway (which resemble effects of life span extending mutations or calorie restriction) could be a perspective direction in regulation of longevity. Antidiabetic biguanides are most promising among them. Here we show the chronic treatment of female outbred SHR mice with metformin (100 mg/kg in drinking water) slightly modified the food consumption but decreased the body weight after the age of 20 months, slowed down the age-related switch-off of estrous function, increased mean life span by 37.8%, mean life span of last 10% survivors by 20.8%, and maximum life span by 2.8 months (+10.3%) in comparison with control mice." Full calorie restriction does better than that (30-40% maximum life span extension), but this is a strong argument for its effects on insulin metabolism to be one cause of enhanced health and longevity.
Another Human Longevity Gene Association (September 02 2008)
The Telegraph reports on confirmation that a class of longevity genes indentified in lower animals also has an effect on human populations: "The gene linked with better health and a longer life is called FOXO3A and although similar genes have been shown to prolong life span in other species, this is the first time that FOXO has been linked directly to longevity in humans. ... Each gene comes in two copies and the team found the longevity effect of this letter was additive: those with one copy doubled their odds of living an average 98 years ... Men who had two G copies did even better and almost tripled their odds of living nearly a century, and were markedly healthier at older ages ... We screened 213 of the long-lived participants' DNA and 402 of the average-lived, focusing on five genes ... These genes were selected for good reason because they involved in the insulin pathway and signalling, which studies of other animals have shown is linked with longevity." This doesn't tell we laypeople more than we already knew: that insulin metabolism is significant in health and longevity variations within a species.
Towards a Regenerative Cure For Hearing Loss (September 01 2008)
From ScienceDaily: "scientists have successfully produced functional auditory hair cells in the cochlea of the mouse inner ear. ... researchers specifically focused on the tiny hair cells located in a portion of the ear's cochlea called the organ of Corti. It has long been understood that as these hair cells die, hearing loss occurs. Throughout a person's life, a certain number of these cells malfunction or die naturally leading to gradual hearing loss often witnessed in aging persons. Those who are exposed to loud noises for a prolonged period or suffer from certain diseases lose more sensory hair cells than average and therefore suffer from more pronounced hearing loss. ... One approach to restore auditory function is to replace defective cells with healthy new cells. Our work shows that it is possible to produce functional auditory hair cells in the mammalian cochlea. ... It remains to be determined whether gene transfer into a deaf mouse will lead to the production of healthy cells that enable hearing."
On the Way to Controlling Telomerase (September 01 2008)
Researchers are making progress in figuring how to control telomerase, and through it influence telomeres, cancer, and aging. From EurekAlert!: researchers "have deciphered the structure of the active region of telomerase, an enzyme that plays a major role in the development of nearly all human cancers. The landmark achievement opens the door to the creation of new, broadly effective cancer drugs, as well as anti-aging therapies. ... Researchers have attempted for more than a decade to find drugs that shut down telomerase - widely considered the No. 1 target for the development of new cancer treatments - but have been hampered in large part by a lack of knowledge of the enzyme's structure. The findings [should] help researchers in their efforts to design effective telomerase inhibitors ... Telomerase is an ideal target for chemotherapy because it is active in almost all human tumors, but inactive in most normal cells. That means a drug that deactivates telomerase would likely work against all cancers, with few side effects." Long-term deactivation will cause massive issues, of course, but that's not the intent for the moment. Given new information about telomerase and mitochondria in aging, there are potentially more interesting end results than good cancer therapies. | <urn:uuid:498796a8-2126-49c7-9227-e1de6b984ca1> | CC-MAIN-2015-35 | https://www.fightaging.org/archives/2008/09/longevity-meme-newsletter-september-08-2008.php | s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440644059993.5/warc/CC-MAIN-20150827025419-00340-ip-10-171-96-226.ec2.internal.warc.gz | en | 0.931598 | 3,461 | 2.78125 | 3 |
History 540: France
1600-1815 Prof. Jeremy
France’s Mid-17th-Century Crisis: The
increase in royal power in France
was dramatically interrupted in 1648 by the outbreak of a series of challenges
to absolutism that came to be known collectively as the Fronde. From 1648 to 1653,
the Fronde plunged France
into a somewhat toned-down version of the disorders it had experienced during
the wars of religion. The king was
driven from his capital, several provinces revolted, and revolutionary claims
for the rights of magistrates, nobles, and even some of the common people to
participate in government were put forward.
The Fronde ended, however, with a restoration of absolute royal authority
rather than a change in the French system of government. For historians, the Fronde raises fascinating
questions about the failure of resistance to develop into a genuine revolution,
like the one that occurred in England
at almost the same time (1640-1660) or the one that occurred in France itself
of Cardinal Richelieu in 1642 and Louis XIII in 1643 plunged France into
another period of uncertainty, like the one that had followed Henri IV’s death
in 1610. The heir to the throne, the
future Louis XIV, was only five years old.
His mother, Anne of Austria,
became regent, assisted by Cardinal
Mazarin, an Italian diplomat recruited to the French government by Richelieu in the years before his death. Within a few years of Louis XIII’s death,
they would find themselves facing a crisis that almost became a
revolution: the Fronde, a series of uprisings that seemed for several years to be
on the verge of toppling the system of absolute monarchy painstakingly created
by Henri IV, Sully, Louis XIII and Richelieu.
The weakness of the Fronde was revealed from the first in its name,
taken from a children’s game played with slingshots (“frondes” in French). The adoption of this label suggested that the
movement was never entirely serious.
Austria and Mazarin did not have to face the religious conflicts that had
confronted Catherine de Medici or Marie de Medici, but they had enough problems
of their own. As in previous regencies,
high-ranking nobles such as the prince of Condé, France’s leading general, and the
duke of Orléans, Louis XIII’s younger brother, insisted on their right to
exercise political influence. In Paris,
the judges of the Parlement, France’s
main court, as well as the members of other royal courts, challenged the
regent’s authority. Another threat to royal
authority came from the head of the Catholic Church in Paris, the Cardinal de Retz. As the “boss” of the city’s clergy, he
controlled a network whose influence extended to the whole population.
Since 1635, France had been fully engaged in
the Thirty Years’ War, fighting against the Spanish Habsburgs. The high cost of the war had forced Richelieu to raise taxes to record levels, creating
fierce discontent that had resulted in a series of peasant rebellions in the
late 1630s. Many royal officials were
also upset by the burden of taxes. The judges of the Parlement were reluctant to
approve unpopular taxes on the rest of the population, and they were also
concerned because they knew that the paulette
tax, which guaranteed their ownership of their offices, was due for renewal in
1648. Mazarin intended to use the
expiration of the paulette as a bargaining tool
to put pressure on the judges to accept his other tax proposals.
Mazarin was particularly anxious to
avoid a domestic crisis in 1648 because he was expecting a victorious end to
the Thirty Years’ war. If he could find
the money to keep the French army in the field, he would be in a position to
achieve a settlement that would significantly weaken France’s
In their anxiety to force through
new tax edicts, Anne of Austria and Mazarin drove the judges of the Parlement
too far. On 15 January 1648, they
brought the nine-year-old king to a formal session of the court, called a lit de justice, to force the judges to
register an unpopular tax measure. The
judges exercised their right to remonstrate
or criticize the edict, starting a series of events that culminated in a call
for the judges of all the Paris
courts to come together to consider reforms in the kingdom. On 26 June 1648, acting without the Regent’s
approval, the Parlement summoned those judges to meet in a body called the Chambre Saint Louis. This date marked the beginning of the
Fronde. Street demonstrations, organized
by Retz, showed that the judges had strong popular support.
The frondeurs focused their anger especially on Mazarin. They denounced him as a foreigner who had no
respect for the laws and institutions of France, and as an intriguer who was
using his influence over Anne of Austria to enrich himself and ruin the
country. Paris was flooded with printed pamphlets
called mazarinades, vicious personal
attacks on the minister, “this foreign rogue, juggler, comedian, famous robber,
low Italian fellow only fit to be hung,” as one of them put it. Anne, a foreigner herself, nevertheless
remained loyal to Mazarin throughout the Fronde, and may even have secretly
married him, although definite proof of this is missing.
The summoning of the Chambre Saint Louis was a dramatic
defiance of royal authority. It looked
like the beginnings of the English
Revolution in 1640, when Parliament had defied king
Charles I. One reason the two movements
took a very different course, however, was that the defiant judges failed to
build a broad base of support.
Initially, nobles like Condé and Orléans remained loyal to Anne and
When they could not subdue the unrest in Paris, Anne and Mazarin decided to flee the
city, taking the young Louis XIV with them, and threaten a military siege of
the capital. On 8 January 1649, the royal
family escaped to the suburb of Saint-Germain.
breakdown of central authority in Paris led to
frondeur movements in many of France’s
provinces as well. In January 1649 in Aix-en-Provence, for
example, judges of the local parlement led a popular uprising against the royal
governor, who had been ordered to replace them with more cooperative
magistrates. “You could even see
disshevelled women, as furious as bacchantes… running through the streets to
arouse the people, some with pistols or naked swords in their hands, others
with sacks of money to win them over; some shouting loudly, ‘Long live liberty
and no taxes’…” one witness wrote.
next few months, Anne and Mazarin negotiated with the leaders of the Paris parlement and
finally reached an agreement with them.
This angered many nobles, however, because their demands for a greater
voice in politics were ignored. The parlementary Fronde launched in 1648
now gave way to the Fronde of the
princes. Revolts broke out in
several provinces, often led by their royal governors or other prominent
nobles. Among those who turned against
Mazarin was the prince of Condé.
Suspecting his treachery, Mazarin had him arrested in January 1650. Condé’s supporters now fought against Mazarin,
while he tried to win some of the original frondeurs over to his side. By February 1651, however, Mazarin’s position
had become so shaky that he and Anne agreed that he should leave the
country. Condé was released from prison
and became the dominant figure in a new royal council.
factions in the country continued to fight among themselves in the rest of
1651, and circumstances gradually permitted Anne to insist on the return of
Mazarin. In September 1651, Louis XIV
was officially recognized as king, giving his mother stronger authority. Condé revolted against being edged out of
power, but the royalist forces were able to defeat him. Support for a return to absolutist government
grew in reaction to the most radical manifestation of the Fronde, the Ormée movement in Bordeaux.
Driven to extremes by the harsh treatment they had suffered from rival
Fronde factions, the people of that city had risen up and formed a
revolutionary government, claiming the right to govern themselves and dismiss
officials such as the judges of their local parlement. Rather than risk the spread of such dangerous
ideas, nobles and parlement members preferred to help restore the
authority of the king, even at the cost of allowing Mazarin to regain
power. By the fall of 1652, the last
elements of frondeur resistance were crumbling; Mazarin returned to France
as the young Louis XIV’s principal minister, a role he would maintain until his
death in 1661.
has gone down in French history as a confusing episode with few permanent
effects. In contrast to the English
Puritan revolution that occurred at the same time, the French rebels had no unifying program. Much of the movement was directed against a
single minister—Mazarin—and the divisions among the frondeurs became apparent
when he withdrew from the scene. The
English revolution resulted in a permanent increase in the powers of
Parliament. The Fronde instead further
discredited the notion of any limit on royal authority in France.
experience of the Fronde had an especially significant impact on the young
Louis XIV. He was deeply marked by the
experience of having to sneak out of his disobedient capital city in 1649. When he became king, he would make sure that
no such threat to his authority would ever arise again. His insistence on his own absolute authority
and his decision to move the royal palace from the center of Paris
to an isolated location at Versailles
reflected his memories of the Fronde. | <urn:uuid:8514fd6a-e9f3-44ef-8ae8-be4a25d1e24b> | CC-MAIN-2015-35 | http://www.uky.edu/~popkin/540syl2007/540Fronde.htm | s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440644064445.47/warc/CC-MAIN-20150827025424-00335-ip-10-171-96-226.ec2.internal.warc.gz | en | 0.96828 | 2,209 | 4 | 4 |
Throughout history, states have generally sought to get larger, usually through the use of force. In the 1970s and 1980s, however, countervailing trends briefly held sway. Smaller countries, such as Japan, West Germany, and the "Asian tigers," attained international prominence as they grew faster than giants such as the United States and the Soviet Union. These smaller countries -- what I have called "trading states" -- did not have expansionist territorial ambitions and did not try to project military power abroad. While the United States was tangled up in Vietnam and the Soviet Union in Afghanistan, trading states concentrated on gaining economic access to foreign territories, rather than political control. And they were quite successful.
But eventually the trading-state model ran into unexpected problems. Japanese growth stalled
during the 1990s as U.S. growth and productivity surged. Many trading states were rocked by the Asian financial crisis of 1997-98, during which international investors took their money and went home. Because Indonesia, Malaysia, Thailand, and other relatively small countries did not have enough foreign capital to withstand the shock, they had to go into receivership. As Alan Greenspan, then the U.S. Federal Reserve chair, put it in 1999, "East Asia had no spare tires." Governments there devalued their currencies and adopted high interest rates to survive, and they did not regain their former glory afterward.
Russia, meanwhile, fell afoul of its creditors. And when Moscow could not pay back its loans,
Russian government bonds went down the drain. Russia's problem was that although its territory was vast, its economy was small. China, India, and even Japan, on the other hand, had plenty of access to cash and so their economies remained steady. The U.S. market scarcely rippled.
Small trading states failed because the assumptions on which they operated did not hold. To succeed, they needed an open international economy into which they could sell easily and from which they could borrow easily. But when trouble hit, the large markets of the developed world were not sufficiently open to absorb the trading states' goods. The beleaguered victims in 1998 could not redeem their positions by quick sales abroad, nor could they borrow on easy terms. Rather, they had to kneel at the altar of international finance and accept dictation from the International Monetary Fund, which imposed onerous conditions on its help.
In the aftermath of the crisis, the small trading states vowed never to put themselves in a similar position again, and so they increased their access to foreign exchange through exports. Lately, they have proposed forming regional trade groups to get larger economically, by negotiating a preferential tariff zone in which to sell their goods and perhaps a currency zone in which to borrow cash.
CHALLENGE AND RESPONSE
Global markets have grown dramatically in recent decades. The international consulting firm McKinsey & Company calculated that in 2007, world financial assets (including equities, private and public debt, and bank deposits) amounted to $194 trillion, or 343 percent of the world's GDP. It is easy to see why smaller economies can be defenseless against shifts in the global market. Money coming into a country can be an unexpected (and sometimes unwanted) boon; money going out can spell disaster. Local inflation and deflation can occur as a result of the whims of untutored but powerful investors based far away on Wall Street or in the City of London. Should foreign investors lose faith in a small country's economy, for whatever reason, that country is in trouble.
During the recent global economic crisis, moreover, even the largest economies confronted huge
losses as foreign and domestic investors removed funds or sold their holdings. From 2007 to 2008, stock markets worldwide depreciated by 50 percent. U.S. interest rates remained low only because China, Japan, and Europe continued to buy and hold U.S. securities; had these funds been removed, no amount of domestic spending (or printing of money) could have compensated. Even the biggest players, in other words, were too small to surmount the crisis on their own.
The world market, of course, has always been larger than its component parts, and it was in part to protect themselves from economic vulnerability that the great powers of the past sought to increase their size and strength. By 1897, the United Kingdom controlled an empire that covered one-quarter of the globe and included one-seventh of the world's population, as the historian Patrick O'Brien has documented. But even the British Empire did not control Russia, the United States, or the rest of Europe, and in 1929, an economic crash originating on Wall Street undercut British imperial self-sufficiency. This proved that no purely political instrument could bring together the world market.
The Great Depression and World War II forced even the major powers to recognize the limits of their individual capabilities. In the aftermath of these traumas, Jean Monnet, a hitherto obscure broker from Cognac, convinced his French and, later, German colleagues that the Western European states were too small to contend with the Soviet Union's huge landmass or the United States' vast industrial heartland. They could compete only if they came together, he argued. And thus began the process of European integration.
The 27 states that now compose the European Union will soon be accompanied by almost ten others, making Europe stretch from the Atlantic to the Caucasus. Member states have benefited from participating in an enlarged market extending beyond their national borders. The absence of tariffs in the EU allows greater cross-border commercial cooperation, which promotes specialization and efficiency and provides consumers in the member states with cheaper goods for purchase. Over time, as economists such as Andrew Rose and Jeffrey Frankel have shown, such trade zones increase their members' trade volume and GDP growth. There are also administrative advantages: southern and eastern European states with less advanced economies have found help and tutelage from veteran EU members and have not been allowed to fail (even if their fiscal policies have been reined in).
Something similar, if more gradual, has been occurring on the other side of the Atlantic as well,
with the formation, in 1988, of the free-trade area between Canada and the United States and of the North American Free Trade Agreement, including Mexico, in 1994. In the 1980s, Canadian Prime Minister Brian Mulroney had worried that the Reagan administration, which was in financial trouble, might reduce Canada's access to the U.S. market. When Ronald Reagan agreed to a preferential trade agreement with Canada, Mexico's president, Carlos Salinas de Gortari, felt compelled to join, lest Mexican exports be excluded from the North American market. Although NAFTA is at best a pale replica of the EU (without courts, decision-making bodies, or a common currency), it paved the way for other efforts in Central and South America. The vaunted Free Trade Area of the Americas has not yet emerged, but there has been a proliferation of bilateral trade agreements containing implicit provisos that they could be merged into a larger unit later on.
In Asia, meanwhile, the Association of Southeast Asian Nations has become increasingly focused on
economic unity since emerging in 1967 in the wake of a regional military crisis. As Europe further united, and particularly after the Asian financial crisis of 1997-98, ASEAN broadened its reach: China, Japan, and South Korea joined an ASEAN + 3 grouping in 1999, and Japan has proposed an Asian regional fund and has even floated the concept of an Asian currency union. These efforts
have floundered on the inability of China and Japan to forge a consensus akin to that between France and Germany in Europe, but that does not mean they could not succeed at some later date, if there were to be a deeper Chinese-Japanese rapprochement.
Finally, in 2006, German Chancellor Angela Merkel -- recognizing that the World Trade Organization's Doha Round of international trade negotiations would fail to reduce tariffs overall -- proposed the establishment of a transatlantic free-trade area composed of the EU and the United States. If realized, this trade arrangement would encompass more than 50 percent of the world's GDP, providing a stimulus and an enlarged market for both U.S. and European industry. hemmed in by Congress (which still has not ratified pending free-trade agreements with Colombia, Panama, or South Korea), U.S. President George W. Bush could not seriously take up Merkel's offer. But the deal might become more popularly attractive should the United States confront a slow economic recovery or even dip back into recession.
Before the twentieth century, states usually increased their power by attacking and absorbing others. In 1500, there were about 500 political units in Europe; by 1900, there were just 25 -- a consolidation brought about partly through marriage and dynastic expansion but largely through force.
In 1914, many statesmen thought that the Great War would consolidate the world even further,
both within Europe and outside of it. Instead, the conflict led to the breakup of the Austro-Hungarian, Ottoman, and Russian empires and dealt their British and French counterparts a serious blow. Military force remained a successful means of territorial expansion outside Europe, however, and in the 1930s, Germany and Japan sought to establish new empires of their own. Their efforts were stopped during World War II, and the remaining European empires disintegrated during the 1950s, 1960s, and 1970s. The Soviet Union was the last to concede, emancipating all of its territories by 1991.
This splintering of global politics into more and smaller pieces, however, was inconsistent with
the functional demands of global economics, which put a premium on size. The question of the late twentieth century, therefore, was how to construct larger economic units despite the discrediting of military expansion. Economic growth seemed a good bet, having worked for various powers in the past, and during the postwar era, the trading states had their heyday. But with that model having recently run into trouble as well, negotiated economic integration is becoming increasingly attractive.
Although the results of negotiated amalgamation are not the same as those of military conquest, they are likely to be more satisfactory and longer lasting. To be sure, an agglomeration of markets within a tariff zone does not guarantee political unity: as the EU shows, political disagreements still intrude, and participants often disagree on external policy. Yet the error is likely to be too much quietude, not aggression.
In the 1950s, the political scientist Karl Deutsch described how groups of countries could become so closely connected through the exchange of messages, values, migration, and trade that military conflict between them would essentially be ruled out. Norway-Sweden, Benelux, and the United States-Canada were cited as examples of such "pluralistic security communities." Since Deutsch's day, the EU has created another, forging a comparable connection between France and Germany and bringing others into their association. Subscription to the EU's acquis communautaire (its current body of law) has a social impact among members. They do not think of breakup but rather think of the prospect of others' joining.
Although the continent has no single decision-making center, its network has multiple nodes that hold the total complex together. The London-Frankfurt and Zurich-Milan corridors offer crucial economies of scale, as concentrations of expertise in finance, technology, and crafts greatly enhance efficiency. And in eastern Europe, a low-cost manufacturing sector is developing with links to hubs in France, Germany, and Italy. In 2008, 168 of the world's 500 largest companies were based in the EU, compared with 153 in the United States.
Europe has fashioned a cost-effective response to the need for size that avoids the mistakes of
yesteryear. The EU's total GDP is higher than that of the United States and will remain so. And in addition to its internal growth, Europe can continue to expand geographically. China cannot take over India, Japan, or South Korea, but Europe can peacefully absorb its neighbors.
RESISTANCE IS FUTILE
The United States cannot ignore the need for size and the new means of attaining it and should
recognize the developmental stimulus that would come from joining forces with Europe, the strongest economic power on earth.
A transatlantic economic association would not involve a political union. Nor would it mean a
gathering of the world's democracies, which do not necessarily have overlapping economic interests. Rather, it would mean combining the two most powerful economic regions of the globe, so that they could prosper more together than they would separately.
There are many theorists who still argue that geographic economic blocs are disadvantageous and potentially dangerous, providing little help for their members while increasing the risks of conflicts like those of the 1930s. Rather than paving the way for broader trade and political accords, these critics argue, such blocs hinder progress as they jockey for position with one another. Critics are right that the British, German, Japanese, and U.S. blocs did not cohere in the 1930s. But there was little foreign direct investment between them, nor production chains of the sort that join great economic powers today. Then, major countries sought to find and monopolize new sources of energy and raw materials, often following a mercantilist path in order to escape the constraining effects of foreign trade. The authoritarian powers also used violence as a tool for achieving economic and territorial gain.
But no great power today would think of solving its economic problems by military expansion. It
could occupy neighboring areas but not assimilate large ones. It definitely could not guarantee extracting their raw materials, oil, or other natural resources, as such attempts would be vulnerable to local subversion. Military expansion, in other words, poses difficulties today that it did not 75 years ago, making the potential dangers of regional economic blocs less of a concern today.
The peaceful expansion of trade blocs today, moreover, is likely to bring outsiders in rather than keep them out. It has done so in Europe and to some degree in North America and Asia as well. Self-sufficient trade blocs are impossible and will not be sought after. The key to a successful trade group, in fact, is that as it grows, it attracts sellers from the outside.
What would China, India, and Japan do if the United States and the EU formed a trade partnership?
They would not find an Asian pact a satisfactory rejoinder to the transatlantic combination. Since the major markets of the world are located in Europe and North America, Asian exporting nations would have to continue to sell to them. And if Japan eventually joined the partnership, the stakes for China and India would rise. China and India might not be significantly challenged if they could substitute domestic sales for exports. But even they, as big as they are, could not do so entirely. However important Chinese consumption becomes, it will not be able to sop up all the goods that China currently exports to technologically advanced and luxury markets in Europe, the United States, and Japan. To avoid falling behind, Beijing and New Delhi would need a continuing association with markets elsewhere.
What all this means is that the patterns of global politics and economics that have prevailed for
the last half millennium are increasingly outmoded. During that period, eight out of the 11 instances of a new great power's rise led to a "hegemonic war." With a potential Chinese challenge looming in the 2020s, the odds would seem stacked in favor of conflict once again, and in other eras it would have made sense to bet on it.
Yet military conflict is not likely to occur this time around, because even if political power
sometimes repels, today economic power attracts. The United States does not need to fight rising challengers such as China or India or even to balance one off against another. It can use its own market capacity, combined with that of Europe, to draw surging protocapitalist states into its web.
During the Cold War, the economic force of the West eventually surpassed and subverted even the heavy industrial growth of the Soviet economy. In the 1980s, the attractions of North Atlantic, Japanese, and even South Korean capitalism were a critical factor in Soviet leader Mikhail Gorbachev's decision to renew his country's economic and political system -- and end the Cold War. They also helped stimulate Deng Xiaoping's reforms in China after 1978.
Now that the formula for capitalist economic success has become widely understood and been replicated, Western economic magnetism will stem not just from the triumphs of individual economies but from their development as an increasingly integrated group. The expansion and agglomeration of economies in Europe -- and perhaps also across the Atlantic -- will serve as a beacon for isolated successes such as those in Asia.
The need for a transatlantic economic union will become clearer should the U.S. economic recovery begin to flag. At some point, U.S. policymakers will recognize -- and find a way to convince the country at large -- that trade agreements with other nations are not a means of transferring U.S. production overseas but rather part of a robust recovery strategy to gain greater markets abroad. The crucial factor may be a recognition that such markets will not continue to open up without dramatic action. The failure of the Doha Round will become apparent, as will the fact that the only realistic response to that failure is to accept the EU's invitation to form a transatlantic free-trade area and essentially extend the U.S. market by almost half a billion people. | <urn:uuid:e89bdc90-3f77-4d0a-aa0e-bb979dc8b4be> | CC-MAIN-2015-35 | https://www.foreignaffairs.com/articles/2010-05-01/bigger-better | s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440645264370.66/warc/CC-MAIN-20150827031424-00101-ip-10-171-96-226.ec2.internal.warc.gz | en | 0.971663 | 3,576 | 2.609375 | 3 |
This species occupies a range from southeastern and central Brazil through Uruguay, Paraguay, and northeastern Argentina. (Redford and Eisenberg , 1992)
Akodon cursor is one of the most common species in the forest and forest-grassland ecotones. In Misiones province, Argentina, they are found in a variety of habitats but prefe flat and xeric, semi-deciduous areas. This species also displays spatial partitioning with Akodon montensis. Akodon cursor dominates elevations from 0-800m leaving Akodon montensis to dominate elevations exceeding 800m. Populations of Akodon cursor flourish in dry, open areas with little human influence. (Gentile, 2000; Redford and Eisenberg , 1992; Patton and Smith, 2001)
Akodon cursor is a medium sized, vole-like mouse, with short limbs, and a short tail. The pelage is soft and full with a reddish brown to olive brown color dorsally, fading to more of a tan on the sides and gradually becoming a reddish tan to gray washed with orange on the venter. The tail is sparsely haired and almost bicolored. The feet are tan and the face shows some blackish hairs. Juveniles weigh around 30g for females and 28g for males. The sub-adult class contains females ranging from 30g-40g and males ranging from 28g-45g. Adult males weigh around 45g and adult females can weigh greater than 40g. (Gentile, 2000; Redford and Eisenberg , 1992; Nowak, 1999)
There is little information on mating in Akodon cursor. (Gentile, 2000)
Breeding season is typically from September to March, however, this species will breed year round opportunistically if conditions are right. Reproduction is also tied to habitat availability. During the rainy season habitat is lost to flooding and scarcity of litter and understory. Most births usually occur in dry periods but reproductive patterns are not distinct. As a result most juveniles are present during periods of low precipitation. The litter size is usually three and average gestation time of other Akodon species is 23 days. Young are weaned at about 14 days old in other Akodon species. Akodon cursor will occasionally hybridize with other species of the same genus including Akodon montensis. Many females retain a copulatory plug to indicate they have mated. For males, sexual maturity occurs at 32-37 days old and at 28g. For females, sexual maturity is delayed to around 42 days or at a weight of 30g. Overall, this species has a short life expectancy, short gestation time, and early maturity which results in rapid population turnover and quick responses to environmental variation. Delayed implantation is thought to occur in some species of Akodon and may occur in Akodon cursor as well. (Gentile, 2000; Redford and Eisenberg , 1992; Patton and Smith, 2001)
Young are nursed and cared by their mother for until they are weaned at about 14 days old.
There is little information on longevity in Akodon cursor. It is likely that most mortality occurs during their first year and that they are unlikely to reach their third year.
Akodon cursor can be territorial. This species also reduces activity in times of low temperatures to conserve heat due to their large surface area to volume ratio. (Gentile and Cerqueira, 1995; Fernandez, et al., 1999; Bittencourt, et al., 1999)
This species has shown the greatest frequency of movement from 0-20m away from nest sites with most movements less than 30m away. This indicates a relatively small home range in comparison to other neotropical rodents. Any significant difference between the movement patterns of males and females has not been observed. Lower mobility of Akodon cursor is also paired with higher aggregation and more permanent populations. (Gentile and Cerqueira, 1995)
Akodon cursor, like most mammals, relies on a suite of visual, auditory, chemical, and tactile cues for communicating with conspecifics. It is likely that olfactory cues are important in communicating territories and reproductive activity.
Stomach samples from this species have indicated a diverse diet ranging from plant material and seeds to adult and larval coleopterans, lepidopterans, and dipterans. (Redford and Eisenberg , 1992)
Akodon cursor blends well with it's environment and utilizes ground cover and brush to hide from predators. A variety of raptors and carnivores feed on this species. (Redford and Eisenberg , 1992)
Akodon cursor is a mouse that typically occurs in great abundance in open, dry areas. It often preys on small insects and plant material. It may also disperse seeds that are ingested as food. This species also acts as food for larger mammals, snakes, and raptors. (Redford and Eisenberg , 1992; Nowak, 1999)
Despite the reputation of rodents to damage crops, this has not been observed in this species. The diet of this species, which includes insects, may actually help reduce farm pests and crop damage. (Gentile, 2000)
There are no known negative impacts of Akodon cursor on humans.
Akodon cursor is abundant in appropriate habitats, they are not protected under CITES or IUCN.
A recent study has shown that females of this species have exhibited an XY chromosome combination in 10-66% of samples. Sex ratio of males to females is typically 1:1. (D'Andrea, et al., 1999; Patton and Smith, 2001)
Lars Higdon (author), University of Wisconsin-Stevens Point, Chris Yahnke (editor), University of Wisconsin-Stevens Point.
living in the southern part of the New World. In other words, Central and South America.
uses sound to communicate
young are born in a relatively underdeveloped state; they are unable to feed or care for themselves or locomote independently for a period of time after birth/hatching. In birds, naked and helpless after hatching.
having body symmetry such that the animal can be divided in one plane into two mirror-image halves. Animals with bilateral symmetry have dorsal and ventral sides, as well as anterior and posterior ends. Synapomorphy of the Bilateria.
uses smells or other chemicals to communicate
having markings, coloration, shapes, or other features that cause an animal to be camouflaged in its natural environment; being difficult to see or otherwise detect.
in mammals, a condition in which a fertilized egg reaches the uterus but delays its implantation in the uterine lining, sometimes for several months.
animals that use metabolically generated heat to regulate body temperature independently of ambient temperature. Endothermy is a synapomorphy of the Mammalia, although it may have arisen in a (now extinct) synapsid ancestor; the fossil record does not distinguish these possibilities. Convergent in birds.
parental care is carried out by females
forest biomes are dominated by trees, otherwise forest biomes can vary widely in amount of precipitation and seasonality.
offspring are produced in more than one group (litters, clutches, etc.) and across multiple seasons (or other periods hospitable to reproduction). Iteroparous animals must, by definition, survive over multiple seasons (or periodic condition changes).
having the capacity to move from one place to another.
an animal that mainly eats all kinds of things, including plants and animals
scrub forests develop in areas that experience dry seasons.
breeding is confined to a particular season
reproduction that includes combining the genetic contribution of two individuals, a male and a female
uses touch to communicate
Living on the ground.
defends an area within the home range, occupied by a single animals or group of animals of the same species and held through overt defense, display, or advertisement
the region of the earth that surrounds the equator, from 23.5 degrees north to 23.5 degrees south.
A terrestrial biome. Savannas are grasslands with scattered individual trees that do not form a closed canopy. Extensive savannas are found in parts of subtropical and tropical Africa and South America, and in Australia.
A grassland with scattered trees or scattered clumps of trees, a type of community intermediate between grassland and forest. See also Tropical savanna and grassland biome.
A terrestrial biome found in temperate latitudes (>23.5° N or S latitude). Vegetation is made up mostly of grasses, the height and species diversity of which depend largely on the amount of moisture available. Fire and grazing are important in the long-term maintenance of grasslands.
uses sight to communicate
reproduction in which fertilization and development take place within the female body and the developing embryo derives nourishment from the female.
breeding takes place throughout the year
Bittencourt, E., C. Vera Y Conde, C. Rocha, H. Bergallo. 1999. Activity Patterns of Small Mammals in an Atlantic Forest Area of Southwest Brazil. Ciencia e Cultura, 51/2: 126-132.
D'Andrea, P., R. Gentile, R. Cerqueira, C. Grelle, C. Horta. 1999. Ecology of SMall Mammals in a Brazilian Rural Area. Revista Brasileira de Zoologia, 16/3: 611-620.
Fernandez, F., S. Freitas, R. Cerqueira. 1999. Density Dependence in Within-Habitat Spatial Distribution: Contrasting Patterns for a Rodent and a Marsupial in Southeastern Brazil. Ciencia e Cultura, 49/1-2: 127-129.
Gentile, R. 2000. Population Dynamics and Reproduction of Marsupials and Rodents in a Brazilian Rural Study: A Five Year Study. Studies on Neotropical Fauna and Environment, 35: 1-9.
Gentile, R., R. Cerqueira. 1995. Movement Patterns of Five Species of Small Mammals in Brazilian Restinga. Journal of Tropical Ecology, 14/4: 671-677.
Nowak, J. 1999. Walker's Mammals of the World. Baltimore: Johns Hopkins University Press.
Patton, J., F. Smith. 2001. Diversification in the Genus Akodon(Rodentia: Sigmodontiane) in Southeastern South America: Mitochondrial DNA Sequence Analysis. Journal of Mammalogy, 82: 92-101.
Redford, K., J. Eisenberg . 1992. Mammals of the Neotropics. Chicago: University of Chicago Press. | <urn:uuid:74622263-04ef-406d-bd81-b0cf2eca1022> | CC-MAIN-2015-35 | http://animaldiversity.org/accounts/Akodon_cursor/ | s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440644064951.43/warc/CC-MAIN-20150827025424-00164-ip-10-171-96-226.ec2.internal.warc.gz | en | 0.902233 | 2,224 | 3.0625 | 3 |
Riparian areas are critical ecosystems in the semi-arid landscape of the Colorado Plateau, yet in the last few decades many have been seriously degraded and others entirely lost due to human activities and land use. Overall, a 90% loss of presettlement riparian ecosystems has occurred in Arizona and New Mexico.
The degradation of riparian communities on the Colorado Plateau began in the early 19th century with the near extirpation of the region's beaver population by fur trappers. Beavers play an important role in creating and maintaining riparian areas by cutting trees and building dams. Water retained in beaver ponds during periods of low flow supports native fish populations and provides drinking water essential not only for mammals but also for some species of birds and bats. The trapping of alluvial sediments by beaver ponds provides opportunities for new plant growth.
By contrast, human impoundment or diversion of free-flowing water through dams, irrigation withdrawals, and channelization has been a major factor in the degradation of the natural functions of riparian areas. Without natural hydrologic regimes, water tables have fallen and surface sediments have dried out. Cottonwoods are particularly susceptible to water stress and may decline as groundwater becomes less available.
With less flooding, there is less channel shifting and thus less of the recently inundated sediment that cottonwood and willow seedlings require to become established. The diverse mosaics of riparian vegetation created by shifting river channels have been reduced or eliminated in many riparian systems on the Colorado Plateau. Where existing riparian forests have aged without replacement, they have become monocultures of maturing trees that eventually senesce, die, or are lost to fire.
Overgrazing by domestic livestock has been a major factor in the alteration and degradation of riparian areas. Heavy grazing, whether by big game or livestock, degrades stabilizing vegetation, erodes banks, and causes declines in water storage capacity and quality. In some cases gullying or arroyo cutting occurs; in others, stream channels become wider and shallower, water temperatures rise, and habitat quality for fish and aquatic invertebrates declines.
Riparian systems at lower elevations on the Colorado Plateau are now increasingly characterized by a reduction of plant species diversity and density. Overgrazing of palatable native species such as willows and cottonwood saplings, combined with the introduction of less palatable nonindigenous species such as Russian olive and tamarisk, has also contributed to changes in overall plant community structure. Tamarisk, introduced to the Colorado Plateau in the 1950s, has been particularly devastating, outcompeting cottonwood and willow, and dominating lower elevation riparian systems throughout the region. Its establishment introduces a regime of episodic fire, which researchers believe is uncommon in most native riparian woodlands.
Road building, logging, construction and other development has caused additional degradation of riparian areas, especially through bank erosion. Additional nutrients and fertilizers added to stream systems by agricultural runoff and sewage treatment facilities have resulted in reductions in water quality and increased eutrophication.
As natural riparian zones are being lost, so are their associated faunas. The highest percentages of threatened fish in the United States are found in the Colorado Plateau states and California. Although only 36 native freshwater fish species formerly lived in the Colorado River basin, the largest watershed in the Southwest, species-level endemism is high at 64%. The number of nonindigenous fishes introduced into the Colorado River basin is 72, twice the number of native fishes. Four of the five fishes that evolved in large rivers in the Colorado River basin are listed as endangered, and the fifth is listed as a sensitive species. The few remaining fish stocks native to smaller riparian systems on the Colorado Plateau survive only in those areas with intact riparian habitats, often in remote upper-elevation watersheds.
The responses of riparian bird communities to changes due to grazing have been particularly well studied. At some sites 40% of riparian bird species were negatively affected by livestock grazing, and a negative correlation between recent cattle grazing and abundance of several riparian birds was found.
Almand, J. and Krohn, W. 1979. The position of the Bureau of Land Management on the protection and management of riparian ecosystems. Pp. 259-361 In: Johnson, R. and McCormick, F., editors. Strategies for protection and management of floodplain wetlands and other riparian ecosystems: Proceedings of the symposium, 1113 December 1978. General Technical Report WO-12. U.S. Forest Service, Washington, D.C.
Ames, C. R. 1977. Wildlife conflicts in riparian management: Grazing. Pp. 49-58 In: Johnson, R. R. and Jones, D. A., editors. The importance, preservation and management of the riparian habitat. General Technical Report RM-43. USDA Forest Service, Rocky Mountain Forest and Range Experiment Station, Fort Collins, CO.
Anderson, B. W. and Ohmart, R. D. 1979. Riparian revegetation: An approach to mitigating for a disappearing habitat in the Southwest. Pp. 481-487 In: Swanson, G. A., editor. The mitigation symposium: A national workshop on mitigating losses of fish and wildlife habitats. General Technical Report RM-65. USFS, Rocky Mountain Forest and Range Experiment Station, Fort Collins.
Armour, C., Duff, D. and Elmore, W. 1991. The effects of livestock grazing on riparian and stream ecosystems. Fisheries 16: 7-11.
Barth, R. C. and McCullough, E. J. 1988. Livestock grazing impacts on riparian areas within Capitol Reef National Park. Capitol Reef National Park, Torrey, UT.
Benenati, P. L., Shannon, J. P. and Blinn, D. W. 1998. Desiccation and recolonization of phytobenthos in a regulated desert river: Colorado River at Lees Ferry, Arizona, USA. Regulated Rivers 14: 519.
Brock, J. H. 1994. Tamarix spp. (salt cedar), an invasive exotic woody plant in arid and semi-arid riparian habitats of western USA. Pp. 27-44 In: de Waal, L. C., Child, L. E., Wade, P. M. and Brock, J. H., editors. Ecology and management of invasive riverside plants. John Wiley and Sons Ltd, Chichester, NY.
Brookshire, D. S., McKee, M. and Schmidt, C. 1996. Endangered species in riparian systems of the American west. Pp. 238-241 In: Shaw, D. W. and Finch, D. M., editors. Desired future conditions for southwestern riparian ecosystems: bringing interests and concerns together. General Technical Report RM-272. USDA Forest Service, Rocky Mountain Forest and Range Experiment Station, Fort Collins, CO.
Carothers, S. W. and Dolan, R. 1982. Dam changes on the Colorado River. Natural History 91: 74-84.
Carothers, S. W. and Brown, B. T. 1991. The Colorado River through Grand Canyon: Natural history and human change. University of Arizona Press, Tucson, 235 pp.
Christensen, E. M. 1962. The rate of naturalization of Tamarix in Utah. American Midland Naturalist 68: 51-57.
Cooper, D. J., Merritt, D. M., Anderson, D. C. and Chimner, R. A. 1999. Factors controlling the establishment of Fremont cottonwood seedlings on the Upper Green River, USA. Regulated Rivers: Research and Management 15: 419-440.
Deacon, J. E. 1988. The endangered woundfin and water management in the Virgin River, Utah, Arizona, Nevada. Fisheries 13: 18-24.
DeBano, L. F. and Schmidt, L. J. 1989. Improving Southwestern riparian areas through watershed management. Report RM-182. USDA Forest Service, Rocky Mountain Forest and Range Experiment Station, Fort Collins, CO, 33 pp.
Denevan, W. M. 1967. Livestock numbers in nineteenth-century New Mexico and the problem of gullying in the Southwest. Annals of the Association of American Geography 57: 691-703.
Detenbeck, N. E., DeVore, P. W., Niemi, G. J. and Lima, A. 1992. Recovery of temperate-stream fish communities from disturbance: A review of case studies and synthesis of theory. Environmental Management 16: 33-53.
Douglas, E. 1954. Phreatophytes: Water hogs of the west. Land Improvement 1: 8-12.
Duce, J. T. 1918. The effect of cattle on the erosion of canyon bottoms. Science 47: 450-452.
Fleischner, T. L. 1994. Ecological costs of livestock grazing in western North America. Conservation Biology 8: 629-644.
General Accounting Office, 1988. Public rangelands: some riparian areas restored but widespread improvement will be slow. General Accounting Office, Washington, D.C., 85 pp.
Gilles, C., Bravo, L. and Watahomigie, D. 1991. Uranium mining at the Grand Canyon: What costs to water, air, and indigenous people? The Workbook 16: 2-17.
Graf, W. L. 1986. Fluvial erosion and federal public policy in the Navajo Nation. Physical Geography 7: 97-115.
Harris, J. H. and Silveira, R. 1999. Large-scale assessments of river health using an index of biotic integrity with low diversity fish communities. Freshwater Biology 41: 235-252.
Hunter, W. C., Ohmart, R. D. and Anderson, B. W. 1988. Use of exotic saltcedar (Tamarix chinensis) by birds in arid riparian systems. The Condor 90: 113-123.
Johnson, J. E. 1987. Reintroducing the natives: Colorado squawfish and woundfin. Pp. 118-124 In: Proceedings of the Desert Fishes Council. XVI-XVIII. Desert Fishes Council, Bishop, CA.
Johnson, R. R. 1991. Historic changes in vegetation along the Colorado River in the Grand Canyon. Pp. 178-206 In: Marzolf, G. R., editor. Colorado River ecology and dam management. National Academy Press, Washington, D.C.
Karp, C. A. and Tyus, H. M. 1990. Humpback chub (Gila cypha) in the Yampa and Green Rivers, Dinosaur National Monument, with observations on roundtail chub (G. robusta) and other sympatric fishes. Great Basin Naturalist 50: 257-264.
Kauffman, J. B., Beschta, R. L., Otting, N. and Lytjen, D. 1997. An ecological perspective of riparian and stream restoration in the western United States. Fisheries 22: 12-24.
Kay, C. E. 1994. The impact of native ungulates and beaver on riparian communities in the Intermountain West. Natural Resources and Environmental Issues 1: 23-44.
Knopf, F. L. 1989. Riparian wildlife habitats: more, worth less, and under invasion. Pp. 20-22 In: Mutz, K., Cooper, D., Scott, M. and Miller, L., editors. Restoration, creation, and management of wetland and riparian ecosystems in the American West. Society of Wetland Scientists, Rocky Mountain Chapter, Boulder, CO.
Krueper, D. J. 1993. Effects of land use practices on western riparian ecosystems. Pp. 321-330 In: Finch, D. M. and Stangel, P. W., editors. Status and management of Neotropical migratory birds. General Technical Report RM-229. U.S. Forest Service.
Leiner, S. 1996. The habitat quality index applied to New Mexico streams. Hydrobiologia 319: 237.
Modde, T., Scholz, A. T., Williamson, J. H., Haines, G. B., Burdick, B. D. and Pfeifer, F. K. 1995. An augmentation plan for razorback sucker in the Upper Colorado River Basin. American Fisheries Society Symposium 15: 102-111.
Neary, D. G. and Medina, A. L. 1995. Geomorphic responses of a montane riparian habitat to interactions of ungulates, vegetation, and hydrology. In: Shaw, D. W. and Finch, D. M., editors. Desired future conditions for Southwestern riparian ecosystems: Bringing interests and concerns together. General Technical Report RM272. USDA Forest Service, Rocky Mountain Forest and Range Experiment Station, Fort Collins, CO.
Ohmart, R. D. 1994. The effects of human-induced changes on the avifauna of western riparian habitats. Studies in Avian Biology 15: 273-285.
Rieger, J. 1992. Western riparian and wetland ecosystems. Restoration & Management Notes 10: 52-55.
Rinne, J. N. 1996. Desired future condition: Fish habitat in southwestern riparian-stream habitats. Pp. 336-345 In: Shaw, D. W. and Finch, E. M., editors. Desired future conditions for southwestern riparian ecosystems: Bringing interests and concerns together. General Technical Report RM-272. U.S. Forest Service Rocky Mountain Forest and Range Experiment Station, Fort Collins, CO.
Rinne, J. N. 1999. Fish and grazing relationships: The facts and some pleas. Fisheries 24: 12-21.
Schultz, T. T. and Leininger, W. C. 1990. Differences in riparian vegetation structure between grazed areas and exclosures. Journal of Range Management 43: 295-299.
Schultz, T. T. and Leininger, W. C. 1991. Nongame wildlife communities in grazed and ungrazed riparian sites. Great Basin Naturalist 51: 286-292.
Sogge, M. K. 1995. Southwestern willow flycatcher surveys along the San Juan River, 1994-1995: Final report to the Bureau of Land Management, San Juan Resource Area. National Biological Service, Colorado Plateau Research Station, Northern Arizona University, Flagstaff, AZ, 27 pp.
Sogge, M. K., Marshall, R. M., Sferra, S. J. and Tibbitts, T. J. 1997. A southwestern willow flycatcher natural history summary and survey protocol. Technical Report NPS/NAUCPRS/NRTR-97/12. National Park Service, Washington, D.C.
Sogge, M. K., Tibbitts, T. J. and Petterson, J. R. 1997. Status and breeding ecology of the southwestern willow flycatcher in the Grand Canyon. Western Birds 28: 142.
Stevens, L. E. and Waring, G. L. 1988. Effects of post-dam flooding on riparian substrates, vegetation, and invertebrate populations in the Colorado River corridor in Grand Canyon. Glen Canyon Environmental Studies Report No. 19. Bureau of Reclamation, Flagstaff, AZ.
Stevens, L. E., Schmidt, J. C., Ayers, T. J. and Brown, B. T. 1995. Flow regulation, geomorphology, and Colorado River marsh development in the Grand Canyon, Arizona. Ecological Applications 5: 1025-1039.
Stromberg, J. C. and Chew, M. K. 1997. Herbaceous exotics in Arizona's riparian ecosystems. Desert Plants 13: 11.
Swenson, E. A. and Mullins, C. L. 1985. Revegetating riparian trees in southwestern floodplains. Pp. 135-138 In: Johnson, R. R., Ziebell, C. D., Patton, D. R., Ffolliott, P. F. and Hamre, R. H., editors. Riparian ecosystems and their management: reconciling conflicting uses. General Technical Report RM-120. USDA Forest Service, Rocky Mountain Forest and Range Experiment Station, Fort Collins, CO.
Taylor, D. M. 1986. Effects of cattle grazing on passerine birds nesting in riparian habitat. Journal of Range Management 39: 254-258.
Tremble, M. 1993. The Little Colorado River. Pp. 283-289 In: Tellman, B., Cortner, H. J., Wallace, M. G., DeBano, L. F. and Hamre, R. H., editors. Riparian management: common threads and shared interests. USDA Forest Service, Rocky Mountain Forest and Range Experiment Station, Fort Collins, CO. | <urn:uuid:90d86acd-cd7f-449a-ad6b-f98ee3af9c85> | CC-MAIN-2015-35 | http://www.cpluhna.nau.edu/Biota/riparian_degradation.htm | s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440644063825.9/warc/CC-MAIN-20150827025423-00163-ip-10-171-96-226.ec2.internal.warc.gz | en | 0.829095 | 3,635 | 4.21875 | 4 |
What does a patent protect?
A patent refers to a new and non-obvious invention, which is suitable to be applied in trade or industry or agriculture. An invention can be a new product, process, device or the like, or an improvement on an existing product, process, device or the like.
Who is protected by a patent?
The applicant for a patent must be the rightful owner of the invention; eg he must be the inventor, the assignee of the inventor or the inventor’s employer if the invention was made in the course and scope of employment.
How does a patent protect you?
A patent is a monopoly right conferred by the State for a limited period of time. Hence most commonly a patent is a national right limited to the nation concerned, but there are a few regional patents, one for Europe, one for “English-speaking” African countries and one for “French-speaking” African countries. A patent for “Europe” now extends to numerous East European and Mediterranean countries as well.
A patent gives you the right to exclude others from making, using, exercising or disposing of your invention, so that you can enjoy the whole profit and advantage accruing from the invention for the duration of the patent.
Are there any exceptions to patent protection?
Certain categories of inventions excluded from patent protection in most countries are the traditional copyright works or any aesthetic creations; a method, scheme or rule of doing business or playing a game; a scientific theory or mathematical method; the presentation of information; a computer program, and methods of medical treatment. Sometimes excluded categories can be indirectly protected by monopolising the technology necessary for them.
How long does a patent protect you?
A provisional application is available in South Africa and a few other countries, including UK and USA, and obtains provisional protection for a period of twelve months in all Paris Convention countries (about 154); this can be followed by a complete application in selected countries (before the expiry of the twelve-month period), which provides protection for generally 20 years from the date of filing the complete application, renewable usually from the 4th year, in each country. However, increasingly South Africans are choosing rather to obtain foreign patent rights by entering the PCT (Patent Co-operation Treaty) system, for which see below.
What will patent protection cost you?
If you file the provisional patent application in South Africa yourself, a R60 revenue stamp will suffice. If the provisional patent is filed by a professional (which is preferable, to avoid subsequent invalidity or complications arising from inadequate description or too narrow a definition of the invention), the fee might be anything between about R5 000 and R7 000 or more, depending on the complexity of the invention.
The complete patent filing in South Africa, which must be done by a patent attorney or patent agent, will cost between R7 000 and R10 000 or more, including official fees. The renewal fee, payable annually from the 3rd year onwards, is on a sliding scale from R130 to R206 for official fees and R725 for a patent attorney’s fee. As in the case of design registration, the patent attorney’s fee will include professional advice, preparation of specification drawings and formal papers and the implementation of a system to remind you and making the payments.
What are the registration requirements for a patent?
You, the inventor, or your patent attorney can apply to the Registrar of Patents at CIPRO for a patent. Application for a patent can be made by way of a provisional application, to obtain provisional protection, followed by a complete application (this must be done by a patent agent or a patent attorney).
Alternatively, a complete patent application may be filed in the first instance. (See address list,at the back of this publication.)
What countries are covered by patent protection?
A South African patent has effect only in the territorial area of South Africa. If you require protection in other countries, you must file separate applications in such countries. An exception is the regional patent for Europe and regional patents in Africa.
Your South African patent application can serve as a basis for claiming so-called convention priority in respect of foreign applications in the majority of other countries. You must file such foreign applications within twelve months of your first South African application.
An important alternative, used increasingly since South Africa joined the PCT in May 1999, is to extend the period of provisional protection from 12 months to 30 months, internationally, by filing a PCT (Patent Co-operation Treaty) patent application. Rights are provisionally reserved in 135 countries and the benefit of an international search and preliminary examination is included. The advantages include deferring the costs of filing in foreign countries, an early reliable and impartial indication of validity (at the 16th –17th month), with opportunity to amend if it is necessary, which can form a basis for confidence by investors. At the end of the PCT procedure the patent application will be in a better condition to go through the examination successfully in each country in which you finally patent.
Tips and comments on patent protection
The term “new” means that the invention or information about the invention should not have been available to the public anywhere in the world. Accordingly, it is of vital importance that your patent application be filed before your invention is made known to members of the public.
You should neither discuss your invention nor show it to anyone who is not legally bound, by contract or by the nature of your relationship with them, to keep it confidential. Disclosure of your design or invention to a professional advisor, eg a patent attorney, is not a premature disclosure because he owes you a legal duty of confidentiality, and what you tell him will be privileged in law.
The filing of a provisional application in the first instance has several advantages, including that it can be filed relatively quickly (since only a provisional specification is required to be filed), and it secures a filing date yet affords a twelve-month period during which the novelty, technical merit and commercial prospects can be further investigated, before the application is completed.
This twelve-month period may be far too short for developing the invention into a product. Hence the value of the PCT procedure mentioned above, which gives 30 months. In addition, it may be advantageous not to file a patent immediately, provided that the novelty of the invention will not be adversely affected by the delay. Limited protection during the product’s development and testing phase can be achieved in other ways, eg a confidentiality or non-disclosure agreement, before a patent application is filed (see Licences, Assignments and other Agreements, page 16).
Once the patent is filed, the clock starts ticking and you have limited time before deciding whether or not to file a complete patent and, if so, in which countries.
If your provisional patent protection time has expired, it may be possible to re-file your application. This, however, carries the disadvantage that you lose your original file date, which can have serious consequences. Do not take this decision lightly and preferably seek professional advice.
In some (rare) instances, it may be better not to patent your discovery or invention but to continue to keep it secret, eg as in the case of the formula for Coca-Cola. This enables you to have monopoly rights well beyond the 20 years of cover granted by the patent! This option is viable only if it will not be possible for outsiders to analyse and determine your invention, or for insiders to reveal it.
The provisional patent application secures the date and a right to the invention but does not guarantee that your claim is legitimate. No search is done by the Registrar of Patents in South Africa to ensure that your idea is unique and there could well be another similar invention, either local or overseas. The onus is entirely on you, the applicant for the patent, to look into this question. This means that patents may be granted in South Africa for inventions that are not new and that may even have been patented. It is worth the additional time and costs to conduct an international search to ensure the uniqueness of your invention even before filing the provisional – also this could save you from re-inventing the wheel. South Africans now have good access via Internet, to international patent databases, see the addresses under the heading “Internet Searching Addresses”.
In most overseas countries, such as the USA and Europe, intensive worldwide patent searches are carried out before a patent is issued, thus ensuring that the invention is truly unique. This is where the PCT procedure is helpful, as it includes the preliminary search, which is good insurance against adverse results later.
Five years after the date of application for your patent in South Africa, any person can obtain details of search results in any foreign patent applications you may have filed. Publication of granted foreign patents usually include details of prior art that was considered before the grant to you.
Searches in the markets where you intend to sell, are necessary if you want to be sure that in manufacturing you do not infringe someone else’s patent. It is quite possible to find that a product is protected in some countries and not in others.
Inventions and supporting literature should all be marked with “patent applied for” or “patent”, with the relevant patent number, to ensure that infringement damages can be recovered. Falsely claiming that patent rights exist is a criminal offence.
Basic procedure for patent registration
Step 1: It is advisable to do a ‘novelty search’ at the search facility at CIPC’s Paper Based Disclosure Centre to make sure that the invention is truly new and will not infringe on someone else’s existing patent rights. It is a manual search. A patent attorney may also be appointed to do this on the applicant’s behalf. It is also advisable to do searches on other countries’ patent websites to ensure as far possible, that the invention is truly new. If no similar invention is found, the applicant can take the next step, which is to register the invention.
Step 2: The applicant can apply for provisional patent rights by completing a set of application forms, which consist of form P1 (dupl.), P2 (dupl.), P3 and P6. The registration fee is R60 plus any professional fees incurred. A detailed, broad description of the invention must be prepared on separate A4 size paper that must be attached to the application forms. Neat drawings in black ink (on A4 paper) may also be included to help with the description of the invention. The wording of the description is very important and the description may not be altered or added to after it has been lodged at CIPRO. The protection of the patent will depend on the wording used in the description.
The provisional patent will be valid for a period of 12 months. In these 12 months, the patent may be manufactured and marketed. Any time during these 12 months, the final patent may be filed when the applicant is confident that the invention is successful.
A private individual or a company may apply for a patent. The inventor/s can only be indicated as natural persons and not a company. If a company is the applicant or if the applicants differ from the inventor/s, an assignment of invention document from the inventor to the applicant must be filed with the application. It is more advantageous to apply in a private capacity – especially if it is planned to file the patent internationally (Patent Cooperation Treaty System (PCT).
A provisional patent may be extended up to three months, but a priority claim in a foreign countrymust be within 12 months The cost for extension is R50 per month. Alternatively, the applicant may apply for post-dating, but the original protection date will be forfeited (the application date is “shifted” on as if the application was filed on a later date). One may post-date up to six months. The cost is R50.
Step 3: The final (complete) patent can be filed in South Africa only or with the PCT international filing system.
The SA Patent Act requires the signature of a registered patent attorney on the specification.
Through the PCT system, an applicant may apply him/herself by doing an international reservation in member countries of PCT. If an applicant applies in a private capacity, the applicant will receive a huge discount on the reservation fee. The reservation fee is roughly ± R6000 (including the discount). Later when it reaches the national phase (where the application is filed in the different countries), fees are paid as set by each country, around R 35 000 average..
After the application has been accepted and granted (advertised in the Patent Journal), the patent needs to be renewed annually from the third year to keep it in force. The renewal may be paid by the owner of the patent him/herself or a patent attorney firm. The renewals may be paid in advance. A patent’s life span is 20 years. The renewal fees work on a gliding scale and may be viewed on the CIPC web site www.cipc.co.za
*All fees and costs are estimates and subject to change without notice. | <urn:uuid:4a6eb73f-a55e-4f76-9d5a-9c1fe28c2394> | CC-MAIN-2015-35 | http://trademarksearchblog.com/2013/06/14/you-and-your-patent-in-depth-overview/ | s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440645257890.57/warc/CC-MAIN-20150827031417-00161-ip-10-171-96-226.ec2.internal.warc.gz | en | 0.940658 | 2,719 | 2.921875 | 3 |
Author: Yvette C. Terrie, BSPharm, RPh
Allergic rhinitis is common, and its prevalence is on the rise. Pharmacists can counsel patients about their symptoms and recommend OTC products to control them.
Allergic rhinitis is estimated to affect approximately 60 million individuals in the United States, and its prevalence is increasing.1
This condition affects an estimated 10% to 30% of all adults and as many as 40% of children.1,2
Allergic rhinitis symptoms typically manifest after age 2. Allergic rhinitis is very common among those in the pediatric population, as well as in individuals aged 18 to 64.2
Allergic rhinitis can be classified as seasonal or perennial (see Table 1).2,3
Some individuals may experience both types of rhinitis, with perennial symptoms getting worse during specific pollen seasons.3
In addition, there are also nonallergic causes for rhinitis that can be related to hormones (eg, pregnancy, puberty, thyroid conditions), structural defects (eg, septal deviation, adenoid hypertrophy), lesions (eg, nasal polyps and neoplasms), and the use of certain pharmacologic agents, such as beta-blockers, oral contraceptives, clonidine, angiotensin-converting enzyme inhibitors, aspirin and other nonsteroidal anti-inflammatory drugs, or overuse of topical decongestants.3-6
Allergic rhinitis can be associated with complications such as otitis media, sinusitis, recurring sore throats, cough, headaches, changes in sleep patterns, sleep apnea, depression, fatigue, anxiety, irritability, poor school performance, and impaired cognitive function.3-5
In addition, some children may develop delayed speech, altered facial growth, and dental problems.3-5
Allergic rhinitis is characterized by repetitive and predictable symptoms that include episodes of repetitive sneezing, rhinorrhea, postnasal drip, nasal congestion, loss of smell, headaches, earache, tearing, red, itchy eyes, eye swelling, fatigue, drowsiness, and malaise.2,4
Ideally, the optimal treatment for allergic rhinitis is to avoid the offending allergens; however, that is not always a possible or practical mode of therapy. While allergic rhinitis cannot be cured, the goals of therapy include a reduction in symptoms, improvement in the patient’s ability to function in his or her daily routine, and overall improved well-being.2
Typically allergic rhinitis is treated in 3 steps: environmental control measures and allergen avoidance, pharmacologic therapy, and immunotherapy.2,4
Many patients may have to try several different treatment options before finding one that works for them. Table 2 offers counseling tips for patients with allergic rhinitis.
Antihistamines and Decongestants
Currently, there are a plethora of OTC products available for the symptomatic relief and management of allergic rhinitis, including oral and ocular antihistamines; oral, nasal, and ocular decongestants; and topical mast cell stabilizers.2
These products are available as single-entity or combination products and in sustained release formulations. Selection of therapy should be individualized and based on the patient’s medical and medication history, specific symptoms and their severity, cost of the medications, and frequency of dosing intervals.
Antihistamines are considered to be the top choice for providing symptomatic relief of allergic rhinitis and are indicated for the relief of itching, sneezing, and rhinorrhea symptoms (see Table 3). First-generation antihistamines (sedating antihistamines) are associated with drowsiness or sedation, impaired mental alertness, and anticholinergic effects.2
Second-generation OTC antihistamines (nonsedating antihistamines) currently available include loratadine and cetirizine, and usually do not cause significant drowsiness.2
Patients should start taking antihistamines and mast cell stabilizers at least a week before symptoms typically appear or as soon as possible.2
The available formulations of OTC antihistamines include chewable tablets; oral disintegrating tablets or medicated thin strips; immediate or sustained release capsules, tablets or caplets; and liquid formulas, as well as alcohol-free, sugar-free, and dye-free products.
Since nasal congestion is another common complaint for many allergy sufferers, the use of a systemic or shortterm topical nasal decongestant may be necessary for some individuals.2
Decongestants are indicated for the temporary relief of nasal and Eustachian tube congestion and cough associated with postnasal drip.2
Common adverse effects associated with the use of oral decongestants include insomnia, nervousness, and tachycardia. The use of decongestants may also exacerbate medical conditions that are sensitive to adrenergic stimulation (eg, hypertension, diabetes, coronary artery disease, prostatic hypertrophy, and elevated intraocular pressure).2
Patients should also be reminded about the potential of rhinitis medicamentosa (rebound congestion) when using topical decongestants for more than 3 to 5 days.2
Many products on the market contain a combination of an antihistamine and a decongestant. Patients should be advised to only use combination products when warranted to avoid unnecessary drug use. Since antihistamines and decongestants interact with several medications and are contraindicated in various patient populations, pharmacists are key in identifying potential drug interactions or contraindications.
Another option for allergy suffers is the nasal spray cromolyn sodium, which is indicated for preventing and treating the symptoms associated with allergic rhinitis. It is approved for those age 5 and older. Patients should be instructed to administer 1 spray in each nostril 3 to 6 times daily, and treatment should be initiated at least a week before seasonal symptoms occur. The most common adverse effects include a burning and stinging sensation in the nasal area.2,7
There are no known drug interactions associated with intranasal cromolyn sodium.
Patients with allergic rhinitis who also suffer from watery and itchy eyes may benefit from using an ocular antihistamine product. The available ophthalmic OTC antihistamines include pheniramine maleate and antazoline phosphate. These antihistamine products are available in combination with the decongestant naphazoline. The most common adverse effects associated with the use of ophthalmic antihistamines include burning, stinging, and discomfort upon instillation.5
In 2006, the FDA approved ketotifen fumarate 0.025% ophthalmic solution from prescription to OTC status. The newest ketotifen ophthalmic products on the market are Claritin Eye Drops (Schering-Plough) and Zyrtec Eye Drops (McNeil-PPC, Inc). Ketotifen is the only OTC antihistamine ophthalmic product that relieves ocular itching without the use of a decongestant. Ketotifen is a benzocycloheptathiophene derivative and is classified as a noncompetitive H1 receptor antagonist and mast cell stabilizer that inhibits the release of mediators from cells involved in hypersensitivity reactions. 5,8,9
This agent is approved for use in individuals 3 years of age and older and is classified as pregnancy category C.8,9
It is indicated for the temporary relief of itchy eyes due to exposure to ragweed, pollen, grass, animal hair, and dander. The recommended dosage is 1 drop to the affected eye(s) every 8 to 12 hours, but no more than twice daily.5,8,9
The advantages of ketotifen drops include relief within minutes and twice-a-day dosing—it is considered very safe because of no concerns of vasoconstrictor overuse.5,8,9
Common adverse reactions include headache, dry eyes, and rhinitis.5,8,9
Ketotifen is not indicated for treatment of contact lens–related inflammation. Patients who wear contacts should be instructed to wait at least 10 minutes before inserting their lenses after instillation of ketotifen.8,9
Prior to recommending any OTC products for allergic rhinitis, pharmacists should always determine if selftreatment is appropriate and refer individuals to seek further medical evaluation when warranted. The patient’s medication profile and medication history should be screened for potential drug interactions and contraindications, including allergy sensitivities. Patients should always be advised to adhere to the manufacturer’s directions and be aware of potential adverse effects. Pharmacists can also offer suggestions of nonpharmacologic measures, such as the use of nasal saline solutions or the drug-free Breathe Right Nasal Strips (GlaxoSmithKline) and other measures, such as avoidance of allergens when possible; lowering the humidity level in the home to reduce the incidence of mold; keeping car and home windows closed, especially when pollen and mold levels are high; and checking pollen and mold counts in the local area.2
Since peak pollen production occurs between 5 am and 10 am, pharmacists can remind patients to plan outside activities at other times of the day when feasible.10,11
Ms. Terrie is a clinical pharmacy writer based in Haymarket, Virginia.
1. Allergy Statistics America Academy of Allergy Asthma and Immunology website www.aaaai.org/media/statistics/allergy-statistics.asp#allergicrhinitis. Accessed January 24, 2010.
2. Scolaro K. Disorders Related to Colds and Allergy. In Berardi R, Newton G, McDermott JH, et al, eds. Handbook of Nonprescription Drugs 16th ed. Washington, DC: American Pharmacists Association; 2009. 189-200
3. Rhinitis .The American Academy of Allergy, Asthma and Immunology website http://www.acaai.org/public/advice/rhin.htm. Accessed January 24, 2010.
4. Sheikh J and Najib U. Rhinitis, Allergic. eMedicine website. http://emedicine.medscape.com/article/134825-overview. Accessed January 30, 2010.
5. Fiscella R and Jensen, M. Ophthalmic Disorders. In Berardi R, Newton G, McDermott JH, et al, eds. Handbook of Nonprescription Drugs 16th ed. Washington, DC: American Pharmacists Association; 2009. 526-528
6. Spring and Allergic Rhinitis. The American Academy of Allergy, Asthma and Immunology website. http://www.aaaai.org/patients/topicofthemonth/0307/. Accessed January 24, 2010
7. NasalCrom Product Information. Blacksmith Brands website. http://nasalcrom.com. Accessed January 24, 2010.
8. Zaditor Product Information. Novartis website. http://www.zaditor.com/info/about/zaditor-eye-drops.jsp. Accessed January 24, 2010.
9. Alaway Product Information. Bausch and Lomb website. http://www.alaway.com/product-information. Accessed January 25, 2010.
10. Outdoor Allergy Tips, Schering Plough Claritin Healthcare Products website, www.claritin.com/claritin/allergies/spring. Accessed January 25, 2010.
11. Allergy Information. Wyeth Consumer website www.alavert.com/allergy_info.asp. Accessed January 25, 2010. | <urn:uuid:b79def17-dd68-4540-a313-55e58ee78128> | CC-MAIN-2015-35 | http://www.pharmacytimes.com/print.php?url=/publications/issue/2010/April2010/AllergyProducts-0410 | s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440644066017.21/warc/CC-MAIN-20150827025426-00215-ip-10-171-96-226.ec2.internal.warc.gz | en | 0.8909 | 2,475 | 3.09375 | 3 |
By Randena Hulstrand
The UNT-Chile Field Station, located in the Cape Horn Biosphere Reserve at the southern tip of South America, is home to interdisciplinary research on sub-Antarctic biocultural conservation. The location permits accessibility to pristine wilderness areas and archeological sites, and the station collaborates with area schools and various government services and social organizations.
The UNT-Chile program has been growing under the coordination of Ricardo Rozzi, an associate professor of philosophy who actively collaborates with Chilean partners, as well as Robert Frodeman, professor and former chair of philosophy; Eugene Hargrove, director of the Center for Environmental Philosophy; and James Kennedy, professor of biological sciences and director of the Elm Fork Education Center and Natural Heritage Museum.
Rozzi says the program recognizes that international partnerships enhance research, as different cultural experiences and fields of expertise are essential for translating scientific knowledge and integrating education into the community.
"This sustainable biocultural conservation initiative cannot be successful with science alone; to confront global change, science needs to be involved in society at local and global scales," says Rozzi, who recently earned the 2008 Science and Practice of Ecology and Society Award from the online journal Ecology and Society.
The Yahgan, nomadic people living in the Cape Horn Archipelago at the southern tip of South America for the last 7,000 years, have long revered Omora. This green-backed firecrown hummingbird is a cosmological hero maintaining harmony between society and nature.
Ricardo Rozzi, associate professor in the Department of Philosophy and Religion Studies at the University of North Texas, also is bridging the divide between humans and other living things. With a team of scientists, philosophers, artists and other collaborators, he is integrating research disciplines and building relationships between the United States and Chile while helping establish UNT as a global leader in biocultural conservation studies.
Rozzi, a native Chilean, is the director of UNT's Chile Sub-Antarctic Biocultural Conservation Program and Field Station at the Omora Ethnobotanical Park in the UNESCO Cape Horn Biosphere Reserve. Located in one of the world's most pristine remaining wilderness areas, the reserve encompasses islands, fjords, glaciers, bogs and forests, and is home to sub-Antarctic wildlife and plants.
With a keen understanding of the diversity of disciplines needed to conserve both biological and cultural diversity, Rozzi, a philosopher and ecologist, is a natural collaborator.
"I saw a group of biologists on one slope and philosophers on another slope, and I wanted to bring the two together," he says.
In 2000, Rozzi led the effort to create the Omora Ethnobotanical Park on Navarino Island in the Cape Horn Archipelago. Five years later, his efforts helped secure the area's designation as the UNESCO Cape Horn Biosphere Reserve and set the stage for a cooperative agreement between UNT and the University of Magallanes, where Rozzi also is an associate researcher.
By 2005, he helped organize a consortium made up of the two universities, the Chilean Institute of Ecology and Biodiversity and the Omora Foundation, a nonprofit organization associated with the park. Also included are UNT's Center for Environmental Philosophy and the Omora Sub-Antarctic Research Alliance.
As a result of the work there, researchers and students at the UNT-Chile Field Station have incorporated environmental philosophy with biocultural conservation, including the traditions and philosophies of the indigenous Yahgan community and South American researchers.
Last year, the Institute of Ecology and Biodiversity, one of UNT's primary partners in the program, received a $15 million grant from the Comisión Nacional de Investigación Científica y Tecnológica (CONICYT), the Chilean equivalent of the National Science Foundation in the United States. The money is helping to fund the construction of new facilities at the field station and support the work of UNT researchers and students during the next 10 years.
The UNT-Chile Field Station provides an opportunity to study at the Omora park in Puerto Williams, the southernmost town in the world, with facilities under construction overlooking the Beagle Channel and the Cordillera Darwin mountain range. The station will house up to 15 students and faculty during courses and research expeditions. Plans include a library-classroom, computer area and laboratory for processing and storing plant, insect and other research samples.
Through a series of summer and winter courses titled "Tracing Darwin's Path," UNT undergraduate and graduate students from anthropology, journalism, biology, philosophy and art get hands-on experience with topics such as nature writing, ethnoecology, and biocultural and sub-Antarctic watershed conservation.
The field station provides opportunities for students and faculty to engage in field philosophy, studying the effects of real-world issues such as the loss of languages and biodiversity, damming of rivers, exotic invasive species and global warming, while forming solutions that can transfer to other areas of the world.
"Living in a global context, we can't just offer concepts; we need actual applications like the field station," says J. Baird Callicott, chair of UNT's Department of Philosophy and Religion Studies.
The station allows for studies that focus on the global challenges of biological diversity, such as the impacts of the introduction of North American beavers on watersheds and forested landscapes, or the introduced mink's predation on ground-nesting song birds. But research at the station also includes study of linguistic and cultural diversity as well as conservation of bird, plant and aquatic insect species.
"Rather than theorizing from afar, students and researchers can engage with the local flora and fauna, as well as the indigenous Yahgan people," Rozzi says. "Yahgan knowledge of the local environment is being lost as their language and ecological practices are replaced by global culture."
Recently, the Omora Foundation received $500,000 from the Chilean Office for Development to develop in partnership with UNT and UMAG the concept of "Tourism With a Hand-Lens." The innovative research project, involving several UNT philosophy, science and art faculty and students, will result in a series of ecotourism options.
Last year, researchers with the UNT-Chile program reported in Frontiers in Ecology, the leading ecological journal, that the Cape Horn region represents less than 0.01 percent of the Earth's land surface but is home to more than 5 percent of the world's bryophytes, or nonvascular plants like mosses. In the project's "Miniature Forests of Cape Horn," citizens and tourists are learning to appreciate the beauty and ecological value of the mosses, lichens and liverworts through guided tours.
Rozzi says the project, which is being used as a model for other research, includes not only scientific research and education of the public through guided ecological activities, but also conservation on site, such as in the building of a miniature forests garden for the tours.
"This project brings research and conservation together with biodiversity, transferring it into ecotourism activities and education; it's not abstract," Rozzi says.
As tourism is the fastest-growing industry in Chile, he says the advantage of developing specific ecotourism experiences is economic as well as ecological.
"Tourists spend money at hotels, and the guided tours help them understand this sub-Antarctic research and appreciate a floristic diversity that was previously overlooked, while keeping their footprints limited to smaller, concentrated areas," he says.
UNT Chilean doctoral student Tamara Contador, who earned her bachelor's degree in biology from UNT in 2006, is studying the fauna of these miniature forests, focusing on the ecology of freshwater insects in the Robalo River watershed, which provides drinking water for Puerto Williams. Working with James Kennedy, a regular instructor of the "Tracing Darwin's Path" courses, Contador will move to the field station for a year. Her dissertation is part of a larger plan to disclose the richness of sub-Antarctic freshwater insects and translate scientific findings into ecotourism and conservation activities.
For Alexandria Poole, a philosophy and environmental science graduate student, the field station is a practicum for theoretical and applied research projects. She is studying international biocultural conservation efforts and ecological education through ecotourism, educational programs and policy in Chile.
"I hope to help society re-engage the natural world in a way that will fortify our communities and culture, but also lessen the damage we are doing to the environment," Poole says.
As an interdisciplinary, international initiative, UNT's Chile Sub-Antarctic Biocultural Conservation Program will continue to build opportunities with UMAG — the southernmost university in the world — through the inauguration of a joint office for the program at the central campus in Punta Arenas, Chile. In collaboration with the Center for Environmental Philosophy, professors are producing bilingual editions of journals and books with plans for a future UNT-UMAG dual degree program including online courses, video conferencing and semester-long exchanges.
"Global issues don't stop at boundaries of a country," says Kennedy, professor of biological sciences and director of the Elm Fork Education Center and Natural Heritage Museum. "Besides the neat science we're doing, it's about the collaborations we're creating and the international experiences our students and the Chilean students are getting."
The UNT-Chile Field Station is making investments for the future, not only for researchers, local citizens and the environment, but for the students.
"Through UNT's innovations in field environmental science and philosophy, I expect our students to become leaders of the biocultural events around the world," Rozzi says. "My hope is that we not just integrate philosophy, art and biology, but we also contribute to conservation in high-latitude habitats threatened by global change.
"By 'changing the lenses' through which we view not only the problems but also the beauty of the landscapes, our future leaders — together with the local people, the government, the little plants — can make a difference."
The Future of Lighting
Researchers look to organic light-emitting diodes for energy-efficient, money-saving light.
By Sarah Bahari
Next Generation Learning
UNT leads transformation of large classes with new technology and active learning.
By Alyssa Aber
Calling Emergency Services
Computer scientists help overhaul 911 systems to keep pace with Internet phones.
By Sarah Bahari
Solutions By Design
Braille readers, Haitian farmers benefit from UNT communication design.
By Ellen Rossetti
Preserving Endangered Languages
Linguist's electronic archive will help a minority language of India live on.
By Nancy Kolsti
Student examining effects of radiation on stainless steel wins DOE support.
By Mellina Stucky
For more information about this research:
Emerging strengths, expanding expertise
Research funding, Discovery Park, new deans
Honors for research and creativity
Solutions and scholarship across disciplines
Experts in conducting, Latino political behavior, tinnitus, materials science
Projects in education, science, social science, art, business, music
Poetry, justice, Aztec history and more
A growing research agenda
Web page last updated or revised: February 2, 2009
Questions or comments about this web site? firstname.lastname@example.org
"University of North Texas," "UNT," "Discover the power of ideas" and their associated identity marks, as well as the eagle and talon graphic marks, are official trademarks of the University of North Texas; their use by others is legally restricted. If you have questions about using any of these marks, please contact the Division of University Relations, Communications and Marketing at (940) 565-2108 or e-mail email@example.com. | <urn:uuid:e4a25e81-08b6-460e-a786-0f129eeedb9d> | CC-MAIN-2015-35 | http://www.unt.edu/untresearch/2008-2009/subantarctic-conservation.htm | s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440644062760.2/warc/CC-MAIN-20150827025422-00224-ip-10-171-96-226.ec2.internal.warc.gz | en | 0.933605 | 2,470 | 2.75 | 3 |
Scientific Name: Callithrix kuhlii
Species Authority: Coimbra-Filho, 1985
Taxonomic Notes: The validity and authorship of the name Callithrix kuhlii have been the subject of some debate. Hershkovitz (1975, p.142) was the first to indicate that Wied-Neuwied (1826) had referred to the marmoset of south-east Bahia as "Hapale penicillata Kuhlii" [sic]. However, Hershkovitz (1975, 1977) argued at length that kuhlii was not a valid taxon, being merely an intergrade between C. j. penicillata and C. j. geoffroyi. Vivo (1991, pp.80-81), on the other hand, argued that Wied-Neuwied (1826) had not intentionally coined this name, but had merely, and incorrectly, ascribed the authorship of the name penicillata to Kuhl. The first person to use the name kuhlii intentionally for the marmosets of south-east Bahia was Hershkovitz (1975), but his argument that it was not a valid taxonomic entity disqualifies him from being attributed authorship, which is therefore given to Coimbra-Filho (1985). A full description of the species is given in Coimbra-Filho et al. (2006).
In the past, the eastern Brazilian marmosets (penicillata É. Geoffroy, 1812, geoffroyi É. Geoffroy in Humboldt, 1812, aurita É. Geoffroy in Humboldt, 1812, and flaviceps Thomas, 1903) of the "jacchus group" were considered to be subspecies of Callithrix jacchus, following Hershkovitz (1977). All are now considered to be full species (see Coimbra-Filho 1984; Mittermeier et al. 1988; Marroig et al. 2004; Coimbra-Filho et al. 2006).
Red List Category & Criteria: Near Threatened ver 3.1
Assessor(s): Rylands, A.B. & Kierulff, M.C.M.
Reviewer(s): Mittermeier, R.A. & Rylands, A.B. (Primate Red List Authority)
Justification: This species is currently listed as Near Threatened because it is believed to have experienced a decline in the order of 20-25% over the past 18 years, primarily as a result of habitat loss. Since it is rather adaptable to anthropogenic disturbance, declines are unlikely to be severe enough for the species to require listing in a threatened category. It almost qualifies as threatened under criterion A2c.
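As an illustrative cross-check (not part of the formal assessment), the reported decline can be expressed as an annualized rate. The algebra below assumes a constant proportional decline over the stated 18-year window; the 30% figure mentioned in the comments is the usual Vulnerable threshold under criterion A2 and is drawn from the general IUCN criteria rather than from this account.

```latex
% Annualized rate r implied by a total decline d over t years:
%   (1 - r)^t = 1 - d   =>   r = 1 - (1 - d)^{1/t}
\[
  r = 1 - (1 - d)^{1/t}
\]
% With t = 18 years:
%   d = 0.20:  r = 1 - 0.80^{1/18} \approx 0.012  (about 1.2% per year)
%   d = 0.25:  r = 1 - 0.75^{1/18} \approx 0.016  (about 1.6% per year)
% For comparison, a 30% decline over the same window (the usual
% Vulnerable threshold under A2) would correspond to
%   r = 1 - 0.70^{1/18} \approx 0.020  (about 2.0% per year).
```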
Callithrix kuhlii occurs between the Rio de Contas and Rio Jequitinhonha in southern Bahia, just entering the north-easternmost tip of the state of Minas Gerais (Santos et al. 1987; Rylands et al. 1988). The western boundary is not well known, but undoubtedly defined by the inland limits of the Atlantic coastal forest. I. B. Santos (in Rylands et al. 1988) observed hybrids of C. penicillata and C. kuhlii in the region of Almenara, Minas Gerais, left bank of the Rio Jequitinhonha (16°41’S, 40°51’W). Its range is largely coincident with that of the Golden-headed Lion Tamarin Leontopithecus chrysomelas. These two callitrichids are broadly sympatric.
Surveys in 1986/1987 by Oliver and Santos (1991) demonstrated the presence of forms intermediate in appearance between C. kuhlii and C. penicillata north from the Rio de Contas, along the coast up to the regions of Valença and Nazaré, just south of the city of Salvador (Mittermeier et al. 1988). Individuals observed by Rylands near Nazaré lacked the white frontal blaze and, although retaining the pale cheek patches typical of kuhlii, were paler grey. A photograph of the marmoset from Valença, Bahia, north of the Rio de Contas, is shown in Mittermeier et al. (1988, p.19). The variation in pelage colour of the marmosets in this region is considerable, but Coimbra-Filho et al. (1991/1992) showed that true C. kuhlii extended north through coastal Bahia into the state of Sergipe as far as the Rio São Francisco in the recent past. The present-day confusion has arisen from widespread forest destruction, most marked and nearly total in Sergipe, and from the introductions and invasions of C. jacchus and C. penicillata.
Native: Brazil (Bahia, Minas Gerais)
Population: Population densities recorded at the Lemos Maia Experimental Station (CEPLAC/CEPEC), Una, Bahia, were 8.70-9.09 groups/km² or 50.00-68.06 individuals/km², along three trails of 1 km, 1 km and 1.5 km (Rylands 1982).
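As a minimal consistency sketch (not from the source), the two density figures can be cross-checked against each other using the mean group size of 6.56 individuals reported under Habitat and Ecology below; the product of group density and mean group size should fall within the reported range of individual densities.

```python
# Cross-check (illustrative): individual density should be roughly
# group density multiplied by mean group size.
group_density_per_km2 = (8.70, 9.09)   # groups/km^2 (Rylands 1982)
mean_group_size = 6.56                 # individuals/group (Rylands 1982)

for gd in group_density_per_km2:
    individuals = gd * mean_group_size
    print(f"{gd:.2f} groups/km^2 -> ~{individuals:.1f} individuals/km^2")

# Prints ~57.1 and ~59.6 individuals/km^2, both of which fall within
# the reported range of 50.00-68.06 individuals/km^2.
```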
Habitat and Ecology:
An Atlantic forest species occurring in lowland and sub-montane humid forest, seasonal (mesophytic) rain forest, restinga, and white-sand piaçava forest. It is also known to use cabruca, cacao plantations shaded with native trees remaining from the original forest, and has been observed in secondary growth in abandoned rubber plantations. Callithrix kuhlii is an adaptable species, well able to live in degraded and secondary forests provided there are sufficient year-round food sources and foraging sites. Near the coast, in the cocoa-growing region, there is no distinct dry season and rainfall exceeds 2,000 mm a year (the heaviest rains fall from March to June), but in the west of the species' range the forests are mesophytic with a distinct dry season, and in some areas semideciduous, with rainfall as low as 1,000 mm a year (Rylands 1989; Pinto and Rylands 1997).
Marmosets and tamarins are distinguished from the other monkeys of the New World by their small size, modified claws rather than nails on all digits except the big toe, the presence of two as opposed to three molar teeth on either side of each jaw, and by the occurrence of twin births. They eat fruits, flowers, nectar, plant exudates (gums, saps, latex) and animal prey (including frogs, snails, lizards, spiders and insects). Marmosets have morphological and behavioural adaptations for gouging the trunks, branches and vines of certain tree species to stimulate the flow of gum, which they eat and which in some species forms a notable component of the diet (Coimbra-Filho 1972; Rylands 1984). They live in extended family groups of between four and 15 individuals; Rylands (1982) observed group sizes of 5 to 9 individuals (mean 6.56 ± 1.33, n = 8). Generally, only one female per group breeds during a particular breeding season. Groups defend home ranges of 10-40 ha, the size depending on the availability and distribution of foods and second-growth patches; Rylands (1982, 1989) recorded a home range of 12 ha for a group of 5 individuals.
Rylands (1982, 1984, 1989) studied the behaviour and ecology of C. kuhlii at the Lemos Maia Experimental Station, Una, Bahia. B. Raboy and G. Canale are also studying this species in the Una Biological Reserve (Raboy and Dietz 2000; Raboy et al. 2006).
Body mass: males average 482 g (n = 55) (Smith and Jungers 1997).
Major Threat(s): The main threat to this species is forest loss and fragmentation, particularly in the west of its range, where cattle ranches predominate and fragmentation is most severe. It is also captured for the pet trade.
Conservation Actions: Occurs in the following protected areas:
Una Biological Reserve (18,500 ha)
Serra do Conduru State Park (8,941 ha)
Serra das Lontras National Park (16,800 ha)
Una Wildlife Refuge (23,000 ha)
Lemos Maia Experimental Station (CEPLAC/CEPEC) (495 ha)
Canavieiras Experimental Station (CEPLAC/CEPEC) (500 ha)
The expansion of the Una Bioloigcal Reserve is ongoing and of importance for this species as well as Cebus xanthosternos and Leontopithecus chrysomelas.
This species is listed on Appendix II of CITES.
Coimbra-Filho, A. F. 1972. Aspectos inéditos do comportamento de sagüis do gênero Callithrix (Callithricidae, Primates). Revista Brasiliera de Biologia 32: 505–512.
Coimbra-Filho, A. F. 1984. Situação atual dos calitriquídeos que ocorrem no Brasil (Callitrichidae-Primates). In: M. T. de Mello (ed.), A Primatologia no Brasil,, pp. 15-33. Sociedade Brasileira de Primatologia, Brasília, Brazil.
Coimbra-Filho, A. F. 1985. Sagüi-de-Wied Callithrix kuhli (Weid, 1826). FBCN/Inf., Rio de Janeiro.
Coimbra-Filho, A. F., Mittermeier, R. A., Rylands, A. B., Mendes, S. L., Kierulff, M. C. M. and Pinto, L. P. de S. 2006. The taxonomic status of Wied’s black-tufted-ear marmoset, Callithrix kuhlii (Callitrichidae, Primates). Primate Conservation 21: 1–24.
Coimbra-Filho, A. F., Rylands, A. B., Pissinatti, A. and Santos, I. B. 1991/1992. The distribution and conservation of the buff-headed capuchin monkey, Cebus xanthosternos, in the Atlantic forest region of eastern Brazil. Primate Conservation 12-13: 24–30.
de Vivo, M. 1991. Taxonomia de Callithrix Erxleben, 1777 (Callitrichidae, Primates). Fundacao Biodiversitas para Conservacao da Diversidade Biologica, Belo Horizonte, Brazil.
Hershkovitz, P. 1975. Comments on the taxonomy of Brazilian marmosets (Callithrix, Callitrichidae). Folia Primatologica 24: 137-172.
Hershkovitz, P. 1977. Living New World monkeys (Platyrrhini), with an introduction to Primates. University of Chicago Press, Chicago, USA.
Marroig, G., Cropp, S. and Cheverud, J. M. 2004. Systematics and evolution of the jacchus group of marmosets (Platyrrhini). American Journal of Physical Anthropology 123: 11-22.
Mittermeier, R. A., Rylands, A. B. and Coimbra-Filho, A. F. 1988. Systematics: species and subspecies - an update. In: R. A. Mittermeier, A. B. Rylands, A. F. Coimbra-Filho and G. A. B. da Fonseca (eds), Ecology and Behavior of Neotropical Primates, pp. 13-75. World Wildlife Fund, Washington, DC, USA.
Oliver, W. L. R. and Santos, I. B. 1991. Threatened endemic mammals of the Atlantic forest region of south-east Brazil. Wildlife Preservation Trust, Special Scientific Report 4: 1-125.
Pinto, L. P. S. and Rylands, A. B. 1997. Geographic distribution of the golden-headed lion tamarin, Leontopithecus chrysomelas: implications for its management and conservation. Folia Primatologica 68: 161-180.
Raboy, B. E. and Dietz, J. M. 2000. Patterns of interspecific associations between wild golden-headed lion tamarins and sympatric Wied's marmosets in southern Bahia, Brazil. American Journal of Primatology 51(1): 83-84.
Raboy, B. E., Canale, G. R. and Dietz, J. M. 2006. Ecology, behavior, and conservation status of the Wied’s black tufted-ear marmoset (Callithrix kuhli). merican Journal of Primatology 68(1): 65-66.
Rylands, A. B. 1982. The behaviour and ecology of three species of marmosets and tamarins (Callitrichidae, Primates) in Brazil. Doctoral Thesis, University of Cambridge.
Rylands, A. B. 1984. Exudate-eating and tree-gouging by marmosets (Callitrichidae, Primates). In: A. C. Chadwick and S. L. Sutton (eds), Tropical Rain Forest: The Leeds Symposium, pp. 155–168. Leeds Philosophical and Literary Society, Leeds, UK.
Rylands, A. B. 1989. Sympatric Brazilian callitrichids: the black-tufted-ear marmoset, Callithrix kuhli, and the golden-headed lion tamarin, Leontopithecus chrysomelas. Journal of Human Evolution 18(7): 679-695.
Rylands, A. B. 1996. Habitat and the evolution of social and reproductive behavior in Callitrichidae. American Journal of Primatology 38: 5–18.
Rylands, A. B., Spironelo, W. R., Tornisielo, V. L., Lemos de Sá, R. M, Kierulff, M. C. M. and Santos, I. B. 1988. Primates of the Rio Jequitinhonha valley, Minas Gerais, Brazil. Primate Conservation 9: 100-109.
Santos, I. B., Mittermeier, R. A., Rylands, A. B. and Valle, C. 1987. The distribution and conservation status of primates in southern Bahia, Brazil. Primate Conservation 8: 126-142.
Smith, R. J. and Jungers, W. L. 1997. Body mass in comparative primatology. Journal of Human Evolution 32: 523-559.
Wied-Neuwied, M. and Prinz, zu. 1826. Beiträge zur Naturgeschichte von Brasilien, Vol. 2.
|Citation:||Rylands, A.B. & Kierullf, M.C.M. 2008. Callithrix kuhlii. The IUCN Red List of Threatened Species. Version 2015.2. <www.iucnredlist.org>. Downloaded on 28 August 2015.|
|Feedback:||If you see any errors or have any questions or suggestions on what is shown on this page, please provide us with feedback so that we can correct or extend the information provided| | <urn:uuid:468bad82-969e-4793-9483-a4c82d255382> | CC-MAIN-2015-35 | http://www.iucnredlist.org/details/full/3575/0 | s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440644063825.9/warc/CC-MAIN-20150827025423-00161-ip-10-171-96-226.ec2.internal.warc.gz | en | 0.785188 | 3,433 | 2.765625 | 3 |
THREE CENTURIES OF ARCHITECTURE
AND URBANISM IN TEANECK
Mark Alan Hewitt, AIA
Associate Professor of Architecture
New Jersey Institute of Technology
This publication commemorates the centennial of the incorporation of Teaneck as a township in 1895. After many years of consolidated existence, it seceded from Englewood, Ridgefield Park and the boroughs of Leonia and Bogota. Only in the 1920s did the town become fully urbanized with its familiar street grid and commercial centers, a fact which may surprise many residents of this well-ordered community of homes and businesses. New Jersey's hundreds of Progressive-Era municipalities were part of a borough movement which transformed the state from a largely agricultural region into the dense, suburban megalopolis we know today. While the majority of the landmarks to be found in the following pages date from the past 100 years, the 6 ½ mile square township has a history of European settlement dating to the early seventeenth century, and a native American presence stretching back even further.
Teaneck takes its name from "Tee Neck," one of two early Dutch and Huguenot settlements lying within the Kiersted Patent, a 2120 acre tract bounded to the east and west by the Hackensack River and Overpeck Creek. The land was granted following the Dutch-Indian Treaty of 1645 to Sarah Kiersted, who, it is said, learned the native language and befriended Chief Oratam of the Lenni Lenape clan who had long inhabited the territory. She received confirmation of her land rights from Lords Berkeley and Carteret, Proprietors of East Jersey, in 1669 and retained her deed until 1686, but no written records of a permanent settlement exist before 1704. At that time two small agricultural villages were located in the vicinity of Fycke Lane and upon the bluff at Brett Park. The origins of the town's name are not known precisely, but it probably derives from a native American word meaning "villages."
The landscape of this part of Bergen County is distinctive for the high ridge (Teaneck Ridge) which lies between the Overpeck Creek (Tatanqua to the Lenape tribes) and the lower marshy meadows of the Hackensack River. To the east are the dramatic cliffs of the Palisades and to the west the old town of Hackensack and the land routes to the iron mines of the Ramapo Mountains. Robert Erskine, proprietor of the Ringwood mines and George Washington's cartographer, located New Bridge and Tee Neck on his map of the evacuation of Fort Lee by Washington's army in November 1776, one of the first depictions of the settlement landscape. The early Cultural and material characteristics of the area were distinctive, marked by the use of native sandstone by the Dutch builders of the seventeenth century. Agriculture was the mainstay of the settlers here from the 1640s until the late nineteenth century, and the distinctive architectural type was the farmstead. A number of Dutch farmhouses are preserved, and several fine examples are in Teaneck.
During the Revolution much of Bergen County was torn by the divided loyalties of pro-British and pro-Revolutionary forces. Citizens were at the mercy of raiding parties from both sides in search of food, arms and supplies throughout all seven years of the war. Following the end of hostilities the centers of commerce were Hackensack and New Bridge, and to a lesser extent communities along the Hudson, where Teaneck's farmers attended church, sold their goods, and socialized. The leading landowners continued to be the descendants of early Dutch settlers - the Zabriskie, Ackerman, Demarest, Van Buskirk, Van de Linde, and Brinkerhoff names remained prominent for another century. There were 13,000 inhabitants of the county in 1790. Churches in the town of Hackensack and Schralenburgh (today's Bergenfield and Dumont) served the needs of most residents of the area. Two separate neighborhoods grew along early Indian trails, one along the banks of the Hackensack River where a small number of residents built a Lutheran church (the Van Buskirk Cemetery marks the site)- and another along the edge of Overpeck Creek on the east side of the ridge. There were otherwise few public buildings in the area.
Transformation of the county came with the construction of the first railroads in the years before the Civil War. Subsequently the New York Central ran its West Shore Branch line northward, constructing a station in what was then West Englewood. The first commuters in the county built villas and country houses on agricultural properties, where they could retreat from their businesses in New York City. It was then that New Jersey played a major role in the development of America's suburban ideal, cultivated in such publications as The Architecture of Country Houses, by Andrew Jackson Downing (I 85 0), and Frank J. Scott's The Art of Beautifying Suburban Home Grounds (I 8 70). In nearby West Orange Andrew Jackson Davis had designed the first planned suburban enclave, Llewellyn Park, where picturesque gardens surrounded comfortable Victorian cottage residences with their characteristic piazzas or verandas. Teaneck's few examples of this dwelling type have largely disappeared, but telltale traces of their presence remain in the landscape and form of the town. Following the Civil War the area maintained its largely agricultural base of large farms and occasional country seats - 1 9 000 acre farms stretched between River Road and the Overpeck. The most significant land transaction in the town's history occurred on April 10, 1865, when a young and ambitious lawyer from New York purchased 88 1/2 acres in what would later become the center of Teaneck. That man was William Walter Phelps, son of the wealthy mercantilist and railroad magnate, John Jay Phelps. When his father died in 1869, the younger Phelps relocated from New York City to his country estate in Bergen County and began to take an active interest in New Jersey, national and later international politics. At the center of the estate he expanded an existing Dutch farmhouse into a rambling, 350-foot-long, somewhat Richardsonian manor dubbed "The Grange." But architecture was not his abiding interest; Phelps was a pioneer in the management and stewardship of land, a trait shared with contemporaries Frederick Law Olmsted and his pupil Charles Eliot. He planted over 600,000 trees on his properties (later to total over 5,000 acres), developed thirty miles of roads through land that had heretofore been in cultivation, and built sixty bridges. He controlled railroads and speculated in real estate in the northern part of the county and throughout the U.S. When he died in 1894 over half of the present township of Teaneck was left in his estate, to be managed by his son and two executors. The largest portions of this land remained undeveloped until the death of Mrs. Phelps in 1920.
Because this major landscaped tract occupied a prime area stretching east to west between Teaneck Road and River Road, Teaneck's early development occurred mainly on the fringes of the present township. Incorporation in 1895 brought the first organized subdivisions, the first municipal services including police and fire brigades, and a political and community organization long desired by residents. This was the age of the streetcar suburb, and Teaneck benefited greatly from the web of trolley and rail lines which ran westward and northward from the Hudson River and Newark's rail hubs. A key intersection developed at Fort Lee Road, on the southern end of town, and it was here that one of the first large subdivisions was constructed under the auspices William Bennett (1841-1912), a Binghamton, N.Y. builder who became the first council chairman in 1895. (Bennett had previously managed the Phelps estate lands). Walter Selvage purchased the 70-acre Brinkerhoff tract and developed his Selvage Addition Subdivision along Teaneck Road in 1901. Between 1900 and 1909 256 new homes were constructed. The town began to take on the characteristics of a garden suburb, with the added attraction that the tree lined streets and verdant landscapes of the Phelps tract formed a kind of park at the heart of the community. By 1910 the Population had increased 200% to over 2,000.
When the Phelps estate opened its holdings and began to sell parcels in 1922, a development boom occurred reflecting that of the greater New York metropolitan area- New York City issued its first regional plan in 1929, a document with far-reaching prescriptions for northern New Jersey, the five boroughs, Connecticut and Long Island. New housing, infrastructure and transportation were major elements in the plan. In this context, Teaneck's planned subdivisions and smaller speculative home tracts may be seen as constituent elements of a vast middle landscape spreading in a ring around New York. Only a few miles distant, in 1926 Clarence Stein (1883-1975) and Henry Wright (1878-1936) designed their experimental new town of Radburn (now Fair Lawn), one of the seminal planning and housing projects of the twentieth century. The design featured a mix of residential types, segregation of pedestrian and auto circulation, greenbelts woven through the housing tracts, and integration of schools, businesses and housing into a multi-layered fabric. in Teaneck, a smaller but very similar venture, the Fred T. Warner subdivision, attempted to create the same sense of community. On a less utopian scale, Teaneck's developers sought to win the hearts of prospective homeowners by offering trim, comfortable dwellings at a modest cost in a community linked by ready transportation to the urban hub of New York City.
By the mid-1920s a hectic real estate boom was underway - 1,065 property transfers were recorded between July 1924 and July 1925 alone. "Three Years Ago Farmland, Today Beautiful Homes," proclaimed one promotional brochure. The dominant style for these houses was "Tudor," a cozy and sentimental variation on old English models of the late 19th century. Districts like the Standish Road subdivision put Teaneck on the map with Mamaroneck, Chestnut Hill, and Great Neck as desirable communities for the aspiring middle income family. Teaneck's Collegiate Gothic high school and Georgian elementary schools reassured residents that children would be reared in the core values America's dominant work ethic. A stately Colonial Revival town hall and library reinforced patriotic virtues. And a friendly, domestically scaled main street commercial district developed along Cedar Lane, in what once was the heart of the Phelps preserve. By 1930 a town had appeared which could rival nearby Ridgewood and Montclair for coherence, convenience and community pride. Moreover, a new transportation linkage would give added incentives to choose Teaneck over rival communities. New Jersey Route 4 and the opening of the George Washington Bridge in 1931 increased the value of Bergen County property despite the worsening economy. During the Depression years up until World War II Teaneck maintained its dramatic population growth, climbing to nearly 20,000. By then the defining years of the town's physical identity had passed, and with them the most significant period of suburbanization in America's history. Teaneck was a part of that historical moment.
Following the long struggle of World War II which consumed the country during the 1940's, the town began its final period of development, building upon the strong armature established by the Phelps tract and the community planning of the interwar years (the first zoning ordinance was passed in 1928). Teaneck had a medical center at Holy Name Hospital, begun in 1924 on the 10-acre Griggs estate owned and occupied by Mrs. Phelps at her death. In 1954 Fairleigh Dickinson University began its Teaneck operations on a river front site along River Road. And parkland, vital to the health of any community, was set aside by astute township officials, much of it according to the original 1933 master plan. By the 1960s there were 11 parks, four separate playgrounds, two long park strips, and nine small circles, all maintained by the township. One of the most far-reaching decisions was the bold concept of purchasing land for an easement along either side of Route 4, providing a necessary greenbelt and insuring privacy for residents in adjacent subdivisions.
The dominant models for post-war housing were garden apartments, split level and ranch style houses, trim colonials, and a few Tudor survivals. Subdivision occurred in areas at the fringes of the township, including the eastern and northwestern edges. The construction of Interstate 80 and its ancillary system of regional highways in the 1970s brought increased traffic, the commercial/residential project at Glenpointe and other development pressures to the township and county. By 1980 development of new land had ceased, and like much of America Teaneck turned to slow growth initiatives and increased planning controls to preserve its quality of life. Bergen County's population was nearly a million, and its infrastructure was strained to the limit by traffic and population expansion. In the mid-1980s New Jersey joined many states in passing enabling legislation for historic preservation. Teaneck entered in the fight to conserve historic and natural resources with the establishment of a Historic Preservation Commission to administer its preservation ordinance.
Teaneck began as landscape held in the balance of ecological and political forces between European settlers and native American cultivators. The treaty made between the Dutch and the Indians in the mid-seventeenth century to divide and share the land depended
Upon the intentions and commitment of both parties to make it work. Similarly, present efforts at conservation, limitation of development and planning also depend upon intention and commitment. This brochure is a celebration of history, heritage and perseverance. It marks the collective memory of a community via the trail of history preserved in artifacts--architecture, landscape, infrastructure, and the telltale creations of our ancestors. In its pages will be found reminders of a past which, though sometimes dimly recalled, will shape the future of this land and its human inhabitants for years to come. | <urn:uuid:0ee90af0-d7e8-4478-92ae-e247ed0f9808> | CC-MAIN-2015-35 | http://teaneck.org/virtualvillage/HistLandmarks/OVERVIEW.html | s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440645330816.69/warc/CC-MAIN-20150827031530-00101-ip-10-171-96-226.ec2.internal.warc.gz | en | 0.964701 | 2,907 | 3.296875 | 3 |
The intention of this study was to evaluate the effectiveness of the Promoting Healthy Styles program. In this study formative and qualitative evaluation were used in a quasi-experimental design with a retrospective test. The experimental group was composed of 25 active participants of the Family and Consumer Science program and the control group consisted of 22 participants from a handicraft program at Lajas, PR. The experimental group received the lessons on which consisting of a curricular guide lessons. The control group participated in a conference onpromotion. A retrospective test was used and the analysis of the collected data was performed by using the statistical package “Statistical Package for Sciences” (SPSS) version 15.0 for Windows. The findings for the experimental group indicate positive changes and attitudes in relation to modifying life styles at same time this study showed the effectiveness of the Promoting Healthy Styles educative program.
Monthly Archives: March 2012
Eficacia del programa Promoviendo Estilos de Vida Saludables con amas de casa participantes del programa Ciencias de la Familia y el Consumidor del Servicio de Extension Agricola en el municipio de Lajas, Puerto Rico (Education Papers posted on March 31st, 2012 )
Teaching reading comprehension using match-to-sample discrete trial teachings, sight word identification, positive reinforcement and response repetition correction (Education Papers posted on March 31st, 2012 )
The purpose of this study was to determine whether the use of match-to-sample discrete trial teachings along with positive reinforcement and response repetition correction would assist in developing an autistic high school student’s reading comprehension. Data were collected once a week over 11 trials. The percentage of words comprehended by the participant was graphed by the researcher during each of the trials. The researcher used an ABAB reversal design. This study indicates that the participant’s reading comprehension greatly improved during treatment and returned to baseline when treatment was removed. The results of this study also demonstrate that with the use of match-to-sample discrete trial teachings, sight word reading, positive reinforcement and response repetition correction, reading comprehension can be significantly improved for students within the autistic population.
Percepcion de los orientadores escolares y de los estudiantes de colegios catolicos adscritos a la diocesis de Arecibo en relacion a las Ciencias Agricolas como alternativa de estudio universitario (Education Papers posted on March 31st, 2012 )
The purpose of this descriptive-correlational study was to determine knowledge and perception of school counselors and of twelve grades student of private schools of Arecibos diocese regardingScience as a careers and agriculture. Six school counselors and 237 students indicated to have a neutral perception about Industry and agreed with their knowledge about this area. Moreover, they have a neutral perception and knowledge regarding science. Students indicated to agree with the knowledge they have about agriculture and a neutral perception of this area and Science. Students need more information related to this area and those who received information indicated to be positive related to Science. During the last five years, school counselors indicated that they probably hey provided guidance but, they have not done it because of lack of time to coordinate activities. It is recommended to develop the needed guidance tools so the school counselors help their students to develop a successful future.
Study of Split Screen in shared-access scenarios: Optimizing value of PCs in resource-constrained classrooms in developing countries (Education Papers posted on March 31st, 2012 )
Cost restrictions in developing countries result in multi-user, shared-access scenarios preventing each user from having exclusive access to a. Increasing student enrollment raises technology demand in classrooms. Despite collaborative learning practices, skewed user- ratios continue to result in unequal opportunities for independent learning. This research studies students sharing a PC using a novel Split Screen User-Interface concluding that it enables students to achieve benefits of both independent and peer-supported learning. This interface allows two co-located students to share one , interact with independent Windows Desktop sessions through the same display using separate input devices. The thesis draws conclusions from a study conducted in a computer school in Bangalore, India. Qualitative observations of two user groups are analyzed and a comparison is presented. The first group retained traditional Shared Screen practice while the second group employed Split Screen mode. Although the experiment did not account for significant quantitative difference in performance between the two groups as the sample size was small), detailed observations show that students in Split Screen group had the natural tendency to become individual learners, but there were ample evidences of peer-assisted learning. This confirms the thesis that Split Screen offers students the active and independent involvement opportunity of First World classrooms while retaining peer learning behaviors of traditional classrooms in developing countries. Although there are some usability constraints that prevent universal applicability, Split Screen potentially maximizes the value of existing computers while ensuring that educators retain their flexibility to design learning structures and methods.
Voice onset time for voiced and voiceless stops across English proficiency levels for sixteen Puerto Rican Spanish speakers at the University of Puerto Rico at Mayaguez (Education Papers posted on March 31st, 2012 )
This thesis examined Voice Onset Time (VOT) for voiced and voiceless stops across four English proficiency levels for sixteen Puerto Rican Spanish speakers at the University of Puerto Rico at Mayaguez. It used a Production Task to find out whether VOT values of voiceless and voiced stops in the Spanish and English of eight students varied across English proficiency levels and found that VOT values did not vary in Spanish but varied in English. High English proficiency students produced stops within the VOT range for American English; low proficiency students did not. The VOT production of the high proficiency students was consistent with an external target for English; that of the low proficiency students was consistent with inter-language and transfer. It used an Identification Task to examine whether eight students from the proficiency levels could identify the productions from the Production Task and found that identification did not match production.
As a new school forms, the development of community is essential to creating a positive learning environment that meets both academic and-emotional needs of students. The purpose of the study was to examine how interactions in a garden could contribute to the development of community. From the literature, seven common elements of community were identified: inclusion, democracy, common purpose, diversity, communication, interactions, and a connection with the earth. The participants were 13 students in first through third grade and three garden teachers at a newly opened school with a focus on stewardship. Teacher and student initiated interactions were observed and recorded over the first three months of school. The data were analyzed to examine each element and interaction over time. The findings provided evidence of consistently positive interactions in the garden. Highly positive interactions in the garden include interconnection to the earth, and working together for a common purpose. Continuing instruction in the outdoor garden environment was recommended due to the highly positive community interactions observed.
This dissertation focuses on the development of aof film . Why film? Films are a rich source of enjoyment for many of us; however, they can also give us insight into the world beyond our immediate experience and can, and often do inspire us, shock us, or make us rethink our assumptions about the world. I argue that film can be an agent of change. Everyday consumers can draw knowledge and self-identity from the mythic content of motion pictures and television programs. Far from being merely entertainment, mass media vehicles such as film convey ideas and ideals regarding the nature of the world and the universe and the moral structure of society. Through its integration of cinematic form and sound, film aspires to become a language, much like the other arts, such as literature, painting, and photography, are languages, and thus is amenable to . In this study I attempt to identify, explain, and justify some of the key aims, content, and pedagogical approaches of an education in film. I argue that filmmaking is a cognitive, collaborative and constructivist activity. This dissertation examines the place of film in the broad context of a general education. In order to further place the study in context, I explore and outline a brief history of film education in Canada and illustrate why film is significant to our understanding of the arts in general. I explore the reasons why film music may be seen as being fundamental to the film experience. I argue that the notion of literacy should be broadened to include the visual. Finally, I develop perspectives on teaching film theory and practice, including assessment, as part of a conception of curriculum and benefit in process from interviews with two local film educators. Keywords: aims of education; film; film music; popular culture.
Athletic training students’ perceptions of their academic preparations for the Board of Certification examination (Education Papers posted on March 31st, 2012 )
To examine athletic training students’ perception of their Athletic TrainingProgram (ATEP) in relation to their preparation for the Board of Certification (BOC) examination, participants completed an online survey consisting of 2 multiple choice questions and 13 questions utilizing a 5-point Likert scale. T-tests were performed to analyze all data. Alpha level was set at 0.05. The respondents perceived their academic preparation as either satisfactory (N=573, 87.6%) or unsatisfactory (N=81, 12.4%). Significant differences existed between those respondents passing the written, simulation, and practical portions of the exam on the first attempt compared to those who failed those portions. Of the content areas, only Pharmacology ( M=3.31), Psychosocial Intervention (M=2.89), Nutrition (M=2.82), and Healthcare Administration (M=2.71) had mean scores above 2.50 (1=Excellent, 5=Poor). As perceived by athletic training students who sat for the certification examination, ATEP’s are adequately preparing their students for the BOC certification examination.
The purpose of this study was to explore how one state’s formative assessment testing system was utilized in classrooms throughout the 2006-2007 academic year, the perceptions teachers hold about this system, as well as the relationship between these uses and perceptions. Survey responses from 730 teachers in the state of Kansas were analyzed. Teachers taught various grade levels between 3rd and 8th grades as well as high school and taught math, reading, or a combination of both content areas. Results indicated that based on the stated purpose of the formative assessments, which is to guide instructional practices as needed in order to improve student learning as well as performance on the summative state assessment, teachers reported using this system as it was intended at least some of the time. The majority of teachers also thought that there were valuable benefits to using this formative assessment system. Finally, teachers’ use of the formative assessment system investigated in this study was significantly related to how they perceived its efficacy and value.
Percepcion de los maestros de Educacion Agricola de Puerto Rico sobre el Programa de Experiencias Agricolas Supervisadas (Education Papers posted on March 31st, 2012 )
This descriptive research determined teachers’ perception related to the importance of SAE in the Program of. Data analysis demonstrated that teachers agreed with the importance of integrate SAE in the curriculum. Also, it was found that teachers agreed with the existence of barriers that might restrict the preparation and utilization of SAE. Related to teachers’ knowledge it was found that 95% of them obtained a punctuation of 70% of knowledge regarding to SAE. Most teachers consider SAE as an useful tool to broaden their knowledge, they are interested in integrate SAE in the curriculum and are available to participate in meetings related to SAE in order the promote it with their students. | <urn:uuid:07471dc1-7652-459c-9b5b-fe7e816a2846> | CC-MAIN-2015-35 | http://www.edu-papers.com/2012/03/ | s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440645330816.69/warc/CC-MAIN-20150827031530-00100-ip-10-171-96-226.ec2.internal.warc.gz | en | 0.943341 | 2,434 | 2.8125 | 3 |
The Invisible Epidemic Causing Headaches, Fatigue and Depression
Posted By Dr. Mercola | December 26 2011
By Dr. Mercola
Most people spend as much as 90 percent of their time indoors.
But indoor air quality can be up to five times worsethan outdoor air, which can have a very detrimental impact on your health.
For example, according to the US Environmental Protection Agency (EPA), poor indoor air quality can cause or exacerbate:
- Asthma, allergies, and other respiratory problems
- Eye and skin irritations
- Sore throat, colds and flu
- Memory loss, dizziness, fatigue and depression
Long-term effects from exposure to toxic airborne particles include heart disease, respiratory disease, reproductive disorders, sterility and even cancer.
Tips for Healthier Indoor Air
In The Daily Green, the American Lung Association offers 25 tips on how to keep the air in your home healthy.
Here's a small sampling of them:
Don't Allow Smoking Indoors: Each year, second hand smoke sends up to 15,000 children to the hospital.
There is no safe level of secondhand smoke; never let anyone smoke inside your home.
Don't Idle the Car in the Garage: Carbon monoxide exposure can lead to weakness, nausea, disorientation, unconsciousness and even death. Fumes from cars or lawnmowers left running in enclosed spaces can endanger your health.
Use Low-VOC Paints: Paints release VOCs, or volatile organic compounds, for months after application. VOCs can include highly toxic chemicals such as formaldehyde and acetaldehyde. Use low-VOC or no-VOC paints, varnishes, and waxes.
Clean Your Air Conditioner and Dehumidifier: Standing water and high humidity encourage the growth of dust mites, mold and mildew. All of these can worsen asthma. Use a dehumidifier or air conditioner when needed, and clean both regularly.
Beware of Dry Cleaning Chemicals: Dry cleaning solvents can be toxic to breathe. Let dry cleaned items "air out" outdoors before bringing them inside.
Avoid Toxic Household Products: Hair and nail products, cleaning products, and art and hobby supplies can increase the levels of VOCs in your home. Some of the VOCs in these products have been linked to cancer, headaches, eye and throat irritation and worsened asthma.
To see the rest of their tips, please review the featured Daily Green article.
Do You Know What's in the Air You Breathe?
One 2009 study, in which they used gas chromatography and mass spectrometry to examine the air inside 52 ordinary homes near the Arizona-Mexico border, researchers found that indoor air was FAR more contaminated than previously demonstrated. They identified an astounding 586 chemicals, including the pesticides diazinon, chlorpyrifos and DDT. Phthalates, endocrine-disrupting chemicals found in a variety of plastics, were also found in very high levels. Even more disturbing was the fact that they detected 120 chemicals they couldn't even identify!
For a listing of the most common indoor air pollutants and toxic particles, please see this previous article.
There's little doubt that most indoor areas have poor air quality. The question is, what to do about it? One of my recommendations used to be to move to an area known to have better air quality, as the more heavily polluted your outdoor air is, the worse it's going to be indoors.
I now believe the best thing you can do is to be proactive about cleaning the air inside your home and office, and being mindful about the chemicals you bring into and use inside your home. I also recommend paying close attention to the materials used in the construction and furnishings of your space as many building materials act as a continuous source of toxic air contaminants.
You may not have much individual control over the air outside, but these are some of the factors you DO have control over, which can help you create as health-promoting an environment as possible.
Four Major Sources of Air Contamination You DO have Control Over
Since environmental health is a concern of mine, I wanted to create the healthiest office possible for my staff, so a few years ago we built the "greenest" building we could. So green, in fact, the building received the prestigious Gold LEED certification. In addition to using air purifiers and lots of live plants, we also paid a great deal of attention to the building materials and furnishings that went into the space.
Four of the most common sources of air contamination from building materials and home furnishings include:
- Pressed wood products—This faux wood is made of wood leftovers combined together. Pressed wood products include paneling, particle board, fiberboard and insulation. The glue that holds the wood particles in place may use urea-formaldehyde as a resin. The U.S. EPA estimates that this is the largest source of formaldehyde emissions indoors.
Formaldehyde exposure can set off watery eyes, burning eyes and throat, difficulty breathing and asthma attacks. Scientists also know that it can cause cancer in animals. The risk is greater with older pressed wood products, since newer ones are better regulated. To limit your exposure:
- Use "exterior-grade" pressed wood products (lower-emitting because they contain phenol resins, not urea resins).
- Use air conditioning and dehumidifiers to maintain moderate temperature and reduce humidity levels.
- Increase ventilation, particularly after bringing new sources of formaldehyde into the home.
- Ask about the formaldehyde content of pressed wood products, including building materials, cabinetry, and furniture before you purchase them.
- Use solid wood whenever possible.
- Chemicals in carpets—Many types of indoor carpeting off-gas VOC’s and contain other toxic materials. The glue and dyes used with carpeting are also known to emit VOCs, which can be harmful to your health. Limit or eliminate exposure by carefully selecting non-toxic carpeting, such as those made of wool, or opt for non-toxic flooring like solid wood or bamboo instead.
- Paint—While paints have gotten a lot less toxic over the past 25 years, most paints still emit harmful vapors, such as VOC’s, formaldehyde and benzene, just to name a few. These types of fumes may be harmful to your brain over time, and they’re released daily for about 30 days after application. Low levels can continue to leak into the air for as long as a year afterward, so you’ll want to make sure you ventilate the area repeatedly.
Another danger is lead-based paint, which can be found in many homes built before 1978. Once the paint begins to peel away, it releases harmful lead particles that can be inhaled. In 1991, the U.S. government declared lead to be the greatest environmental threat to children. Even low concentrations can cause problems with your central nervous system, brain, blood cells and kidneys. It's particularly threatening for fetuses, babies and children, because of potential developmental disorders.
Fortunately, it’s getting easier to find high-quality non-toxic paints, also known as “low-VOC” or “no-VOC” paint. Both large paint companies and smaller alternative brands now offer selections of such paints. For a list of distributors and manufacturers, see this link.
- Mattresses, upholstery, drapes and curtains—These are all common sources of polybrominated diphenyl ethers (PBDEs); flame retardant chemicals that have been linked to learning and memory problems, lowered sperm counts and poor thyroid functioning in rats and mice. Other animal studies have indicated that PBDEs could be carcinogenic in humans, although that has not yet been confirmed.
Your mattress may be of particular concern, as many contain not only PBDE's, but also toxic antimony, boric acid, and formaldehyde. Shopping for a safe mattress can be tricky, as manufacturers are not required to label or disclose which chemicals their mattresses contain. However, some manufacturers now offer toxin-free mattresses, such as those made of 100% wool, which is naturally fire resistant. There are also mattresses that use a Kevlar, bullet-proof type of material in lieu of chemicals for fire-proofing. These are available in most major mattress stores, and will help you to avoid some of the toxicity.
For more information and guidelines on selecting healthier alternatives, see this helpful article by Healthy Home Plans.
Air Purification Requires a Multi-Faceted Approach
The most effective way to improve your indoor air quality is to control or eliminate as many sources of pollution as you can first, before using any type of air purifier. You simply must eliminate all fixable sources of the indoor air pollution, otherwise it is like trying to drive your car with your foot on the accelerator and the brake at the same time.
This is particularly true for mold. No air purifier will ever remove the source of the mold, which is typically related to a previous or ongoing intrusion of water into the living space, providing increased humidity levels that molds require. You simply must repair this before you consider any air purifier to solve your indoor air quality issues.
Once you have eliminated the sources of indoor air pollution, there are a wide variety of devices on the market that function in a number of different ways. My recommendations for air purifiers have changed over the years, along with the changing technologies and newly emerging research. There are so many varieties of contaminants generated by today's toxic world that air purification manufacturers are in a constant race to keep up with them, so it pays to do your homework.
At present, and after much careful review and study, I believe air purifiers using Photo Catalytic Oxidation (PCO) seems to be the best technology available (see chart below).
That said, truly effective air cleaning requires a multi-pronged approach that incorporates a variety of different air cleaning processes/technologies as no one device can remove all types of pollutants. Finding ONE air purifier that does it all is like trying to find one magic vitamin that would meet all your physiological needs. Still, if you can only afford one, newer devices employing PCO technology will remove the widest spectrum of pollutants.
Air pollutants fall into three main categories, each requiring a different approach:
- Biological particles (molds, bacteria, spores, viruses, parasites, animal dander, pollen, etc.)
- Non-biological particles (smoke, dust, heavy metals, radioactive isotopes, etc.)
- Gases (fumes from things like adhesives, petroleum products, pesticides, paint, and cleaning products; radon, carbon monoxide, etc.)
Modern air purification devices work using the following four primary technologies:
|Technology||Types of Pollutants||How It Works|
|Filtration||Particles and biologicals||Mechanical or electrostatic, these physically trap particles in a filter (HEPA is example)|
|Photo Catalytic Oxidation (PCO)||Particles, gases, biologicals||Destroys pollutants using a UV lamp and a catalyst that reacts with the UV light|
|Negative Ionization||Particles and biologicals||Disperses charged particles into the air to attract to nearby objects, or to each other, thereby settling faster|
|Ozone||Biologicals||Activated oxygen assists with the breakdown of biologicals|
Basic Steps for Improving the Air Quality in Your Home
Aside from being mindful of the building materials and furnishings used during construction or renovation and using an air purification system, there are a number of other things you can do to take charge of your air quality and greatly reduce the amount of indoor air pollutants generated in the first place:
- Vacuum your floors regularly using a HEPA filter vacuum cleaner or, even better, a central vacuum cleaner which can be retrofitted to your existing house if you don’t currently have one. Standard bag- or bagless vacuum cleaners are another primary contributor to poor indoor air quality. A regular vacuum cleaner typically has about a 20 micron tolerance. Although that's tiny, far more microscopic particles flow right through the vacuum cleaner than it actually picks up! Beware of cheaper knock-offs that profess to have "HEPA-like" filters—get the real deal.
- Increase ventilation by opening a few windows every day for 5 to 10 minutes, preferably on opposite sides of the house. (Remember, although outdoor air quality may be poor, stale indoor air is typically even worse by a wide margin.)
- Get some houseplants. Even NASA has found that plants markedly improve the air! For tips and guidelines, see my previous article The 10 Best Pollution-Busting Houseplants.
- Take your shoes off as soon as you enter the house, and leave them by the door to prevent tracking in of toxic particles.
- Discourage or even better, forbid, tobacco smoking in or around your home.
- Switch to non-toxic cleaning products (such as baking soda, hydrogen peroxide and vinegar) and safer personal care products. Avoid aerosols. Look for VOC-free cleaners. Avoid commercial air fresheners and scented candles, which can degass literally thousands of different chemicals into your breathing space.
- Avoid powders. Talcum and other personal care powders can be problematic as they float and linger in the air after each use. Many powders are allergens due to their tiny size, and can cause respiratory problems
- Don't hang dry cleaned clothing in your closet immediately. Hang them outside for a day or two. Better yet, see if there's an eco-friendly dry cleaner in your city that uses some of the newer dry cleaning technologies, such as liquid CO2.
- Upgrade your furnace filters. Today, there are more elaborate filters that trap more of the particulates. Have your furnace and air conditioning ductwork and chimney cleaned regularly.
- Avoid storing paints, adhesives, solvents, and other harsh chemicals in your house or in an attached garage.
- Avoid using nonstick cookware. I now carry my favorite alternative, ceramic cookware, in my store.
- Ensure your combustion appliances are properly vented.
- Make sure your house has proper drainage and its foundation is sealed properly to avoid mold formation. For more information about the health dangers of mold and how to address it, please see this previous article.
- The same principles apply to ventilation inside your car—especially if your car is new—and chemicals from plastics, solvents, carpet and audio equipment add to the toxic mix in your car's cabin. That "new car smell" can contain up to 35 times the health limit for VOCs, "making its enjoyment akin to glue-sniffing," as this article reports.
(Send your news to email@example.com, Foodconsumer.org is part of the Infoplus.com ™ news and information network)
- Industry Watchdog Asks USDA to Ban Use of Wastewater
- Wake Up and Smell the Poison - newsletter 082715 from Organic Consumers Association
- AICR Enews - 070915
- Addictive and Toxic: Found in Bread, Pasta Sauce and Salad Dressing
- Healthy Recipes: Raspberry Almond Muesli | <urn:uuid:9219f1a6-37cb-43c7-94c4-bccc7222ff68> | CC-MAIN-2015-35 | http://www.foodconsumer.org/newsite/Non-food/Disease/Headaches_Fatigue_and_Depressio_1226110749.html | s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440644062760.2/warc/CC-MAIN-20150827025422-00224-ip-10-171-96-226.ec2.internal.warc.gz | en | 0.931551 | 3,221 | 3.171875 | 3 |
K1 62005 was designed by the London and North Eastern Railway, built by the North British Locomotive Company at their Queen’s Park Works, Glasgow, as NBL works no 26609, and delivered to the fledgling British Railways in June 1949.
The K1 Pedigree
The locomotive’s design is attributed to A H Peppercorn, but its pedigree goes back to the Great Northern Railway. A young Nigel Gresley’s first loco design was influenced by the popularity of the 2-6-0 wheel arrangement in North America. The result was the GNR class H2, later the LNER’s original class K1. This developed into Gresley’s highly successful K2 design, which served through three railway eras: GNR, LNER and British Railways. Several K2s saw service on the West Highland lines and even had special side-window cabs fitted to help cope with the climate (the summer version of which the NELPG support groups know only too well).
Gresley wanted a more powerful mogul for the West Highland and developed his three-cylinder K4, but this small class of only six locos posed maintenance difficulties during and after the dark days of World War 2. Edward Thompson became Chief Mechanical Engineer in 1941, after Gresley’s death. In 1945 he modified no 3445 (later numbered 1997) to a two-cylinder design. This prototype, MacCailin Mor, proved so successful that, after Thompson’s retirement, his successor Arthur Peppercorn made a few more design alterations and ordered a batch of 70 from the North British Locomotive Company. Although built to an LNER design, all were delivered after nationalisation. All the original LNER K1s had been converted to K2s by 1937, so the new design took over the K1 classification, with the prototype becoming K1/1.
The K1 Design and Development
Thompson’s design modifications included replacing the three 18.5-inch cylinders with two of 20-inch diameter. The valve diameter was increased from 8 to 10 inches and the boiler pressure was raised from 200 to 225 psi. The K4 was an entirely vacuum-braked loco, but the new design was fitted with a steam brake on both loco and tender. The graceful sweep of the running plate ahead of the driving wheels was lost to accommodate the larger valves and cylinders. The K4 pony truck, to a Gresley double-swing-link design, was changed to use a side-spring control system. Thompson was driven to improve standardisation of parts, and as a result the K1 cylinders are the same as those on a B1. Similarly, the K1 boiler is a shortened version of the B1 boiler with an identical firebox.
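As a rough back-of-envelope comparison, the figures quoted above show what the cylinder change meant in terms of total piston area. This is only a sketch using the quoted diameters; nominal tractive effort also depends on stroke and driving wheel diameter, which are not given here.

\[
A_{3\,\text{cyl}} = 3 \times \tfrac{\pi}{4}\,(18.5\ \text{in})^2 \approx 806\ \text{in}^2,
\qquad
A_{2\,\text{cyl}} = 2 \times \tfrac{\pi}{4}\,(20\ \text{in})^2 \approx 628\ \text{in}^2
\]

On these numbers the two-cylinder layout gave up roughly 22 per cent of total piston area, partly offset by the rise in boiler pressure from 200 to 225 psi (an increase of 12.5 per cent): a trade of some nominal cylinder power for the simpler maintenance that the three-cylinder K4s had lacked.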
When Arthur Peppercorn succeeded Edward Thompson, he made a few further modifications to the rebuilt K4 design and then ordered 70 of these K1s from NBLCo. Peppercorn replaced the three-bar slide bars with a single-bar design, which required a different motion bracket. A gap was introduced at the front of the running plate to give better access for valve removal. A rocking grate and hopper ashpan were fitted, and the pony truck was modified again to use coil rather than leaf springs. To increase range, the 3,500-gallon group standard tender inherited (and retained) by the K1/1 gave way to tenders of 4,200-gallon capacity for all of the K1s.
All of the class were fitted with a BTH speedometer and electric lighting powered by a Stone’s steam turbine generator. Most of the class retained their generators, but all eventually lost their speedometers, even though some retained the support brackets.
Further modifications in BR days included the fitting of an automatic warning system (AWS) to some locos, including 62005. This involved moving the driver’s-side injector over to the fireman’s side to clear space for the AWS battery box, although the pipework within the cab remained unchanged. The steam brake valve inside the cab was repositioned to make room for the driver’s AWS control box. Ironically, had the smaller steam brake valve design fitted to the BR Standard locos been available in 1949, it could possibly have stayed in its original position, resulting in a tidier cab layout.
62005 in Service
Loco 62005, like all of the class, went for running-in to Eastfield shed, Glasgow. From there it went first to Darlington, then to Heaton in September 1949, back to Darlington in July 1952, Ardsley in June 1959, York in August 1959, North Blyth in March 1966, Tyne Dock in May 1967 and finally Holbeck in September 1967.
It was condemned on 30th December 1967 and eventually sold, on 30th May 1969, to a consortium of Viscount Garnock, Geoff Drury, Brian Hollingsworth and George Nissen, who intended its boiler to be saved as a spare for the K4 61994 (LNER 3442) The Great Marquess, which they had bought. No 62005 had only survived until then because it had been used for a brief period as a temporary stationary boiler at the ICI North Tees Works. In the event the boiler was not needed for the K4, so the loco was kindly donated to the infant but ambitious NELPG in 1972 and was delivered to BR’s Thornaby Depot on 14th June of that year.
The loco was overhauled at Thornaby Depot by NELPG volunteers and, in accordance with the wishes of the donating group, was painted in a fully lined-out LNER green livery. The restored loco was moved to the North Yorkshire Moors Railway on 28th May 1974 and went into traffic on 8th June. In 1975 it made its first mainline runs in preservation, between Whitby and Battersby, before going to Shildon to join the S&D 150 celebrations. It made further excursions onto the main line in 1977, 1978 and 1979.
Over the winter of 1980/81 the K1 was fitted with new tyres at BSC Rotherham. A visit to the KWVR and two solo performances over the S&C preceded a major overhaul in 1985 at the ICI Wilton Works No 5 Depot, carried out under the Manpower Services Commission scheme. The loco was quickly back at work, and by the end of 1986 it had covered 5,500 miles on the NYMR and 1,067 miles on the main line since the overhaul.
On 28th June 1987 the K1 began its long association with the West Highland line from Fort William to Mallaig. It returned in 1988, and it also hauled some Santa specials around the Edinburgh suburban circuit. The 1987/88 winter maintenance included a full retube at Wilton. The 1989 season, apart from a trip to Saltburn, was spent on the NYMR, then it was back to Fort William for 1990.
The 1991 season was spent on the NYMR again, then it was back to Scotland for 1992. On 28th March 1993 the K1 made two return trips to Eastgate, Weardale, at a time when that railway was still part of the national network. The loco went directly from those trips to the KWVR for a short season. On its way to Scotland in 1994 the K1 hauled a railtour from Darlington to Carlisle via the East Coast Main Line and the Tyne Valley, handing over to the A2, which took the remainder of the tour to Skipton. The K1 then went on its way north for a season that included the West Highland Railway centenary celebrations, the highlight of which was the pairing of the K1 with the K4, The Great Marquess, the very loco for which, 22 years earlier, it had been intended as the boiler donor!
The 1994 season ended sadly. Having endured a repair to a cracked throatplate, the K1 returned home for its next major overhaul. On its way it shared adjoining roads in Thornaby Depot with the A2 on the night after that loco’s calamitous wheelslip at Durham. The K1 was to receive a new bottom section to the copper inner firebox. However, the overhaul was delayed, first awaiting delivery of the copper sheet and then, inevitably, by the need to attend to the stricken A2. To make matters worse, the planned J27 overhaul had already started.
In August 1998 the overhauled K1 left the ICI No 5 Depot at Wilton for the last time, bound for the NYMR. It then went to the East Lancs Railway by road for January and February 1999, returning to Grosmont by rail as its initial mainline test run. Then it was off to Ruislip, London, in May for ‘Steam on the Met’, a two-weekend festival involving four locos on a shuttle service between Neasden, Amersham and Watford, all part of the London Underground system. The K1 then returned to the NYMR, finishing the year with the Captain Cook Pullman special from Middlesbrough to Whitby and return.
The year 2000 saw the K1 back for ‘Steam on the Met’, calling at Hull docks for a photo charter on the way back to Grosmont, then settling down to a season on the NYMR. In preparation for Fort William, the winter of 2000/01 saw the valves rebored, all the injector copper pipework renewed and an ashpan sprinkler fitted. But 2001 is remembered by the group for an incident en route to Fort William. While hauling a nine-coach train with the B1, the K1 broke a drawbar pin, and the loco, with a small support crew, had to be left at Eastfield on the outskirts of Glasgow. Overnight repairs, courtesy of the Scottish Railway Preservation Society, followed, and the K1 continued its journey to Fort William. A successful season culminated in a railtour home via Oban.
The 2002 season started with a railtour to Bury via York and Hull. The visit to Bury was to the works of Riley (E) and Son for the fitting of its second set of new tyres, followed by a short season on the East Lancs Railway before setting off again for Scotland. The 2002 Jacobite season ended with a memorable series of photo charters across Rannoch Moor and another railtour homewards to Carnforth. In early 2003 all the boiler washout doors were refurbished and a small crack was found around one of the doors on the LH side – a foretaste of things to come!
The 2003 season at Fort William was uneventful until 2nd October, when a crosshead cotter worked loose. Repairs involved sending the crosshead and piston to a specialist firm on Teesside and then, 4 days later, returning the parts to Scotland to finish the Jacobite season. The loco then went to the Keighley and Worth Valley Railway for a short stay, and whilst there it was fitted with the by-then mandatory Train Protection and Warning System. This was followed by crown stay, small boiler tube and superheater element replacement at Carnforth. Early in 2004 the cylinder liners were replaced, and all this work was to extend the loco’s mainline certification until December 2007.
After the work at Carnforth, a railtour was successfully operated with a double circular journey via Lancaster, Preston, Hellifield and Carnforth. The K1 was then put on display at the NRM’s Railfest, leaving York on 26th May, bound again for Fort William via Carnforth. In June the K1 deputised for the B1, which had flue problems, but on 17th June the K1 itself was in difficulties with a hot driving axle box. The loco, without tender, was brought to Carnforth by road, the axle was dropped out and the box remetalled in time for the loco to return to Scotland and resume service on 1st July. The remainder of the season passed without further problems and the loco was back at Grosmont in time for a Whitby–Middlesbrough special on 2nd December.
In 2005 the loco was made ready for a special to celebrate the 40th anniversary of the closure of what is now the NYMR. The special train ran from Whitby to Pickering and return. The choice of the K1 was appropriate because No. 62005, with the K4 (by then numbered 3442), had in effect performed the closing ceremony in 1965.
The 2005 season in Scotland saw the beginning of 7-day/week running to cope with public demand. Things went smoothly until, almost at the close of the season, at the end of a 22-day continuous spell of running, a crack was found in a similar position to that found in 2003, but on the other side of the boiler. An on-site repair was attempted by Ian Riley’s men, but it was realised that a boiler lift would be necessary to get to the root of the problem. So, with great ignominy, the K1, with con rods removed, was towed cold the 287 miles back to Carnforth.
The West Coast Railway Company was willing to give priority to the boiler repair and, bearing in mind that a lot of work had already been done at Carnforth in 2004, a decision was made to carry out as much as possible of a full mechanical overhaul whilst the boiler was out of the frames. Non-destructive testing revealed the need to replace the firebox outer backplate. The desire to have the loco available for the 2006 Jacobite season gave urgency to the acquisition of this item. Alan McEwen had never made such a large backplate, but he was the only boilermaker who could promise delivery on time. The plate was delivered two weeks early, at the end of March, and fitted by WCRC by the end of April; the boiler was back in the frames and steamed by the end of May. Final fitting out, installation of OTMR (on-train monitoring and recording) equipment, a loaded test run and Vehicle Acceptance Body approval were completed by early July, and the locomotive then left for yet another season in Scotland.
After a successful season the loco then returned to Grosmont to hurriedly take over the NYMR’s last few days of running to Whitby before the start of this winter’s maintenance work. A tight timetable in 2005-6 meant that the pony was not overhauled so this was tackled along with attention to the drawbar, intermediate buffers and the regular piston and valve examinations.
The loco entered traffic on the NYMR early in the spring for Esk Valley line crew training and to haul the Easter Whitby trains. After the 40th anniversary "3 Dales Railtour" in May, 62005 returned to Fort William for another full Jacobite season.
Cylinders: (2) 20” x 26”, with 10” piston valves
Heating surface of firebox: 168 sq.ft.
Grate area: 27.9 sq.ft.
Boiler pressure: 225 p.s.i.
Boiler type: Round topped (no. 29780)
Injectors: (2) Davies and Metcalf monitor 10mm live steam
Brake type: Vacuum and steam
Tractive effort: 32,081 lbs.
Length over buffers: 59’ 10”
Weight (full), engine: 66t 0c
Maximum axle load: 19t 4c
Water capacity: 4,200 gallons
Coal capacity: 7t 10c
Maximum speed: 50 mph (45 mph tender first)
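As a cross-check on the figures above, the quoted tractive effort can be reproduced with the standard 85%-boiler-pressure formula. The minimal sketch below assumes a driving-wheel diameter of 62 in (5 ft 2 in), which is not listed in the table, so treat that one input as an assumption rather than part of the published specification.

```python
# Sketch: reproducing the quoted tractive effort from the figures above.
# Standard formula for a two-cylinder locomotive: TE = 0.85 * P * d^2 * s / D
#   P = boiler pressure (psi), d = cylinder bore (in),
#   s = piston stroke (in),    D = driving-wheel diameter (in).
# ASSUMPTION: D = 62 in (5 ft 2 in) -- not given in the table above.

def tractive_effort(pressure_psi, bore_in, stroke_in, wheel_dia_in, factor=0.85):
    """Nominal tractive effort in pounds."""
    return factor * pressure_psi * bore_in ** 2 * stroke_in / wheel_dia_in

te = tractive_effort(pressure_psi=225, bore_in=20, stroke_in=26, wheel_dia_in=62)
print(f"{te:,.0f} lbs")  # -> 32,081 lbs, matching the table
```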
Grinding Is High-Tech
Grinding is necessary to machine certain new materials, and new grinding techniques will expand the applications for this technology
By Stuart C. Salmon
Advanced Manufacturing Science and Technology
A process employed by Neanderthals ages ago, grinding has stood the test of time, and is as important to our survival today as it was then. One might think that we would know all there is to know about grinding, but nothing could be farther from the truth.
Grinding has branched into two well-defined areas; the first is abrasive machining, and the second is precision grinding. Abrasive machining is the range of processes that allows the removal of significant stock. It's akin to large-chip machining processes such as milling, turning, and broaching, whereas precision grinding belongs to the more-traditional realm of creating extremely precise and accurate forms, sizes, and surface finishes, with minimal stock removal.
Grinding and abrasive machining processes are essential to today's industries, not only because of the need for higher levels of precision, but because of developments in materials technology. Next-generation materials are being developed that can only be machined using abrasive technologies; these are not just difficult-to-machine metals, but ceramics, cermets, and whisker-reinforced polymers. Indeed, grinding and abrasive processes will be around for a long time to come, and it will behoove manufacturing to recognize and understand the technology. Unfortunately, we still rest in our ruts of what we know best. The traditional approach to machining a component tends to lead us to try to do the same old things better. Content in their comfort zone, the enemies of the latest manufacturing technology are those who have been successful with the old methods, and are therefore reluctant to change, or even consider a different way.
The better method—economically, ecologically, and technically—may be an alternative technology. Laser, waterjet, wire EDM, and fine blanking can all produce similar part configurations in sheet materials. But depending on the material, the accuracy, the surface integrity, and the production rate, one process might prove superior to another in certain circumstances. Similarly, parts that are turned, milled, and broached can be ground, oftentimes with results that are superior to the traditional approach. Beginning at the grassroots of schools and colleges, there needs to be a change in how manufacturing processing is taught. The approach must be more on how to make "stuff" rather than: "if the part is round, it had better go on a lathe" or, "if it needs a high degree of surface finish, then it had better be ground." There is a need to move away from the "process-oriented" way of teaching to the "manufacturing-methods" way of teaching, where the total cost (economically, ecologically, and technically) is the overriding factor.
The change from large-chip to small-chip machining is beginning to show up in the multiaxis machining centers that once held mills and drills and reamers in their tool banks, and now store grinding wheels and have programmable diamond-roll-dressing cycles. Admittedly, one has to be aware that a milling machine is not a grinding machine. The large-chip industry is hailing high-speed machining as a "new world," whereas the grinder has been running at 6000 fpm (30 m/sec) as a "normal" speed forever. There are production-grinding installations running 24,000 fpm (120 m/sec) on grinding machines today, without great fanfare. But it is this area, where the rotational speed of the spindle for a milling cutter versus a grinding wheel is so different, that the user must beware.
The dominant frequency of vibration in a milling operation is typically linked to tooth engagement. A 4" (100-mm) diam milling cutter with eight inserts, running at, say, 475 fpm (2.4 m/sec), transmits a vibration of approximately 60 Hz through the system. The dominant frequency of vibration in a grinding machine typically originates at the spindle, at once every wheel revolution. A 4" (100-mm) diam plated grinding wheel running at 9000 fpm (45 m/sec) turns at roughly 8600 rpm, and so transmits a vibration of approximately 143 Hz. This means that the vibrational stability range of the multiprocess machine tool needs to be particularly broad to accommodate cutters as well as grinding wheels. It's also important for those who take an old vertical-spindle CNC milling machine and replace the machining spindle with a grinding spindle, believing they have made themselves a jig grinder, to note that the grinding may not turn out quite as well as they initially thought.
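Both figures follow directly from the quoted diameters and surface speeds. A minimal sketch of the arithmetic, with all inputs taken from the sentences above:

```python
import math

def spindle_rpm(surface_fpm, dia_in):
    """Spindle speed (rev/min) for a given surface speed and tool diameter."""
    circumference_ft = math.pi * dia_in / 12.0
    return surface_fpm / circumference_ft

# Milling: dominant vibration at the tooth-passing frequency.
mill_rpm = spindle_rpm(surface_fpm=475, dia_in=4.0)    # ~454 rpm
tooth_passing_hz = mill_rpm / 60.0 * 8                 # 8 inserts -> ~60 Hz

# Grinding: dominant vibration once per wheel revolution.
wheel_rpm = spindle_rpm(surface_fpm=9000, dia_in=4.0)  # ~8,594 rpm
once_per_rev_hz = wheel_rpm / 60.0                     # ~143 Hz

print(f"milling tooth-passing frequency: {tooth_passing_hz:.0f} Hz")
print(f"grinding once-per-rev frequency: {once_per_rev_hz:.0f} Hz")
```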
Grinding wheels do not work well under conditions of excessive vibration. Not only do they tend to wear rapidly, but workpiece surface finish suffers. Also, when combining machining and grinding on the same machine tool, the fluid type and filtration system must be considered. With respect to tool life, milling is better performed using minimum quantity lubrication (MQL). On the other hand, the proper application of a large flow of fluid through a nozzle (1–1.5 gpm for every horsepower in the grind—or 0.075–0.125 L/sec for every kW in the grind—is a good rule of thumb), where the speed of the fluid matches that of the grinding wheel periphery, is critical for a grinding operation. Consider also the filtration of the fluid, which may contain a mix of grinding swarf and abrasive particles, as well as large chips from the milling operation and the contaminated MQL fluid. Both the fluid-application method and the filtration system need to be all-encompassing. The MQL fluid must be compatible with the flood fluid used in the grinding operation, so as not to cause a breakdown in the bulk fluid's stability.
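As a worked example of that rule of thumb (the 20 hp grind and 4" wheel here are illustrative assumptions, not figures from any particular installation), matching the jet speed to the wheel periphery fixes the nozzle bore:

```python
import math

GPM_PER_HP = 1.25      # midpoint of the 1-1.5 gpm per grinding-horsepower rule
GPM_TO_M3S = 6.309e-5  # 1 US gal/min in cubic metres per second
FPM_TO_MS = 0.00508    # 1 ft/min in metres per second

def nozzle_bore_mm(grind_hp, wheel_surface_fpm):
    """Round-nozzle bore (mm) so the jet speed matches the wheel periphery."""
    flow_m3s = grind_hp * GPM_PER_HP * GPM_TO_M3S
    jet_ms = wheel_surface_fpm * FPM_TO_MS
    area_m2 = flow_m3s / jet_ms          # continuity: Q = A * v
    return 2.0 * math.sqrt(area_m2 / math.pi) * 1000.0

# ASSUMED example: a 20 hp grind with the wheel running at 9,000 fpm.
print(f"{nozzle_bore_mm(20, 9000):.1f} mm bore")  # -> ~6.6 mm
```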
To avoid wheel dressing, plated superabrasive wheels are often employed in multiprocess, multiaxis machining centers. These wheels have no porosity, and can "pump" far less grinding fluid through the arc of cut than an equivalent vitrified grinding wheel. Where a vitrified grinding wheel can pump 50–60 gpm (roughly 3–4 L/sec) through the arc of cut, a plated wheel, even one running at perhaps four times the speed of a vitrified wheel, can only pump 2 or 3 gpm (roughly 0.15–0.2 L/sec) through the arc of cut. This reduction in flow not only decreases the cooling effect of the fluid, but also the flushing action of the fluid as it leaves the wheel periphery after the arc of cut. Given that fact, there are many installations blasting large volumes of straight oil at a plated grinding wheel, in the hope of somehow pushing more fluid through the cutting zone by the sheer force exerted by the pressure of the fluid. In this case, excessive foaming and—worst of all—misting occurs that can create both fires and health hazards in the workplace. Cryogenics are a good alternative in this area: not simply liquid nitrogen, but liquid nitrogen inoculated with lubricating oil droplets. The proper fluid application can often lead to reduced costs and a cleaner, safer working environment.
As grinding wheel speeds increase—and they will continue to increase—plated superabrasive grinding wheels will be and are becoming more popular. The higher the wheel speed, the lower the grinding forces, the longer the wheel life, and the better the surface finish—an all-positive direction. Plated wheels can and have run at 50,000 fpm (250 m/sec) in the laboratory without fear of the wheels bursting. As wheel speeds approach sonic speed (the speed of sound is approximately 67,000 fpm or 330 m/sec), wheel bursting is not the issue—"wheel-off" is the concern. Should a bearing fail or a shaft break, then the wheel will come loose and free, in one piece, with its "single layer" of abrasive intact and designed to cut efficiently. Wheel guarding has to be properly addressed, and such machines should be used in areas where only fully automated operations are being employed, so as to avoid human injury or loss of life should the worst occur.
The next generation of grinding machines will be more extensively automated with respect to wheel changing, wheel dressing (if necessary), part manipulation, and grinding-fluid control. The machines will have a far-smaller footprint than the mammoth cast-iron and V-flat guideway machines of the 1940s and 1950s, some of which are still around today. Hydrostatic bearings, magnetic bearings, linear motors, and epoxy-concrete and glass-fiber-reinforced plastic bases are proven technologies that are being incorporated into some of the next-generation machine-tool designs. The control systems will be more user-friendly and based upon the level of the operator, be it the production engineer designing or researching a process cycle, or an operator needing only to make minor corrections or adjustments once the process is running.
The Smart machine is thought to be the ultimate answer, of course. And indeed, we can sense much of what is going on in the grinding process. It's easy to control and maintain spindle speeds and slideway speeds and positions. We can sense process forces and monitor power during the grinding operation. There are in-process gaging and grinding-burn detection systems, autobalancing, and both wheel and dresser wear-compensating systems. The one fly in the ointment, however, is mechanical lag. Sensors and feedback systems can make computational calculations and operate at electronic speeds. Yet once a decision has been made as to what needs to be done, say increase wheel speed or decrease a table feed, the lag caused by the inherent inertia in the mechanical system is too great to make the computer-generated change in time. It is possible to attempt to predict errors, or to run a grinding process against a modeled algorithm that sets safe bounds within which the process must remain, but this is not always the absolute optimum. Based on the projected cost of a Smart machine, this sub-optimal outcome may be just fine.
Nanoprecision is another world of grinding altogether. The nano prefix is bandied about today to gain recognition for perceived advancement. "Put nano in front of anything and funding will waterfall your way" has been the prevailing thought. Let's put nano into perspective: Those who mill find a tolerance of 0.010" (0.25 mm) commonplace and 0.001" (0.03 mm) difficult. The decimal point only shifted one place to the left to make things "difficult." Those who grind find a tolerance of 0.0005" (0.013 mm) commonplace and 0.00005" (0.0013 mm) difficult. The decimal point shifted one more place to the left to make things "difficult". A nanometer, however, is 0.00000004" (0.0000010 mm), and the decimal point has shifted three places to the left! The world of nanoprecision is truly another world. I well remember, as an apprentice, being told that a human body gives off about 100W, so the results from a super-precision inspection machine could be drastically influenced by simply standing nearby. On this scale, a nanomachine would be sensitive to breathing. Nanogrinding machines do exist, but their design and the environment in which they operate are a far cry from the shop-floor grinders I am writing about. Nanogrinders are not a natural follow-on from the grinding machine designs of today. So we will leave them out of this article, and perhaps make room for them some other time.
The future for the production grinder is to be able to deal with higher wheel speeds and a multitude of tools that are not all necessarily abrasive tools. Able to use multiple machining fluids, the machine will be fully enclosed. The machine will be able to grind flat, round, and complex surface shapes. It will facilitate automated tool and part changing, with a level of in-process inspection not only for size, but also for surface integrity.
Grinding ceramic parts requires diamond grinding wheels—resin bond for machines that lack vibrational stability, but vitrified bond or metal bond for those that have good vibrational stability and stiffness. Grinding ceramics is a challenge, as the swarf consists of fine micron and submicron particles that require special filtration. Ceramics play havoc with emulsions, but grind well in full synthetics or straight oils. They require high stiffness in a grinder, but not high power. Ceramics creep-feed well for high stock-removal applications, and can be ground to very high levels of surface finish using both grinding and lapping. Applications for ceramics are increasing in the semiconductor, medical, and aerospace industries, as well as in high and low-tech applications—for instance lightweight armor and the likes of washerless faucets. Cermets comprise lightweight and high-strength materials that tend to work-harden, so that conventional machining and grinding are very difficult. Titanium aluminide is one such material that is beginning to be used in gas turbines and high-performance racing engines. Whisker-reinforced materials are easier to grind than ceramics and cermets, whether in a metal or a polymer matrix. The directional properties of the fibers, however, often lead to part distortion post-grinding (as opposed to the grinding process being the direct cause).
Hard-lubricant coatings on plated wheels have been shown to extend their life by at least 30% in sticky and gummy materials like stainless steels and nickel- and cobalt-based superalloys. The coating is designed more to prevent wheel loading than to decrease the wear of the abrasive particle.
The move to more superabrasive applications is increasing. Grinding machines are becoming smaller, with higher power and better stiffness and vibrational stability. Spindle speeds are still increasing, and large and small-chip machining are being incorporated into one machine tool. CNC controls are being designed to accommodate the unskilled operator. With the lack of apprenticeships and fewer prospective journeymen learning the "hands-on" skills of abrasive machining and precision grinding, there is an urgent need to control the process via a computer using feedback from sensors. Small-batch quantities and one-off toolroom applications, performed manually, will be more and more expensive due to the cost and time required to program a CNC control. More user-friendly CNC controls mean more-sophisticated CNC controls to accommodate the less-skilled man-machine interface. Linked to the Internet, future control systems will be able to download grinding cycles for given workpiece/wheel combinations, as well as interrogate experts, both human and synthetic, to refine or troubleshoot grinding operations around the world. With the demand for higher precision and the need to machine the unmachinable, grinding and abrasive processing will be around for a long time to come.
This article was first published in the August 2009 edition of Manufacturing Engineering magazine.
30 November 2007
A few days ago I was prompted by a TalkAntietam query to look into the strength of the Federal Artillery at Antietam on 17 September 1862. In particular, that of the long-range guns overlooking the battlefield from the heights east of Antietam Creek.
Those guns were largely responsible for Sharpsburg’s reputation–at least among Confederate artillerymen who survived–as “Artillery Hell”. Their impact on the battle was significant, and they loom large in most battle narratives, but just how many were there?
The bulk of the long range (i.e., rifled) pieces along the ridge east of the Antietam opposite Sharpsburg were of the Fifth (V) Corps Artillery Reserve. Others of the V Corps Divisional artillery were also present and had range and sight of much of the battlefield north of the town, particularly the plateau just east of the Dunkard Church – a key spot on the battlefield which was occupied by considerable Confederate artillery early in the day.
Some of the artillery of Ambrose Burnside’s* Ninth (IX) Corps in the vicinity of the Lower Bridge could also reach Confederate artillery positions across the creek, notably their concentrations on the high ground south of Sharpsburg and on Cemetery Hill east of the town.
Other Federal artillery units were also placed, at least for part of the day on the 17th, east of the creek, but were not significantly deployed or engaged there. These included guns of the Second and Sixth Corps**, and Pleasonton’s Horse artillery (ordered across the creek early on the 17th). I have not included discussion of them here.
The real heavy hitters, as shown on several of Carman’s maps, were batteries of Fitz John Porter’s V Corps deployed along the crest of the ridge immediately east of the creek on either side of the Boonsboro Pike as it crosses the Middle Bridge. It looks like they were in place all day on the 17th.
They were (from north to south) commanded by Wever, Langner, Martin, Hazlett, Waterman, and Kusserow north of the bridge, and Von Kleiser, Weed, and Taft to the south. Also south of the pike, but further removed to the east and effectively out of action, were the remaining V Corps batteries of Randol, Weed, and VanReed.
Among these V Corps units were 16 of the big 20-pounder Parrotts, and another 8 rifled guns: 10-pounder Parrotts and 3-inch Ordnance Rifles. A total of 24 guns I’d characterize as long-range. Mixed in were 36 various smoothbores (Napoleons, mostly) of lesser effective range.
Continuing along the creek from where we left off were the IX Corps batteries of Clark and Simmonds. At least during the morning, the batteries of Durell, Cook, Benjamin, Muhlenberg, Roemer, and McMullen completed the line of Federal artillery roughly following the east bank of the Antietam.
Most of these crossed Burnside’s Bridge to the west side of the creek with the infantry between about 1 and 3 pm.
Simmonds and Benjamin between them boasted the other 6 20-pound Parrotts at Antietam. Also in the IX Corps were 33 smaller rifled guns, for a total of 39 long-range pieces. The infantry was also supported by about 15 smoothbores.
For what it’s worth, then, I count about 63 long-range guns east of the creek until at least 1 pm. Certainly, not all were engaged or had targets all day, so you can make of the count what you will.
Details for each of these Federal batteries are shown below:
V Corps Artillery Reserve – On the ridgeline in the vicinity of the Middle Bridge
LCol. William Hays, commanding
1st Battalion New York Light Artillery, “German Battalion”
Maj. Albert Arndt
– 1st Battalion New York Light Artillery, Battery A
Lt. Bernhard Wever; (4) 20-lb Parrotts
– 1st Battalion New York Light Artillery, Battery B
Lt. Alfred von Kleiser; (4) 20-lb Parrotts
– 1st Battalion New York Light Artillery, Battery C
Capt. Robert Langner; (4) 20-lb Parrotts
– 1st Battalion New York Light Artillery, Battery D
Capt. Charles Kusserow; (6) 32-lb Howitzers
1st United States Artillery, Battery K
Capt. William Graham; (4) Napoleons
4th United States Artillery, Battery G
Lt. Marcus Miller; (6) Napoleons
New York Light Artillery, 5th Battery
Capt. Elijah Taft; (4) 20-lb Parrotts
1st Division, V Corps Artillery – north of the pike
1st Rhode Island Light Artillery, Battery C
Capt. Richard Waterman; (6) Napoleons
5th United States Artillery, Battery D
Lt. Charles Hazlett; (2) Napoleons, (4) 10-lb Parrotts
Massachusetts Light Artillery, Battery C
Capt. Augustus Martin; (6) Napoleons
2nd Division, V Corps Artillery – south of the pike
1st United States Artillery, Batteries E & G
Lt. Alanson Randol; (4) Napoleons
5th United States Artillery, Battery I
Capt. Stephen Weed; (4) 3-in Ordnance Rifles
5th United States Artillery, Battery K
Lt. William VanReed; (4) Napoleons
….. total V Corps, east of the Antietam (60) guns:
(16) 20-lb Parrotts
(8) 10-lb Parrotts & 3-inch Ordnance Rifles
….. subtotal, (24) long-range pieces
(36) Napoleons, howitzers, other smoothbores
IX Corps (HQ Reserve) Artillery – vicinity of the Lower Bridge
3rd United States Artillery, Batteries L and M
Capt. John Edwards Jr.; (4) 10-lb Parrotts
2nd New York Artillery, Battery L, “Flushing Battery”
Capt. Jacob Roemer; (6) 3-in Ordnance Rifles
1st Division, IX Corps Artillery
2nd United States Artillery, Battery E
Lt. Samuel Benjamin; (4) 20-lb Parrotts
Massachusetts Light Artillery, 8th Battery
Capt. Asa M. Cook; (4) 12-lb James Rifles, (2) 12-lb Howitzers
2nd Division, IX Corps Artillery
4th United States Artillery, Battery E
Capt. Joseph Clark, Jr., Lt. George Dickenson, 1Sgt. C.F. Merkle; (4) 10-lb Parrotts
Pennsylvania Light Artillery, Battery D
Capt. George W. Durell; (6) 10-lb Parrotts
3rd Division, IX Corps Artillery
9th New York Infantry, Company K, “Whiting’s Battery”
Capt. James Whiting; (6) Naval Howitzers
5th United States Artillery, Battery A
Lt. Charles Muhlenberg; (6) Napoleons
Kanawha Division, IX Corps Artillery
Ohio Light Artillery, First Battery
Capt. James McMullin; (6) 14-lb James Rifles
Kentucky Light Artillery, Simmonds’ Battery
Capt. Seth Simmonds; (2) 20-lb Parrotts, (3) 10-lb Parrotts, (1) 12-lb
….. totals IX Corps, east of the Antietam (54) guns:
(6) 20-lb Parrotts
(33) 10-lb Parrotts, 3-in Ordnance Rifles, & James Rifles
….. subtotal, (39) long-range pieces
(15) Napoleons, howitzers, other smoothbores
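If you’d like to check that count of 63 yourself, here is a minimal tally, in Python, of just the rifled pieces from the battery details above (batteries keyed by their commanders):

```python
# Long-range (rifled) pieces only, per the battery details above.
long_range = {
    # V Corps
    "Wever": 4, "von Kleiser": 4, "Langner": 4, "Taft": 4,  # 20-lb Parrotts
    "Hazlett": 4,   # 10-lb Parrotts
    "Weed": 4,      # 3-in Ordnance Rifles
    # IX Corps
    "Benjamin": 4,  # 20-lb Parrotts
    "Edwards": 4,   # 10-lb Parrotts
    "Roemer": 6,    # 3-in Ordnance Rifles
    "Cook": 4,      # 12-lb James Rifles
    "Clark": 4,     # 10-lb Parrotts
    "Durell": 6,    # 10-lb Parrotts
    "McMullin": 6,  # 14-lb James Rifles
    "Simmonds": 5,  # (2) 20-lb + (3) 10-lb Parrotts
}
print(sum(long_range.values()))  # -> 63
```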
Most of my data is as collated by Johnson and Anderson (Artillery Hell, 1995) with occasional updates from other sources. All of the units, commanders, and weapons discussed above can be viewed in some further detail at Antietam on the Web.
Unit locations are per Carman’s maps (Atlas of the Battlefield of Antietam – found online at the LoC). My versions of those maps with some clickable details are also on AotW.
* IX Corps was nominally commanded by Jacob Cox, though in practical fact by Burnside, his ‘wing’ commander.
** In a 1992 article in Field Artillery Magazine called The Creation of Artillery Hell (pdf), Major Albert A. Mrozek, Jr. wrote that Artillery Chief Hunt positioned
… batteries of Parrotts and three-inch rifles from V and VI Corps on the bluffs east of Antietam Creek (map Positions H and K). These 68 long-range guns had fields of fire that enfiladed parts of the Confederate lines. More importantly, they covered most of the hills and ridges that were likely Confederate artillery positions.
The severe counterbattery fire inflicted by these guns caused at least one Confederate battalion commander to remember it as Hell. Stephen D. Lee in a letter to Colonel Alexander said, “Pray that you may never see another Sharpsburg. Sharpsburg was artillery Hell” (quoting Sears’ Landscape Turned Red).
I don’t know which Sixth (VI) Corps batteries these might have been, where they were posted, or for what period; so don’t know where his number of 68 guns for those units derives. Carman has the first VI Corps units reaching the field after 10 AM, and shows none of their artillery deployed at all on his lovely maps until 3:30pm in the vicinity of the Miller Cornfield. I’d appreciate hearing from anyone with more detailed knowledge of the VI Corps artillery operations on the 17th. | <urn:uuid:af08825a-f81c-4ca2-b5dc-bb7d23955c2f> | CC-MAIN-2015-35 | http://behind.aotw.org/2007/11/30/federal-artillery-east-of-the-antietam/ | s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440644066586.13/warc/CC-MAIN-20150827025426-00158-ip-10-171-96-226.ec2.internal.warc.gz | en | 0.93902 | 2,162 | 2.5625 | 3 |
Effects of Tourism on the Environment
Tourism is a big part of not just the United States' economy, but of every country's. It is constantly growing: according to the United Nations World Tourism Organization, more than 800 million people travel either internationally or domestically each year (Goodstein, 2006). The world of tourism is always evolving, and the technology of travel has made it easier and more intriguing to travel. The number of people traveling is expected to grow by as little as 4% each year over the next twenty years, meaning the number of travelers could reach a staggering one billion people a year (Goodstein, 2006).
The financial gain that comes from this is great for a country. Rwanda, for example, charges 375 dollars per person to go out and see the silverback gorillas indigenous to the area. This works out to about 1 million dollars a year for the government, helping to defray the cost of damages that may be directly caused by tourism. Other places that do this include the Galapagos Islands, which generate in excess of $38 million a year, and Belize, where the government levies a conservation tax of $3.75 on every foreign visitor leaving the country. The tax in Belize generates almost 750,000 dollars a year to help the government with conservation.
With the advantages of financial gain come some drawbacks. The environment is said to take a big hit from the effects of tourism. The claimed effects range widely; one is lake water becoming impure due to nutrients added to it while people are swimming in it. The example used in the journals was the bodies of water around Fraser Island in Australia, where it was suggested that tourists may be bringing nutrients in directly from another area (i.e., sand with different nutrients carried on their feet from another beach, or water with different nutrients in it). The other claim, which I find quite entertaining, was that nutrients in human urine have a negative effect on organisms such as coral or fungi in the lake (Hadwen, Bunn, Arthington, & Mosisch, 2005). The study showed neither short-term nor long-term effects on the nutrient levels of the lake water from people being in and out of the lake.
In the Mount Everest area of Nepal we see a much bigger problem that the government feels is directly related to tourism. Deforestation has become a concern as new visitors come to the area either to climb the mountains or simply for the scenery. The problem that soon arose was the forest being cut down for firewood, for both the native Sherpas and the tourists. The amount of firewood used by the various groups is quite large; on some days the amount used in the area can exceed 200 tons. The other problem that soon arose was the Sherpas expanding their own homes and building motels to accommodate all the new guests coming to the area. Although most of the buildings are made of stone, a good number of logs are still used in their construction. In 1976, with deforestation a big concern among all of the people of the area, the region was turned into Sagarmatha National Park, and the government made laws and regulations on how much wood could be used by everyone, including the native Sherpas.
The Sherpas soon started going to forests outside the park and bringing wood in to supply firewood and building materials. As soon as the areas exporting the lumber became worried about the same deforestation issue that the National Park had faced, they too stopped exporting to the Sherpas. The problem of deforestation was probably not as serious as it was thought to be in the beginning. In 1981, Charles Houston, a member of one of the earliest expeditions in the region, took a return trip to the mountain and compared his pictures from his first visit in 1950 to the way the scenery looked in the present day. The comparison showed two things: the area had a lot more forest than had been assumed, but there was slight degradation in places. So, to a point, the forestry regulations were working for the National Park (Stevens, 2003).
In the rainforests of the Wet Tropics of Queensland World Heritage Area, a study was done to see what effects tourism had on the local rainforest. The study covered all layers of the rainforest, from the canopy to the ground. As in the lake study mentioned earlier, the researchers compared a low-traffic area (fewer than 20,000 annual visitors) with a high-traffic area (more than 20,000 annual visitors). Over the course of the study they found that the canopy of the disturbed area was the same as the control; the only real difference between the two was soil compaction in the areas where walking paths had been established. Probably the most severe effects the study showed were soil erosion on slopes where mountain-bike trails had been formed, and lower seed germination in the areas immediately around both walking and bike paths (Turton, 2005).
In today's society a lot is being done to help the environment cope with the effects of tourism. Although in most cases the effects of tourism are not as substantial as first thought, there are still many organizations that believe tourism is drastically hurting the world's ecosystems. A lot of smaller companies are pursuing 'green', or environmentally friendly, certification through a multitude of these organizations. On the islands of Bocas del Toro, a lot is being done by local motel owners who want to save the place they live in. Their biggest concern is the use of water: they ask their guests to reuse towels, irrigate their lawns in the cooler part of the day, and have installed low-flush toilets and low-flow showerheads throughout the motels (Goodstein, 2006).
By installing these water-saving measures, the islands of Bocas del Toro are applying a concept called the Precautionary Principle. The basis of the precautionary principle is to take preventive action now to stop future problems. The principle is followed most closely in Europe, where it was developed, and it was written into almost all European environmental laws when it was introduced in the 1980s. The United States applies the general idea, but mostly to science and health-safety issues. Canada is implementing a loose version of it with its commercial fishing companies, but nothing has been implemented for the tourism industry (Fennell & Ebert, 2004).
There have been a lot of rules and regulations implemented to deal with the adverse effects that tourism may have on the environment. As the studies in this paper show, the effects may not be as bad as once believed, but with the number of tourists growing at its current rate, there may be a problem in the very near future. Many of the areas examined are very diverse, and that diversity is what people want to see; the problem is that such areas have not been visited much and are not accustomed to the pressures that arise from tourism. The number of people visiting many of these areas is growing and has not yet reached its peak. With luck, the areas may never see any severe damage and will be enjoyed by tourists in their original state for years to come.
Fennell, D., & Ebert, K. (2004). Tourism and the Precautionary Principle. Journal of Sustainable Tourism, 12(6), 461-479. Retrieved Monday, October 09, 2006 from the Academic Search Premier database.
Goodstein, C. (2006). Traveling Green. Natural History, 115(6), 16-19. Retrieved Monday, October 09, 2006 from the Academic Search Premier database.
Hadwen, W., Bunn, S., Arthington, A., & Mosisch, T. (2005). Within-lake detection of the effects of tourist activities in the littoral zone of oligotrophic dune lakes. Aquatic Ecosystem Health & Management, 8(2), 159-173. Retrieved Monday, October 09, 2006 from the Academic Search Premier database.
Stevens, S. (2003). Tourism and deforestation in the Mt Everest region of Nepal. Geographical Journal, 169(3), 255-277. Retrieved Monday, October 09, 2006 from the Academic Search Premier database.
Turton, S. (2005). Managing Environmental Impacts of Recreation and Tourism in Rainforests of the Wet Tropics of Queensland World Heritage Area. Geographical Research, 43(2), 140-151. Retrieved Monday, October 09, 2006 from the Academic Search Premier database.
THE NEW GENERATION OF SOVIET ARMOR VS. TIGERS
by Dmitry Pyatakhin
Edited by Joe Koss & George Parada
The first appearance of the Tiger on the Eastern Front was unsuccessful. The first Tigers were issued to the 1st platoon of the 502nd Battalion of Heavy Tanks (Schwere Panzer Abteilung 502). On the 29th of August 1942, the four Tigers arrived at the Mga railway station near Leningrad. Early that day, the tanks were unloaded and prepared for battle. At 11:00 AM, the Tigers went into their battle positions. Major Richard Merker commanded the platoon, which included four Tigers, six PzKpfw III Ausf. L and J, two infantry companies and several trucks of the technical support unit. A representative of the Henschel firm – Hans Franke – accompanied this unit in a VW Kubelwagen right behind the first Tiger. After the attack, the error of trying to use the heavy Tiger on ground so soft was realized, for the tanks' maneuverability was badly hampered.
The Russian infantry retreated, and their artillery opened heavy fire to cover the troops. Major Merker's unit, divided into two groups, started to attack along two parallel side roads. Very soon the first Tiger was abandoned because of transmission failure. The second one was abandoned a few minutes later after engine failure. In spite of Russian fire, the Henschel representative started to inspect the tanks, but very soon Merker came by with his Tiger and said that the third tank was disabled because the steering control had failed. During the night, all three damaged Tigers were evacuated using Sd Kfz 9 prime movers – three per tank. Fortunately for the Germans, the Russians could not take any action to capture the disabled tanks. After the inspection, spare parts for the Tigers were delivered by plane from the Henschel plant in Kassel, and on the 15th of September all four Tigers were repaired and ready for action.
The second action of the Tigers was no better than the first. On the 22nd of September, four Tigers, supported by PzKpfw III tanks, were to accompany the 170th Infantry Division in attacking the 2nd Soviet Army. The terrain was very bad, the ground was too soft after the rains, and Major Merker opposed the use of Tigers in this operation. After a direct order from Hitler, the Tigers went into battle. Very soon after the attack began, the first Tiger received a direct hit in the front armor plate. The shell did not penetrate, but the engine stopped and there was no time to restart it. The crew abandoned the Tiger and threw hand grenades into the fighting compartment. The other three Tigers reached the Russian trenches, but very soon were damaged by Russian artillery crossfire as they lost maneuverability on the soft ground. Later on, the three Tigers were evacuated, and German engineers destroyed the fourth in order to prevent its capture.
Soviet SU-122 medium assault gun armed with 122mm howitzer.
General Guderian: "It was not only the heavy losses, it was the loss of secrecy and surprise in the future".
The Tigers were successful in their third battle. On the 12th of January 1943, the 502nd supported the 96th Infantry Division opposing an attack of Russian tanks. Four Tigers destroyed 12 T-34/76 tanks and the rest of the Soviet tanks were forced to retreat.
On the 16th of January 1943, the Russians captured their first Tiger during a German attack near the Shlisselburg on the Leningrad front. The captured tank was immediately delivered to the Kubinka Proving Grounds and inspected by Soviet Engineers. The Tiger was no longer a new secret weapon.
In early 1943, the Red Army had no weapon comparable to the firepower of the Tiger‘s 8.8cm KwK 36 L/56 gun or its heavy armor. For close combat, the Red Army Infantry had the PTRD-41 and PTRS-41 anti-tank rifles which had a 1.2 meter-long barrel firing 14.5 mm shells with tungsten cores. This weapon was not able to knock out the Tiger, but, in the right hands, could destroy the tank’s optic devices or damage the suspension. Nevertheless, it was effectively useless against the heavy German tanks, and later on Soviet troops used captured Panzerfausts.
Soviet IS-2 heavy tank armed with 122mm gun.
Artillery was the main weapon of the Red Army. Not all Russian cannon types could penetrate Tiger’s armor, but concentrating the fire of all possible guns on the tanks could heavily damage them, even to the point of stopping the engine or detonating the ammunition. The 76.2mm ZIS-3 cannon, using anti-tank shells, could penetrate Tiger side armor (at 300-400 meters) or destroy the running gear, while it couldn’t penetrate the frontal armor. Because of poor maneuverability, the Tiger could be an easy target for an anti-tank gun in defense. Only the 85mm anti-aircraft gun and especially 122mm A-19 cannon could destroy the Tiger at extended ranges. The Soviets made many anti-tank guns, up to 100mm in bore diameter, by the end of the war.
Otto Carius: "Even the Americans, whom I would get to know very well on the Western Front later on, cannot be compared with the Russians. The Ivans fired on our positions with all kinds of artillery, from light mortars up to heavy howitzers. We were not able to come out from our shelters in order to check our Tigers. It is no surprise that the Russians easily broke our front line after such heavy fire".
Otto Carius: "The destruction of an anti-tank gun can cost a couple of tanks, because the guns are small, well-concealed, and waiting for the tanks in ambush. Usually it takes [just] the first shot. If the gunners are skilled, they can knock out the Tiger. If they do not destroy your tank with the first shot, you will have no time to react before you receive the second shell."
Michael Wittmann: "The anti-tank gun is more difficult to find than the tank. The gun can fire several shots before I find it"
The Red Army's field artillery provided the main antitank support for the infantry. When the Tiger I first appeared on the Eastern Front, the Red Army had the T-34/76 and different models of the KV-1. Until the autumn of 1943, the Red Army had only two types of SP guns: the SU-122 Medium Assault Gun and the SU-76 Light Self-Propelled Gun. None of these was effective against the Tiger at ranges over 500 meters. The Tiger had a great advantage over long distances. During the famous tank battle near Prokhorovka, the Soviet commanders tried to take advantage of the greater mobility of the T-34 and the assault guns by closing in to short ranges and shooting at the Tiger's thinner side armor. The result of the battle was that the older Soviet tanks fought the new German tanks on equal terms because of the correct choice of the battlefield. This was the great maneuver of Col. Gen. Rotmistrov and Lt. Gen. Zhadov. The battle ended with almost equal losses, but the Soviets kept more tanks in reserve for the counterattack, while the Germans were unable to continue with their offensive.
In February of 1944, the T-34 was rearmed with the new long-barreled 85mm S-53 gun, and then in mid-1944 with the 85mm ZiS-S-53. This new gun could penetrate the side armor of the Tiger I from a distance of 800 meters and the turret side from a distance of 600 meters. It was not enough – as before, the Tiger could destroy the T-34 from a distance of 1,500 to 2,000 meters. The 85mm AA gun was the anti-aircraft gun used without any special modifications. The S-53 was a modified version designed by F. F. Petrov's Design Bureau to be mounted in the turret of the T-34-85. The ZiS-S-53 was a modified S-53 designed by Grabin's Design Bureau in order to simplify the gun and reduce its price; the ballistics of both guns were the same.
Soviet ISU-152 heavy assault gun armed with 152mm howitzer.
From early 1943 to mid-1944, the main opponents of the Tiger on the Eastern Front were the assault guns based on the T-34 and KV-1 chassis. When it was discovered that the existing SU-76 and SU-122 types could not penetrate the Tiger's armor even at distances under 1,000 meters, the Soviets decided to create a new assault gun, the SU-85, armed with an adaptation of the 85mm anti-aircraft gun. Production of the SU-122 was stopped and the SU-85 was adopted in its place. It was later followed by the SU-100 medium assault gun. In mid-1943, the SU-152 heavy assault gun entered service. Based on the KV-1 heavy tank and armed with a 152mm howitzer, it was nicknamed Zveroboi (Animal Killer). At the end of 1943, a new assault gun, the ISU-152, based on the IS-2 heavy tank, was produced. It was armed with a very powerful 152mm howitzer. The shell of this gun could penetrate any part of the Tiger's armor and even tear the turret from the hull. This assault gun was nicknamed "Animal Hunter". The AP shell weighed 48kg, while the HE shell weighed 41kg.
Otto Carius: "The shell cut the right part of the commander’s cupola. I was not beheaded because I had bent down to light my cigarette. Suddenly the Russian assault gun appeared and I gave an order to the gunner to open fire. Kramer shot, and a second shell, from another assault gun, hit in the turret. I can not remember which way I left the Tiger. The head phones-the only thing I have from my destroyed Tiger".
Using assault guns to their maximum ability, the Red Army fought for the time it needed to develop a new tank comparable to the Tiger. At the end of 1943, the new heavy tank IS-1 was developed, and the Red Army received its first examples in February of 1944. It was followed by the famous IS-2 heavy tank. The IS (for Iosif Stalin – the Cyrillic alphabet does not have the Western "J" of Joseph Stalin) tanks had a low profile, lower than the Tiger or the Sherman. The turret and front armor plate were 100mm thick, the side armor plates 75mm. This tank was armed with a powerful 122mm D-25T gun that had a barrel five meters (16 feet) long. The IS tanks had a great advantage in comparison to the Tiger because of their well-sloped armor plates. With these tanks, the Red Army finally had armor that was better than the Tiger I and equal to the King Tiger (Tiger II) in many ways. In March of 1944, the first IS-2s were tested in action and proved their power. More than 3,000 IS-2 tanks were built up to the end of the war. In the opinion of Hasso von Manteuffel, it was the best tank of WW II.
During the war, the Soviet Union built more than 125,000 AFVs. Germany built some 89,000 AFVs and only 2,000 of them were Tigers and King Tigers. There was no chance for Germany to win the war on the Eastern Front against the power of the Red Army.
Credits / Sources:
H. Guderian: "Memories of a Soldier", Moscow Voenizdat 1962;
F. Mellenthin: "Panzer Battles 1939-1945", Moscow, AST 1998;
E. Middeldorf: "Russian Campaign Tactics and Weapons", Moscow, AST 1999;
D. Crow: "Armored Fighting Vehicles of Germany", Chancellor Press, London 1973;
J. Ledwoch: "Tiger", Wydawnictwo Militaria, Warsaw;
B. Culver: "Tiger in Action", Squadron Publication, London 1980;
M. Svirin: "IS Tanks", Moscow, Armada 1998; | <urn:uuid:a58cc27b-3a08-4175-8768-8e5b36dc7cdd> | CC-MAIN-2015-35 | http://www.achtungpanzer.com/the-new-generation-of-soviet-armor-vs-tigers.htm | s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440644065464.19/warc/CC-MAIN-20150827025425-00341-ip-10-171-96-226.ec2.internal.warc.gz | en | 0.97303 | 2,568 | 2.765625 | 3 |
The Lure of Sri Pada
by Aryadasa Ratnasinghe
The open semester to Sri Pada, the holy mountain, begins on the 'Unduvap' (December) full moon day and ends on the 'Vesak' full moon day (May) in the ensuing year. This mountain is also known as Samantakuta, Sumanakuta, Samanalakanda, Samanhela, Samangira, Medumhelaya, etc. The Christians call the mountain Adam's Peak, derived from the Portuguese Pico de Adam ('Peak of Adam').
This conical mountain is situated 16 km northeast of Ratnapura, and rises more abruptly from the lower valley than any other mountain in the island. Although it is not the highest mountain, it rises to a trigonometrical altitude of 2,243 m. (7,360 ft.) above sea level, offering an unobstructed view over land and sea, overlooking the South Central mountain ridges.
The splendid view of the tropical wilderness, with its hills, dales and plains, all luxuriantly wooded, bounded by blue mountains, fleecy clouds resting on low ground, and a brilliant sky overhead, adds to the panorama of the resplendent island. The charms of the prospect are heightened by the coolness and freshness of the air, and by the animation of the scene produced by the singing of birds, in addition to the harsh cries of the wild peacock and the jungle fowl.
From remote antiquity, the visibility of the conical mountain from vessels off-shore, to a distance of about 15 km, excited the great interest of foreigners, when the island's interior was unknown to the outside world. It was also the landmark of the sea-faring Arabs, Moors, Greeks and Persians, who came to the island to barter in gems, ivory, spices, elephants etc.
The sacred footmark atop the mountain (as most of us have seen) is a superficial hollow of gigantic size, measuring 156 cm. in length, and in width 76 cm. towards the toes and 71 cm. towards the heel. There is a belief that the actual footmark lies on a blue sapphire beneath the huge boulder upon the summit, and that what we see is only an enlarged symbolic representation. The placement of such a huge boulder is attributed to the god Visvakarma, who had done so for purposes of protection.
The summit is a small plateau, having an area of 164 sq.m., or 1,776 sq.ft. (74 X 24 ft.), according to measurement taken by Lieut. Malcolm of the British Rifle Regiment, the first European to ascend the mountain in 1816. He had signalled his arrival at the summit by firing three cannon shots from his swivel musket, into the valley below.
The sacred footmark as seen by Dr. John Davy in 1817, was ornamented with a single margin of brass and studded with a few gems. These are now not to be seen. He says, "The cavity of the footmark certainly bears a coarse resemblance to the figure of a human foot but much oversized. Whether it is really an impression is not very flattering, if not for its huge size. There are little raised partitions to represent the interstices between the toes, to make it appear a human foot."
Robert Knox, the European captive who spent 20 years (1660-1679) in the Kandyan kingdom, also left an account of the mountain and of the footmark venerated at its summit.
According to the Mahavamsa, the Great Chronicle of the Island, the first person to ascend the holy mountain Sri Pada was King Vijayabahu I (1058-1114), after he came to know that the footmark of the Buddha is to be seen atop the mountain. It is said that he had gathered this information from the pious woman Manimekhala, who, as a devout Buddhist, was living in South India. Another version is that the king had seen, in the early hours of one morning, angels plucking flowers in his garden. When questioned, one of them had said: "We are plucking flowers to worship the footmark of the Buddha atop the Samanalakanda."
The Ambagamuwa rock edict and the Panakaduwa copper plate bear witness to the royal patronage extended by King Vijayabahu, who built 'ambalamas' (rest camps) en route for the convenience of the pilgrims and also provided them with food and water. The king also built a lower 'maluwa' (place of worship) for his Hindu consort Tiloka Sundari to make her benefactions to the Hindu deity Siva, alias Iswera. Actual pilgrimage to the mountain began during the reign of Śrī Nissankamalla (1187-1196), after he ascended the mountain with his fourfold army with great faith and devotion.
There are two historic approaches to the summit of Sri Pada. The oldest is the Ratnapura path, popularly known as the 'difficult path' via Malwala, Kuruwita, Eratna and Gilimale. The last station is Palabaddala. The path runs through ascending and descending hills, deep valleys, along edge of precipices, with a river foaming beneath and, sometimes, under over-hanging rocks and along the beaten track, highly infested with leeches (blood-sucking worms). On this path, pilgrims have to walk long distances until a camp is reached.
Half way up the mountain, there is a small torrent that flows over an immense tabular mass of rock, which forms the 'Seetagangula' (stream of icy water), the parent stream of the Kalu-ganga. At this point, the scene is very impressive and the atmosphere calm. The pilgrims stop here for a break to perform their ablutions, while some bathe, some make a frugal repast of rice or bread, some rest themselves before making the steep climb, some chew betel and others chat to break the monotony of the jungle.
The itinerant Arab pilgrim Ibn Batuta alias Abu Abdallah Mohammed (1304-1377), and the Venetian traveller Marco Polo (1254-1324), had ventured to reach the summit via the Ratnapura path "to worship the sepulchre of Adam" as they believed the footmark atop the mountain to be that of Adam (the first parent of the human race). From Barberyn (Beruwala), they had followed the Kalu Ganga to the summit.
The other path is the Rajamawatha (now the Hatton path), and it came to be so known because many kings, during and after the Gampola period (1347-1412), had made their way to the mountain through that path. It began from Gangasiripura (now Gampola) via Ambagamuwa, Kehelgamuwa, Ulapangama, Horakada, Dagampitiya, Makulumulla, Hangarapitiya, and by the Laxapana pass to the summit. There is also a 'Seetagangula' on this route which is the parent stream of the Mahaveli-ganga.
The Rajamawatha was constructed by the Chief Minister Devapathiraja who served under King Parakramabahu III (1283-1293). Pilgrims travelling by train break journey at Hatton (173 km. from Colombo) on the Main Line, and continue by bus to Maskeliya and thence to the Delhousie Bazaar, from where all transport facilities cease. A serpentine gravel road leads the way to the Sama Cetiya, en route, where camping is available for cooking food and for resting. The next halt is the 'Seetagangula', where pilgrims get ready to make the ascent.
A group of pilgrims is known as a 'nade' and the chief is the 'nade gura' who is supposed to have made many visits to the holy mountain during his lifetime. A newcomer is known as 'kodukaraya' and he or she is at the mercy of the 'nade gura'. Age is no barrier to this novice.
As we see from the valley below, the upper part of the mountain is free from jungle growth. Only tundra vegetation adorns the granite surface, as such incomplete plant layers are generally characteristic to exposed sites under humid tropical conditions. Climbing this part of the mountain is risky, if not for the concrete steps now built because the surface of the bare rock is slippery at most places, where water flows from in between crevices of the rock.
Before the concrete steps were built from Indikatupana to the summit, iron railings fixed on to iron posts driven into the rocky surface, supported the pilgrims along this stretch, to make the ascent safely. It is said that these railings were fixed on the orders of Alexander the Great (BC 356-323), the Macedonian king, for pilgrims to ascend the mountain without risk to their lives.
Many pilgrims make an effort to reach the summit before dawn to witness the unique phenomenon known as the 'irasevaya' (the effulgence of the rising sun) extremely bright and splendid, as it punctures the eastern horizon like a ball of fire. Simultaneously, on the western side of the mountain slope could be seen the conical shadow of the mountain as it falls upon the valley below. Buddhists call this natural phenomena as the worship by sun-god.
The apostate Rajasinha I (1581-1592), the king of Sitawaka, in order to overcome the retribution of patricidal sin in killing his father Mayadunna of Sitawaka, assigned the administration of the holy mountain to a non-brahminical Saiva sect known as 'Andis' of South India. He did so on the advice of his Hindu priest Arittakivendu Perumal. These 'Andis' collected enormous wealth offered to the footprint by the devout Buddhists. King Kirti Śrī Rajasinha (1747-1781) appointed Ven. Asarana Sarana Saranankara Sangharaja thera, as the new incumbent of the holy mountain to preserve it from further damage.
With the onset of the open semester, the statue of god Saman (the tutelary deity of the mountain), along with the insignia of his divine vehicle (white elephant) and other sacred paraphernalia are carried to the mountain in procession, to be placed within the niche below the summit. During the close semester (June to November), these objects of veneration are safely deposited at the Galpottawala Rajamaha vihara at Pelmadulla. At the appointed time, they are taken out, in the presence of the incumbent priest of the temple, and make the historic journey (now through the Hatton path), after a short break at the Maha Saman Devale in Ratnapura.
The collosal brass lamp ('dolosmahe pahana') atop the mountain, which keeps burning through night and day, is an offering made by King Virawickrema in 1542. Fuel is supplied by the pilgrims in the form of oil, copra etc., to keep the lamp burning.
Courtesy: The Daily News of 7 January 2002. | <urn:uuid:939cabb8-e81e-41e4-a8c2-c12592deba1c> | CC-MAIN-2015-35 | http://sripada.org/ratnasinghe2.htm | s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440645359523.89/warc/CC-MAIN-20150827031559-00277-ip-10-171-96-226.ec2.internal.warc.gz | en | 0.952026 | 2,390 | 2.609375 | 3 |
The Ways of Mappers and Packers
It is a surprise to discover that there are two distinct states of mind
It is similar to the experience
of learning that someone
you've known for months is illiterate
At first you are astonished:
this cannot be possible!
But then you realise how someone else can live a life
very different to yours,
that looks superficial
ly almost the same.
In this section we look at traits of the two strategies.
As we do so, many of the woes of the modern age,
particularly in high tech disciplines, will come into a simple picture
- the mark of a useful theory!
Remember, most people, be they mappers or packers,
have no reason to believe there is any other state of mind but theirs.
What is packing?
Well, you just stop yourself asking `Why?'.
You never really clean up your map of the world,
so you don't find many of the underlying patterns that
mappers use to `cheat'.
You learn slower,
because you learn little pockets of knowledge that you can't
check all the way through,
so lots of little problems crop up.
You rarely get to the point where you've got so much of the map sorted
out you can just see how the rest of it develops.
In thinking-intensive areas like maths and physics,
mappers can understand enough to get good
GCSE grades in two weeks,
while most schools have to spend three years or more bashing
the knowledge packets into rote-learned memory,
where they sit unexamined because the kids are
good and do not daydream.
It really isn't a very efficient way to go about things in the Information Age.
With no map of the world that checks out against itself and explains
just about everything you can see,
it is very hard to be confident about what to do.
The approach you have to take in any situation is
to cast about frantically until you find a little packet of knowledge
that kind of fits
(everything has a little bit of daydreaming at its core,
but the confused objective is to stop it as soon as humanly possible).
Then you list the bits that kind of fit,
and you assert that the situation is one of those,
so the response is specified by your `knowledge'.
Your friend has happened to grab another packet of `knowledge'
and so you begin an `argument' where your friend lists bits of your
knowledge that don't fit and says that you are wrong and he is right,
and you do the same thing.
You don't attempt to build a map that includes both
your bits of knowledge and so illuminates the true
answer because you don't have access to the necessary faculty of
mapping, and anyway, without the experience, it is hard to believe
that it is possible in the time allowed.
Being devoid of the clarity that comes from a half-way decent map,
you would rather do something ineffective by the deadline
than something that might even work.
Then when things go pear-shaped you say it is bad luck.
The consequences go further.
Not having a big map means that you often don't understand what is happening,
even in familiar settings like your home or workplace.
You assume that this means that you do not possess
the appropriate knowledge packet,
and this may be seen as a moral failure on your part.
After all, you have been told since childhood that
the good acquire knowledge packets and stack them up
in their heads like dinner plates, the lazy do not.
You are also overly concerned about certainty.
Mappers have a rich, strong, self-connected structure
they can explore in detail and check the situation
and their actions against.
Logic for them is being true to the map,
and being honest when it stops working.
It's not a problem,
they just change it until it's `logical' again.
Without mapping, you have to use rickety chains
of reasoning that are really only supported at one end.
Because they are rickety you get very
worried that each link is absolute, certain, totally correct
(which you can never actually achieve).
You have to discount evidence that is not `certain'
(although tragically it might be if your map was bigger),
and often constrain your actions to those that you can convince
yourself are totally certain in an inherently uncertain world.
The issue of certainty then becomes dominant.
People are unwilling to think about something
(erect a rickety chain)
unless they are `certain' that the `procedure'
will have a guaranteed payoff,
because that, they believe, is how the wise proceed.
You become absorbed by the fear of being found to be `in the wrong',
because of the idea that the `good' will have acquired the
correct knowledge packet for dealing with any situation.
The notion that the world is a closed,
fully understood (but not by you) thing
kind of creeps in by implication there.
The idea of a new situation
becomes so unlikely that you rarely spot one when it happens,
although mappers notice new situations all the time.
Your approach becomes focussed on actions that you cannot be `blamed' for,
even though their futility or even counter-productivity is obvious.
You insist on your specific actions being specified in your job,
even when your map is already easily good enough for you to accept personal
responsibility for the objectives that need to be achieved,
which would be more in keeping with your true dignity.
Some people have so little experience of direct understanding,
produced by mapping over time, that they cannot believe that
anything can be achieved unless someone else spells out in exact
detail how to do absolutely everything.
They believe that the only
alternative to total regimentation is total anarchy,
not a bunch of people getting things done.
Now, if you are used to talking to your imaginary friend about your
map of the world, and keep finding holes and fixing them,
you don't become very attached to the current state of it
at any particular time.
You do sometimes, if you find an abstraction that was a wonderful
surprise when you got it and has been useful, but now needs to go.
It's always important to remember that the fun only adds up:
if finding something was fun, finding something deeper is even more fun.
Generally though, you don't mind your imaginary friend
knocking bits off the map if they don't work.
So you don't mind real friends doing it either!
When you see thing in different ways you try
to understand each others' maps and work through the differences.
Two messy maps often point the way to a deeper way of seeing things.
Great thinkers are mappers.
They rarely proceed by erecting edifices of great conceptual complexity.
Rather they show us how to see the world in a simpler way.
Mappers experience learning as an internal process in response to
external and self-generated stimuli.
Packers experience learning as another task to be performed,
usually in a classroom,
using appropriate equipment.
Particularly in the early years,
efficient mapper learning requires internal techniques for exploring
conceptual relationships and recognising truths,
while efficient packer learning focuses on memorisation skills.
Aspects of mapper learning require higher investment than packer
learning, and this has consequences.
An emphasis on succinct,
structured knowledge means that low structured off-topic
considerations can displace disproportionally larger issues from a
problem the mapper is contemplating.
If a child is trying to understand a new idea
in terms of as much as possible of what is
then likely the child's awareness will be spread over
as much `core knowledge' as possible already.
The requirement to then consider the question `Shall I take
my library books back today?',
bringing with it conceptually networked questions such as `Where is
my satchel?', `Will it rain?', `Will it rain tomorrow?'
and so on is an imposition on the mind that a packer child would simply
not experience in apparently similar circumstances.
The packer child simply never has (for example)
the form of the flows resulting
from economic supply and demand curves
(which might also actually be the same representations
that are used to hold, say,
parts of thermodynamic understanding) floating about to be
displaced by a simple question about a library book.
Accepting a fact and being ready for the next is also a different
process in mapping and packing.
The mapper mind must explore
the fact and compare it against core knowledge to see if it is a
consequence that already has a place in the mapper's conceptual
model of the world, or if it is in fact new fundamental knowledge that
requires structural change.
Mappers are likely to be much more aware of the comparative
reliability of information.
Whereas packers tend to regard knowledge as planar,
a series of statements that are the case,
mappers tend to cross-index statements to verify
and collapse them into more profound truths.
Mappers are more likely to work with contingent
thinking of the form: `If
X is true then Y must be true also, Z is certainly true,
and W is nonsense although everyone keeps saying it is the case'.
Mappers are likely to be concerned about the
soundness of packer reasoning.
An aspect of packer thinking that drives mappers up the wall,
is that packers often seem to neither seek out the flaws in their own logic,
nor even hear them when they utter them.
Worse, when flaws are pointed out to them,
they are likely to react by justifying following
logic that they cheerfully admit is flawed,
on grounds of administrative convenience.
The evidence of their own senses is not
as important as behaviour learned through repetition,
and they seem to have no sense of proportion when
performing cost/benefit analyses.
This is because packers do not create integrated
conceptual pictures from as much as possible of what they know.
The mapper may point out a fact, but it is one fact amongst so many.
The packer does not have a conceptual picture of the
situation that indicates the important issues,
so the principal source of guidance is a set of
procedural responses that specify action to be taken.
The procedure that is selected to be followed will be
something of a lottery.
For the mapper, one fact that should fit the
map but doesn't, means the whole map is suspect.
The error could wander around like a lump in a carpet,
and end up somewhere really important.
Both parties agree that they should do the `logical' thing,
but two people can disagree about logic when one sees
relationships that the other has only ever been dissuaded from
Mappers have lots of good ideas based in profound insights into
relationships that packers rarely have the opportunity to recognise.
Part of mappers' extraordinary flexibility and learning speed comes
from the benefits of seeking understanding rather than data,
but some of it comes from the sheer amount of playing with a topic they do.
It is quite usual for mappers to spend every spare moment for a
week wandering around a topic in their heads,
and then spend all weekend focused on it.
Mapper focus is a terrible thing.
A few hours of it can produce breathtaking results where a team of
packers could strive for months.
Every IT manager who has ever had an effective mapper around knows this.
Mappers have a linguistic tendency to want to talk in terms of the
form of the concentrated knowledge they reduce experience into.
Although mappers often use different internal representations of a
sphere of discourse, they are adept at negotiating mutually agreed
terminology at the onset of discussions between themselves,
and this is one way that mappers are able to recognise one another.
Mutual recognition occurs because of this series of transactions
where one party traces a route through the map, stops,
and invites the other to pick up where they left off.
The objective of the exercise is to align mental maps,
but it also reveals the presence of the other's map in the first place!
Mappers advocate changing descriptions and approaches often,
because they see simplification benefits that are of high value to
understanding, and whose map is it anyway?
In social or administrative situations, this can cause confusion because the
mapper does not realise that the packers do not have a map that
they can move around in chunks.
Mappers see packers as wilfully ignorant,
packers see mappers as confused.
In software engineering contexts, this failure of communication leads to
arguments about `churn'.
The mapper wants to move from a large
mass of software to a smaller one that is more robust because of its
necessary and sufficient structure.
The packers are not practiced at seeing the proposed new structure,
and see only a maniac who wants to change every single file in one go.
Mappers have a direct, hands-on awareness of the effectiveness of
their reflections and so, in most areas, they have a sense of the
universe in some unseen way `playing fair' with them,
even rewarding them with wonderful surprises when they look deeply enough.
This often gives rise to a `spiritual' or `mystical' element to
their character, and often to unusually high spirits,
even in situations where packers are despondent.
Mappers ensure that the known elements of a problem are held in
their minds, before embarking on it.
They draw on their own strength of character to find
the motivation to do the hard work involved
in keeping their background explorations going.
To achieve a solution to a problem,
a mapper engages all his or her strengths,
and is rewarded with elation or a sensation of betrayal if things do
not work out well.
Mappers are `passionate' about `dry' subjects.
Mappers excel at conceptually challenging work such as complex
problem-solving with many inter-related elements.
They can perform tasks requiring insight, or imagination,
that packers simply cannot do at all.
Best quality software engineering, mathematics and physics,
with genetics emerging as a likely area of unique contribution,
are amongst mapper challenging science disciplines.
Amongst the traditionally recognised arts, poetry and music are
areas where the mapper faculty for manipulating structure is of
particular benefit, although there may be value in redefining the `Arts'
as what mappers do well.
The very power of great art is only available to mapper thinking,
because the artist uses a tone of sound or light,
itself representative of nothing,
but triggering the recognition of a deep structure.
Pointing out the structure can then
bring to mind instances of that structure,
and the artist has added to the audiences maps!
All these differences are simply consequences of one person having
a big map built by a great deal of disciplined daydreaming,
and the other not.
That these profound differences between two clearly
distinct groups of people exist is the major surprise of the approach
It means that it is very unlikely that
either kind is likely to have any appreciation of the other's state of
Packing as a Self-Sustaining Condition
We live in an action oriented society.
It's been that way since we
invented agriculture and developed a stable environment in which
the tasks to be performed could be defined within.
Not much thinking was needed.
We have little experience of discussing and managing
subjective, internal states - although they are as much shared
experiences as external objects visible to all.
We have a general heuristic that says we should confine our observations to
the externally visible, which kicks in to prevent the exploration of
subjective phenomena even before they have had the chance to
give results and justify themselves.
When things go wrong, we seek to clarify action, and capture better
descriptions of more effective actions.
In situations where flexibility is an asset,
this leads to reduced aspirations.
If things are proceeding according to the actions written on paper,
they are deemed to be going well, and the opportunity cost is not considered.
Worse, the behaviour of people trapped in lack of understanding
can reinforce each other.
If one person just doesn't understand what is happening,
they look about them and see others apparently
knowing what they are doing, feel vulnerable, because lack of
knowledge packets is supposed to be a personal failure, and
therefore they bluster.
They stick their noses in the air and waffle
about `due consideration' and `appropriate action'
as if `undue consideration' or `inappropriate action'
was also on the table,
but don't suggest what the appropriate action might be.
The thing is, everybody is doing it!
So the silent conspiracy to maintain the etiquette of bluster develops.
If anyone violates the etiquette,
that person will be assailed by inherently unclear
objections and other pressures to `conform',
apparently for the sake of it.
These cannot be countered in action-oriented terms,
only by reference to causal relationships that
only one person is fully cognizant of.
Mapping in a packing world can be a painful and
particularly if one does not understand the
shattered reality one's packing associates inhabit.
In pathological situations, this can lead to an infinite regress wherein
every problem is addressed by attempting to delegate it to someone
else, a procedure, or a blame allocation mechanism.
It's rather like holding your toothbrush with chopsticks -
if you are holding the chopsticks just like on the diagram,
the brush up your nose and the paste all over the mirror
are not your responsibility!
The Mapper/Packer Communication Barrier
It's worth reiterating some key points here:
Mapping and packing are very different strategies
Packing is the strongly enforced social norm
The world is set up for packers
Business language is packer language
The results of mapping are called `common sense'
Common sense isn't so common
Mappers think packers are cynical or lazy
Packers think mappers are irrational
Packers spend much of their time playing politics
The last thing that counts in politics is reason
Mappers are often wrong about packer psychology
Packers are usually right about packer psychology
Mappers are often wrong about mapper psychology
Packers are always wrong about mapper psychology.
Mappers do not have a culture to guide them
Most mappers teach themselves, like Mowgli
Mappers can teach themselves!
Mappers can learn from others
Mappers often face significant social challenges
Mappers currently rarely fulfill their potential
Once a situation is understood, it can be addressed.
, see also Jock Engineer | <urn:uuid:e1ffe2be-621d-4cf1-b4c7-b7037d3fe31a> | CC-MAIN-2015-35 | http://everything2.com/title/mapper?showwidget=showCs902858 | s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440645339704.86/warc/CC-MAIN-20150827031539-00043-ip-10-171-96-226.ec2.internal.warc.gz | en | 0.951005 | 4,042 | 2.671875 | 3 |
Of all the different conditions which are the subject of artistical representation, there is none more important, and, at the same time, more difficult efficiently to describe than that of motion, in which state of being a large number of objects in nature are constantly existent, and which is, indeed, the ordinary condition under which a considerable proportion of them are viewed. The representation of objects in motion is, therefore, essential to be attained,if nature is faithfully to be portrayed. Different arts differ, of course, extensively as to their adaptation to effect this end; as while some of them, such as poetry, music, and dramatic acting seem especially qualified for the description of motion, and indeed are but very imperfectly fitted for that of stationary objects; others, such as painting and sculpture, and more especially architecture and gardening, appear from their nature wholly unadapted to represent motion of any kind. I shall, therefore, endeavour to consider each of these arts in their order as regards their particular capacity for the end before us.
The representation of motion in painting or sculpture by figures, which, from the nature of the art, must remain perfectly stationary, will necessarily be an object of some difficulty efficiently to attain, and might, at first sight, appear wholly impractical. It is effected, however, to some extent, and mainly in two ways :-(1.) By a representation of such an attitude of the figure as will necessarily denote its being in action, as a posture of running or flying. (2.) By the representation of the appearance of certain adjuncts, which, from their very nature, are necessarily more or less in motion; such as drapery blowing in the wind, the sea when agitated into waves. Some figures, on the other hand, are, from their kind, quite unfitted for motion, and whenever represented are, without any effort of the artist, at once perceived to be stationary, such as mountains and houses.
It is to be observed, however, that, even in real objects, all motion is not perceivable. Some bodies, as a bullet from a pistol, fly so swiftly that we cannot see them move ; others, as the hands of a clock, move so slowly that their motion is in-visible to us. It is therefore only motion of a moderate or middle kind, which alone we can perceive, that we are called upon to represent.
Real motion itself, indeed, we are wholly unable to depict, but can only describe objects as they appear in motion. Thus, although on the one hand, on viewing them, we know at once by their appearance that they are moving; yet, on the other hand, we view them but for a moment, and only in one position. Painting should therefore both represent them as they are seen when moving, and should fix on some particular attitude in which they actually appear for the moment, and in which they may be fairly represented with perfect truth to nature, as in the case of a flying bird, a ship in a storm, or an army in battle. The image of the object or scene is retained in the eye after a momentary glance at it, as it was visible at that transient instant, and is not obtained from examining and comparing the various movements that occurred. So should it be in pictorial representation, which but follows that which nature effects. Of these different movements, that which is the most striking and affecting should, with judgment, be selected.
Certain objects whose motion is constant, appear, nevertheless, motionless, where that motion is regular and unvarying, and does not change the actual position of the object; as in the case of a waterfall, a shower of rain, a storm, the sea raging when viewed at a distance, a ship sailing, and a carriage travel-ling towards us. These objects may, therefore, be correctly and efficiently represented in painting.
The chief use of drapery is like that of sails, to assist in action, and to point out the motion of the figure, or rather to second the impression, created by its attitude. It aids however, when so applied, as well to denote repose as action. Flying hair or drapery, like agitated water, must necessarily , represent motion, because its position is such that it cannot be permanent, and can only exist in a state of motion.
For the description of motion, poetry is on the whole more adapted than painting, especially as regards the variety of different motions, But poetry is here perhaps, after all, rather suggestive than descriptive. At least it is far more powerful and successful in suggestion than in description. Painting, on the other hand, which is so successful in description, effects but little by suggestion. There is indeed this essential distinction between the narrations effected by painting and sculpture on the one hand, and by poetry, eloquence, and music on the other ; that while the representations of the former are from their nature, whatever be their subject, always necessarily stationary and immovable, those of the latter are always necessarily moving and changing.
An effective description of motion in a confused irregular multitude rapidly hurrying along in disorder, accompanied by discordant noises of different kinds, and which is much heightened by apt comparisons, is contained in the following passage from Chaucer, relating to the pursuit of the fox which had run off with a cock in the widow’s farmyard
” They crieden, out ! harow and wala wa ! A ha the fox ! and after him they ran, And eke with staves many another man ; Ran Colle our dogge, and Talbot, and Gerlond, And Malkin, with hire distaf in hire hond ; Ran cow and calf, and eke the veray hogges So fered were for barking of the dogges, And shouting of the men and women eke, They ronnen so, hem thought hir hertes breke, They yelleden as fendes don in hello ; The dokes crieden as men wold hem quelle : The gees for fere flewen over the trees, Out of the hive came the swarme of bees, So hidous was the noise.”
A fine and striking representation of the action of a huge living body, is afforded by the following passage in Dante,t where Geryon is described as rising into and flying through the air. The account of his stupendous size, of the efforts that he puts forth, and of his motion while flying, together with the suggestions introduced to set off the narration, alike conduce to give vigour to the scene :
” As a small vessel, backening out from land, Her station quits ; so thence the monster loosed, And, when he felt himself at large, turn’d round There, where the breast had been, his forked tail. Thus, like an eel, outstretch’d at length he steer’d, Gathering the air up with retractile claws. … Round me on each part The air I view’d, and other object none Save the fell beast. He, slowly sailing, wheels His downward motion, unobserved of me, But that the wind, arising to my face, Breathes on me from below.”
In dramatic acting, motion is of course one of the main elements, but it is subject to the principles for artistical regulation, and its accordance with nature is ever of the utmost importance. Indeed, the real difficulty here is, not as in the case of the other arts, already alluded to, how actually to represent, but how duly to regulate motion; not merely how to imitate it, but how to cause it in its mode of operation to imitate the manner of nature which it aims to follow.
In compositions in architecture, costume, and gardening, it is impossible directly to describe or represent motion. In each of these arts motion is, however, capable of being suggested by the mode of carrying out the design. In addition to this it may be observed that, although buildings and trees, and other fixed objects, are incapable of motion, and so far may be considered as less effective in an artistical point than animate objects which continually change their situation, and thereby vary their appearance also ; yet, on the other hand, a compensation to some extent is made for this deficiency by the opportunities which occur of seeing them from different positions, so that although they are in reality quite stationary, and never vary their aspect in the least, a new and a different view is afforded us of them at each turn; and as we ourselves shift our position, the very objects themselves seem to move, and to occupy fresh relative stations. Each of these objects changes also according to the various perspective distances at which they are viewed. This is the case with regard to sculptural as well as architectural objects. And it is equally applicable to rocks and mountains, and even to some extent to landscape scenery generally.
Indeed, the constant change of hues and tints, and light and shade, which takes place both in water and in landscape scenery, whether seas, or lakes, or mountains, or even plains, may be considered as closely allied to, and at all events analogous to motion as regards the alteration of the appearance of these objects, and is essentially productive of quite as much variety as motion itself. Consequently, a change, equivalent to motion, is effected in scenery, even in respect to the most solid and stationary bodies which compose it, by the alterations which constantly occur in the atmosphere, particularly as regards the clouds, by which these different objects not only greatly vary in their appearance at different times, as to their hues and dimensions, and according to their distance from us; but from being at certain periods in part obscured, and at other times brought into clear view, their motion, the alteration of their position as regards sight (the most important point in their relation to us), is as extensive as though real trans-migrations of them occasionally occurred. In the case of certain mountains, the shifting of the clouds that hover about their peaks, which are constantly varying the scene, exposing and obscuring alternately different objects and points of view, together with the revolutions of the earth so as to affect their position in regard to the sun, accomplish really all that would be produced by a change of their situation. Occasionally, more-over, an apparent alteration takes place both of form and of colour, as may be witnessed upon the Italian lakes, clouds taking the appearance of mountains, and mountains seeming all at once to be transformed into clouds, while the tints on the surface of the lake itself assume various hues at different periods.
It is more particularly, however, with regard to architectural objects, especially when viewed at a distance, that all the effect is produced of their moving and changing their situation, both individual and relative, by the change of that of the person looking at them, who can thus cause them apparently to assume almost any position that he pleases, whereby they seem to be either near together or far apart, united or joined in one, according as he places himself to view them ; so that drawings from them made from different points, would almost induce the supposition that these objects themselves occasionally changed their situation. | <urn:uuid:795375c2-46ad-45d2-8068-f4046b3eea8e> | CC-MAIN-2015-35 | http://art.yodelout.com/art-theory-motion-how-represented/ | s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440645264370.66/warc/CC-MAIN-20150827031424-00099-ip-10-171-96-226.ec2.internal.warc.gz | en | 0.978187 | 2,305 | 3.09375 | 3 |
Nature Of Plant Diseases
( Originally Published 1920 )
THE successful greenhouse operator will realize the necessity of recognizing readily any plant disease. Very often this is overlooked and attention is attracted only when the trouble takes the form of an epidemic, and a large number of plants are thus carried off by it.
Plant diseases are usually of four kinds :
1. Those of a mechanical nature.
2. Those brought about by physiological disturbances or unfavorable environment.
3. Those brought, about by parasitic flowering plants, fungi or bacteria.
4. Diseases the cause of which is unknown.
A familiarity with the symptoms of diseases will enable us to determine the contagious nature of the trouble and often the methods of control to pursue. The following outline briefly summarizes the principal symptoms of disease in plants :
1. Discoloration or change of color.
a. Pallor, yellowish or white instead of the normal green.
b. Colored areas or spots.
White or gray, such as mildews, white rusts, etc.
Yellow, many leaf spots.
Red or orange, rusts, leaf spots.
Brown, many leaf spots.
Black, black rusts.
Variegated, leaf spots, mosaic
2. Shot hole, perforation of leaves.
3. Wilting, wilts, damping off.
4. Necrosis, death of parts such as leaves, twigs,
5. Atrophy, dwarfing or reduction in size.
6. Malformations or excrescences, galls, pus-
tules, tumors, cankers, rosettes.
7. Exudation, slime or gum flow.
8. Rotting, dry or soft rots.
I. DISEASES OF A MECHANICAL NATURE
Greenhouse plants, contrary to those grown out-doors, are open to but few injuries of a mechanical nature, for it is seldom, indeed, that indoor plants are injured by rain, hail, or frost.
Sunburn. While most greenhouse crops require a great deal of light, a few are injured by it. Some varieties of tomatoes, the Earliana especially, under the influence of strong sunlight are subject to sun-scald. Sunburn may be overcome by shading the glass. Of the various shading materials, the cheapest and quickest to use is air-slaked lime. The most expedient to use is air-slaked lime which has been slaked dry by sprinkling lightly with water. This is diluted in water and applied as a spray. If new lime is used it will be more difficult to wash off later. Moreover, it .seems that air-slaked lime sticks a good while, but rubs off easily. It is far more desirable to use shading material that must be applied twice in the summer than something that will stick hard and remain during the fall and winter season.
Smoke injury. As a rule large greenhouse establishments are situated near large cities which are centers of industrial production and manufacture. Greenhouse plants are often injured from the effect of smoke or gases which escape from the furnaces into the air.
The sources of smoke may be classified into three divisions : (1) Smoke from large buildings or from manufacturing plants; (2) Smoke from locomotives; (3) Smoke from chimneys of dwelling houses. Smoke is generally produced because of improper furnace construction, such as improper draft, over-loaded boiler, insufficient air space, insufficient air supply to boiler room, and also by carelessness of operation.
Smoke contains large quantities of carbon dioxide, steam and sulphur dioxide, besides its characteristic soot. The latter consists of carbon, tar, and mineral matter mixed with small quantities of sulphur, arsenic and nitrogen compounds which are of an acid nature. Soot adheres to plants, especially to foliage, giving them a burned, contorted appearance. Another effect of soot and smoke is to close up the stomata or respiratory openings of the leaf, so that asphyxiation results. The effect of smoke on plants is a loss of leaflets in case of compound leaves, and an abnormal curling and distortion. Lesions and spots may be formed on the foliage as a result of the sulphur dioxide which is present in smoke. The spots are at first small, but soon enlarge and finally involve the whole leaf, which dries and becomes gray. Smoke injury, although of a mechanical nature, may also be considered from a physiological point of view. The after effect of smoke on plants resolves itself into a question of insufficient food supply and assimilation. This is indirectly brought about by diminished illumination, interference with the normal transpiration and the reduction of leaf surface.
Methods of Control. There is as yet no definite method of control known, consequently all that can be done is to avoid the smoke belts. The greatest injury usually occurs in locations to the leaward of smoky districts and when the soil is wet. As far as possible, therefore, postpone irrigation during the windy days.
2. PHYSIOLOGICAL DISEASES
In this class are included disturbances which are due to unfavorable conditions of nutrition. There are numerous diseases of plants which are brought about by lack of, or by an excess of, certain food elements in the soil. The effect is an interference with the proper life functions of plants.
Symptoms. The symptoms of malnutrition are not always the same. They differ somewhat with the crop, the nature of the soil, and the fertilizer applied. In malnutrition the symptoms to be looked for are retarded growth, change of color in the foliage and root injury. Affected plants remain dwarfed at a time when maximum growth is expected. The color of the foliage turns a lighter green, especially in the spaces between the veins, which become yellowish green to brown. Roots of such plants are poorly developed, and secondary roots are often missing.
Causes of Malnutrition. The work of Stone*, and Harter f and others seems to have established the fact that malnutrition cannot be attributed to the work of parasitic organisms. Stone cites instances where constant watering with liquid fertilizers or manure would cause malnutrition in cucumber plants. The same is also induced when pig and cow manure are mixed, or when manure is worked into a soil already well fertilized otherwise. Harter records cases of malnutrition brought about by an excess of acidity in the soil. In soils where plants suffer from malnutrition, from 3,500 to 6,000 pounds of lime per acre area are often required to neutralize the excess of the soil acidity. This condition is apparently the result of intensive trucking and the heavy application of chemical fertilizers which leave the soil acid. Sulphate of ammonia, muriate and sulphate of potash and acid phosphate when used continuously will leave the soil in a very acid condition. On the other hand, nitrate of soda, carbonate of potash and Thomas phosphate tend to make the soil alkaline.
Another important cause of malnutrition is the exhaustion of humus. This is a natural result where commercial fertilizers are used instead of some form of organic manure.
Methods of Controlling Malnutrition. It is quite bvious from what has already been said, that the greenhouse grower is the loser if he uses his fertilizer injudiciously. Not only is malnutrition favored by such a course, but the yields, too, are considerably reduced. With acid soils, liming to neutralize the soil acidity will help control malnutrition.
This disease may be attributed to several causes. Greenhouse plants that receive too much shade will become yellowish, then whitish, and in time may lose all their green color and finally die. Chlorosis is often brought about when plants grow in soils that have become too alkaline. This is true for soils containing an excess of lime, wood ashes, or magnesia, and especially when nitrate of soda is used in excess.
Control. Chlorosis when brought about by the lack of available iron in the soil may be remedied by the application of small quantities of iron sulphate. If the disease is caused by the other factors previously mentioned, a cure may be effected by re-moving the cause.
This is another trouble which may be termed physiological and the cause of which cannot be attributed to the work of parasitic organisms. It is often noticed on tomatoes and various other plants. Various causes lead to it. Sudden drops of temperature at blossoming will induce many plants to shed their blossoms. Blossom drop may also be brought about when too much nitrogen is applied to the soil in the form of manure, especially hen manure. To overcome this, the fertilizer in the soil must be balanced by the addition of 600. pounds of acid phosphate and 150 pounds of muriate of potash per acre. Overacidity in the soil may also cause the shedding of blossoms. A sudden checking of the water supply, or overwatering may have the same effect. Finally, improper pollination is often one of the main causes for the blossom drop of greenhouse plants. In the field, pollination is favored by both wind and insects. In the greenhouse, these two agencies are practically shut out. 'With forced cu-cumbers, the difficulty is often overcome by installing beehives in the house. Bees are very active under high temperature conditions, and perfect pollination is the result. The usual practice is to sup-ply a beehive to every 200 feet of house. The hives should be placed on platforms several feet above the bed to protect the bees from becoming drenched during the watering or sprinkling of the beds. We should bear in mind that the hives must be taken out whenever the house is fumigated with potassium cyanide. Nicotine fumes do not seem to injure the bees, especially if the fumigation is carried on at night. Bees may be used to pollinate practically every crop grown in the forcing house. It seems, however, that bees refuse to work on tomatoes, perhaps because of a dislike for their nectar. In this case, then, it is necessary to pollinate by hand. The investigations of Fletcher and Gregg and others have shown that the setting of a good crop of smooth heavy tomatoes depends largely on the proper distribution of pollen over the stigma. A lack of pollination will of course result in no crop. An uneven distribution of pollen will result in too large or irregular fruit. During the winter and on sunny days, it will pay to go over the plants and tap each blossom with the finger or with a stick on which is fastened a small glass rod or spoon. This will shake out the pollen and enough of it will be liberated by this operation to insure complete fertilization. A high temperature will favor the maturing and the bursting of the pollen sacs even during cloudy weather. It is, therefore, advisable to run up the temperature of the house as high as is expedient on the days when the tapping of the blossoms is done. This should always be done during the day and never at night. The pollen sacks (anthers or male organs) do not burst freely until after the yellow petals have fully expanded and have begun to wither slightly. The pollen is discharged most freely in a hot dry atmosphere.
3. DISEASES BROUGHT ABOUT BY PARASITIC FLOWERING PLANTS OR MICRO-ORGANISMS
In this class of diseases may be mentioned those which are induced by parasitic flowering plants such as the dodder and the broom rape. These, however, as well as the diseases induced by bacteria and fungi, will be considered under their respective hosts.
Carriers of Diseases. In the greenhouse, disease producing organisms are often brought directly with infected soil or manure in the compost. Fusarium lycopersici Sacc., the cause of sleeping sickness of tomato, as well as large numbers of other parasites, are brought in that way.
Little as yet do we realize the importance of insects as carriers and disseminators of plant diseases, although we are becoming increasingly aware of their rôle in human and animal pathology. Acting as carriers of spores of parasitic fungi, which may adhere to any part of their body, they are responsible for distributing plant diseases. Insects, too, by feeding on plants or in searching for the nectar of the blossoms, are likely to come in contact with diseased parts. Their bodies may become coated with spores of parasitic bacteria or fungi, which are thus carried from plant to plant and from field to field. The striped cucumber beetle is known to carry the virus of cucumber mosaic, and the germ of cucumber wilt (Bacillus tracheiphillus Ew. Sm.). It is there-fore very essential that every effort should be made to keep insect pests out of the greenhouse.
4. DISEASES OF UNKNOWN ORIGIN
In this class will be included those diseases which spread by contact, but the exact cause of which is unknown. Special emphasis will be given to that important disease known as mosaic, This trouble attacks a variety of greenhouse plants. It is especially severe on the tomato, cucumber, and sweet pea.
Symptoms. Mosaic is readily distinguishable by a yellow dotting or mottling on the foliage, presenting in some instances a beautiful mosaic structure, whence its name. Affected leaves linger and often curl
Cause of Mosaic. The cause of Mosaic is as yet a disputed question. Allard claims that mosaic is caused by an ultra-microscopic pathogen, that is, a parasitic organism which cannot be detected by our present technic in microscopy. Mosaic may be transmitted from plant to plant. The easiest way to prove this is to rub with the fingers a diseased leaf and then immediately rub a healthy one. The disease will appear on the inoculated host in about ten days. In the greenhouse, the green aphid and the white fly act as carriers of mosaic.
Control. Methods of control in mosaic lie in the direction of prevention. Diseased plants should be destroyed by fire, and all indoor insect pests kept in check. | <urn:uuid:c6c53b75-9f2a-4741-bce4-1a5b88394d5a> | CC-MAIN-2015-35 | http://www.oldandsold.com/articles24/greenhouse-7.shtml | s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440644064362.36/warc/CC-MAIN-20150827025424-00043-ip-10-171-96-226.ec2.internal.warc.gz | en | 0.959051 | 2,898 | 3.203125 | 3 |
Caring for a Person with Alzheimer's Disease: Your Easy-to-Use Guide from the National Institute on Aging
Adapting Activities for People with AD
"Mom has always been a social person. Even though she can't remember some family and friends, she still loves being around people."
Doing things we enjoy gives us pleasure and adds meaning to our lives. People with AD need to be active and do things they enjoy. However, don't expect too much. It's not easy for them to plan their days and do different tasks.
Here are two reasons:
- They may have trouble deciding what to do each day. This could make them fearful and worried or quiet and withdrawn.
- They may have trouble starting tasks. Remember, the person is not being lazy. He or she might need help organizing the day or doing an activity.
Plan activities that the person with AD enjoys. He or she can be a part of the activity or just watch. Also, you don't always have to be the "activities director." For information on adult day care services that might help you, see Adult day care services.
Here are things you can do to help the person enjoy an activity:
- Match the activity with what the person with AD can do.
- Choose activities that can be fun for everyone.
- Help the person get started.
- Decide if he or she can do the activity alone or needs help.
- Watch to see if the person gets frustrated.
- Make sure he or she feels successful and has fun.
- Let him or her watch if that is more enjoyable.
The person with AD can do different activities each day. This keeps the day interesting and fun. The information below may give you some ideas.
Doing household chores can boost the person's self-esteem. When the person helps you, don't forget to say "thank you."
The person could:
- Wash dishes, set the table, or prepare food.
- Sweep the floor.
- Polish shoes.
- Sort mail and clip coupons.
- Sort socks and fold laundry.
- Sort recycling materials or other things.
Cooking and baking
Cooking and baking can bring the person with AD a lot of joy.
He or she might help do the following:
- Decide on what is needed to prepare the dish.
- Make the dish.
- Measure, mix, and pour.
- Tell someone else how to prepare a recipe.
- Taste the food.
- Watch others prepare food.
Being around children also can be fun. It gives the person with AD someone to talk with and may bring back happy memories. It also can help the person realize how much he or she still can love others and can still be loved.
Here are some things the person might enjoy doing with children:
- Play a simple board game.
- Read stories or books.
- Visit family members who have small children.
- Walk in the park or around schoolyards.
- Go to sports or school events that involve young people.
- Talk about fond memories from childhood.
Music and dancing
Music can bring back happy memories and feelings. Some people feel the rhythm and may want to dance. Others enjoy listening to or talking about their favorite music. Even if the person with AD has trouble finding the right words to speak, he or she still may be able to sing songs from the past.
Consider the following musical activities:
- Play CDs, tapes, or records.
- Talk about the music and the singer.
- Ask what he or she was doing when the song was popular.
- Talk about the music and past events.
- Sing or dance to well-known songs.
- Play musical games like "Name That Tune."
- Attend a concert or musical program.
Many people with AD enjoy pets, such as dogs, cats, or birds. Pets may help "bring them to life." Pets also can help people feel more loved and less worried.
Suggested activities with pets include:
- Care for, feed, or groom the pet.
- Walk the pet.
- Sit and hold the pet.
Gardening is a way to be part of nature. It also may help people remember past days and fun times. Gardening can help the person focus on what he or she still can do.
Here are some suggested gardening activities:
- Take care of indoor or outdoor plants.
- Plant flowers and vegetables.
- Water the plants when needed.
- Talk about how much the plants are growing.
Early in the disease, people with AD may still enjoy the same kinds of outings they enjoyed in the past. Keep going on these outings as long as you are comfortable doing them.
Plan outings for the time of day when the person is at his or her best. Keep outings from becoming too long. You want to note how tired the person with AD gets after a certain amount of time (1/2 hour, 1 hour, 2 hours, etc.).
The person might enjoy outings to a:
- Favorite restaurant
- Zoo, park, or shopping mall
- Swimming pool (during a slow time of day at the pool)
- Museum, theater, or art exhibits for short trips
Remember that you can use a business-size card, as shown below, to tell others about the person's disease. Sharing the information with store clerks or restaurant staff can make outings more comfortable for everyone.
Going out to eat can be a welcome change. But, it also can have some challenges. Planning can help. You need to think about the layout of the restaurant, the menu, the noise level, waiting times, and the helpfulness of staff. Below are some tips for eating out with the person who has AD.
Before choosing a restaurant, ask yourself:
- Does the person with AD know the restaurant well?
- Is it quiet or noisy most of time?
- Are tables easy to get to? Do you need to wait before you can be seated?
- Is the service quick enough to keep the person from getting restless?
- Does the restroom meet the person's needs?
- Are foods the person with AD likes on the menu?
- Is the staff understanding and helpful?
Before going to the restaurant, decide:
- If it is a good day to go.
- When is the best time to go. Going out earlier in the day may be best, so the person is not too tired. Service may be quicker, and there may be fewer people. If you decide to go later, try to get the person to take a nap first.
- What you will take with you. You may need to take utensils, a towel, wipes, or toilet items that the person already uses. If so, make sure this is OK with the restaurant.
At the restaurant:
- Tell the waiter/waitress about any special needs, such as extra spoons, bowls, or napkins.
- Ask for a table near the washroom and in a quiet area.
- Seat the person with his or her back to the busy areas.
- Help the person choose his or her meal, if needed. Suggest food you know the person likes. Read parts of the menu or show the person a picture of the food. Limit the number of choices.
- Ask the waiter/waitress to fill glasses half full or leave the drinks for you to serve.
- Order some finger food or snacks to hold the attention of the person with AD.
- Go with the person to the restroom. Go into the stall if the person needs help.
Taking the person with AD on a trip is a challenge. Traveling can make the person more worried and confused. Planning can make travel easier for everyone. Below are some tips that you may find helpful.
Before you leave on the trip:
- Talk with your doctor about medicines to calm someone who gets upset while traveling.
- Find someone to help you at the airport or train station.
- Keep important documents with you in a safe place. These include: insurance cards, passports, doctor's name and phone number, list of medicines, and a copy of medical records.
- Pack items the person enjoys looking at or holding for comfort.
- Travel with another family member or friend.
- Take an extra set of clothing in a carry-on bag.
After you arrive:
- Allow lots of time for each thing you want to do. Do not plan too many activities.
- Plan rest periods.
- Follow a routine like the one you use at home. For example, try to have the person eat, rest, and go to bed at the same time he or she does at home.
- Keep a well-lighted path to the toilet, and leave the bathroom light on all night.
- Be prepared to cut your visit short.
People with memory problems may wander around a place they don't know well (see How to cope with wandering).
In case someone with AD gets lost:
- Make sure they wear or have something with them that tells who they are, such as an ID bracelet.
- Carry a recent photo of the person with you on the trip.
Like you, the person with AD may have spiritual needs. If so, you can help the person stay part of his or her faith community. This can help the person feel connected to others and remember pleasant times.
Here are some tips for helping a person with AD who has spiritual needs:
- Involve the person in spiritual activities that he or she has known well. These might include worship, religious or other readings, sacred music, prayer, and holiday rituals.
- Tell people in your faith community that the person has AD. Encourage them to talk with the person and show him or her that they still care.
- Play religious or other music that is important to the person. It may bring back old memories. Even if the person with AD has a problem finding the right words to speak, he or she still may be able to sing songs or hymns from the past.
Many caregivers have mixed feelings about holidays. They may have happy memories of the past. But, they also may worry about the extra demands that holidays make on their time and energy.
Here are some suggestions to help you find a balance between doing many holiday-related things and resting:
- Celebrate holidays that are important to you. Include the person with AD as much as possible.
- Understand that things will be different. Be realistic about what you can do.
- Ask friends and family to visit. Limit the number of visitors at any one time. Plan visits when the person usually is at his or her best (see the section below about "Visitors").
- Avoid crowds, changes in routine, and strange places that may make the person with AD feel confused or nervous.
- Do your best to enjoy yourself. Find time for the holiday activities you like to do. Ask a friend or family member to spend time with the person while you're out.
- Make sure there is a space where the person can rest when he or she goes to larger gatherings such as weddings or family reunions.
Visitors are important to people with AD. They may not always remember who visitors are, but they often enjoy the company.
Here are ideas to share with a person planning to visit someone with AD:
- Plan the visit when the person with AD is at his or her best.
- Consider bringing along some kind of activity, such as a well-known book or photo album to look at. This can help if the person is bored or confused and needs to be distracted. But, be prepared to skip the activity if it is not needed.
- Be calm and quiet. Don't use a loud voice or talk to the person as if he or she were a child.
- Respect the person's personal space, and don't get too close.
- Make eye contact and call the person by name to get his or her attention.
- Remind the person who you are if he or she doesn't seem to know you.
- Don't argue if the person is confused. Respond to the feelings that they express. Try to distract the person by talking about something different.
- Remember not to take it personally if the person doesn't recognize you, is unkind, or gets angry. He or she is acting out of confusion.
Publication Date: July 2012
Page Last Updated: January 22, 2015 | <urn:uuid:0187e67d-ea71-40ca-86bd-d678119d3f7b> | CC-MAIN-2015-35 | https://www.nia.nih.gov/alzheimers/publication/caring-person-ad/adapting-activities-people-ad | s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440644065318.20/warc/CC-MAIN-20150827025425-00106-ip-10-171-96-226.ec2.internal.warc.gz | en | 0.961071 | 2,606 | 2.828125 | 3 |
Order for home delivery today!
|VOL. 35, NO. 10||P.O. BOX 618, ALTON, ILLINOIS 62002||MAY 2002|
|The Premier American Hero George Washington|
George Washington was history's indispensable man. It's no exaggeration to say that, without his leadership, we would not have a United States of America. George Washington's greatest biographer, Douglas Southall Freeman, was once asked what was the most important single thing he had learned from his lifetime of historical study. He replied, "The influence of personality on history."
Of no person in American history was that more true than of the man whom schoolchildren are (or used to be) taught is "first in war, first in peace, and first in the hearts of his countrymen." The sheer power of his character and personality made him the acknowledged leader among the extraordinary men of intellect, learning, and vision whom we call the Founding Fathers.
Freeman concluded that Washington gave the American cause what it needed most: "patience and determination, inexhaustible and inextinguishable." Some years ago, I ran a national essay contest for junior high school students on George Washington and the winning essayist grasped that same point. "I admire George Washington," the student wrote, "because he never gave up."
In 1995, a federally tax-funded 271-page book called National Standards for United States History was released to the public. It was so antagonistic to American heroes and history, as well as to Western civilization, that the United States Senate denounced it in a vote of 99 to 1. The authors condescendingly made some cosmetic changes, but the original version was already in the hands of schools and textbook publishers.
The National Standards displayed a conscious effort to omit or debunk our country's heroes while teaching schoolchildren about obscure individuals deemed more Politically Correct. Entirely omitted from the National Standards were such outstanding Americans as Paul Revere, Thomas Edison, the Wright brothers, and General Robert E. Lee.
The professors who wrote National Standards did their best to minimize George Washington's importance. Students were told to construct a fictional dialogue between Washington and an Indian leader at the end of the Revolution. Nothing was suggested to be taught about his extraordinary leadership, military skill, presidency of the Constitutional Convention, or service as our nation's first President. Students were instructed to "read selections from the writings of major leaders" such as John Dewey and Margaret Sanger. No such instruction was given about George Washington.
Schoolchildren are no longer taught the famous story about the bulletproof George Washington which scholar David Barton discovered used to be included in most history textbooks. Washington was a 26-year-old officer fighting the battle of Monongahela on July 9, 1755 when the colonial troops were ambushed by the French and Indians who fired from behind trees instead of on an open field as the English commander, General Braddock, was convinced that wars should be fought. Braddock was killed, and 977 out of 1,459 of his men were killed or badly wounded, including 63 out of 86 British and American officers. Washington's physical stature (he was 6' 2-1/2" tall) and majestic bearing in the saddle made him an easy mark for the hidden riflemen, but they could not kill him.
After Washington led the survivors in retreat, he wrote to his brother: "By the all-powerful dispensations of Providence, I have been protected beyond all human probability or expectation; for I had four bullets through my coat, and two horses shot under me, yet I escaped unhurt, although death was leveling my companions on every side of me."
In George Orwell's great classic 1984, the totalitarian government was constantly rewriting history. When the government wanted to pretend that some historical fact or person didn't exist, the government would wipe those events and persons out of the people's memory by dropping them down the Memory Hole. The liberals and the devotees of Political Correctness are now trying to drop some of our essential history and heroes down the Memory Hole. American citizens will have to be vigilant to protect America's great history and heroes.
You can do your part to restore George Washington to his proper place in history by reviewing the history books in your local schools to see if they tell the truth about his greatness.
Washington's greatness was based on his leadership and character, so acknowledged by the many other great men of his time. Washington is the hero we need today because he is an extraordinary example of a President whose character was above reproach and whom adulation did not corrupt. In Daniel Webster's words: "America has furnished to the world the character of Washington, and if our American institutions had done nothing else, that alone would have entitled them to the respect of mankind."
When sensational journalists of his and succeeding generations scraped the countryside for revelations, they did not find even one tale of a tryst behind a haystack or a plundering escapade with the boys. Item-by-item scrutiny of his cash-book and ledger, which were the disclosure records of his generation, do not reveal even one entry that hints of a financial or moral impropriety. His spotless reputation has stood the test of time.
No investigative reporter ever discovered any misdeeds of the kinds that tarnished the reputations of later Presidents. Washington did not have any secret life of womanizing, cheating, building a personal fortune through control of government television licenses, talking in profanities, lying to his supporters as well as his enemies, keeping close friendships with traitors or men of deviant behavior, betraying his campaign promises, making secret deals with foreign countries, accepting campaign donations that smelled of bribery, conspiring to involve our country in war, or stuffing the ballot box to win elections.
Washington never would have accepted the popular line that the personal lives of public officials are none of the public's business. With Washington, what you saw was what you got; the public man and the private man were one and the same. Representative Richard Henry Lee's eulogy correctly stated: "The purity of his private character gave effulgence to his public virtues."
Washington wanted our nation to be bound by the same rules of honor and honesty that should bind individuals. In his Farewell Address he reminded us: "I hold the maxim no less applicable to public than to private affairs that honesty is always the best policy."
The famous story about not telling a lie about chopping down the cherry tree has been demoted in modern times to apocryphal status, but we have the record that, as a schoolboy, Washington wrote in his copybook, "Labor to keep alive in your breast that little spark of celestial fire -- conscience."
With almost no formal education, Washington educated himself by reading. He was not an eloquent speaker, having no special flair with words or his generation's equivalent of the 20th century sound-bite. Nevertheless, all the college-educated Founding Fathers acknowledged him as their leader.
Washington earned the loyalty of the men who served with him not from stirring their emotions but because of his reliable integrity, incorruptible judgment, and persevering zeal. He certainly didn't retain their enthusiasm for the American cause because of a succession of military victories -- he lost more battles than he won. His leadership and commanding presence enabled him to lead his ragged, ill-clothed, underpaid troops through defeats and retreats toward an improbable victory.
Washington's total dedication to the duty assigned to him of winning our War of Independence gave him personal peace of mind. His will and self-discipline were his rod and staff; he could persevere in the war against England because he was not at war with himself.
Washington's code of living was built on the principles of conduct he regarded as the code of gentlemen, laboriously handwritten as a teenager in his 110 Rules of Civility and Decent Behavior. The gentleman's code was not founded on love and compassion, but on honesty, duty, truth, respect for others, courtesy, and justice, which demanded that he do his utmost and in return receive what he had earned. What he was, he made himself by will, effort, self-discipline, ambition, and perseverance.
One of my treasured possessions is an original sculpture of Washington on horseback at the battle of Monmouth in 1778. It captures a moment during the Revolution when his leadership was put to its severest test. Finding his advance troops in full retreat because of a traitorous officer, Washington galloped through his frightened regiments and saved the day by turning them around and leading them forward to attack the British.
Late in life, Washington himself told an old friend his own explanation of his remarkable success in accomplishing what seemed impossible in the American Revolution. He said he "always had walked on a straight line." As a youth, he acquired a positive love of the right, and he developed an iron will to do always what is right and honorable.
Today, when there seem to be so few heroes, George Washington is a man for all seasons. He had the strength he needed for the long and dangerous journeys of his incredible life because he always walked that "straight line."
George Washington warned against allowing politicians or judges to take unconstitutional actions, no matter how well intentioned: "If in the opinion of the people the distribution or modification of the constitutional powers be in any particular wrong, let it be corrected by an amendment in the way which the Constitution designates. But let there be no change by usurpation; for though this, in one instance, may be the instrument of good, it is the customary weapon by which free governments are destroyed."
Washington's advice about the conduct of foreign policy is particularly apt today. He encouraged us to extend our commercial relations with foreign nations, but "to have with them as little political connection as possible": "'Tis our true policy to steer clear of permanent alliances with any portion of the foreign world."
Explaining further, Washington warned us: "History and experience prove that foreign influence is one of the most baneful foes of republican government. . . . A passionate attachment of one nation for another produces a variety of evils. Sympathy for the favorite nation, facilitating the illusion of an imaginary common interest, in cases where no real common interest exists, and infusing into one the enmities of the other, betrays the former into a participation in the quarrels and wars of the latter, without adequate inducement or justification. . . . Europe has a set of primary interests, which to us have none, or a very remote relation. Hence, she must be engaged in frequent controversies, the causes of which are essentially foreign to our concerns."
Washington cautioned us to avoid "the accumulation of debt, not only by shunning occasions of expense, but by vigorous exertions in time of peace to discharge the debts which unavoidable wars have occasioned, not ungenerously throwing upon posterity the burden which we ourselves ought to bear." He warned us to avoid "overgrown military establishments, which under any form of government are inauspicious to liberty, and which are to be regarded as particularly hostile to republican liberty."
The famous command given the night Washington crossed the Delaware River, "Put none but Americans on guard tonight," may be only legend. But we read in the Farewell Address this ringing endorsement of patriotism: "The name of American, which belongs to you in your national capacity, must always exalt the just pride of patriotism more than any appellation."
In his Fifth Annual Address to Congress in 1793, Washington showed himself a master military strategist. He gave us the most succinct two-part formula for peace: (a) be ready for war and (b) let it be known that we are ready. "There is a rank due to the United States among nations, which will be withheld, if not absolutely lost, by the reputation of weakness. If we desire to avoid insult, we must be able to repel it; if we desire to secure the peace, one of the most powerful instruments of our rising prosperity, it must be known that we are at all times ready for war."
During the terrible times of the Revolutionary War, Washington repeatedly counseled his troops to put their trust in God. Here is one of his messages: "The time is now near at hand which must probably determine whether Americans are to be freemen or slaves; whether they are to have any property they can call their own. . . . The fate of unborn millions will now depend, under God, on the courage and conduct of this army. . . . Let us therefore rely on the goodness of the cause and the aid of the Supreme Being, in whose hands victory is, to animate and encourage us to great and noble actions."
By the end of the American Revolution, Washington came to believe that a personal God had intervened to save America and that our Revolutionary cause could not have succeeded without the direct intervention of Divine Providence. Washington's years in public life after the Revolution were filled with references to his deep religious faith and its necessity in our public and private lives.
As president of the Constitutional Convention of 1787 which wrote our great United States Constitution, Washington's leadership held together that assemblage of strong-minded men with conflicting sectional interests. One of the few times he spoke during those four hot months in Independence Hall in Philadelphia, he said: "If to please the people, we offer what we ourselves disapprove, how can we afterwards defend our work? Let us raise a standard to which the wise and honest can repair; the event is in the hand of God."
When George Washington took the oath as first President of the United States in 1789, he added this four-word prayer of his own: "So help me God." Those words are still used in official oaths by Americans taking public office, in courts of justice, and in other legal proceedings.
In his first Inaugural Address, Washington acknowledged our country's dependence on God: "It would be peculiarly improper to omit in this first official act, my fervent supplications to that Almighty Being who rules over the universe -- who presides in the council of nations."
President Washington is responsible for making Thanksgiving our unique American holiday. His Proclamation in 1789 made the last Thursday in November "a day of public thanksgiving and prayer" on which Americans should thank Almighty God for granting us "an opportunity peaceably to establish a form of government for their safety and happiness."
You can hardly read an important letter or paper written by Washington without a reference to Providence, and the context makes it clear that Washington believed in a God who is actively engaged in granting benefits and blessings to His people. As his biographer Douglas Southall Freeman concluded, "The war convinced him that a Providence intervened to save America from ruin."
In 1968, Congress enacted the Monday Holiday Law (Public Law 90-363, 82 Stat. 250) to go into effect in 1971. Its sole purpose was to give Americans five guaranteed three-day weekends. The law provided that George Washington's Birthday, which had always been celebrated on February 22, should henceforth be observed with a holiday on the third Monday in February. The law did not change the name of the holiday.
In 1971, President Richard Nixon issued a proclamation calling the third Monday in February Presidents Day. His unauthorized proclamation has no legal effect. Neither his proclamation nor any subsequent action by any President or Congress has ever changed the name of the holiday. But somehow the name Presidents Day stuck and many calendars began to use it. This switch coincided with the period when it became popular to debunk our heroes and deemphasize the history of the American Revolution. You can do your part to maintain George Washington's standing as our greatest American hero by refusing to buy calendars that identify the third Monday of February as Presidents Day instead of by its proper legal name, George Washington's Birthday. The calendar companies should not be allowed to force us to honor all Presidents when there are many of them who don't deserve to be honored.
Rep. Roscoe Bartlett (R-MD) has introduced a bill to require the federal bureaucracy to obey the law and use the term George Washington's Birthday to identify the holiday we observe on the third Monday of February.
However, it's up to the free market to let calendar producers know that we want George Washington restored to his proper day in the year, and that requires action by citizens and organizations.
We know we will be honoring a real hero. Douglas Southall Freeman, who authored a monumental and definitive seven-volume biography of our first President, concluded: "The more I study George Washington, the more am I convinced that the great reputation he enjoyed with his contemporaries and with men of the next generation was entirely justified. He was greater than any of us believed he was."
|Order extra copies of this report online!| | <urn:uuid:72567b19-4c80-40fd-9796-720e0aad15dc> | CC-MAIN-2015-35 | http://www.eagleforum.org/psr/2002/may02/psrmay02.shtml | s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440644059993.5/warc/CC-MAIN-20150827025419-00340-ip-10-171-96-226.ec2.internal.warc.gz | en | 0.974157 | 3,464 | 2.71875 | 3 |
Origami in the Classroom: Where Every Child Counts!
What did Leonado daVinci, Lewis Carroll and Fredrich Froebel all have in common? Origami! From its ancient origins in Japan to its practice by the Moors of Northern Africa during the middle ages and its recent surge in popularity in the United States and around the world, origami has delighted artists, mathematicians, architects, and educators world wide. Learn the basic folds, explore the mystery, geometry and joy of origami.
Folding to Learn/Learning to Fold
As parents and educators, we are always looking for ways to teach and improve basic reading and math skills, develop critical thinking and problem solving, foster cooperation and socialization, and introduce our students to different cultures. An ideal lesson might strive to achieve all of these goals and take a form so appealing that children of all ages are eager to learn.
Kids love origami and learning by doing keeps students' attention as they naturally want to play with and explore objects. By constructing and deconstructing paper manipulatives, activating prior knowledge, and experiencing it first hand, children find learning becomes more meaningful. The shapes and forms they create with paper folding exercises help them apply math concepts and build vocabulary concretely, retain information longer, and bridge the gap between words and their meaning. Learning with understanding is essential to enable students to solve the new kinds of problems they will inevitably face in the future. (Common Core State Standards: Mathematics [CCSS 2010] and National Council of Teachers of Mathematics, Principles and Standards for School Mathematics [NCTM] 2000).
Research has shown that paper folding, particularly in the elementary school years, is a unique and valuable addition to the curriculum. Origami is not only fun, it accommodates a variety of learning styles that helps children develop educational, cultural, and social skills. In Japan, children learn origami at home and in kindergarten.
I was invited to Japan to present Math in Motion: Where Every Child Counts workshops. In my workshops, I connect origami across the curriculum. In the Japanese schools I visited, I observed the teacher supply rooms were stocked with origami paper squares the way our rooms are filled with construction paper. Origami is integrated into their curriculum. Today, many schools in the West integrate origami into the classroom. You can set up an Origami Learning Center in your classroom where children can work in small groups. Provide paper and origami diagrams and books. Many free, printable diagrams can be found at Origami Club.
Origami builds patience, perseverance and precision. It will enhance memory and retention. Paper folding reaches children’s hearts as well as their minds. As students master the art of paper folding, it instills in them a sense of satisfaction from completing a model. As students feel successful, they will transfer it to other areas of learning.
My vision is that origami will be a part of every grade child’s educational experience. I guarantee you that once you share some of these activities together, they will become one of the best parts of your day and you will treasure them for years to come.
Origami isn’t just for Squares
Traditional origami paper is precut into squares ranging in different sizes, colors and patterns. It is usually color on one side and white on the other side. The 6-8 inch paper squares are easier to manipulate and good for beginners. Origami paper is available in most craft stores and online.
Rectangles also provide lots of budget-friendly paper resources and can be transformed into a variety of origami models including hearts, boxes and jumping frogs. (Note: Origami Club has free, printable origami instructions using newspaper and rectangular paper).
Tips for collecting paper:
- Start a recycling resource box in your classroom.
- Let colleagues, friends and families know to save these materials and help contribute to your project.
- Visit travel agencies for brochures and magazines, garage sales for calendars, gift wrapping paper and maps and libraries and bookstores for promotional posters and fliers. Trim to size with a paper cutter or fold as is.
Once students master a model by practicing with recycled paper, they can personalize it by selecting a design that may enhance their project. Nurture children’s creativity. Prior to folding, challenge students’ imagination by asking them to draw their own designs on a piece of paper. They can create one-of-a-kind artwork with computer graphics, clip art and coloring pages from the Internet
The following tips and techniques will provide you with more practical steps to make teaching with origami in your classroom as easy as ichi, ni, san (1, 2, 3)…all it takes is a piece of paper! Try the sample “Heart Lesson Plan”. Write to share your experiences in the comment section below or send photos using origami in the classroom to: firstname.lastname@example.org. It may also be posted on the Math in Motion website.
Ten Teaching Techniques
1. Begin with a simple model. Place yourself where all the students can see your hands and the sample. If not everyone can see you at once, repeat the step for each side of the room. Encourage students to observe your demonstration of each step before they attempt it. Prior to the lesson, teach one or two students in the class to help other students as needed.
2. Choose larger paper to demonstrate. Your sample should be large enough to be seen from the back row, but not too large to manipulate. Precrease your model so that you can pay attention to your class. Highlight the lines on your model using different color markers to indicate the folds so that everyone can see the next step.
3. Fold on a firm surface like a table or a book. Emphasize folding neatly and accurately. The more precise you fold, the nicer it will look. Crease each step sharply at least three times. The sharper you fold, the easier it will be to see and follow the guidelines on the paper to the next step.
4. Try to ensure that your students are quiet and attentive. Students must be able to listen and follow directions in a supportive learning environment.
5. Encourage students to explore the qualitative and quantitative characteristics of the materials and shapes they use. Ask: “What can we say about the shape we see? How does this material feel?” This open-ended question approach encourages students to analyze the figure without the pressure of obtaining one right answer. It also enables the teacher to assess what the class already knows and what they may need to learn.
6. While teaching each step to the class, introduce math concepts and vocabulary so that your students can experience them first hand and learn them in context. Have students identify and label each part of the model on their paper. Younger children can trace the same areas with their fingers as they recite the parts of the figure. Write the words on the board so that they can say it and associate it with the shape.
7. When describing a fold, mention the place where the fold begins and ends, or other “landmarks.” Orient your sample the same way your students are folding. Treat each step as one unit: First identify the present position and orientation of the model, perform the step, and then confirm the new position. Make sure each of your students has performed this step correctly before moving on to the next step. If you sense any uncertainty, repeat your instructions. Try to find a clearer explanation. If a step is challenging, ask students to hold their papers up to check the whole class at the same time. Encourage them to help each other.
8. Avoid folding the student’s model. Frustration and failure may alienate them from trying. Establish that a raised hand signals a sign for help without disturbing others. Help individual students or assign another student to assist them. If you have to perform the step on their model, unfold it and let them try it again. Self-satisfaction is very important. If they are still unable to perform the step, you may need to fold their model to enable them to complete it. Practice makes progress. With practice and patience, they will quickly develop the confidence they need to succeed.
9. Be supportive and nonthreatening in your instructions and corrections. Everyone learns at a different pace. Some students may seem more cautious than others and may be afraid to fail and make mistakes. Give the class as much reassurance and positive encouragement as possible.
10. Have Fun! If you enjoy teaching and learning with origami, your students will too! Remember to be patient with yourself and take your time. Make notes of what works well and what you may need to improve.
May the fold be with you!
Barbara Pearl, is an award winning educator and author of Math in Motion: Origami in the Classroom (K-8) and Whale of a Tale (PreK-2). She has an M.A. in Education from La Salle University where she received the Graduate Faculty award for “Excellence in Academic Achievement and Leadership.” Her background in elementary education and mathematics inspired her to explore strategies that get students and teachers excited about mathematics and learning. Barbara is an adjunct math professor at a college and a featured speaker for the National Council of Teachers of Mathematics. She has presented on National TV for Comcast-On-Demand and is the 3x recipient of the National Library Week Award. As President of the Philadelphia Chapter for Pi Lambda Theta, the Honor Society for Educators, Barbara presents staff development for teachers and is available for student and family workshops. For more information, please contact: email@example.com or call (215) 840.1190. Visit Barbara’s website at: www.mathinmotion.com.
For a free copy of “101 Ways to use Origami in the Classroom,” email: firstname.lastname@example.org.
Additional support is provided by The Norinchukin Foundation, Inc., Chris A. Wachenheim, the Wendy Obernauer Foundation, James Read Levy, and Jon T. Hutcheson.
About Japan: A Teacher’s Resource is generously funded, in part, by a three-year grant from the International Research and Studies (IRS) Program in the Office of Postsecondary Education, U.S. Department of Education (P017A100018). | <urn:uuid:cfa08a59-fe6c-44a3-9e4b-0797de72ae49> | CC-MAIN-2015-35 | http://aboutjapan.japansociety.org/content.cfm/origami_in_the_classroom | s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440644060413.1/warc/CC-MAIN-20150827025420-00284-ip-10-171-96-226.ec2.internal.warc.gz | en | 0.942132 | 2,162 | 3.46875 | 3 |
Budgerigars and a Cockatiel in captivity
Aviculture is the practice of keeping and breeding birds and the culture that forms around it. Aviculture is generally focused on not only the raising and breeding of birds, but also on preserving avian habitat, and public awareness campaigns.
1 Types of aviculture
2 Avicultural societies
3 Avicultural publications
4.1 Wild Bird Conservation Act of 1992
5 See also
Types of aviculture[edit source]
There are various reasons that people get involved in aviculture. Some people breed birds to preserve a species. Some people breed parrots as companion birds, and some people breed birds as a business.
Aviculture is the practice of keeping birds in captivity using controlled conditions, normally within the confines of an aviary, for business, hobby, research & conservation purposes and food as poultry and turkey, game.
Some important reasons for Aviculture are: propagating pairs of birds to preserve the species because many avian species are at risk due to habitat destruction and natural disasters. Aviculture encourages conservation, provides education about avian species, includes research on avian behavior, and provides thousands of pet bird companions for the public.
Since the 1970's aviculture in the U.S. has included exotic bird farming, the business of raising exotic birds for companion animals for the general public. These exotic bird farms have provided domestic raised parrots and other avian species which would otherwise not be available. Large and small exotic bird farms exist in many countries, including the U.S., Europe, S. Africa, Australia, the Mid-East and Asia. Along with the rise of exotic birds kept as pets, other businesses have arisen to provide for the needs of those pet birds: cage and aviary structures, commercial bird feeds, toys and other equipment, and a specialized veterinary practice for avian species: the Association of Avian Veterinarians.
The truest meaning of aviculture, was described by Dr. Jean Delacour, the most dedicated, influential, and highly respected individual in the modern history of aviculture.
"Aviculture - The worldwide business of keeping and breeding numerous species of wild birds in captivity to maintain their numerical status in nature with a view of forestalling their extinction by supplying aviary raised stock."
AVICULTURE HISTORY: 1. 1903. Aviculture was recognized as a leading influence in the Agriculture industry. In 1913 the first and only text book was published for the Universities and schools: Elementary Lessons In Aviculture. LIBRARY OF THE AMERICAN MUSEUM OF NATURAL HISTORY.
2. 1894 Council meeting documented in THE AVICULTURAL MAGAZINE BEING THE JOURNAL OF THE AVICULTURAL SOCIETY FOR THE STUDY OF BRITISH AND FOREIGN BIRDS IN FREEDOM AND CAPTIVITY.
1894 council gave executive orders to the following "A word as to our name. It seems 'desirable and even necessary, to
invent or acclimatize a word which shall denote " a person interested in the
keeping and breeding of l)irds," and AvicuUurlst (being analogous to
Horticulturist) will do perhaps as well as another. If any one will suggest
a better, we shall be glad to adopt it — till then, we beg to subscribe our-
Avicultural societies[edit source]
There are avicultural societies throughout the world, but generally in Europe, Australia and the United States, where people tend to be more prosperous, having more leisure time to invest. The first avicultural society in Australia was The Avicultural Society of South Australia, founded in 1928. It is now promoted with the name Bird Keeping in Australia. The two major national avicultural societies in the United States are the American Federation of Aviculture and the Avicultural Society of America, founded in 1927.
Avicultural publications[edit source]
Like many businesses and education about birds there are many publications catering to aviculture, such as books on species which include pets, books on breeding, and introductory books for parrots and softbills and pigeons and poultry. There are also numerous periodicals, both generalized and specific to types of birds, although they are rarely more specific than "parrot." These periodicals contain articles on breeding, care, companionship, choosing a bird, health effects and usually, several articles on an individual species or genus. Supply companies publish catalogs of products for bird keepers including equipment and aviaries. Their products range from hand-rearing supplies to cages as large as a walk-in aviary. The oldest Avicultural Society in the United States is the Avicultural Society of America, founded in 1927. The ASA produces a critically acclaimed bi-monthly magazine entitled ASA Avicultural Bulletin. The ASA is a 501(3)(c) non-profit organization that focuses on breeding, conservation, restoration and education. Their yearly education conference features notable speakers from around the world.
The Avicultural Society of South Australia (founded in 1928) produces a monthly full-colour magazine called "Bird Keeping in Australia". It deals with all aspects of aviculture in Australia. The ASSA is registered as an educational organization, having the motto: Founded 1928, for the Study, Care, Breeding and Conservation of Birds.
Wild Bird Conservation Act of 1992[edit source]
Aviculture includes Exotic Birds selected and described in the WBCA, 1992. Aviculture also includes the birds that are selected and described to be not included in the WBCA, 1992. Exotic Birds is the terminology used in the WILD BIRD CONSERVATION ACT OF 1992 that are protected.
Avianitarian is the caretaker of the selected and described exotic birds written in the WILD BIRD CONSERVATION ACT OF 1992. (1) A Propagator of exotic birds; (2) a person who has knowledge of, ability to practice and perform basic avicultural medical procedures; (3) a person who has knowledge of and skill to artificially incubate and hatch eggs of a species of exotic birds; (4) a person who has knowledge and ability to hand-feed hatchling exotic birds and perform duties required to maintain an avian nursery; (5) a person who has the knowledge and ability to enter and maintain data in a avian record-keeping program(s); (6) a person who has knowledge and ability to manage an aviary or groups of exotic birds including knowledge on how to implement proper quarantine facilities and housing environments for the exotic birds, and (7) a person who is able to provide exotic birds with adequate dietary and nutritional needs. (8) a person who takes the time to observe and attempt to meet the behavioral needs of the exotic birds in their care.
From the common name canary (associated with the Serinus canaria), a song bird is native to the Canary Islands, Madeira, and the Azores. This bird has been kept as a cagebird in Europe from the 1470s to the present, now enjoying an international following. The terms canariculture and canaricultura have been used in French, Spanish and Italian respectively, to describe the keeping and breeding of canaries for some time. English speaking canary breeders are beginning to use the term more commonly.
Avianitarian (Parrot keeper) is a person who specializes in propagating and conserving WBCA selected psittacines species, also on preserving WBCA selected psittacines habitat and public awareness campaigns of the threats to the ongoing existence of ENDANGERED SPECIES ACT parrots worldwide
OCCUPATIONS RELATED TO AVICULTURE The value of a knowledge of domestic birds is not limited to the use which may be made of it in keeping them for profit or for pleasure. Any occupation in which a great many people are interested affords opportunities to combine the knowledge relating to it with special knowledge or skill in other lines, to the advantage of those who are able to do so. Just as the large market or fancy poultry business may develop from a small flock kept to supply the owner's table or to give him a little recreation with a pet, many special occupations grow out of particular interests of aviculturists and Avianitarians. the principal occupations associated with aviculture to devote themselves to lines of work which would qualify them for special service in aviculture.
Judging There is the same difference between selecting one's own birds according to quality and judging the birds of others in competition that there is between performing well in a friendly game and performing well in a competition where the stakes are important and feeling runs high. Journalism. Advertisers for birds, and of goods bought by aviculturists/avianitarians also used advertising mediums through which they could reach buyers at less cost than they could through the agricultural papers.
Art In order to successfully portray birds for critical fanciers, an artist must be something of a bird lover. It is not enough that he should draw or paint them as he sees them ; he must know how to pose birds of different kinds, types, and breeds so that his pictures will show the proper characteristic poses and show the most important characters to their best advantage.
Invention The most important invention used in aviculture is the artificial incubator. Methods of hatching eggs by artificial heat were developed independently by the Egyptians. Operating incubators is a business continued in the same families for centuries.
Experiment Stations As the demands for more accurate information on many avicultural topics increased, many of the stations began to make important poultry, parrot investigations. For this work men specially trained in various sciences were required. As a rule the men that were secured for such work knew very little about poultry when they began their investigations, but it was much easier for them to acquire a knowledge of poultry sufficient for their needs than for persons who had poultry knowledge and no scientific training to qualify for positions as investigators. The field of investigation of matters relating to poultry and parrots is constantly being extended. Proficiency in physics, chemistry, biology, surgery, medicine, and in higher mathematics as far as it relates to the problems of any of the sciences mentioned, will always be in demand for scientific work in aviculture. In the future the most efficient teachers and investigators will be those whose early familiarity with domestic birds has given a greater insight into the subject than is usually possessed by those who take up the study of the subject comparatively late in life.
Manufacturing and commerce. It is much easier to build up a large business in the manufacture or the sale of articles used by poultry, pigeon and parrot keepers than to build up a large business as a breeding facility of domestic birds of any kind. A special field is opening for lawyers familiar with aviculture and with its relations to other matters, just as within a few years the field has opened to teachers and investigators. The possible uses of a knowledge of aviculture for young people who are naturally inclined toward intellectual professions, art, invention, manufacturing, or trading have not been given for the sake of urging students to direct their course especially toward work connected with aviculture. The object is only to show those who take an interest in the subject that it is worth while to cultivate that interest for other reasons, as well as for the profit or the pleasure that may be immediately derived.
See also[edit source]
American Federation of Aviculture
3. Our domestic birds; elementary lessons in aviculture ([c1913])
Author: Robinson, John Henry, 1863-1935
Subject: Poultry; Pigeons; Cage birds
Publisher: Boston, New York [etc.] Ginn and company
Possible copyright status: NOT_IN_COPYRIGHT
Volume: 1, 1894-1895
Identifier: cbarchive_131949_council1894 Mediatype: texts Original_publication: The Avicultural magazine. Publication_type: Journal Article Section: 2 Source_organization: Biodiversity Heritage Library OAI Repository Source_project: Biodiversity Source_url: http://biodiversitylibrary.org/oai Url: http://citebank.org/node/131949 Identifier-access: archive.org/details/cbarchive_131949_council1894 Identifier-ark: ark:/13960/t9q24xf2z Ppi: 300 Ocr: ABBYY FineReader 8.0
5. Author: Avicultural Society
Volume: 12, ser. 3, 1921
Subject: Aviculture; Birds; Cage birds
Publisher: [Ascot, Berkshire, etc., Avicultural Society, etc.]
Possible copyright status: NOT_IN_COPYRIGHT
Call number: b1127265
Digitizing sponsor: Biodiversity Heritage Library
Book contributor: American Museum of Natural History Library
Collection: biodiversity; americanmuseumnaturalhistory; americana | <urn:uuid:d407d050-911d-47ff-a0ee-fb7510cba588> | CC-MAIN-2015-35 | http://www.aviculture.org/id199.htm | s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440645176794.50/warc/CC-MAIN-20150827031256-00340-ip-10-171-96-226.ec2.internal.warc.gz | en | 0.940325 | 2,683 | 3 | 3 |
||This article includes a list of references, but its sources remain unclear because it has insufficient inline citations. (June 2014)|
Émile Bernard, part of an anonymous photograph, c. 1892
28 April 1868
|Died||16 April 1941
|Movement||Post-Impressionism; Synthetism; Cloisonnism|
|Patron(s)||Count Antoine de La Rochefoucauld, Andries Bonger, Ambroise Vollard|
Émile Henri Bernard (28 April 1868 – 16 April 1941) was a French Post-Impressionist painter and writer, who had artistic friendships with Vincent van Gogh, Paul Gauguin and Eugène Boch, and at a later time, Paul Cézanne. Most of his notable work was accomplished at a young age, in the years 1886 through 1897. He is also associated with Cloisonnism and Synthetism, two late 19th-century art movements. Less known is Bernard's literary work, comprising plays, poetry, and art criticism as well as art historical statements that contain first hand information on the crucial period of modern art to which Bernard had contributed.
Émile Henri Bernard was born in Lille, France in 1868. As in his younger years his sister was sick, Émile was unable to receive much attention from his parents; he therefore stayed with his grandmother, who owned a laundry in Lille, employing more than twenty people. She was one of the greatest supporters of his art. The family moved to Paris in 1878, where Émile attended the Collège Sainte-Barbe.
He began his studies at the École des Arts Décoratifs. In 1884, joined the Atelier Cormon where he experimented with impressionism and pointillism and befriended fellow artists Louis Anquetin and Henri de Toulouse-Lautrec. After being suspended from the École des Beaux-Arts for "showing expressive tendencies in his paintings", he toured Brittany on foot, where he was enamored by the tradition and landscape.
In August 1886, Bernard met Gauguin in Pont-Aven. In this brief meeting, they exchanged little about art, but looked forward to meeting again. Bernard said, looking back on that time, that "my own talent was already fully developed." He believed that his style did play a considerable part in the development of Gauguin's mature style.
Bernard spent September 1887 at the coast, where he painted La Grandmère, a portrait of his grandmother. He continued talking with other painters and started saying good things about Gauguin. Bernard went back to Paris, met with van Gogh, who as we already stated was impressed by his work, found a restaurant to show the work alongside van Gogh, Anquetin, and Toulouse-Lautrec's work at the Avenue Clichy. Van Gogh called the group the School of Petit-Boulevard.
One year later, Bernard set out for Pont-Aven by foot and saw Gauguin. Their friendship and artistic relationship grew strong quickly. By this time Bernard had developed many theories about his artwork and what he wanted it to be. He stated that he had "a desire to [find] an art that would be of the most extreme simplicity and that would be accessible to all, so as not to practice its individuality, but collectively…" Gauguin was impressed by Bernard's ability to verbalize his ideas.
1888 was a seminal year in the history of Modern art. From October 23, until December 23 Paul Gauguin and Vincent van Gogh worked together in Arles. Gauguin had brought his new style from Pont-Aven exemplified in Vision after the Sermon: Jacob Wrestling with the Angel, a powerful work of visual symbolism of which he had already sent a sketch to van Gogh in September.
He also brought along Bernard's Le Pardon de Pont-Aven which he had exchanged for one of his paintings and which he used to decorate the shared workshop. see in: (ref. Druick 2001) This work was equally striking and illustrative of the style Émile Bernard had already acquainted van Gogh with when he sent him a batch of drawings in August, so much so that van Gogh made a watercolor copy of the "Pardon" (December 1888) which he sent to his brother, to recommend Bernard's new style to be promoted. The following year van Gogh still vividly remembered the painting in his written portrait of Émile Bernard in a letter to his sister Wil (Dec.10,1889):"...it was so original I absolutely wanted to have a copy."
Bernard's style was effective and coherent (see:woman at haystacks,) as can also be seen from the comparison of the two "portraits" Bernard and Gauguin sent to van Gogh at the end of September 1888 at the latter's request: self-portraits -at Gauguin's initiative- each integrating a small portrait of the other in the background. (ref. Druick 2001)
One of Émile Bernard's drawings from the August batch ("...a lane of trees near the sea with two women talking in the foreground and some strollers" – Vincent van Gogh in a letter to Bernard – Arles 1888) also appears to have inspired the work van Gogh and Gauguin did on the Allée des Alyscamps in Arles.
In 1891 he joined a group of Symbolist painters that included Odilon Redon and Ferdinand Hodler. In 1893 he started traveling, to Egypt, Spain and Italy and after that his style became more eclectic. He returned to Paris in 1904 and remained there for the remainder of his life. He taught at the École des Beaux-Arts before he died in 1941.
- "[…] this creative, avant-garde young man destroyed himself in a fight against that same avant-garde he had helped to create. His rivalry with Gauguin led him out of spite along a different path: classicism. This change took place when he was living in the Middle East, in a period of great crisis. But the fact remains that the young Bernard played an essential part as an initiator for Gauguin, and that he was the inventor of a new artistic vision."
Theories on style and art: Cloisonnism and Symbolism
Bernard theorized a style of painting with bold forms separated by dark contours which became known as cloisonnism. His work showed geometric tendencies which hinted at influences of Paul Cézanne, and he collaborated with Paul Gauguin and Vincent van Gogh.
Many say that it was Bernard's friend Anquetin, who should receive the credit for this "closisonisme" technique. During the spring of 1887, Bernard and Anquetin "turned against Neo-Impressionism." It is also likely that Bernard was influenced by the works he had seen of Cézanne. But Bernard says "When I was in Brittany, I was inspired by "everything that is superfluous in a spectacle is covering it with reality and occupying our eyes instead of our mind. You have to simplify the spectacle in order to make some sense of it. You have, in a way, to draw its plan."
"The first means that I use is to simplify nature to an extreme point. I reduce the lines only to the main contrasts and I reduce the colors to the seven fundamental colors of the prism. To see a style and not an item. To highlight the abstract sense and not the objective. And the second means were to appeal to the conception and to the memory by extracting yourself from any direct atmosphere. Appeal more to internal memory and conception. There I was expressing myself more, it was me that I was describing, although I was in front of the nature. There was an invisible meaning under the mute shape of exteriority."
Symbolism and religious motifs appear in both Bernard and Gauguin's work. During the summer of 1889, Bernard was alone in Le Pouldu and began to paint many religious canvasses. He was upset that he had to do commercial work at the same time that he wanted to create these pieces. Bernard wrote about his relationship with the style of symbolism in many letters, articles, and statements. He said that it was of a Christian essence, divine language. Bernard believed that it "It is the invisible express by the visible," and those previous attempts of religious symbolism failed. That period of symbolism represented the nature of beauty, but did not find the truth in the beauty. Art until the renaissance was based on the invisible rather than the visible, the idea, not the shapes or concrete. The history of the painting of symbols was spiritual. Everything, meaning symbols, were forgotten with the paganist ideas and doctrines. That is what Bernard was attempting to accomplish with the rebirth of symbolism in 1890. In his idea of the new symbolism, he concentrated on maintaining a grounded art, more authentic in Bernard's mind meant reducing impressionism, not creating an optical trip like Georges-Pierre Seurat, but simplifying the actual symbol.
His concept was that through ideas, not technique, the truth is found.
It was always Émile Bernard's great frustration that Paul Gauguin never mentioned him as an influence on pictorial symbolism (see for instance his own notes attached to the Belgian edition (1942) of his selected letters, published shortly after his death). In 2001/2002 The Art Institute of Chicago and the Van Gogh Museum, Amsterdam held a joint exhibition:Van Gogh and Gauguin:The Workshop of the South that put Émile Bernard's contribution in perspective. (ref. Druick 2001)
One of Émile Bernard's students was the Swedish painter Ivan Aguéli.
- Au Palais des Beaux-Arts. Notes sur la peinture
- Le Moderniste I/14, 27 July 1889, pp. 108 and 110
- Paul Cézanne
- Les Hommes d'aujourd'hui, no. 387
- Vincent van Gogh
- Les Hommes d'aujourd'hui, no. 390, (1891)
- reprinted in: Lettres & Recueil (1911), pp. 65–69
- Néo-traditionnistes: Vincent van Gogh
- La Plume III/57, 1 September 1891, pp. 300–301
- Charles Filliger (!)
- La Plume III/64, 15 December 1891, p. 447
- Vincent van Gogh
- Mercure de France VII/40, April 1893, pp. 324–330
- reprinted in: Lettres & Recueil (1911), pp. 45–52
- Mercure de France VII/44, August 1893, pp. 303–305
- reprinted in: Lettres & Recueil (1911), pp. 53–57
- Avant-propos pour le premier volume de la correspondance de Vincent
- dated 10 June 1895
- first published in: Lettres & Recueil (1911), pp. 59–63
- Notes sur l'école dite de "Pont-Aven"
- Mercure de France XLVIII, December 1903, pp. 675–682
- Julien Tanguy dit le "Père Tanguy"
- Mercure de France LXXVI/276, 16 December 1908, pp. 600–616
- Lettres de Vincent van Gogh à Émile Bernard & Recueil des publications sur Vincent van Gogh faites depuis son déces par Émile Bernard, précédées d'une preface nouvelle par le même auteur, Ambroise Vollard, éditeur, Paris, 1911, pp. 1–43
- La méthode de Paul Cézanne. Exposé critique
- Mercure de France CXXXVIII/521, 1 March 1920, pp. 289–318
- Une conversation avec Cézanne
- Mercure de France CXLVIII/551, 1 June 1921, pp. 372–397
- Souvenirs sur Van Gogh
- L'Amour de l'Art, December 1924, pp. 393–400
- Souvenirs sur Paul Cézanne: une conversation avec Cézanne, la méthode de Cézanne. Paris: Chez Michel, 1925.
- Louis Anquetin
- Gazette des Beaux-Arts VI/11, February 1934, pp. 108–121
- Le Symbolisme pictural, 1886–1936
- Mercure de France CCLXVIII/912, 15 June 1936, pp. 514–530
- Souvenirs inédits sur l'artiste peintre Paul Gauguin et ses compagnons lors de leur séjour à Pont-Aven et au Pouldu
- Nouvelliste du Morbihan, Lorient, (1939)
- Note relative au Symbolisme pictural de 1888–1890
- first published in:
- reprinted in: Lettres à Émile Bernard, Editions de la Nouvelle Revue Belgique, Brussels 1942, pp. 241–257
His correspondence with other artists is of great art historical interest. Van Gogh, Gauguin, and Bernard traded ideas and art. Many letters sent from van Gogh and Gauguin to Bernard give historians a better idea of the artists lives and connection to their artwork.
- Lettres à Émile Bernard de Vincent van Gogh, Paul Gauguin, Odilon Redon, Paul Cézanne, Elémir Bourges, Léon Bloy, G. Apollinaire, Joris-Karl Huysmans, Henry de Groux, Editions de la Nouvelle Revue Belgique, Brussels 1942
- Neil McWilliam (ed.), "Émile Bernard. Les Lettres d'un artiste (1884-1941)", Les Presses du réel, Dijon, 2012, an edited selection of 430 letters covering the entirety of the artist's career.
Notes, references and sources
- Notes and references
- http://www.eugeneboch.com Eugène Boch a common friend of Vincent van Gogh and Émile Bernard
- This text and the Note following accompanied excerpts from Vincent van Gogh's letters to Bernard and to Theo, his brother, published in the Mercure de France 1893 through 1897. Translated to the German by Margarethe Mauthner, this selection was pre-published by Bruno Cassirer in Kunst und Künstler, Berlin, June 1904 to September 1905, and finally in a bestselling volume.
- Alley, Ronald. The Burlington Magazine, Vol. 133, No. 1056 (Mar. 1991)
- Dorra, Henri: Émile Bernard and Paul Gauguin, Gazette des Beaux-Arts 1955. Vol. 45.
- Druick, Douglas W., and Seghers, Peter Kort: Van Gogh and Gauguin: The Workshop of the South – Art Institute of Chicago Museum Shop, Paperback, 2001
- Luthi, Jean-Jacques: Émile Bernard, Catalogue raisonné de l'œuvre peint, Editions SIDE, Paris 1982 ISBN 2-86698-000-X
- McWilliam, Neil; Karp-Lugo, Laura, and Welsh-Ovcharov, Bogomila: "Émile Bernard. Au-delà de Pont-Aven", Institut national d'histoire de l'art, Paris, 2012
- Morane, Daniel: Émile Bernard 1868–1941, Catalogue de l'œuvre gravé, Musée de Pont-Aven & Bibliothèque d'Art et d'Archéologie – Jacques Doucet, Paris, 2000 ISBN 2-910128-20-2
- Stevens, MaryAnne, et alt.: Émile Bernard 1868–1941, a pioneer of Modern Art / Ein Wegbereiter der Moderne, Waanders, Zwolle 1990 ISBN 90-6630-151-1
- Waschek, Matthias: Eklektizismus und Originalität. Die Grundlagen des französischen Symbolismus am Beispiel von Émile Bernard, PhD Bonn 1990 ISBN 3-89191-342-7
- Welsh-Ovcharov, Bogomila: Vincent van Gogh and the Birth of Cloisonism, Toronto & Amsterdam, 1980
- Signac, 1863-1935, a fully digitized exhibition catalog from The Metropolitan Museum of Art Libraries, which contains material on Bernard (see index)
- Van Gogh Letters to Bernard, The Morgan Library online exhibition (facsimiles and translations). | <urn:uuid:2e6e687c-6b94-4e15-a0a0-1a990241c239> | CC-MAIN-2015-35 | https://en.wikipedia.org/wiki/%C3%89mile_Bernard | s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440644064362.36/warc/CC-MAIN-20150827025424-00045-ip-10-171-96-226.ec2.internal.warc.gz | en | 0.904055 | 3,603 | 2.53125 | 3 |
MACROMOLECULES (CHAPTERS 2 AND 3, LECTURE 4)

1. LIVING ORGANISMS OBEY CHEMICAL AND PHYSICAL LAWS
2. CARBON-CONTAINING MOLECULES ARE STABLE
3. FUNCTIONAL GROUPS
4. WATER
5. SYNTHESIS BY POLYMERIZATION
6. CELLS CONTAIN FOUR MAJOR FAMILIES OF MACROMOLECULES

What I study: anti-tumor immunity. Immune control of cancer: B cell activation, antigenic targets, antigen processing, trafficking, T cell activation, innate immunity. Immune evasion by cancer: T cell dysfunction, APC dysfunction, cytokine dysregulation, inflammation. [Figure: infiltration of CD8+ lymphocytes into tumor sites in breast cancer after a cancer vaccine; red stain is anti-CD8; before and after panels.]

1. LIVING ORGANISMS OBEY CHEMICAL AND PHYSICAL LAWS:
- Life is based overwhelmingly on carbon compounds.
- It depends almost exclusively on chemical reactions that take place in watery solutions and within a relatively narrow range of temperature and pH. Keeping within this range is called homeostasis.
- It is dominated and coordinated by enormous polymeric molecules: chains of chemical subunits linked end-to-end. The unique properties of these long polymeric molecules enable cells and organisms to grow and reproduce and do all the other things that are characteristic of life.
- The chemical processes in cells are tightly regulated.
- It is complex; even the simplest cell is vastly more complicated in its chemistry than any other chemical system known.

[Figure: some allotropes of carbon: a) diamond, b) graphite, c) lonsdaleite, d-f) fullerenes (C60, C540, C70), g) amorphous carbon, h) carbon nanotube.]

2. CARBON-CONTAINING MOLECULES ARE STABLE:
- The stability of carbon-containing molecules rests on the favorable electronic configuration of each carbon atom in the molecule.
- The most significant characteristic of carbon is its tetravalent nature.
- Each carbon atom forms covalent bonds with four other atoms, creating, for example, anything from short to long chains or ring compounds.
- When only hydrogen atoms are used to satisfy the valence requirements, the compounds are called HYDROCARBONS. Hydrocarbons play only modest roles in biology because they are essentially insoluble in water. The exception is found in membranes, which are very important to biology and whose interior is a non-aqueous, hydrophobic environment formed by the long hydrocarbon tails of phospholipid molecules. The hydrocarbon tails of phospholipid molecules are "water-hating."

3. FUNCTIONAL GROUPS:
- Most biological compounds contain atoms of oxygen, nitrogen, phosphorus, or sulfur (in addition to carbon and hydrogen).
- These are typically part of FUNCTIONAL GROUPS conferring water solubility and chemical reactivity. Common functional groups include carboxyl and phosphate groups, amino groups, and hydroxyl, sulfhydryl, carbonyl, and aldehyde groups.

4. WATER:
- Water is important because of its critical role as the UNIVERSAL INORGANIC SOLVENT in biological systems.
- It is the most abundant component of cells. (It is important to note that some organisms have adapted to survive long periods of desiccation.)
- Polarity is the key to its temperature stability and cohesive properties.
- Water molecules are triangular, not linear; the two hydrogen atoms are bound to the oxygen at an angle of 104.5 degrees.
- The molecule is not charged, but the electrons are unevenly distributed. The oxygen at the head is ELECTRONEGATIVE (it tends to draw electrons toward it, giving that end of the molecule a partial negative charge), leaving the other end with a positive charge around the hydrogen atoms.
This charge separation gives the water molecule its POLARITY.

[Figure: water molecules closely associate with ions, forming a tight shell around them. This is why ions do not diffuse (or have a very difficult time diffusing) across membranes.]

- WATER IS AN EXCELLENT SOLVENT: the most important property of water is its ability to solubilize polar materials of great variety. Most molecules in cells are polar and interact with water. Polar molecules thus dissolve readily in water and are called HYDROPHILIC; examples are sugars, organic acids, and some amino acids. Non-polar molecules are much less soluble in water and are called HYDROPHOBIC; examples are lipids and most proteins.

5. SYNTHESIS BY POLYMERIZATION:
- Most cellular structures are made up of ordered arrays of linear polymers, i.e., macromolecules: proteins, nucleic acids, polysaccharides, lipids.
- Not all macromolecules are linear; some branching occurs, such as in polysaccharides.
- Macromolecules are made up of repeating subunits (MONOMERS). For example, glucose in cellulose or glycogen, amino acids in proteins, and nucleotides in nucleic acids.
- Monomers are added onto one end of a growing polymer chain.
- The subunits are added in a particular order, or sequence.
- Macromolecules themselves are used as building blocks for the formation of larger structures.
- There are four kinds of macromolecule based on FUNCTION:
  informational: nucleic acids (DNA, RNA: coding information), proteins (recognition, signals), oligosaccharides (recognition), lipids (signals);
  storage: polysaccharides (starch, glycogen), lipids (triglycerides);
  structural: polysaccharides (cellulose, chitin), proteins (cytoskeleton);
  functional: proteins (enzymes).
- Macromolecules are synthesized by stepwise polymerization of monomers. Generally the stepwise polymerization of similar or identical monomers proceeds as follows:
  i. Addition of each monomer occurs with the removal of a water molecule (a CONDENSATION REACTION).
  ii. Monomers must be present in an "activated" form.
  iii. The synthesis of polysaccharides, proteins, and nucleic acids requires an input of energy. This occurs through the consumption of high-energy nucleoside triphosphates that activate each monomer before its addition to the growing polymer chain. THESE ARE ANABOLIC REACTIONS. ATP and NADH (and NADPH) are the most important of the activated carrier molecules. Activated carriers store energy in an easily exchangeable form: ATP stores energy as a readily transferable chemical group; NADH (and NADPH) store energy as high-energy electrons.
  iv. Macromolecules typically have an inherent directionality: the two ends of the polymer chain are chemically different from each other.

6. CELLS CONTAIN FOUR MAJOR FAMILIES OF MACROMOLECULES:

[Figure: the composition of a cell (bacterial or animal); a limitless variety of polymers can be built from a small set of monomers (subunits).]

a. PROTEINS: covered in the next two lectures.

b. NUCLEIC ACIDS
- Nucleic acids store, transmit, and express genetic information.
- DNA (deoxyribonucleic acid) is the primary repository of genetic information, and RNA (ribonucleic acid) has several roles in the expression of DNA information during protein synthesis (messenger RNA, transfer RNA, ribosomal RNA).
- Nucleotides consist of a five-carbon sugar, one or more phosphate groups, and a nitrogen-containing aromatic base. The sugar is ribose for RNA or deoxyribose for DNA.
(Note that there is less variety in the number of nucleotides than in the number of amino acids.)
- The base may be either a purine or a pyrimidine.
- DNA contains the purines adenine (A) and guanine (G) and the pyrimidines cytosine (C) and thymine (T).
- RNA also has adenine, guanine, and cytosine but contains the pyrimidine uracil (U) in place of thymine.
- A base joined to the sugar is a nucleoside; with the phosphate group(s) added it is a nucleotide.
- Nucleotides are the building blocks of DNA and RNA.
- Nucleotides can also act as short-term carriers of chemical energy (ATP) and as signaling molecules (GTP). The three phosphates of ATP are linked by two phosphoanhydride bonds; hydrolysis of these phosphate bonds releases large amounts of chemical energy.
- The double helix consists of two complementary chains of DNA twisted together around a common axis to form a right-handed helical structure that resembles a circular staircase.
- The chains are oriented in opposite directions along the helix, one running in the 5' to 3' direction and the other in the 3' to 5' direction (antiparallel). Adjacent nucleotides within a chain are joined by phosphodiester bonds.
- The two strands are also complementary. That is, each base in one strand forms specific hydrogen bonds with the base in the other strand directly across from it. Each adenine must be paired with a thymine and each guanine must be paired with a cytosine.

[Figure: Photograph 51, Rosalind Franklin's X-ray diffraction image of DNA, which informed James Watson and Francis Crick's 1953 double-helix model; Franklin died of ovarian cancer at age 37.]

c. POLYSACCHARIDES
- Polysaccharides are organic molecules made of sugars.
- They play no known informational role in the cell (though they are involved in cell-cell recognition) and usually consist of a single kind of repeating unit or an alternating pattern of two kinds.
- The major polysaccharides in higher organisms are storage molecules like starch and glycogen and structural molecules like cellulose and chitin.
- The monomers are monosaccharides (single sugars). Most sugars have between three and seven carbon atoms. The single most common sugar in the biological world is glucose (C6H12O6); the energy stored in its chemical bonds is harvested in cellular respiration.
- Disaccharides consist of two monosaccharides linked covalently. Common disaccharides are maltose (two glucose units), lactose (milk sugar, composed of glucose and galactose), and sucrose (common table sugar, composed of glucose and fructose).
- Oligosaccharides consist of seven to ten monosaccharides.
- Polysaccharides perform either storage or structural functions. Glycogen, for example, is highly branched; it is stored in the liver (as a source of glucose and to maintain blood sugar levels) and in muscles (as a fuel source to generate ATP). Starches are the storage polysaccharides of plants.
- Cellulose is the best-known example of a STRUCTURAL POLYSACCHARIDE and is found in plant cell walls. More than half of the carbon in higher plants is present in cellulose. Like starch and glycogen, it is a polymer of glucose, but the repeating monomer is β-glucose, which has nutritional implications: we do NOT have the enzymes to break these β bonds, so we can't go out and eat a tree for nutritional purposes, while we can hydrolyze the α bonds of starch. So... α bonds for storage; β bonds for structure.
- Functions of sugars:
  * Production and storage of energy: the monosaccharide glucose is a key energy source for cells.
  * Mechanical support: components of the extracellular matrix; cellulose of plant cell walls; chitin of insect exoskeletons and fungal cell walls.
  * Molecular recognition: small oligosaccharides can be covalently attached to proteins to form glycoproteins and to lipids to form glycolipids; both glycoproteins and glycolipids are found in cell membranes, and the sugar side chains on these molecules are recognized by other cells. Example: the human blood groups, termed A, B, AB, and O.

d. LIPIDS
- Most lipids are not formed by this kind of stepwise polymerization.
- Lipids constitute a heterogeneous group of cellular components that resemble one another more in their solubility properties than in their chemical structures.
- The distinguishing feature is their hydrophobic nature; i.e., they have little (at best) affinity for water but are soluble in nonpolar solvents such as chloroform. Some lipids are AMPHIPATHIC, having both polar and nonpolar regions.
- Lipids are important for energy storage, membrane structure, and specific biological functions (such as transmission of chemical signals into and within the cell, i.e., signal transduction).
- There are 6 main classes of lipids: fatty acids, triacylglycerols, phospholipids, glycolipids, steroids, and terpenes.

i. Fatty acids: a long, unbranched hydrocarbon chain with a carboxyl group at one end.
  * The fatty acid molecule is amphipathic (the carboxyl group renders one end polar, and the hydrocarbon tail is nonpolar).
  * Fatty acids contain 12 to 24 carbon atoms per chain, with 16 and 18 common. Each molecule is generated by the stepwise addition of two-carbon units.
  * Because they are greatly reduced, they hold more potential energy than other biological molecules and yield a great deal of energy when oxidized, which makes them good for energy storage.
  * Fatty acids with only single bonds between the carbons are saturated fatty acids, because every carbon atom in the chain has the maximum number of hydrogen atoms. Unsaturated fatty acids contain one or a few double bonds, which changes the shape of the molecule.

ii. Triacylglycerols (TAGs), also called TRIGLYCERIDES:
  * TAGs consist of a glycerol molecule and three fatty acids. Glycerol is a 3-carbon alcohol with a hydroxyl group on each carbon.
  * The main function of TAGs is to store energy. They contain mostly saturated fatty acids and are usually solid or semisolid at room temperature. THESE ARE FATS (butter, lard, oils).
  * Fats do not mix with water because they have three long nonpolar hydrocarbon tails.
  * An important characteristic of fats is the number of double bonds in the fatty acid tails. Unsaturated fats are liquid at room temperature (e.g., vegetable oils). Saturated fats like butter and lard are solid at room temperature (and contribute to heart disease).

iii. Phospholipids (PLs): important components of cell membranes; they form the critical lipid bilayer structure found in all membranes.

iv. Glycolipids (GLs): derivatives of phospholipids that contain a carbohydrate group instead of a phosphate group. The carbohydrate may contain one to six sugars, which are water soluble, giving the glycolipid an amphipathic nature. GLs are specialized constituents of some membranes and can be sites for biological recognition at cell surfaces.

v. Steroids: the most common is cholesterol, which is found primarily in membranes and is the starting point for the synthesis of steroid hormones. Vitamin D is a steroid. All steroids have the same carbon skeleton made of four linked rings; steroids differ in what is attached to the rings.

vi. Terpenes: joined together in various combinations to produce vitamin A, carotenoid pigments, and coenzyme Q.
The Ransom Center's music holdings include materials found in many different collections throughout the Center. In addition to manuscript and printed scores, libretti, and books on music, the collections contain musicians' correspondence, photographs, artwork, recordings, clippings, programs, and costume and set designs. One of the collection's strengths lies in the illumination of relationships between authors, composers, designers, agents, record producers, film producers, and others involved in the creative process, thus providing a multifaceted look at the history of music-making.
The Ransom Center houses a small collection of Medieval and Renaissance liturgical manuscripts, Bibles, and Books of Hours containing music. The oldest manuscript of musical interest is an eleventh-century codex compiled by Abbot Ellinger of Tegernsee, informally known as the "Bede manuscript," since it begins with Bede's De natura rerum. It contains a poem on the constellations, Ad Boreae partes, attributed to the fourth-century poet Ausonius, which is provided with musical notation (unheightened Sangallian neumes). The penultimate work in the manuscript is De diversis generibus musicorum, an important essay on the spiritual significance of certain musical instruments, spuriously presented as a letter from St. Jerome to Dardanus. Among the manuscript liturgical books and separate leaves of chant is a fifteenth-century German ferial psalter and hymnal important for its relationship to the Gutenberg Bible and early printed psalters.
The so-called Gostling Manuscript, made by the celebrated bass soloist John Gostling in 1706, contains sixty-four anthems by Henry Purcell, John Blow, and their contemporaries. (A facsimile of the Gostling Manuscript was printed by The University of Texas Press in 1977.)
A large number of diverse music-related materials has been cataloged in the Music Manuscripts collection. Dates of the material range from the seventeenth to the twentieth centuries, and the collection includes both music scores and correspondence. Significant among the early works are a score for "Il trionfo di Camilla" by Giovanni Bononcini (1670-1747) and part books for operas by Jean-Baptiste Lully (1632-1687) and his student Pascal Collasse (1649-1709). Other notable items in the collection are letters or manuscripts by Ludwig van Beethoven (1770-1827), Hector Berlioz (1803-1869), Leonard Bernstein (1918-1990), Georges Bizet (1838-1875), Benjamin Britten (1913-1976), Luigi Cherubini (1760-1842), George Gershwin (1898-1937), W. C. Handy (1873-1958), Franz Liszt (1811-1886), Felix Mendelssohn-Bartholdy (1809-1847), Giacomo Puccini (1858-1924), Gioacchino Rossini (1792-1868), Robert Schumann (1810-1856), John Philip Sousa (1854-1932), Sir Arthur Sullivan (1842-1900), Ralph Vaughan Williams (1872-1958), Giuseppe Verdi (1813-1901), and Richard Wagner (1813-1883).
The collection of American musicologist Theodore M. Finney (1902-1978) consists of manuscripts dating from the seventeenth to the nineteenth century, including songs, anthems, glees, opera arias, and music for keyboard, harp and mandolin. The collection is rich in materials documenting the normal daily use of music in church and in the home: for example, volumes of choral music used by cathedral choirs, a French noblewoman's copy book containing popular songs for voice and guitar, and English and Irish country dances. One of the more important items is a contemporary manuscript of Handel's Coronation Anthems (1727).
In 1958 the Center acquired the library of bibliophile, collector, and concert violinist Edwin Bachmann. His library includes first and early editions of music by major western European composers (with particular strengths in Ludwig van Beethoven, Wolfgang A. Mozart, and Frédéric Chopin), early treatises on music, and a few copyist's manuscripts, including works by Joseph Haydn, Mozart and Giovanni Battista Viotti. The collection also contains the piano vocal score from the first edition (1871) of Verdi's Aida, as performed at its debut on December 24, 1871 in Cairo, as well as the libretto for the first Milanese performance of the opera at La Scala in 1872. Important early theoretical works in the Bachmann collection are complemented by those from the library of former University of Texas professor Fritz Oberdoerffer (1895-1979).
In 1969 New York book dealer Hans P. Kraus (1907-1988) donated his collection of approximately thirty-eight hundred Italian libretti dating from 1600 to the early twentieth century. With libretti for operas, cantatas, serenatas, oratorios, dialogues, and passions, and including non-Italian works performed in translation in Italy, it is one of the most important such collections in the United States. The Kraus Collection includes the first libretto printed in Italy, Ottavio Rinuccini's (1562-1621) La Dafne (1600), as well as his L'Euridice from the same year. These and other libretti, such as the 1749 libretto for Handel's Susanna, offer the possibility of research on language use, symbolism, and censorship, as well as the study of historical and political changes, the effects of which are reflected in the libretti.
The Center also has eighty-two early printed hymnals, psalters, and liturgical books from the library of French pianist, conductor, and teacher Alfred Cortot (1877-1962). These items originally formed part of the categories Musique vocale religieuse and Liturgie in Cortot's own catalog of his vast music library, and they illustrate various facets of the development of European sacred music.
Additionally, the Center holds large collections of sheet music dating back to the seventeenth century, as well as songbooks, hymnals, collections of Irish and Scottish folk music, and French contredanse pamphlets.
Nineteenth & Twentieth Centuries
The Carlton Lake Collection contains one of the finest collections of modern French music materials in the world. A group of ninety manuscript scores by Claude Debussy, Gabriel Fauré, Maurice Ravel, Paul Dukas, and Albert Roussel was acquired in 1983, making the Ransom Center the largest American repository of these composers' works. Among the many composers represented in the Lake Collection are:
Georges Auric (1899-1983): manuscript music scores; manuscripts of reviews; correspondence, including forty-seven letters to Valentine and Jean Hugo documenting the music world in the era of Parade; photographs and portrait-drawings.
Hector Berlioz (1803-1869): autograph manuscript, with corrections, of an aria from Benvenuto Cellini.
Ernest Chausson (1855-1899): complete autograph manuscript of Le Roi Arthus, an opera in three acts, written on 194 sheets, with additions and variants on the verso of 123 of these.
Claude Debussy (1862-1918): autograph manuscripts for En Sourdine, Printemps, and the ballet Khamma; the manuscript for his libretto to the uncompleted opera La Chute de la maison Usher; and numerous letters, including significant exchanges with Georges Jean-Aubry and Emile Vuillermoz.
Paul Dukas (1865-1935): autograph manuscripts of six works including La Peri, the Sonate for piano, and L'Apprenti sorcier, in both the orchestral and two-piano versions.
Gabriel Fauré (1845-1924): autograph manuscripts for fifteen compositions, including Le Jardin clos, Masques et bergamasques, and the Barcarolle No. 13.
Franz Liszt (1811-1886): complete manuscript score of Gaudeamus Igitur, and important correspondence with his two daughters, Blandine and Cosima, expressing his concern over their education and their intellectual and artistic development.
Maurice Ravel (1875-1937): autograph manuscripts for eighteen works, including Daphnis et Chloë, Gaspard de la nuit, Introduction et allegro, Ma Mère l'oye, Rapsodie espagnole, Shéhérazade, and the piano trio. Also present are letters to Valentine Hugo, Emile Vuillermoz, and others.
Albert Roussel (1869-1937): autograph manuscripts for fifty compositions, representing sixty percent of his complete output. Included are the scores for the opera Padmâvatî, the ballets Aeneas and Le Festin de l'araignée, the piano concerto op. 36, numerous works for voice, and many instrumental pieces.
Camille Saint-Saëns (1835-1921): forty-three-page autograph manuscript score of La Nuit and numerous letters.
Erik Satie (1866-1925): original, complete score for orchestra of Relâche. Cinéma ("Entr'acte") used by the conductor Roger Desormière at the ballet's première, and letters, including a large correspondence with Valentine Hugo illuminating his relations with Diaghilev, Misia Sert, Picasso, Cocteau, Fargue, and Auric, especially during the years of Parade.
Igor Stravinsky (1882-1971): autograph manuscript, full score, of Stravinsky's orchestration of Chopin's Grande valse brillante; and letters to Jean Cocteau, Valentine Hugo, Man Ray and others.
Giuseppe Verdi (1813-1901): draft of the Act I Scena e Duetto from Alzira.
The Lake Collection also contains the papers of critics Emile Vuillermoz (1878-1960) and Georges Jean-Aubry (1882-1950).
Among the other collections related to French music are the complete papers of author Edouard Dujardin (1861-1949), founder of the Revue wagnérienne, and the library of Edgard (1885-1965) and Louise Varèse (1891-1989).
The music library of Adolfo Betti (1875-1950), leader of the Flonzaley String Quartet, was purchased by the School of Music in 1951 and transferred to the Ransom Center in 1996. Founded in 1902, the Flonzaley was one of the most important early string quartets in America and one of the first to make recordings.
The composer, musicologist and pianist Paul A. Pisk (1893-1990) donated his manuscript scores to the Ransom Center. The gift included his opus numbers 1-100 and 105-113 as well as sixty-one compositions and sketches without opus number. The papers of Robert Haven Schauffler (1879-1964), poet, cellist, and writer on music, consist primarily of letters to him from a wide range of musicians and literary figures.
The papers of composer Nicolas Nabokov (1903-1978), spanning the years 1933-1978, include autograph and duplicated scores by Nabokov as well as correspondence with W. H. Auden, George Balanchine, Igor Stravinsky, and many other composers, performers, and writers.
George Lessner (1904-1997) emigrated to the U.S. from Hungary and composed operas and concert music as well as popular songs and scores for film and musical theater. His papers include manuscripts of his work in all these genres, as well as scores written by his son, Alford Lessner.
The papers of musicologist Eric Walter White (1905-1985) contain a few autograph scores by Michael Tippett (1905-1998), manuscripts and research materials for White's books on Tippett, Benjamin Britten, Stravinsky, and English opera, and extensive correspondence with Tippett, Britten, and other leading musical and literary figures. The Center also has Tippett's library.
The papers of Ross Russell (1909-2000), writer on jazz and founder of Dial Records, contain books, periodicals, correspondence, documents, photographs, recordings, and the business files of Dial Records, famous for issuing some of the most important recordings of Charlie Parker, Dexter Gordon, and other legendary Bebop musicians. The archive also contains materials on classical musicians recorded by Dial, notably seventeen letters to Russell from Arnold Schoenberg (1874-1951).
The Paul Bowles (1910-1999) collection contains the most extensive gathering of his literary and music manuscripts, printed books and music, recordings, and correspondence. (See also American Literature.)
The archive of American composer and teacher Kent Kennan (1913-2003) includes manuscript and published scores, scrapbooks, correspondence with friends, family, and colleagues, articles, yearbooks, awards, diplomas, a 1924 diary, lists of performances, photographs, programs, reviews, and books from his library. There are also articles about and correspondence with his brother, the statesman George Kennan, in the collection.
The papers of novelist and composer Anthony Burgess (1917-1993) include manuscripts for approximately one hundred twenty musical works dating from 1970 to 1993, along with sketches, drafts, and fragments. There are songs, piano pieces, string quartets, guitar quartets, sonatas and other chamber works, choral works, concertos, scores for plays and films, overtures, and other symphonic works. Vocal pieces include settings of texts by James Joyce, D. H. Lawrence, Gerard Manley Hopkins, and T. S. Eliot. Also present are published editions of Weber's Oberon and Berlioz's Enfance du Christ containing Burgess's working notes for his English translations of the texts.
The collection of manuscripts by composer Aaron Copland (1900-1990) includes sketches and the orchestral score for his Lincoln Portrait as well as the manuscripts of early songs.
The Jablonski and Stewart Collection was generated by the collaborative efforts of Edward Jablonski (b. 1922) and Lawrence D. Stewart (b. 1926) during the writing of their books The Gershwin Years (1958) and The Gershwin Years in Song (1973). The collection includes the authors' research materials, correspondence, transcripts of interviews, and other materials relating to the publication of the books, as well as a few original manuscripts of George and Ira Gershwin. There are also Gershwin materials in several other collections at the Center, including the painting made by Siqueiros. (See Latin American Studies.)
Daniel Catán (1948-2011) is best known for his five completed operas, four of which have had highly successful performances in the U.S. His papers include the scores for his operas and for a large number of his other compositions, as well as other materials related to his professional career.
The papers of Peter Garland (b. 1952), composer and founder of Soundings Press, which specialized in avant-garde American music, include extensive correspondence with Conlon Nancarrow, Dane Rudhyar, Harry Partch, and others.
The Performing Arts Collection houses popular sheet music, material on American musical theater and British music hall shows, photographs of popular and classical performers and ensembles, and material relating to ballet and opera, including set and costume designs as well as actual costumes. There is material relating to George Gershwin, Cole Porter, Burl Ives, John Philip Sousa, Fred Waring, and Paul Anka.
The papers of American librettist and lyricist Harry Bache Smith (1860-1936) include correspondence with Victor Herbert, Reginald DeKoven, Irving Berlin, Jerome Kern, and others, as well as over fifty manuscripts of his plays and librettos, spanning the years 1902-1934.
A small collection of correspondence between songwriters Anelu Burns and Madelyn Sheppard and film producer Harry H. Poppe illuminates the world of songwriting for films in 1919 and 1920.
The Opera Collection (1800s-1990) consists of biographical holdings on operatic performers. The careers of approximately one thousand performers are documented with photographs, clippings, prints, programs, and playbills. The collection also includes production photographs relating to operatic works produced for the American stage, and materials documenting the history of prominent opera companies in the United States, as well as a selection of European companies.
The Film Collection houses the David O. Selznick (1902-1965) archive, which contains manuscript full scores, and conductor and orchestral parts for twenty-two films produced between 1937 and 1946. These include Gone With the Wind, A Star is Born, Nothing Sacred, Duel in the Sun, and Spellbound. (See also Film & Television.)
Few trade policies engender more bitterness and international ill will than the U.S. antidumping law. For many years, that law has been the weapon of choice among domestic producers seeking to quell import competition. While defenders of the antidumping regime point to its use as a means for redress of unfair trade, scrutiny of the law in practice exposes the fallacy—even irony—of that justification.
In reality, administration of the antidumping law is entirely divorced from the supposed theoretical justifications articulated by its defenders. Furthermore, it is fraught with methodological distortions that routinely exaggerate and even fabricate dumping margins. The result is to cripple normal, healthy import competition and injure downstream U.S. industries and consumers.
Possibly the most egregious distortion is the practice known as “zeroing.” Its application is a significant cause of the systemic overestimation of dumping margins and subsequent application of inflated antidumping duties.
To appreciate the impact of zeroing, it is important to understand how the U.S. Department of Commerce calculates dumping margins. In a typical antidumping investigation, DOC calculates weighted-average net prices for each product sold in the United States. It then compares each of those U.S. prices to the product’s normal value, which can be calculated a number of different ways but is ideally the weighted-average net price of the most similar product sold in the home market. Zeroing is introduced after the comparison of the U.S. price and normal value.
When normal value is higher than the U.S. price, the difference is treated as the dumping amount for that sale or that comparison. When, however, the U.S. price is higher, the dumping amount is set to zero rather than its calculated negative value. All dumping amounts are then added and divided by the aggregate export sales amount to yield the company’s overall dumping margin. Zeroing thus eliminates “negative dumping margins” from the dumping calculation. In so doing, it can create dumping margins out of thin air.
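In symbols (the notation here is mine, not DOC's): if $N_i$ is the normal value, $P_i$ the U.S. net price, and $q_i$ the quantity for comparison $i$, the zeroed margin is

$$\text{margin}_{\text{zeroed}} = \frac{\sum_i \max(N_i - P_i,\, 0)\, q_i}{\sum_i P_i\, q_i},$$

whereas an unzeroed calculation would sum $(N_i - P_i)\, q_i$ directly, allowing negative comparisons to offset positive ones.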
Table 1 presents comparisons of the average prices of five products in the two markets. Each product is sold at identical net prices in both markets with the exception of Product 1 and Product 5: Product 1 is sold for $0.50 less in the home market than in the U.S. market, and Product 5 for $0.50 more. The unit margin is the amount of dumping calculated for each unique comparison. The arithmetic sum of the individual dumping margins (total margin) is zero because the price differences for Products 1 and 5 cancel each other out. But this is not how DOC calculates dumping.
Rather, the negative dumping margin on Product 1 is set equal to zero and is thus denied any impact on the overall margin. Thus, by engaging in zeroing in this example, the DOC would find a dumping margin of 10 percent (the sum of the total PUDD divided by the sum of the total value) despite the lack of any difference in overall price levels between the two markets.
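The arithmetic is simple enough to replay in a few lines of awk. The sketch below is purely illustrative, not DOC's actual program; the input file comparisons.txt and its three-column layout (normal value, U.S. price, quantity) are invented for the example.

awk '{
    d = ($1 - $2) * $3          # dumping amount for this comparison
    if (d > 0) pudd += d        # zeroing: negative amounts contribute nothing
    value += $2 * $3            # aggregate export sales value
} END {
    printf "margin with zeroing: %.1f%%\n", 100 * pudd / value
}' comparisons.txt

Accumulating d unconditionally instead gives the unzeroed margin; on the Table 1 figures, that version reports zero while the zeroed version reports the 10 percent margin described above.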
Consider the results of 18 actual U.S. dumping determinations in Table 2. Using actual case data and the DOC’s dumping calculation computer programs, it was possible to calculate the actual effects of zeroing in these particular cases. In 17 of the 18 determinations, the dumping margin was inflated by zeroing. In 5 of the cases, the overall dumping margin would have been negative. On average, the dumping margins in the 17 cases would have been 86.41 percent lower if zeroing had not been employed.
Certainly, the impact of zeroing varies from case to case. If every comparison generates a positive dumping margin, then the prohibition of zeroing will have no impact. But if there are many comparisons generating negative margins, or if there are only a few generating large negative margins, the prohibition of zeroing can have a very substantial impact on the amount of antidumping duties ultimately applied.
On April 13, 2004, a WTO dispute panel ruled against the U.S. practice of zeroing in a case brought by Canada involving softwood lumber. The panel found that “the United States has violated Article 2.4.2 of the AD Agreement by not taking into account all comparable export transactions when DOC calculated the overall margin of dumping as Article 2.4.2 requires that the existence of margins of dumping has to be established for softwood lumber on the basis of a comparison of the weighted-average-normal value with the weighted average of prices of all comparable export transactions, that is, for all transactions involving all types of products under investigation.” By setting equal to zero the dumping margins of those comparisons where the average export price exceeded the average normal value, the DOC failed to take into account all comparable export transactions.
The WTO’s ruling in the lumber case is not surprising since the practice of zeroing had already been found to violate the WTO Antidumping Agreement in a previous case brought by India against the European Union involving bed linen. In that case, the WTO Appellate Body ruled in March 2001 that the EU’s practice was WTO-inconsistent for the same reason. The European Union has since changed its practice as a consequence of the Appellate Body’s ruling, but loopholes remain.
The U.S. zeroing methodology is also under WTO attack from another quarter. In February 2004 the European Union requested the formation of a WTO panel to hear its complaint about 31 different U.S. antidumping cases in which zeroing had been used. The EU seems to be approaching this complaint particularly fastidiously, apparently—and rightly—concerned that the United States will exercise every contingency available to retard antidumping reform. Presumably, the inclusion of 31 different cases in the complaint is designed to nip in the bud any attempts by the United States to respond to WTO indictments about zeroing on a case-by-case basis.
Importantly, the 31 cases include antidumping investigations as well as administrative reviews of existing antidumping orders. The methodologies for calculating dumping are slightly different for each type of proceeding, presenting a potential loophole if the WTO were to rule on only one type or the other. It is important that the WTO issue an unambiguous decision that the practice of zeroing violates the Antidumping Agreement regardless of whether individual-to-individual, individual-to-average, or average-to-average comparisons are used to calculate dumping.
While the United States is likely to appeal the panel’s decision in the Canadian lumber case, it is likely that the Appellate Body will come to the same conclusion that the panel did, just as the Appellate Body did in the case involving Indian bed linens. When it does—or if there is no appeal—the United States should seek to bring its antidumping calculation methodology into conformity with the Antidumping Agreement in an expeditious manner.
The growing list of adverse WTO rulings with which the United States has failed to comply is serving to undermine the integrity of the dispute settlement system. Congressional resistance to repealing or revising the Continued Dumping and Subsidy Offset Act (or Byrd Amendment), which was ruled a violation of both the Antidumping Agreement and the Agreement on Subsidies and Countervailing Measures by a panel and the Appellate Body, is fostering doubts among U.S. trade partners about U.S. commitment to the WTO. Congress’s failure to act on the Foreign Sales Corporation/Extraterritorial Income (FSC/ETI) issue, which was determined to constitute illegal export subsidies by two panels and upheld twice by the Appellate Body, has led to the European Union’s imposing retaliatory tariffs on U.S. exporters. Failure to repeal the Antidumping Act of 1916—and the decision of a U.S. court to award damages under that statute even after it was deemed a violation of the Antidumping Agreement—threatens further damage to the integrity of the dispute settlement process.
Although the United States pushed hard in the Uruguay Round for the creation of a dispute settlement body that would render determinations that would be respected, ironically it is the United States that has been especially dismissive of its findings. The unfortunate implications of this obstructionism are clear. As U.S. Trade Representative Robert Zoellick explained to a congressional subcommittee last month, “Our ability to demand that others follow the trade rules is strengthened when the United States addresses cases we lose.”
For a comprehensive review of many of the glaring flaws of antidumping methodology and a set of related reform proposals, see Brink Lindsey and Dan Ikenson, “Reforming the Antidumping Agreement: A Road Map for WTO Negotiations,” Cato Trade Policy Analysis no. 21, December 11, 2002. Much of the discussion in this bulletin was culled from that report. See also Brink Lindsey and Daniel J. Ikenson, Antidumping Exposed: The Devilish Details of Unfair Trade Law (Washington, D.C.: Cato Institute, 2003).
For an in-depth explanation of how antidumping calculations are performed, see Brink Lindsey and Dan Ikenson, “Antidumping 101: The Devilish Details of ‘Unfair Trade’ Law,” Cato Trade Policy Analysis no. 20, November 26, 2002. See also Lindsey and Ikenson, Antidumping Exposed.
The average normal value is often based on a subset of home market sales: those sold at prices above the average cost of production. Alternatively, it can be based on the cost of production plus allowances for expenses and profit, or on prices in a third-country market.
PUDD is a DOC acronym for Potentially Uncollected Dumping Duties.
Report of the Panel on United States—Final Dumping Determination on Softwood Lumber from Canada, WT/DS264, April 13, 2004, p. 128.
Report of the Appellate Body on European Communities—Antidumping Duties on Imports of Cotton-Type Bed Linen from India, WT/DS141/AB/R, March 1, 2001.
In the EU-Bed Linen case, the Appellate Body concluded that zeroing is WTO-inconsistent because it prevents true average-to-average comparisons as called for by Article 2.4.2 of the Antidumping Agreement. This reasoning leaves open the possibility that zeroing may be permissible when dumping is calculated another way. Indeed, since the agreement explicitly allows individual-to-average comparisons under certain circumstances, and since those comparisons would yield exactly the same results as average-to-average comparisons unless zeroing is employed for the former, there is a plausible argument that zeroing is implicitly permitted under current WTO rules whenever individual-to-average comparisons are allowed. Thus, zeroing may be consistent with Article 2.4.2 as currently worded in targeted dumping cases. That is the EU’s position at present.
See Dan Ikenson, “Byrdening’ Relations: U.S. Trade Policies Continue to Flout the Rules,” Cato Free Trade Bulletin no. 5, January 13, 2004.
Statement of Robert B. Zoellick, U.S. Trade Representative, before the Committee on Appropriations Subcommittee on Commerce, Justice, and State, the Judiciary, and Related Agencies of the United States House of Representatives, March 25, 2004. | <urn:uuid:f091620a-32ba-4f63-b40e-0f5d281709e2> | CC-MAIN-2015-35 | http://www.cato.org/publications/free-trade-bulletin/zeroing-antidumpings-flawed-methodology-under-fire | s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440645348533.67/warc/CC-MAIN-20150827031548-00328-ip-10-171-96-226.ec2.internal.warc.gz | en | 0.941582 | 2,364 | 2.65625 | 3 |
The time program is a handy tool not only to gauge how much time in seconds it takes a program to run, but also to display how much user CPU time and system CPU time were used to execute the process. To understand these values you must grasp how the kernel handles time reporting for the process. For example, timing a quick ls with the shell's time command might produce output like this (the exact numbers will vary):
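time ls

real	0m0.007s
user	0m0.001s
sys	0m0.002s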
The first line, real, is how much real time, or "wall-clock" time, the process took to execute: in this case, 0.007 seconds (7 milliseconds). The second line, user, is how much user CPU time was used. User time is the amount of time the CPU spends performing actions for a program other than system calls. The third value, sys, referred to as system CPU time, is the amount of time the CPU spends performing system calls on behalf of the program. A system call is a request made to the kernel by the program. CPU time is therefore the total of user time plus system time. CPU time will not necessarily equal real (wall-clock) time; when the two differ, total CPU time is typically less than wall-clock time. There can be many reasons for the discrepancy, including the process waiting for another process or for I/O to complete. In the example above, total CPU time is 0.003 seconds whereas wall-clock time is 0.007 seconds.
Most Linux distributions default to using the Bash shell. Bash has a built-in time command that operates as described above but is quite different from the actual time command: its functionality is very limited compared to the external program. The remainder of this article will focus on the actual time command as opposed to the built-in Bash time command.
Most distributions have the time command installed under /usr/bin. Executing:
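which time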
Should output the path to the time command:
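/usr/bin/time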
If no value is returned, chances are you do not have the time command installed (I had this issue on Arch and had to install the time command with sudo pacman -S time; on Slackware it was already installed).
To use the time command as opposed to the Bash built-in you must provide the full path to time:
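/usr/bin/time ls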
Note that the output is quite different:
0.00user 0.00system 0:00.00elapsed 50%CPU (0avgtext+0avgdata 784maxresident)k
0inputs+0outputs (0major+250minor)pagefaults 0swaps
There is a lot more information provided compared to the built-in Bash time command.
The first three items, 0.00user 0.00system 0:00.00elapsed, should look familiar. These values are user CPU time, system CPU time, and wall-clock time respectively. To understand the remaining values better, let us look at how to format the output of the time command. This is done with the -f, or --format=, switch followed by a formatting string. Similar to the date command, the formatting strings are in the form of "%" + variable:
- %E = Elapsed wall-clock time in [hours:]minutes:seconds – note hours will not show unless the process does take over an hour
- %e = Elapsed wall-clock time in seconds only (this option is not available in tcsh).
- %S = System CPU time in seconds
- %U = user CPU time in seconds
- %P = Percentage of CPU that was given to the process – equation is (%U + %S) / %E
- %M = Maximum resident set size of the process during its lifetime in KB. This value is the maximum amount of physical memory (RAM) this process utilized during the life of the process.
- %t = Average resident set size of the process in kilobytes. This is the average amount of physical RAM the process utilized over its lifetime (not available in tcsh).
- %K = Average total (data + stack + text) memory used by the process in kilobytes. In a nutshell:
- data = Global data, variables, etc.
- stack = Where variables are declared and initialized
- text = The actual program
- %D = Average size of the process's unshared data area in kilobytes – this is where data for the process is stored.
- %p = Average size of the process's unshared stack space in kilobytes – this is where variables are declared and initialized (not in tcsh)
- %X = Average size of the process's shared text space in kilobytes – this is where the actual program resides
- %Z = System’s page size – This is the size of a single block of contiguous virtual memory (likely to be 4096 bytes) (not in tcsh)
- %F = Number of major page faults that occurred while the process was running. A major page fault is when the process attempts to access a page that is mapped in the virtual address space but is not available in physical memory, it needs to find a space in physical memory to map the page.
- %R = Number of minor page faults that occurred while the process was running – similar to a major fault, a minor page fault occurs when the page is actually loaded in memory at the time but is not marked in the memory management unit as being loaded. The operating system needs to mark the page as loaded in the memory management unit before it can be accessed.
- %W = The number of times the process was swapped out of main memory
- %c = The number of times the process was context-switched involuntarily. In a multi-processing environment a process may need to be switched out so that another process that needs the CPU can run. The CPU state is saved so the process can be resumed. Involuntary switching can be caused by the application being forced out because its time slice (allotted time to execute) was exceeded.
- %w = The number of times the process was context-switched voluntarily. A voluntary switch may occur because the process was waiting for another process or an I/O action to complete before it could continue.
- %I = Number of file system inputs by the process
- %O = Number of file system outputs by the process
- %r = Number of socket messages received by the process
- %s = Number of socket messages sent by the process
- %k = Number of signals delivered to the process
- %C = Name and command-line arguments of the command being timed (not in tcsh).
- %x = Exit status of the command (not in tcsh).
Now that these values have been defined, the output of:
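/usr/bin/time ls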
from above can be further explained:
0.00user 0.00system 0:00.00elapsed 50%CPU (0avgtext+0avgdata 784maxresident)k 0inputs+0outputs (0major+250minor)pagefaults 0swaps
In a nutshell:
%U %S %E %P (%X + %D %M)k
%I + %O (%F + %R)pagefaults %W
(User CPU Time) (System CPU Time) (Wall-Clock Time) ( (Average size of shared text space) + (Average size of unshared data area) (Maximum Resident Set Size of Process) )k
(Number of File System Inputs) + (Number of File System Outputs) ( (Number of Major Page Faults) + (Number of Minor Page Faults) )pagefaults (Number of times the process was swapped out of memory)
As stated, formatting of the /usr/bin/time command can be done with the -f, or --format=, switch followed by a string including the above variables:
/usr/bin/time -f "%e may not equal %S + %U" ls
The output of this may look something like:
0.04 may not equal 0.00 + 0.00
To emulate something like the output of the Bash built-in time command you could execute:
/usr/bin/time -f "real\t%e \nuser\t%U \nsys\t%S" ls
The output may look like:
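real	0.00
user	0.00
sys	0.00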
Or you could pass the -p, or --portability, switch, which provides the same formatting; for example (timings will vary):
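/usr/bin/time -p ls

real 0.00
user 0.00
sys 0.00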
The output of time can be directed to a file with the -o, or --output=, switch and supplying the file name:
/usr/bin/time -o times.txt ls
This would redirect the output of the /usr/bin/time command to the file times.txt. If you executed the same command again, it would overwrite the original times.txt file. If you want to preserve the original contents of the times.txt file, use the -a, or --append, switch with the -o, or --output=, switch:
/usr/bin/time -o times.txt -a ls
The final switch to discuss is the -v, or --verbose, switch, which will output all the values that /usr/bin/time can report regarding a process.
/usr/bin/time -v tar cvf test.tar.gz lib
Produces output along the following lines (the field labels are fixed; the numbers shown here are illustrative):
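	Command being timed: "tar cvf test.tar.gz lib"
	User time (seconds): 0.01
	System time (seconds): 0.04
	Percent of CPU this job got: 83%
	Elapsed (wall clock) time (h:mm:ss or m:ss): 0:00.06
	Average shared text size (kbytes): 0
	Average unshared data size (kbytes): 0
	Average stack size (kbytes): 0
	Average total size (kbytes): 0
	Maximum resident set size (kbytes): 3248
	Average resident set size (kbytes): 0
	Major (requiring I/O) page faults: 0
	Minor (reclaiming a frame) page faults: 245
	Voluntary context switches: 1
	Involuntary context switches: 3
	Swaps: 0
	File system inputs: 0
	File system outputs: 10240
	Socket messages sent: 0
	Socket messages received: 0
	Signals delivered: 0
	Page size (bytes): 4096
	Exit status: 0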
That is the time and /usr/bin/time commands in a nutshell. Remember, if you want the full feature set of the time command you must provide the full path to it; otherwise you will more than likely invoke the built-in time command of the shell you are running.
- man time
- Wikipedia page on Unix Time
- Wikipedia page on System Call
- Wikipedia Page on Resident Set Size
- Data and Text Segments
- Memory Layout of a C Program
- Wikipedia Page on Page Faults
- Wikipedia Page on Context Switching
- Time Command Examples
The video failed YouTube processing and will be up shortly; use the archive.org link below.
If the video is not clear enough, view it on the YouTube website and select size 2 or full screen. Or download the video in Ogg Theora format:
- Episode 024 – time Ogg Theora Video – Archive.org
It’s one of the great mysteries of human evolution––the disappearance of the Neanderthals, a species of remarkable primates who lived and hunted the mountains and vast plains of Europe and Asia for 200,000 years until one day, some 25,000 years ago, they were gone.
For 150 years scientists have worked up theory after theory to explain their demise; the most prevalent being that they were done in by their own shortcomings––lack of brains, lack of tools, lack of speech––with some generous help from our Homo sapiens ancestors, the Cro-Magnon people.
It’s easy enough to come to that conclusion if you buy into the popular view that Neanderthals were dumb brutes sitting somewhere along the evolutionary chain between a Harry Potter troll and the wrestlers in the WWF. The term Neanderthal itself is a synonym for mean, big and stupid.
But the story of how they met their end is a good deal more complicated than all that, and considerably more mysterious. It’s now clear that though they lived under brutal and unforgiving circumstances, they were themselves neither brutal nor stupid. In fact their brains were slightly larger than ours are today, and their accomplishments remarkable. They managed to survive not one, but three ice ages. The bones of over 400 Neanderthals have been unearthed going back almost 100 years, revealing that at one time or another clans were living as far west as the Iberian Peninsula and as far east as the Altai Mountains in southern Siberia. When the weather grew colder they ventured as far south as the Arabian Peninsula and Gibraltar, and when glaciers receded they receded with them to the mountain ranges of northern Europe. There is no evidence that they ever made their way to Africa, something that makes sense: their bodies were optimized for the cold, and over the past 200 millennia there was plenty of that in Europe and their haunts in Asia.
But if they were so clever and so hardy, why did they fail? Here are three theories.
Theory One: Murder Most Foul.
The story goes this way. The Cro-Magnon people, our direct ancestors (named, like Neanderthals, for the location in Europe where the first of their fossils were discovered), systematically wiped their burly cousins out. To be clear, the Cro-Magnon were us, anatomically modern humans, who migrated north from Africa 40,000 to 50,000 years ago. When we crossed paths, hunting grounds and choice settlements were at stake, and so the Cro-Magnon, with their superior weapons, and possibly their superior planning skills, killed or enslaved whomever was unfortunate enough to get in their way. It wouldn’t have been an all out war in the sense that armies were assembled and clashed, but the damage done to the Neanderthal would have been inexorable with one settlement, tribe or clan after another falling to the new intruders.
There’s not much evidence for warfare or murder in the fossil record, however. We haven’t found ancient killing fields, strewn with the hacked and broken bones of the two human species; no sites where dead Homo sapiens lie next to the skeletons of Neanderthals. But then, maybe, the evidence of battles between competing tribes of the two peoples simply hasn’t deigned to show its archaeological face. Maybe, somewhere in Europe, in a remote mountain forest or beneath a broad river re-routed by the last retreating glaciers lie the bones of lost prehistoric warriors who fell to the invaders from the southern seas. But so far, no such evidence has been found.
Theory Two: Survival of the Fittest.
A second theory is that we out competed the Neanderthal for resources, food and land. The thinking here is we didn’t murder them hand to hand and face to face, we wiped them out them in a long war of attrition by taking control of the best habitats and hunting grounds, and then killing game faster than they could, and in larger numbers. Very slowly, over thousands of years, we drove the already sparse and scattered Neanderthal population into pockets where it became increasingly difficult for them to survive. This might have crippled their ability to band together, weakening them still further, until in the end they simply couldn’t go on.
There is some evidence for this. Neanderthals did become progressively rare as Europe moved into the coldest phase of the last ice age and as modern humans migrated into Europe. Leslie Aiello of University College London suggests that even though Neanderthals were well adapted to frigid climates, they couldn’t survive temperatures below 0°F (-18°C). Their clothing and technology simply weren’t up to it. 30,000 years ago, as temperatures dropped and a new ice age descended, warm pockets of land would have become more scarce. If the Neanderthals retreated to them they may have been trapped and died as these locations themselves grew increasingly cold. It wouldn’t be the first time one species snatched an ecological niche from another. In fact today we continue to wipe out species all around the planet, not as part of a master plan, but simply by doing what we do and living as we live.
Nevertheless, Europe and Asia are immense territories, and it’s difficult to imagine that there wouldn’t have been enough resources to go around. Neanderthal ranges covered millions of square miles. While each clan probably needed several square miles of land to sustain it, much of the land was rich with food and resources and herds of large animals like mammoths and woolly hippopotamuses, deer and bison. Even if the combined numbers of both species reached into the hundreds of thousands, there would seem to have been plenty of space and food and resources to go around. The cold weather could certainly have battered Neanderthals trapped in cold areas, but why wouldn’t those already living in southern Italy, Spain, France and the Mideast have survived?
Theory Three: Love and Lust.
The most intriguing Neanderthal disappearance theory is that if we killed them at all, we killed them with kindness. We neither murdered them, nor out-competed them. We mated with them and, in time, simply folded them into our species until they disappeared, swallowed up in the larger Homo sapiens gene pool. This may explain the origins of the red hair, light skin and freckles some of us sport. Neanderthals, unlike the slim, dark skinned interlopers from the south, would have evolved fairer complexions and lighter hair as a way to extract more vitamin D from the stingy sun of cold, northern latitudes. Spreading these genes around may even have benefited the Cro-Magnon.
It’s fascinating to consider the possibility that we and another kind of human bred a new, third variety. Whether this happened or not is one of the great controversies in paleoanthropology. But there is evidence, and recently very strong evidence.
In 1952 an armful of bones belonging to an adult woman was found lying on the floor of a Romanian cave: a leg bone, a cranium, a shoulder blade and some other fragments. The discoverers didn't think much of the find at the time. How old could the bones be, after all, if they were simply strewn there on the surface of the cave's interior? And so they were relegated to a drawer, where they lay undisturbed for more than half a century.
When a team of researchers from the United States finally began to inspect the old findings, radiocarbon dating revealed the woman hadn't lived recently at all, but last walked the Earth 30,000 years ago. Not only that, but the bones exhibited features that were both Cro-Magnon and Neanderthal. The back of the woman's head, for example, protruded with a Neanderthal-like "occipital bun," her chin was unusually large and her brow more sloped than any belonging to a modern human. And the woman's shoulder blade, her scapula, was narrow, not as broad as ours, another Neanderthal trait. Was she simply a rugged-looking modern human or, as one scientist wryly put it, proof that moderns "were up to no good with Neanderthal women behind boulders on the tundra"?
There have been other similar finds that raise the interesting possibility that we and our big-boned relatives developed more than Platonic relationships. There is the skeleton of a young boy uncovered in Portugal that dates to 24,500 years ago. He is believed to be Neanderthal, with a large jaw and front teeth, foreshortened legs and a broad chest; yet his chin is square, more like ours, and his lower arms were smaller than you might expect. And in another cave in France, scientists have found, not bones, but tools that date back 35,000 years. The location of the tools indicates that for at least 10,000 years both Cro-Magnons and Neanderthals coexisted in this same place. If they could do that, and if they could communicate and cooperate, isn't it likely they also mated?
Strangely enough, this was one of the last areas of Europe where Neanderthals lived before they disappeared. Were they dying out, or was this boy simply archaeological proof that Neanderthals had at last been genetically subsumed into the rising tide of modern humans spreading across the planet?
Maybe. Now there's genetic evidence that we today are a hybrid species. Just last year a scientific consortium headed by the Max Planck Institute for Evolutionary Anthropology completed an analysis of the Neanderthal genome. They compared the ancient DNA with the genomes of five living people of different lineages: French, Han Chinese, Papuan, and members of the Yoruba and San peoples of Africa. The San are, genetically, among the most ancient modern humans on earth. The researchers found that all the genomes from every part of the world except Africa contained 1 to 4 percent Neanderthal DNA. In other words, most of the human race from Europe to the islands of Southeast Asia (and probably farther) is part Neanderthal. What is perplexing about the find, however, is that this DNA test indicates that Cro-Magnons and Neanderthals mated between 50,000 and 80,000 years ago, before they met in Europe. Whether they mated later in subsequent encounters remains unresolved, for now.
So which way did the hardy and quiet white people of the North meet their end? Murder, competition, love? There is no reason why it has to be any one of these. Nature, evolution and human relations are all chaotic and unpredictable, however much we might like it otherwise. When Europeans colonized North and South America, they sometimes befriended the natives, sometimes brutally exterminated them, sometimes raped their women and sometimes fell in love and raised families. Given the nature of human nature, our encounters probably included all of the above. Were the Neanderthals so different from the Cro-Magnons that sex was out of the question? Not likely. Both species were human, and the drive to procreate is strong and primal. Surely there were Romeos and Juliets, even this far back in our history, who found enough common ground during those long, wintry millennia to bed down together. And as a result, nearly all of us can say we have a little Neanderthal in us.
Introduction: The Limits of a Good Idea
It started as a good idea. Rather than taking the path of the old Latin American left, in the form of the guerrilla movement or the Stalinist party, Brazil's Workers' Party (Partido dos Trabalhadores, PT), aided by strong union and social movements, decided to try something new. The challenge was to somehow combine the institutions of liberal democracy with popular participation by communities and movements. The answer eventually became participatory budgeting (PB). Introduced in the city of Porto Alegre in 1989, PB was a highly innovative experiment in co-management and decentralization (Weyh, 2011). It allowed communities of diverse political stripes to democratically manage a small portion of their city's budget. Not only did this result in more and better services for poor communities, it also opened a space where people could learn new democratic skills and build new solidarities. In PB, a virtuous cycle of democracy was unleashed: the more people participated, the more people learned to participate. Add to this a number of poverty-reducing programs at the national level, such as Bolsa Família, and you suddenly had a new path to social transformation: peaceful, gradual and pluralist.
It wasn't long after the PT acquired power at the national level in 2003, however, that cracks in its political-economic model began to show. Agrarian reform, the key demand of one of its most important early allies, the Brazilian Landless Workers' Movement (MST), was effectively dropped from the PT's program. More accurately, the PT re-articulated the MST's demand for agrarian reform by strengthening the productive capacities of existing MST lands, rather than addressing Brazil's highly unequal land ownership structure, in which the top 1 per cent own 50 per cent of the land. In other words, of the three key elements of the MST's program, namely "occupy, resist, produce," the PT opted to act only on the last point. It did so by, for example, opening avenues for the sale of products produced by MST-run cooperatives, as in the case of Cooperdotchi, an agricultural cooperative in the state of Santa Catarina. This re-articulation of the MST's goals has created ongoing conflict between the government and the MST, which has itself re-articulated its demand for agrarian reform as "food sovereignty," focusing instead on the government's alliance with agribusiness (Ferrero, 2012).
The other traditional ally of the PT is the middle class (particularly in the southern states), which has benefited from strong growth and employment in recent years. However, health and education have been badly neglected, and sections of the middle class are feeling the effects. This comes at a time when the government is spending billions of dollars on the 2014 FIFA World Cup. This situation has created a growing cynicism among people, expressed in the popular phrase "imagina na copa" (imagine during the cup!). First heard in middle class circles, the expression is a criticism of problems in transportation, health care or any situation in which there are delays, line-ups or disorganization. The expression soon became more generalized, as people began to perceive the World Cup as the source of the country's growing problems. In other words, if it's this bad now, "imagine during the cup!"
Unions, another traditional bastion of PT support, have also come on board with this anti-cup sentiment, several of them threatening strike action during the tournament next year if their demands are not met now. In addition, indigenous groups are facing displacement around the country as new stadiums and infrastructure are built. For example, in March of this year, an indigenous community was forcibly evicted from an abandoned museum next to the Maracanã, Brazil's most emblematic soccer stadium, located in Rio de Janeiro, which recently underwent a $500-million renovation. Poor communities living in favelas (shanty towns), many of them new supporters of the PT, have also faced similar displacement.
Many people have also grown frustrated with the PT's "strategic alliances" with the right, for example, its decision to allow Marco Feliciano to chair Brazil's Human Rights Commission. To the outrage of the LGBTQ community (and progressive sectors more generally), Feliciano recently led an initiative that encourages gay people to undergo psychological treatment. In short, a variety of sectors supportive of the PT have real reasons for being dissatisfied with the government's handling of not only the World Cup but a number of other issues as well. It seems the PT's model of a new left is reaching its limits.
“Não me representa”
The protests have their most immediate roots in Porto Alegre, and indeed it is here that the contradictions of the PT's model are most evident. To the surprise of many, Porto Alegre elected a right-wing government in 2004, after 16 consecutive years of PT rule. This was a huge blow to the party in the city that hosted the World Social Forum no fewer than five times. Some even began to see the PT as having reached a crisis. After a few years of relative quiet, middle class students began to organize against the privatization of public spaces in the city center as part of preparations for the World Cup. Parallel to this, in 2011, the Comitê Popular da Copa (World Cup Popular Committee) was formed, composed of several groups including the MST. Their goal was to raise awareness and begin organizing against the cup and what they saw as the municipal government's mistaken priorities.
Since 2004, Participatory Budgeting, the PT's signature program, has been noticeably weakened. For example, by June of 2012, only 17 per cent of the money allotted that year to various programs through the city's PB had actually been spent (De Olho). Meanwhile, participation has decreased in recent years. In addition, poor communities near one of the local soccer stadiums, Arena Grêmio, were displaced in preparation for the cup. Lastly, there has been growing discontent about the city's public transportation, resulting in successful mobilizations in April 2013, which managed to stop a 20-cent hike in transit fares. From here, the movement took root in São Paulo, where it grew quickly and was strengthened by the participation of the Subway Workers Union, which was already struggling for a new contract (Costa, 2013). The transit movement is led by Movimento Passe Livre (Free Fare Movement), or MPL. Formed in 2004, the MPL considers itself a horizontal, autonomous and non-partisan movement. Importantly, although non-partisan, the MPL does not consider itself anti-party. Its demand of free public transit has resonated throughout the country, posing a direct challenge to Brazil's oligopolistic transit system (Gibb, 2013).
The magnitude of the mobilizations caught the PT totally by surprise. Indeed, during the first large demonstrations on June 17, there was no visible PT presence in Porto Alegre. The demonstration was organized by Bloco de Luta Pelo Transporte Público (Struggle Block for Public Transit), a popular front bringing together several organizations, and was greatly amplified via social media networks. About five to seven thousand people amassed at the city's prefecture. The mood was confident, energetic and inspiring. Youth between 15 and 25 years of age were the majority. People freely experimented with various chants, including "sem partido" (without a party), "Não nos representam" (they don't represent us), "Brasil acordou" (Brazil woke up), "acabou o amor, Brasil vai virar Turquia" (the love is over, Brazil will turn into Turkey), "vem pra rua" (come to the streets), and "sem violencia" (without violence).
Some people held Brazilian flags with the words "primavera brasileira" (Brazilian spring) written on them, while others held up Turkish flags. Clearly, in addition to the national context, protestors had similar global revolts in mind. Finally, placards demanded better public education, health care and an end to the World Cup. Interestingly, despite the anti-party sentiment, party flags were in plain view, including those of the Socialism and Freedom Party (Partido Socialismo e Liberdade, PSOL), a socialist party to the left of the PT. As with all subsequent demonstrations, protestors were met with police repression, which included the use of teargas and rubber bullets.
Some have noted that the demonstrations contain right-wing and even fascist elements (particularly in the bigger cities), citing examples of vandalism and violence against left-wing parties. However, it would be very premature to say that these elements are becoming a social force. For example, in Porto Alegre, violent groups are a tiny minority, and there were no reports of organized fascist activities. It is true that in some of the bigger cities attacks on left-wing parties have been reported, but at least some of these can be attributed to individuals or small groups that simply reject the presence of any parties on the streets.
It is true that a simplistic rejection of all parties can lead to an apolitical nihilism, which indeed formed part of the sentiment right before the 1964 Brazilian dictatorship. However, we must not immediately equate an anti-party sentiment with right-wing politics, as some have done. It is also worth noting who is making the claim that this is a right-wing movement, namely the PT (at least certain currents within it) and the corporate media, supposed bitter enemies. They each have their own reasons, however. Given that the movement contains anti-party elements (and therefore is not pro-PT), the PT's thinking is, "if they are not with us, then they must be right-wing." For its part, the corporate media's strategy has been to spin the movement as simply anti-corruption and anti-taxes so as to impose its own right-wing agenda and create the conditions for replacing the PT.
The next demonstration took place on June 20, only this time it brought together close to 10,000 people despite persistent heavy rain. Overall, over a million people took to the streets in over 100 cities that night. Unions had a significant presence for the first time in Porto Alegre, including members of SIMPA, representing the city's municipal workers. Also of note is that this time few, if any, party flags were in view. This was also the case during the following demonstrations on June 24th and 27th. However, gone were also the anti-party chants. Interviews revealed that by now the movement had recognized the co-optation attempts by the right-wing media and dropped its simplistic anti-party position. Indeed, protestors recognized that a number of parties had been on the ground floor of organizing. For example, Lucas Monteiro, an important figure in the free transit movement in Porto Alegre, is also a member of PSOL.
The movement's rapid learning was evident at a student assembly held at Universidade Federal do Rio Grande do Sul (UFRGS) on June 27th, where activists from a variety of groups, including the MPL, suggested that the way forward was to develop a coherent political program. In addition, at the demonstration later that night, a mysterious plane or helicopter was flown over the protestors, projecting a number of political messages, including “sem partido.” The crowds responded with what, up to this point, most accurately captures their political sentiment: “não nos representam” (doesn't represent us). In other words, in a matter of days, protestors had transformed a simplistic “sem partido” to “não nos representam,” therefore avoiding co-optation by the right while asserting their commitment to direct political participation.
Beyond the Workers’ Party?
After the PT's initial surprise, it reacted by trying to sympathize with the protestors. Brazilian president Dilma Rousseff stated that the protests were a sign of Brazil's democratic strength and that the government was ready to listen to the streets. In addition, more than 20 municipalities accepted a 20-cent reduction in transit fares. Nevertheless, in preparation for the next demonstrations, Dilma quickly deployed the military to several major cities. Former president Lula perhaps best encapsulated the PT's position, stating that the movement should now channel its demands to the negotiating table. Indeed, Dilma invited Free Fare Movement (MPL) leaders to a meeting in order to find a solution. This resulted in a number of proposals by the government, the two most important being a popular plebiscite for constitutional reform and a public transportation plan with $25-billion of new funding. In addition, Tarso Genro (PT), governor of Rio Grande do Sul, announced free transit for students in the state. These are major victories for the movement.
If carried through, these proposals demonstrate the PT's ability to incorporate some of the protestors' grievances into the PT model. However, it is clear that it can't incorporate others. For example, consider one of Dilma's recent statements regarding the protests: "The streets are telling us that the country wants quality public services, more effective measures to combat corruption ... and responsive political representation." Yet, for better or worse, what the protestors want is not "responsive political representation." Indeed, one of the central themes of the movement (maybe even the central theme) is people's deep suspicion of political representation. Direct democracy, at least as much as free transit or quality education, is what this movement stands for. The question now is whether the PT's re-articulation of the movement's demands will placate the streets.
However, this seems unlikely at the moment. Several unions, including the Central Única dos Trabalhadores (CUT), Brazil's most important union federation, called for a general strike to take place on July 11, and their demands included quality public transportation, new investments in health and education, and agrarian reform, hardly traditional union demands. In Porto Alegre, Bloco de Luta organized a popular assembly that brought together over 300 activists. Several bold and innovative proposals were discussed, such as organizing a permanent encampment in the city. One woman demanded that transit companies' books be opened so that, to paraphrase, we can find out how much money is going to workers and how much is being spent by the owners on champagne. Several union members were in attendance, including the leadership of SIMPA, who pledged full support for the movement. In their general membership meeting later that week, the union approved full participation in the upcoming general strike. Lastly, the MST is also pledging full support for the movement, now calling for agrarian and urban reform.
A recent poll of Brazilian voters shows Dilma's approval ratings have dropped from 57 per cent to 30 per cent, while 81 per cent say they support the protests (Guardian). Nevertheless, it is certainly too early to say how far this movement has weakened PT hegemony in the country. It has, however, exposed the model's weaknesses and limits. More than this, by emphasizing the centrality of direct political participation and active struggle, the demonstrations are also posing a tentative alternative to the PT model of social transformation. Although Participatory Budgeting, the PT's definitive political initiative, does provide important avenues for democratic learning, so do the streets. As one demonstrator's placard read, "Essa é nossa educação!" (this is our education!).
However, as Le Monde reports, for 70 per cent of the youth on the streets, these protests were their first (Bava). Hence, as Emir Sader notes, this is a young movement that lacks clear goals for the future. As such, it can also create a new space for the right, as we have seen in the "Arab Spring" and in the case of Spain (something PT supporters like to point out). This should not deter us from fully supporting the movement, though. As Rosa Luxemburg put it, "the errors committed by a truly revolutionary movement are infinitely more fruitful than the infallibility of the cleverest Central Committee." Regardless of where the movement goes from here, it has already shown the world that it's capable of big victories.
Post Script: Historic General Strike
The first since 1991 (and the fourth in Brazil's history), the general strike that took place on July 11, 2013 successfully brought much of the country to a standstill. The strike came amidst not only widespread popular protests but also an upswing of strike activity that began in 2008, culminating in a yearly average of 560 strikes by 2012, a record since 1998 (Le Monde). The strike also took place a day after the national chamber of deputies rejected Dilma Rousseff's proposal of a popular plebiscite for political reforms, opting instead to form a "working table" to discuss the issue in the future. Unlike previous general strikes, this one brought together workers and social movements. Diverse actions were witnessed throughout the country, including road blockades, building occupations, demonstrations and marches.
Porto Alegre was one of the cities most affected by the strike. The public transportation system was almost completely paralyzed and practically all businesses were closed, giving the city the feel of a national holiday. The strike actually began in Porto Alegre on the night of July 10th, as a number of activist groups occupied the municipal chamber of aldermen demanding "free transit now" and the opening of transit companies' books. The next day, hundreds of workers gathered at several spots throughout the city and marched toward the city center. In the early afternoon, Bloco de Luta asked the unions to continue their march all the way to the chamber, where the occupation was ongoing. This revealed a certain disorganization in the movement, as many activists were left wondering where exactly to meet and march toward.
Once under way, the march split downtown, with about 3,000 people continuing to the chamber and 2,000 remaining near the prefecture. Surprisingly, these numbers were somewhat lower than in previous marches and demonstrations. Although organized workers were much more visible than in previous days, it seems most decided to stay home rather than go out to the streets. Also more visible were party flags, demonstrating that the anti-party sentiment of the first wave of demonstrations had considerably eroded. Now it's July 12, 2013, and things are back to "normal" ... at least for now. •
Manuel Larrabure is a Ph.D. candidate in the Political Science department at York University in Toronto, Canada. His research is on Latin America's new cooperative movement and 21st-century socialism. He is currently in Brazil conducting fieldwork. | <urn:uuid:8e3381d2-29a3-466f-a1f5-bd6108bba6de> | CC-MAIN-2015-35 | http://www.socialistproject.ca/bullet/853.php | s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440645176794.50/warc/CC-MAIN-20150827031256-00340-ip-10-171-96-226.ec2.internal.warc.gz | en | 0.970694 | 3,944 | 2.515625 | 3 |
Battle of Colachel
- Part of: the Travancore-Dutch War
- Image: Eustachius De Lannoy's surrender at the Battle of Colachel
- Belligerents: Kingdom of Travancore vs. the Dutch East India Company
- Commanders and leaders: Eustachius De Lannoy (Dutch East India Company)
- Strength: the Travancore Army, including the Nair Brigade; an unknown number of Dutch East India Company troops, equipped with artillery
- Casualties and losses: light (Travancore); heavy (Dutch), with 24 officers, including Eustachius De Lannoy, captured
The Battle of Colachel (or Battle of Kulachal) was fought on 10 August 1741 [O.S. 31 July 1741] between the forces of the Indian kingdom of Travancore (composed primarily of Nairs) and the Dutch East India Company, during the Travancore-Dutch War. The Dutch never recovered from the defeat and no longer posed a significant colonial threat to India, assisting the British East India Company's eventual rise to dominance on the Indian subcontinent.
Almost all the pepper that the Dutch imported into their country came from the kingdom of Kayamkulam. When Marthanda Varma became king of the small kingdom of Venad, he started a policy of assimilating neighboring kingdoms into the new kingdom of Travancore. In a series of battles, Marthanda Varma annexed the kingdoms of Attingal and Quilon (now known as Kollam). On the pretext that the Rajah of Kayamkulam was involved in certain conspiracies against him, Marthanda Varma began a military campaign against Kayamkulam with the aim of incorporating the kingdom into Travancore.
This endangered the Dutch East India Company's interests, since they feared that the British, who had already signed a treaty with Marthanda Varma, would gain the rights to the pepper trade in the Malabar area, thus ending the Dutch monopoly. With this threat to their commercial interests in view, the Dutch governor of Ceylon, Gustaaf Willem van Imhoff, wrote to Marthanda Varma demanding that he end the aggression against Kayamkulam. Marthanda Varma wrote back to Van Imhoff, ordering him not to interfere in matters that did not concern him.
In a subsequent meeting, Imhoff demanded that Marthanda Varma restore the annexed kingdom of Kayamkulam to its former ruling princess, threatening to invade Travancore should he refuse. Marthanda Varma countered that he would overcome any Dutch forces that were sent to his kingdom, going on to say that he was considering an invasion of Europe. Thus, the interview ended in tension and subsequently led to the Travancore-Dutch War. In 1741, the Dutch installed a princess of the Elayadathu Swarupam as the ruler of Kottarakara in defiance of the demands of Marthanda Varma. The Travancore army inflicted a crushing defeat upon the combined Kottarakara-Dutch armies and assimilated Kottarakara into Travancore, forcing the Dutch to retreat to Cochin. Following this, Marthanda Varma captured all of the Dutch forts in the area.
Following the losses that the Dutch and their allies had suffered in the war, a force of Dutch marines from Ceylon under the leadership of a Flemish commander, Captain Eustachius De Lannoy (also spelt D'lennoy), landed with artillery in Kulachal, then a small but important coastal town, to capture the capital of Travancore, Padmanabhapuram. They captured the territory up to Padmanabhapuram and laid siege to the Kalkulam (Padmanabhapuram) fort. Marthanda Varma promptly marched south with his army, and his timely arrival prevented the capture of the Kalkulam fort by the Dutch, who were in turn forced to retreat to defensive positions in Kulachal. On 10 August 1741 the two armies met in battle, and Marthanda Varma's army won a decisive victory over the Dutch, capturing a large number of Dutch soldiers; apart from the rank and file, 24 officers, including Eustachius De Lannoy and his second in command, Donadi, were taken prisoner. A key role was played by the royal guard (Nair Pattalam) of the maharajah.
Impact of the battle
In the words of the noted historian Prof. Sreedhara Menon, "A disaster of the first magnitude for the Dutch, the battle of Colachel shattered for all time their dream of the conquest of Kerala". Although the Dutch participated on the side of Travancore's enemies in subsequent battles, right up to the battle of Ambalapuzha (1756), the battle of Colachel was a death blow to the power of the Dutch East India Company on the Malabar coast. Subsequent peace treaties with Travancore saw the transfer of the remaining Dutch forts, which were incorporated into the Nedumkotta lines.
In addition to the destruction of the Dutch East India Company's designs on the Malabar coast, the capture of the leaders of the expedition, Eustachius De Lannoy and his second in command, Donadi, proved very beneficial to the kingdom of Travancore. When De Lannoy and Donadi were paroled, they took up service with Travancore and modernized the Travancore army (which, till then, had been armed mainly with melee weapons) into an effective fighting force. De Lannoy modernized the existing firearms and introduced better artillery and, more importantly, trained the Travancore army in the European style of military drill and military tactics. He carried out his orders with such sincerity and devotion that he rapidly rose through the ranks, eventually becoming the "Valia Kapitaan" (Commander in Chief) of the Travancore military, and was given the Udayagiri Fort, locally known as the "Dillanai kotta" (De Lannoy's fort), near Padmanabhapuram, to reside in. He was one of the commanders of the Travancore army during the decisive battle of Ambalapuzha, where his erstwhile employers were fighting on behalf of Cochin and her allies. Following Travancore's victory over Cochin and her allies, the Dutch signed a peace treaty with Travancore and later sold their forts, which De Lannoy incorporated into the Northern Lines (the Nedumkotta) that guarded the northern border of Travancore. The Travancore military that De Lannoy was instrumental in modernizing went on to conquer more than half of the modern state of Kerala, and the Nedumkotta forts De Lannoy had designed held up the advance of Tipu Sultan's French-trained army during the Third Anglo-Mysore War in 1791 AD till the British East India Company joined the war in support of Travancore.
A key element of the Raja's army during the battle of Colachel was his personal guard, known as the Travancore Nair Brigade or, locally, as the Nair Pattalam. This unit was later integrated into the Indian Army as the 9th Battalion Madras Regiment and the 16th Battalion Madras Regiment in 1954.
Another direct outcome of the events at Kulachal was the takeover of the black pepper trade by the state of Travancore. This development was to have serious repercussions for the Dutch and the trading world of Kerala at large. In 1753 the Dutch signed the Treaty of Mavelikkara, agreeing not to obstruct the Raja's expansion and, in turn, to sell him arms and ammunition. This marked the beginning of the end of Dutch influence in India. The VOC (Vereenigde Oostindische Compagnie, or the Dutch East India Company) continued to sell Indonesian spices and sugar in Kerala until 1795, at which time the English conquest of the Kingdom of Kochi ended their rule in India.
- The Indian government has built a pillar of victory in Kulachal to commemorate the event.
- The Indian Postal Department released a 5-rupee stamp on April 1, 2004 to commemorate the tercentenary (300th anniversary) of the raising of the 9th Battalion of the Madras Regiment.
- 9th Madras Regiment, Ministry of Defence, http://mod.nic.in
- Koshy, M. O. (1989). The Dutch Power in Kerala, 1729-1758. Mittal Publications. p. 61. ISBN 978-81-7099-136-6.
- Menon, A. Sreedhara (1996). A Survey of Kerala History. Madras: S. Viswanathan Printers and Publishers. p. 287.
- "அனந்த பத்மநாப நாடார்" (Anantha Padmanabha Nadar). Tamil Wikipedia (in Tamil). 2013-09-27. Retrieved 2013-10-20.
- Iyer, S. Krishna (1994). Travancore-Dutch Relations. Nagercoil: CBH Publications. 164 pp. ISBN 81-85381-42-9.
- Menor, Sheela (1995). Military History of Travancore with Special Reference to the Nayar Brigade. Ethiraj College for Women.
Libraries aren’t just musty places to store books with librarians shushing anyone who makes a peep. They’ve become much more than that and the modern library is often home to sleek architecture and the latest technology. These 25 libraries, in no particular order, demonstrate how libraries have become part of the cutting edge of academia, information management, design and Web technology, and all of them can help you get some ideas on how to bring your library into the future.
While traditional libraries still abound, these libraries have opted to create spaces that are modern and user friendly.
- Library of Picture Books in Iwaki City of Fukushima Prefecture: This library challenges the old ideas of what a library space should be. Integrated into the landscape with beautiful views from almost everywhere, this library is bright, airy and free from the stodginess that infects many older institutions. Books are arranged in cubicles with their colorful covers exposed, encouraging children to pick them up and read them. Changing the face of educational institutions from quiet, controlled places to playful and free ones helps bring libraries into the modern era and instills a love of reading in children.
- Det Kongelige Bibliotek: The Danish Royal Library, or the Black Diamond as it’s often called due to the shape of the building, is a modern facility inside and out. Featuring cutting edge design by Danish architects schmidt hammer lassen, it employs marble and glass to create a distinctive form on the outside. The design continues to the inside, with open spaces and playful walkways. Of course, the collections are extensive as well, with loads of online resources, old manuscripts, a large number of photographs, and access to a number of IT resources.
- Bibliotheque Nationale de France: Some have suggested that the French National Library is a bit too modern, creating a sterile space too cold for people and unfriendly to books. While not all would agree, the library does attempt to create a wholly modern approach to library space, focusing on computers more than books, including services from four supercomputers. Of course, it has quite a few flaws as well: the ultra-modern building, commissioned under François Mitterrand, isn't easy to navigate, and none of the services offered by the library are available without a cost. If anything, this building is a lesson in creating modern spaces that are focused not just on design but on function as well.
- Seattle Public Library: This award-winning building designed by Rem Koolhaas is the central home of Seattle's library system. Modern on both the inside and the outside, the library creates an easy-to-navigate and unique space for readers, browsers and students alike. The library doesn't just look modern, however; it's filled with loads of technological features as well. The library employs an RFID system that allows patrons to check out their own materials and leaves library staff free to deal with other matters, as well as working with online resources and creating their own podcast.
- Malmo City Library: This bright, glass-enclosed Swedish library was designed by Henning Larsen. It employs design that is both functional and attractive while embracing many modern features that help the library run more smoothly and efficiently as well. The new FKI Logistex self-service check-in kiosks allow books to be checked in without the assistance of library staff, visitors can use internet and other computer services, and plans have been made to link library data nationwide to make finding and using materials easier and more efficient. Perhaps most notably, the library offers the ability to check out a person for a 45-minute chat in an attempt to promote understanding and break down stereotypes.
- Geisel Library: This library isn’t particularly modern in function, but is notable for its design which resembles a large metal and glass treehouse. The library boasts several stories and is home to five of UCSD’s on campus libraries. It shows that libraries can be innovative and sometimes even notable parts of the architecture of cities, countries and universities.
- Halmstad Library: For a library that blurs the line between the indoors and outdoors, check out this Swedish design. Built to extend over the nearby Nissan River, the building is bright and airy, allowing in plenty of light. An atrium at the center of the building surrounds an existing chestnut tree, bringing a bit of the outdoors into the library’s interior spaces and creating an innovative and soothing library experience.
- National Library of the Czech Republic: While this library is still in the conceptual stages only, it represents one of the most distinctive and unique architectural plans for a library in the world. The current design for the library is an organic green form resembling a hill, a blob, or some say, an octopus. Created to blend in with the surrounding landscape while providing bright and thoroughly modern interior spaces, the library reflects an increasing attitude of playfulness and daring when it comes to design, hopefully reflecting the attitudes within the library as well.
Technology and Innovation
These libraries have found new and creative ways to use technology and design.
- DOK (Delft Public Library): Billed as a “library concept center” rather than a traditional library, this Dutch library takes modern libraries to a new level. Filled with bright colors and sleek modern design, this library makes use of professionally designed graphics, comfy furniture and shelving made from recyclable materials. Patrons have access not only to traditional books but to video games, listening stations, toys for kids to play with, comic books, a piano and even an art collection. On the technology side, the library is wired to deliver a text message to your phone when you enter, welcoming you. Additionally, books and cards use RFID, LCD screens around the building filled with information, stations for podcasting and videocasting and what is planned to be a “genius bar” to give technology help to the public.
- Turku City Library: This modern library building in Finland is full of all the normal resources found in libraries like books, DVDs, CDs, and magazines but with one big difference. While most libraries are organized by the type of material, putting books in one place and DVDs in another, the Turku library is arranged entirely by subject, putting all related materials together in one place. Staff placed in the sections are specialists in each subject, and patrons are able to check out their own books with automated machines.
- Bow Idea Store: This library is yet another that is taking a different approach to what a library is, preferring to call itself an Idea Store rather than a library. The idea is to combine traditional service provided by libraries with access to technology and lifelong learning opportunities. The library wants to not only provide resources, but to educate and improve the lives of those in the community. Patrons are encouraged to hang out in the library, meet friends, have coffee at the cafe and pursue hobbies using the library’s resources.
- Cerritos Library: Called the “Experience Library,” Cerritos was designed to be an open and modern space that takes a different approach to library services. The library is home to more than books and also includes a saltwater aquarium, sculptures by Dale Chihuly and a replica of a T-Rex fossil encouraging exploration and the pursuit of knowledge. Rooms in the library are designed by themes ranging from Old World reading to World traditions. Info Stations are located around the library to help assist patrons in finding what they need, and the local intranet allows users to customize their viewing experience. Additional technology in the library is found in the huge multimedia lab, thousands of laptop stations, wireless headsets and computers for librarians and an RFID tracking system for books.
- Cuyahoga County Public Library: Ranked as the top library by Hennen’s American Public Library Ratings in 2006, this Cleveland, Ohio, library works to keep up to date with the latest technologies. Their website was ranked as the best by Ektron in 2006 and gives patrons the ability to access their accounts, purchase tickets to library events and much more. The library also offers text message delivery of library notices, the first in the nation to offer this service. The library offers access to 85 colleges and universities through its online OhioLink program as well as a host of other Ohio libraries, greatly increasing the number of resources patrons can draw upon. If that weren’t enough, the library also participates in a podcasting program and places videos of speakers and visitors to the library online for all patrons to enjoy.
- Pace University Library: This university library in New York has made it easier than ever to get access to library materials. The library was granted the Library of the Future award for an innovative media network it has implemented. An internal streaming system called MediaPatch allows the library to share various types of media across campuses quickly and easily, allowing patrons at one branch to access the resources from another at the touch of a button. This solves several copyright concerns as the information never leaves the school’s secure servers but still allows distance learners and those in the classroom to quickly and easily access information. The library also participates in a podcasting program designed to cover a variety of subjects.
- Richmond Public Library: Billed as the “library of the future” when it was opened in 1998, the Richmond Public Library’s Ironwood Branch employs a modern design that attempts to bring together technological resources with a comfortable and warm environment. A large computer center, laptop stations and a digital resource center form a large part of the library. There are also numerous listening stations for music, a quiet study room, a large children’s section and a huge Chinese language collection to reflect the area’s large Asian population. The library also uses express check out stations so librarians are free to do other things, and the library boasts a huge online collection of resources.
- Denver Public Library: The Denver Public Library, housed in a whimsical modern facility designed by Michael Graves, has worked to make the Internet a major part of its operations, even having its own MySpace site. The library also has an extensive webpage, a podcasting series, and a huge digital download site. Users of the digital downloads can get audio books, online movies and ebooks for use on their computer or MP3 player. Additional modern conveniences include Denver Library Firefox plug-ins, an iGoogle catalog gadget, and a toolbar for IE.
- San Diego Public Library: This library was one of the first to embrace wireless technology, offering free wifi at all of its locations. The website for the library is extensive with services for live online homework help, a variety of ebooks and audio books, online assistance and more. Sleek modern design at its present location, plans to build an ultra modern facility and self checkout systems help make this a modern facility.
- Cleveland Public Library: The Cleveland Public Library offers patrons a wide range of downloadable materials on its website including audio books, ebooks, music and video. The library is part of a network of libraries in Ohio and offers patrons access to materials not only at the main location but at other locations as well. The library works with a NetNotice plan sending information on the library or reserved materials directly to patrons’ inboxes. Additionally, the library has an iGoogle gadget for its catalog, a Twitter feed, and participates in the Library Elf notification program.
- Carnegie Library of Pittsburgh: This library is one that is making big strides to be different than the traditional library. With online services that provide patrons with online chat with librarians, an RSS feed, a blog, podcasts, online requests, downloadable media and more, the library is making the move into the next century. Of course, their services extend beyond the web with career classes, gaming competitions and self checkout kiosks on site to keep patrons engaged as well. The library has made an effort to reach out to teens with MySpace and Facebook pages, gaming nights, art and anime clubs and a variety of teen centered programs and organizations.
- New York Public Library: The New York Public Library is one of the largest in the nation offering patrons access to millions of books, periodicals, CDs and more. It also offers a large number of digitized collections that include images, prints and photographs. The library worked with Google to create a selection of digital books and offers patrons a large number of online text collections. The library is also highly tech savvy with an active RSS feed as well as podcasts on iTunes U. Patrons can download ebooks, video and audio directly from the website or enjoy video storybooks, video on demand and webcasts as well.
These libraries boast extensive digital collections.
- National Diet Library: Japan's National Diet Library provides a huge online catalog system that makes it easy to locate and request many of the library's materials. Users of the catalog can search the library's entire collection from anywhere in the world, with sites in both English and Japanese. This service allows anyone to request materials from the library. Perhaps more impressive, however, is the library's digital collection of Meiji-era books, numbering around 60,000. Users can search through these and see actual digital images of the materials. Additional online collections include almost 37,000 rare books from the pre-Edo periods of Japan, making research into Japanese history easier for those who cannot physically travel to Japan.
- Bavarian State Library: Located in Munich, this large library was named Germany’s library of the year last year. It’s part of a nationwide program called Libraries-Link which serves as an access portal to all of Germany’s libraries making it easy to find information on any library. Additionally, it has partnered with Google to scan and make public many works that are public domain. The library is home to many rare books, numerous online databases and journals and a fast and nationwide resource search program. The library is working to digitize much of the rarer elements of its huge collection so that those within Germany and around the world can enjoy them from anywhere.
- Library of Congress: The Library of Congress has some of the most impressive online collections of material that you will find anywhere. With materials ranging from historical photographs to sheet music, the library offers high quality digital images of tens of thousands of items from its collection. The library’s American Memory site provides visitors with a visual, audio and historical account of some of the most important events in American history. Visitors to the site can also search through the library’s catalog, request materials, and get detailed information on the goings on of congressional matters. In 2005, the library announced plans to begin putting together a World Digital Library that will put together important text, photographs, rare books and recordings from cultures all over the world.
- The British Library: As one of the largest and most prestigious libraries in the world, the British library has loads of resources to offer researchers and patrons from all over the world. The library has access to its complete catalog online so that anyone can see what materials the library holds. Of course, online resources are much more extensive than this. The sound archive has placed over 4,200 hours of archival sound recordings online for download. The main online collections are housed in the digital library which contains rare items like Leonardo Da Vinci’s notebooks. There are approximately one hundred million items available digitally, including journals, patents, dissertations, reports and more.
- National Library of Australia: The National Library of Australia is Australia’s largest reference library, providing access to millions of items related to Australia and cultures abroad. This library is a world leader in digital preservation techniques and has so far digitized over 105,000 items from its collection including a range of photos, maps, manuscripts, books, sheet music and audio recordings. These materials are accessible to patrons both in Australia and around the world. | <urn:uuid:1b830ff0-874b-4c14-8236-4a87ed3c0f65> | CC-MAIN-2015-35 | http://www.bestcollegesonline.com/blog/2008/07/02/the-25-most-modern-libraries-in-the-world/ | s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440645176794.50/warc/CC-MAIN-20150827031256-00340-ip-10-171-96-226.ec2.internal.warc.gz | en | 0.947786 | 3,282 | 2.640625 | 3 |
NOTE: There are serious potential health risks, including death, which can arise from improperly processed and handled foods. Be sure you know what you are doing and assess the risk carefully for each food you wish to attempt to process!
Everybody loves pickled food products! There are shelves and shelves of pickled fruit, vegetables, meats and fish at your local grocery store. Check it out sometime! Pickling seems to be more ethnic than many other forms of food preservation and many times the only difference is the addition or deletion of common household spices and herbs like garlic, cinnamon, cloves or nutmeg.
Pickling, in my humble opinion, is the most exciting of all the food preservation processes because it opens up a whole new world of wonderfully tart, tangy, sweet, hot and spicy flavors to try. Mmmmmmmm – ya gotta love it!
I love to experiment with relishes. You can make a relish from any kind of fruit or vegetable. You can make any size batch you have a mind to – but keep good notes! You never know when you'll come up with a combination that will knock your socks off! Okay, enough rambling … I do love the pickling process!
So what is pickling anyway?
We’ve all had a jar of pickles, maybe pickled eggs or pickled artichokes, but what makes a pickled product? Pickling is placing food in a brine, similar to a brine you would use before smoking poultry but with the addition of vinegar and maybe sugar and a few spices. It’s a pretty simple process really – so let’s get to it!
Pickling is actually brining or corning. It is a preservation process that causes the food to ferment, reducing the pH level to less than 4.6, which is low enough to inhibit most harmful bacteria. If done properly it is safer than plain canning, and the flavors can be exciting! Let's look at a few different types of pickles.
Pickles and relishes are high-acid products. This acid comes from the large amount of vinegar added or, in brined or fermented pickles, is produced naturally during the fermentation process by lactic acid bacteria. Because they are high-acid foods, they are processed in a boiling water bath canner.
Brined Pickles or Fermented Pickles
These go through a curing process in a brine (salt and water) solution for one or more weeks. Curing changes the color, flavor and texture of the product. If the product is a fermented one, the lactic acid produced during fermentation helps preserve the product. In brined products that are cured but not fermented, acid in the form of vinegar is added later to preserve the food.
Fresh Pack or Quick Process Pickles
These are covered with boiling hot vinegar, spices and seasonings. Sometimes, the product may be brined for several hours and then drained before being covered with the pickling liquid. These pickles are easy to prepare and have a tart flavor. Fresh pack or quick pickles have a better flavor if allowed to stand for several weeks after they are sealed in jars.
Fruit Pickles
These are prepared from whole or sliced fruits and simmered in a spicy, sweet-sour syrup made with vinegar or lemon juice.
Relishes
These are made from chopped fruits and vegetables cooked to desired consistency in a spicy vinegar solution.

The level of acidity in a pickled product is as important to its safety as it is to its taste and texture. Never alter the proportions of vinegar, food or water in a recipe. Use only tested recipes. By doing so, you can help prevent the growth of Clostridium botulinum, the bacteria that produce a highly toxic poison in low-acid foods.
Fruits or Vegetables
For highest quality, plan to pickle the fruits or vegetables within 24 hours after they have been harvested. If the produce cannot be used immediately, refrigerate it, or spread it where it will be well-ventilated and cool. This is particularly important for cucumbers because they deteriorate rapidly, especially at room temperature.
Salt
Pure granulated salt, such as "pickling" or "canning" salt, should be used. It can be purchased from grocery, hardware or farm supply stores. Other salts contain anti-caking materials that may make the brine cloudy. Do not alter salt concentrations in fermented pickles or sauerkraut. Proper fermentation depends on correct proportions of salt and other ingredients.
Vinegar
Use cider or white vinegar of 5-percent acidity (50 grain). This is the range of acidity for most commercially bottled vinegars. Cider vinegar has a good flavor and aroma, but may darken white or light-colored fruits and vegetables. White distilled vinegar is often used for onions, cauliflower and pears, where clearness of color is desired. Do not use homemade vinegar or vinegar of unknown acidity in pickling. Do not dilute the vinegar unless the recipe specifies. If a less sour product is preferred, add sugar rather than dilute the vinegar.
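To see why dilution is risky, here is a minimal arithmetic sketch (a hypothetical Python helper of my own, not from any canning authority) showing how the effective acidity of the pickling liquid falls when standard 5-percent vinegar is cut with water:

# Illustrative only: effective percent acetic acid after mixing
# 5-percent vinegar with water, assuming simple volume dilution.
def effective_acidity(vinegar_cups, water_cups, vinegar_acidity=5.0):
    total = vinegar_cups + water_cups
    return vinegar_acidity * vinegar_cups / total

print(effective_acidity(4, 0))  # 5.0 - full-strength vinegar
print(effective_acidity(4, 4))  # 2.5 - diluted 1:1, only half the acid

Cutting the vinegar with an equal amount of water cuts the acid in half, which is exactly why you add sugar, not water, when you want a less sour pickle.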
Sugar
Use white sugar unless the recipe calls for brown. White sugar gives a product a lighter color, but brown sugar may be preferred for flavor. If you plan to use a sugar substitute, follow recipes developed for these products. Sugar substitutes are not usually recommended in pickling, as heat and/or storage may alter their flavor. Also, sugar helps to plump the pickles and keep them firm.
Spices
Use fresh whole spices for the best quality and flavor in pickles. Powdered spices may cause the product to darken and become cloudy. Pickles will darken less if you tie whole spices loosely in a clean white cloth or cheesecloth bag and then remove the bag from the product before packing the jars. Spices deteriorate and quickly lose their pungency in heat and humidity. Therefore, store any unused spices in an airtight container in a cool place.
Water
When brining pickles, hard water may interfere with the formation of acid and prevent pickles from curing properly. To soften hard water, simply boil it for 15 minutes and let it sit, covered, for 24 hours. Remove any scum that appears. Slowly pour the water from the container so the sediment will not be disturbed. Discard the sediment. The water is now ready for use. Distilled water can also be used in pickle making, but is more expensive.
NOTE: If good-quality ingredients are used and up-to-date methods are followed, chemical firming agents are not needed for crisp pickles! Soaking cucumbers or peppers in ice water for four to five hours prior to pickling is a natural and safer method for making crisp pickles than using chemical firming agents.

Chemical firming agents will not work with quick process pickles. Pickling lime and alum can be used for firming pickles; however, if used improperly they can actually increase the risk of botulism. If you choose to use them you will have to go elsewhere for instructions on their use. I will not use them!
Equipment
Do not use aluminum, copper, brass, galvanized or iron containers or utensils while pickling. Be sure that enameled canning pots are not chipped, exposing the metal pot to the pickle solution! These metals can react with acids or salts and cause undesirable color changes or off flavors in the pickles.
For Pickle/Sauerkraut Fermenting
Pickles and sauerkraut can be fermented in large stoneware crocks, large glass jars or food-grade plastic containers. To determine if a plastic container is food-grade, check the label or contact its manufacturer. Or, line the questionable container with several thicknesses of food-grade plastic bags. Do not use aluminum, copper, brass, galvanized or iron containers for fermenting pickles or sauerkraut. The container needs to be large enough to allow several inches of space between the top of the food and the top of the container. Usually a 1-gallon container is needed for each 5 pounds of fresh vegetables.
After the vegetables are placed in the container and covered with brine, they must be completely submerged in the brine. A heavy plate or glass lid that fits down inside the container can be used. If extra weight is needed, a glass jar(s) filled with water and sealed can be set on top of the plate or lid. The vegetables should be covered by 1 to 2 inches of brine.
Another option for submerging the vegetables in brine is to use Vacuum Sealers. I use the Food Saver Vacuum SealerÒ for this and it works great! Always double seal the ends to ensure they don’t leak.
For Fresh Pack Pickles
Pickling liquids should be heated in a stainless steel, aluminum, glass or unchipped enamelware saucepan. For short-term brining or soaking, use crocks, saucepans or bowls made from stoneware, glass, stainless steel, aluminum or unchipped enamelware.
Household scales will be needed if the recipes specify ingredients by weight. They are necessary in making sauerkraut to ensure correct proportions of salt and shredded cabbage.
The same equipment is needed for processing in a water bath canner. For details on equipment go to the Canning page.
Getting Ready To Process
All canning jars should be washed in soapy water, rinsed well and then kept hot. Jars that will be processed for less than 10 minutes in a boiling water bath canner do need to be sterilized by boiling them for 10 minutes before filling. Jars processed in a boiling water bath canner for 10 minutes or more will be sterilized during processing. Use new two-piece lids and follow the manufacturer’s instructions for treating them.
Carefully place the filled jars onto a rack in the canner containing hot water. The water should be
deep enough to cover the jars by at least 1 inch. Cover the canner and bring water to a boil. Start counting processing time as soon as the water begins to boil. Process for the length of time specified in the recipe. Keep the water boiling. If no time is given, process the pickled product for at least 10 minutes. For more information on Water Bath Canning go to the Canning page.
Now that you know what you need and have a basic idea of how to proceed let’s go to the download section to see what we can do with all this new information!
Site Topical Menu
(talk to others, ask questions and share your experiences)
Stay on top of your DSP recipes and links! Download our FREE Toolbar by clicking the link below!
toolbar powered by Conduit
The Smoke Ring - A linked list of BBQ websites
A complete list of The Smoke Ring members
© DJx2 2007 | <urn:uuid:d5d56a64-a342-4325-8bbc-ab773dfd49b2> | CC-MAIN-2015-35 | http://www.deejayssmokepit.net/Pickling.htm | s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440645281325.84/warc/CC-MAIN-20150827031441-00041-ip-10-171-96-226.ec2.internal.warc.gz | en | 0.918379 | 2,325 | 2.53125 | 3 |
The United States has been the most hospitable country for Jews in the entire two-thousand-year history of the Jewish diaspora. Any suggestion to the contrary is sheer lunacy.
The Torah says we are created "b'tselem elokim" – in the image of God. Invoking a strikingly parallel notion, the Declaration of Independence announces that all men are endowed by their Creator with "unalienable rights." And the U.S. Constitution applies that principle. Under the Constitution, Jews are born with the same unalienable rights that belong to every citizen of any faith, of any race, color, or creed. What's more, religious tests for office are prohibited; there is no official religion; and the free exercise of religion is protected.
In 1790, replying to a letter from the Hebrew Congregation in Newport, Rhode Island, George Washington wrote:
The Citizens of the United States of America have a right to applaud themselves for having given to mankind examples of an enlarged and liberal policy: a policy worthy of imitation. All possess alike liberty of conscience and immunities of citizenship. It is now no more that toleration is spoken of, as if it was by the indulgence of one class of people, that another enjoyed the exercise of their inherent natural rights. For happily the Government of the United States, which gives to bigotry no sanction, to persecution no assistance requires only that they who live under its protection should demean themselves as good citizens, in giving it on all occasions their effectual support.
We have now reached the 350th anniversary of the arrival of the first Jews in New Amsterdam from Brazil. In an essay entitled "The Citizen Stranger" on the op-ed page of the New York Times (Sept. 12, 2004), Jonathan Rosen observed that it is a fiction that we can celebrate 350 years of Jewish American life; in fact, "there has been Jewish American life for only 228 years, because that's how long there's been an America." Although this might sound like a smart-aleck riposte that an 11-year-old would offer at the dinner table, Rosen was correct in an important way. We should in fact measure from 1776, because the creation of this country was not just one of the most important events in the history of mankind. It was incidentally one of the most important events in Jewish history as well.
While I don't share Rosen's ambivalence about Jews in America, he is surely correct that the date of Jewish arrival
doesn't really matter anyway because of the nature of America, which in this regard has something uncannily in common with Judaism, a religion that maintains that all Jews stood at Mount Sinai to receive the Torah; even if they are converts, their souls are retroactively invested with a kind of primary authenticity. America does the same for its citizens, whenever they become citizens. Everyone, naturalized or born here, is the inheritor not only of the rights and freedoms of the place, but its responsibilities too – whether or not one's ancestors were here to perpetrate past injustices or fight for greater equality. In this sense America itself is like Mount Sinai, which is hardly surprising, given the biblical inspiration of its founders.
* * * * *
I am not trying to say that it has been an unwavering straight line from the Declaration, through the Constitution and Washington's letter, to the present. I am not denying the existence of Leo Frank, Father Coughlan, or Crown Heights. I am not suggesting that there has never been anti-semitism here or that all anti-semitism has departed America forever. I am simply pointing out an incontrovertible truth that under our law, under our fundamental law, under the natural law principles of the Declaration, every Jew born or naturalized in America is 100% pure American. As George Washington wrote, all citizens – including Jews – "possess alike liberty of conscience and immunities of citizenship."
Full citizenship was a unique status in Jewish history. A few months ago, Ed Koch was interviewed in Hadassah magazine (March 2004) and compared the status of Jews in America to that of Jews in the Spain during the Golden Age: "If you were to compare this experience to anything, you'd have to go back to the Golden Age in Spain, where Jews were treated respectfully and equally and were permitted to rise to heights unknown in the rest of the world. That is what we have today in the United States." I am quite certain that Mayor Koch meant this as a tribute to America, but I am even more certain that he got it fundamentally wrong. In medieval Spain under Muslim rule, the Jews were treated very well, but that assessment is subject to the old joke – "compared to whom?" Compared to the treatment of the Jews during most of the diaspora, the Golden Age of Spain was indeed an idyllic period. While Jews were permitted to participate in society – Maimonides is a perfect example – Jews were dhimmis under Muslim law. They were respected as "people of the Book," but they were nonetheless second-class subjects. Nothing of the sort has existed in the history of the United States.
This has always been a Christian country in nearly every respect short of establishment. Jews today are about two percent of the population, while Christians are about eighty percent – higher if those who don't identify with a religion are excluded. But as George Washington recognized, the Jews are not merely "tolerat[ed]" by this Christian country; they have been full citizens from the start.
* * * * *
My grandmother was born in a small town in the Ukraine in 1895 and immigrated to the United States as a teenager. One of the stories she told about her childhood in the Ukraine – and told at great length – had to do with her yearning for an education. In her community, so the story went, the school was empty. The local peasants wanted their children to work on the farm, and the Jews were prohibited from attending school. One year, a new teacher came to her town. According to my grandmother, he didn't know much about how the school worked, but he quickly discovered that he had virtually no students in his classroom. One day, he stopped in at my great-grandfather's store and got to talking with the local Jews. The Jews hired him to teach their children in secret. This arrangement lasted until a peasant passed by the store, noticed what was going on, and alerted the authorities. My great-grandfather and others were fined a large amount. ("For what?" my grandmother always asked. "For the crime of wanting to teach their children about their country?") The history of our family is somewhat unclear, but there seems to be some truth to the story that my great-grandfather left the Ukraine and came here because he was unable to pay the fine assessed by the authorities.
Whenever my grandmother told this story, she always let us know that when she came to the United States, she loved this country. The reason she always gave us was that while Jews in her part of the Ukraine were prohibited from attending school, this country gave her children a free education.
The epilogue to my grandmother's story is that my grandmother accompanied my aunt and uncle to Washington for a year when she was in her eighties. During that year, my sister made a phone call to my grandmother and in the course of the conversation asked her how the weather was. Now, in response to the question about the weather, my grandmother launched into the familiar tale about her childhood in the Ukraine when she wasn't allowed to attend school. My sister had heard it many times, of course, and all she could think of was that my grandmother had completely lost her marbles. What did that story have to do with the weather? My grandmother's story went on and on and on, as usual. Eventually, after quite some time, my grandmother reached the point about getting a free education for her children in America, which was usually the end of the story. This time, she said to my sister, "And I always said that if I ever got to Washington, I would kiss the ground that the Capitol is on. But I haven't been feeling very well, and I haven't gotten out, so I don't know what the weather is."
* * * * *
When observant Jews pray in the morning, they recite a series of 15 blessings. Three of those blessings are existential; they thank God for having made us who we are and who we are not. An Orthodox Jew blesses God for having not made him a gentile. He blesses Him for having not made him a slave. An Orthodox man blesses Him for having not made him a woman, and an Orthodox woman blesses Him for having made her according to His will. Conservative Judaism has modified these blessings so that they focus on the positive. A Conservative Jew blesses God for having made him an Israelite (a Jew). He blesses God for having made him a free person. And the third blessing is no longer bifurcated. A Conservative Jew (man or woman) blesses God for having made him in His image (sheasani b'tzalmo).
In Judaism, prayer is formalized in both time and substance. Jews are required to pray three times a day, and within each branch and tradition of Judaism, the liturgy is wholly standardized. It is quite unusual for Jews to choose one from Column A and one from Column B.
But the standardized liturgy may be supplemented by individuals who wish to speak personally to God, and it really is long past time for American Jews to reflect on their own God-given good fortune in being Americans. Yes, it's true that in many Jewish congregations, we recite a prayer for our government on the sabbath. But some common versions of the prayer are quite old-fashioned and might just as well have been wishing God's blessing on a medieval king or sultan. These versions of the prayer don't reflect the relationship between a Jewish citizen in America and the government, which George Washington noted over 200 years ago. They almost sound as if we are subjects in a very tenuous position in the kingdom and are beseeching the ruler to be nice to us, when we really should be praying as citizens for the nation's leaders to do what is right on behalf of this country.
I suggest that American Jews reflect on their fortune and thank God for having given wisdom to our Founders who established this nation; thank God for having blessed this nation with judgment, strength, and a willingness to repent for our sins; and, finally, thank God for having given our parents, grandparents, or more distant ancestors the courage, like Abraham, our forefather, to leave their homelands and start a new life in America. In short, I think it is time for American Jews to say:
"Thank you, God, for having made me an American." | <urn:uuid:5bd5d027-6034-47ca-84c6-e2e4895747b8> | CC-MAIN-2015-35 | http://pillageidiot.blogspot.com/2004/10/jew-in-america.html | s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440645330816.69/warc/CC-MAIN-20150827031530-00102-ip-10-171-96-226.ec2.internal.warc.gz | en | 0.981176 | 2,256 | 2.796875 | 3 |
This bulletin does not offer easy solutions to all the problems of soil and water conservation in semi-arid regions. There is no storehouse of tested methods and techniques which can be taken off the shelf for immediate application. The conditions vary too much - the soil, the climate, the social factors such as land tenure, availability of mechanization and of labour, the place of livestock - the list is endless. This bulletin is therefore a review of techniques which have been tested and found useful somewhere, and which might be suitable for use in other conditions. Some of the methods are at the stage of being promising ideas which need more testing.
Since the bulletin covers such a wide range of geography, people, and climate, it will not meet the requirements of every reader. There will be too much detail in one place, not enough in others. In particular there might be more emphasis on the Mediterranean region, where the winter rain- fall regime gives a situation very different from the semi-arid regions with summer rainfall. We are also conscious that we have not done justice to the work in the Francophone countries of North and West Africa.
We have not attempted to define semi-arid areas, nor have we been concerned whether areas should be classified as arid or semi-arid. The working definition of areas which the bulletin is intended to help is anywhere that rainfall is a problem because of amount, distribution, or unreliability. We have not discussed all the problems of semi-arid areas, and particularly have omitted the questions of salinity and alkalinity, and wind erosion, and mechanization. A subject as wide as this could be arranged in chapters in a number of different ways, so we have tried to set up sign posts where a method or technique is discussed in more than one chapter. Figure 1.1 is a generalized map of arid and semi-arid areas.
The bulletin does not go into detailed design of all the practices because the circumstances of soil and climate make each application different, so we have tried to be more concerned with principles. Neither have we discussed the practicability of the methods reported in terms of manpower requirements, nor in terms of costs and benefits. Whether it makes sense to apply a method will depend mainly on the urgency of the problem; if a method will keep people alive the benefit/cost ratio is not the most important criterion.
The book does not discuss the political issues, such as land tenure and social systems, nor does it discuss political structure of government and the social structure of land use. This is not to imply that these aspects are not important. The consensus of many recent analyses of soil and water conservation programmes is that past programmes have been biased towards top-down projects with insufficient attention to the needs, aspira- tions and abilities of the peasant farmer who, as a result, has not been sufficiently involved. It is perhaps significant that many of the ideas and techniques discussed in this bulletin are either indigenous or have been developed together with the farmers by one-man projects operated by non-government organizations.
The other feature which comes through strongly when studying successful projects, is the importance of helping farmers by training them in the use of simple equipment. Examples are teaching the use of the water-tube level to lay out works on the contour in Burkina Faso, or the use of the line level for similar purposes in Kenya. Another important non-technical contribution can be to help the organization of local com- munities to manage their affairs more effectively. In India there is a centuries old tradition of local groups to manage water affairs. There are the Pani Panchayat for general water management, and the irrigation com- mittees, Pukka Warabundi (formally structured), and Katcha Warabundi (informal). In other countries, political and social change is leading to new organizations of Cooperatives or Peasant Associations, and guiding, assisting, or training these groups may be essential for the implementation of any soil and water conservation programmes.
Agricultural development naturally takes place first on the best land. Whether at the scale of the individual farm or a whole country, the tendency is to use the best land first. When there is a need to increase agricultural production it is usually directed to maximizing production in the areas which have the best potential. But as demand increases for the products of the land - food, fuel, shelter, and clothing - it is necessary to make increasing use of land which is less suitable for agriculture, or land in less favourable climates. People concerned with agricultural planning, or development, or production, must all pay more attention to the semi-arid regions.
Attention has been sharpened by the widespread droughts in Africa in the early nineteen-seventies, and again in the mid-eighties, as shown by the surge of conferences and workshops in the last ten years, and the explosion of aid programmes in semi-arid Africa.
It is futile to expect magic solutions to the problems of semi-arid regions. We cannot alter the unreliability of the rainfall, nor the fact that unpredictable rainfall and the occurrence of droughts are inevitable. Neither can centuries of abuse and mismanagement of the land and the people be corrected overnight. Short-term results to reduce starvation through improved yields is only part of the story; a programme to win the confidence of subsistence farmers should be planned to a timespan of at least five years, and plans to direct the attitudes of governments more towards land use will need much longer.
What this bulletin tries to do is to put ideas and techniques into a large array of labelled pigeonholes, from which technicians can select components to build into a project or a programme.
The place of droughts in relation to food supplies was admirably expressed by Sir Joseph Hutchinson in the preface to the Royal Society Symposium of 1977, when he said:
"These problems (widespread droughts of the early nineteen seventies) were not created by drought. Drought is but the dry extreme of the natural climate variability of semi-arid areas. The problems arise from the pressure of human and domestic animal populations on limited resources, and they have been aggravated by the rapid explosion of populations in recent years....
"It is in the nature of these marginal areas that agriculturally disastrous seasons are within the normal range of climatic variations. Such seasons must be accepted as something to be foreseen and planned for, and not as an "act of God" to be met by international charities. The difficulty of planning for drought conditions and hence of preventing famine has been increased by the great successes of science and technology in the last quarter century in the control of parasitic and nutritional disease.
Improvement in health has led to great increases in human and animal populations, and these have been maintained by fuller exploitation of the environment. Thus the pressure at the margin has increased, and what was a shortage that could be met by encroachment on unused resources becomes a famine because no unused resources remain."
The all too common attitude towards drought is typified by the Australian outback farmer who is reported to have explained his problems to the extension officer because "we have not had a normal rainy season for 25 years".
Hutchinson's comment about increasing pressure at the margins is reinforced by Ormerod (1978) who is concerned about the problems which may inadvertently arise from economic development in dry areas, and argues that "One of the most important factors operating to increase aridity in West Africa is the economic demand which has stimulated the growth of herds and of arable farming which compete for land in the arid range areas."
Ormerod suggests that the conventional view that demand is best satisfied by increasing local production may increase the rate of agricul- tural degradation and goes on to say:
"There is a particular danger in stimulating economic advance in desert areas and particularly in linking nomads to the monetary system and I make the plea that, before barriers to development ... in this remote and fragile part of the world are broken down, more information should be obtained about the dynamics of nomadic grazing and of the possible effects that the extension of their operations might have upon the environment and the climate ........"
He also urges the development of quantitative techniques which could be used to assess the capacity of soils to withstand exploitation, to provide a measure of the limits to which economic development can be pressed without causing ecological degradation.
A third commentary is the recent and thoroughly researched review by Sinclair and Fryxell (1985) who present a powerful argument for the view that the ecological disaster of the Sahel is primarily man-made.
Proof of the damage caused by overgrazing comes from satellite imagery, and examples are shown from Niger in the Frontispiece, and from Namibia in Plate 7.2
A recent change has been the increasing political interest in many countries to promote more agricultural use of areas of low or erratic rainfall. In some cases the main reason has been political change associated with the move from colonial status to independence, and an example is Zimbabwe. In colonial times, agricultural development and agricultural research were primarily directed towards maximizing the production on the highly mechanized large farms on the better soils in the regions with most reliable rainfall. Since independence there has been a reversal of this policy and the emphasis is now placed on development where there is the greatest need rather than the greatest opportunity, i.e. the small-scale subsistence farmer on poor soils in areas of low and unreliable rainfall. Another powerful factor which has led to increasing emphasis on production from the dry areas is the simple necessity to increase total national production. Examples of this are the massive development of grain production in the south-east of USSR, and the increasing attention to production in the low rainfall areas of Kenya.
Yet another reason for opening up areas previously considered too dry for arable farming has been the development of new techniques such as mechanized minimum tillage farming which in New South Wales is leading to the spread of arable farming to the west.
However increasing government interest in dry regions is not universal, and in some countries production is decreasing, because there is apparently no political will to develop the dry regions. A correspondent reports that this is the situation in Morocco, Algeria, and Tunisia. In broad terms, each of these countries was self-sufficient in food grains with average yields of around 1000 kg/ha. Morocco has maintained this average yield, but now has to import 20 percent of its food requirements. In Tunisia the yield has fallen to 800 kg/ha and 40 percent of the food requirements are imported. In Algeria the yield has fallen to 600 kg/ha and food imports are running at 60 percent (FAO 1985).
In some countries a reduction in the population has made it impossible to maintain the conservation works necessary to sustain agricul- ture. This is the case in the Yemen Arab Republic, and also applies to some of the Mediterranean countries on the north African coast.
Figure 1.1 Generalised map of arid and semi-arid regions | <urn:uuid:fe654d2f-3b5f-4a5b-941c-031a112cf4b7> | CC-MAIN-2015-35 | http://www.fao.org/docrep/T0321E/t0321e-07.htm | s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440645339704.86/warc/CC-MAIN-20150827031539-00043-ip-10-171-96-226.ec2.internal.warc.gz | en | 0.957148 | 2,248 | 3.015625 | 3 |
|Rat Target Sites||Mouse Target Sites||TD50 (mg/kg/day)|
|no positive||no positive||liv||no positive||no positive||228|
Key to the Table Above
The Carcinogenic Potency Database (CPDB) is a unique and widely used international resource of the results of 6540 chronic, long-term animal cancer tests on 1547 chemicals. The CPDB provides easy access to the bioassay literature, with qualitative and quantitative analyses of both positive and negative experiments that have been published over the past 50 years in the general literature through 2001 and by the National Cancer Institute/National Toxicology Program through 2004. The CPDB standardizes the diverse literature of cancer bioassays that vary widely in protocol, histopathological examination and nomenclature, and in the published author’s choices of what information to provide in their papers. Results are reported in the CPDB for tests in rats, mice, hamsters, dogs, and nonhuman primates.
For each experiment, information is included on species, strain, and sex of test animal; features of experimental protocol such as route of administration, duration of dosing, dose level(s) in mg/kg body weight/day, and duration of experiment; experimental results are provided on target organ, tumor type, and tumor incidence; carcinogenic potency (TD50) and its statistical significance; shape of the dose-response, author’s opinion as to carcinogenicity, and literature citation.
Only tests with dosing for at least ¼ the standard lifespan of the species and experiment length at least ½ the lifespan are included in the CPDB. Only routes of administration with whole body exposure are included. Doses are standardized, average dose rates in mg/kg/day. A description of methods used in the CPDB to standardize the diverse literature of animal cancer tests is presented for: 1) Criteria for inclusion of experiments 2) Standardization of average daily dose levels and 3) TD50 estimation for a standard lifespan. See Methods for other details.
TD50 provides a standardized quantitative measure that can be used for comparisons and analyses of many issues in carcinogenesis. The range of TD50 values across chemicals that are rodent carcinogens is more than 100 million-fold. More than half the chemicals tested are positive in at least one experiment.
A plot of all results on each experiment in the CPDB for this chemical is presented below. These results are the source information for the Cancer Test Summary table above.
Chemical (Synonym) CAS # Species Sex Strain Route Xpo+Xpt PaperNum 0 Dose 1 Dose 2 Dose 3 Dose Literature Reference or NCI/NTP:Site Path Site Path Notes TD50 DR Pval AuOp LoConf UpConf Cntrl 1 Inc 2 Inc 3 Inc Brkly Code
TETRACHLORVINPHOS 961-11-5 5918 M f b6c eat 80w92 TR33 : 0 904.mg 1.81gm liv MXA u 905.mg \ P<.02 a 528.mg n.s.s. 0/9 19/50 (11/49) liv:hpc,nnd. TBA MXB 7.88gm * P<.8 1.04gm n.s.s. 1/9 25/50 19/49 liv MXB 905.mg \ P<.02 528.mg n.s.s. 0/9 19/50 (11/49) liv:hpa,hpc,nnd. lun MXB 7.79gm * P<.5 2.87gm n.s.s. 0/9 5/50 5/49 lun:a/a,a/c. 5919 M f b6c eat 80w90 TR33 : pool 0 904.mg 1.81gm liv MXA u 1.07gm \ P<.002 a 558.mg 4.20gm 3/49 19/50 (11/49) liv:hpc,nnd. liv nnd u 1.38gm \ P<.0005 a 705.mg 4.79gm 1/49 14/50 (9/49) 5920 M f b6c eat 24m24 1696m 0 2.08gm Parker;faat,5,840-854;1985 liv mix er 10.2gm P<.0005 4.17gm 36.9gm 0/99 6/47 liv hpc er 12.4gm P<.002 4.72gm 54.7gm 0/99 5/47 kid mix er 32.1gm P<.04 7.91gm n.s.s. 0/99 2/47 liv hpa er 65.0gm P<.2 10.6gm n.s.s. 0/99 1/47 5921 M f b6c eat 24m24 1696n 0 2.28mg 8.32mg 41.6mg 208.mg 1.04gm 2.08gm liv mix er 6.33gm * P<.0005 4.04gm 15.4gm 0/99 1/48 0/49 0/50 4/49 7/49 7/50 liv hpc er 8.99gm * P<.0005 4.63gm 25.8gm 0/99 0/48 0/49 0/50 3/49 5/49 4/50 liv hpa er 21.4gm * P<.007 12.2gm 428.gm 0/99 1/48 0/49 0/50 1/49 2/49 3/50 kid tum er no dre P=1. 16.5mg n.s.s. 0/99 0/48 0/49 0/50 0/49 0/49 0/50 5922 M m b6c eat 80w92 TR33 : 0 835.mg 1.67gm liv MXA 228.mg \ P<.0005 c 158.mg 432.mg 0/10 47/50 (42/50) liv:hpc,nnd. liv hpc 466.mg * P<.002 c 348.mg 1.93gm 0/10 36/50 40/50 TBA MXB 295.mg \ P<.007 175.mg 3.98gm 2/10 47/50 (42/50) liv MXB 228.mg \ P<.0005 158.mg 432.mg 0/10 47/50 (42/50) liv:hpa,hpc,nnd. lun MXB 401.gm * P<1. 3.40gm n.s.s. 0/10 4/50 2/50 lun:a/a,a/c. 5923 M m b6c eat 80w90 TR33 : pool 0 835.mg 1.67gm liv MXA 280.mg \ P<.0005 c 176.mg 541.mg 8/50 47/50 (42/50) liv:hpc,nnd. liv hpc 534.mg * P<.0005 c 371.mg 905.mg 5/50 36/50 40/50 liv nnd 1.77gm \ P<.02 a 737.mg n.s.s. 3/50 11/50 (2/50) 5924 M m b6c eat 24m24 1696m 0 1.92gm Parker;faat,5,840-854;1985 liv mix er 1.15gm P<.0005 660.mg 2.42gm 26/99 35/46 liv hpc er 1.53gm P<.0005 855.mg 3.63gm 24/99 31/46 kid mix er 4.42gm P<.0005 2.23gm 11.3gm 1/99 12/46 kid tuc er 5.26gm P<.0005 2.56gm 13.5gm 0/99 10/46 liv hpa er 18.3gm P<.08 5.26gm n.s.s. 2/99 4/46 kid tua er 37.6gm P<.3 7.50gm n.s.s. 1/99 2/46 5925 R f osm eat 19m26 TR33 : 0 153.mg 306.mg pit cra v# 1.58gm * P<.03 - 743.mg n.s.s. 0/10 1/50 8/50 S TBA MXB v no dre P=1. 454.mg n.s.s. 7/10 23/50 25/50 liv MXB v no dre P=1. 2.01gm n.s.s. 0/10 2/50 0/50 liv:hpa,hpc,nnd. 5926 R f osm eat 19m25 TR33 : pool 0 153.mg 306.mg thy cca uv 1.92gm * P<.05 a 775.mg n.s.s. 1/55 2/50 7/50 adr coa uv 2.28gm * P<.4 a 667.mg n.s.s. 2/55 7/50 6/50 5927 R m osm eat 19m26 TR33 : 0 122.mg 245.mg TBA MXB v no dre P=1. - 226.mg n.s.s. 6/10 24/50 18/50 liv MXB v no dre P=1. 1.10gm n.s.s. 0/10 2/50 0/50 liv:hpa,hpc,nnd.
See full CPDB Summary Table on 1547 chemicals. See Full CPDB for all results on 6540 experiments of 1547 chemicals.
A complete list of CPDB chemicals, which is searchable by name or by CAS number, is available here.
For a compendium of CPDB results organized by target organ, which lists all chemicals in each species that induced tumors in each of 35 organs, see Summary Table by Target Organ.
The CPDB is available in several formats that permit printing and downloading into spreadsheets and statistical databases.
A Supplementary Dataset gives details on dosing and survival for each experiment.
Relatively precise estimates of the lower confidence limit on the TD10 (LTD10) are readily calculated from the TD50 and its lower confidence limit, which are reported in the CPDB. For researchers and regulatory agencies interested in LTD10 values, we provide them in an Excel spreadsheet.
PDF versions of our publications of analyses using the CPDB are available, organized by year and by research topic. | <urn:uuid:79edce9b-d39a-4fba-8fdc-7588294829da> | CC-MAIN-2015-35 | http://toxnet.nlm.nih.gov/cpdb/chempages/TETRACHLORVINPHOS.html | s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440645371566.90/warc/CC-MAIN-20150827031611-00220-ip-10-171-96-226.ec2.internal.warc.gz | en | 0.691578 | 2,274 | 2.515625 | 3 |
Authors: Respectively, Extension Dairy Specialist, Superintendent/Agricultural Economist, Agricultural Science Center at Clovis and Tucumcari, and Extension Livestock Specialist, all of New Mexico State University.
Dairy production in New Mexico has increased 11-fold in the last 20 years (Cabrera and Hagevoort, 2006). This tremendous increase creates challenges to cost-effectiveness in heifer production and dry cow maintenance, as well as concerns over environmental impacts. Raising heifers and maintaining dry cows in confined lots is associated with high production costs related to feed, machinery, and fixed expenses as well as higher environmental impacts. These costs may be lowered by using grazing systems (Lopez et al., 2000) that require less nutrient management input. With 340,000 dairy cows in New Mexico (Cabrera and Hagevoort, 2006) culled at a rate of about 27% annually (Cabrera et al., 2006), there is an opportunity in this state to manage more than 90,000 replacement heifers and 30,000 dry cows in a pasture-based grazing system rather than in confinement where manure production and nutrient management inputs are higher.
The use of forage is important in the agricultural economies of the Great Plains (Hossain et al., 2004). Intensive, cool-season grazing systems are increasingly being adopted by dairy farmers to reduce operating costs (Stout, 1995) and lower management needs for manure disposal and nutrient input.
To optimize production and minimize environmental impact, it is necessary to understand the N fluxes in these intensive grazing systems (Stout, 1995). Grazing livestock play an important role in the ecology of forage fields because livestock consume nutrients in forages that subsequently are converted into energy and tissue and exported out from the ecosystem (White et al., 2001). Most nutrients ingested by livestock are returned to the field in feces and urine (Haynes and Williams, 1993). However, only a portion of the excreted nutrients are available for plant uptake because of nutrient losses from volatilization. Over time, forage growth declines when soil nutrients become depleted. The need to reverse the negative balance of nutrients in the fields creates an opportunity to use and recycle nutrients that may be in excess in other parts of the dairy production system.
The main goal of this study was to develop and provide New Mexico dairy producers, consultants, and officials with a user-friendly application (Grazing-N) for estimating N balance on intensively managed grazing systems. The purpose of this application is to help individual dairies grazing dairy heifers and dry cows develop and comply with their (Comprehensive) Nutrient Management Plan (CNMP) required by regulatory agencies such as the New Mexico Environment Department (NMED), the U.S. Environmental Protection Agency (EPA), and the Natural Resources Conservation Service (NRCS).
Materials and Methods
Net balance of N
Net balance of N (Nnet) in a grazing system is the difference between the amount of N deposited on the ground in feces plus urine that is available for uptake in forage production (Nremain), and the N consumed by the animals through forage ingestion (Nintake). This is represented in Equation 1.
Nnet = Nremain - Nintake (lbs/day/animal)
The amount of N incorporated into the forage (Nremain) is calculated as the N excreted on the field in feces and urine (Nexc), less the amount of N lost or volatilized as ammonia before being taken up by the plants (Nvol). This is represented in Equation 2. The amount of N volatilized after excretion (Nvol) varies according to weather and soil conditions. Volatilization can vary from 20 to 80% of the excreted N. The most widely used estimate is 50 percent volatilization (Van Horn et al., 1998), and this is the default value in the Grazing-N model. This parameter should be adjusted by the user of the model to be site-specific.
Nremain = Nexc Nvol (lbs/day/animal)
Table 1. Variables used in the Grazing-N in alphabetical order.
|ADG||Average daily weight gain||lbs/day|
|An||Number of animals||head|
|BW||Live body weight||lbs|
|CCcalc||Calculated maximum number of animals to graze||head/acre|
|CCentered||Intended number of animals to graze a field||head/acre|
|CP||Crude protein content (dry matter basis)||%|
|DMI||Dry matter intake||lbs/day/animal|
|Fd||Size of forage field||acre|
|Gz||Days of grazing||day|
|i||Group of animals grazing||type|
|n||Number of animal groups grazing||number|
|Nexc||Nitrogen excreted on the field in feces and urine||lbs/day/animal|
|Nintake||Nitrogen ingested forage consumption||lbs/day/animal|
|Nnet||Nitrogen net balance||lbs/day/animal|
|Nitrogen excreted available for plant uptake||lbs/day/animal|
|Nremoval||Nitrogen intake of a group of animals||lbs/day/animal|
|Nvol||Nitrogen lost before taken up by plants||lbs/day/animal|
|Total Nremoval||Nitrogen intake of all groups of animals||lbs/acre/ grazing period|
|Yforage||Forage field productivity of consumable forage||lbs/acre|
The N excreted (Nexc) is calculated as a function of the body weight of the animals, following standard values compiled by the Natural Resource and Conservation Service in the Agricultural Waste Management Field Handbook (USDA, 1992). This handbook indicates heifers excrete 0.31 lbs of N and dry cows 0.36 lbs of N for every 1000 lbs of body weight. The Grazing-N model allows 11 user-selected body weight ranges for heifers, between 330 and 1430 lbs. For dry cows, the average body weight of 1600 lbs is the model default, but the user has the option to change this to a more appropriate value.
Nitrogen intake (Nintake) is calculated by converting the crude protein (CP) content of the consumed forage to N amounts, based on a 6.25 to 1, CP to N conversion. Crude protein intake is calculated by multiplying the forage dry matter intake (DMI) by the forages CP content. This is represented in Equation 3. According to NRC (2001), intensively managed cool-season forages contain 26.5% CP (dry matter basis; SD=5.6). This value is the default value in the Grazing-N model. The user should customize this value with site-specific data from analysis of forage field samples or other records.
Nintake = DMI × %CP / 6.25 (lbs/day/animal)
The DMI is calculated differently for dry cows and for heifers. Dry cows consume between 1.8 and 2.1% of their body weight daily on a dry matter basis (Van Horn et al., 1998). A DMI of 1.8% of BW is used as the default value for dry cows. This should be verified and, if necessary, changed by the user. For heifers, DMI is based on BW and the average daily gain (ADG), following NRC (2001). By default, the application Grazing-N uses a value of 1.98 lbs ADG, but the user should re-define this parameter according to specific conditions.
As defined in this section, Nnet is a negative value that reflects higher amounts of Nintake than Nremain. This indicates the depletion of N in the soil through grazing and the opportunity to replenish N for increased forage production.
There are limitations to the capacity of the fields to sustain grazing heifers and dry cows. Carrying capacity (CCcalc) is defined as the maximum number of animals that can be grazed on a forage field for a determined period. CCcalc is calculated as the forage field productivity of consumable forage (Yforage) divided by the DMI required for the animals (Equation 4). Productivity of consumable forage (Yforage) should account for forage losses due to trampling, fouling, decomposition, manure deposits, etc.
CCcalc = Yforage / DMI (animals/acre)
Productivity of consumable forage (Yforage) for cool season forages, for example, may be around 0.75 tons dry matter/acre/month, or the equivalent of 50 lbs of dry matter/acre/day (Van Horn et al., 1998), which is used as the default in Grazing-N. The user should custom-tailor this value to site, species, and season.
Grazing management and N removal
Grazing management includes decisions such as the selection of the type and weight of animals, the number of animals, the grazing duration, and the forage species in the field. The user identifies the type and weight of animals by selecting the proper row in the Grazing-N application matrix. The matrix allows selection of heifers between 330 and 1430 lbs, and dry cows with a user-defined weight. The user enters the number of animals (An), the days of grazing (Gz), and the size of the forage field (Fd; acres). The model will validate that the forage fields carrying capacity exceeds the forage requirements for the specified groups of animals. If the CCcalc is higher than the entered carrying capacity (CCentered), the application warns the user. In some circumstances, exceeding CC is allowable for a short period of time when it is done with the intention of consuming accumulated for age growth.
The Nremoval is expressed in lbs/acre for the entire grazing period. To calculate Nremoval, the net balance (Nnet) is multiplied by the number of animals (An) and the days of grazing (Gz), then divided by the number of acres in the forage fields (Fd). This is represented in Equation 5.
Nremoval = (Nnet × An × Gz) / Fd (lbs/acre/grazing period)
The total N removal (Total-Nremoval) is calculated by adding all N removal activities (i) of different group of animals, for different grazing periods, on the same forage field (Equation 6).
Where n is a group of animals grazing on the same forage field.
It is common to provide supplemental feed to grazing heifers and dry cows. The Grazing-N application allows adjustment for supplementation activity. For simplicity, it is assumed that the supplemented N would provide the same amount of N that otherwise would be ingested through grazing. The user needs to know the N content of the supplemented feed (Nsupplement), which will reduce the earlier calculated value for Nintake and re-calculate the N balance in the grazing system. This is represented in Equation 7.
Nintake = Nintake - Nsupplement
Some Applications Using Grazing-N and Discussion
Following are the results of calculations performed using the Grazing-N application.
Nitrogen balances with default input data
Maximum carrying capacities and levels of removal of N (Nnet) in grazing programs lasting 180 days were calculated using the model default values (Table 2) without feed supplementation. Results for all heifer groups and dry cows are shown in Figure 1.
Table 2. Parameters of Grazing-N, their default values and normal ranges used in Grazing-N.
|BW dry cows||lbs||1600||1400-1800|
|DMI dry cows||lbs||28.8||28.8-33.6|
|DM forage yield||tons/acre/month||0.75||0.375-1.125|
Figure 1. Removal of N for different groups of heifers and dry cows at maximum carrying capacity for a period of 180 days.
Sensitivity of N Balances to Custom ParametersSensitivity to DMI
Predicted dry matter intake is used in the model to estimate N intake. However, changes in dry matter intake yielded only marginal changes in the predicted N balances. For heifers, changing ADG from 1.1 to 2.4 lbs/animal/day (default ADG is 1.8 lbs/day) changed N balance by only 0.34% for 330-lb unbred animals and 0.02% for 1430-lb bred animals. For dry cows, the N balance changed by 4.34% when the DMI was increased from 1.8% (default) to 2.1%.
Sensitivity to percentage of crude protein on forage
The protein content of the forage is important because the model uses forage protein content to calculate N uptake by plants and N consumed by the animals. Changing the default content of protein from 26.5% to 20.9% resulted in a decrease in N balance of 24% for unbred heifers, 25% for bred heifers, and 28% for dry cows. Changing the protein to 32% resulted in the same magnitude of change, but in the opposite direction.
Sensitivity to percentage of volatilization
Volatilization of N to the air has a direct impact on N balance. The more N is volatilized, the more negative the N balance, or the higher the amount of N needed to replenish the soil. The proportion of N balance change is 3% for unbred heifers, 4% for bred heifers, and 6% for dry cows for every 10% change in N volatilization.
Sensitivity to forage production
Forage productivity or biomass accumulation determines the carrying capacity of animals per forage field. Increasing forage production increases the capacity of the field to sustain animals and their accompanying N removal. Changes in DM production impact the carrying capacity, which is translated to the same proportion change in the N balance.
Sensitivity to diet supplementation
Diet supplementation will impact N balances inversely. For every supplementation of 10% of the daily N requirement, the N removal will decrease 11.5% for unbred heifers, 12.0% for bred heifers, and 13.0% for dry cows.
A computer application (Grazing-N) has been created and is available for use by dairy farmers, consultants, and officials. The model and its user manual can be accessed at: http://dairy.nmsu.edu under the tools section. Grazing-N is a user-friendly spreadsheet that calculates N removal by intensively managed grazing dairy heifers and grazing dry cows. Grazing N demonstrates that intensive grazing systems may require between 290 and 335 lbs of additional N to replenish N removed by grazing animals in a six-month period. The N balance in a grazing system is impacted by the following conditions, in decreasing order: percentage of CP in forage, percentage of N volatilization after excretion, feed supplementation, dry matter biomass production by the forage, and dry matter intake of animals. It is important to keep in mind, when approaching the maximum carrying capacity of a piece of land, that smaller heifers remove more N than larger ones and dry cows less N than heifers.
Cabrera, V.E., Hagevoort, R. 2006. Importance of the New Mexico dairy industry. Agricultural Science Center at Clovis. College of Agricultural, Consumer and Environmental Sciences. New Mexico State University. Available at: http://dairy.nmsu.edu.
Cabrera, V.E., Kirksey, R., Hagevoort, R. 2006. NM-Manure: A Seasonal Prediction Model of Manure Excretion for Lactating Dairy Cows in New Mexico. Available at: http://dairy.nmsu.edu.
Haynes, R.J., Williams, P.H. 1993. Nutrient cycling and soil fertility in the grazed pasture ecosystem. Advanced Agronomy 49, 119-199.
Hossain, I., Epplin, F., Horn, G., Krenzer, E. 2004. Wheat production and management practices used by Oklahoma grain and livestock producers. Oklahoma Agricultural Experimental Station, B-818. Oklahoma State University, Stillwater.
Lopez, R., Krehbiel, C., Duncan, K., Hanson, E., Thomas, M., Looper, M., Castellanos, E., Donart, G., Barnes, C., Flynn, R. 2000. Influence of grass-legume pastures on forage availability and growth performance of Holstein heifers. Proceedings Western Section America Society of Animal Science, 51-2000.
National Research Council (NRC). 2001. Nutrient requirement of dairy cattle. 7th rev. ed. National Research Council, Washington, DC.: National Academy Press. Available at: http://www.nap.edu/openbook.php?isbn=0309069971.
Stout, W.L. 1995. Evaluating the added nitrogen interaction effect in forage grasses. Communications in Soils Science and Plant Analysis 26(17-18), 2829-2841.
United States Department of Agriculture (USDA). 1992. Agricultural Waste Management Field Handbook, Chapter 4: Agricultural Waste Characteristics. 210-VI-NEH- 651.04. Soil Conservation Service, Washington, DC. Available at: http://directives.sc.egov.usda.gov/OpenNonWebContent.aspx?content=17768.wba
Van Horn, H.H., Newton, G.L., Nordstedt, R.A., French, E.C., Kidder, G.K., Graetz, D.A., Chambliss, C.G., 1998. Dairy manure management: strategies for recycling nutrients to recover fertilizer value and avoid environmental pollution. Florida Coop. Ext. Serv. Circ. 1016. University of Florida, Gainesville, FL. Available at: http://edis.ifas.ufl.edu/DS096.
White, S.L., Sheffield, R.E., Washburn, S.P., King, L.D., Green, J.T. 2001. Spatial and time distribution of dairy cattle excreta in an intensive pasture system. Journal of Environmental Quality 30, 2180-2187.
To find more resources for your business, home, or family, visit the College of Agricultural, Consumer and Environmental Sciences on the World Wide Web at aces.nmsu.edu.
Contents of publications may be freely reproduced for educational purposes. All other rights reserved. For permission to use publications for other purposes, contact email@example.com or the authors listed on the publication.
New Mexico State University is an equal opportunity/affirmative action employer and educator. NMSU and the U.S. Department of Agriculture cooperating.
Printed and electronicaly distributed March 2007, Las Cruces, NM. | <urn:uuid:6d0448e0-330a-4ab8-9b4d-3b37b3a057c6> | CC-MAIN-2015-35 | http://aces.nmsu.edu/pubs/_circulars/CR-611/welcome.html | s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440644064362.36/warc/CC-MAIN-20150827025424-00047-ip-10-171-96-226.ec2.internal.warc.gz | en | 0.86936 | 4,084 | 3.171875 | 3 |
Issue Date: January 14, 2013
The Chemistry Of A Solar Airplane
Sitting in a vast hangar in Payerne, Switzerland, nestled between the Alps and Lake Neuchâtel, is Solar Impulse HB-SIA, its four motors powered only by lithium-ion-polymer batteries charged by solar cells. The airplane’s 63-meter wingspan rivals that of a Boeing 747. But unlike a 747, the upper surfaces of its wings are coated entirely in solar cells. Built with lightweight materials and an innovative design, it weighs about the same as a family car.
Solar Impulse’s Swiss founders and pilots are Bertrand Piccard, 54, a psychiatrist and explorer, and André Borschberg, 60, a former fighter pilot, engineer, and founder of semiconductor technology start-ups. They came up with the notion to build a solar-powered plane in 2003 and since then have convinced corporate sponsors and partners to provide the project with $130 million, including materials and manpower.
In the process of developing and testing the plane, they have inspired many, including the research teams of their chemical company partners, Solvay and Bayer MaterialScience, to think more creatively. Bayer and Solvay have reaped a significant public relations benefit from involvement with the project. But more important, company executives say, is the chance to apply what they have learned in other areas, including projects with car companies.
Solar Impulse isn’t an aircraft that you will be able to fly on anytime soon. “It’s not designed to carry people or even freight but as a message that sustainable energy is a viable option for mankind,” Borschberg says.
Flying at an average speed of 44 mph, it has already completed a series of flights across Europe, including one from Payerne to Morocco, and has even flown overnight.
In May the Solar Impulse team plans to freight the one-seat plane to the West Coast of the U.S. and fly it, in three or four legs, to the East Coast. The team also is building Solar Impulse HB-SIB, a stronger version of the plane, in which “everything is upgraded” so that in 2015 it will be capable of flying around the world, Borschberg tells C&EN.
It was a motivational lecture by Piccard that in 2003 led Solvay to join the project as its founding partner. “This type of project is unique in the history of Solvay. It’s a project with an idea to make a better world,” says Claude Michel, who heads up Solvay’s Solar Impulse team of about 10 staffers. “We recognized the value Piccard has in innovation, his pioneering spirit and respect for people and the planet, and we found we had the same set of values.”
Solvay’s contribution to the construction of the plane includes 11 materials used in 25 different applications and more than 6,000 parts. Among its activities, the firm has provided lightweight plastics to replace metals, techniques for improving lithium ion-polymer batteries, and a broad body of materials research and know-how.
Solvay’s Halar brand fluorine copolymer, for example, is being used to encapsulate the plane’s thin photovoltaic cells. Halar is resistant to ultraviolet radiation, is waterproof, and forms a lightweight film less than 20 µm thick. Before the Solar Impulse project, Solvay used Halar only for coating materials such as metals, but the firm is now looking at using it across a range of applications, Michel says.
Solvay will have invested $16 million in the project, including a cash contribution and the value of its time and parts, by the time Solar Impulse HB-SIB makes its global flight in 2015. “It’s a good investment,” Michel says without hesitation.
The benefits to Solvay, he explains, have been multiple: Participation has driven the development of specialty plastics and chemicals across the company, enhanced the image of the firm as a solutions provider, and proven to be a powerful tool to motivate staff. R&D staffers typically don’t see a real-world outcome from their labors. With Solar Impulse, though, they quickly appreciate their role in preparing Solvay materials for the plane, Michel says.
Solar Impulse has had other influences on Solvay’s scientists, not least that they have adapted to the project’s tough time and performance requirements.
Meanwhile, working on Solar Impulse has led researchers at Bayer to be more creative in their approach to projects, says Martin Kreuter, a senior marketing manager in the firm’s materials science division.
“The removal of the expectation for commercial success has allowed people to work differently and to get into an open innovation mind-set,” Kreuter says. “Working with Piccard and his team is inspirational.” About 30 staffers from a range of Bayer departments have contributed to the Solar Impulse project.
Bayer joined Solar Impulse as a financial sponsor and materials partner in 2010. The German company’s contributions include polyurethane foam for the wingtips, motor casings, and cockpit; polycarbonate film for the cockpit window; and adhesive and coating materials used in the cabin and wings.
Bayer has used carbon nanotubes in combination with epoxy to make the spars—the backbones of the wings—and other structural components lighter and stronger.
For the project, Bayer has drawn on its experience in the automotive sector, where weight and performance are also key parameters, explains Kreuter, whose role at the firm involves partnering with car companies. And the materials and techniques Bayer has developed for Solar Impulse, such as lightweight and rigid insulating foam, could be used in cars.
“There are many things that we are developing with Solar Impulse that you might see in an electric vehicle 20 years from now,” says Kreuter, whose office in Leverkusen, Germany, has one wall covered in pictures of futuristic-looking cars. “Everything we are doing with Solar Impulse has high relevance to our most important sectors including automotive, electronics, and construction.”
In addition to helping reduce the weight of the solar plane, Solvay and Bayer are providing materials that can buffer the extreme temperatures of flight, which without safeguards could range from –40 to 30 °C .
To insulate the cockpit and other temperature-sensitive components of the plane, the chemical companies have codeveloped a strong and lightweight insulating foam based on Bayer’s Baytherm Microcell polyurethane and Solvay’s 365mfc fluorinated blowing agent. Owing to pores that are smaller than those in standard foam, the new product provides rigidity and structural strength but remains lightweight. The foam is designed to ensure that the temperature does not drop below 15 °C for the batteries and below freezing in the cockpit, Michel says.
Still, the conditions pilots experience in the Solar Impulse are extreme enough that they have had to resort to meditation and even self-hypnosis during flights. “It’s a case of knowing ourselves,” Borschberg says.
To try to make the pilots more comfortable, Solvay has provided a nylon 6,6 fiber for their undergarments. The material incorporates a special filler that helps keep the pilots cool in the heat and warm when it gets cold. The nylon recycles infrared heat back to the surface of the skin when it is cold but also prevents sweating during periods of intense heat. “We have had to be very clever, open, and curious,” Michel says.
The Solar Impulse team of about 80 staffers, excluding headcount from partners and sponsors, has engineering expertise from backgrounds as diverse as Formula 1 racing cars and aeronautics, but it had little experience building airplanes. “So we were extremely open and entrepreneurial and flexible in our thinking,” Borschberg says. This also meant that the staffers developed an approach that was unrestrained by protocol. Solar Impulse’s designers and engineers cross-fertilized their ideas with those of materials scientists and chemists from Solvay and Bayer, he adds.
Borschberg has been “extremely impressed” by the way researchers from Solvay and Bayer have engaged in the project, the way they have made resources available, and their culture of supporting the project’s goals. “The motivation of our partners and the public has helped us keep our energy levels high,” he says.
The project has hit pockets of turbulence, however. In the summer of 2012 development of Solar Impulse HB-SIB was set back when the main spar of the wing failed a load test. “We had pushed a little bit too hard to reduce weight. We were just on the other side of the limit,” Borschberg says. The Solar Impulse team has since modified the design, but the glitch set back the attempt to circumnavigate the world by more than a year.
Although someone with Piccard’s background in psychiatry and exploring may be an unusual partner for a chemical company, it is not the first time that the Piccard family and Solvay have worked together.
In 1911 Solvay founded the Conseil de Physique Solvay, a regular gathering of Europe’s finest scientific minds to develop solutions to the scientific problems of the day. A regular attendee was Auguste Piccard, Bertrand’s grandfather, a professor of physics at the Free University of Brussels and a balloonist who became the first man to view the curvature of Earth. Other participants included Marie Curie and Albert Einstein.
The goal of the Conseil de Physique was to advance the scientific thinking of the day. And the Solar Impulse project has already influenced Solvay and Bayer to think differently. Piccard and Borschberg hope to have shared their message about the possibilities for innovation and renewable energy with an even wider audience by the time they circumnavigate the world in 2015.
- Chemical & Engineering News
- ISSN 0009-2347
- Copyright © American Chemical Society | <urn:uuid:ae32d9df-7c50-48d0-8f5c-21734db8bbfb> | CC-MAIN-2015-35 | http://cen.acs.org/articles/91/i2/Chemistry-Solar-Airplane.html | s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440644060413.1/warc/CC-MAIN-20150827025420-00284-ip-10-171-96-226.ec2.internal.warc.gz | en | 0.957168 | 2,151 | 2.71875 | 3 |
Ektachrome is a brand name owned by Kodak for a range of transparency, still, and motion picture films previously available in many formats, including 35 mm and sheet sizes to 11×14 inch size. Ektachrome has a distinctive look that became familiar to many readers of National Geographic, which used it extensively for color photographs for decades in settings where Kodachrome was too slow.
Ektachrome, initially developed in the early 1940s, allowed professionals and amateurs alike to process their own films. It also made color reversal film more practical in larger formats, and the Kodachrome Professional film in sheet sizes was later discontinued.
Whereas the development process used by Kodachrome is technically intricate and beyond the means of amateur photographers and smaller photographic labs, Ektachrome processing is simpler, and small professional labs could afford equipment to develop the film. Many process variants (designated E-1 through E-6) were used to develop it over the years. Modern Ektachrome films are developed using the E-6 process, which can be carried out by small labs or by a keen amateur using a basic film tank and tempering bath to maintain the temperature at 100 °F (38 °C).
Ektachrome has been used occasionally as a motion picture film stock, such as in the 1999 film Three Kings and the 2006 film Inside Man, in which each used cross processing in C-41 color negative chemistry to give a unique appearance.
Several years before Ektachrome's discontinuation, some of Kodak's consumer E-6 films were rebranded as Elite Chrome. In late 2009, Kodak announced the discontinuation of Ektachrome 64T (EPY) and Ektachrome 100 Plus (EPP) films, citing declining sales. On February 4, 2011, Kodak announced the discontinuance of Ektachrome 200 on its website. On March 1, 2012 Kodak announced the discontinuance of three color Ektachrome films. In December 2012 Kodak announced its discontinuance of Ektachrome 100D color reversal movie film in certain formats. By late 2013, all Ektachrome products were discontinued.
Although Kodachrome is often considered a superior film due to its archival qualities and color palette, advances in dye and coupler technology blurred the boundaries between the differing processes, along with Kodak having abandoned Kodachrome research and development since the mid-1990s. Furthermore, the developing of Kodachrome always required a complex, fickle process requiring an on-site analytical lab and typically required a turnaround of several days to allow for shipping times. By contrast, small professional labs have been able to process Ektachrome on-site since the 1950s, with product safety and effluent discharge having been drastically improved since the 1970s, when Kodak reformulated their entire color chemistry lineup. It is even possible for amateur labs to process Ektachrome within an hour using a rotary tube processor (made by Jobo, WingLynch or PhotoTherm), sink-line, or even by hand inversion in a small drum.
- Before Process AR-5 there was EA-5 for aero film. This is a hot version of E-4 and similar to ME-4 for Ektachrome motion picture film.
- E-6 was made available to the public in 1975, but only the pro films were available at the time. There were some color stability ("keeping") issues to verify before the amateur films could be released.
- E-7 is the "mix-it-yourself" version of E-6. Functionally it was equivalent, but there were a few differences.
- ES-8 is a special process for one type of Super 8mm movie film. It was introduced in 1975.
There were some other Ektachrome processes for 16 mm motion picture films:
The following processes were used for amateur Ektachrome super 8 mm movie film:
- Ektachrome Movie process introduced in 1971 (movies without movie lights). The process was later designated EM-24
- EM-25 is the mix-it-yourself version of EM-24.
- EM-26 is the updated process for improved Ektachrome super 8 films introduced in 1981.
- EM-27 is the mix-it-yourself version of EM-26.
- Initial Ektachrome process for sheet and roll film (1946–c. 1950s)
- Updated Ektachrome process for roll film and 135 film (1955–1966)
- Updated "professional" Ektachrome process for sheet film and Kodak EP professional rollfilm (c. 1950s to 1976)
- Updated Ektachrome process for roll film and 135 film (1966–1996, see note)
- Research project, only saw minor use in a revised form as the aerial film process AR-5
- Current Ektachrome process used for all major color reversal films and formats, first released in 1977. The conditioner, bleach and stabilizer baths were modified in the mid-1990s to remove the formaldehyde from the stabilizer: This change was indicated by changing the names of the conditioner step to pre-bleach step, and the stabilizer step to the final rinse step
- Used for push processing of Kodak Ektachrome films in general, and particularly for Kodak Ektachrome EPH ISO 1600 film, which has a speed of ISO 400 in normal E6, but is exposed at EI 1600 and push processed two stops in the first developer bath (10:00 @100.0F) to achieve the ISO 1600 speed rating. (It is natural for a faster film to require a longer first development time. This is sacrificed in the case of most color processing for consistency in processing, especially in machine processing.)
Other film manufacturers use their own designations for nearly identical processes. They include Fujifilm's process CR-55 (E-4) and CR-56 (cross-licensed with Kodak's process E-6; but with slight variations in the first developer); and the now-discontinued Agfachrome and Konica's CRK-2 (E-6 equivalent).
High Speed Ektachrome, announced in 1959 provided an ASA 160 color film, which was much faster than Kodachrome. In 1968, Kodak started offering push processing of this film, allowing it to be used at ASA 400.
The E-4 process was generally stopped after 1977, although continued in use for Kodak PCF (Photomicrography Color Film) until the 1980s, and for Kodak IE (Color Infra-red film) until 1996. This was due to a legal commitment by Kodak to provide the process for 30 years.
The Washington (W) Processing Lab operated between 1967 and July 1999. The lab facility was located in Montgomery County at the address of 1 Choke Cherry Road, Rockville, Maryland.
The Palo Alto (P) California Processing Lab was located at 925 Page Mill Road, Palo Alto, California.
The Rochester (R) New York Processing Lab is located at Kodak Park in Rochester, New York.
There were also Kodak processing laboratories in other locations, including Hollywood (California), Atlanta (Georgia), and Hemel Hempstead (England).
- "What type of film is this? - Photo.net Film and Processing Forum". Photo.net. Retrieved 2015-05-14.
- [dead link]
- Calhoun, John (April 2006). "The ASC -- American Cinematographer: Cop vs. Robber". American Cinematographer. American Society of Cinematographers. Retrieved 29 June 2013.
- [dead link]
- "KODAK EKTACHROME 100D Color Reversal : Film 5285 / 7285 Discontinued" (PDF). Motion.kodak.com. Retrieved 2015-05-14.
- The New York Times:"News Along Camera Row", February 9, 1947.
- "Early Kodak Ektachrome". Photomemorabilia.co.uk. 2015-04-25. Retrieved 2015-05-14.
- The New York Times: "A Faster Color Film", January 2, 1955.
- Popular Photography: "Tools and techniques: 35mm & Bantam Ektachrome", March 1955.
- The New York Times: "Ektachrome in 120-620 Announced by Kodak ", July 3, 1955.
- The New York Times: "One Solution Processing is Theme of New Volume". July 3, 1966.
- "KODAK EKTACHROME P1600 : Technical Data" (PDF). Kodak.com. Retrieved 2015-05-14.
- The New York Times: "Color Film Rated at 160 Announced by Kodak", March 29, 1959.
- The New York Times: "Photo Trade Show Opens", February 25, 1968.
- "Former Kodak Processing Plant Property : Voluntary Cleanup Program" (PDF). Mde.state.md.us. Retrieved 2015-05-14.
- "KODAK PROCESSING LAB, 925 PAGE MILL RD, PALO ALTO, California (CA) - Company Profile". Start.cortera.com. 2014-06-26. Retrieved 2015-05-14.
Official Kodak information
- Kodak process E6 Ektachrome (color transparency) processing manual Z-119
- Kodak process E6 Q-LAB processing manual Z-6 (more details than processing manual Z119 above)
- Ektachrome type EPH film data sheet
- Kodak sheet film notch codes
Processing of older Ektachrome films
Processes E-2, E-3 and E-4:
- Process C-22 UK and Europe
- Film Rescue USA and Canada
- E-6 Ektachrome Super-8 DIY processing
- Fotostation UK & worldwide | <urn:uuid:a0a76f8a-563c-442b-96d6-eaee10b8ea87> | CC-MAIN-2015-35 | https://en.wikipedia.org/wiki/Ektachrome | s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440644064362.36/warc/CC-MAIN-20150827025424-00041-ip-10-171-96-226.ec2.internal.warc.gz | en | 0.920032 | 2,097 | 3.28125 | 3 |
Within a century of Muhammad's conquest of Mecca, Islamic armies conquered lands from northern Africa, southern Europe, through the Middle East and east up to India. Within a century of that the Caliphate split up into several parts. The eastern segment, under the Abbasid caliphs, became a center of growth, of luxury, and of peace. In 766 the caliph al-Mansur founded his capitol in Baghdad and the caliph Harun al-Rashid, established a library. The stage was set for his successor, Al-Ma'mum.
In the 9 century Al-Ma'mum established Baghdad as the new center of wisdom and learning. He establihed a research institute, the Bayt al-Hikma (House of Wisdom), which would last more than 200 years. Al-Ma'mum was responsible for a large scale translation project of as many ancient works as could be found. Greek manuscripts were obtained through treaties. By the end of the century, the major works of the Greeks had been translated. In addition, they learned the mathematics of the Babylonnians and the Hindus.
What follows is a brief introduction to a few of the more prominent Arab mathematicians, and a sample of their work
Abu l'Hasan al-Uqlidisi
In al-Uqlidisi's book Kita b al-fusul fi-l-hisab al-Hindii (The book of chapters on Hindu Arithmetic), two new contributions are significant: (1) an algorithm for multiplication on paper is given, and (2) decimal fractions are used for the first time. Both methods do not resemble modern ones, but the methods are easily understood using modern terminology.
Abu Ja'far Muhammad
ibn Musa Al-Khwarizmi
Born: about 790 in Baghdad (now in Iraq)
Died: about 850
sometimes called the ``Father of Algebra''.
Al-Khwarizmi most important work Hisab al-jabr w'al-muqabala written in 830 gives us the word algebra . This treatise classifies the solution of quadratic equations and gives geometric methods for completing the square. No symbols are used and no negative or zero coefficients were allowed.
Al-Khwarizmi also wrote on Hindu-Arabic numerals. The Arabic text is lost, but a Latin translation, Algoritmi de numero Indorum in English Al-Khwarizmi on the Hindu Art of Reckoning gave rise to the word algorithm deriving from his name in the title.
To him we owe the words AlgebraAlgorithm
His book Al-jabr wál Mugabala, on algebra, was translated into Latin and used for generations in Europe.
Other Arab mathematicians
Abu Kamil Shuja ibn Aslam ibn Muhammad ibn Shuja
Born: about 850 in (possibly) Egypt
Died: about 930
Abu Kamil Shuja is sometimes known as al'Hasib and he worked on integer solutions of equations. He also gave the solution of a fourth degree equation and of a quadratic equations with irrational coefficients.
Abu Kamil's work was the basis of Fibonacci's books. He lived later than Al-Khwarizmi; his biggest advance was in the use of irrational coefficients (surds).
Abu'l-Hasan Thabit ibn Qurra
Born: 826 in Harran, Mesopotamia (now Turkey)
Died: 18 Feb 901 in Baghdad, (now in Iraq)
Thabit was a native of Harran and inherited a large family fortune which enabled him to go to Baghdad where he obtained his mathematical training. He returned to Harran but his liberal philosophies led to a religious court appearance when he had to recant his 'heresies'. To escape further persecution he left Harran and was appointed court astronomer in Baghdad.
Thabit generalized Pythagoras's theorem to an arbitrary triangle (as did Pappus. He also considers parabolas, angle trisection and magic squares.
He was regarded as Arabic equivalent of Pappus, the commentator on higher mathematics.
He was also founder of the school that translated works by Euclid, Archimedes, Ptolemy, Eutocius but Diophantus and Pappus were unknown to the Arabs until the 10 century. Without his efforts many more of the ancient books would have been lost.
Perhaps most impressive is his contribution to amicable numbers, that is two numbers who are each the sum of the divisors of the other.
Theorem. , then and are amicable.
Theorem. (Generalization of Pythagorean Theorem.) From the vertex A of , construct B' and C' so that Then
Proof. Apply similarity ideas
Note: If , this is the Pythagorean Theorem.
This is the third generalization of the Pythagorean Theorem.
Mohammad Abu'l-Wafa al'Buzjani
Born: 10 June 940 in Buzjan (now in Iran)
Died: 15 July 998 in Baghdad (now in Iraq)
Abu'l-Wafa translated and wrote commentaries, since lost, on the works of Euclid, Diophantus and Al-Khwarizmi. For example, he translated Arithmetica by Diophantus.
He is best known for the first use of the tangent function and compiling tables of sines and tangents at 15' intervals. This work was done as part of an investigation into the orbit of the Moon.
His trigonometric tables are accurate to 8 decimal places (converted to decimal notation) while Ptolemy's were only accurate to 3 places!!
Abu Bakr al-Karaji
early 11 century)
Arabic disciple of Diophantus - without Diophantine analysis.
Gave numerical solution to equations of the form
(only positive roots were considered).
in such a way that it was extendable to every integer. The proof is interesting in the sense that it uses the two essential steps of mathematical induction. Nevertheless, this is the first known proof.
al-Karkji's mathematics, more that most other Arab mathematics, pointed to the direction of Renaissance. mathematics.
Born: May 1048 in Nishapur, Persia (now Iran)
Died: Dec 1122 in Nishapur, Persia (now Iran)
Omar Khayyam's full name was Abu al-Fath Omar ben Ibrahim al-Khayyam. A literal translation of his name means 'tent maker' and this may have been his fathers trade. Khayyam is best known as a result of Edward Fitzgerald's popular translation in 1859 of nearly 600 short four line poems, the Rubaiyat.
Khayyam was a poet as well as a mathematician. He discovered a geometrical method to solve cubic equations by intersecting a parabola with a circle but, at least in part, these methods had been described by earlier authors such as Abu al-Jud.
Consider the circle and parabola
Substitute and simplify to get
which factored gives
So, the intersection x is the solution of the cubic:
Khayyam was an outstanding mathematician and astronomer. His work on algebra was known throughout Europe in the Middle Ages, and he also contributed to a calendar reform. Khayyam refers in his algebra book to another work of his which is now lost. In that lost work Khayyam discusses Pascal's triangle but the Chinese may have discussed triangle slightly before this date.
The algebra of Khayyam is geometrical, solving linear and quadratic equations by methods appearing in Euclid's Elements.
Khayyam also gave important results on ratios giving a new definition and extending Euclid's work to include the multiplication of ratios. He poses the question of whether a ratio can be regarded as a number but leaves the question unanswered.
Khayyam's fame as a poet has caused some to forget his scientific achievements which were much more substantial. Versions of the forms and verses used in the Rubaiyat existed in Persian literature before Khayyam, and few of its verses can be attributed to him with certainty.
Ghiyath al'Din Jamshid Mas'ud al'Kashi
Born: 1390 in Kashan, Iran
Died: 1450 in Samarkand (now Uzbek)
Al-Kashi worked at Samarkand, having partron Ulugh Beg.
He calculated to 16 decimal places and considered himself the inventor of decimal fractions. In fact, he gives as
which was the best until about 1700.
He wrote The Reckoners' Key which summarizes arithmetic and contains work on algebra and geometry.
In another work, al'Kashi applied the method now known as fixed-point iteration to solve a cubic equation having as a root.
Generally, for an equation of the form
we define the iteration
where is some initial ``guess". If the iterations converge, then it is a solution of the equation. Such a method is called a fixed point iteration. Another more famous fixed point iteration is Newton's Method
He also worked on solutions of systems of equations and developed methods for finding the root of a number - Horner's method today. [Note. This method also appeared in Chinese mathematics in 1303 in the Ssu-yüan-yü-chien (Precious Mirror of the Four Elements)]
First determine that a solution lies between x=19 and x=20. Now apply the transformation
We know there is a root between y=0 and y=1. Thus there are two ways to approximate the solution for y:
If then is even closer to zero, and this term may be taken as zero, giving the approximate solution
so that . We may also factor the equation as
Letting the y in the parentheses be 1, solve for the other to get hence the approximation
so that .
Clearly the first is slightly too large, while the second is slightly too small. Which should be selected? al'Kashi selects the second, . Why?
After Al-Kashi, Arabic mathematics closes as does the whole Muslim world. But scholarship in Europe at this time was on the up-swing. | <urn:uuid:200b4f8f-eb21-4039-9d2d-2f59083be951> | CC-MAIN-2015-35 | http://www.math.tamu.edu/~dallen/history/arab/arab.html | s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440644066586.13/warc/CC-MAIN-20150827025426-00163-ip-10-171-96-226.ec2.internal.warc.gz | en | 0.964027 | 2,172 | 3.5 | 4 |
I believe it is natural for Women to be the Dominant sex, and the fact that we all live in patriarchal (Male dominated) societies, means that we are going against our natural instincts. This is why we live in a world of conflict, chaos, violence, war, injustice, poverty and suffering. If we were to go back to a Matriarchal (female dominated) world, then most of the problems of our world would be solved. We can prove that men are the natural submissive sex by looking at their behaviour.
It is natural for men to be totally obedient to authority; we can see this in the military. In the First World War, it was possible to take any young man of the street give him a few months training and then order him to come out of the trenches to face near certain death, facing enemy machine gun fire. These young men obeyed these orders without question. In the Second World War, the Japanese military ordered whole squadrons of aircraft to commit suicide attacks on enemy shipping. Again these Kamikaze pilots blindly obeyed these orders. This has been true throughout history; men will follow their leaders to the death. When the famous general Hannibal marched an army across the Alps, in winter, to invaded Italy in 218 BC, in the process he lost half of his men. Yet his men still blindly followed him through this horrendous journey and into Italy to fight the Roman legions. There are thousands of similar cases of this throughout history.
It is men’s blind obedience to authority that is the foundation of patriarchy. Every despotic dictator in the world can hold power because he has an army of young men who will obey him without question. These young men will terrorise, beat up, murder, arrest and torture the general population when ordered to do so. So it is these young men’s blind obedience to authority, that gives dictators such great power.
Another reason why I believe it is natural for Women to be the dominant sex, is that the patriarchal society has to put in so much effort to make Women submissive. In the 19th century in the USA, when rich and middle-class Women protested against slavery, they were surprised to learn that Women legally had less rights than black slaves. Up until the 20th century everything a Woman had, was owned either by her father or husband. Women were also barred from all earning too much money and were only allowed to do the poorest paying jobs. A wife had to swear to obey her husband when she got married, and her husband was legally entitled to beat her with a stick if she disobeyed her. Even in the early 20th century it was considered that a husband wasn’t a ‘real man’ unless he couldn’t make his wife obey him through violence.
The same is true of many Islamic countries today, where they have the ‘honour’ system. In this system, a man is ‘dishonoured’ if his wife disobeys him or is even ‘disrespectful’ of him. In this situation the only way he can regain his ‘honour’ is to beat her up or even murder her. There have even been cases where sons have murdered their mothers, because the mother has shown ‘disrespect’. (‘Disrespect’, can be shown, by simply protesting against this unjust system). All over the world patriarchal countries subject their people to powerful propaganda to convince everyone that men are ‘naturally’ the dominant sex.
Feminists have pointed out that history is basically, his-story. This is because the history we are taught at school is very heavily biased in favour of men, while all of women’s achievements are ignored and censored. His-story claims that men have always been the dominant sex from the time of the early Stone-Age until today. Yet there is archaeological evidence that Women once rule the world in the Neolithic and Stone-Ages, but this evidence is kept from the general public. (You can read more about Her-Story in my books, “Why Women Should Rule the World” and “Mermaids, Witches and Amazons”).
All these laws and customs, as well as the censoring of history, would not be needed if it were natural for men to be the dominant sex. Because if this was true, then men would not need to use violence, propaganda and oppressive laws and customs to keep Women submissive. Yet in the opposite case, Women can dominate men without any help of violence and oppressive laws. (I know some Dominant Women do beat and whip submissive men, but as most men are bigger and stronger than Women, a man has to voluntary agree to allow her to do this.) In many cases it is the man who is asking and even demanding that his wife or girlfriend dominate him. While in the case of Professional Dominas he will pay a lot of money to be dominated by a Dominatrix.
Paradoxically it is men’s natural submissiveness to authority that makes patriarchy possible. Unfortunately, in a patriarchal society instead of men giving their total obedience to Women, they will blindly obey alpha males, who will abuse their authority over other men. Leaders of patriarchal countries will irresponsibility start wars with each other, not caring about the destruction and chaos of these wars. Generals will send whole armies into battle and be indifferent to the casualties on each side. While in the 20th and 21st centuries military leaders have ordered the bombing of defenceless cities. In the Second World War large 1,000 bomber raids destroyed cities in Germany and two cities in Japan were destroyed by atomic bombs.
It has been mention by other Femdom writers, men are very much like dogs in the way they can be trained. A well-trained dog is a happy, obedient and loyal dog, who will always wag its tail to greet its owner and will enthusiastically do everything it is trained to do. A badly trained dog, like a pit- bull terrier trained to fight other dogs, for the purpose of ‘entertainment’, is extremely savage vicious and dangerous dog. Who is not happy, as it lives a life of abuse, fear and anger and will attack humans as well as other dogs. Dogs trained like this, have killed many children and badly mauled adults.
Men trained by alpha males can be even more dangerous. In the army men are trained to kill other men, and even outside of the military, men and boys are indoctrinated by what they see on films, TV, comics and nowadays, in video games. If men only see films of ‘heroes’ who are violent men and solve all problems through violence, then they are brainwashed to become the same. In Britain on average two Women per week are murdered by their husbands or male partners. Men commit the vast majority of murders and violent crimes, and it is patriarchy that encourages them to be violent so they can dominate Women through violence.
Patriarchal propaganda likes to claim that men are ‘naturally’ violent but this was proven to be wrong by the USA military. After the Second World War, the American army interviewed large numbers of soldiers who saw action. In these interviews the vast majority claimed they never killed an enemy soldier, they explained they prefer to shoot over the heads of the enemy. Only a small minority admitted to killing the enemy troops. This is not unusual, as military officers all over world have had problems in forcing their men to kill the enemy and this has been happening for centuries. In the past, to make troops kill, generals have had to completely brutalise there troops and teach them to hate the enemy. Since the Second World War the USA military have had to adopt clever behaviourist psychological techniques to brainwash their troops into becoming killing machines.
In our education and mainstream media there is very little to encourage men to be submissive and obedient to Women. Up until the 1950s male domination was considered to be ‘normal’. Since the rise of the Women’s liberation Movement in the 1960s sexual equality has now become the norm, but Female domination is still seen as very unusual. So that even if children are brought up by a mother who dominates their father and encourages her sons to be submissive towards Women and her daughters to be dominant, when they go to school they are encouraged to behave in a completely different way. And if they watch the TV or read books, newspapers or magazines there is little to support Female Domination. Boys at school will be told they need to learn to be macho, or they will be bullied. While girls are taught they need to ‘cool’ to socialise with other girls. This is because few children want to be called a ‘freak’ or be an outsider, and will do anything to be liked by other children.
It is a sad truth that a man can be trained to be a loyal and caring slave to Women or be trained to be a violent and savage brute. Unfortunately in male dominated societies, men are mostly trained to be savage brutes.
I can see this in myself. I was brought up to be a male chauvinist and really believed that men were the ‘natural’ dominant sex, and felt that something was wrong with me because I wasn’t like that myself. Then I met a Dominant Woman and I discovered through my relationship with her, my true nature. This caused me a lot of confusion because my true nature was completely different to the way I was brought up. I believe this is true for most submissive men.
This is why many men in the Femdom scene are very bewildered by their desires and behave like “Jekyll and Hydes”. This is because they have been brainwashed as children to believe that if they are submissive towards Women then there must be, “something wrong with them” and that they are ‘weak’ or are ‘wimps’. Patriarchal psychologists support this by claiming that submissive men have an ‘inferiority complex’. Though patriarchy also says to men that it is all right to be submissive to your male boss, or if you are in the army to be submissive to your sergeant and officers.
Fortunately patriarchal brainwashing is becoming less and less effective in Western countries and many men are starting to be more aware of their natural desires to be submissive towards Women. But this causes them confusion because sometimes they will follow these natural feelings and other times they remember their upbringing and resist these powerful desires. Other men try to rationalise their behaviour by claiming that it is just a perverted sexual desire. So they will only allow themselves to be submissive during the sexual act and resist their submissive desires in ‘normal’ life. The irony is that most men who go to Professional Dominas do not have sex with the Dominatrix, yet still claim it is all about sexual desire and fantasy.
Because patriarchal brainwashing has convinced men that there is, “something wrong with them”, if they are submissive towards Women, many men greatly fear these powerful desires. And for this reason keep well away from Dominant Women, and only have relationships with Women who have been totally brainwashed by patriarchy into being submissive.
So it means that most men have been very badly trained as children by the patriarchal establishment, in much the same as badly train dogs, are trained to fight and kill other dogs. This is why Femdom men do need proper training by Dominant Women. They need to know that their submissive desires towards Women are not a sign of ‘weakness’ or is ‘unnatural’, but are perfectly natural for all men.
Undoing all the bad training men have received as children is not always easy. This is because they have been brainwashed to have very bad habits. For this reason even though a man becomes in touch with his natural desires to serve and worship Women, the patriarchal brainwashing he has been subjected to, gets in the way of acting out these desires in reality.
Unfortunately because Women have also been brainwashed by patriarchy, most Women do not know how to be dominant. So when a man comes to them and asks or even demands that she dominates him, she also doesn’t know how to respond. This is why many Femdom men, ‘top from the bottom’ and tell the Woman how he wants to be dominated. Regrettably his desire to be dominated by a Woman is greatly influenced by the way patriarchy has brainwashed him, so he generally gets it completely wrong.
Another point is that by ‘topping from the bottom’ a man can still stay in control, and this is why many men get stuck in this phase, and don’t move on. It is understandable why many men do fear surrendering total control over their lives to Women, but because of this, he doesn’t experience fully his natural desire to serve Women. When a man realise just how unsatisfactory it is pretending to serve a Woman and not surrendering total control to her, he is ready to move on.
A man who is ready to surrender total control still needs to be trained properly, because he still has habitual responses, from the patriarchal society he lives in, that makes total surrender difficult. A Women in this situation needs to train a man very much like a dog-trainer trains a dog. A good dog trainer demonstrates to the dog she is training, that she is the pack leader or alpha female. She can do this simply by her body language and taking control. Sometimes this can work for men, but other men need a long period of training, to overcome the patriarchal training they received as children.
This is why men are trained to worship Women as Goddesses. The very act of kissing a Woman’s feet, repeatively, demonstrates to the man his lowly position, and learn a habitual response of seeing Women as the superior sex. In dog training a dog-trainer uses repetition of what she wants the dog to do, to get the dog properly trained. The military also uses repetition of giving soldiers orders, to teach them to habitually obey without question. This is also true of Dominant Women when training men. To overcome his patriarchal indoctrination, she needs to instil in him a habit of obeying Women as a habitual response.
It is true that different Dominant Women will want to train their men in different ways. But even so, men need to be trained in the basics. Just being trained to worship women as Goddesses and learn a habitual response to obey Women without question. will make if very easy for any other Dominant Woman to train him in ways she wants.
Also if a well trained man goes off to have a relationship with a Woman who is not confident in her Dominant nature, it means he is less likely to ‘top from the bottom’. He will encourage her Dominant desires without wanting to control her, and give her total power over him.
This is why people need to know and understand how the powerful patriarchal propaganda machine has brainwashed them as children to be the total opposite to their natural desires. Then through this awareness, human beings can come back to the natural instincts and live in a peaceful and loving world ruled by Women. | <urn:uuid:6692a91f-e31d-4e42-beee-77d483e6a82a> | CC-MAIN-2015-35 | http://williamabond.blogspot.com/2008/09/why-men-need-to-be-trained-by-women.html | s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440644063825.9/warc/CC-MAIN-20150827025423-00164-ip-10-171-96-226.ec2.internal.warc.gz | en | 0.977207 | 3,166 | 2.65625 | 3 |
|No. 2||Rome, June 2004|
Coarse Grains Production
Source: FAO. Note: Totals computed from unrounded data.
Contrary to earlier predictions, the global coarse grains crop in 2004 is now forecast at 951 million tonnes, 2.1 percent up from last year and the largest output on record. The increase since the April report is mostly on account of very favourable planting conditions in the United States, the world’s largest producer, where a record crop is now expected. The larger United States crop accounts for the bulk of the increase over last year. In Europe, a significant surge is also expected after increased plantings and better weather conditions.
In Far East Asia, planting of the main 2004 summer coarse grains is virtually complete. The maize area in China is expected to increase marginally from the previous year, mainly in the Northeast region following the Government’s measures to reverse the declining production trend of recent years. Preliminary forecasts point to an increase in maize production by 3.6 percent to 118 million tonnes. India is forecast to have another good crop and in Indonesia, a bumper maize crop of 11.5 million tonnes has already been harvested, the combined result of increased plantings and above-average precipitation. A good maize crop has also been gathered in the Philippines, where attractive prices have led to increased maize area and the adoption of techniques boosting yields.
In the Asian CIS countries the aggregate area planted with coarse grains, mainly barley and maize, is estimated to be similar to last year’s. However, the previous exceptionally good yields are not likely to be repeated and aggregate output in the subregion should decrease slightly.
In western Africa, seasonably normal conditions so far prevail in the Sahelian zone where the growing season starts in most countries in May. However, desert locusts remain an extremely serious threat in Morocco, Algeria and Mauritania where control operations continue to be hampered by lack of resources. This could allow swarms to move to other Sahelian countries later in the season. In the coastal countries along the Gulf of Guinea, from Nigeria to Guinea, the rainy season has started, and planting is underway. In Central Africa, the rainy season started in time in Cameroon, allowing land preparation and sowing of the first 2004 maize crop, due for harvest from July.
In the eastern African subregion, planting of the 2004 main season coarse grains is underway or about to start in several countries. Early prospects are uncertain due to a combination of dry spells and excessive rains and flooding in several areas.
In southern Africa, harvesting of the 2004 coarse grain crops is underway. Early production estimates point to an aggregate subregional output of some 15 million tonnes, almost 10 percent below the average of the past five years, reflecting the delayed, erratic and inadequate rainfall pattern during the first half of the season in several countries. In South Africa, the subregion’s largest producer, maize output is estimated at 7.9 million tonnes, about 18 percent down from the previous year’s crop. In Zimbabwe, production is expected to fall slightly from last year’s already low levels. By contrast, in Zambia, where weather conditions have been generally favourable, the 2004 main maize crop is forecast to reach a record 1.4 million tonnes. In Mozambique maize output increased substantially reflecting a recovery of production in southern provinces. In Malawi, however, production is estimated at 1.7 million tonnes, 15 percent below last year’s about normal harvest.
In Central America and the Caribbean, planting of 2004 first season coarse grain crops is about to start, while harvesting of the 2003/04 winter maize crop is still under way in Mexico. The subregion’s maize output in 2004 is tentatively forecast at 23.3 million tonnes, close to the good results of the previous year and above average.
In South America, harvesting of the 2004 coarse grains is underway in the main producing countries. Aggregate output for the subregion is forecast at about 71 million tonnes, lower than last year’s record of 80 million tonnes, but still above average. In Brazil, aggregate maize production is forecast at 42.6 million tonnes, about 12 percent less than the 2003 record crop. This decline is mainly due to diversion of land to soybeans and rice, which offer more attractive prices and trade opportunities, and to the negative impact of dry weather conditions from the beginning of 2004. In Argentina, the latest official forecast points to a decrease in maize output from 15 million tonnes in 2003 to about 12.4 million tonnes in 2004, due to reduced plantings following insufficient rains at sowing. In Peru and Ecuador, dry weather conditions in the first months of 2004 severely affected maize crops.
In North America, April and May weather conditions were very favourable for the main planting season across the United States’ Corn Belt, allowing crops to be planted early, with prospects of good yields. Reflecting the good start to the season, maize output is now forecast at almost 265 million tonnes, 3 percent up from last year and almost 9 percent above the average of the past five years. Coarse grain planting in Canada proceeded well in late April and early May and some good precipitation improved conditions in previously dry parts of Alberta. This year’s output is expected to remain close to the previous year’s above-average level, with improved yields expected to largely offset a shift of land into non-cereal crops.
In Europe, prospects for the coarse grain crop in the EU-25 are favourable. The planted area has increased and generally favourable weather conditions are pointing to above-average yields. The aggregate output of the 25 countries is forecast to rise by 12 percent from the previous year to 140 million tonnes. In the Balkan countries, prospects for the coarse grains are also better than a year ago reflecting an improvement in moisture availability. However, some recent dry weather in eastern and southern Romania could begin to affect yield potential, if it persists. In the European CIS, the area planted with winter coarse grains is estimated to be up from last year and similar to the 2002 bumper harvest. The bulk of the coarse grains are planted in the spring (April/May); assuming normal weather, the harvest should recover from last year’s sharply reduced level.
In Australia, planting of the main 2004 coarse grain crops is still underway. The outcome is still very uncertain as seasonal planting rains eased off in late April and early May, especially in eastern parts, and many farmers are awaiting the arrival of more precipitation to finalize planting decisions. Planting could continue into June if more rainfall arrives in time.
At 105 million tonnes, FAO’s first forecast for global trade in coarse grains in 2004/05 (July/June) points to a significant decline from 2003/04, mostly reflecting a drop in imports by developed countries. However, this early assessment depends heavily on currently tentative production forecasts for 2004: in several countries this year’s crops have only recently been planted or have not yet been sown.
Coarse grains imports by developed countries in 2004/05 are forecast at 33 million tonnes, down 5 million tonnes from 2003/04, mostly in Europe. Given the impact of EU enlargement, which is estimated to account for at least 1.5 million tonnes of the overall decline, a strong recovery in coarse grains production in Europe, including the EU, could result in a further 4 million tonnes reduction in imports by the region as a whole. A different picture emerges for developing countries, where total imports could increase slightly, to around 72 million tonnes. Coarse grains purchases by most countries in Asia are forecast to remain close to the estimated levels in 2003/04 or even increase, driven by expected recovery in demand among countries affected by animal diseases in 2003/04. In Indonesia, however, the anticipated rise in maize production may lead to a sharp fall in imports, while exports could increase. In Africa, larger barley imports by Algeria would account for most of the anticipated small increase in imports. Elsewhere, 2004/05 imports are likely to remain mostly unchanged from the previous season.
On the export side, supplies in the United States, the world’s largest exporter, are likely to be larger than in 2003/04, given more favourable production prospects. With a strong recovery also expected in the EU-15 as well as the 10 new accession countries, exportable supplies from EU-25 to third parties are likely to increase significantly compared to 2003/04. A repeat of another good year in Canada and Australia will keep export supplies from these two countries at 2003/04 levels, but in Argentina, dry weather and lower plantings are expected to result in lower production and smaller exports. Among other exporters, a strong rebound in barley and maize production in Ukraine could also drive up exports. However, maize shipments from China are forecast to be cut further in 2004/05, reaching 4 million tonnes as a result of tighter domestic supplies. This compares to 11 million tonnes in 2003/04 and 15 million tonnes in 2002/03. In Brazil, with a reduction in overall maize output, exports are also forecast to decline in 2004/05 although, at 4 million tonnes, they would still compare positively with only a few years ago when the country was still a net maize importer. Lower sorghum production in Sudan is seen to cut exports by over 60 percent. A bumper maize crop in Zambia could result in a surge in exports, whereas, sales from the regions’ largest maize exporter, South Africa, may decline.
World coarse grain utilization in 2004/05 is likely to increase by only 1 percent, to 964.5 million tonnes. While the anticipated expansion is relatively small, at this forecast level, world coarse grains use would still be above the 10-year trend for the second consecutive season. Prospects for continued high coarse grains prices well into the new marketing season coupled with likely improved supplies of feed wheat could restrain the growth in feed use of coarse grains to only 0.3 percent, compared to 3 percent in 2003/04. On the other hand, FAO forecasts continued growth in the industrial use of coarse grains, maize in particular. Recent surges in fuel prices may provide a further boost to the industrial use of maize for ethanol in the United States, building upon the new record set in 2003/04.
World coarse grains stocks for crop years ending in 2005 are put at 124 million tonnes, down 15 million tonnes, or 11 percent, from their revised opening levels. As for wheat stocks, the recent revision of stocks in China (see box on page..) also affected estimates for global coarse grains stocks, which for crops ending in 2004 have been revised down to 138.5 million tonnes, much below the 152 million tonnes reported in April. Based on the current estimates, China would again account for most of the anticipated reduction in 2005 world coarse grains inventories. Total coarse grains production in China is forecast to increase only slightly from the previous year; with fast rising consumption, further reductions in inventories are foreseen.
Aggregate stocks held by the five major exporters by the end of seasons in 2005 are forecast at 42.5 million tonnes, almost unchanged from their opening levels in spite of anticipated reductions in the United States. The decline in the United States is likely to be more than offset by rises in the EU, where a strong recovery in production coupled with the addition of the 10 new countries could result in larger stocks. Nonetheless, ending stocks in 2005 among major exporters, as a group, would still point to relatively tight situation; with the ratio of their total coarse grains stocks to disappearance (the sum of their domestic consumption and exports) dropping to 8.6 percent, slightly below the estimated 9.6 percent in 2003/04, and well below the 16 percent long-term average.
The outbreak of the avian influenza in Asia, combined with rising freight rates, reduced feed grain purchases and applied downward pressure on prices. At the same time, reduced sales from China, strong demand in the United States, and a tight feed market in Europe have had an opposite effect. Coarse grains prices during this time of year are most sensitive to the weather situation and the size and condition of the new crop in the United States. Maize prices moved within a US$124-138 per tonne range since March but began to weaken more consistently only in recent weeks as prospects for new crops began to improve. In May, the export price of US maize (US No.2 Yellow) averaged US$130 per tonne, as much as US$22 per tonne, or 20 percent, above the corresponding month last year. Influenced by favourable planting conditions, weaker soybeans and smaller trade prospects for the next season, the Chicago maize futures fell sharply in May. By the fourth week of the month, September futures stood at around US$118 per tonne, some US$5 per tonne below the values quoted in March. Production in 2004 is currently forecast to increase, and supplies among major exporters have improved, but a recovery in Asia and lower exportable supplies in China and Brazil could keep prices firm into 2004/05. | <urn:uuid:fac784da-df20-48b3-b3e3-ee6ed3b73f1f> | CC-MAIN-2015-35 | http://www.fao.org/docrep/006/J2518e/J2518e07.htm | s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440645199297.56/warc/CC-MAIN-20150827031319-00279-ip-10-171-96-226.ec2.internal.warc.gz | en | 0.946431 | 2,683 | 2.578125 | 3 |
American Cancer Society Releases Guidelines on Nutrition and Physical Activity for Cancer Prevention
Am Fam Physician. 2002 Oct 15;66(8):1555-1562.
The American Cancer Society (ACS) has issued its 2002 update on guidelines for reducing the risk of cancer with healthy food choices and physical activity. The full report appears in CA: A Cancer Journal for Clinicians, March/April 2002.
These guidelines are developed and published every five years by a national panel of experts in cancer research, prevention, epidemiology, public health, and policy. Recognizing that the ability to make healthy choices is often affected by factors within the environment in which people live, work, and play, the panel tried to identify key social and structural factors that influence access to resources for an active lifestyle. This year, the committee adds a recommendation for community action to accompany the four recommendations for individual choices for nutrition and physical activity.
In the United States, evidence suggests that one third of the more than 500,000 cancer deaths that occur each year can be attributed to diet and physical activity habits. The relative strength of current scientific evidence linking major components of diet to common cancer sites is summarized in the accompanying table.
Recommendations for Community Action
According to the ACS, most Americans would like to have a healthier lifestyle, but many encounter social, economic, and cultural barriers that make it difficult to follow diet and activity guidelines. Longer workdays and more homes with multiple wage earners reduce the time available for meal preparation and leisure activities. This results in a shift toward eating more fast food and leading a more sedentary lifestyle.
The ACS suggests that public, private, and community organizations work to create social and physical environments that support the adoption and maintenance of healthful nutrition and physical activity behaviors. Facilitating these changes will require increased access to healthy foods in schools, at work, and in the community. Efforts also are needed to provide safe and accessible environments for physical activity and transportation to and from these areas.
Recommendations for Individual Choices
In the United States, about 35 percent of cancer deaths may be avoidable through dietary modification. Epidemiologic studies have shown that populations with diets high in fruits and vegetables and low in animal fat, meat, or calories have a reduced risk of some of the most common types of cancer. The panel focuses on the following recommendations:
Eat a variety of healthy foods, with an emphasis on plant sources. Eat five or more servings of a variety of vegetables and fruits every day in various forms (fresh, frozen, canned, dried, and juiced); limit french fries, snack chips, and other fried vegetable products; choose 100 percent juice if you drink fruit or vegetable juices. Greater consumption of fruits and vegetables has been associated with a lower risk of lung, oral, esophageal, stomach, and colon cancer.
Choose whole grain rice, bread, pasta, and cereals; limit consumption of refined carbohydrates, including pastries, sweetened cereals, soft drinks, and sugars. Whole grains are an important source of vitamins and minerals associated with lower risk of colon cancer, such as folate, vitamin E, and selenium. They are higher in fiber and other nutrients than refined flour products. Beans are particularly rich in nutrients that may protect against cancer, and are a low-fat, high-protein alternative to meat.
Limit consumption of red and processed meats, especially those high in fat. Choose fish, poultry, or beans as an alternative to beef, pork, and lamb; when eating meat, select lean cuts and have smaller portions, using meat as a side dish; prepare meat by baking, broiling, or poaching, rather than frying or charbroiling, to reduce the overall fat content. High-fat diets have been associated with an increase in risk for cancer of the colon, rectum, prostate, and endometrium. Choose lean meats and lower-fat dairy products, and substitute vegetable oils for butter or lard.
(The rightsholder did not grant rights to reproduce the accompanying table in electronic media. For the missing item, see the original print version of this publication.)
Adopt a physically active lifestyle. Adults should engage in at least moderate activity for 30 minutes or more on five or more days a week. Forty-five minutes or more of moderate-to-vigorous activity on five or more days a week may further enhance reductions in the risk of breast and colon cancer.
Children and adolescents should have at least 60 minutes a day of moderate-to-vigorous physical activity for at least five days a week. This should be encouraged because one of the best predictors of adult activity is activity levels during childhood and adolescence, and because of the critical role activity plays in maintaining a healthy weight.
Regular activity helps maintain a healthy body weight by balancing caloric intake with energy expenditure. Moderate-to-vigorous activity is needed to metabolize stored body fat and to modify physiologic functions that affect insulin, estrogen, androgen, prostaglandins, and immune function. Physical activity accelerates the movement of food through the intestine, reducing the length of time that the bowel lining is exposed to mutagens, may decrease the exposure of breast tissue to circulating estrogen, and improves energy metabolism and reduces circulating concentrations of insulin and related growth factors.
Moderate activities require effort equivalent to a brisk walk. Vigorous activities engage large muscle groups and cause an increase in heart rate, breathing depth and frequency, and sweating. Men older than 40 years, women older than 50 years, and people with chronic illnesses should consult their physicians before starting a vigorous exercise program. To reduce risk of musculoskeletal injuries, stretching and warm-up periods should be part of each program.
Maintain a healthy weight throughout life. Current trends indicate that the largest percentage of calories in the American diet comes from foods high in fat, sugar, and refined carbohydrates. Limiting portion sizes, especially of these types of foods, is another important strategy to reduce total caloric intake. Meals in restaurants typically exceed the portion sizes needed to meet recommended daily caloric intake. Balance caloric intake with physical activity and lose weight if currently overweight or obese. Obesity is a major risk factor for cancer, diabetes, stroke, and coronary heart disease.
If you drink alcoholic beverages, limit consumption. Men should limit themselves to two drinks per day and women to one drink per day. A drink is defined as 12 oz of beer, 5 oz of wine, or 1.5 oz of 80-proof distilled spirits. Alcohol consumption is an established cause of cancers of the mouth, pharynx, larynx, esophagus, liver, and breast. The risk increases substantially with intake of more than two drinks per day.
Factors Affecting Risk for the Most Common Cancers
ANTIOXIDANTS
Currently, the best advice is to consume antioxidants through food sources rather than supplements.
The major risk factors for bladder cancer are smoking and exposure to certain industrial chemicals. Limited evidence suggests that drinking more fluids and eating more vegetables may lower the risk of bladder cancer.
BRAIN CANCER
There are no known nutritional risk factors for brain cancer.
BREAST CANCER
Risk is increased by several factors that cannot be easily modified: menarche before 12 years of age, nulliparity or first birth at 30 years or older, late age at menopause, and a family history of breast cancer. Risk can be reduced by limiting the use of hormone replacement therapy, avoiding obesity, staying physically active, and breastfeeding. The best diet- and activity-related advice is to engage in vigorous activity at least four hours a week, avoid or limit alcoholic beverages to no more than one a day, and minimize lifetime weight gain.
COLORECTAL CANCER
Risk of colorectal cancer is increased in those with a family history, with the use of tobacco, and possibly with excessive alcohol consumption. Obesity and diets high in red meat have also been associated with increased risk of colon cancer. Risk may be decreased by using aspirin or other nonsteroidal anti-inflammatory drugs and, possibly, hormone replacement therapy. Diets high in vegetables and fruits have been associated with decreased risk. Increasing evidence suggests that vigorous activity may have an even greater benefit in reducing risk than regular moderate exercise.
ENDOMETRIAL CANCER
To reduce the risk of endometrial cancer, maintain a healthy weight through diet and regular exercise, and eat at least five servings of fruits and vegetables a day.
KIDNEY CANCER
The best way to reduce the chances of kidney cancer is to avoid becoming overweight.
LEUKEMIAS AND LYMPHOMAS
There are no known nutritional factors for decreasing the risk for leukemias or lymphomas.
LUNG CANCER
Currently, the best advice to reduce risk of lung cancer is to avoid exposure to tobacco and to eat at least five servings of fruits and vegetables a day.
ORAL AND ESOPHAGEAL CANCERS
Avoid all forms of tobacco, restrict alcohol consumption, avoid obesity, and eat at least five servings of vegetables and fruits a day.
OVARIAN CANCER
There are no firmly established nutritional risk factors for ovarian cancer, but vegetable and fruit consumption may lower risk.
PANCREATIC CANCER
Avoid tobacco use, maintain a healthy weight, remain physically active, and eat five or more servings of fruits and vegetables a day.
PROSTATE CANCER
To reduce risk, limit intake of animal-based products, especially red meats and high-fat dairy products, and eat five or more servings of fruits and vegetables a day.
STOMACH CANCER
To reduce risk, eat at least five servings of fruits and vegetables a day.
Other Dietary Factors Affecting Cancer Risk
The following points address concerns about diet and physical activity in relation to cancer.
There is currently no evidence that the substances found in bioengineered foods now on the market are harmful or that they would increase or decrease cancer risk because of the added genes.
Men and women should try to get recommended levels of calcium primarily through food sources.
There is no evidence that lowering blood cholesterol levels has an effect on cancer risk.
There is no evidence that caffeine use increases the risk of cancer.
Fluorides do not increase cancer risk.
Folic acid deficiency may increase the risk of colorectal and breast cancer. To reduce this risk, folic acid is best obtained through eating vegetables, fruits, and enriched grain products.
Additives are usually present in very small quantities in food, and no convincing evidence exists that any additive consumed at these levels causes human cancers.
Insufficient evidence exists to support a specific role for garlic in cancer prevention.
Radiation does not remain in the foods after treatment, and there is no evidence that eating irradiated foods increases cancer risk.
Even if lycopene in foods is associated with lower risk for cancer, it does not follow that high doses taken as supplements would be more effective or safe.
Consumption of meats preserved by methods using smoke or salt increases exposure to potentially carcinogenic chemicals and should be minimized. Braising, steaming, poaching, stewing, and microwaving meats minimize the production of these chemicals. Microwaving and steaming may be the best ways to preserve the nutritional content in vegetables.
Consumption of olive oil is not associated with any increased risk of cancer.
At present, no research exists to demonstrate whether organic foods are more effective in reducing cancer risk than are similar foods produced by other farming methods.
There is no evidence that residues of pesticides and herbicides at the low doses found in foods increase the risk of cancer.
There is no evidence that phytochemicals taken as supplements are as beneficial as the vegetables, fruits, beans, and grains from which they are extracted.
No evidence suggests that salt used in cooking or in flavoring foods affects cancer risk.
There is a narrow margin between safe and toxic doses of selenium. The maximum dose in a supplement should not exceed 200 mcg per day. Seafood, meats, and grain products are good sources of selenium.
There is no convincing data that soy supplements are beneficial in reducing cancer risk.
Food is the best source of vitamins and minerals, not supplements. If a supplement is taken, the best choice is a balanced multivitamin/mineral supplement containing no more than 100 percent of the daily value of most nutrients, because high doses of some nutrients can have adverse effects.
Tea has not been proven to reduce cancer risk in humans.
The few studies in which vitamin C has been given as a supplement have not shown a reduced risk of cancer.
Recent evidence demonstrates that trans-fats have adverse cardiovascular effects, such as raising blood cholesterol levels, but their relationship to cancer risk has not been determined.
Drinking at least eight cups of liquid a day is usually recommended, and some studies indicate that even more may be beneficial.
A better understanding
09.17.10
Building the world's fastest computer for open research also requires developing the applications that will allow researchers to do their science on the machine. Bill Gropp, the chief applications architect for Blue Waters, tells Access' Barbara Jewett what that work involves and how it feeds into the future of high-performance computing.
Let's start with Blue Waters; tell me about your role in that.
I have several roles in Blue Waters. I am one of the co-PIs so I'm involved in monitoring the project and participating in various ways in what we're doing and what we need to do, with particular focus on the software. I'm also supervising a couple of projects that are looking at pieces that will contribute to that. One project I'm leading is porting libraries that either some of the PRAC-approved applications use or, in a couple of cases, libraries that we expect applications to need. PRAC is the National Science Foundation's Petascale Computing Resource Allocation program.
How long does it take to develop a library?
Well, it depends on how much you need to do. If what you want to do is just port and see what trouble you have, that doesn't really take very long. If you really want to tune the library to make good use of the hardware, then that can take months.
We also are looking at I/O libraries. At large scale, approaches that work on a thousand nodes, such as having every processor write out its own file, are not a good way to use the I/O system when you have 10,000 nodes and hundreds of thousands of cores. It's also not a good way to do your analysis, with hundreds of thousands of files: some applications have each process write out one file per variable, and now we're talking about hundreds of billions of files; there are better ways. There's no real reason to do it that way other than in the past most I/O systems were not that capable of doing better, and if you only had a few hundred or a few thousand files you could get away with doing your I/O that way. There are I/O libraries that are up to the task of providing the application developers with workable alternatives. That's one component of what we try to do: help the applications developers.
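To make the contrast concrete, here is a minimal C sketch of the alternative (the file name and buffer size are illustrative assumptions, and MPI-IO stands in for whichever I/O library an application adopts): every rank writes its disjoint slice of one shared file through a single collective call that the library is free to aggregate into a few large, well-aligned writes.

#include <mpi.h>
#include <stdlib.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const int n = 1 << 20;                       /* local doubles per rank (illustrative) */
    double *buf = malloc(n * sizeof *buf);
    for (int i = 0; i < n; i++) buf[i] = rank;   /* stand-in for real field data */

    MPI_File fh;
    MPI_File_open(MPI_COMM_WORLD, "field.dat",
                  MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);
    MPI_Offset offset = (MPI_Offset) rank * n * sizeof(double);
    /* One collective write per rank: the library can have a few nodes
       issue large writes on behalf of all the others. */
    MPI_File_write_at_all(fh, offset, buf, n, MPI_DOUBLE, MPI_STATUS_IGNORE);
    MPI_File_close(&fh);

    free(buf);
    MPI_Finalize();
    return 0;
}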
The other project that we're looking at is support for programming models. We like to provide people with alternatives so they can make good use of the system. The most obvious is the hybrid programming model that mixes MPI with threads, either OpenMP or pthreads. And we'll also be providing good quality parallel languages that support what's called the partitioned global address space (PGAS) programming model. One of these is an extension to C called Unified Parallel C, or UPC, which lets you define, for example, an array that is distributed across the parallel machine and easily access any part of that array. If this distributed data structure fits your application, it actually makes it easier to write different types of codes. Also, because it's a language, the compiler can perform optimizations that a library can't. IBM is doing this with their xlupc compiler and they already have some nice results. Of course, we don't expect users to rewrite their applications in a new language, so it will be possible to mix programming models in a single application. For example, you can write a module in UPC and link it with your MPI program. We'd like to see applications consider some performance-critical piece of code that would be suitable for UPC and try it out, linking with their MPI-based application.
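As a minimal illustration of the kind of UPC code this enables (the array size is an assumption, and the sketch presumes compilation for a fixed number of threads), one logically global array is declared once and the compiler handles locality and remote access:

#include <upc.h>
#include <stdio.h>

#define N 1024   /* illustrative; compile with a fixed thread count, e.g. upcc -T 4 */

shared double a[N];   /* one global array, distributed cyclically over all threads */

int main(void) {
    int i;
    /* The affinity expression &a[i] runs each iteration on the thread
       that owns a[i], so every write below is a local write. */
    upc_forall (i = 0; i < N; i++; &a[i])
        a[i] = 2.0 * i;

    upc_barrier;   /* make all writes globally visible */

    if (MYTHREAD == 0)   /* any thread may read remote elements directly */
        printf("a[%d] = %g\n", N - 1, a[N - 1]);
    return 0;
}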
Is that something new and novel? It sounds like something not commonly done.
It's not commonly done. UPC has been around for quite a while; there are some users, there's a book on it, a number of groups have built compilers and so forth, and it's been used on other supercomputers, but it hasn't really become mainstream. And I'm not sure it will become mainstream, but we want to make it easier and more feasible for groups to experiment with it and see if it will help solve some of their problems.
I wouldn't say that this is the first time support for UPC is available, but I would say this is the first time any supercomputing center has made it a goal that UPC would be available and interoperable with other parallel programming models, including MPI. We can use it to help better understand how we can move past the current MPI barrier in a way that meets the needs of applications.
I'd be the first one to say that it is relatively easy to define a new programming model that is exactly what you need for your application. The reason that MPI has been so successful is that the MPI programming model has been what almost everyone has needed to do their science. It is harder than people sometimes think to create a programming model that is applicable to everyone. By making it possible for people to experiment with other approaches without having to start over, we're probably going to give them programming alternatives that emphasize parallelism. We're going to give everyone, not just developers, a programming model better suited to where we go next.
As we go forward with exascale, the codes are going to have to change, aren't they?
Yes, an exascale system is going to have far more concurrency than the petascale systems. We're looking at 300,000 cores at least for a sustained petascale system. At exascale you're more likely to be looking at tens of millions of threads of concurrency, and you're going to have to manage those in some way. And you are reaching scalability problems. If you think about the fact that messages between different parts of the machine will take different amounts of time to arrive because nodes are not the same distance apart, you start ending up with variations in timing that at a thousand nodes are really irrelevant but at a million nodes can become very important. And in the typical MPI model, programs assume that they have control of all cores all the time, and that each phase of the computation and communication will take the same amount of time.
But with exascale you start having a more dynamic response to when things get scheduled, when things get done. You can do that in MPI but it is not as easy, and there is no reason to still have to do that programming. So we need to start looking for models that will help us forge that track without sacrificing some of the things that MPI has given us: the ability to, in fact, express the algorithms, providing information on performance-critical issues like data locality and so forth. So that we can gain more experience on how we put these things together.
I think we will see more hybrid models of programming so that we don't use one model everywhere. Many programmers don't like the hybrid programming model because it is hard to use. But the reason that the hybrid model is hard to use is that the parts have not been designed to fit well together. So the real problem is not that it is a hybrid model, the problem is that it is not a well-designed hybrid model.
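For reference, a minimal sketch of the hybrid style under discussion (the loop and its bounds are illustrative): MPI connects the ranks, OpenMP threads share each rank's work, and the two models agree explicitly about which threads may make MPI calls.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int provided, rank;
    /* MPI_THREAD_FUNNELED: threads exist, but only the main thread
       makes MPI calls -- the simplest contract between the two models. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    double local = 0.0;
    /* OpenMP threads divide this rank's share of the work. */
    #pragma omp parallel for reduction(+:local)
    for (int i = 0; i < 1000000; i++)
        local += 1.0 / (1.0 + i + 1000000.0 * rank);

    double global = 0.0;
    /* Back on the main thread, ranks combine their partial sums. */
    MPI_Reduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0)
        printf("global sum = %f\n", global);

    MPI_Finalize();
    return 0;
}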
An exascale system is likely to be hybrid hardware, partly because an exascale system will have to be maybe two orders of magnitude more power efficient than Blue Waters, and Blue Waters is pretty power efficient. To do that, we are not going to be able to afford general-purpose cores. We might be able to have lower-powered general-purpose cores, but then we have to have even greater concurrency because that's your tradeoff.
So what I think we will see is that the exascale system is likely to have heterogeneous hardware because it will be hardware that is specialized for say, control flow, hardware that is specialized for streams of data, hardware that is specialized for vector operations that are different than what you get with streams; there might be hardware that is specialized for minimizing data motion. And you'll have to program all these things. And all that sounds pretty frightening, but to do it having a uniform programming model that hides everything from the user won't work. So what will work, then, is a programming model that tries to minimize the programmer's pain and makes it as easy as possible for the programmer to work with the different hardware components. I think that is not as bad as some people might think. And we have some beginning experience with this with the work that is going on here and elsewhere on the use of GPUs. I don't expect an exascale system to have GPUs attached to nodes like the current systems, but I wouldn't be surprised if the features of GPUs don't become part of what is inside an exascale processor chip. It won't be an extra chip on the side, it will be within the processor chip. The software and the tools that we're starting to develop will help us understand what we need to know to use that part of an exascale system.
What are some other things that we're doing here at NCSA and the University of Illinois that go along with this?
There's a lot that is going on. There's fundamental work in computer architecture for the hardware that's going on between the computer science and electrical and computer engineering departments. There's work with programming models, there's work with tools to help you understand the performance that you're getting. One of the other problems to date has been that to a large extent a lot of the software work has been an art rather than engineering. In many applications, there is a trial and error approach to improving performance. So one of the things we are doing as part of the Blue Waters project and in these departments is trying to develop a better understanding of performance, better tools for modeling the performance of applications, better tools for applying the transformations needed to improve performance. With any aspect of engineering there is always an art to it, but it needs to be more systematic, more quantitative.
And so with the Blue Waters project we have several groups helping to develop analytic models of their applications' performance, which can help guide us and identify where the biggest gap is between the performance you should be getting and the performance you are getting. There are other efforts that are looking at tools that could be applied across the whole application so if, for example, you want to change a data structure, you can change it everywhere in your application. Such changes are something that computers are good at. And Wen-mei Hwu, another co-PI and GPU expert, is developing tools for using GPUs so you can use analytic models and understand how a code should perform. Those tools also help identify where the greater bottlenecks are, which in turn affects how you may design the code. And such tools will become ever more important as the architecture becomes more complex and more specialized.
Some people say that by the time the Blue Waters operations end in 2016, we'll be ready for exascale, others that we'll be working with petascale for a very long time. Do you want to gaze into your crystal ball and say when we might be moving on?
When operation of Blue Waters reaches its predicted end, we won't be at exascale. Exascale is roughly 100 times faster than Blue Waters, and I just don't think we'll be there. Certainly not with the kind of architecture that we're looking at now.
I think that exascale is possible. There's a lot to do in terms of hardware research and software research. But I think we will go through another turn of an intermediate system, probably around 2015-2016, that might be a 100-petaflop system. There are people who would like to see an exascale system by 2018. That's very aggressive. It is doable, but you'll have to sacrifice either cost or power.
Sometimes I think Blue Waters may be the last homogeneous general purpose processor system, because to get much past this without more power and without a lot more money you are going to have to give up something. You might give up the homogeneous parts and that would allow you to put a lot more computational power into the same footprint and the same power envelope, but it would be essentially like having a new edition, a new system. But that is going to require the right software tools. At exascale there is a risk of spending all of your energy moving data, not actually doing computations with it. And that will require building algorithms that are very limited in the data they would move, much more so than we do now.
How does IACAT play into some of the things we're doing here at Illinois?
The Institute for Advanced Computing Applications and Technologies provides a way to connect NCSA staff with the rest of campus, looking at some of these questions. We have a couple of projects that are looking in general at advanced computing (not all of them are looking at exascale), but three of them are looking at the petascale-to-exascale issues. One of those is looking at use of computers in applications. So that brings the knowledge here, and the access and the expertise in the applications, to the work that's being done in GPU systems, so those tools can actually be applied to real applications in petascale situations. There's another project that is doing similar things, but within a different programming model, that provides a more dynamic approach to the use of several of these processors to address the problems of systems that evolve over time. And again, working with real applications as compared to working with benchmarking and testing codes. And there's a third project that is looking more at algorithms with a focus on multiscale problems. To get to exascale, or even to trans-petascale, they'll need to rethink the algorithms. We need different algorithms, not for the code problems, but for the problems we'll put on an exascale system. And those problems tend to have many components and parts.
We're talking about extreme-scale machines, but not everybody has an extreme-scale type of problem to solve. For those researchers who need computational power to solve their problems, but are not Blue Waters or extreme-scale users, what sort of computing resources will be available in the future for them?
Well, even though they won't be using exascale they'll benefit from solving the power problem for it. The processing cores are not going to get much faster, so the only way you make a processor faster is if you provide more parallelism on it. Your laptop, in a couple of years, might have 32, 64, or even 128 processor cores on it. Doubling the core count will be the only way to get more computational power. And even now I think all laptops have at least two cores, so everybody has to deal with parallel processing. A lot of the techniques that we are developing make use of different levels of parallelism. And when you are looking at parallelism on Blue Waters, it's not equal threads; there is a hierarchy to the parallelism. So we have to understand how to make use of the eight cores on each chip and the 32 cores on each module. All that work will, in a few years' time, be of good use in your laptop.
So we'll all benefit from Blue Waters.
We'll all benefit from a better understanding of how to make good use of specific levels of parallelism. Just the very top level of parallelism atop the whole thing is something that only people who have the most demanding problems will have to worry about. But the tools that we're developing will help the whole software stack. Sometimes when looking at the big picture it gives you a better way to understand how to solve individual pieces.
The debate on education is a hot topic these days. The introduction of Common Core State Standards is only just the latest in the never ending debate on education.
In our home we use literature as our core curriculum. I believe literature is the most effective and meaningful way to teach the vast majority of subjects needed for the average person to thrive in the world. This is one of the many reasons that I oppose CCSS, because it stresses informational texts over classic literature.
I thought I’d spend a little time today telling you WHY I think literature and the classics are so important.
Let’s take history for example. Which is more memorable?
George Washington was born in 1732. He was general of the American army during the Revolutionary War. He was the first president of the United States of America and served from 1789 to 1797. He died in 1799.
George Washington was born in an old farmhouse to a grumpy and ungrateful woman. His mother’s hovering kept him from following his dreams again and again but he often found solace and companionship in his older half-brother Lawrence Washington.
At the age of 14 he finally convinced his mother to allow him to become a surveyor, a career to which he was well suited and proved to be the basis upon which he gained many of the skills which allowed him to rise quickly within the ranks of the British Army in the early 1750’s. In May of 1754 he was leading a small platoon when he ambushed a French patrol and killed a man who turned out to be the French ambassador. This was the spark that began the French and Indian War (also known as the 7 years war).
During the war he suffered through several bouts of dysentery; before he had fully recovered, he found himself in a battle that many would remember for years to come. In June of 1755 he rode on horseback to carry messages back and forth between command and the troops, all while bullets coming from the enemy hidden within the trees picked off his fellow soldiers one by one. George himself had two horses killed out from underneath him, but he continued to get up and soldier on. When the battle was over he was the only officer left standing, and was completely unscathed. Later he found four bullet holes in his coat, and one in his hat. One of his fellow soldiers said, “I expected every moment to see him fall. Nothing but the superintending care of Providence could have saved him.”
It is said that one of the Indian Chiefs fighting against the British Army later remarked, “I called to my young men and said, mark yon tall and daring warrior? He is not of the red-coat tribe–he hath an Indian’s wisdom, and his warriors fight as we do–himself is alone exposed. Quick, let your aim be certain, and he dies. Our rifles were leveled, rifles which, but for [him], knew not how to miss–’twas all in vain, a power mightier far than we, shielded you… Listen! The Great Spirit protects that man [pointing at Washington], and guides his destinies–he will become the chief of nations, and a people yet unborn will hail him as the founder of a mighty empire”
Need I keep going? We haven’t even gotten to the Boston Tea Party yet! I could tell you story after story, plucked from my memory. I need only to look up the dates. Yet the dates are all that textbooks expect us to remember.
History is rich. History is absolutely captivating! History is filled with more than names, dates, and places and there is just NOT enough room to put all of that in one book.
Textbooks are mostly written by committee, and since they have to put so much information in one volume, they don’t have room for what many consider “non-essential” details. But I believe the “non-essential” details are what make history WORTH remembering!
History textbooks certainly have their place in a home… I even have a few in mine… but even the best history textbooks are nothing more than glorified timelines. You can’t learn history from three or four books; you learn history from hundreds of books… thousands of books! You are cheating yourself out of some of the most amazing experiences and lessons if you think otherwise.
I had always WANTED to love history. Every single school year I would lovingly pick up my history textbook and gently turn the pages hoping that THIS year, would be the year that history would come alive for me. Year after year I was disappointed. I mean, how on earth do you make the Holocaust boring? You do it by killing the stories.
What is the most important part of history? Are exact dates really all that important?
Before I go on I want to clarify what “the classics” are. I’ve been using the term literature and classics interchangeably but they aren’t.
My favorite definition of “the classics” is by Oliver DeMille. He says, “A ‘classic’ is a work — be it literature, music, art, etc. — that’s worth returning to over and over because you get more from it each time.”
Isn’t that beautiful! I would like to add that a classic is any work that inspires you to learn more.
By this definition every person should have their very own classics list that consists of movies, music, art, books from every genre (both old and modern), and even people (babies are a classic). I even know of a few “textbooks” that are classics for me.
Shakespeare alone can teach us diction, history, ethics, language, poetry, grammar, communication, vocabulary, and more. Using only the works of Shakespeare you can leave high school with more ability to function and succeed in the world than most people graduating these days. Fortunately we have far more than Shakespeare at our fingertips. Using public domain books alone will allow anyone to get an absolutely superb, world class, leadership education. How much more rich of an experience will it be when we add modern and personalized classics to that list?
If you are having a hard time finding classics for your family, literature is a fabulous way to start. If you aren’t ready for Shakespeare, try Dickens. Is Bastiat too verbose? What about CS Lewis? Having a hard time with Jane Austen? Lewis Carroll might be a little easier to understand, or even Laura Ingalls Wilder. There is classic literature for every level, every age, every ability. Once you love the Chronicles of Narnia you may be drawn to The Screwtape Letters, or The Great Divorce, then on to the Abolition of Man. Don’t try and move on to things you aren’t ready for just because you are an adult; old English can be really difficult to navigate for anyone that is out of practice. Start where you are and move from there! Classics are worth reading again and again even if they are “for kids”; THAT is what makes them Classic.
That being said I do admit that it is very difficult (and few of us are brave enough) to get away from textbooks entirely. As I mentioned, I do have several textbooks in our homeschool library. So, I would like to give you a few pointers if you find yourself needing one or two.
- One author. Modern textbooks are written by committee. Texts that are written by one person (or at most two people) are much more likely to be written by someone with a passion for the subject and the desire to share. Honestly, who takes the time to write a chemistry book other than someone who absolutely loves chemistry? You lose that passion when you subject a book to the critical eye of a committee.
- Don’t ever assume it is “all encompassing”. Textbooks, by definition, can NEVER be all encompassing. History texts are glorified timelines, science texts must always be supplemented by experiments and hands on activities, and math can easily be misunderstood without real life examples. Textbooks can be a really great starting point but they should never be the ending point. Think of them as a reference only because that is really all they are (and if one turns out to be more than that, wonderful!)
- If you find an author you love and trust, stick with them. Dr. Jay Wile is the author of our high school science books. I love the conversational and clear way in which he writes. It feels like you are having a conversation with an expert who is trying very hard to help you understand. When I heard that he was starting a new series for the elementary grades it was a no-brainer for me to buy it even though I already had elementary science texts.
- Do your research to make sure that the publishers/authors share your world view. Chances are you aren’t going to find anything you agree with perfectly, and I’m not saying that you should shield your children from other ideas and views. What I am saying is that you want something that you can trust not to teach things that are contrary to your family values. A handful of discrepancies can easily be addressed with conversation and added teaching, but a text or curriculum with a completely opposing world view will make it difficult for you to teach what you feel is important. For example, I am not likely to appreciate a textbook that views the traditional role of a wife and mother as oppressive. Nor am I likely to buy something that views Joseph Smith as the devil’s spawn and claims all Mormons are going to hell. It doesn’t fit my world view and it would be far too time consuming to try and address those claims in a text riddled with them. Fortunately we have countless options to choose from! There are texts out there that support every worldview; choose the one that fits yours most closely.
It should go without saying (though I’m sure someone will take issue if I don’t mention it) but once you get into professional training (doctor, architect, engineer, etc.) you will need to study the books that are necessary to learn what you need to learn, textbook or not. Since I don’t teach specialized professional training, I don’t need to worry or write about those resources.
Classics are an absolute treasure. Just imagine what it would be like to learn from someone who speaks to your heart and soul. What would it be like to have a friend tell you just what you need to hear just when you need to hear it? To have someone push you just hard enough to grow (like having a personal trainer who helps you feel pleasantly sore after a workout), someone who whispers suggestions about how to help you with your problems or what you need to help you out of your funk: that is what the classics are!
This is why classics are more effective than textbooks. Classics are meaningful on a personal level. Classics speak to YOU in a way that is different from the way they speak to anyone else.
So what are you waiting for? Go see what treasures the classics have waiting for you!
Federigo Enriques's parents were Giacomo Enriques and Matilde Coriat, who were Jewish. Giacomo, whose ancestors were Portuguese, had been a rich rug merchant before his marriage to Matilde, who had been born in Tunisia and was a native French speaker. However, Matilde received such a large dowry on her marriage that Giacomo retired from the rug business at that time. Federigo was known in the family as Ghigo. He had an older sister, Elbina, who married Guido Castelnuovo in 1896, and a younger brother Paolo (1878-1932). Paolo Enriques became Professor of Zoology at the University of Padua and did important work on genetics. Matilde made great efforts to have the children well-educated and gave them every encouragement. The Enriques family moved from Livorno to Pisa in 1878, where Federigo was educated at the Liceo di Pisa. An early sign of Federigo's talent in mathematics is recounted in the following anecdote:-
Bored by a homework assignment set by his tutor that involved computing the squares of numbers from 1 to 30, Federigo, then age eleven, figured out that he could generate the squares by adding successive odd numbers: 1, 1 + 3, 1 + 3 + 5, and so on. Buoyed by his discovery, he went on to calculate the squares of numbers from 1 to 1,000, publishing his results in a small pamphlet, which cost him his entire savings (seven lira). When Enriques's daughter asked him, many years later, whether his parents had been pleased with his enterprise, he flashed a smile and replied, "They never knew about it." The eleven-year-old had shown even then a streak of independence he never lost.
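The identity behind the boy's shortcut is the classical fact that the nth square is the sum of the first n odd numbers, which follows from a one-line telescoping argument:

$$n^2 = \sum_{k=1}^{n} \left( k^2 - (k-1)^2 \right) = \sum_{k=1}^{n} (2k - 1),$$

so each new square is obtained from the previous one by adding the next odd number, $n^2 = (n-1)^2 + (2n-1)$: exactly the rule the eleven-year-old applied a thousand times.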
Two years later, when he was thirteen, Federigo met geometry for the first time and became very enthusiastic about the subject. His mother wrote to her sister at that time:-
Elbina studies with Mr Rodolfo and gives us much satisfaction. Ghigo [Federigo] has just turned thirteen. His studies also go well. Today, just imagine, he started a geometry course. You know how this boy is, every day his head has a new idea that lasts for about the space of a morning.
Of course, she was wrong about Enriques's new found love for geometry - it lasted his lifetime. He became fascinated by other subjects while at the high school, such as logic, epistemology, pedagogy, and the history of science. It is clear that his later interest in philosophy originated during this period in his life. Mr Rodolfo, who is mentioned in the quote above, was employed by the Enriques family to tutor the three children. Although it is not clear exactly what he taught Federigo, it is reasonable to assume that he made a highly significant contribution to the young boy's education teaching him topics not covered in the school syllabus.
Federigo took his final school examinations in the summer of 1887. He entered the University of Pisa, also studying at the Scuola Normale in Pisa, and he was awarded his degree in June 1891. He was fortunate to have been taught by Enrico Betti, Luigi Bianchi, Ulisse Dini, and Vito Volterra in Pisa. His thesis advisor had been Riccardo De Paolis (1854-1892), who had been appointed professor of higher geometry at the University of Pisa in 1881. After his laurea degree, Enriques continued to study at Pisa for a year. In 1892 he asked Guido Castelnuovo, who was in Rome, for advice on which direction his research should take. He took Castelnuovo's advice and worked on algebraic surfaces, sometimes collaborating with Castelnuovo, before moving to Rome to work with him. He spent a year studying in Rome before moving again, this time to Turin where he worked with Corrado Segre. In 1893 he published Ricerche di geometria sulle superficie algebriche, an important contribution to the theory of algebraic surfaces. This was the same year in which he began to seek a university chair.
Giuseppe Bruno died early in 1893 and later that year a competition was announced to fill his chair of projective and descriptive geometry at the University of Turin. Ferdinando Aschieri, Eugenio Bertini, Enrico D'Ovidio, Corrado Segre, and Giuseppe Veronese were appointed as the referees and they examined six eligible candidates. In addition to Enriques, these were Luigi Berzolari, Mario Pieri, Alfonso Del Re, Federico Amodeo, and Edgardo Ciani. Berzolari was appointed to the chair with Enriques ranked in fourth equal position behind Berzolari, Pieri and Del Re:-
... it is apparent that Enriques, younger than most candidates, felt perhaps abnormally insecure about finding a stable position in a difficult world. He evidently also trusted personal lobbying more than the competition process. He described his experiences and plans and poured out his feelings in frequent - sometimes daily - long letters to Castelnuovo: 668 letters between November 1892 and December 1906.
An extraordinary professorship (called professore straordinario) in descriptive and projective geometry became vacant at the University of Bologna when Domenico Montesano left Bologna to take up a chair at Naples in 1893. Given the ranking in the previous competition, it was expected that Pieri or Del Re would be appointed, and there was talk of appointing Pieri without a competition:-
Enriques had been considering the situation and concluded that (1) should there be a competition, the faculty would fill the instructional need temporarily by hiring a professor incaricato; or (2) should the faculty call Pieri with no competition, Pieri's positions at Turin would become vacant. In either case, Enriques would be a stronger candidate than he would be that year for straordinario at Bologna. Enriques had written to Volterra, his former teacher at Pisa, for advice. Volterra replied that Arzelà had written to him from Bologna referring to a possible incaricato position; and Volterra suggested that Enriques go to Bologna to talk with Pincherle and Arzelà. Enriques delayed, apparently at Arzelà's suggestion, but did go, probably on the day of the faculty meeting, Friday or Saturday, 17 or 18 November.
The Bologna faculty agreed to appoint Pieri without a competition and it appeared that it would be a formality that the minister of education would take the advice of the Faculty of Bologna and confirm Pieri's appointment. However, the government became embroiled in a scandal and the minister of education resigned. The new minister of education did not approve Pieri's appointment, telling the Faculty at Bologna that they had to hold a competition and make a temporary appointment while this was taking place. Enriques was appointed to the temporary post in January 1894. The competition for the permanent Bologna post did not take place until October 1896; Enriques was appointed to the chair of descriptive and projective geometry with Pieri coming a close second. The long saga surrounding this appointment has been described in detail in the literature.
Enriques made important contributions to geometry and to the history and philosophy of mathematics. Over a period of 20 years he produced, together with Castelnuovo, a series of papers which finally yielded a classification of algebraic surfaces:-
In the 1890s Enriques, a mathematician who once quipped that "intuition is the aristocratic way of discovery, rigour the plebian way", and his colleague and future brother-in-law Castelnuovo began their monumental work on the birational theory of algebraic surfaces over the complex numbers. Severi joined them in this effort a few years later.
The key to the classification was contained in a remarkable paper Sulla proprietà caratteristica delle superficie algebriche irregolari published by Enriques in 1905. It contained what is now called the "Completeness theorem". His work on algebraic surfaces gained world-wide recognition when it was highlighted by H F Baker in his presidential address to the International Congress of Mathematicians in Cambridge in 1912. Another topic which Enriques worked on was differential geometry. In this area he also won fame with the joint award of the Bordin prize to him and Severi in 1907 for work on hyperelliptic surfaces. Around this time he contributed three articles to the Encyklopädie der mathematischen Wissenschaften, one on the foundations of mathematics and the other two, co-authored with Castelnuovo, on algebraic surfaces and birational transformations.
The foundations of mathematics had always interested Enriques, and, at Klein's request, he wrote an article on the foundations of geometry. His interest also extended to psychology when he asked questions such as:-
What leads a mathematician to make a conjecture?
For example, in a letter to Guido Castelnuovo written in May 1896, he writes:-
I have been working for several days on another point that mathematics takes only as a pretext: hearing the name you will feel more horror than astonishment. This is the philosophical problem of space. Books on psychology and logic, physiology, and comparative psychology, a critique of knowledge etc., sit on my coffee table where I savour them with delight trying to extract the essence for what concerns my problem ... Read Wundt's 'Logik', at least that part which concerns the method of mathematics, and think that he is a physiologist who writes it: a physiologist who is not afraid to climb the steep slope of Kant's conception to illuminate from above the great progress of all sciences.
Giorgio Israel and Marta Menghini write:-
Enriques himself recalled how his interest in mathematics was due to a "philosophical infection caught at school." His interest in science problems (and geometry in particular) was never simply of a technical nature but was always motivated by questions of "general culture" and lively reflection on the role of scientific thought in human activity. His mathematical research was interwoven with the intention of finding an answer to the great philosophical question on which science is founded. It is, after all, impossible to separate Enriques, the philosopher of science, from Enriques, the mathematician, without running the risk of failing to understand both of them.
His book Problemi della scienza, written in 1906, stressed the unifying aspect of scientific theories, the association of ideas and of scientific representation. He writes:-
It is plainly seen that scientific questions include something essential, apart from the special way in which they are conceived in a particular epoch by the scholars who study such problems. ... In the formulation of concepts, we shall see not only economy of thought ... but also a somewhat determinate mental process ... .
Jeremy Gray writes:-
The mathematical community has evolved sophisticated ways of reading Enriques's work in algebraic geometry, and we see most of it as either correct or easy to put right. It is harder for us today to accommodate his writing as a philosopher or populariser. He held a subtle position, according to which knowledge is inseparable from the means of knowing, logic from psychology. This has long been unfashionable in the sciences. It may be that cognitive psychology will reopen the avenues Enriques explored; there are signs that it has reached at least the philosophy of mathematics. ... Enriques offered a position on the nature of knowledge that was original and sophisticated. His readers found a rare grasp of modern science, traditional philosophy, and contemporary psychology.
In addition to his research work, Enriques also wrote textbooks for schools. He co-founded Mathesis, becoming president of the Society. He founded a number of journals including Scientia and Periodico di Matematiche. He also founded the Italian Philosophical Society and was president of it from 1907 until 1913. During that time he organised the fourth international congress of philosophy in Bologna in 1911.
He remained at Bologna until 1922 when he was called to Rome to teach complementary mathematics, a new course designed for high school mathematics teachers. In the following year he accepted the chair of higher geometry at the University of Rome where he founded the National Institute for the History of Science and the School of the History of Science. Since he was Jewish, he was affected by the Manifesto della razza (Manifesto of Race) enacted by Mussolini in July 1938. This stripped Jews of Italian citizenship and banned them from positions in banking, government, and education. Enriques had to resign from teaching in 1938. The fact that he had tried to avoid political problems by joining the Fascist Party was of no help in letting him avoid the difficulties caused by the Manifesto of Race. He was forced into hiding during the war years but, from 1941, he was able to participate in the illegal school organised by Castelnuovo in Rome to give special courses to instruct Jewish students disadvantaged by anti-Semitic government policies. He also continued writing but, unable to publish under his own name, published some works under the names of his loyal students. After the fall of Fascism in 1944, he was able to return to teaching at the University of Rome.
Many of Enriques' students recalled their experiences:-
As a teacher, Enriques loved nothing better than to engage in his own leisurely peripatetic conversations with students, in the public gardens in Bologna or under its arcades after class. When he moved to Rome, the labyrinthine network of paths in the Villa Borghese became his favourite destination; he would stop there every so often, one student at that time recalls, "to trace mysterious figures on the ground, with the tip of his inseparable walking stick".
Enriques received many honours. He was awarded an honorary degree by the University of St Andrews. He was elected to the Reale Accademia dei Lincei in 1906 and, in the following year, was awarded (together with Levi-Civita) the Royal Prize in Mathematics. He was also elected to the National Academy of Sciences of Italy (the "Academy of Forty"), the Académie des Sciences Morales et Politiques (1937) and many other academies.
Article by: J J O'Connor and E F Robertson
The Children of Birmingham
In the almost excessive reporting and other coverage of the recent events in the South, there has been amazingly little mention of the momentous success of non-violence as a political means. How and why is it working? One reason for the silence, no doubt, is that reporters and their readers tend to look for, and wait for, “dramatic” violent incidents—the police dogs got much more notice than the fortitude of the children. Perhaps another reason is that successful non-violence does not fit easily with American political thinking; maybe people don’t want to mention it. The fact remains that a major social change is occurring, with repercussions in the power structure and the economy, yet with not many wounded and less than half a dozen killed (counting in a murder or two that “has no connection”). We are witnessing a novelty in democratic history. Especially in pacifists it awakens the hope that the example may have consequences in America and in the world far beyond the field of race relations itself.
In discussions of non-violence—except when they are narrowly moral or religious—much is said, usually discouragingly, about the peculiar group characters and the peculiar circumstances that are necessary. It is always pointed out that just the Indians succeeded with just the British, but nobody could have succeeded with the Germans (not that anybody tried). Certainly success does depend on the character of the opponent and on the nature of one’s own commitment. What can we now gather, from the Southern reports, about our peculiar American situation? I am drawing the following reflections from the observations of Dave Dellinger of Liberation magazine during a recent stay in Birmingham.
(1) Very important is that in a situation of powerless frustration, non-violent action at least gives something to do that is simple and guiltless. In a highly organized economic and social structure, where political action is indirect, and legal remedy is necessarily elaborate and slow, people suffer from a creeping paralysis. But now one can strike, boycott, picket, protest en masse, pray and sing en masse, sit in, go limp, be hauled off to jail. None of this requires much money; propaganda is mainly by word of mouth and live meetings; the skills are easily taught—though they require courage and conviction; and organization has proved to be surprisingly uncomplicated and flexible. The general American feeling is, “In such a vast system, what can one man do without connections, money, political party?” Here, demonstrably, many can have an effect, with rather rudimentary organization. People revive with activity, and one action leads to another.
(2) A striking feature of the Southern protest—as contrasted, for example, with the Gandhian movement—is that the Negroes and their white friends do not regard themselves as flouting the Law; rather, the laws are “unjust.” In the sit-downs in terminals, disobeying the police is not regarded as Civil Disobedience or “conscientious objection,” but rather as affirming one’s rights. Partly, of course, this is because the federal law is different from the state law (and has more troops behind it); but it is also because at a certain point of foolishness and unfairness, law as such loses its moral authority. (This was evident after the Dred Scott decision or during Prohibition.) When the disobedient feel that they are “right,” the problems of disobedience cease to be internal and become physical and technical: how not to be swept away by the hoses, how not to drown or get your head broken. This is, of course, a kind of anarchy. In my opinion it is salutary in our extraordinarily supine police-ridden society.
(3) There has been good solidarity. An incident occurs—e.g., police persecution—and at once a small crowd gathers and threatens quickly to become a big crowd. Naturally, any concerted action involving deep need and the danger (and embarrassment) of public exposure creates the sense of solidarity and tends to grow by feeding on itself. In the South—as with labor action in the 30’s but unlike pacifist protest so far—this process of accretion has continued, perhaps because the provocations are omnipresent and obvious. There is no need for imagination or persuasion. Also, the very fact of being in a ghetto, which is demoralizing and ordinarily leads to self-contempt and mutual contempt, becomes the ground of loyalty and rallying when there is a ray of hope. The organized leadership has had the usual bickering, based on jealousies and real differences of opinion, but the solidarity of the people has been unbroken and has forced the leaders to cooperate.
(4) There has been an interesting dialectic between the local actions and the national and regional organizations. Repeatedly the segregationists charge that all the trouble is caused by outside agitators, but the charge doesn’t stick, and new national organizations keep springing up that are even more radical than the ones already in existence. When Dr. King says, typically, “Our local affiliate in Birmingham invited us to be on call to engage in a non-violent direct action program if such were deemed necessary” (“Letter to Eight Alabama Clergymen”), his statement is entirely convincing. It is convincing because everywhere, at a grassroots level, all the Negroes and a great majority of the whites believe that it is high time for such a direct action—at least in Alabama!—so there is nothing surprising if the Negroes of Birmingham think so. Here is a curious inversion of the usual relation of “masses” and “leadership.” The ideology of the protests, the story of the injustice protested, does not have to be taught; it has been endemic in every community for generations; there are millions of orators. Except for the Black Muslims—who are opposed to the integration movement anyway—leadership has added nothing to the ideology of the movement. What leadership—like the students around James Farmer who became CORE—has done, however, has been to give examples of direct action and these have grown gradually into the tactics of a general guerrilla war. The tactics have been taught. But tactics are precisely what one would expect to come “from below”—as, for example, the CIO ideology of vertical unionism came from above—yet brilliant tactics like staying-in and playing basketball rather than leaving the plant were probably spontaneous and local. As Dave Dellinger points out, very few Negroes have ever heard of Gandhi, yet Montgomery and Birmingham were directly Gandhian nevertheless.
So far I have been talking largely about sociological matters. Let me turn to ethical and spiritual matters, which have been, in my opinion, even more essential for the success of non-violent protest.
(5) First of all, non-violence is succeeding—is remaining non-violent—because by and large the segregationists themselves know that they are in the wrong, whatever they may animally or psychotically feel. The Declaration of Independence, Christianity, Ethics, the Court, all declare against them. They cannot help but recognize that the protesters have a human claim. This makes it impossible simply to mow the protesters down. Paranoiac individuals or an inflamed mob might become violent, but such violence does not carry over. Indeed, the most that the segregationists seem to be saying at present is, “Yes, but not now—not by compulsion.” Also a big proportion of the Southern whites—I have heard it estimated at 35 per cent—is against segregation. And it is evident that the great majority of the youth in the colleges are either integrationist or have no strong feelings—they are interested in getting their degrees and going on to prestige graduate schools without undue embarrassment. In any case, there have been unmistakable signs of ambivalence, divided self, among the white authorities. Mighty threats of reprisal have come to nothing; orders have been given and not executed; respectable churches have begun to remember their universalism.
Underlying this change of heart, of course, are changed objective conditions: the growth of overwhelmingly national communications and economy, national military conscription, national labor unions following Northern industry South, and national politicians relying heavily on Northern urban votes. Not least, the world-wide breakdown of colonialism has made segregation embarrassing in the cold war.
(6) Again, as in India, religion has been a powerful help to the Southern Negroes, enabling them to transcend self and fear, and to meet with fortitude the risks of unresisting martyrdom. The Negro meetings, as described, are often revivalist and (spiritually) intoxicating. On the march and standing, praying and hymn-singing are things to do. The Christian rhetoric of the leaders seems to be somewhat authentic. Among these simpler rural or recently rural folk, Christianity seems to retain some of its essence, its millenarianism, its Kingdom Come.
(7) Finally, there have been the unique factors of family and children which have gone little noticed. The warm, close-knit community of Southern Negro families, dedicated to the children and imbued with religion, becomes a good source of strength and endurance for a difficult non-violent war, once Uncle Tom has been transcended. (The fragmented and more chaotic families of New York and Chicago promise much more hostility and spite.) Those who have criticized the exposure of the children in Birmingham quite miss the point. In my opinion, the entire movement is for the children—and all the evidence is that the children took part enthusiastically. Let us notice that in Northern cities and suburbs also, mass non-violent action has been directed mostly at the issue of the elementary schools. There have been a few mass protests for fair labor practice, but protesting will not produce jobs for the uneducated; it is the thought that the schools are inferior that causes passion. (To give an analogy, the only pacifist protest with mass appeal has been against poisoning the milk with fallout from the testing.) In Dr. King’s usual sermon, surely the telling passage is: “When you suddenly find your tongue twisted and your speech stammering as you seek to explain to your six-year-old daughter why she can’t go to the public amusement park that has just been advertised on television, etc.” And the seminal Supreme Court decision was, of course, to give the kids a better break by integrating the schools.
Probably it is only by an idea of the future that one can maintain the vigor and discipline necessary for revolutionary non-violence, or indeed for revolution of any kind. People get used to long-standing present oppression; if it worsens, they lash out in anger and vengeance, and they riot. But the future requires idealism and persistence. In India, Swaraj—Self-Rule—was the future; it was hope. In America, at present, for whites and Negroes both, only the children seem to represent the future. This is what our affluent society has come to, for rich and poor. Among the whites, there is fantastic concern over primary schooling, child psychology, young marriage, suburban environment, as if adult life were not serious. (Ironically, the suburban flight—for the sake of the white children—segregates the Northern schools and now creates problems for the white parents because of the Negro children.) Among the Negroes, the need to secure equal opportunity for the children has suddenly become desperate because with automation and the collapse of share-cropping they are threatened with an even worse future.
Success in non-violence means, fundamentally, not victory or defeat for either side, but that the opposing groups come to share a new future of common mankind. This was certainly Gandhi’s conception. Astoundingly, in affluent America—with its one-third still ill-housed, ill-clothed, ill-fed, even though the GNP is now 600 billion per year—the future of mankind seems to be represented by giving opportunity to the children of poor Negroes. They alone seem to be human! Nothing else arouses deep feeling, can fire the pen of intellectual writers, can bring people out on the streets and make the powers-that-be take notice, and even President Kennedy press for some sensible domestic legislation with a modicum of vigor. The liberal press seems to have no other serious social problem to cover.
Looked at frankly, this is a pathetic—and disastrous—situation. Unless we, Negroes and whites, show an equal seriousness for the future about more grown-up and universal problems, there is not going to be any future for these children to inherit. God bless them, they will get an equal education, but it will be a bad one; they will grow up and vote, but it will be for Bobby Kennedy; they will have jobs (or equal unemployment) manning an expanding economy and galloping technology that we cannot cope with as it is; and they will drive cars in cities already choking to death with cars. They will drift like everybody else—in a world whose functional community is kept from developing by an archaic power structure—right into nuclear annihilation.
The hope is that this Gandhian movement of the children of Birmingham, who certainly never heard of Gandhi, will teach them and some other people to take similar democratic action toward other things that make life livable. | <urn:uuid:64e00128-55f3-4c18-9597-782dea4c8f66> | CC-MAIN-2015-35 | https://www.commentarymagazine.com/article/the-children-of-birmingham/ | s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440644065341.3/warc/CC-MAIN-20150827025425-00047-ip-10-171-96-226.ec2.internal.warc.gz | en | 0.962653 | 2,780 | 2.53125 | 3 |
What is chlordane?
Chlordane is a man-made mixture of chemicals that was widely used as an insecticide in the United States (U.S.). Although no longer used, chlordane is very persistent and can still be found in some soils. Chlordane contains heptachlor, another persistent insecticide (see BCERF Fact Sheet #12, Pesticides and Breast Cancer Risk, An Evaluation of Heptachlor). The most common trade names for chlordane sold in the U.S. were Octachlor and Velsicol 1068.
What is the history of chlordane's use?
Chlordane was used extensively as an insecticide in the U.S., from its introduction in 1947, through the 1980s. The most common use of chlordane was for termite control. It was poured or injected around foundations to protect homes and buildings from termite damage. Its use was especially high in areas where termites caused structural damage, such as the southern U.S. It was also used to kill insects in the soil, to prevent them from damaging food crops, gardens and turf, and was used as an herbicide to control weeds in turf. Another use was to prevent fire ants from building nests in power transformers. Chlordane's use on food crops was canceled in 1978 by the U.S. Environmental Protection Agency (EPA). Its use for protection of buildings and power transformers continued for another 10 years. In 1988, all commercial and domestic use of chlordane in the U.S. was banned by the EPA.
Why was chlordane use banned?
Laboratory mice that were fed chlordane over long periods of time had a higher incidence of liver cancer than untreated mice. These results raised concerns about chlordane's ability to cause cancer in humans. Chlordane was also found to stay in the environment and build up in animal and fish fat. There was a concern that people may be exposed to this insecticide by eating food contaminated with chlordane, including fish, shell-fish, dairy, meat and poultry products. Its use was subsequently banned due to these concerns.
Is chlordane still made commercially?
Chlordane is still made in the U.S. for export. Formulations containing chlordane are available internationally for termite control and wood treatment.
How do federal agencies regulate chlordane to protect the consumer?
All chlordane sales and use in the U.S. were canceled by the EPA in 1988. Since chlordane is still made in the U.S., the Occupational Safety and Health Administration (OSHA) regulates chlordane levels in the workplace. The EPA limits the amount of chlordane that can be released from any industrial source into waste waters. The EPA also sets the maximum level of chlordane allowed in drinking water. This "maximum contaminant level" for chlordane has been set at no more than 2 micrograms of chlordane per liter of drinking water (one liter is approximately one quart). The Food and Drug Administration (FDA) and the U.S. Department of Agriculture (USDA) monitor the levels of chlordane and its breakdown products in domestic and imported foods.
Who might have been exposed to chlordane?
People most likely to have been exposed to this chemical in the past are:
How can we be exposed to chlordane today?
Chlordane is very stable in the environment. It can remain in some soils for over 20 years. People digging around the foundations of buildings treated with chlordane in the past could still expose themselves to this highly persistent chemical. Children may be exposed by playing with chlordane-contaminated soil found near the foundations of chlordane-treated buildings and homes. Chlordane can be found in the air of some homes 15 years after its use. Because of extensive use of chlordane for termite control in urban areas, small amounts of chlordane in soil can be carried into water run-off, and contaminate river and lake beds where fish feed. Chlordane can build up and accumulate in the fat tissue of fish and shell fish that have lived in contaminated bodies of water. Those working in factories that make chlordane should follow current OSHA workplace protection guidelines to minimize their exposure to chlordane.
Does chlordane cause cancer in experimental animals?
Chlordane caused an increase in the incidence of liver cancer in male and female mice and thyroid cancer in female rats when it was fed to these laboratory animals over long periods of time. Chlordane may also act with other carcinogens to "promote" liver tumors in male mice. Diethylnitrosamine (DEN) is a known cancer-causing substance (carcinogen). Male mice that were given DEN in drinking water and then fed a diet containing chlordane, were twice as likely to develop liver tumors than when given DEN alone.
Does chlordane cause cancer in humans?
There is inadequate evidence to show that chlordane causes cancer in humans. In a few reports, chlordane exposures have been linked with cancer. But since most of the people who have been exposed to chlordane have also been exposed to other pesticides, establishing a clear link between chlordane exposure and cancer is difficult. In one study, an increase in deaths due to lung cancer was observed among pesticide applicators. However, there was no significant increase in deaths from any cancers among men who worked in chlordane-manufacturing plants. In another study, agricultural workers who handled chlordane and other pesticides were observed to have a higher risk of developing a type of cancer called non-Hodgkin's lymphoma. But other studies have not observed an increase in the risk for this cancer among chlordane-exposed agricultural workers. Unfortunately, similar studies on women exposed to chlordane through their work in agriculture or at chlordane manufacturing plants have not been done.
Does chlordane cause breast cancer?
In studies conducted so far, chlordane has not been directly linked with causing breast cancer in either animals or in humans. Since chlordane can build up in breast fat, three studies have looked at the levels of a chemical contained in chlordane mixtures (trans-nonachlor) and chlordane break-down products (oxychlordane) in the breast fat of women with and without breast cancer. The results of these studies have not been consistent. Two of these studies did not find significantly higher levels of chlordane or its breakdown products in the breast fat of women with breast cancer compared to women without the disease. Both of these studies tested small groups of women (20 or less). The one study that did find elevated levels of oxychlordane in women with breast cancer compared to women without breast cancer is of very limited value because of the very small number of women studied (only 5 in each group) and other problems in the design of the study.
It is difficult to make any definite conclusions from the very few small studies that have been conducted to date. Larger, more carefully designed studies are needed to further investigate whether higher body levels of chlordane and its breakdown products are associated with an increased risk of developing breast cancer. Several studies involving larger groups of women are in progress.
How may chlordane affect breast cancer risk?
A woman's lifetime exposure to estrogen has been linked to increased breast cancer risk. Estrogen is a female hormone that helps control the reproductive cycles and breast growth. There is a concern that synthetic chemicals that act like estrogen or cause other chemicals to act like estrogen may increase a woman's risk of developing breast cancer. In one laboratory experiment designed to test for a chemical's ability to act like estrogen, chlordane did not act like estrogen when it was tested alone. There is currently no evidence that chlordane can enhance the estrogen-like effects of other environmental pollutants. The one study that had suggested that chlordane can enhance the estrogen-like effect of other pesticides is considered invalid because the results of this study could not be reproduced.
Another way a chemical may affect breast cancer risk is if it "disrupts" the way the body makes or breaks down estrogen. Estrogen can be broken down in the liver by several routes. One route yields a very weak form of estrogen that is excreted from the body. Other routes yield forms of estrogen that may be cancer promoting. Chlordane increases the rate of estrogen breakdown in the liver. However, scientists have not determined if chlordane causes breakdown of estrogen into a more or less cancer-promoting form.
The immune system of the body plays an important role in the body's defense against cancer. There is concern that chemicals that damage the immune system may affect cancer risk. The development of one part of the immune system has been shown to be adversely affected in young experimental animals that were exposed to chlordane before birth. However, these studies did not determine if the chlordane-exposed animals were more prone to develop breast cancer as adults. Therefore, more animal studies are needed to determine if chlordane-induced changes in the immune system can affect breast cancer risk.
Is chlordane present in breast milk?
Since chlordane can build up and be stored in breast fat, human milk can carry this chemical from a mother to a breast-fed infant. At the levels of contamination found in breast milk samples in the U.S., researchers have concluded that the estimated risk of cancer is far outweighed by the beneficial effects of breast feeding an infant. The amount of chlordane that an infant in the U.S. may receive from breast milk has been estimated to be well below the "Allowable Daily Intake" set by the World Health Organization.
There is not enough evidence to show that chlordane directly causes breast cancer in humans or laboratory animals. However, there is limited evidence that chlordane has the potential to affect breast cancer risk: it may affect estrogen levels in animals, compromise the animal's immune system and act with other carcinogens to "promote" liver tumors. Further studies are needed to determine if chlordane affects breast cancer risk through these mechanisms.
Where is more research needed?
Is more research being done?
The National Institutes of Health (NIH) has recently funded two large new studies to determine any possible association between higher body levels of pesticides such as chlordane and breast cancer risk. One study based in California is looking at blood levels of a variety of persistent pesticides, including chlordane and its breakdown products, in African-American women. Another large scale study is being conducted on women residing in Long Island, New York to determine if chlordane exposure is associated with increased breast cancer risk. When the results of these studies are available, they will be included in any updated versions of this fact sheet.
How can I minimize exposure to chlordane that may still be in the environment?
Prepared by Renu Gandhi, Ph.D., BCERF Research Associate
and Suzanne M. Snedeker, Ph.D., Research Project Leader, BCERF
When reproducing this material, credit the authors and the Program on Breast Cancer and
Environmental Risk Factors in New York State.
Funding for this fact sheet was made possible by the New York State Department of Health and the U.S. Department of Agriculture/Cooperative State Research, Education and Extension Service. | <urn:uuid:e8bd978d-d274-4c33-9647-049ec35d9ea7> | CC-MAIN-2015-35 | http://envirocancer.cornell.edu/factsheet/Pesticide/fs11.chlordane.cfm | s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440644064865.49/warc/CC-MAIN-20150827025424-00218-ip-10-171-96-226.ec2.internal.warc.gz | en | 0.960892 | 2,364 | 3 | 3 |
This page contains information on ginkgo biloba and how it is used as a herb in alternative herbal treatments to treat ailments and problems. It is used to promote mental alertness, improving memory and to treat asthma and also a role in male sexual functioning.
Please note that we are not advocating that people stop using their normal medication, but would like to make people aware that some alternative therapies can be very effective to help treat problems and create a healthier, younger and more vital you. For more information on our range of products, please click here.
Although we believe in the therapeutic and healing properties of herbs, care must be taken in the use thereof, as they are powerful compounds.
Genus and specie
Ginkgo biloba L.
Maidenhair tree, Buddha's fingernails, flying moth leaf, and duck-foot.
Ginkgo biloba is a large deciduous tree. The tree trunk is erect and the branches form a dense crown when
old. The bark is gray and very cracked. Leaves are fan-shaped and yellowish green. The
male flowers are yellow and hang in thick, pendulous catkins. The female flowers are round, solitary and on long stalks, followed by yellowish fruits, which have a nasty smell when ripe.
The leaves and seeds are used.
Ginkgo biloba is a bittersweet, astringent herb that dilates the bronchial tubes, blood vessels and controls allergic
responses. The leaves stimulate the circulation and the seeds have anti-fungal and antibacterial effects.
It contains flavonoids (flavonol glycoside and non-glycosidid biflavonoids), unique diterpene lactones (gingolides A, B, C, J and M) as well as a sesquiterpenoid (bilobalide).
- Internal use
- It is used internally to combat asthma, allergic inflammatory responses, senile dementia, to aid in mental alertness, improving memory, circulatory complaints and varicose veins.
- The leaf extract has shown good results in combating various symptoms of cerebrovascular insufficiency, as well as dementia, such as memory loss, disturbed concentration, dizziness, mood swings and morbus Alzheimer.
- Some studies have also shown the extract to be effective in treating tinnitus.
- The leaves are used to stabilize an irregular heartbeat and the seeds are useful for coughs with thick phlegm and urinary incontinence.
- Since Gingko biloba improves blood flow, and especially microcirculation, it is helpful to increase blood flow to the genitals by stimulating the action of the endothelium derived relaxing factor and appears to be very effective in the treatment of erectile dysfunction caused by a lack of proper blood flow to the genitals.
- The seeds are used in Chinese medicine and work on the "Lung" and "Kidney" meridians and are used for asthmatic disorders.
- The seeds are also roasted and eaten as a snack and are also used in soups, stir fries and stews.
- Edible oil is also obtained from the seeds.
- External use
- Aromatherapy and essential oil use
Ingesting large amounts of the material as well as the seeds may cause dermatitis, headaches, diarrhea and vomiting.
Taking ginkgo orally may increase the effects of blood thinning medication.
- Healing cream
- To assist in wound healing while soothing skin complaints – such as eczema and psoriasis, acne and piles and moisturizing and protecting the skin. This product has shown its effectiveness over a wide range of problems and judging from sales over more than a decade – it is the trusted healing cream to help with all mishaps, allergic reactions, irritated, burning, itchy and uncomfortable skin conditions.
- Face wash
- This face wash will properly clean your face and remove all impurities and environmental pollutants, without drying the skin. It contains eight herbal extracts to help promote a clear, vital and healthy complexion and a younger looking skin.
- Moisturizing day cream
- This day cream is formulated to help fight the signs of aging on various fronts. It helps to reduce free radical damage which, if left unchecked, leads to premature aging. The herbal extracts help to promote cell rejuvenation and regeneration and provide moisture and hydration to the skin.
- Nourishing night cream
- This nourishing night cream penetrates the skin extremely well and does not make the skin feel oily. It contains a host of herbal extracts to help in the fight against premature aging and has added vitamin E as well. Apart from the moisturizing effect and the anti-aging properties it also softens and smoothes the skin.
- Eye gel
- An effective refreshing eye gel to help reduce puffiness and dark rings around the eyes, while fighting wrinkles and lines. This is a very clever combination of herbal extracts and the base formula has its roots in a clinically proven formula.
- Mud face mask
- With this skin treatment product we combined a special selection of herbs in a base of thermal mud with oligoelements. This recommended weekly treatment will boost circulation to the skin, help to fight wrinkles and lines, improve firmness while at the same time improving suppleness and elasticity of the skin.
- Shampoo with rosemary extract + 7 other herbals
- Our shampoo is in a class of its own – and granted – it is far more expensive than cheap supermarket shampoos, but no other shampoo has the active ingredients we have in our shampoo. The rosemary will boost the health of the hair and scalp, while the other seven herbal extracts will help strengthen the hair and make it shine, increase the volume and make it manageable.
- Rosemary hair treatment conditioner
- We have found that this hair conditioner should really be used as a conditioning treatment. This then removes the need to condition the hair every time you wash – and can be used once a month. It is a superb hair tonic and helps in the control of sebum secretion of the scalp. Although not formulated for dandruff – the ingredients will assist with this as well, while supporting the health of the scalp.
- Hand and body lotion
- When formulating this hand and body lotion we created a rich nourishing, protecting and reviving lotion, which will not leave the skin oily or tacky, but will create a well moisturized, hydrated and supple skin. After applying this lotion it will quickly be absorbed by the skin, leaving it silky soft, smooth and well moisturized.
- Stretch mark gel
- Although nothing can remove already formed stretch marks (only surgery can do that) – thousands of satisfied clients confirm that this gel improves the appearance of old stretch marks. The gel will help in PREVENTING stretch marks (a 92% success rate) and is used with great success by expectant mothers and body builders who may form marks when bulking-up. The formula of this gel is based on clinical studies done in France, to which we added other herbal extracts.
- Cellulite gel
- Fighting cellulite is easy with this herbal cellulite gel. It contains a patented extract of Bayberry (Myriceline) and nine other plant extracts and essential oils. The gel will help to get rid of cellulite (which has been clinically proven) and will also help to prevent cellulite from forming. So now it is easy to get your soft body contours back again.
- Apple cider vinegar (liquid) with Centella asiatica
- The health benefits of apple cider vinegar are combined with the therapeutic properties of Centella asiatica. This old folk remedy is still used with great effect by thousands of people daily.
- Digest capsules
- If your digestive system is under-par and you struggle with constipation or you simply need to boost the health of your digestive system then this capsule is for you. Fenugreek is a general digestive tonic and psyllium is a magical bulking agent that will help proper bowel movements, without using a laxative.
- Detox capsules
- Our modern day lifestyle exposes us to many unwanted additive and our diet also places stress on the body. To help the body get rid of toxins and waste materials naturally we combined fennel, basil, celery and parsley to help the body remove these toxins. It peps-up your metabolism and helps the bladder, kidneys and liver to do their work more effectively.
- Urinary and bladder health capsules
- Using an all natural approach, our capsules will help the discomfort and burning urine sensation of urinary tract infection and help clear up foul smelling urine. We combined cranberry, dandelion, uva ursi and vitamin C in a single capsule to effectively fight bladder infections and to stop the burning sensation when urinating.
- Tri- Mushroom blend capsules
- If you need an immune system boost then have a look at our combination of maitake, reishi and shiitake mushrooms. These mushrooms have showed to be a great help in boosting the immune system – and form a good nutritional supplement support for HIV/Aids patients and people receiving chemotherapy. Any person with even a slightly compromised immune system may benefit from this supplement.
- Olive leaf extract capsules
- This natural detoxifier helps with a variety of ailments and people with chronic fatigue syndrome, infections, glandular fever (Epstein Barr), and even candida and herpes have found it of value. Olive leaf extract helps to fight bacteria, viruses, retroviruses, and protozoa and yeast strains. Apart from fighting all of these problems it also helps to improve kidney function and fights free radicals.
- Sexual supplement (previously known as Vuka Nkuzi)
- Men normally don’t admit when they have a declining libido although it is a most common problem. We combine in this supplement seven different natural ingredients to boost sexual health – without any side effects often experienced with such medication. After more than a decade, and thousands of regular Vuka Nkuzi clients we still offer this well priced supplement to boost the libido.
- Jojoba oil
- This liquid golden ester not only moisturizes and penetrates the skin but also helps to fight wrinkles and lines while promoting a clear and unblemished skin. Jojoba does not clog the pores but helps to restore skin elasticity and smoothness. It will leave the skin supple and velvety soft without any oiliness and can be used neat on the skin.
- Lavender oil
- This is the most popular essential oil and with good reason. Lavender oil is a superb product to use on burns, insect bites, sunburn, wounds and other skin complaints and irritations. Lavender oil is one of the few essential oils that can be used neat (not diluted) on the skin, and is great to help with regeneration and rejuvenation of skin cells. Emotionally it has a calming and soothing effect and is an exceptionally pleasant smelling oil.
- Tea tree oil
- Once used, you will always have a bottle of tea tree essential oil handy. It deals effectively with bacteria, fungi and viruses. Our oil surpasses the “Australian Standard” by having less than 15% 1,8 cineole and more than 30% of Terpinen-4-ol. This powerful essential oil is effective in various ways to fight infections and skin problems.
- Almond oil
- This light and deeply moisturizing oil has a softening effect on the skin and can be used on the face and body. Almond oil has excellent emollient properties and helps to balance water and moisture loss in the skin. It can be used neat on the skin and also makes an excellent massage base. | <urn:uuid:60ff5575-2517-4c1b-a933-c3e198b8464e> | CC-MAIN-2015-35 | http://www.ageless.co.za/herb-ginkgo-biloba.htm | s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440646312602.99/warc/CC-MAIN-20150827033152-00100-ip-10-171-96-226.ec2.internal.warc.gz | en | 0.925276 | 2,449 | 2.59375 | 3 |
|Legal Service India - Right to strike under Industrial Dispute Act, 1947 - Industry laws in India|
|Legal Advice | Find a lawyer | Constitutional law | Judgments | forms | PIL | family law | Cyber Law | Law Forum | Income-Tax | Consumer laws | ROC laws|
Every right comes with its own duties. Most
powerful rights have more duties attached to them. Today, in each
country of globe whether it is democratic, capitalist, socialist,
give right to strike to the workers. But this right must be the
weapon of last resort because if this right is misused, it will
create a problem in the production and financial profit of the
industry. This would ultimately affect the economy of the country.
Today, most of the countries, especially India, are dependent upon
foreign investment and under these circumstances it is necessary
that countries who seeks foreign investment must keep some
safeguard in there respective industrial laws so that there will
be no misuse of right of strike. In India, right to protest is a
fundamental right under Article 19 of the Constitution of India.
But right to strike is not a fundamental right but a legal right
and with this right statutory restriction is attached in the
industrial dispute Act, 1947.
Provision of valid strike under the Industrial Dispute Act, 1947-Section 2(q) of said Act defines the term strike, it says, "strike" means a cassation of work by a body of persons employed in any industry acting in combination, or a concerted refusal, or a refusal, under a common understanding of any number of persons who are or have been so employed to continue to work or accept employment. Whenever employees want to go on strike they have to follow the procedure provided by the Act otherwise there strike deemed to be an illegal strike. Section 22(1) of the Industrial Dispute Act, 1947 put certain prohibitions on the right to strike. It provides that no person employed in public utility service shall go on strike in breach of contract:
(a) Without giving to employer notice of strike with in six weeks before striking; or
(b) Within fourteen days of giving such notice; or
(c) Before the expiry of the date of strike specified in any such notice as aforesaid; or
(d) During the pendency of any conciliation proceedings before a conciliation officer and seven days after the conclusion of such proceedings.
It is to be noted that these provisions do not prohibit the workmen from going on strike but require them to fulfill the condition before going on strike. Further these provisions apply to a public utility service only. The Industrial Dispute Act, 1947 does not specifically mention as to who goes on strike. However, the definition of strike itself suggests that the strikers must be persons, employed in any industry to do work.
Notice of strikeNotice to strike within six weeks before striking is not necessary where there is already lockout in existence. In mineral Miner Union vs. Kudremukh Iron Ore Co. Ltd., it was held that the provisions of section 22 are mandatory and the date on which the workmen proposed to go on strike should be specified in the notice. If meanwhile the date of strike specified in the notice of strike expires, workmen have to give fresh notice. It may be noted that if a lock out is already in existence and employees want to resort to strike, it is not necessary to give notice as is otherwise required. In Sadual textile Mills v. Their workmen certain workmen struck work as a protest against the lay-off and the transfer of some workmen from one shift to another without giving four days notice as required by standing order 23. On these grounds a question arose whether the strike was justified. The industrial tribunal answered in affirmative. Against this a writ petition was preferred in the High Court of Rajasthen. Reversing the decision of the Tribunal Justice Wanchoo observed:
" ....We are of opinion that what is generally known as a lightning strike like this take place without notice..... And each worker striking ......(is) guilty of misconduct under the standing orders ........and liable to be summarily dismissed.....(as)..... the strike cannot be justified at all. "
General prohibition of strike-
The provisions of section 23 are general in nature. It imposes general restrications on declaring strike in breach of contract in the both public as well as non- public utility services in the following circumstances mainly: -
(a) During the pendency of conciliation proceedings before a board and till the expiry of 7 days after the conclusion of such proceedings;
(b) During the pendency and 2 month's after the conclusion of proceedings before a Labour court, Tribunal or National Tribunal;
(c) During the pendency and 2 months after the conclusion of arbitrator, when a notification has been issued under sub- section 3 (a) of section 10 A;
(d) During any period in which a settlement or award is in operation in respect of any of the matter covered by the settlement or award.
The principal object of this section seems to ensure a peaceful atmosphere to enable a conciliation or adjudication or arbitration proceeding to go on smoothly. This section because of its general nature of prohibition covers all strikes irrespective of the subject matter of the dispute pending before the authorities. It is noteworthy that a conciliation proceedings before a conciliation officer is no bar to strike under section 23.
In the Ballarpur Collieries Co. v. H. Merchant it was held that where in a pending reference neither the employer nor the workmen were taking any part, it was held that section 23 has no application to the strike declared during the pendency of such reference.
Illegal StrikeSection 24 provides that a strike in contravention of section 22 and 23is illegal. This section is reproduced below:
(1) A strike or a lockout shall be illegal if,
(i) It is commenced or declared in contravention of section 22 or section 23; or
(ii) It is continued on contravention of an order made under sub section (3) of section 10 or sub section (4-A) of section 10-A.
(2) Where a strike or lockout in pursuance of an industrial dispute has already commenced and is in existence all the time of the reference of the dispute to a board, an arbitrator, a Labour court, Tribunal or National Tribunal, the continuance of such strike or lockout shall not be deemed to be illegal;, provided that such strike or lockout was not at its commencement in contravention of the provision of this Act or the continuance thereof was not prohibited under sub section (3) of section 10 or sub section (4-A) of 10-A.
(3) A strike declared in the consequence of an illegal lockout shall not be deemed to be illegal.
Consequence of illegal Strike-Dismissal of workmen-
In M/S Burn & Co. Ltd. V, Their Workmen , it was laid down that mere participation in the strike would not justify suspension or dismissal of workmen. Where the strike was illegal the Supreme Court held that in case of illegal strike the only question of practical importance would be the quantum or kind of punishment. To decide the quantum of punishment a clear distinction has to be made between violent strikers and peaceful strikers.
In Punjab National Bank v. Their Employees , it was held that in the case of strike, the employer might bar the entry of the strikers within the premises by adopting effective and legitimate method in that behalf. He may call upon employees to vacate, and, on their refusal to do so, take due steps to suspend them from employment, proceed to hold proper inquires according to the standing order and pass proper orders against them subject to the relevant provisions of the Act.
In Cropton Greaves Ltd. v. Workmen, it was held that in order to entitle the workmen to wages for the period of strike, the strike should be legal and justified. A strike is legal if it does not violate any provision of the statute. It cannot be said to be unjustified unless the reasons for it are entirely perverse or unreasonable. Whether particular strike is justified or not is a question of fact, which has to be judged in the light of the fact and circumstances of each case. The use of force, coercion, violence or acts of sabotage resorted to by the workmen during the strike period which was legal and justified would disentitle them to wages for strike period.
The constitutional bench in Syndicate Bank v. K. Umesh Nayak decided the matter , the Supreme Court held that a strike may be illegal if it contravenes the provision of section 22, 23 or 24 of the Act or of any other law or the terms of employment depending upon the facts of each case. Similarly, a strike may be justified or unjustified depending upon several factors such as the service conditions of the workmen, the nature of demands of the workmen, the cause led to strike, the urgency of the cause or demands of the workmen, the reasons for not resorting to the dispute resolving machinery provided by the Act or the contract of employment or the service rules provided for a machinery to resolve the dispute, resort to strike or lock-out as a direct is prima facie unjustified. This is, particularly so when the provisions of the law or the contract or the service rules in that behalf are breached. For then, the action is also illegal.
Right of employer to compensation for loss caused by illegal strike-In Rothas Industries v. Its Union , the Supreme Court held that the remedy for illegal strike has to be sought exclusively in section 26 of the Act. The award granting compensation to employer for loss of business though illegal strike is illegal because such compensation is not a dispute within the meaning of section 2(k) of the Act.
Conclusion- The right to strike is not fundamental and absolute right in India in any special and common law, Whether any undertaking is industry or not. This is a conditional right only available after certain pre-condition are fulfilled. If the constitution maker had intended to confer on the citizen as a fundamental right the right to go on strike, they should have expressly said so. On the basis of the assumption that the right to go on strike has not expressly been conferred under the Article 19(1) (c) of the Constitution. Further his Lordship also referred to the observation in Corpus Juris Secundum that the right to strike is a relative right which can be exercised with due regard to the rights of others. Neither the common law nor the fourteenth Amendment to the federal constitution confers an absolute right to strike. it was held in the case that the strike as a weapon has to be used sparingly for redressal of urgent and pressing grievances when no means are available or when available means have failed to resolve it. It has to be resorted to, to compel the other party to the dispute to see the justness of the demand. It is not to be utilized to work hardship to the society at large so as to strengthen the bargaining power. Every dispute between an employer and employee has to take into consideration the third dimension, viz. the interest of the society as whole.
Authored by Vijendra Vikram Singh Paul and can be reached at:
Human Rights, Environment & Industrial
Disaster: Globalization has influenced trade all over
the world; companies have looked...
Laws Regulating Mergers & Acquisition In India: A merger is a combination of two companies where one corporation is completely....
Vicarious Liability Of Directors And Officers On Bouncing Of Cheques: in the light of decisions of the Apex Court, provisions relating to section 141(1)..
Strikes and Lockouts: In any Industrial endeavour co-operation of labour and capitalis quite....
Company Law Board v. Arbitral Tribunal: The legislature never intends to contradict itself....
Operational Risk Management under Basel II in the light of COSO-ERM& Maturity Model
Operational risk in today’s tech savvy organization is of great concern which emphasizesr,...
Trade Secret: Trade secret is a formula, process, device, or other business information,...
Mergers In Pharma Sector: Takeovers in the Pharmaceutical industry are the current rage ll over the world
WTO & Development In Developing Countries: The problem of our age is growing economic disparity between developed and industrialized
• Know your legal options
• Information about your legal issues
Call us at Ph no: 9650499965
We offer Copyright Registration Services
Right from your Desktop...
*Call us at Ph no: 9891244487
Legal AdviceGet legal advice from Highly qualified lawyers within 48hrs.
with complete solution.
lawyers in Delhi
lawyers in Chandigarh
lawyers in Allahabad
lawyers in Lucknow
lawyers in Jodhpur
lawyers in Jaipur
lawyers in New Delhi
lawyers in Nashik
lawyers in Mumbai
lawyers in Pune
lawyers in Nagpur
lawyers in Ahmedabad
lawyers in Surat
lawyers in Kolkata
lawyers in Janjgir
lawyers in Rajkot
lawyers in Indore
lawyers in Ludhiana
|Lawyers in India - Search by City|
lawyers in Chennai
lawyers in Bangalore
lawyers in Hyderabad
lawyers in Cochin
lawyers in Pondicherry
lawyers in Agra
lawyers in Dhaka
lawyers in Dubai
lawyers in Toronto
lawyers in Sydney
lawyers in London
lawyers in Los Angeles
lawyers in New York
About Us |
F A Q |
Divorce by mutual consent |
| Submit article |
legal Service India.com is Copyrighted under the Registrar of Copyright Act ( Govt of India) © 2000-2015
ISBN No: 978-81-928510-0-6 | <urn:uuid:7eb293f6-d4a4-43a5-9375-9a07c4af62d2> | CC-MAIN-2015-35 | http://www.legalserviceindia.com/articles/dispute.htm | s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440645264370.66/warc/CC-MAIN-20150827031424-00103-ip-10-171-96-226.ec2.internal.warc.gz | en | 0.940747 | 2,884 | 2.71875 | 3 |
Kelley Knight Heins
Click for video: John Callas, project manager for the Mars rover mission,
explains how a duplicate rover is being used at NASA's Jet Propulsion Laboratory
in Pasadena, Calif., to figure out how best to free up a rover stuck in Martian sand.
Click on the image to watch an msnbc.com video.
Take two parts diatomaceous earth, add one part clay ... and voila! You've got a blend of simulated Martian sand fine enough to get a rover stuck in.
"It's not a secret formula," John Callas, project manager for the Mars Exploration Rovers, said as he showed us around the place at NASA's Jet Propulsion Laboratory where a stand-in for the Spirit rover is mired in buckets of the stuff.
The semi-impromptu tour, arranged for me and a few other folks who attended last week's American Astronomical Society meeting in Pasadena, Calif., provided an inside look at the clean room where NASA's future Mars rover is taking shape, as well as the not-so-clean room where rovers are put to the test.
Sometimes the tests are conducted long after the real rover has left the building, and that's why an engineering model of the Mars Exploration Rover (known as the Surface System Test Bed rover, or SSTB) is up to its robotic ankles in fake Martian sand at JPL's In-Situ Instrument Laboratory. The real Spirit rover has been stuck in a sand trap and possibly hung up on a rock for more than a month, and its handlers are planning to practice techniques for dislodging it in an indoor pit filled with crushed rock.
Once the pit is set up to duplicate the scene on the west side of the Martian plateau known as Home Plate, mission managers will see which maneuvers have the best chance of freeing Spirit. For example, should the rover try following its own tracks out of the mire, or turn its wheels down the slope to take advantage of gravity? Should it go slow, or go for broke?
"The most important thing is, we don't want to make things worse," Callas explained. The rover could just spin itself in deeper, for example, and get hung up on the pyramid-shaped rock that appears to be sitting beneath its belly. That would immobilize Spirit: After more than five years of rambling, the rover would end its days at that spot.
"If you set the belly on the ground, I think it's 'game over' at that point. We don't want to do that," Callas said.
Kelley Knight Heins
A "Rover Crossing" sign is mounted near the entrance to the Jet Propulsion
Laboratory's In-Situ Instrument Laboratory, where rovers sometimes get grimy.
Last Tuesday, while we were looking in on the pit, workers were shoveling the crushed rock to build a mound with the same 12-degree slant that Spirit is experiencing on Mars. Then a box will be placed on the slope and filled with a sandy soil like the stuff in which Spirit is stuck.
This particular stuff is lighter and fluffier than your typical Martian soil. "It's almost flourlike," Callas said. Duplicating the texture wasn't easy, but after fiddling with the ingredients, engineers came up with the not-so-secret formula Callas described. They hollowed out a hole in the rock pit, set a blue tarp down into the hole, filled it with the simulated sand and then stuck two of the test rover's six wheels way down into the hole for a "shoebox test." The test showed that the mixture seemed to have about the right fluffiness.
The next task is to mix up enough of the faux Mars muck to fill NASA's sandbox and start making dry runs. Eventually, a sequence of maneuvers will be beamed up to Spirit, and the microscopic imager on the end of Spirit's robotic arm will be used to monitor the rover's progress.
It could take weeks for the plan to play out, but that's fully in line with NASA's expectations: When Spirit's twin, the Opportunity rover, was stuck in a sand dune on the other side of the Red Planet, back in 2005, breaking free took weeks as well. Spirit's current situation looks even stickier, Callas said.
At least he and his colleagues are on the right track with their formula for Martian sand. Who knows? Maybe you could even sell the stuff. When our NASA guide, Whitney Clavin, put her hand in the sand, the sensation made her suspect that Mars might not be all that bad a place after all.
"That felt great!" she said later. "That stuff felt like a beach in Thailand."
NASA / JPL-Caltech
Click for video: Full-scale models of three generations of Mars rovers
were put on display at NASA's Jet Propulsion Laboratory in May 2008: Mars
Pathfinder's Sojourner rover is front and center, with the Mars Exploration
Rover (model for Spirit and Opportunity) at left and the Mars Science
Laboratory (now named Curiosity) at right. Click on the image for a
YouTube video of the photo opportunity.
The grimy rover pit is just a few minutes' walk from the immaculate clean room where the successor to Spirit and Opportunity is taking shape. Components for the Mars Science Laboratory, which was rechristened the "Curiosity" rover last month after a naming contest, are spread across the floor of a warehouse-sized white room at JPL's Spacecraft Assembly Facility.
A model of the spacecraft's "Sky Crane" descent stage sits against one side of the room like a giant spider. A couple of saucer-shaped protective shells are in different corners, covered in shiny shrouds. Racks of rover wheels are sitting in the center of the floor, as are two partly assembled models of the rover itself. One is an engineering model, which would be used like the rover in the rocky pit. The other is the flight model, which is due for launch in 2011 and a soft Martian landing in 2012.
"The engineering model will get dirty," the mission's deputy project scientist, Joy Crisp, told us. "The flight model will stay clean."
If NASA stuck with its original plan, Curiosity would have been launched this year. But money troubles and problems with the rover's actuators led mission planners to order a two-year postponement. I asked Crisp whether she was worried that engineers would lose their edge due to the delay.
"That's not our big worry," she replied. "It's still a very big, complex, challenging thing. ... Even with the two years, we're going, 'Oh, gosh, this is hard!'"
Curiosity is designed to drill into some of Mars' biggest mysteries: What types of organic compounds are hidden in the rock and soil? What happened to the liquid water and the carbon dioxide that scientists believe was more abundant on ancient Mars? Could this seemingly dead world sustain life?
For big questions, you need a big rover, and Curiosity will be the biggest rover ever to roam the Red Planet. It measures 10 feet long (3 meters long, not including its robotic arm) and weighs 1,927 pounds (900 kilograms). In comparison, Spirit and Opportunity are both 5.2 feet (1.6 meters) long and weigh in at 384 pounds (174 kilograms) each.
As we looked down into the clean room from a viewing gallery, white-suited workers covered up some of the racks and attached equipment to an overhead hoist system. One worker pulled around what seemed to be a rather low-tech vacuum cleaner, making sure the clean room was as clean as could be. (Crisp guessed that the vacuum was equipped with special HEPA filters.)
Launch vehicle manager Arden Acord stopped by the gallery for a look, and told us that the clean-room workers were practicing the installation procedure for Curiosity's plutonium-fueled power source.
Curiosity will be drawing electricity from a radioisotope thermoelectric generator, or RTG, unlike the solar-powered rovers currently operating on Mars. (Even Spirit and Opportunity use radioisotope units as heaters, however.) Through the decades, RTGs have been used on spacecraft ranging from the Apollo lunar lander to the Mars Viking lander, the Cassini orbiter and the New Horizons mission to Pluto. The units are considered more reliable than solar arrays for providing round-the-clock power, but they tend to generate controversy as well as electricity.
Acord knows full well that RTGs need to be handled carefully, and as little as possible. The units can get as hot as 350 degrees Fahrenheit (175 degrees Celsius). "We're not putting the real one on until [Curiosity is being prepared for launch at] the Cape," he said.
The practice unit has to be installed - quickly, efficiently and safely - through a hole in the side of the spacecraft's backshell, Acord said.
"The health physics people down at the Cape have to know that you know what you're doing," he explained. "That's why you have to do it with a tape measure and a stopwatch."
NASA / JPL
Click for video: Engineers from NASA's Jet Propulsion Laboratory and Alliance
Spacesystems test the range of motion on the Curiosity rover's robotic arm joints.
The instruments have not been mounted on the arm's turret yet, but weights have
been placed on it for testing. Click on the image for a QuickTime video from NASA
that shows the testing procedure (sped up to compress the time).
Other aspects of spacecraft assembly are practiced using similar routines. Engineers recently put an engineering model of Curiosity's robotic arm through its paces, using dummy weights at the end of the arm. The real thing will bristle with tools, including a camera, a spectrometer, a drill, a brush and a tungsten carbide drill.
"This arm is truly amazing," Crisp said. "It's got 75 pounds on the end of this huge robotic arm."
Assembling everything will take months, and if the past is any guide, Curiosity and its earthbound twin will be at least partially assembled far more than once.
"They've put things together and have done some testing - and then they took it apart," Crisp said.
For updates on the Mars missions, check in with our "Return to the Red Planet" section. For more "Inside" reports, click on the links to these archived items: | <urn:uuid:b1655329-712b-4858-8bdc-34835b456ccb> | CC-MAIN-2015-35 | http://cosmiclog.nbcnews.com/_news/2009/06/15/4350061-inside-the-rover-factory?lite | s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440644064160.12/warc/CC-MAIN-20150827025424-00106-ip-10-171-96-226.ec2.internal.warc.gz | en | 0.955508 | 2,205 | 3.15625 | 3 |
Steller sea lion
Also known as the northern sea lion, sea king, Stellar sea lion, and Steller's sea lion. The Steller sea lion (scientific name: Eumetopias jubatus) is one of 16 species of marine mammals in the eared seal family (Otariidae), which includes the sea lions and fur seals. Together with the true seals and the walrus, the eared seals make up the group of marine mammals known as pinnipeds.
The Steller sea lion is found along the coasts of the North Pacific.
Eared seals differ from the true seals in having small external earflaps and hind flippers that can be turned to face forward. Together with strong front flippers, this gives them extra mobility on land, and an adult sea lion or fur seal can move extremely fast across a beach if it has to. They also use their front flippers for swimming, whereas true seals use their hind flippers.
Kingdom: Animalia (Animals)
The impressive adult males are about two and a half times the size of the females, with large necks and shoulders covered by a mane of long, coarse hair. Males average 282 centimeters in length and 566 kilograms in weight; females average only 228 centimeters and 263 kilograms.
Both sexes are a yellowish-buff color and have a coat of short, coarse hair that lacks a distinct undercoat; pups are born black and molt to the adult coat after three to four months.
Newborn pups are about 100 centimeters long, weigh 16-23 kilograms, and have a thick, dark brown pelage that molts to a lighter coat after six months. After 2-3 years their color changes again, this time to the adult color.
The species' common name honors George Wilhelm Steller, the German naturalist who first described these animals in 1741.
Steller's sea lions breed in massive, noisy rookeries common to most eared seals.
The males arrive at the haul-out sites in spring and establish territories on the limited space of the beach. Male Steller sea lions (bulls) are extremely aggressive and territorial, fighting for mates by throwing their huge bodies against one another and biting; the strongest bulls command the largest harems. Bulls usually do not feed throughout the breeding season, as they cannot afford to relinquish their hard-won positions.
Females arrive in mid-May to late June and give birth to a single pup; only four days later the female is ready to mate again and the most successful males will aggressively guard, and mate with, up to 30 females.
Sixty to sixty-seven percent of all females become pregnant each year.
Implantation of the fertilized egg is delayed for three months, giving the Steller sea lion a twelve-month gestation period.
Around nine days after giving birth, the female resumes foraging trips to sea. Pups are nursed by their mother for a minimum of three months, but a mother may continue to suckle her young for up to two or three years.
The pups are able to swim after one month, and can catch food after approximately three months.
The breeding season draws to an end in early July, but these sea lions maintain their social lifestyle and are commonly seen on shore throughout the year in groups of tens to hundreds of animals.
The age of maturity is 3-6 years for females, and 3-7 years for males, but males are unlikely to breed successfully until their eighth to tenth year due to the fierce competition at rookeries.
On average, females live 30 years. Males, subject to injury in violent encounters with other males, typically live only 18 years.
Steller's sea lions are rarely kept in captivity because of their belligerent nature; they are considered dangerous to keep in zoos and untrainable for circuses.
They acquire their food by diving. The deepest recorded dive was 120-160 yards (about 110-146 meters), but since that animal was found caught in a net, the figure could be slightly inaccurate.
Sea lions are also known for their "sun bathing" (basking); they are most often viewed by boaters and tourists as they lie in the sunshine on the rocks.
Studies are now being done on the communication of Steller sea lions. They are believed to make certain clicking noises when hunting and swimming, and can produce a low roaring sound similar to that of a lion.
Steller's sea lion is found along North Pacific coasts, in Russia, Japan, Canada, and parts of the United States. More specifically, its range extends from the Sea of Japan at 43°N, north to the Pacific rim at 66°N, and then south along the North American Pacific coast to San Miguel Island at 34°N.
It inhabits the cool waters of the northern Pacific Ocean, hauling out on beaches and rocky coastline.
Steller's sea lions are true carnivores. They feed on both commercial and non-commercial fish, including walleye pollock (Theragra chalcogramma), Atka mackerel (Pleurogrammus monopterygius), and Pacific herring (Clupea harengus), as well as on cephalopods (octopus and squid). Commercially exploited walleye pollock is an important part of their diet, and competition with humans for this favored prey is considered a major factor in the sea lions' diminishing population. Steller's sea lions are also known to prey on other pinnipeds at times, including harbor seals, bearded seals, ringed seals, spotted seals, and young Pribilof fur seals.
The Steller's sea lion is a threatened species: it was listed under the United States Endangered Species Act in 1990, and the western Alaskan population was reclassified as Endangered in April 1997.
The world population of Steller's sea lions has been undergoing a mysterious decline; since 1980, numbers have dropped from over 300,000 individuals to fewer than 100,000. Despite this well-documented and worrying decline, the causes are still debated; various hypotheses cite pollution, bycatch, parasites and disease, rookery disturbance, and predation by killer whales. Research into dietary factors has revealed that Steller's sea lions in the northeast Pacific have suffered a decrease in the diversity and energy content of their diet since the mid-1970s, corresponding to changes in the fish species available due to natural climatic changes. A diet dominated by low-energy fish (such as pollock) can cause sea lions to lose condition, and can result in reduced pregnancy rates and increased susceptibility to disease or predation. This may be one of the major causes of the population decline.
Thousands were once killed each year in the nets of Alaskan fishermen; changes in fishing techniques and gear in 1984 reduced the number killed.
An unknown number are shot each year during commercial fishing operations because the species, which eats a variety of commercial fish, is seen as a pest to the industry. Intense commercial fishing of pollock, a major food source, has contributed to the decline of the Alaskan population from 175,000 animals in 1962 to 40,000 in 1992. Sea lions also become entangled in plastic trash, which usually leads to death, and the species is hunted on a small scale for subsistence and for trade.
The Commonwealth of Independent States (CIS) is proposing to add the species to the "red" species list.
The United States National Marine Fisheries Service (NMFS, part of NOAA) has established a number of protection measures, including fishing bans around major rookeries and feeding areas, in an attempt to slow the decline in population numbers. A consortium of North Pacific universities is carrying out ongoing research into the causes of the perplexing population decline. The effort to understand the factors behind the decline in Steller's sea lion numbers may also provide a better understanding of the complex marine ecosystem and of the effects of fish-stock changes (from both natural and man-made causes) on other marine mammals and seabirds.
Classified as Endangered (EN) on the IUCN Red List 2007.
Economic Importance for Humans
Humans profit from their meat, hides, and blubber, and ecotourism benefits greatly from sea lions because people find them "cute".
They are a primary source of food for inhabitants of the Aleutian Islands. Their skins were used for boat coverings and clothing, and their whiskers for cleaning Chinese opium pipes. Since the passage of the Marine Mammal Protection Act (1972), the use of these sea lions has declined.
The species eats many commercial fish that humans also exploit.