Here's a very simple GNU Make function: it takes three arguments and makes a 'date' out of them by inserting / between the first and second and between the second and third arguments:
make_date = $1/$2/$3
The first thing to notice is that make_date is defined just like any other GNU Make macro (you must use = and not := for reasons we'll see below).
To use make_date we $(call) it like this:
today = $(call make_date,19,12,2007)
That will result in today containing 19/12/2007.
The macro uses special macros $1, $2, and $3. These macros contain the argument specified in the $(call). $1 is the first argument, $2 the second and so on.
There's no maximum number of arguments, but if you go above 10 then you need parens: you can't write $10 instead of $(10). There's also no minimum number. Arguments that are missing are just undefined and will typically be treated as an empty string.
The special argument $0 contains the name of the function. In the example above $0 is make_date.
Since functions are just macros with some special automatic macros filled in (if you use the $(origin) function on any of the argument macros ($1 etc.) you'll find that they are classed as automatic just like $@), you can use GNU Make built in functions to build up complex functions.
Here's a function that turns every / into a \ in a path:
unix_to_dos = $(subst /,\,$1)
using the $(subst) function. Don't be worried about the use of / and \ there: GNU Make does very little escaping, and a literal \ is most of the time just a \.
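For example, a quick check (the path here is purely illustrative):
DOSPATH := $(call unix_to_dos,c:/some/unix/style/path)
$(info $(DOSPATH))
prints c:\some\unix\style\path when the makefile is read.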
Some argument handling gotchas
When GNU Make is processing a $(call) it starts by splitting the argument list on commas to set $1 etc. The arguments are expanded so that $1 etc. are completely expanded before they are ever referenced (it's as if GNU Make used := to set them). This means that if an argument has a side-effect (such as calling $(shell)) then that side-effect will always occur as soon as the $(call) is executed, even if the argument was never actually used by the function.
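Here's a small sketch of that behavior (the macro name, file name, and command are purely illustrative):
take_first = $1
IGNORED := $(call take_first,kept,$(shell date > /tmp/side-effect.txt))
Even though take_first never uses its second argument, the $(shell date > /tmp/side-effect.txt) runs as soon as the $(call) is processed.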
One common problem is that if an argument contains a comma the splitting of
arguments can go wrong. For example, here's a simple function that swaps its two arguments:
swap = $2 $1
If you do $(call swap,first,argument,second) GNU Make doesn't have any way to know that the first argument was meant to be first,argument and swap ends up returning argument first instead of second first,argument.
There are two ways around this. You could simply hide the first argument inside a macro. Since GNU Make doesn't expand the arguments until after splitting, a comma inside a macro will not cause any confusion:
FIRST := first,argument
SWAPPED := $(call swap,$(FIRST),second)
The other way to do this is to create a simple macro that just contains a comma and use that instead:
c := ,
SWAPPED := $(call swap,first$cargument,second)
Or even name that macro , (a comma) and use it (with parens):
, := ,
SWAPPED := $(call swap,first$(,)argument,second)
Calling built-in functions
It's possible to use the $(call) syntax with built in GNU Make functions. For example, you could call $(warning) like this:
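A minimal example (the message text is arbitrary):
$(call warning,something went wrong)
which behaves essentially the same as writing $(warning something went wrong) directly.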
This is useful because it means that you can pass any function name as an argument to a user-defined function and $(call) it without needing to know if it's built-in or not.
This gives you the ability to create functions that act on functions. The classic functional programming map function (which applies a function to every member of a list, returning the resulting list) can be created this way.
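One standard way to write map, along the lines of the example in the GNU Make manual (the variables tested with origin below are just a demonstration):
map = $(foreach a,$(2),$(call $(1),$(a)))
ORIGINS := $(call map,origin,o map MAKE)
Here ORIGINS ends up describing where each of the variables o, map and MAKE was defined.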
Fallacies are defects in an argument other than false premises which cause an argument to be invalid, unsound or weak. Fallacies can be separated into two general groups: formal and informal. A formal fallacy is a defect which can be identified merely by looking at the logical structure of an argument rather than at any specific statements.
Formal fallacies are found only in deductive arguments with identifiable forms. One of the things which makes them appear reasonable is the fact that they look like and mimic valid logical arguments, but are in fact invalid. Here is an example:
1. All humans are mammals. (premise)
2. All cats are mammals. (premise)
3. All humans are cats. (conclusion)
Both premises in this argument are true, but the conclusion is false. The defect is a formal fallacy, and can be demonstrated by reducing the argument to its bare structure:
1. All A are C
2. All B are C
3. All A are B
It does not really matter what A, B and C stand for; we could replace them with "wines," "milk" and "beverages." The argument would still be invalid, and for the exact same reason. Sometimes, therefore, it is helpful to reduce an argument to its structure and ignore content in order to see if it is valid.
Informal fallacies are defects which can be identified only through an analysis of the actual content of the argument rather than through its structure. Here is an example:
1. Geological events produce rock. (premise)
2. Rock is a type of music. (premise)
3. Geological events produce music. (conclusion)
The premises in this argument are true, but clearly the conclusion is false. Is the defect a formal fallacy or an informal fallacy? To see if this is actually a formal fallacy, we have to break it down to its basic structure:
1. A = B
2. B = C
3. A = C
As we can see, this structure is valid, therefore the defect cannot be a formal fallacy identifiable from the structure. Therefore, the defect must be an informal fallacy identifiable from the content. In fact, when we examine the content, we find that a key term, rock, is being used with two different definitions (the technical term for this sort of fallacy is Equivocation).
Informal fallacies can work in several ways. Some distract the reader from what is really going on. Some, like in the above example, make use of vagueness or ambiguity to cause confusion. Some appeal to emotions rather than logic and reason.
Fallacies can be categorized in a number of different ways. Aristotle was the first to try to systematically describe and categorize fallacies, identifying thirteen fallacies divided into two groups. Since then many more have been described and the categorization has become more complicated. Thus, while the categorization used here should prove useful, it is not the only possible approach.
This is the print version of Geometry.
Part I: Euclidean Geometry
Chapter 1: Points, Lines, Line Segments and Rays
Points and lines are two of the most fundamental concepts in Geometry, but they are also the most difficult to define. We can describe intuitively their characteristics, but there is no set definition for them: they, along with the plane, are the undefined terms of geometry. All other geometric definitions and concepts are built on the undefined ideas of the point, line and plane. Nevertheless, we shall try to define them.
A point is an exact location in space. Points are dimensionless. That is, a point has no width, length, or height. We locate points relative to some arbitrary standard point, often called the "origin". Many physical objects suggest the idea of a point. Examples include the tip of a pencil, the corner of a cube, or a dot on a sheet of paper.
As for a line segment, we specify a line with two points. Starting with the corresponding line segment, we find other line segments that share at least two points with the original line segment. In this way we extend the original line segment indefinitely. The set of all possible line segments findable in this way constitutes a line. A line extends indefinitely in a single dimension. Its length, having no limit, is infinite. Like the line segments that constitute it, it has no width or height. You may specify a line by specifying any two points within the line. For any two points, only one line passes through both points. On the other hand, an unlimited number of lines pass through any single point.
We construct a ray similarly to the way we constructed a line, but we extend the line segment beyond only one of the original two points. A ray extends indefinitely in one direction, but ends at a single point in the other direction. That point is called the end-point of the ray. Note that a line segment has two end-points, a ray one, and a line none.
A point exists in zero dimensions. A line exists in one dimension, and we specify a line with two points. A plane exists in two dimensions. We specify a plane with three points. Any two of the points specify a line. All possible lines that pass through the third point and any point in the line make up a plane. In more obvious language, a plane is a flat surface that extends indefinitely in its two dimensions, length and width. A plane has no height.
Space exists in three dimensions. Space is made up of all possible planes, lines, and points. It extends indefinitely in all directions.
Mathematics can extend space beyond the three dimensions of length, width, and height. We then refer to "normal" space as 3-dimensional space. A 4-dimensional space consists of an infinite number of 3-dimensional spaces. Etc.
[How we label and reference points, lines, and planes.]
Chapter 2: Angles
An angle is the union of two rays with a common endpoint, called the vertex. The angles formed by vertical and horizontal lines are called right angles; lines, segments, or rays that intersect in right angles are said to be perpendicular.
Angles, for our purposes, can be measured in either degrees (from 0 to 360) or radians (from 0 to 2π). An angle's measure can be determined by measuring along the arc it maps out on a circle. In radians we consider the length of the arc of the circle mapped out by the angle. Since the circumference of a circle is 2πr, a full turn is 2π radians and a right angle is π/2 radians. In degrees, the full circle is 360 degrees, and so a right angle would be 90 degrees.
Angles are named in several ways.
- By naming the vertex of the angle (only if there is only one angle formed at that vertex; the name must be non-ambiguous)
- By naming a point on each side of the angle with the vertex in between.
- By placing a small number on the interior of the angle near the vertex.
Classification of Angles by Degree Measure
- an angle is said to be acute if it measures between 0 and 90 degrees, exclusive.
- an angle is said to be right if it measures 90 degrees.
- notice the small box placed in the corner of a right angle; unless the box is present, the angle is not assumed to be 90 degrees.
- all right angles are congruent
- an angle is said to be obtuse if it measures between 90 and 180 degrees, exclusive.
Special Pairs of Angles
- adjacent angles
- adjacent angles are angles with a common vertex and a common side.
- adjacent angles have no interior points in common.
- complementary angles
- complementary angles are two angles whose sum is 90 degrees.
- complementary angles may or may not be adjacent.
- if two complementary angles are adjacent, then their exterior sides are perpendicular.
- supplementary angles
- two angles are said to be supplementary if their sum is 180 degrees.
- supplementary angles need not be adjacent.
- if supplementary angles are adjacent, then the sides they do not share form a line.
- linear pair
- if a pair of angles is both adjacent and supplementary, they are said to form a linear pair.
- vertical angles
- angles with a common vertex whose sides form opposite rays are called vertical angles.
- vertical angles are congruent.
Side-Side-Side (SSS) (Postulate 12) If three sides of one triangle are congruent to three sides of a second triangle, then the two triangles are congruent.
Side-Angle-Side (SAS) (Postulate 13)
If two sides and the included angle of one triangle are congruent to two sides and the included angle of a second triangle, then the two triangles are congruent.
Angle-Side-Angle (ASA)
If two angles and the included side of one triangle are congruent to two angles and the included side of a second triangle, then the two triangles are congruent.
Angle-Angle-Side (AAS)
If two angles and a non-included side of one triangle are congruent to two angles and the corresponding non-included side of a second triangle, then the two triangles are congruent.
NO - Angle-Side-Side (ASS)
The "ASS" postulate does not work, unlike the other ones. A way that students can remember this is that "ass" is not a nice word, so we don't use it in geometry (since it does not work).
There are two approaches to furthering knowledge: reasoning from known ideas and synthesizing observations. In inductive reasoning you observe the world, and attempt to explain based on your observations. You start with no prior assumptions. Deductive reasoning consists of logical assertions from known facts.
What you need to know
Before one can start to understand logic, and thereby begin to prove geometric theorems, one must first know a few vocabulary words and symbols.
Conditional: a conditional is something which states that one statement implies another. A conditional contains two parts: the condition and the conclusion, where the former implies the latter. A conditional is always in the form "If statement 1, then statement 2." In most mathematical notation, a conditional is often written in the form p ⇒ q, which is read as "If p, then q" where p and q are statements.
Converse: the converse of a logical statement is when the conclusion becomes the condition and vice versa; i.e., p ⇒ q becomes q ⇒ p. For example, the converse of the statement "If someone is a woman, then they are a human" would be "If someone is a human, then they are a woman." The converse of a conditional does not necessarily have the same truth value as the original, though it sometimes does, as will become apparent later.
AND: And is a logical operator which is true only when both statements are true. For example, the statement "Diamond is the hardest substance known to man AND a diamond is a metal" is false. While the former statement is true, the latter is not. However, the statement "Diamond is the hardest substance known to man AND diamonds are made of carbon" would be true, because both parts are true.
OR: If two statements are joined together by "or," then the truth of the "or" statement is dependent upon whether one or both of the statements from which it is composed are true. For example, the statement "Tuesday is the day after Monday OR Thursday is the day after Saturday" would have a truth value of "true," because even though the latter statement is false, the former is true.
NOT: If a statement is preceded by "NOT," then it is evaluating the opposite truth value of that statement. The symbol for "NOT" is "¬". For example, if the statement p is "Elvis is dead," then ¬p would be "Elvis is not dead." The concept of "NOT" can cause some confusion when it relates to statements which contain the word "all." For example, if r is "All men have hair," then ¬r would be "Not all men have hair," which is the same as "Some men do not have hair." Do not confuse this with the much stronger statement "No men have hair." ¬p can also be written as NOT p or ~p. NOT p may also be referred to as the "negation of p."
Inverse: The inverse of a conditional says that the negation of the condition implies the negation of the conclusion. For example, the inverse of p ⇒ q is ¬p ⇒ ¬q. Like a converse, an inverse does not necessarily have the same truth value as the original conditional.
Biconditional: A biconditional is a conditional where the condition and the conclusion imply one another. A biconditional starts with the words "if and only if." For example, "If and only if p, then q" means both that p implies q and that q implies p.
Premise: A premise is a statement whose truth value is known initially. For example, if one were to say "If today is Thursday, then the cafeteria will serve burritos," and one knew what day it was, then the premise would be "Today is Thursday" or "Today is not Thursday."
⇒: The symbol which denotes a conditional. p ⇒ q is read as "if p, then q."
Iff: Iff is a shortened form of "if and only if." It is read as "if and only if."
⇔: The symbol which denotes a biconditonal. p ⇔ q is read as "If and only if p, then q."
∴: The symbol for "therefore." p ∴ q means that one knows that p is true (p is true is the premise), and has logically concluded that q must also be true.
∧: The symbol for "and."
∨: The symbol for "or."
There are a few forms of deductive logic. One of the most common deductive logical arguments is modus ponens, which states that:
- p ⇒ q
- p ∴ q
- (If p, then q)
- (p, therefore q)
An example of modus ponens:
- If I stub my toe, then I will be in pain.
- I stub my toe.
- Therefore, I am in pain.
Another form of deductive logic is modus tollens, which states the following.
- p ⇒ q
- ¬q ∴ ¬p
- (If p, then q)
- (not q, therefore not p)
Modus tollens is just as valid a form of logic as modus ponens. The following is an example which uses modus tollens.
- If today is Thursday, then the cafeteria will be serving burritos.
- The cafeteria is not serving burritos, therefore today is not Thursday.
Another form of deductive logic is known as the If-Then Transitive Property. Simply put, it means that there can be chains of logic where one thing implies another thing. The If-Then Transitive Property states:
- p ⇒ q
- (q ⇒ r) ∴ (p ⇒ r)
- (If p, then q)
- ((If q, then r), therefore (if p, then r))
For example, consider the following chain of if-then statements.
- If today is Thursday, then the cafeteria will be serving burritos.
- If the cafeteria will be serving burritos, then I will be happy.
- Therefore, if today is Thursday, then I will be happy.
Inductive reasoning is a logical argument which does not definitely prove a statement, but rather assumes it. Inductive reasoning is used often in life. Polling is an example of the use of inductive reasoning. If one were to poll one thousand people, and 300 of those people selected choice A, then one would infer that 30% of any population might also select choice A. This would be using inductive logic, because it does not definitively prove that 30% of any population would select choice A.
Because of this factor of uncertainty, inductive reasoning should be avoided when possible when attempting to prove geometric properties.
Truth tables are a way that one can display all the possibilities that a logical system may have when given certain premises. The following is a truth table with two premises (p and q), which shows the truth value of some basic logical statements. (NOTE: T = true; F = false)
|p||q||¬p||¬q||p ⇒ q||p ⇔ q||p ∧ q||p ∨ q|
|T||T||F||F||T||T||T||T|
|T||F||F||T||F||F||F||T|
|F||T||T||F||T||F||F||T|
|F||F||T||T||T||T||F||F|
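As a cross-check, here is a short Python sketch (the names are our own) that prints the same table:

    from itertools import product

    # Each connective as a small function of two boolean inputs.
    connectives = {
        "NOT p": lambda p, q: not p,
        "NOT q": lambda p, q: not q,
        "p => q": lambda p, q: (not p) or q,   # conditional
        "p <=> q": lambda p, q: p == q,        # biconditional
        "p AND q": lambda p, q: p and q,
        "p OR q": lambda p, q: p or q,
    }

    print(" | ".join(["p", "q"] + list(connectives)))
    for p, q in product([True, False], repeat=2):
        values = [p, q] + [f(p, q) for f in connectives.values()]
        print(" | ".join("T" if v else "F" for v in values))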
Unlike science which has theories, mathematics has a definite notion of proof. Mathematics applies deductive reasoning to create a series of logical statements which show that one thing implies another.
Consider a triangle, which we define as a shape with three vertices joined by three lines. We know that we can arbitrarily pick some point on a page, and make that into a vertex. We repeat that process and pick a second point. Using a ruler, we can connect these two points. We now make a third point, and using the ruler connect it to each of the other points. We have constructed a triangle.
In mathematics we formalize this process into axioms, and carefully lay out the sequence of statements to show what follows. All definitions are clearly defined. In modern mathematics, we are always working within some system where various axioms hold.
The most common form of explicit proof in high school geometry is the two-column proof, which consists of five parts: the given, the proposition, the statement column, the reason column, and the diagram (if one is given).
Example of a Two-Column Proof
Now, suppose a problem tells you to solve for x, showing all steps made to get to the answer. A proof shows how this is done:
Given: x + 1 = 2
Prove: x = 1
|Statement||Reason|
|x + 1 = 2||Given|
|x + 1 - 1 = 2 - 1||Property of subtraction|
|x = 1||Definition of subtraction|
We use "Given" as the first reason, because it is "given" to us in the problem.
Written proofs (also known as informal proofs, paragraph proofs, or 'plans for proof') are written in paragraph form. Other than this formatting difference, they are similar to two-column proofs.
Sometimes it is helpful to start with a written proof, before formalizing the proof in two-column form. If you're having trouble putting your proof into two column form, try "talking it out" in a written proof first.
Example of a Written Proof
We are given that x + 1 = 2, so if we subtract one from each side of the equation (x + 1 - 1 = 2 - 1), then we can see that x = 1 by the definition of subtraction.
A flowchart proof or more simply a flow proof is a graphical representation of a two-column proof. Each set of statement and reasons are recorded in a box and then arrows are drawn from one step to another. This method shows how different ideas come together to formulate the proof.
Postulates in geometry are very similar to axioms, self-evident truths, and beliefs in logic, political philosophy and personal decision-making. The five postulates of Euclidean Geometry define the basic rules governing the creation and extension of geometric figures with ruler and compass. Together with the five axioms (or "common notions") and twenty-three definitions at the beginning of Euclid's Elements, they form the basis for the extensive proofs given in this masterful compilation of ancient Greek geometric knowledge. They are as follows:
- A straight line may be drawn from any given point to any other.
- A straight line may be extended to any finite length.
- A circle may be described with any given point as its center and any distance as its radius.
- All right angles are congruent.
- If a straight line intersects two other straight lines, and so makes the two interior angles on one side of it together less than two right angles, then the other straight lines will meet at a point if extended far enough on the side on which the angles are less than two right angles.
Postulate 5, the so-called Parallel Postulate was the source of much annoyance, probably even to Euclid, for being so relatively prolix. Mathematicians have a peculiar sense of aesthetics that values simplicity arising from simplicity, with the long complicated proofs, equations and calculations needed for rigorous certainty done behind the scenes, and to have such a long sentence amidst such other straightforward, intuitive statements seems awkward. As a result, many mathematicians over the centuries have tried to prove the results of the Elements without using the Parallel Postulate, but to no avail. However, in the past two centuries, assorted non-Euclidean geometries have been derived based on using the first four Euclidean postulates together with various negations of the fifth.
Chapter 7. Vertical Angles
Vertical angles are a pair of angles with a common vertex whose sides form opposite rays. An extensively useful fact about vertical angles is that they are congruent. Aside from saying that any pair of vertical angles "obviously" have the same measure by inspection, we can prove this fact with some simple algebra and an observation about supplementary angles. Let two lines intersect at a point, and angles A1 and A2 be a pair of vertical angles thus formed. At the point of intersection, two other angles are also formed, and we'll call either one of them B1 without loss of generality. Since B1 and A1 are supplementary, we can say that the measure of B1 plus the measure of A1 is 180. Similarly, the measure of B1 plus the measure of A2 is 180. Thus the measure of A1 plus the measure of B1 equals the measure of A2 plus the measure of B1, by substitution. Then by subtracting the measure of B1 from each side of this equality, we have that the measure of A1 equals the measure of A2.
Parallel Lines in a Plane
Two coplanar lines are said to be parallel if they never intersect. For any given point on the first line, its distance to the second line is equal to the distance between any other point on the first line and the second line. The common notation for parallel lines is "||" (a double pipe); it is not unusual to see "//" as well. If line m is parallel to line n, we write "m || n". Lines in a plane either coincide, intersect in a point, or are parallel. Controversies surrounding the Parallel Postulate lead to the development of non-Euclidean geometries.
Parallel Lines and Special Pairs of Angles
When two (or more) parallel lines are cut by a transversal, the following angle relationships hold:
- corresponding angles are congruent
- alternate exterior angles are congruent
- same-side interior angles are supplementary
Theorems Involving Parallel Lines
- If a line in a plane is perpendicular to one of two parallel lines, it is perpendicular to the other line as well.
- If a line in a plane is parallel to one of two parallel lines, it is parallel to both parallel lines.
- If three or more parallel lines are intersected by two or more transversals, then they divide the transversals proportionally.
Congruent shapes are the same size with corresponding lengths and angles equal. In other words, they are exactly the same size and shape. They will fit on top of each other perfectly. Therefore if you know the size and shape of one you know the size and shape of the others. For example:
Each of the above shapes is congruent to each other. The only difference is in their orientation, or the way they are rotated. If you traced them onto paper and cut them out, you could see that they fit over each other exactly.
Having done this, right away we can see that, though the angles correspond in size and position, the sides do not. Therefore it is proved the triangles are not congruent.
Similar shapes are like congruent shapes in that they must be the same shape, but they don't have to be the same size. Their corresponding angles are congruent and their corresponding sides are in proportion.
Methods of Determining Congruence
Two triangles are congruent if:
- each pair of corresponding sides is congruent
- two pairs of corresponding angles are congruent and a pair of corresponding sides are congruent
- two pairs of corresponding sides and the angles included between them are congruent
Tips for Proofs
Commonly used prerequisite knowledge in determining the congruence of two triangles includes:
- by the reflexive property, a segment is congruent to itself
- vertical angles are congruent
- when parallel lines are cut by a transversal corresponding angles are congruent
- when parallel lines are cut by a transversal alternate interior angles are congruent
- midpoints and bisectors divide segments and angles into two congruent parts
For two triangles to be similar, all 3 corresponding angles must be congruent, and all three sides must be proportionally equal. Two triangles are similar if...
- Two angles of each triangle are congruent.
- The acute angle of a right triangle is congruent to the acute angle of another right triangle.
- The two triangles are congruent. Note here that congruency implies similarity.
A quadrilateral is a polygon that has four sides.
Special Types of Quadrilaterals
- A parallelogram is a quadrilateral having two pairs of parallel sides.
- A square, a rhombus, and a rectangle are all examples of parallelograms.
- A rhombus is a quadrilateral of which all four sides are the same length.
- A rectangle is a parallelogram of which all four angles are 90 degrees.
- A square is a quadrilateral of which all four sides are of the same length, and all four angles are 90 degrees.
- A square is a rectangle, a rhombus, and a parallelogram.
- A trapezoid is a quadrilateral which has two parallel sides (U.S.)
- U.S. usage: A trapezium is a quadrilateral which has no parallel sides.
- U.K. usage: A trapezium is a quadrilateral with two parallel sides (same as the U.S. trapezoid definition).
- A kite is an quadrilateral with two pairs of congruent adjacent sides.
One of the most important properties used in proofs is that the sum of the angles of the quadrilateral is always 360 degrees. This can easily be proven too:
If you draw a random quadrilateral, and one of its diagonals, you'll split it up into two triangles. Given that the sum of the angles of a triangle is 180 degrees, you can sum them up, and it'll give 360 degrees.
A parallelogram is a geometric figure with two pairs of parallel sides. Parallelograms are a special type of quadrilateral. The opposite sides are equal in length and the opposite angles are also equal. The area is equal to the product of any side and the distance between that side and the line containing the opposite side.
Properties of Parallelograms
The following properties are common to all parallelograms (parallelogram, rhombus, rectangle, square)
- both pairs of opposite sides are parallel
- both pairs of opposite sides are congruent
- both pairs of opposite angles are congruent
- the diagonals bisect each other
- A rhombus is a parallelogram with four congruent sides.
- The diagonals of a rhombus are perpendicular.
- Each diagonal of a rhombus bisects two angles of the rhombus.
- A rhombus may or may not be a square.
- A square is a parallelogram with four right angles and four congruent sides.
- A square is both a rectangle and a rhombus and inherits all of their properties.
A Trapezoid (American English) or Trapezium (British English) is a quadrilateral that has two parallel sides and two non parallel sides.
Some properties of trapezoids:
- The interior angles sum to 360° as in any quadrilateral.
- The parallel sides are unequal.
- Each of the parallel sides is called a base (b) of the trapezoid. The two angles that join one base are called 'base angles'.
- If the two non-parallel sides are equal, the trapezoid is called an isosceles trapezoid.
- In an isosceles trapezoid, each pair of base angles are equal.
- If one pair of base angles of a trapezoid are equal, the trapezoid is isosceles.
- A line segment connecting the midpoints of the non-parallel sides is called the median (m) of the trapezoid.
- The median of a trapezoid is equal to one half the sum of the bases (called b1 and b2).
- A line segment perpendicular to the bases is called an altitude (h) of the trapezoid.
The area (A) of a trapezoid is equal to the product of an altitude and the median: A = mh.
Recall though that the median is half of the sum of the bases: m = (b1 + b2)/2.
Substituting for m, we get: A = (1/2)(b1 + b2)h.
A circle is the set of all points in a plane that are equidistant from a single point; that single point is called the centre of the circle, and the distance between any point on the circle and the centre is called the radius of the circle.
a chord is an internal segment of a circle that has both of its endpoints on the circumference of the circle.
- the diameter of a circle is the largest chord possible
a secant of a circle is any line that intersects a circle in two places.
- a secant contains any chord of the circle
a tangent to a circle is a line that intersects a circle in exactly one point, called the point of tangency.
- at the point of tangency the tangent line and the radius of the circle are perpendicular
Chapter 16. Circles/Arcs
An arc is a segment of the perimeter of a given circle. The measure of an arc is measured as an angle; this could be in radians or degrees (more on radians later). The exact measure of the arc is determined by the measure of the angle formed when a line is drawn from the center of the circle to each end point. As an example, the circle below has an arc cut out of it with a measure of 30 degrees.
As I mentioned before, an arc can be measured in degrees or radians. A radian is merely a different method for measuring an angle. If we take a unit circle (which has a radius of 1 unit), then if we take an arc with a length equal to 1 unit and draw a line from each endpoint to the center of the circle, the angle formed is equal to 1 radian. This concept is displayed below; in this circle an arc has been cut off by an angle of 1 radian, and therefore the length of the arc is equal to 1 because the radius is 1.
From this definition we can say that a full revolution of the unit circle is equal to 2π radians, because the perimeter of a unit circle is equal to 2π. Another useful property of this definition that will be extremely useful to anyone who studies arcs is that the length of an arc is equal to its measure in radians multiplied by the radius of the circle (s = rθ).
Converting to and from radians is a fairly simple process. Two facts are required to do so: first, a full circle is equal to 360 degrees, and it is also equal to 2π radians. Using these two facts we can form the following formula:
360 degrees = 2π radians, thus 1 degree is equal to π/180 radians.
From here we can simply multiply by the number of degrees to convert to radians. For example, if we have 20 degrees and want to convert to radians, then we proceed as follows:
20 degrees = 20 × π/180 = π/9 radians.
The same sort of argument can be used to show the formula for getting 1 radian:
2π radians = 360 degrees, thus 1 radian is equal to 180/π degrees (approximately 57.3 degrees).
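A small Python sketch of these two conversions (the function names are our own):

    import math

    def deg_to_rad(degrees):
        # 1 degree = pi/180 radians
        return degrees * math.pi / 180

    def rad_to_deg(radians):
        # 1 radian = 180/pi degrees
        return radians * 180 / math.pi

    print(deg_to_rad(20))          # ~0.3491, i.e. pi/9
    print(rad_to_deg(math.pi / 2)) # 90.0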
A tangent is a line in the same plane as a given circle that meets that circle in exactly one point. That point is called the point of tangency. A tangent cannot pass through the interior of a circle; a line that cuts through the circle meets it in two points and is instead a secant. A secant is a line containing a chord.
A common tangent is a line tangent to two circles in the same plane. If the tangent does not intersect the line containing and connecting the centers of the circles, it is an external tangent. If it does, it is an internal tangent.
Two circles are tangent to one another if, in a plane, they are both tangent to the same line at the same point.
Sector of a circle
A sector of a circle can be thought of as a pie piece. In the picture below, a sector of the circle is shaded yellow.
To find the area of a sector, find the area of the whole circle and then multiply by the angle of the sector over 360 degrees.
A more intuitive approach can be used when the sector is half the circle. In this case the area of the sector would just be the area of the circle divided by 2.
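A short sketch of that rule in Python (the function name is ours; the angle is taken in degrees):

    import math

    def sector_area(radius, angle_in_degrees):
        # whole-circle area scaled by the fraction of the circle the sector covers
        return math.pi * radius ** 2 * (angle_in_degrees / 360)

    print(sector_area(1, 180))  # half the unit circle, ~1.5708 (= pi/2)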
- See Angle
Addition Property of Equality
For any real numbers a, b, and c, if a = b, then a + c = b + c.
A figure is an angle if and only if it is composed of two rays which share a common endpoint. Each of these rays (or segments, as the case may be) is known as a side of the angle (for example, in the illustration at right), and the common point is known as the angle's vertex (point B in the illustration). Angles are measured by the amount of rotation separating the two sides. The units for angle measure are radians and degrees. Angles may be classified by their degree measure.
- Acute Angle: an angle is an acute angle if and only if it has a measure of less than 90°
- Right Angle: an angle is a right angle if and only if it has a measure of exactly 90°
- Obtuse Angle: an angle is an obtuse angle if and only if it has a measure of greater than 90° and less than 180°
Angle Addition Postulate
If P is in the interior of an angle ∠ABC, then m∠ABP + m∠PBC = m∠ABC.
Center of a circle
Point P is the center of circle C if and only if all points in circle C are equidistant from point P and point P is contained in the same plane as circle C.
A collection of points is said to be a circle with a center at point P and a radius of some distance r if and only if it is the collection of all points which are a distance of r away from point P and are contained by a plane which contains point P.
A polygon is said to be concave if and only if it contains at least one interior angle with a measure strictly greater than 180° and strictly less than 360°.
Two angles formed by a transversal intersecting with two lines are corresponding angles if and only if one is on the inside of the two lines, the other is on the outside of the two lines, and both are on the same side of the transversal.
Corresponding Angles Postulate
If two lines cut by a transversal are parallel, then their corresponding angles are congruent.
Corresponding Parts of Congruent Triangles are Congruent Postulate
The Corresponding Parts of Congruent Triangles are Congruent Postulate (CPCTC) states:
- If ∆ABC ≅ ∆XYZ, then all parts of ∆ABC are congruent to their corresponding parts in ∆XYZ. For example:
- ∠ABC ≅ ∠XYZ
- ∠BCA ≅ ∠YZX
- ∠CAB ≅ ∠ZXY
CPCTC also applies to all other parts of the triangles, such as a triangle's altitude, median, circumcenter, et al.
A line segment is the diameter of a circle if and only if it is a chord of the circle which contains the circle's center.
- See Circle
A collection of points is a line if and only if the collection of points is perfectly straight (aligned), is infinitely long, and is infinitely thin. Between any two points on a line, there exists an infinite number of points which are also contained by the line. Lines are usually written by naming two points in the line, such as line AB (often written with a double-arrowed bar over the two letters).
A collection of points is a line segment if and only if it is perfectly straight, is infinitely thin, and has a finite length. A line segment is measured by the shortest distance between the two extreme points on the line segment, known as endpoints. Between any two points on a line segment, there exists an infinite number of points which are also contained by the line segment.
Two lines or line segments are said to be parallel if and only if the lines are contained by the same plane and have no points in common if continued infinitely.
Two planes are said to be parallel if and only if the planes have no points in common when continued infinitely.
Two lines that intersect at a 90° angle.
Given a line and a point P not on that line, there is one and only one line that goes through point P perpendicular to the given line.
An object is a plane if and only if it is a two-dimensional object which has no thickness or curvature and continues infinitely. A plane can be defined by three points. A plane may be considered to be analogous to a piece of paper.
A point is a zero-dimensional mathematical object representing a location in one or more dimensions. A point has no size; it has only location.
A polygon is a closed plane figure composed of at least 3 straight line segments. Each side has to intersect another side at their respective endpoints, and the sides that intersect must not be collinear.
The radius of a circle is the distance between any given point on the circle and the circle's center.
- See Circle
A ray is a straight collection of points which continues infinitely in one direction. The point at which the ray stops is known as the ray's endpoint. Between any two points on a ray, there exists an infinite number of points which are also contained by the ray.
The points on a line can be matched one to one with the real numbers. The real number that corresponds to a point is the point's coordinate. The distance between two points is the absolute value of the difference between the two coordinates of the two points.
- Two and Three-Dimensional Geometry and Other Geometric Figures
Perimeter and Arclength
Perimeter of Circle
The circle's perimeter (its circumference) can be calculated using the following formula:
C = 2πr
where π ≈ 3.14159 and r is the radius of the circle.
Perimeter of Polygons
The perimeter of a polygon with n sides can be calculated using the following formula:
P = s1 + s2 + … + sn (the sum of the lengths of all the sides).
Arclength of Circles
The arc length of an arc of a circle with radius r can be calculated using
s = rθ
where θ is the angle given in radians.
Arclength of Curves
If a curve in the plane has a parametric form (x(t), y(t)) for a ≤ t ≤ b, then the arc length can be calculated using the following formula:
L = ∫ from a to b of √((dx/dt)² + (dy/dt)²) dt
The derivation of the formula comes from applying the Pythagorean theorem to infinitely small triangles (an argument made precise in differential geometry).
Area of Circles
The method for finding the area of a circle is
A = πr²
where π is a constant roughly equal to 3.14159265358979 and r is the radius of the circle: a line segment drawn from any point on the circle to its center.
Area of Triangles
Three ways of calculating the area inside of a triangle are mentioned here.
If one of the sides of the triangle is chosen as a base, then a height for the triangle and that particular base can be defined. The height is a line segment perpendicular to the base or the line formed by extending the base and the endpoints of the height are the corner point not on the base and a point on the base or line extending the base. Let B = the length of the side chosen as the base. Let
h = the distance between the endpoints of the height segment which is perpendicular to the base. Then the area of the triangle is given by:
A = (1/2)Bh
This method of calculating the area is good if the value of a base and its corresponding height in the triangle is easily determined. This is particularly true if the triangle is a right triangle, and the lengths of the two sides sharing the 90° angle can be determined.
- A = √(s(s − a)(s − b)(s − c)), also known as Heron's Formula
If the lengths of all three sides of a triangle are known, Heron's formula may be used to calculate the area of the triangle. First, the semiperimeter, s, must be calculated by dividing the sum of the lengths of all three sides by 2. For a triangle having side lengths a, b, and c:
s = (a + b + c)/2
Then the triangle's area is given by:
A = √(s(s − a)(s − b)(s − c))
If the triangle is needle shaped, that is, one of the sides is very much shorter than the other two, then it can be difficult to compute the area because the precision needed is greater than that available in the calculator or computer that is used. In other words, Heron's formula is numerically unstable. Another formula that is much more stable is:
A = (1/4)√((a + (b + c)) × (c − (a − b)) × (c + (a − b)) × (a + (b − c)))
where a, b, and c have been sorted so that a ≥ b ≥ c, and the parentheses are kept exactly as written.
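A small Python sketch comparing the two computations (the function names are our own):

    import math

    def heron(a, b, c):
        s = (a + b + c) / 2
        return math.sqrt(s * (s - a) * (s - b) * (s - c))

    def heron_stable(a, b, c):
        # sort so that a >= b >= c and keep the parentheses exactly as written
        a, b, c = sorted((a, b, c), reverse=True)
        return 0.25 * math.sqrt((a + (b + c)) * (c - (a - b)) * (c + (a - b)) * (a + (b - c)))

    print(heron(3, 4, 5), heron_stable(3, 4, 5))  # both print 6.0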
In a triangle with sides of length a, b, and c and angles A, B, and C opposite them,
Area = (1/2)ab·sin C = (1/2)bc·sin A = (1/2)ca·sin B
This formula is true because, in the (1/2) × base × height formula, taking b as the base makes the corresponding height h = a·sin C. It is useful because you don't need to find the height from an angle in a separate step, and it is also used to prove the law of sines (divide all terms in the above equation by a·b·c and you'll get it directly!)
Area of Rectangles
The area calculation of a rectangle is simple and easy to understand. One of the sides is chosen as the base, with a length b. An adjacent side is then the height, with a length h, because in a rectangle the adjacent sides are perpendicular to the side chosen as the base. The rectangle's area is given by:
A = bh
Sometimes, the base length may be referred to as the length of the rectangle, l, and the height as the width of the rectangle, w. Then the area formula becomes:
A = lw
Regardless of the labels used for the sides, it is apparent that the two formulas are equivalent.
Of course, the area of a square with sides having length s would be:
A = s²
Area of Parallelograms
The area of a parallelogram can be determined using the equation for the area of a rectangle. The formula is:
A = bh
A is the area of a parallelogram. b is the base. h is the height.
The height is a perpendicular line segment that connects one of the vertices to its opposite side (the base).
Area of Rhombus
Remember that in a rhombus all sides are equal in length.
A = (1/2)d1d2
where d1 and d2 represent the diagonals.
Area of Trapezoids
The area of a trapezoid is derived from taking the arithmetic mean of its two parallel sides to form a rectangle of equal area.
A = (1/2)(b1 + b2)h
where b1 and b2 are the lengths of the two parallel bases and h is the height.
Area of Kites
The area of a kite is based on splitting the kite into four pieces by halving it along each diagonal and using these pieces to form a rectangle of equal area.
A = (1/2)ab
Where a and b are the diagonals of the kite.
Alternatively, the kite may be divided into two halves, each of which is a triangle, by the longer of its diagonals, a. The area of each triangle is thus
(1/2) × a × (b/2) = ab/4
Where b is the other (shorter) diagonal of the kite. And the total area of the kite (which is composed of two identical such triangles) is
2 × (ab/4) = ab/2
Which is the same as
A = (1/2)ab
Areas of other Quadrilaterals
The areas of other quadrilaterals are slightly more complex to calculate, but can still be found if the quadrilateral is well-defined. For example, a quadrilateral can be divided into two triangles, or some combination of triangles and rectangles. The areas of the constituent polygons can be found and added up with arithmetic.
Volume is like area expanded out into 3 dimensions. Area deals with only 2 dimensions. For volume we have to consider another dimension. Area can be thought of as how much space some drawing takes up on a flat piece of paper. Volume can be thought of as how much space an object takes up.
|Common equations for volume:|
|A cube:||V = s³||s = length of a side|
|A rectangular prism:||V = lwh||l = length, w = width, h = height|
|A cylinder (circular prism):||V = πr²h||r = radius of circular face, h = height|
|Any prism that has a constant cross sectional area along the height:||V = Ah||A = area of the base, h = height|
|A sphere:||V = (4/3)πr³||r = radius of sphere|
(which is the integral of the surface area of a sphere, 4πr², with respect to r)
|An ellipsoid:||V = (4/3)πabc||a, b, c = semi-axes of ellipsoid|
|A pyramid:||V = (1/3)Ah||A = area of the base, h = height of pyramid|
|A cone (circular-based pyramid):||V = (1/3)πr²h||r = radius of circle at base, h = distance from base to tip|
(The units of volume depend on the units of length - if the lengths are in meters, the volume will be in cubic meters, etc.)
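A brief Python sketch of a few of these formulas (the function names are our own):

    import math

    def volume_cube(s):
        return s ** 3

    def volume_cylinder(r, h):
        return math.pi * r ** 2 * h

    def volume_sphere(r):
        return 4 / 3 * math.pi * r ** 3

    def volume_cone(r, h):
        return math.pi * r ** 2 * h / 3

    print(volume_sphere(1))  # ~4.18879, i.e. 4*pi/3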
The volume of any solid whose cross sectional areas are all the same is equal to that cross sectional area times the distance the centroid (the center of gravity in a physical object) would travel through the solid.
If two solids are contained between two parallel planes and every plane parallel to these two planes has equal cross sections through these two solids, then their volumes are equal.
A Polygon is a two-dimensional figure, meaning all of the lines in the figure are contained within one plane. They are classified by the number of angles, which is also the number of sides.
One key point to note is that a polygon must have at least three sides. Normally, three- to ten-sided figures are referred to by their names (below), while a figure with eleven or more sides is called an n-gon, where n is the number of sides. Hence a forty-sided polygon is called a 40-gon.
Triangle: A polygon with three angles and sides.
Quadrilateral: A polygon with four angles and sides.
Pentagon: A polygon with five angles and sides.
Hexagon: A polygon with six angles and sides.
Heptagon: A polygon with seven angles and sides.
Octagon: A polygon with eight angles and sides.
Nonagon: A polygon with nine angles and sides.
Decagon: A polygon with ten angles and sides.
For a list of n-gon names, go to and scroll to the bottom of the page.
Polygons are also classified as convex or concave. A convex polygon has interior angles less than 180 degrees, thus all triangles are convex. If a polygon has at least one internal angle greater than 180 degrees, then it is concave. An easy way to tell if a polygon is concave is if one side can be extended and crosses the interior of the polygon. Concave polygons can be divided into several convex polygons by drawing diagonals. Regular polygons are polygons in which all sides and angles are congruent.
A triangle is a type of polygon having three sides and, therefore, three angles. The triangle is a closed figure formed from three straight line segments joined at their ends. The points at the ends can be called the corners, angles, or vertices of the triangle. Since any given triangle lies completely within a plane, triangles are often treated as two-dimensional geometric figures. As such, a triangle has no volume and, because it is a two-dimensionally closed figure, the flat part of the plane inside the triangle has an area, typically referred to as the area of the triangle. Triangles are always convex polygons.
A triangle must have at least some area, so all three corner points of a triangle cannot lie in the same line. The sum of the lengths of any two sides of a triangle is always greater than the length of the third side. The preceding statement is sometimes called the Triangle Inequality.
Certain types of triangles
Categorized by angle
The sum of the interior angles in a triangle always equals 180°. This means that no more than one of the angles can be 90° or more. All three angles can be less than 90° in the triangle; then it is called an acute triangle. One of the angles can be 90° and the other two less than 90°; then the triangle is called a right triangle. Finally, one of the angles can be more than 90° and the other two less; then the triangle is called an obtuse triangle.
Categorized by sides
If all three of the sides of a triangle are of different length, then the triangle is called a scalene triangle.
If two of the sides of a triangle are of equal length, then it is called an isosceles triangle. In an isosceles triangle, the angle between the two equal sides can be more than, equal to, or less than 90°. The other two angles are both less than 90°.
If all three sides of a triangle are of equal length, then it is called an equilateral triangle and all three of the interior angles must be 60°, making it equiangular. Because the interior angles are all equal, all equilateral triangles are also the three-sided variety of a regular polygon and they are all similar, but might not be congruent. However, polygons having four or more equal sides might not have equal interior angles, might not be regular polygons, and might not be similar or congruent. Of course, pairs of triangles which are not equilateral might be similar or congruent.
Opposite corners and sides in triangles
If one of the sides of a triangle is chosen, the interior angles of the corners at the side's endpoints can be called adjacent angles. The corner which is not one of these endpoints can be called the corner opposite to the side. The interior angle whose vertex is the opposite corner can be called the angle opposite to the side.
Likewise, if a corner or its angle is chosen, then the two sides sharing an endpoint at that corner can be called adjacent sides. The side not having this corner as one of its two endpoints can be called the side opposite to the corner.
The sides or their lengths of a triangle are typically labeled with lower case letters. The corners or their corresponding angles can be labeled with capital letters. The triangle as a whole can be labeled by a small triangle symbol and its corner points. In a triangle, the largest interior angle is opposite to longest side, and vice versa.
Any triangle can be divided into two right triangles by taking the longest side as a base, and extending a line segment from the opposite corner to a point on the base such that it is perpendicular to the base. Such a line segment would be considered the height or altitude (h) for that particular base (b). The two right triangles resulting from this division would both share the height as one of their sides. The interior angles at the meeting of the height and base would be 90° for each new right triangle. For acute triangles, any of the three sides can act as the base and have a corresponding height. For more information on right triangles, see Right Triangles and Pythagorean Theorem.
Area of Triangles
If the base and height of a triangle are known, then the area of the triangle can be calculated by the formula:
A = (1/2) × base × height
(A is the symbol for area)
Ways of calculating the area inside of a triangle are further discussed under Area.
The centroid is constructed by drawing all the medians of the triangle. All three medians intersect at the same point: this crossing point is the centroid. Centroids are always inside a triangle. They are also the centre of gravity of the triangle.
The three angle bisectors of the triangle intersect at a single point, called the incentre. Incentres are always inside the triangle. The three sides are equidistant from the incentre. The incentre is also the centre of the inscribed circle (incircle) of a triangle, or the interior circle which touches all three sides of the triangle.
The circumcentre is the intersection of all three perpendicular bisectors. Unlike the incentre, it is outside the triangle if the triangle is obtuse. Acute triangles always have circumcentres inside, while the circumcentre of a right triangle is the midpoint of the hypotenuse. The vertices of the triangle are equidistant from the circumcentre. The circumcentre is so called because it is the centre of the circumcircle, or the exterior circle which touches all three vertices of the triangle.
The orthocentre is the crossing point of the three altitudes. It is always inside acute triangles, outside obtuse triangles, and at the vertex of the right angle in a right triangle.
Please note that all of these centres of an equilateral triangle coincide at the same point.
Right Triangles and Pythagorean Theorem
Right triangles are triangles in which one of the interior angles is 90°. A 90° angle is called a right angle. Right triangles are sometimes called right-angled triangles. The other two interior angles are complementary, i.e. their sum equals 90°. Right triangles have special properties which make it easier to conceptualize and calculate their parameters in many cases.
The side opposite of the right angle is called the hypotenuse. The sides adjacent to the right angle are the legs. When using the Pythagorean Theorem, the hypotenuse or its length is often labeled with a lower case c. The legs (or their lengths) are often labeled a and b.
Either of the legs can be considered a base and the other leg would be considered the height (or altitude), because the right angle automatically makes them perpendicular. If the lengths of both the legs are known, then by setting one of these sides as the base ( b ) and the other as the height ( h ), the area of the right triangle is very easy to calculate using this formula:
This is intuitively logical because another congruent right triangle can be placed against it so that the hypotenuses are the same line segment, forming a rectangle with sides having length b and width h. The area of the rectangle is b × h, so either one of the congruent right triangles forming it has an area equal to half of that rectangle.
Right triangles can be neither equilateral, acute, nor obtuse triangles. Isosceles right triangles have two 45° angles as well as the 90° angle. All isosceles right triangles are similar since corresponding angles in isosceles right triangles are equal. If another triangle can be divided into two right triangles (see Triangle), then the area of the triangle may be able to be determined from the sum of the two constituent right triangles. Also, the Pythagorean theorem generalizes to non-right triangles as the law of cosines: c² = a² + b² − 2ab·cos C, where C is the angle opposite side c.
For history regarding the Pythagorean Theorem, see Pythagorean theorem. The Pythagorean Theorem states that:
- In a right triangle, the square of the length of the hypotenuse is equal to the sum of the squares of the lengths of the other two sides.
Let's take a right triangle as shown here and set c equal to the length of the hypotenuse and set a and b each equal to the lengths of the other two sides. Then the Pythagorean Theorem can be stated as this equation:
a² + b² = c²
Using the Pythagorean Theorem, if the lengths of any two of the sides of a right triangle are known and it is known which side is the hypotenuse, then the length of the third side can be determined from the formula.
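A tiny Python sketch of that use (the function names and example triangles are our own):

    import math

    def hypotenuse(a, b):
        # c = sqrt(a^2 + b^2)
        return math.hypot(a, b)

    def missing_leg(c, a):
        # b = sqrt(c^2 - a^2), assuming c is the hypotenuse
        return math.sqrt(c ** 2 - a ** 2)

    print(hypotenuse(3, 4))    # 5.0
    print(missing_leg(13, 5))  # 12.0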
Sine, Cosine, and Tangent for Right Triangles
Sine, Cosine, and Tangent are all functions of an angle, which are useful in right triangle calculations. For an angle designated as θ, the sine function is abbreviated as sin θ, the cosine function is abbreviated as cos θ, and the tangent function is abbreviated as tan θ. For any
angle θ, sin θ, cos θ, and tan θ are each single determined values and if θ is a known value, sin θ, cos θ, and tan θ can be looked up in a table or found with a calculator. There is a table listing these function values at the end of this section. For an angle between listed values, the sine, cosine, or tangent of that angle can be estimated from the values in the table. Conversely, if a number is known to be the sine, cosine, or tangent of a angle, then such tables could be used in reverse to find (or estimate) the value of a corresponding angle.
These three functions are related to right triangles in the following ways:
In a right triangle,
- the sine of a non-right angle equals the length of the leg opposite that angle divided by the length of the hypotenuse.
- the cosine of a non-right angle equals the length of the leg adjacent to it divided by the length of the hypotenuse.
- the tangent of a non-right angle equals the length of the leg opposite that angle divided by the length of the leg adjacent to it.
For any value of θ where cos θ ≠ 0,
tan θ = sin θ / cos θ
If one considers the diagram representing a right triangle with the two non-right angles θ1and θ2, and the side lengths a,b,c as shown here:
For the functions of angle θ1:
Analogously, for the functions of angle θ2:
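A generic sketch of these right-triangle ratios in code (the function name and the 3-4-5 example are our own choices, not tied to the labels in the diagram):

    import math

    def sin_cos_tan(opposite, adjacent, hypotenuse):
        # right-triangle definitions of the three functions for one acute angle
        return opposite / hypotenuse, adjacent / hypotenuse, opposite / adjacent

    s, c, t = sin_cos_tan(3, 4, 5)
    print(s, c, t)                     # 0.6 0.8 0.75
    print(math.degrees(math.asin(s)))  # ~36.87, the measure of that angle in degrees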
Table of sine, cosine, and tangent for angles θ from 0 to 90°
|θ in degrees||θ in radians||sin θ||cos θ||tan θ|
|0||0||0||1||0|
|30||π/6||1/2||√3/2||√3/3|
|45||π/4||√2/2||√2/2||1|
|60||π/3||√3/2||1/2||√3|
|90||π/2||1||0||undefined|
General rules for important angles:
Polyominoes are shapes made from connecting unit squares together, though certain connections are not allowed.
A domino is the shape made from attaching two unit squares so that they share one full edge. The term polyomino is based on the word domino. There is only one possible domino.
Tromino
A tromino is the shape made by attaching three unit squares edge to edge. There are only two possible trominoes, excluding rotations and reflections: the straight tromino and the L-shaped tromino.
A polyomino made from four squares is called a tetromino. There are five possible tetrominoes, excluding mirror images and rotations; counting the mirror images of the two chiral pieces separately gives seven.
A polyomino made from five squares is called a pentomino. There are twelve possible pentominoes, excluding mirror images and rotations.
Ellipses are sometimes called ovals. An ellipse has two foci. The sum of the distances from any point on the ellipse to the two foci is constant.
Area Shapes Extended into 3rd Dimension
Area Shapes Extended into 3rd Dimension Linearly to a Line or Point
Ellipsoids and Spheres
Suppose you are an astronomer in America. You observe an exciting event (say, a supernova) in the sky and would like to tell your colleagues in Europe about it. Suppose the supernova appeared at your zenith. You can't tell astronomers in Europe to look at their zenith because their zenith points in a different direction. You might tell them which constellation to look in. This might not work, though, because it might be too hard to find the supernova by searching an entire constellation. The best solution would be to give them an exact position by using a coordinate system.
On Earth, you can specify a location using latitude and longitude. This system works by measuring the angles separating the location from two great circles on Earth (namely, the equator and the prime meridian). Coordinate systems in the sky work in the same way.
The equatorial coordinate system is the most commonly used. The equatorial system defines two coordinates: right ascension and declination, based on the axis of the Earth's rotation. The declination is the angle of an object north or south of the celestial equator. Declination on the celestial sphere corresponds to latitude on the Earth. The right ascension of an object is defined by the position of a point on the celestial sphere called the vernal equinox. The further an object is east of the vernal equinox, the greater its right ascension.
A coordinate system is a system designed to establish positions with respect to given reference points. The coordinate system consists of one or more reference points, the styles of measurement (linear measurement or angular measurement) from those reference points, and the directions (or axes) in which those measurements will be taken. In astronomy, various coordinate systems are used to precisely define the locations of astronomical objects.
Latitude and longitude are used to locate a certain position on the Earth's surface. The lines of latitude (horizontal) and the lines of longitude (vertical) make up an invisible grid over the Earth. Lines of latitude are called parallels. Lines of longitude run from the exact point of the north pole to the exact point of the south pole and are called meridians. 0 degrees latitude is the circle around the Earth's middle, called the equator. 0 degrees longitude was trickier to define because there is no natural vertical "middle" of the Earth. It was finally agreed that the observatory in Greenwich, U.K. would mark 0 degrees longitude, due to its significant role in scientific discovery and in developing the system of latitude and longitude. 0 degrees longitude is called the prime meridian.
Latitude and longitude are measured in degrees. One degree of latitude is about 69 miles. There are sixty minutes (') in a degree and sixty seconds (") in a minute. These finer units make positions from GPS (Global Positioning System) receivers much more exact.
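A small Python sketch (the function name and the sample reading are illustrative) showing how degrees, minutes, and seconds combine into decimal degrees, and how small these units are on the ground:

    def dms_to_decimal(degrees, minutes, seconds):
        # Convert a degrees/minutes/seconds reading into decimal degrees.
        return degrees + minutes / 60 + seconds / 3600

    # Example: 51 degrees 28' 38" North (an illustrative latitude reading)
    latitude = dms_to_decimal(51, 28, 38)
    print(round(latitude, 4))        # about 51.4772 degrees

    # One degree of latitude spans roughly 69 miles, so one second spans
    print(69 / 3600)                 # about 0.019 miles, i.e. roughly 100 feet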
There are a few main lines of latitude: the Arctic Circle, the Antarctic Circle, the Tropic of Cancer, and the Tropic of Capricorn. The Antarctic Circle is 66.5 degrees south of the equator and separates the southern temperate zone from the Antarctic zone. The Arctic Circle is its exact mirror in the north. The Tropic of Cancer separates the tropics from the temperate zone; it is 23.5 degrees north of the equator and is mirrored in the south by the Tropic of Capricorn.
Horizontal coordinate system
One of the simplest ways of placing a star on the night sky is the coordinate system based on altitude or azimuth, thus called the Alt-Az or horizontal coordinate system. The reference circles for this system are the horizon and the celestial meridian, both of which may be most easily graphed for a given location using the celestial sphere.
In simplest terms, the altitude is the angle made from the position of the celestial object (e.g. star) to the point nearest it on the horizon. The azimuth is the angle from the northernmost point of the horizon (which is also its intersection with the celestial meridian) to the point on the horizon nearest the celestial object. Usually azimuth is measured eastwards from due north. So east has az=90°, south has az=180°, west has az=270° and north has az=360° (or 0°). An object's altitude and azimuth change as the earth rotates.
Equatorial coordinate system
The equatorial coordinate system is another system that uses two angles to place an object on the sky: right ascension and declination.
Ecliptic coordinate system
The ecliptic coordinate system is based on the ecliptic plane, i.e., the plane which contains our Sun and Earth's average orbit around it, which is tilted at 23°26' from the plane of Earth's equator. The great circle at which this plane intersects the celestial sphere is the ecliptic, and one of the coordinates used in the ecliptic coordinate system, the ecliptic latitude, describes how far an object is to ecliptic north or to ecliptic south of this circle. On this circle lies the point of the vernal equinox (also called the first point of Aries); ecliptic longitude is measured as the angle of an object relative to this point to ecliptic east. Ecliptic latitude is generally indicated by φ, whereas ecliptic longitude is usually indicated by λ.
Galactic coordinate system
As a member of the Milky Way Galaxy, we have a clear view of the Milky Way from Earth. Since we are inside the Milky Way, we don't see the galaxy's spiral arms, central bulge and so forth directly as we do for other galaxies. Instead, the Milky Way completely encircles us. We see the Milky Way as a band of faint starlight forming a ring around us on the celestial sphere. The disk of the galaxy forms this ring, and the bulge forms a bright patch in the ring. You can easily see the Milky Way's faint band from a dark, rural location.
Our galaxy defines another useful coordinate system — the galactic coordinate system. This system works just like the others we've discussed. It also uses two coordinates to specify the position of an object on the celestial sphere. The galactic coordinate system first defines a galactic latitude, the angle an object makes with the galactic equator. The galactic equator has been selected to run through the center of the Milky Way's band. The second coordinate is galactic longitude, which is the angular separation of the object from the galaxy's "prime meridian," the great circle that passes through the Galactic center and the galactic poles. The galactic coordinate system is useful for describing an object's position with respect to the galaxy's center. For example, if an object has high galactic latitude, you might expect it to be less obstructed by interstellar dust.
Transformations between coordinate systems
One can use the principles of spherical trigonometry as applied to triangles on the celestial sphere to derive formulas for transforming coordinates in one system to those in another. These formulas generally rely on the spherical law of cosines, known also as the cosine rule for sides. By substituting various angles on the celestial sphere for the angles in the law of cosines and by thereafter applying basic trigonometric identities, most of the formulas necessary for coordinate transformations can be found. The law of cosines is stated thus:
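For a spherical triangle with sides a, b, and c (each side measured as an angle) and with C the angle opposite side c, it reads:

    \cos c = \cos a \,\cos b + \sin a \,\sin b \,\cos C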
To transform from horizontal to equatorial coordinates, the relevant formulas are as follows:
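One common form of these relations, assuming azimuth measured eastward from due north (as described above) and writing HA = LST − RA for the hour angle, is:

    \sin(\mathrm{Dec}) = \sin(\mathrm{Alt})\sin(\mathrm{Lat}) + \cos(\mathrm{Alt})\cos(\mathrm{Lat})\cos(\mathrm{Az})

    \cos(\mathrm{HA}) = \frac{\sin(\mathrm{Alt}) - \sin(\mathrm{Lat})\sin(\mathrm{Dec})}{\cos(\mathrm{Lat})\cos(\mathrm{Dec})}, \qquad \mathrm{RA} = \mathrm{LST} - \mathrm{HA}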
where RA is the right ascension, Dec is the declination, LST is the local sidereal time, Alt is the altitude, Az is the azimuth, and Lat is the observer's latitude. Using the same symbols and formulas, one can also derive formulas to transform from equatorial to horizontal coordinates:
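With the same conventions, a common form of the reverse transformation is:

    \sin(\mathrm{Alt}) = \sin(\mathrm{Dec})\sin(\mathrm{Lat}) + \cos(\mathrm{Dec})\cos(\mathrm{Lat})\cos(\mathrm{HA})

    \cos(\mathrm{Az}) = \frac{\sin(\mathrm{Dec}) - \sin(\mathrm{Alt})\sin(\mathrm{Lat})}{\cos(\mathrm{Alt})\cos(\mathrm{Lat})}

with the azimuth taken on the western side (180° < Az < 360°) when HA > 0, i.e. when the object is west of the meridian, and on the eastern side otherwise.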
Transformation from equatorial to ecliptic coordinate systems can similarly be accomplished using the following formulas:
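Using the notation explained in the next sentence (φ for ecliptic latitude, λ for ecliptic longitude, ε for the tilt of Earth's axis), one standard form is:

    \sin\varphi = \sin(\mathrm{Dec})\cos\varepsilon - \cos(\mathrm{Dec})\sin\varepsilon\,\sin(\mathrm{RA})

    \tan\lambda = \frac{\sin(\mathrm{RA})\cos\varepsilon + \tan(\mathrm{Dec})\sin\varepsilon}{\cos(\mathrm{RA})}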
where RA is the right ascension, Dec is the declination, φ is the ecliptic latitude, λ is the ecliptic longitude, and ε is the tilt of Earth's axis relative to the ecliptic plane. Again, using the same formulas and symbols, new formulas for transforming ecliptic to equatorial coordinate systems can be found:
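The corresponding inverse relations are:

    \sin(\mathrm{Dec}) = \sin\varphi\cos\varepsilon + \cos\varphi\sin\varepsilon\,\sin\lambda

    \tan(\mathrm{RA}) = \frac{\sin\lambda\cos\varepsilon - \tan\varphi\sin\varepsilon}{\cos\lambda}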
- Traditional Geometry:
A topological space is a set X together with a collection C of subsets of X such that the empty set and X itself are contained in C, the union of any subcollection of sets in C is contained in C, and the intersection of any finite subcollection of sets in C is contained in C. The sets in C are called open sets. Their complements relative to X are called closed sets.
Given two topological spaces, X and Y, a map f from X to Y is continuous if for every open set U of Y, f⁻¹(U) is an open set of X.
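For instance, on the real line with its usual topology (open sets are unions of open intervals), the map f(x) = x² is continuous: for an open interval (a, b) with 0 < a < b, the preimage is

    f^{-1}\big((a,b)\big) = \left(-\sqrt{b},\,-\sqrt{a}\right) \cup \left(\sqrt{a},\,\sqrt{b}\right),

a union of open intervals and hence open, and the remaining cases (a ≤ 0) similarly give open preimages.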
Hyperbolic and Elliptic Geometry
There are precisely three different classes of three-dimensional constant-curvature geometry: Euclidean, hyperbolic and elliptic geometry. The three geometries are all built on the same first four axioms, but each has a unique version of the fifth axiom, also known as the parallel postulate. The 1868 Essay on an Interpretation of Non-Euclidean Geometry by Eugenio Beltrami (1835 - 1900) proved the logical consistency of the two Non-Euclidean geometries, hyperbolic and elliptic.
The Parallel Postulate
The parallel postulate is as follows for the corresponding geometries.
Euclidean geometry: Playfair's version: "Given a line l and a point P not on l, there exists a unique line m through P that is parallel to l." Euclid's version: "Suppose that a line l meets two other lines m and n so that the sum of the interior angles on one side of l is less than 180°. Then m and n intersect in a point on that side of l." These two versions are equivalent; though Playfair's may be easier to conceive, Euclid's is often useful for proofs.
Hyperbolic geometry: Given an arbitrary infinite line l and any point P not on l, there exist two or more distinct lines which pass through P and are parallel to l.
Elliptic geometry: Given an arbitrary infinite line l and any point P not on l, there does not exist a line which passes through P and is parallel to l.
Hyperbolic geometry is also known as saddle geometry or Lobachevskian geometry. It differs in many ways to Euclidean geometry, often leading to quite counter-intuitive results. Some of these remarkable consequences of this geometry's unique fifth postulate include:
1. The sum of the three interior angles in a triangle is strictly less than 180°. Moreover, the angle sums of two distinct triangles are not necessarily the same.
2. Two triangles with the same interior angles have the same area.
Models of Hyperbolic Space
The following are four of the most common models used to describe hyperbolic space.
1. The Poincaré Disc Model. Also known as the conformal disc model. In it, the hyperbolic plane is represented by the interior of a circle, and lines are represented by arcs of circles that are orthogonal to the boundary circle and by diameters of the boundary circle. Preserves hyperbolic angles.
2. The Klein Model. Also known as the Beltrami-Klein model or projective disc model. In it, the hyperbolic plane is represented by the interior of a circle, and lines are represented by chords of the circle. This model gives a misleading visual representation of the magnitude of angles.
3. The Poincaré Half-Plane Model. The hyperbolic plane is represented by one-half of the Euclidean plane, as defined by a given Euclidean line l, where l is not considered part of the hyperbolic space. Lines are represented by half-circles orthogonal to l or rays perpendicular to l. Preserves hyperbolic angles.
4. The Lorentz Model. Spheres in Lorentzian four-space. The hyperbolic plane is represented by a two-dimensional hyperboloid of revolution embedded in three-dimensional Minkowski space.
Based on this geometry's definition of the fifth axiom, what does parallel mean? The following definitions are made for this geometry. If a line l and a line m do not intersect in the hyperbolic plane, but intersect at the plane's boundary of infinity, then l and m are said to be parallel. If a line p and a line q neither intersect in the hyperbolic plane nor at the boundary at infinity, then p and q are said to be ultraparallel.
The Ultraparallel Theorem
For any two lines m and n in the hyperbolic plane such that m and n are ultraparallel, there exists a unique line l that is perpendicular to both m and n.
Elliptic geometry differs in many ways to Euclidean geometry, often leading to quite counter-intuitive results. For example, directly from this geometry's fifth axiom we have that there exist no parallel lines. Some of the other remarkable consequences of the parallel postulate include: The sum of the three interior angles in a triangle is strictly greater than 180°.
Models of Elliptic Space
Spherical geometry gives us perhaps the simplest model of elliptic geometry. Points are represented by points on the sphere, and lines are represented by great circles of the sphere.
- Euclid's First Four Postulates
- Euclid's Fifth Postulate
- Incidence Geometry
- Projective and Affine Planes (necessary?)
- Axioms of Betweenness
- Pasch and Crossbar
- Axioms of Congruence
- Continuity (necessary?)
- Hilbert Planes
- Neutral Geometry
- Modern geometry
- An Alternative Way and Alternative Geometric Means of Calculating the Area of a Circle
Geometry/An Alternative Way and Alternative Geometric Means of Calculating the Area of a Circle | http://en.m.wikibooks.org/wiki/Geometry/Print_version | 13 |
21 | The Spearman rank correlation test was developed by Charles Spearman in the early 1900s, which is why the statistic is also called Spearman's rank correlation coefficient. In statistical analysis, situations arise in which the data are not available in a numerical form suitable for correlation analysis, but the information is sufficient to rank the data as first, second, third and so on; in such situations the rank correlation method is often used and the coefficient of rank correlation is worked out. These latest developments are all covered in the Statistic Homework help, Assignment help at transtutors.com
The rank correlation coefficient is in fact a measure of association that is based on the ranks of the observations and not on the numerical values of the data. To calculate it, the actual observations are first replaced by their ranks: the highest value is given rank 1, the next highest rank 2, and so on until ranks have been assigned to all the values. If two or more values are equal, each is given the same rank, namely the average of the ranks that would have been assigned had the values all been different. The next step is to record the difference between the ranks for each pair of observations, square these differences, and sum the squares. Finally the rank correlation coefficient is worked out.
The coefficient is ρ = 1 − (6 Σd²) / (n(n² − 1)), where d is the difference between the two ranks for each observation and n denotes the number of paired observations.
The value of Spearman's rank correlation coefficient always lies between −1 and +1. Spearman's rank correlation is also known as "grade correlation". Basically it is a non-parametric measure of statistical dependence between two variables: the test assesses how well the relationship between two variables can be described using a monotonic function. All such methods are covered in Statistic Homework help, Assignment help at transtutors.com
Steps involved in Spearman’s rank correlation test:
State the null hypothesis: "There is no relationship between the two sets of data."
Rank both sets of data from highest to lowest and check for tied ranks.
Subtract the two sets of ranks to get the difference d, and square the values of d.
Add the squared values of d to get Σd². The next step is to use the formula
ρ = 1 − (6Σd²) / (n³ − n), where n is the number of ranks.
If the value of ρ is −1 there is a perfect negative correlation; if it falls between −1 and −0.5, a strong negative correlation; between −0.5 and 0, a weak negative correlation; if it is 0, no correlation; between 0 and 0.5, a weak positive correlation; between 0.5 and 1, a strong positive correlation; and if it is 1, a perfect positive correlation between the two data sets. The null hypothesis is retained when the computed ρ is close to 0 and is rejected only when ρ is far enough from 0 to be statistically significant for the given number of pairs (judged against tabulated critical values). Whenever the objective is to know whether two variables are related to each other, the correlation technique is used.
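A minimal Python sketch of the procedure (the score data are illustrative, and the helper functions assume no tied values; ties would need the average ranks described above):

    def ranks(values):
        # Rank so the highest value gets rank 1, the next highest rank 2, and so on.
        order = sorted(range(len(values)), key=lambda i: values[i], reverse=True)
        result = [0] * len(values)
        for rank, index in enumerate(order, start=1):
            result[index] = rank
        return result

    def spearman_rho(x, y):
        # Spearman's rank correlation coefficient via the Sigma d^2 formula.
        n = len(x)
        d_squared = sum((rx - ry) ** 2 for rx, ry in zip(ranks(x), ranks(y)))
        return 1 - (6 * d_squared) / (n * (n ** 2 - 1))

    maths   = [56, 75, 45, 71, 62, 64, 58, 80, 76, 61]   # illustrative scores
    physics = [66, 70, 40, 60, 65, 56, 59, 77, 67, 63]
    print(round(spearman_rho(maths, physics), 3))

For real analyses (including proper handling of ties and a p-value), scipy.stats.spearmanr computes the same coefficient.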
Our email-based homework help support provides clear and intelligent insight and explanation which help make the subject practical and pertinent for any assignment help.
Transtutors.com presents timely homework help at reasonable charges with detailed answers to your Statistics questions so that you get to understand your assignments or homework better apart from having the answers. Our tutors are remarkably qualified and have years of experience providing Spearman Rank Correlation Test homework help or assignment help. | http://www.transtutors.com/homework-help/statistics/nonparametric-tests/spearman-rank-correlation/ | 13 |
15 | Analysis of variance
Analysis of variance (ANOVA) is a collection of statistical models used to analyze the differences between group means and their associated procedures (such as "variation" among and between groups). In the ANOVA setting, the observed variance in a particular variable is partitioned into components attributable to different sources of variation. In its simplest form, ANOVA provides a statistical test of whether or not the means of several groups are equal, and therefore generalizes the t-test to more than two groups. Doing multiple two-sample t-tests would result in an increased chance of committing a type I error. For this reason, ANOVAs are useful in comparing (testing) three or more means (groups or variables) for statistical significance.
Background and terminology
ANOVA is a particular form of statistical hypothesis testing heavily used in the analysis of experimental data. A statistical hypothesis test is a method of making decisions using data. A test result (calculated from the null hypothesis and the sample) is called statistically significant if it is deemed unlikely to have occurred by chance, assuming the truth of the null hypothesis. A statistically significant result (when a probability (p-value) is less than a threshold (significance level)) justifies the rejection of the null hypothesis.
In the typical application of ANOVA, the null hypothesis is that all groups are simply random samples of the same population. This implies that all treatments have the same effect (perhaps none). Rejecting the null hypothesis implies that different treatments result in altered effects.
By construction, hypothesis testing limits the rate of Type I errors (false positives leading to false scientific claims) to a significance level. Experimenters also wish to limit Type II errors (false negatives resulting in missed scientific discoveries). The Type II error rate is a function of several things including sample size (positively correlated with experiment cost), significance level (when the standard of proof is high, the chances of overlooking a discovery are also high) and effect size (when the effect is obvious to the casual observer, Type II error rates are low).
The terminology of ANOVA is largely from the statistical design of experiments. The experimenter adjusts factors and measures responses in an attempt to determine an effect. Factors are assigned to experimental units by a combination of randomization and blocking to ensure the validity of the results. Blinding keeps the weighing impartial. Responses show a variability that is partially the result of the effect and is partially random error.
ANOVA is the synthesis of several ideas and it is used for multiple purposes. As a consequence, it is difficult to define concisely or precisely.
"Classical ANOVA for balanced data does three things at once:
- As exploratory data analysis, an ANOVA is an organization of an additive data decomposition, and its sums of squares indicate the variance of each component of the decomposition (or, equivalently, each set of terms of a linear model).
- Comparisons of mean squares, along with F-tests ... allow testing of a nested sequence of models.
- Closely related to the ANOVA is a linear model fit with coefficient estimates and standard errors."
In short, ANOVA is a statistical tool used in several ways to develop and confirm an explanation for the observed data.
- It is computationally elegant and relatively robust against violations to its assumptions.
- ANOVA provides industrial strength (multiple sample comparison) statistical analysis.
- It has been adapted to the analysis of a variety of experimental designs.
As a result: ANOVA "has long enjoyed the status of being the most used (some would say abused) statistical technique in psychological research." ANOVA "is probably the most useful technique in the field of statistical inference."
ANOVA is difficult to teach, particularly for complex experiments, with split-plot designs being notorious. In some cases the proper application of the method is best determined by problem pattern recognition followed by the consultation of a classic authoritative test.
(Condensed from the NIST Engineering Statistics handbook: Section 5.7. A Glossary of DOE Terminology.)
- Balanced design
- An experimental design where all cells (i.e. treatment combinations) have the same number of observations.
- Blocking
- A schedule for conducting treatment combinations in an experimental study such that any effects on the experimental results due to a known change in raw materials, operators, machines, etc., become concentrated in the levels of the blocking variable. The reason for blocking is to isolate a systematic effect and prevent it from obscuring the main effects. Blocking is achieved by restricting randomization.
- Design
- A set of experimental runs which allows the fit of a particular model and the estimate of effects.
- DOE
- Design of experiments. An approach to problem solving involving collection of data that will support valid, defensible, and supportable conclusions.
- Effect
- How changing the settings of a factor changes the response. The effect of a single factor is also called a main effect.
- Error
- Unexplained variation in a collection of observations. DOE's typically require understanding of both random error and lack of fit error.
- Experimental unit
- The entity to which a specific treatment combination is applied.
- Factors
- Process inputs an investigator manipulates to cause a change in the output.
- Lack-of-fit error
- Error that occurs when the analysis omits one or more important terms or factors from the process model. Including replication in a DOE allows separation of experimental error into its components: lack of fit and random (pure) error.
- Model
- Mathematical relationship which relates changes in a given response to changes in one or more factors.
- Random error
- Error that occurs due to natural variation in the process. Random error is typically assumed to be normally distributed with zero mean and a constant variance. Random error is also called experimental error.
- Randomization
- A schedule for allocating treatment material and for conducting treatment combinations in a DOE such that the conditions in one run neither depend on the conditions of the previous run nor predict the conditions in the subsequent runs.[nb 1]
- Replication
- Performing the same treatment combination more than once. Including replication allows an estimate of the random error independent of any lack of fit error.
- Responses
- The output(s) of a process. Sometimes called dependent variable(s).
- A treatment is a specific combination of factor levels whose effect is to be compared with other treatments.
Classes of models
There are three classes of models used in the analysis of variance, and these are outlined here.
The fixed-effects model of analysis of variance applies to situations in which the experimenter applies one or more treatments to the subjects of the experiment to see if the response variable values change. This allows the experimenter to estimate the ranges of response variable values that the treatment would generate in the population as a whole.
Random effects models are used when the treatments are not fixed. This occurs when the various factor levels are sampled from a larger population. Because the levels themselves are random variables, some assumptions and the method of contrasting the treatments (a multi-variable generalization of simple differences) differ from the fixed-effects model.
A mixed-effects model contains experimental factors of both fixed and random-effects types, with appropriately different interpretations and analysis for the two types.
Example: Teaching experiments could be performed by a university department to find a good introductory textbook, with each text considered a treatment. The fixed-effects model would compare a list of candidate texts. The random-effects model would determine whether important differences exist among a list of randomly selected texts. The mixed-effects model would compare the (fixed) incumbent texts to randomly selected alternatives.
Defining fixed and random effects has proven elusive, with competing definitions arguably leading toward a linguistic quagmire.
Assumptions of ANOVA
The analysis of variance has been studied from several approaches, the most common of which uses a linear model that relates the response to the treatments and blocks. Even when the statistical model is nonlinear, it can be approximated by a linear model for which an analysis of variance may be appropriate.
Textbook analysis using a normal distribution
- Independence of observations – this is an assumption of the model that simplifies the statistical analysis.
- Normality – the distributions of the residuals are normal.
- Equality (or "homogeneity") of variances, called homoscedasticity — the variance of data in groups should be the same.
The separate assumptions of the textbook model imply that the errors are independently, identically, and normally distributed for fixed effects models, that is, that the errors (ε's) are independent and identically distributed as normal random variables with zero mean and constant variance, ε ~ N(0, σ²).
In a randomized controlled experiment, the treatments are randomly assigned to experimental units, following the experimental protocol. This randomization is objective and declared before the experiment is carried out. The objective random-assignment is used to test the significance of the null hypothesis, following the ideas of C. S. Peirce and Ronald A. Fisher. This design-based analysis was discussed and developed by Francis J. Anscombe at Rothamsted Experimental Station and by Oscar Kempthorne at Iowa State University. Kempthorne and his students make an assumption of unit treatment additivity, which is discussed in the books of Kempthorne and David R. Cox.
In its simplest form, the assumption of unit-treatment additivity[nb 2] states that the observed response y_{i,j} from experimental unit i when receiving treatment j can be written as the sum of the unit's response y_i and the treatment effect t_j, that is y_{i,j} = y_i + t_j.
The assumption of unit-treatment additivity implies that, for every treatment j, the jth treatment has exactly the same effect t_j on every experimental unit.
The assumption of unit treatment additivity usually cannot be directly falsified, according to Cox and Kempthorne. However, many consequences of treatment-unit additivity can be falsified. For a randomized experiment, the assumption of unit-treatment additivity implies that the variance is constant for all treatments. Therefore, by contraposition, a necessary condition for unit-treatment additivity is that the variance is constant.
The use of unit treatment additivity and randomization is similar to the design-based inference that is standard in finite-population survey sampling.
Derived linear model
Kempthorne uses the randomization-distribution and the assumption of unit treatment additivity to produce a derived linear model, very similar to the textbook model discussed previously. The test statistics of this derived linear model are closely approximated by the test statistics of an appropriate normal linear model, according to approximation theorems and simulation studies. However, there are differences. For example, the randomization-based analysis results in a small but (strictly) negative correlation between the observations. In the randomization-based analysis, there is no assumption of a normal distribution and certainly no assumption of independence. On the contrary, the observations are dependent!
The randomization-based analysis has the disadvantage that its exposition involves tedious algebra and extensive time. Since the randomization-based analysis is complicated and is closely approximated by the approach using a normal linear model, most teachers emphasize the normal linear model approach. Few statisticians object to model-based analysis of balanced randomized experiments.
Statistical models for observational data
However, when applied to data from non-randomized experiments or observational studies, model-based analysis lacks the warrant of randomization. For observational data, the derivation of confidence intervals must use subjective models, as emphasized by Ronald A. Fisher and his followers. In practice, the estimates of treatment-effects from observational studies generally are often inconsistent. In practice, "statistical models" and observational data are useful for suggesting hypotheses that should be treated very cautiously by the public.
Summary of assumptions
The normal-model based ANOVA analysis assumes the independence, normality and homogeneity of the variances of the residuals. The randomization-based analysis assumes only the homogeneity of the variances of the residuals (as a consequence of unit-treatment additivity) and uses the randomization procedure of the experiment. Both these analyses require homoscedasticity, as an assumption for the normal-model analysis and as a consequence of randomization and additivity for the randomization-based analysis.
However, studies of processes that change variances rather than means (called dispersion effects) have been successfully conducted using ANOVA. There are no necessary assumptions for ANOVA in its full generality, but the F-test used for ANOVA hypothesis testing has assumptions and practical limitations which are of continuing interest.
Problems which do not satisfy the assumptions of ANOVA can often be transformed to satisfy the assumptions. The property of unit-treatment additivity is not invariant under a "change of scale", so statisticians often use transformations to achieve unit-treatment additivity. If the response variable is expected to follow a parametric family of probability distributions, then the statistician may specify (in the protocol for the experiment or observational study) that the responses be transformed to stabilize the variance. Also, a statistician may specify that logarithmic transforms be applied to the responses, which are believed to follow a multiplicative model. According to Cauchy's functional equation theorem, the logarithm is the only continuous transformation that transforms real multiplication to addition.
Characteristics of ANOVA
ANOVA is used in the analysis of comparative experiments, those in which only the difference in outcomes is of interest. The statistical significance of the experiment is determined by a ratio of two variances. This ratio is independent of several possible alterations to the experimental observations: Adding a constant to all observations does not alter significance. Multiplying all observations by a constant does not alter significance. So ANOVA statistical significance results are independent of constant bias and scaling errors as well as the units used in expressing observations. In the era of mechanical calculation it was common to subtract a constant from all observations (when equivalent to dropping leading digits) to simplify data entry. This is an example of data coding.
Logic of ANOVA
The calculations of ANOVA can be characterized as computing a number of means and variances, dividing two variances and comparing the ratio to a handbook value to determine statistical significance. Calculating a treatment effect is then trivial, "the effect of any treatment is estimated by taking the difference between the mean of the observations which receive the treatment and the general mean."
Partitioning of the sum of squares
ANOVA uses traditional standardized terminology. The definitional equation of sample variance is s² = (1/(n − 1)) Σ(yᵢ − ȳ)², where the divisor is called the degrees of freedom (DF), the summation is called the sum of squares (SS), the result is called the mean square (MS) and the squared terms are deviations from the sample mean. ANOVA estimates 3 sample variances: a total variance based on all the observation deviations from the grand mean, an error variance based on all the observation deviations from their appropriate treatment means and a treatment variance. The treatment variance is based on the deviations of treatment means from the grand mean, the result being multiplied by the number of observations in each treatment to account for the difference between the variance of observations and the variance of means. If the null hypothesis is true, all three variance estimates are equal (within sampling error).
The fundamental technique is a partitioning of the total sum of squares SS into components related to the effects used in the model. For example, for a simplified ANOVA with one type of treatment at different levels, SS_Total = SS_Error + SS_Treatments.
The number of degrees of freedom DF can be partitioned in a similar way: one of these components (that for error) specifies a chi-squared distribution which describes the associated sum of squares, while the same is true for "treatments" if there is no treatment effect.
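For a single factor with I treatment levels, n_i observations at level i and n_T observations in total (a standard presentation, with y_ij the j-th observation under treatment i, \bar{y}_i the treatment mean and \bar{y} the grand mean), the pieces of the partition can be written explicitly as:

    SS_{\text{Treatments}} = \sum_{i=1}^{I} n_i(\bar{y}_i - \bar{y})^2, \qquad SS_{\text{Error}} = \sum_{i=1}^{I}\sum_{j=1}^{n_i} (y_{ij} - \bar{y}_i)^2, \qquad SS_{\text{Total}} = SS_{\text{Treatments}} + SS_{\text{Error}}

    DF_{\text{Total}} = n_T - 1 = (I - 1) + (n_T - I)

where I − 1 is DF_Treatments and n_T − I is DF_Error. Each mean square is then the corresponding sum of squares divided by its degrees of freedom, MS = SS/DF.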
See also Lack-of-fit sum of squares.
The F-test is used for comparing the factors of the total deviation. For example, in one-way, or single-factor ANOVA, statistical significance is tested for by comparing the F test statistic
F = (variance between treatments) / (variance within treatments) = MS_Treatments / MS_Error = [SS_Treatments / (I − 1)] / [SS_Error / (n_T − I)],
where MS is mean square, I is the number of treatments and n_T is the total number of cases, to the F-distribution with I − 1 and n_T − I degrees of freedom. Using the F-distribution is a natural candidate because the test statistic is the ratio of two scaled sums of squares each of which follows a scaled chi-squared distribution.
The expected value of F is 1 + n·σ²_Treatment / σ²_Error (where n is the treatment sample size), which is 1 for no treatment effect. As values of F increase above 1 the evidence is increasingly inconsistent with the null hypothesis. Two apparent experimental methods of increasing F are increasing the sample size and reducing the error variance by tight experimental controls.
The textbook method of concluding the hypothesis test is to compare the observed value of F with the critical value of F determined from tables. The critical value of F is a function of the numerator degrees of freedom, the denominator degrees of freedom and the significance level (α). If F ≥ FCritical (Numerator DF, Denominator DF, α) then reject the null hypothesis.
The computer method calculates the probability (p-value) of a value of F greater than or equal to the observed value. The null hypothesis is rejected if this probability is less than or equal to the significance level (α). The two methods produce the same result.
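A short Python sketch of the computer method for a one-way layout (the three groups are made-up measurements; scipy.stats.f_oneway carries out the standard one-way ANOVA F-test):

    from scipy import stats

    # Three treatment groups (illustrative measurements)
    group_a = [6.1, 5.8, 6.4, 6.0, 5.9]
    group_b = [6.9, 7.1, 6.8, 7.3, 7.0]
    group_c = [6.2, 6.5, 6.3, 6.1, 6.4]

    f_statistic, p_value = stats.f_oneway(group_a, group_b, group_c)
    print(f_statistic, p_value)

    alpha = 0.05
    if p_value <= alpha:
        print("Reject the null hypothesis: at least one group mean differs.")
    else:
        print("Fail to reject the null hypothesis.")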
The ANOVA F-test is known to be nearly optimal in the sense of minimizing false negative errors for a fixed rate of false positive errors (maximizing power for a fixed significance level). To test the hypothesis that all treatments have exactly the same effect, the F-test's p-values closely approximate the permutation test's p-values: The approximation is particularly close when the design is balanced. Such permutation tests characterize tests with maximum power against all alternative hypotheses, as observed by Rosenbaum.[nb 3] The ANOVA F–test (of the null-hypothesis that all treatments have exactly the same effect) is recommended as a practical test, because of its robustness against many alternative distributions.[nb 4]
ANOVA consists of separable parts; partitioning sources of variance and hypothesis testing can be used individually. ANOVA is used to support other statistical tools. Regression is first used to fit more complex models to data, then ANOVA is used to compare models with the objective of selecting simple(r) models that adequately describe the data. "Such models could be fit without any reference to ANOVA, but ANOVA tools could then be used to make some sense of the fitted models, and to test hypotheses about batches of coefficients." "[W]e think of the analysis of variance as a way of understanding and structuring multilevel models—not as an alternative to regression but as a tool for summarizing complex high-dimensional inferences ..."
ANOVA for a single factor
The simplest experiment suitable for ANOVA analysis is the completely randomized experiment with a single factor. More complex experiments with a single factor involve constraints on randomization and include completely randomized blocks and Latin squares (and variants: Graeco-Latin squares, etc.). The more complex experiments share many of the complexities of multiple factors. A relatively complete discussion of the analysis (models, data summaries, ANOVA table) of the completely randomized experiment is available.
ANOVA for multiple factors
ANOVA generalizes to the study of the effects of multiple factors. When the experiment includes observations at all combinations of levels of each factor, it is termed factorial. Factorial experiments are more efficient than a series of single factor experiments and the efficiency grows as the number of factors increases. Consequently, factorial designs are heavily used.
The use of ANOVA to study the effects of multiple factors has a complication. In a 3-way ANOVA with factors x, y and z, the ANOVA model includes terms for the main effects (x, y, z) and terms for interactions (xy, xz, yz, xyz). All terms require hypothesis tests. The proliferation of interaction terms increases the risk that some hypothesis test will produce a false positive by chance. Fortunately, experience says that high order interactions are rare. The ability to detect interactions is a major advantage of multiple factor ANOVA. Testing one factor at a time hides interactions, but produces apparently inconsistent experimental results.
Caution is advised when encountering interactions; Test interaction terms first and expand the analysis beyond ANOVA if interactions are found. Texts vary in their recommendations regarding the continuation of the ANOVA procedure after encountering an interaction. Interactions complicate the interpretation of experimental data. Neither the calculations of significance nor the estimated treatment effects can be taken at face value. "A significant interaction will often mask the significance of main effects." Graphical methods are recommended to enhance understanding. Regression is often useful. A lengthy discussion of interactions is available in Cox (1958). Some interactions can be removed (by transformations) while others cannot.
A variety of techniques are used with multiple factor ANOVA to reduce expense. One technique used in factorial designs is to minimize replication (possibly no replication with support of analytical trickery) and to combine groups when effects are found to be statistically (or practically) insignificant. An experiment with many insignificant factors may collapse into one with a few factors supported by many replications.
Worked numeric examples
Some analysis is required in support of the design of the experiment while other analysis is performed after changes in the factors are formally found to produce statistically significant changes in the responses. Because experimentation is iterative, the results of one experiment alter plans for following experiments.
The number of experimental units
In the design of an experiment, the number of experimental units is planned to satisfy the goals of the experiment. Experimentation is often sequential.
Early experiments are often designed to provide mean-unbiased estimates of treatment effects and of experimental error. Later experiments are often designed to test a hypothesis that a treatment effect has an important magnitude; in this case, the number of experimental units is chosen so that the experiment is within budget and has adequate power, among other goals.
Reporting sample size analysis is generally required in psychology. "Provide information on sample size and the process that led to sample size decisions." The analysis, which is written in the experimental protocol before the experiment is conducted, is examined in grant applications and administrative review boards.
Besides the power analysis, there are less formal methods for selecting the number of experimental units. These include graphical methods based on limiting the probability of false negative errors, graphical methods based on an expected variation increase (above the residuals) and methods based on achieving a desired confident interval.
Power analysis is often applied in the context of ANOVA in order to assess the probability of successfully rejecting the null hypothesis if we assume a certain ANOVA design, effect size in the population, sample size and significance level. Power analysis can assist in study design by determining what sample size would be required in order to have a reasonable chance of rejecting the null hypothesis when the alternative hypothesis is true.
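One way to carry out such a power analysis is by simulation; the sketch below (the group means, spread, sample size and significance level are all illustrative assumptions) estimates the probability that a one-way ANOVA rejects the null hypothesis under a hypothesised effect:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    # Hypothesised population: three groups with these means, a common SD, and n per group
    means, sigma, n_per_group, alpha = [10.0, 10.5, 11.0], 1.5, 20, 0.05

    n_simulations = 5000
    rejections = 0
    for _ in range(n_simulations):
        groups = [rng.normal(mu, sigma, n_per_group) for mu in means]
        _, p_value = stats.f_oneway(*groups)
        if p_value <= alpha:
            rejections += 1

    print("Estimated power:", rejections / n_simulations)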
Several standardized measures of effect gauge the strength of the association between a predictor (or set of predictors) and the dependent variable. Effect-size estimates facilitate the comparison of findings in studies and across disciplines. A non-standardized measure of effect size with meaningful units may be preferred for reporting purposes.
η² (eta-squared): Eta-squared describes the ratio of variance explained in the dependent variable by a predictor while controlling for other predictors. Eta-squared is a biased estimator of the variance explained by the model in the population (it estimates only the effect size in the sample). On average it overestimates the variance explained in the population. As the sample size gets larger the amount of bias gets smaller.
Cohen (1992) suggests effect sizes for various indexes, including ƒ (where 0.1 is a small effect, 0.25 is a medium effect and 0.4 is a large effect). He also offers a conversion table (see Cohen, 1988, p. 283) for eta squared (η²) where 0.0099 constitutes a small effect, 0.0588 a medium effect and 0.1379 a large effect.
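For a single-factor design, eta-squared and Cohen's ƒ are related to the ANOVA sums of squares by:

    \eta^2 = \frac{SS_{\text{Treatments}}}{SS_{\text{Total}}}, \qquad f = \sqrt{\frac{\eta^2}{1 - \eta^2}}

so, for example, η² = 0.0588 corresponds to f = √(0.0588/0.9412) ≈ 0.25, matching the medium-effect benchmarks quoted above.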
It is always appropriate to carefully consider outliers. They have a disproportionate impact on statistical conclusions and are often the result of errors.
It is prudent to verify that the assumptions of ANOVA have been met. Residuals are examined or analyzed to confirm homoscedasticity and gross normality. Residuals should have the appearance of (zero mean normal distribution) noise when plotted as a function of anything including time and modeled data values. Trends hint at interactions among factors or among observations. One rule of thumb: "If the largest standard deviation is less than twice the smallest standard deviation, we can use methods based on the assumption of equal standard deviations and our results will still be approximately correct."
A statistically significant effect in ANOVA is often followed up with one or more different follow-up tests. This can be done in order to assess which groups are different from which other groups or to test various other focused hypotheses. Follow-up tests are often distinguished in terms of whether they are planned (a priori) or post hoc. Planned tests are determined before looking at the data and post hoc tests are performed after looking at the data.
Often one of the "treatments" is none, so the treatment group can act as a control. Dunnett's test (a modification of the t-test) tests whether each of the other treatment groups has the same mean as the control.
Post hoc tests such as Tukey's range test most commonly compare every group mean with every other group mean and typically incorporate some method of controlling for Type I errors. Comparisons, which are most commonly planned, can be either simple or compound. Simple comparisons compare one group mean with one other group mean. Compound comparisons typically compare two sets of groups means where one set has two or more groups (e.g., compare average group means of group A, B and C with group D). Comparisons can also look at tests of trend, such as linear and quadratic relationships, when the independent variable involves ordered levels.
Following ANOVA with pair-wise multiple-comparison tests has been criticized on several grounds. There are many such tests (10 in one table) and recommendations regarding their use are vague or conflicting.
Study designs and ANOVAs
There are several types of ANOVA. Many statisticians base ANOVA on the design of the experiment, especially on the protocol that specifies the random assignment of treatments to subjects; the protocol's description of the assignment mechanism should include a specification of the structure of the treatments and of any blocking. It is also common to apply ANOVA to observational data using an appropriate statistical model.
Some popular designs use the following types of ANOVA:
- One-way ANOVA is used to test for differences among two or more independent groups (means), e.g. different levels of urea application in a crop. Typically, however, the one-way ANOVA is used to test for differences among at least three groups, since the two-group case can be covered by a t-test. When there are only two means to compare, the t-test and the ANOVA F-test are equivalent; the relation between ANOVA and t is given by F = t² (a numerical check of this appears after this list).
- Factorial ANOVA is used when the experimenter wants to study the interaction effects among the treatments.
- Repeated measures ANOVA is used when the same subjects are used for each treatment (e.g., in a longitudinal study).
- Multivariate analysis of variance (MANOVA) is used when there is more than one response variable.
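The two-group equivalence noted in the first item can be checked numerically (the data are made up; scipy's ttest_ind and f_oneway are the standard two-sample t-test and one-way ANOVA):

    from scipy import stats

    group_1 = [12.1, 11.8, 12.5, 12.0, 11.9, 12.3]
    group_2 = [12.8, 13.1, 12.9, 13.4, 12.7, 13.0]

    t_statistic, t_p = stats.ttest_ind(group_1, group_2)   # equal variances assumed by default
    f_statistic, f_p = stats.f_oneway(group_1, group_2)

    print(t_statistic ** 2, f_statistic)   # these agree: F = t^2
    print(t_p, f_p)                        # and the p-values agree as well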
Balanced experiments (those with an equal sample size for each treatment) are relatively easy to interpret; Unbalanced experiments offer more complexity. For single factor (one way) ANOVA, the adjustment for unbalanced data is easy, but the unbalanced analysis lacks both robustness and power. For more complex designs the lack of balance leads to further complications. "The orthogonality property of main effects and interactions present in balanced data does not carry over to the unbalanced case. This means that the usual analysis of variance techniques do not apply. Consequently, the analysis of unbalanced factorials is much more difficult than that for balanced designs." In the general case, "The analysis of variance can also be applied to unbalanced data, but then the sums of squares, mean squares, and F-ratios will depend on the order in which the sources of variation are considered." The simplest techniques for handling unbalanced data restore balance by either throwing out data or by synthesizing missing data. More complex techniques use regression.
ANOVA is (in part) a significance test. The American Psychological Association holds the view that simply reporting significance is insufficient and that reporting confidence bounds is preferred.
ANOVA is considered to be a special case of linear regression which in turn is a special case of the general linear model. All consider the observations to be the sum of a model (fit) and a residual (error) to be minimized.
While the analysis of variance reached fruition in the 20th century, antecedents extend centuries into the past according to Stigler. These include hypothesis testing, the partitioning of sums of squares, experimental techniques and the additive model. Laplace was performing hypothesis testing in the 1770s. The development of least-squares methods by Laplace and Gauss circa 1800 provided an improved method of combining observations (over the existing practices of astronomy and geodesy). It also initiated much study of the contributions to sums of squares. Laplace soon knew how to estimate a variance from a residual (rather than a total) sum of squares. By 1827 Laplace was using least squares methods to address ANOVA problems regarding measurements of atmospheric tides. Before 1800 astronomers had isolated observational errors resulting from reaction times (the "personal equation") and had developed methods of reducing the errors. The experimental methods used in the study of the personal equation were later accepted by the emerging field of psychology which developed strong (full factorial) experimental methods to which randomization and blinding were soon added. An eloquent non-mathematical explanation of the additive effects model was available in 1885.
Sir Ronald Fisher introduced the term "variance" and proposed a formal analysis of variance in a 1918 article The Correlation Between Relatives on the Supposition of Mendelian Inheritance. His first application of the analysis of variance was published in 1921. Analysis of variance became widely known after being included in Fisher's 1925 book Statistical Methods for Research Workers.
One of the attributes of ANOVA which ensured its early popularity was computational elegance. The structure of the additive model allows solution for the additive coefficients by simple algebra rather than by matrix calculations. In the era of mechanical calculators this simplicity was critical. The determination of statistical significance also required access to tables of the F function which were supplied by early statistics texts.
- Randomization is a term used in multiple ways in this material. "Randomization has three roles in applications: as a device for eliminating biases, for example from unobserved explanatory variables and selection effects: as a basis for estimating standard errors: and as a foundation for formally exact significance tests." Cox (2006, page 192) Hinkelmann and Kempthorne use randomization both in experimental design and for statistical analysis.
- Unit-treatment additivity is simply termed additivity in most texts. Hinkelmann and Kempthorne add adjectives and distinguish between additivity in the strict and broad senses. This allows a detailed consideration of multiple error sources (treatment, state, selection, measurement and sampling) on page 161.
- Rosenbaum (2002, page 40) cites Section 5.7 (Permutation Tests), Theorem 2.3 (actually Theorem 3, page 184) of Lehmann's Testing Statistical Hypotheses (1959).
- The F-test for the comparison of variances has a mixed reputation. It is not recommended as a hypothesis test to determine whether two different samples have the same variance. It is recommended for ANOVA where two estimates of the variance of the same sample are compared. While the F-test is not generally robust against departures from normality, it has been found to be robust in the special case of ANOVA. Citations from Moore & McCabe (2003): "Analysis of variance uses F statistics, but these are not the same as the F statistic for comparing two population standard deviations." (page 554) "The F test and other procedures for inference about variances are so lacking in robustness as to be of little use in practice." (page 556) "[The ANOVA F test] is relatively insensitive to moderate nonnormality and unequal variances, especially when the sample sizes are similar." (page 763) ANOVA assumes homoscedasticity, but it is robust. The statistical test for homoscedasticity (the F-test) is not robust. Moore & McCabe recommend a rule of thumb.
- Gelman (2005, p 2)
- Howell (2002, p 320)
- Montgomery (2001, p 63)
- Gelman (2005, p 1)
- Gelman (2005, p 5)
- "Section 5.7. A Glossary of DOE Terminology". NIST Engineering Statistics handbook. NIST. Retrieved 5 April 2012.
- "Section 4.3.1 A Glossary of DOE Terminology". NIST Engineering Statistics handbook. NIST. Retrieved 14 Aug 2012.
- Montgomery (2001, Chapter 12: Experiments with random factors)
- Gelman (2005, pp 20–21)
- Snedecor, George W.; Cochran, William G. (1967). Statistical Methods (6th ed.). p. 321.
- Cochran & Cox (1992, p 48)
- Howell (2002, p 323)
Unit 2: Idealism, Realism and Pragmatism in Education
By the end of this topic, you should be able to:
1. Explain the major world views of philosophy: idealism, realism, and pragmatism; and
2. Identify the contributions of these world views of philosophy, namely idealism, realism, and pragmatism, to the field of education.
Traditionally, philosophical methods have consisted of the analysis and clarification of concepts, arguments, theories, and language. Philosophers have analyzed theories and arguments, sometimes enhancing previous arguments and sometimes raising powerful objections that lead to the revision or abandonment of theories and lines of argument (Noddings, 1998).
This topic will provide readers with some general knowledge of philosophy. Basically, there are three general or world philosophies: idealism, realism, and pragmatism. Educators confront philosophical issues on a daily basis, often not recognizing them as such. In fact, in their daily practice educators formulate goals, discuss values, and set priorities. Hence, educators who get involved in dealing with goals, values, and priorities soon realize that, in a sense, they are engaging in philosophy.
Philosophy is concerned primarily with identifying beliefs about human existence and evaluating arguments that support those beliefs. Develop a set of questions that may drive philosophical investigations.
7.1 IDEALISM
In Western culture, idealism is perhaps the oldest systematic philosophy, dating back at least to Plato in ancient Greece.
Idealism is the philosophical theory that maintains that the ultimate nature of reality is based on mind or ideas. It holds that the so-called external or real world is inseparable from mind, consciousness, or perception. Idealism is any philosophy which argues that the only things knowable are consciousness or the contents of consciousness; not anything in the outside world, if such a place actually exists.
Indeed, idealism often takes the form of arguing that the only real things are mental entities, not physical things and argues that reality is somehow dependent upon the mind rather than independent of it. Some narrow versions of idealism argue that our understanding of reality reflects the workings of our mind, first and foremost, that the properties of objects have no standing independent of minds perceiving them.
Besides this, the nature and identity of the mind upon which reality depends is one issue that has divided idealists of various sorts. Some argue that there is some objective mind outside of nature; some argue that it is simply the common power of reason or rationality; some argue that it is the collective mental faculties of society; and some focus simply on the minds of individual human beings.

In short, the main tenet of idealism is that ideas and knowledge are the truest reality. Many things in the world change, but ideas and knowledge are enduring. Idealism was often referred to as idea-ism. Idealists believe that ideas can change lives. The most important part of a person is the mind. It is to be nourished and developed.
To achieve a sufficient understanding of idealism, it is necessary to examine the works of selected outstanding philosophers usually associated with this philosophy. Idealism comes in several flavors:
(a) Platonic idealism - there exists a perfect realm of form and ideas and our world merely contains shadows of that realm; only ideas can be known or have any reality;
(b) Religious idealism - this theory argues that all knowledge originates in perceived phenomena which have been organized by categories.
(c) Modern idealism - all objects are identical with some idea and the ideal knowledge is itself the system of ideas.
How does modern idealism compare with the idealisms of earlier periods? Discuss.
7.1.1 Platonic Idealism
Plato was a Greek philosopher during the 4th century B.C.E., a student of Socrates and teacher of Aristotle. The Academy is an ancient school of philosophy founded by Plato. At the beginning, this school had a physical existence at a site just outside the walls of Athens.
According to Platonic idealism, there exists a perfect realm of form and ideas, and our world merely contains shadows of that realm. Plato was a follower of Socrates, a truly innovative thinker of his time, who did not record his ideas but shared them orally through a question and answer approach. Plato presented his ideas in two works: The Republic and Laws. He believed in the importance of searching for truth because truth was perfect and eternal. He wrote about separating the world of ideas from the world of matter. Ideas are constant, but in the world of matter, information and ideas are constantly changing because of their sensory nature. Plato's idealism therefore suggested moving from opinion to true knowledge in the form of critical discussions, or the dialectic, since at the end of the discussion the ideas and opinions begin to synthesize as they work closer to truth. Knowledge is a process of discovery that can be attained through skilful questioning. For example, a particular tree, with a branch or two missing, possibly alive, possibly dead, and with the initials of two lovers carved into its bark, is distinct from the abstract form of tree-ness. A tree is the ideal that each of us holds that allows us to identify the imperfect reflections of trees all around us.
Platonism is considered, in mathematics departments all over the world, to be the predominant philosophy of mathematics, especially regarding the foundations of mathematics. One statement of this philosophy is the thesis that mathematics is not created but discovered. The absence in this thesis of a clear distinction between mathematical and non-mathematical creation leaves open the question of whether the same claim applies to other forms of creation.
Plato believed in the importance of state involvement in education and in moving individuals from concrete to abstract thinking. He believed that individual differences exist and that outstanding people should be rewarded for their knowledge. With this thinking came the view that girls and boys should have equal opportunities for education. In Plato’s utopian society there were three social classes of education: workers, military personnel, and rulers. He believed that the ruler or king would be a good person with much wisdom because it was only ignorance that led to evil.
7.1.2 Religious Idealism: Augustine
Religion and idealism are closely attached. Judaism, the originator of Christianity, and Christianity itself were influenced by many of the Greek philosophers who held idealism strongly. Saint Augustine of Hippo, a bishop, a confessor, a doctor of the church, and one of the great thinkers of the Catholic Church, discussed the universe as being divided into the City of God and the City of Man. The City of God is the world of spirit and ideas, while the City of Man is the world of matter.
This parallels Plato’s scheme of the world of ideas and the world of matter. Religious thinkers believed that man did not create knowledge, but discovered it. Augustine, like Plato did not believe that one person could teach another. Instead, they must be led to understanding through skilful questioning. Religious idealists see individuals as creations of God who have souls and contain elements of godliness that need to be developed.
Augustine connected the philosophy of the Platonists and Neo-Platonists with Christianity. For instance, he saw the World of Ideas as the City of God.
According to Ozmon and Craver (2008), today one can see the tremendous influence religious idealism has had on American education. Early Christians implemented the idea of systematic teaching, which was used consistently throughout new and established schools. Many Greek and Jewish ideas about the nature of humanity were taught. For centuries, the Christian church educated generations with idealist philosophy. In addition, idealism and the Judeo-Christian religion were unified in European culture by the Middle Ages and thereafter.
Augustine was also very influential in the history of education where he introduced the theory of three different types of students and instructed teachers to adapt their teaching styles to each student's individual learning style. The three different kinds of students are:
(a) The student who has been well-educated by knowledgeable teachers;
(b) The student who has had no education; and
(c) The student who has had a poor education, but believes himself to be well educated.
If a student has been well educated in a wide variety of subjects, the teacher must be careful not to repeat what they have already learned, but to challenge the student with material which they do not yet know thoroughly. With the student who has had no education, the teacher must be patient, willing to repeat things until the student understands and sympathetic. Perhaps the most difficult student, however, is the one with an inferior education who believes he understands something when he does not. Augustine stressed the importance of showing this type of student the difference between having words and having understanding and of helping the student to remain humble with his acquisition of knowledge.
An additional fundamental idea which Augustine introduced is that teachers should respond positively to the questions they receive from their students, even if the student interrupts the teacher. Augustine also founded the controlled style of teaching. This teaching style ensures the student's full understanding of a concept because the teacher does not bombard the student with too much material; focuses on one topic at a time; helps the student discover what he does not yet understand, rather than moving on too quickly; anticipates questions; and helps him learn to solve difficulties and find solutions to problems. In a nutshell, Augustine claimed there are two basic styles a teacher uses when speaking to the students:
(i) The mixed style includes complex and sometimes showy language to help students see the beautiful artistry of the subject they are studying; and
(ii) The grand style is not quite as elegant as the mixed style, but is exciting and heartfelt, with the purpose of igniting the same passion in the students’ hearts.
Augustine balanced his teaching philosophy with the traditional bible-based practice of strict discipline where he agreed with using punishment as an incentive for children to learn. Augustine believed all people tend toward evil, and students must therefore be physically punished when they allow their evil desires to direct their actions.
Identify and explain the aims, content, and the methods of education based on the educational philosophy of Aristotle.
7.1.3 Modern Idealism: Rene Descartes, Immanuel Kant, and Friedrich Hegel
By the beginning of the modern period in the fifteenth and sixteenth centuries, idealism had come to be largely identified with systematization and subjectivism. Some major features of modern idealism are:
(a) Belief that reality includes, in addition to the physical universe, that which transcends it, is superior to it, and which is eternal. This ultimate reality is non-physical and is best characterized by the term mind;
(b) Physical realities draw their meaning from the transcendent realities to which they are related;
(c) That which is distinctive of human nature is mind. Mind is more than the physical entity, brain;
(d) Human life has a predetermined purpose. It is to become more like the transcendent mind;
(e) Man's purpose is fulfilled by development of the intellect and is referred to as self-realization;
(f) Ultimate reality includes absolute values;
(g) Knowledge comes through the application of reason to sense experience. In so far as the physical world reflects the transcendent world, we can determine the nature of the transcendent; and
(h) Learning is a personal process of developing the potential within. It is not conditioning or pouring in facts, but it is self-realization. Learning is a process of discovery.
The development of modern idealism was encouraged by the writings and thoughts of René Descartes, Immanuel Kant, and Georg Wilhelm Friedrich Hegel.
(i) René Descartes
Descartes, a French philosopher, was born in 1596 in the town of La Haye in Touraine and was educated by the Jesuits in classical studies, science, and mathematics. In 1614, he studied civil and canon law at Poitiers.
In 1637, he published La Géométrie, in which his combination of algebra and geometry gave birth to analytical geometry, known as Cartesian geometry. But the most important contribution Descartes made was his philosophical writings. Descartes wrote three important texts: Discourse on the Method of Rightly Conducting the Reason and Seeking Truth in the Sciences, Meditations on First Philosophy, and Principles of Philosophy. In his Discourse on the Method, he attempts to arrive at a fundamental set of principles that one can know as true without any doubt. To achieve this, he employs a method called metaphysical doubt, sometimes also referred to as methodological skepticism, in which he rejects any ideas that can be doubted and then re-establishes them in order to acquire a firm foundation for genuine knowledge. Initially, Descartes arrives at only a single principle: thought exists; thought cannot be separated from me, therefore I exist. Most famously, this is known as cogito ergo sum, which means "I think, therefore I am." Therefore, Descartes concluded, if he doubted, then something or someone must be doing the doubting; the very fact that he doubted proved his existence. Descartes decides that he can be certain that he exists because he thinks. He perceives his body through the use of the senses; however, these have previously been proven unreliable.
Hence, Descartes assumes that the only indubitable knowledge is that he is a thinking thing. Thinking is his essence as it is the only thing about him that cannot be doubted. Descartes defines thought or cogitatio as what happens in me such that I am immediately conscious of it, insofar as I am conscious of it. Thinking is thus every activity of a person of which he is immediately conscious.
(ii) Immanuel Kant
Immanuel Kant, one of the world's great philosophers, was born in the East Prussian city of Königsberg, studied at its schools and university, and worked there as a tutor and professor for more than forty years. He never travelled far from the city in his entire life.
In writing his Critique of Pure Reason and Critique of Practical Reason, Kant tried to make sense of rationalism and empiricism within the idealist philosophy. In his system, individuals could have a valid knowledge of human experience that was established by the scientific laws of nature. The Critique of Pure Reason spells out the conditions for mathematical, scientific, and metaphysical knowledge in its Transcendental Aesthetic, Transcendental Analytic, and Transcendental Dialectic. Carefully distinguishing judgments as analytic or synthetic and as a priori or a posteriori, Kant held that the most interesting and useful varieties of human knowledge rely upon synthetic a priori judgments, which are, in turn, possible only when the mind determines the conditions of its own experience. Thus, it is we who impose the forms of space and time upon all possible sensation in mathematics, and it is we who render all experience coherent as scientific knowledge governed by traditional notions of substance and causality by applying the pure concepts of the understanding to all possible experience. However, regulative principles of this sort hold only for the world as we know it, and since metaphysical propositions seek a truth beyond all experience, they cannot be established within the bounds of reason. In Critique of Practical Reason, Kant grounded the conception of moral autonomy upon our postulation of God, freedom, and immortality.
Kant’s philosophy of education involved some aspects of character education. He believed in the importance of treating each person as an end and not as a means. He thought that education should include training in discipline, culture, discretion, and moral training. Teaching children to think and an emphasis on duty toward self and others were also vital points in his philosophies.
Teaching a child to think is associated closely with Kant's notion of will, and the education of the will means living according to the duties flowing from the categorical imperative. Kant's idealism is based on his concentration on thought processes and on the nature of the relationship between the mind and its objects on the one hand and universal moral ideas on the other. These systematic thoughts have greatly influenced all subsequent Western philosophy, idealistic and otherwise.
(iii) Georg Wilhelm Friedrich Hegel
Georg Wilhelm Friedrich Hegel, a German philosopher, is one of the creators of German idealism. He was born in Stuttgart in 1770.
Hegel developed a comprehensive philosophical framework, or system, to account in an integrated and developmental way for the relation of mind and nature, the subject and object of knowledge, and psychology, the state, history, art, religion, and philosophy. In particular, he developed a concept of mind or spirit that manifested itself in a set of contradictions and oppositions that it ultimately integrated and united, such as those between nature and freedom, and immanence and transcendence, without eliminating either pole or reducing it to the other. Hegel's most influential conceptions are those of speculative logic or dialectic, absolute idealism, absolute spirit, negativity, sublation, the master/slave dialectic, ethical life, and the importance of history.
Hegelianism is a collective term for schools of thought following Hegel’s philosophy which can be summed up by the saying that the rational alone is real, which means that all reality is capable of being expressed in rational categories. His goal was to reduce reality to a more synthetic unity within the system of transcendental idealism. In fact, one major feature of the Hegelian system is movement towards richer, more complex, and more complete synthesis.
Three of Hegel's most famous books are Phenomenology of Mind, Logic, and Philosophy of Right. In these books, Hegel emphasizes three major aspects: logic, nature, and spirit. Hegel maintained that if his logical system were applied accurately, one would arrive at the Absolute Idea, which is similar to Plato's unchanging ideas. However, the difference is that Hegel was sensitive to change: change, development, and movement are all central and necessary in his philosophy.
Nature was considered to be the opposite of the Absolute Ideas. Ideas and nature together form the Absolute Spirit which is manifested by history, art, religion, and philosophy. Hegel’s idealism is in the search for final Absolute Spirit. Examining any one thing required examining or referring to another thing. Hegel’s thinking is not as prominent as it once was because his system led to the glorification of the state at the expense of individuals. Hegel thought that to be truly educated an individual must pass through various stages of the cultural evolution of mankind. Additionally, he reasoned that it was possible for some individuals to know everything essential in the history of humanity.
The far-reaching influence of Hegel is due in a measure to the undoubted vastness of the scheme of philosophical synthesis which he conceived and partly realized. A philosophy which undertook to organize under the single formula of triadic development every department of knowledge, from abstract logic up to the philosophy of history, has a great deal of attractiveness to those who are metaphysically inclined. Hegel's philosophy is the highest expression of that spirit of collectivism which characterized the nineteenth century. In theology, Hegel revolutionized the methods of inquiry. The application of his notion of development to biblical criticism and to historical investigation is obvious to anyone who compares the spirit and purpose of contemporary theology with the spirit and purpose of the theological literature of the first half of the nineteenth century. In science, as well, and in literature, the substitution of the category of becoming for the category of being is a very patent fact, and is due to the influence of Hegel's method. In political economy and political science, Hegel's collectivistic conception of the "state" supplanted to a large extent the individualistic conception which was handed down from the eighteenth century to the nineteenth.
Hegel also had considerable influence on the philosophy and theory of education. He appeared to think that to be truly educated, an individual must pass through the various stages of the cultural evolution of humankind. This idea applies well to the development of science and technology. For instance, to a person who lived 300 years ago, electricity was unknown except as a natural occurrence, such as lightning. Today, by contrast, practically everyone depends on electrical power for everyday use and has a working, practical knowledge of it entirely outside the experience of a person from the past. A contemporary person can easily learn elementary facts about electricity in a relatively short time; that is, he or she can pass through or learn an extremely important phase of our cultural evolution simply due to the passing of time.
Finally, in short, in his philosophy of education Hegel believed that only mind is real and that human thought, through participation in the universal spirit, progresses toward a destined ideal by a dialectical process of resolving opposites through synthesis.
7.2 REALISM

According to Ozmon and Craver (2008), the most central thread of realism is the principle of independence. Realists believe that the study of ideas can be enhanced by the study of material things. More generally, realism is any philosophical theory that emphasizes the existence of a real world outside the mind; the term stands for the theory that there is a reality quite independent of the mind. To understand this complex philosophy, one must examine its development through the works of selected outstanding philosophers usually associated with it. Thinkers such as Aristotle, Thomas Aquinas, Francis Bacon, John Locke, Alfred North Whitehead, and Bertrand Russell have contributed much to realist ideology.
7.2.1 Aristotle Realism
Aristotle (384 - 322 B.C.E.), a great Greek philosopher, was the son of a physician and a student of Plato. Aristotle believed that the world could be understood at a fundamental level through the detailed observation and cataloguing of phenomena. As a result of this belief, Aristotle literally wrote about everything: poetics, politics, ethics, logic, physics, biology, and metaphysics, among other subjects.

Aristotle was the first person to assert that nature is understandable. It is a central claim of Aristotelian Realism that ideas, such as the idea of God or the idea of a tree, can exist without matter, but matter cannot exist without form. In order to get to form, it was necessary to study material things. As a result, Aristotle used the syllogism, which is a process of "ordering statements about reality in a logical, systematic form" (Ozmon & Craver, 2008). This systematic form includes a major premise, a minor premise, and a conclusion. For example:
All men are mortal.
Socrates is a man;
Therefore, Socrates is mortal.
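Readers who want to see the logical skeleton explicitly can write the argument in modern predicate-logic notation; this is a present-day formalization rather than Aristotle's own symbolism, and the predicate names Man and Mortal are simply labels for the two terms of the syllogism:

```latex
% Modern rendering of the syllogism quoted above (classical pattern "Barbara").
\begin{align*}
\text{Major premise:} \quad & \forall x \,\bigl(\mathrm{Man}(x) \rightarrow \mathrm{Mortal}(x)\bigr) \\
\text{Minor premise:} \quad & \mathrm{Man}(\mathrm{Socrates}) \\
\text{Conclusion:}    \quad & \mathrm{Mortal}(\mathrm{Socrates})
\end{align*}
```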
Aristotle described the relation between form and matter with the Four Causes:
(a) Material cause - the matter from which something is made;
(b) Formal cause - the design that shapes the material object;
(c) Efficient cause - the agent that produces the object; and
(d) Final cause - the direction toward which the object is tending.
Through these different forms, Aristotle demonstrated that matter was constantly in a process of change. He believed that God, the Ultimate Reality, held all creation together. Organization was very important in Aristotle's philosophy. It was his thought that human beings as rational creatures are fulfilling their purpose when they think, and that thinking is their highest characteristic.
According to Aristotle, each thing had a purpose, and education's purpose was to develop the capacity for reasoning. Proper character was formed by following the Golden Mean, the path of moderation between extremes.

The importance of education in the philosophy of Aristotle was enormous, since the individual man could learn to use his reason to arrive at virtue, happiness, and political harmony only through the process of education. For Aristotle, the purpose of education is to produce a good man. Man is not good by nature, so he must learn to control his animal activities through the use of reason. Only when man behaves by habit and reason, according to his nature as a rational being, is he capable of happiness. In short, education must aim at the development of the individual's capacity to reason.
7.2.2 Religious Realism: Thomas Aquinas
Saint Thomas Aquinas (1225 - 1274) was a priest of the Roman Catholic Church in the Dominican Order and an immensely influential philosopher and theologian, known as Doctor Angelicus and Doctor Communis. He is frequently referred to as Thomas, since Aquinas refers to his residence rather than his surname. He was the foremost classical proponent of natural theology and the father of the Thomistic school of philosophy and theology.
The philosophy of Aquinas has exerted enormous influence on subsequent Christian theology, especially the Roman Catholic Church, and extending to Western philosophy in general. He stands as a vehicle and modifier of Aristotelianism, which he merged with the thought of Augustine. Aquinas believed that for the knowledge of any truth whatsoever man needs divine help, that the intellect may be moved by God to its act. Besides, he believed that human beings have the natural capacity to know many things without special divine revelation, even though such revelation occurs from time to time. Aquinas believed that truth is known through reason - the natural revelation and faith - the supernatural revelation. Supernatural revelation has its origin in the inspiration of the Holy Spirit and is made available through the teaching of the prophets, summed up in Holy Scripture, and transmitted by the Magisterium, the sum of which is called Tradition. On the other hand, natural revelation is the truth available to all people through their human nature where certain truths all men can attain from correct human reasoning.
Thomism is the philosophical school that arose as a legacy of the work and thought of Thomas Aquinas; it is based on his Summa Theologica, meaning "summary of theology." The Summa Theologica, written from 1265 to 1274, is the most famous work of Thomas Aquinas and is arguably second only to the Bible in importance to the Roman Catholic Church. Although the book was never finished, it was intended as a manual for beginners, a compilation of all of the main theological teachings of that time. It summarizes the reasoning for almost all points of Christian theology in the West. The Summa's topics follow a cycle:
(a) the existence of God;
(b) God's creation;
(c) Man;
(d) Man's purpose;
(e) Christ;
(f) the Sacraments; and
(g) back to God.
In these works, faith and reason are harmonized into a grand theologico-philosophical system which inspired the medieval philosophical tradition known as Thomism and which has been favored by the Roman Catholic Church ever since. Aquinas made an important contribution to epistemology, recognizing the central part played by sense perception in human cognition. It is through the senses that we first become acquainted with existent, material things.
Moreover, in the Summa Theologica, Aquinas records his famous five ways which seek to prove the existence of God from the facts of change, causation, contingency, variation and purpose. These cosmological and teleological arguments can be neatly expressed in syllogistic form as below:
(i) Way 1
• The world is in motion or motus.
• All changes in the world are due to some prior cause.
• There must be a prior cause for this entire sequence of changes, that is, God.
(ii) Way 2
• The world is a sequence of events.
• Every event in the world has a cause.
• There must be a cause for the entire sequence of events, that is, God.
(iii) Way 3
• The world might not have been.
• Everything that exists in the world depends on some other thing for its existence.
• The world itself must depend upon some other thing for its existence, that is, God.
(iv) Way 4
• There are degrees of perfection in the world.
• Things are more perfect the closer they approach the maximum.
• There is a maximum perfection, that is, God.
(v) Way 5
• Each body has a natural tendency towards its goal.
• All order requires a designer.
• This end-directedness of natural bodies must have a designing force behind it.
Therefore each natural body has a designer, that is, God.
Thomas Aquinas tried to balance the philosophy of Aristotle with Christian ideas. He believed that truth was passed to humans by God through divine revelation and that humans had the ability to seek out truth. Unlike Aristotle, Aquinas worked within Christian thought, and in his philosophy realism came to the forefront because he held that human reality is not only spiritual or mental but also physical and natural. From the standpoint of a human teacher, the path to the soul lies through the physical senses, and education must use this path to accomplish learning. Proper instruction thus directs the learner to knowledge that leads to true being by progressing from sense experience of material things toward higher, spiritual truths.
In terms of education, Aquinas believed that the primary agencies of education are the family and the church; the state (or organized society) runs a poor third. The family and the church have an obligation to teach those things that relate to the unchanging principles of moral and divine law. In fact, Aquinas mentioned that the mother is the child's first teacher, and because the child is molded easily, it is the mother's role to set the child's moral tone; the church stands as the source of knowledge of the divine and should set the grounds for understanding God's law. The state should formulate and enforce law on education, but it should not abridge the educational primacy of the home and church.
7.2.3 Modern Realism: Francis Bacon and John Locke
Modern realism began to develop because classical realism did not adequately include a method of inductive thinking. If the original premise or truth was incorrect, then there was a possibility of error in the logic of the rest of the thinking. Modern realists therefore believed that a process of induction must be used to test and explain ideas. Of all the philosophers engaged in this effort, the two most outstanding were Francis Bacon and John Locke, who were both involved in developing systematic methods of thinking and ways to increase human understanding.
(a) Francis Bacon
Francis Bacon (1561 - 1626) was an English philosopher, statesman, scientist, lawyer, jurist, and author. He also served as a politician in the courts of Elizabeth I and James I. He was not successful in his political efforts, but his record in philosophical thought remained extremely influential.
The Novum Organum is a philosophical work by Francis Bacon published in 1620. This is a reference to Aristotle's work Organon, which was his treatise on logic and syllogism. In Novum Organum, Bacon details a new system of logic he believes to be superior to the old ways of syllogism of Aristotle. In this work, we see the development of the Baconian Method, consisting of procedures for isolating the form, nature or cause of a phenomenon, employing the method of agreement, method of difference, and method of associated variation.
Bacon felt that the problem with religious realism was that it began with dogma or belief and then worked toward deducing conclusions. He felt that science could not work with this process because it was inappropriate and ineffective for the scientific process to begin with preconceived ideas. Bacon felt that developing effective means of inquiry was vital because knowledge was power that could be used to deal effectively with life. He therefore devised the inductive method of acquiring knowledge which begins with observations and then uses reasoning to make general statements or laws. Verification was needed before a judgment could be made. When data was collected, if contradictions were found, then the ideas would be discarded.
The Baconian Method consists of procedures for isolating the form nature, or cause, of a phenomenon, including the method of agreement, method of difference, and method of concomitant or associated variation. Bacon suggests that we draw up a list of all things in which the phenomenon we are trying to explain occurs, as well as a list of things in which it does not occur. Then, we rank the lists according to the degree in which the phenomenon occurs in each one. After that, we should be able to deduce what factors match the occurrence of the phenomenon in one list and do not occur in the other list, and also what factors change in accordance with the way the data had been ranked. From this, Bacon concludes that we should be able to deduce by elimination and inductive reasoning what is the cause underlying the phenomenon.
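As a rough illustration of this eliminative logic, the following sketch applies the method of agreement and the method of difference to a small set of invented observations (the factor names and data here are purely illustrative, not drawn from Bacon's own examples):

```python
# Toy sketch of Baconian elimination (names and data invented for illustration).
# Method of agreement: keep factors present in every instance where the
# phenomenon occurs. Method of difference: discard factors that also appear
# in instances where the phenomenon is absent.

def candidate_causes(observations):
    """observations: list of (set_of_factors, phenomenon_present) pairs."""
    positives = [factors for factors, occurs in observations if occurs]
    negatives = [factors for factors, occurs in observations if not occurs]

    # Factors common to all positive instances (method of agreement).
    common = set.intersection(*positives) if positives else set()

    # Remove factors that also occur when the phenomenon is absent
    # (method of difference).
    for factors in negatives:
        common -= factors
    return common

# Invented observations: heat appears whenever friction is present.
observations = [
    ({"friction", "metal"}, True),
    ({"friction", "wood"}, True),
    ({"metal", "water"}, False),
]

print(candidate_causes(observations))  # -> {'friction'}
```

Running the sketch isolates "friction" as the only factor present in every positive instance and absent from every negative one, which is the kind of elimination by comparison of instances that the Baconian procedure describes.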
Users of the scientific or inductive approach uncover many errors in propositions that were taken for granted originally. Bacon urged that people should re-examine all previously accepted knowledge. At the least, he considered that people should attempt to get rid of the various idols in their minds, before which they bow down and which cloud their thinking. Bacon identified four such idols:
(i) Idols of the Tribe (Idola Tribus): This is humans' tendency to perceive more order and regularity in systems than truly exists, and is due to people following their preconceived ideas about things.
(ii) Idols of the Cave or Den (Idola Specus): This is due to individuals' personal weaknesses in reasoning arising from particular personalities, likes, and dislikes. For instance, if a woman had several bad experiences with men with moustaches, she might conclude that all moustached men are bad; this is a clear case of faulty generalization.
(iii) Idols of the Marketplace (Idola Fori): This is due to confusions in the use of language and taking some words in science to have a different meaning than their common usage. For example, such words as liberal and conservative might have little meaning when applied to people because a person could be liberal on one issue and conservative on another.
(iv) Idols of the Theatre (Idola Theatri): This is due to using philosophical systems which have incorporated mistaken methods. Bacon insisted on housekeeping of the mind, in which we should break away from the dead ideas of the past and begin again by using the method of induction.
Bacon did not propose an actual philosophy, but rather a method of developing philosophy. He wrote that, although philosophy at the time used the deductive syllogism to interpret nature, the philosopher should instead proceed through inductive reasoning from fact to axiom to law.
(b) John Locke
John Locke (1632 - 1704) was an English philosopher. Locke is considered the first of the British empiricists. His ideas had enormous influence on the development of epistemology and political philosophy, and he is widely regarded as one of the most influential Enlightenment thinkers, classical republicans, and contributors to liberal theory. Surprisingly, Locke’s writings influenced Voltaire and Rousseau, many Scottish Enlightenment thinkers, as well as the American revolutionaries. This influence is reflected in the American Declaration of Independence.
Some Thoughts Concerning Education is a 1693 discourse on education written by John Locke. For over a century, it was the most important philosophical work on education in England. In his Essay Concerning Human Understanding, written in 1690, Locke outlined a new theory of mind, contending that the child's mind was a tabula rasa, a blank slate or empty mind; that is, it did not contain any innate or inborn ideas. In describing the mind in these terms, Locke was drawing on Plato's Theaetetus, which suggests that the mind is like a wax tablet. Although Locke argued vigorously for the tabula rasa theory of mind, he nevertheless did believe in innate talents and interests. For example, he advises parents to watch their children carefully in order to discover their aptitudes, and to nurture their children's own interests rather than force them to participate in activities which they dislike. John Locke believed that the mind was a blank slate at birth; information and knowledge were added through experience, perception, and reflection. He felt that what we know is what we experience.
Another of Locke's most important contributions to eighteenth-century educational theory also stems from his theory of the self. He writes that "the little and almost insensible impressions on our tender infancies have very important and lasting consequences." That is, the associations of ideas made when young are more significant than those made when mature because they are the foundation of the self: they mark the tabula rasa.
7.2.4 Contemporary Realism: Alfred North Whitehead and Bertrand Russell
Contemporary realism developed around the twentieth century due to concerns with science and scientific problems of a philosophical nature (Ozmon and Craver, 2008). Two outstanding figures in twentieth-century contemporary realism were Alfred North Whitehead and Bertrand Russell.
(a) Alfred North Whitehead
Alfred North Whitehead (1861 - 1947) was an English mathematician who became a philosopher. He wrote on algebra, logic, the foundations of mathematics, the philosophy of science, physics, metaphysics, and education. He co-authored the epochal Principia Mathematica with Bertrand Russell. While Thomas Aquinas tried to balance the ideas of Aristotle with the ideas of Christianity, Whitehead sought to bring science and philosophy together within a comprehensive metaphysics.

Principia Mathematica is a three-volume work on the foundations of mathematics, written by Alfred North Whitehead and Bertrand Russell and published between 1910 and 1913.
Whitehead's philosophical influence can be felt in all three of the main areas in which he worked (logic and the foundations of mathematics, the philosophy of science, and metaphysics) as well as in other areas such as ethics, education, and religion. Whitehead was interested in actively utilizing the knowledge and skills that were taught to students to a particular end. He believed we should aim at producing people who possess both culture and expert knowledge in some special direction. He even thought that education has to impart an intimate sense for the power, beauty, and structure of ideas, together with a particular body of knowledge which has peculiar reference to the life of the being possessing it.
(b) Bertrand Arthur William Russell
Bertrand Arthur William Russell, a British mathematician and philosopher, had embraced materialism in his early writing career. Russell earned his reputation as a distinguished thinker by his work in mathematics and logic. In 1903 he published The Principles of Mathematics, and by 1913 he and Alfred North Whitehead had published the three volumes of Principia Mathematica. The research which Russell did during this period establishes him as one of the founding fathers of modern analytical philosophy, moving towards mathematical quantification as the basis of philosophical analysis.

Russell appears to have discovered his paradox in the late spring of 1901, while working on his Principles of Mathematics of 1903. Russell's paradox is the most famous of the logical or set-theoretical paradoxes. The paradox arises within naive set theory by considering the set of all sets that are not members of themselves. Such a set appears to be a member of itself if and only if it is not a member of itself, hence the paradox. For instance, some sets, such as the set of all teacups, are not members of themselves; other sets, such as the set of all non-teacups, are members of themselves. Call the set of all sets that are not members of themselves R. If R is a member of itself, then by definition it must not be a member of itself. Similarly, if R is not a member of itself, then by definition it must be a member of itself. The paradox has prompted much work in logic, set theory, and the philosophy and foundations of mathematics.
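The paradox can also be stated compactly in modern set-builder notation, a formalization that came later than Russell's own prose statement and is added here only as an aid to the reader:

```latex
% Define R as the set of all sets that are not members of themselves.
\[
  R = \{\, x \mid x \notin x \,\}
\]
% Asking whether R belongs to itself yields the contradiction:
\[
  R \in R \iff R \notin R
\]
```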
7.3 PRAGMATISM

The root of the word pragmatism is a Greek word meaning work. According to pragmatism, the truth or meaning of an idea or a proposition lies in its observable practical consequences rather than in anything metaphysical. It can be summarized by the phrase "whatever works is likely true." Because reality changes, whatever works will also change; thus, truth must also be changeable.
Pragmatism is also a practical, matter-of-fact way of approaching or assessing situations or of solving problems. However, we might wonder why people insist on doing things and using processes that do not work. Several reasons for this are the weight of custom and tradition, fear and apathy, and the fact that habitual ways of thinking and doing seem to work even though they have lost their usefulness in today's world.
Pragmatism as a philosophical movement began in the United States in the late nineteenth century. The background of pragmatism can be found in the works of such people as Francis Bacon and John Locke.
7.3.1 Centrality of Experience: Francis Bacon and John Locke
Human experience is an important ingredient of pragmatist philosophy. John Locke talked about the mind as a "tabula rasa" and the world of experience as the verification of thought; in other words, the mind is a tabula rasa at birth, and the world of experience verifies thought. Another philosopher, Rousseau, followed Locke's idea but expanded the "centrality of experience" as the basis for a philosophical belief. Rousseau saw people as basically good but corrupted by civilization. If we would avoid that corruption, then we should focus on the educational connection between nature and experience by building the education of our youth around the youth's natural inquisitiveness while attending to their physiological, psychological, and social developmental stages.
Locke believed that as people have more experiences, they have more ideas imprinted on the mind and more with which to relate. However, he argued that one could have false ideas as well as true ones. The only way people can be sure their ideas are correct is by verifying them in the world of experience, for example through physical proof.
Locke emphasized the idea of placing children in the most desirable environment for their education and pointed out the importance of environment in making people who they are. Nevertheless, Locke's notion of experience contained an internal flaw and caused difficulties: his insistence that the mind is a tabula rasa established the mind as a passive, malleable instrument.
7.3.2 Science and Society: Auguste Comte, Charles Darwin, and John Dewey
Bridging the transition between the Age of Enlightenment and the Modern Age, Auguste Comte (1798 - 1857) and Charles Darwin (1809 - 1882) shared a belief that science could have a profound and positive effect on society. Comte's commitment to the use of science to address the ills of society resulted in the study of sociology. The effects of Charles Darwin and his five years aboard the HMS Beagle are still echoing throughout the world of religion and education.
Basically, Comte wrote on the use of science to solve social problems through sociology, and his ideas very much influenced John Dewey's (1859 - 1952) thinking regarding the role of science in society. Darwin, meanwhile, in On the Origin of Species, argued that nature operates by a process of development without predetermined directions or ends and that reality is found not in being but in becoming; this promoted the pragmatist view that education is tied directly to biological and social development.
Figure 7.12: From left: Auguste Comte, Charles Darwin, and John Dewey
Auguste Comte was a French philosopher and one of the founders of sociology and positivism. He is responsible for the coining and introduction of the term altruism. Altruism is an ethical doctrine that holds that individuals have a moral obligation to help, serve, or benefit others, if necessary at the sacrifice of self-interest. Auguste Comte's version of altruism calls for living for the sake of others; one who holds to either of these ethics is known as an "altruist."

One universal law that Comte saw at work in all sciences he called the law of three phases. It is by his statement of this law that he is best known in the English-speaking world; namely, that society has gone through three phases: theological, metaphysical, and scientific. In Comte's lifetime, his work was sometimes viewed skeptically, with perceptions that he had elevated positivism to a religion and had named himself the Pope of Positivism.
Comte's emphasis on the interconnectedness of social elements was a forerunner of modern functionalism. His emphasis on a quantitative, mathematical basis for decision-making remains with us today. It is a foundation of the modern notion of positivism, modern quantitative statistical analysis, and business decision-making. His description of the continuing cyclical relationship between theory and practice is seen in modern business systems of Total Quality Management and Continuous Quality Improvement, in which advocates describe a continuous cycle of theory and practice.
Charles Darwin's On the Origin of Species, published in 1859, is a seminal work of scientific literature considered to be the foundation of evolutionary biology. The full title was On the Origin of Species by Means of Natural Selection, or the Preservation of Favoured Races in the Struggle for Life.
For the sixth edition of 1872, the short title was changed to The Origin of Species. Darwin's book introduced the theory that populations evolve over the course of generations through a process of natural selection, and presented a body of evidence that the diversity of life arose through a branching pattern of evolution and common descent. He included evidence that he had accumulated on the voyage of the Beagle in the 1830s, and his subsequent findings from
research, correspondence, and experimentation.
Various evolutionary ideas had already been proposed to explain new findings in biology. There was growing support for such ideas among dissident anatomists and the general public, but during the first half of the 19th century the English scientific establishment was closely tied to the Church of England, while science was part of natural theology. Ideas about the transmutation of species were controversial as they conflicted with the beliefs that species were unchanging parts of a designed hierarchy and that humans were unique, unrelated to animals. The political and theological implications were intensely debated, but transmutation was not accepted by the scientific mainstream. The book was written to be read by non-specialists and attracted widespread interest upon its publication. As Darwin was an eminent scientist, his findings were taken seriously, and the evidence he presented generated scientific, philosophical, and religious discussion.
On the other hand, Dewey attempted to create a philosophy that captured and reflected the influences of the contemporary world on the preparation of future leaders through the educational system. Reliance on any source of knowledge has to be tempered by an understanding of its societal effects if the learning is to be meaningful, beneficial, or productive. John Dewey discussed the nature of experience: experience and nature are not two different things separated from each other; rather, experience itself is of nature, for experience is of as well as in nature.
Dewey viewed method, rather than abstract answers, as a central concern, and thought that modern industrial society had submerged both individuality and sociality. He defined individuality as the interplay of personal choice and freedom with objective conditions, whereas sociality refers to a milieu or medium conducive to individual development.
Moreover, Dewey believed that most religions have a negative effect because they tend to classify people. Dewey thought that two schools of social and religious reform exist: one holds that people must be constantly watched, guided and controlled to see that they stay on the right path and the other holds that people will control their own actions intelligently. Dewey also believed that a truly aesthetic experience is one in which people are unified with their activity.
Finally, Dewey stated that we should project art into all human activities, such as the art of politics and the art of education.
(a) How is pragmatism similar and different from idealism and realism? Explain.
(b) Discuss your thoughts about why pragmatism is seen as most effective in a democratic society.
(c) Compare and contrast Dewey's philosophical thoughts with your society's approach and your own.
7.4 IDEALISM, REALISM, AND PRAGMATISM AND ITS CRITIQUE IN EDUCATION
Developing a philosophical perspective on education is not easy. However, it is very important if a person wants to become a more effective professional educator. A sound philosophical perspective helps one see the interaction among students, the curriculum, and the aims and goals of education, and shows how the various types of philosophy bear on a teacher's personal and professional undertakings.
7.4.1 Idealism in Philosophy of Education
Idealism as a philosophy had its greatest impact during the nineteenth century. Its influence in today's world is less important than it has been in the past. Much of what we know as idealism today was influenced by German ideas of idealism. The main tenet of idealism is that ideas and knowledge are the truest reality. Many things in the world change, but ideas and knowledge are enduring. Idealism was often referred to as "idea-ism." Idealists believe that ideas can change lives. The most important part of a person is the mind. It is to be nourished and developed.
Table 7.1 discusses the aims of education, methods of education, curriculum, role of the teacher, and critique for idealism in philosophy of education:

Table 7.1: Idealism in Philosophy of Education
7.4.2 Realism in Philosophy of Education
According to Ozmon and Craver (2008), "the central thread of realism is the principle of independence." For realists, the world of ideas and the world of matter, defined in idealism by Plato and Socrates, do not exist separately and apart from each other. They contend that material things can exist whether or not there is a human being around to appreciate or perceive them.
Table 7.2 discusses the aims of education, methods of education, curriculum, role of the teacher, and critique for realism in philosophy of education:

Table 7.2: Realism in Philosophy of Education
7.4.3 Pragmatism in Philosophy of Education
Pragmatism is basically an American philosophy, but it has its roots in European thinking. Pragmatists believe that ideas are tools that can be used to cope with the world. They believe that educators should seek out new processes, incorporate traditional and contemporary ideas, or create new ideas to deal with the changing world. A great deal of stress is placed on sensitivity to consequences, but pragmatists are quick to state that consideration should be given to the method of arriving at the consequences. The means to solving a problem is as important as the end. The scientific method is important in the thinking process for pragmatists, but it is not meant to seem like sterile lab thinking. Pragmatists want to apply the scientific method for the greater good of the world. They believe that although science has caused many problems in our world, it can still be used to help solve them.
However, the progressive pragmatic movement believed in separating children by intelligence and ability in order to meet the needs of society. The softer side of that philosophy believed in giving children a great deal of freedom to explore, leading many people to label the philosophy of pragmatism in education as permissive.
Table 7.3 discusses the aims of education, methods of education, curriculum, role of the teacher, and a critique of pragmatism in philosophy of education:
Table 7.3: Pragmatism in Philosophy of Education
Which of the philosophy is most compatible with your beliefs as an educator? Why?
• Basically, there are three general or world philosophies: idealism, realism, and pragmatism.
• Idealism is the philosophical theory that maintains that the ultimate nature of reality is based on mind or ideas. It holds that the so-called external or "real world" is inseparable from mind, consciousness, or perception.
• Platonic idealism says that there exists a perfect realm of form and ideas and our world merely contains shadows of that realm; only ideas can be known or have any reality.
• Religious idealism argues that all knowledge originates in perceived phenomena which have been organized by categories.
• Modern idealism says that all objects are identical with some idea and the ideal knowledge is itself the system of ideas.
• Platonic idealism usually refers to Plato's theory of forms or doctrine of ideas.
Plato held the realm of ideas to be absolute reality. Plato's method was the dialectic method: all thinking begins with a thesis, as exemplified in the Socratic dialogues.
• Augustine discussed the universe as being divided into the City of God and the City of Man.
• Augustine believed that faith based knowledge is determined by the church and all true knowledge came from God.
• Descartes was convinced that science and mathematics could be used to explain everything in nature, so he was the first to describe the physical universe in terms of matter and motion - seeing the universe as a giant mathematically designed engine.
• Kant held that the most interesting and useful varieties of human knowledge rely upon synthetic a priori judgments, which are, in turn, possible only when the mind determines the conditions of its own experience.
• Kant's philosophy of education involved some aspects of character education. He believed in the importance of treating each person as an end and not as a means.
• Hegel developed a concept of mind or spirit that manifested itself in a set of contradictions and oppositions that it ultimately integrated and united, such as those between nature and freedom, and immanence and transcendence, without eliminating either pole or reducing it to the other.
• "Hegelianism" is a collective term for schools of thought following Hegel's philosophy which can be summed up by the saying that "the rational alone is real", which means that all reality is capable of being expressed in rational categories.
• The most central thread of realism is the principle or thesis of independence. This thesis holds that reality, knowledge, and value exist independently of the human mind.
• Aristotle believed that the world could be understood at a fundamental level through the detailed observation and cataloguing of phenomena.
• Aquinas believed that truth is known through reason (natural revelation) and faith (supernatural revelation).
• Thomism is the philosophical school that arose as a legacy of the work and thought of Thomas Aquinas, based on the Summa Theologica, meaning "summary of theology".
• Aquinas mentioned that the mother is the child's first teacher, and because the child is molded easily; it is the mother's role to set the child's moral tone; the church stands for the source of knowledge of the divine and should set the grounds for understanding God's law. The state should formulate and enforce law on education.
• Bacon devised the inductive method of acquiring knowledge which begins with observations and then uses reasoning to make general statements or laws. Verification was needed before a judgment could be made. When data was collected, if contradictions were found, then the ideas would be discarded.
• The "Baconian Method" consists of procedures for isolating the form nature, or cause, of a phenomenon, including the method of agreement, method of difference, and method of concomitant or associated variation.
• Bacon identified the "idols", called the Idols of the Mind, which he described as things that obstructed the path of correct scientific reasoning.
• John Locke sought to explain how we develop knowledge. He attempted a rather modest philosophical task: "to clear the ground of some of the rubbish" that deters people from gaining knowledge. He was trying to do away with what Bacon called "idols".
• Locke outlined a new theory of mind, contending that the child's mind was a "tabula rasa" or "blank slate" or "empty mind"; that is, it did not contain any innate or inborn ideas.
• Whitehead was interested in actively "utilising the knowledge and skills that were taught to students to a particular end". He believed we should aim at "producing men who possess both culture and expert knowledge in some special direction".
• Russell, one of the founding fathers of modern analytical philosophy, moved towards mathematical quantification as the basis of philosophical generalization.
• Russell's paradox is the most famous of the logical or set-theoretical paradoxes. The paradox arises within naive set theory by considering the set of all sets that are not members of themselves. Such a set appears to be a member of itself if and only if it is not a member of itself, hence the paradox.
• Pragmatism is a practical, matter-of-fact way of approaching or assessing situations or of solving problems.
• Human experience is an important ingredient of pragmatist philosophy.
• John Locke talked about the mind as a "tabula rasa" and the world of experience as the verification of thought, or in other words: the mind is a tabula rasa at birth; the world of experience verifies thought.
• Rousseau followed Locke's idea but with an expansion of the "centrality of experience" as the basis for a philosophical belief. Rousseau saw people as basically good but corrupted by civilization. If we would avoid that corruption, then we should focus on the educational connection between nature and experience by building the education of our youth around that connection.
• Locke believed that as people have more experiences, they have more ideas imprinted on the mind and more with which to relate.
• Comte is responsible for the coining and introduction of the term altruism. Altruism is an ethical doctrine that holds that individuals have a moral obligation to help, serve, or benefit others, if necessary at the sacrifice of self interest.
• One universal law that Comte saw at work in all sciences he called the "law of three phases". It is by his statement of this law that he is best known in the English-speaking world; namely, that society has gone through three phases: theological, metaphysical, and scientific.
• Dewey attempted to create a philosophy that captured and reflected the influences of the contemporary world on the preparation of the future leaders through the educational system. The reliance on the source of knowledge has to be tempered by an understanding of the societal effects if the learning was to be meaningful, beneficial, or productive.
• John Dewey discussed the Nature of Experience; experience and nature are not two different things separated from each other, rather experience itself is of nature: experience is and of nature.
• Idealists believe that ideas can change lives. The most important part of a person is the mind. It is to be nourished and developed.
• The world of ideas and matter defined in idealism by Plato and Socrates do not exist separately and apart from each other for realists. They contend that material things can exist whether or not there is a human being around to appreciate or perceive them.
• Pragmatists believe that educators should seek out new processes, incorporate traditional and contemporary ideas, or create new ideas to deal with the changing world.
Dewey, J. Democracy and education.
Dewey, J. Experience and education.
Locke, J. Two treatises of government, ed. Peter Laslett.
Locke, J. (1975). An essay concerning human understanding, ed. P. H. Nidditch.
Locke, J. (1989). Some thoughts concerning education, ed. John W. Yolton and Jean S. Yolton.
Locke, J. Some thoughts concerning education; and of the conduct of the understanding, ed. Ruth W. Grant and Nathan Tarcov.
Philosophy of education.
Ozmon, H.A. & Craver, S.M. (2008). Philosophical foundations of education (8th ed.).
Turner, W. (1910). Philosophy of Immanuel Kant. In The Catholic Encyclopedia.
Arthur G. (1966). John Dewey as educator: His design for work in education (1894-1904).
Bohac, P., (2001, February 6). Dewey's pragmatism. Chapter 4 Pragmatism and Education. Retrieved September 3, 2009, from http://www.brendawelch.com/uwf/pragmatism.pdf
| http://www.kheru2006.webs.com/4idealism_realism_and_pragmatigsm_in_education.htm | 13
15 | Creative Debate is a role-playing exercise. Students assume a specific point of view and debate a controversial topic from this perspective. Creative Debates promote both critical thinking and tolerance of opposing views.
Steps to Creative Debate:
Discuss the rules for debate with the class. Have students suggest guidelines. Once a consensus is reached, post the rules for quick reference.
Suggest a topic for debate or allow the students to select a topic. If the topic requires research, allow the students to gather and organize information before the debate.
Divide the class into three groups. Select two groups to participate in the debate. The third group acts as observers. Rearrange the classroom so that opposing groups face one another and the observers sit to the side.
Provide a reading selection that states one of the positions on the debate topic. Assign one group to argue for the selection; the other group argues against.
Each student selects a character from the past or present that represents their position in the debate. (Teachers may want to suggest a list of characters to speed up this process.)
Have each student introduce himself as the character to the class and then argue the topic from the perspective of this character. Encourage students to "act out" the character's personality (speech patterns, mannerisms, etc.).
Each group presents their positions for ten minutes. Allow extra time for rebuttals.
Next, ask the student teams to switch their positions and argue the opposing viewpoint. (Perhaps the group of observers might change places with one of the other groups.) Repeat the debate and rebuttal process.
At the end of the debate, ask students to reflect on their experiences. Raise questions like . . .
Did you find it difficult to argue from both perspectives in the debate?
What did you learn from this experience?
Did your own views and opinions change?
How would you approach a similar debate in the future? | http://www.readingeducator.com/strategies/debate.htm | 13 |
21 | Logical Reasoning is our guide to good decisions. It is also a guide to sorting out truth from falsehood.
Like every subject, Logic has its own vocabulary and it is important that you understand the meanings of some important words/terms on which problems are usually framed in the Common Admission Test. Once you have become familiar with the vocabulary of Logic, it will be imperative that you also understand some rules/principles on which questions can be solved.
Some of the important types and styles of problems in logic are:
a. Problems based on ‘Propositions and their Implications’
These problems typically have a proposition followed by either a deductive or an inductive argument. An argument has a minimum of two statements — a premise and a conclusion, in any order. It may also have more than one premise (statement) and more than one conclusion. The information in the premise(s) either makes the conclusion weak or makes it strong. The examinee is usually required to:
i. identify the position of the premise(s) vis-à-vis the conclusion, that is, is the premise weakening or strengthening the conclusion
ii. identify if the conclusion drawn based on the premise(s) is correct or incorrect
iii. identify if only one conclusion follows, either conclusion follows, or neither conclusion follows, or both the conclusions follow (assuming the problem has two premises and two conclusions)
iv. identify an option in which the third statement is implied by the first two statements; this type of question is called a Syllogism
v. identify the correct ordered pair where the first statement implies the second statement and both these statements are logically consistent with the main proposition (assuming, each question has a main proposition followed by four statements A, B, C, D)
vi. identify the set in which the statements are most logically related (assuming each question has six statements and there are four options listed as sets of combinations of three statements: ABD, ECF, ABF, BCE, etc.)
vii. identify the option where the third segment can be logically deduced from the preceding two (assuming, each question has a set of four statements and each of these statements has three segment, for example:
A. Tedd is beautiful;
Robo is beautiful too;
Tedd is Robo.
B. Some apples are guavas;
Some guavas are oranges;
Oranges are apples.
C. Tedd is beautiful;
Robo is beautiful too;
Tedd may be Robo.
D. Apples are guavas;
Guavas are oranges;
Oranges are grapes.
(a) Only C
(b) Only A
(c) A and C
(d) Only B
The answer to the above question is option (c)
The above is in no way an exhaustive list of problems on logic, but it gives a fair view of the types and styles of questions that one may face. | http://www.jagranjosh.com/articles/cat-logical-reasoning-format-syllabus-and-types-of-problem-1338317908-1 | 13 |
25 | The NCTE Committee on Critical Thinking and the Language Arts defines critical thinking as "a process which stresses an attitude of suspended judgment, incorporates logical inquiry and problem solving, and leads to an evaluative decision or action." In a new monograph copublished by the ERIC Clearinghouse on Reading and Communication Skills, Siegel and Carey (1989) emphasize the roles of signs, reflection, and skepticism in this process.
Ennis (1987) suggests that "critical thinking is reasonable, reflective thinking that is focused on deciding what to believe or do." However defined, critical thinking refers to a way of reasoning that demands adequate support for one's beliefs and an unwillingness to be persuaded unless the support is forthcoming.
Why should we be concerned about critical thinking in our classrooms? Obviously, we want to educate citizens whose decisions and choices will be based on careful, critical thinking. Maintaining the right of free choice itself may depend on the ability to think clearly. Yet, we have been bombarded with a series of national reports which claim that "Johnny can't think" (Mullis, 1983; Gardner, 1983; Action for Excellence, 1983). All of them call for schools to guide students in developing the higher level thinking skills necessary for an informed society.
Skills needed to begin to think about issues and problems do not suddenly appear in our students (Tama, 1986; 1989). Teachers who have attempted to incorporate higher level questioning in their discussions or have administered test items demanding some thought rather than just recall from their students are usually dismayed at the preliminary results. Unless the students have been prepared for the change in expectations, both the students and the teacher are likely to experience frustration.
What is needed to cultivate these skills in the classroom? A number of researchers claim that the classroom must nurture an environment providing modeling, rehearsal, and coaching, for students and teachers alike, to develop a capacity for informed judgments (Brown, 1984; Hayes and Alvermann, 1986).
Hayes and Alvermann report that this coaching led teachers to acknowledge students' remarks more frequently and to respond to the students more elaborately. It significantly increased the proportion of text-connected talk students used as support for their ideas and/or as cited sources of their information. In addition, students' talk became more inferential and analytical.
A summary of the literature on the role of "wait time," (the time a teacher allows for a student to respond as well as the time an instructor waits after a student replies) found that it had an impact on students' thinking (Tobin, 1987). In this review of studies, Tobin found that those teachers who allowed a 3-5 second pause between the question and response permitted students to produce cognitively complex discourse. Teachers who consciously managed the duration of pauses after their questioning and provided regular intervals of silence during explanation created an environment where thinking was expected and practiced.
However, Tobin concludes that "wait time" in and of itself does not ensure critical thinking. A curriculum which provides students with the opportunity to develop thinking skills must be in place. Interestingly, Tobin found that high achievers consistently were permitted more wait time than were less skilled students, indicating that teachers need to monitor and evaluate their own behavior while using such strategies.
Finally, teachers need to become more tolerant of "conflict," or confrontation, in the classroom. They need to raise issues which create dissonance and refrain from expressing their own bias, letting the students debate and resolve problems. Although a content area classroom which encourages critical thinking can promote psychological discomfort in some students as conflicting accounts of information and ideas are argued and debated, such feelings may motivate them to resolve an issue (Festinger, 1957). They need to get a feel for the debate and the conflict it involves. Isn't there ample everyday evidence of this: Donahue, Geraldo Rivera, USA Today?
Authors like Frager (1984) and Johnson and Johnson (1979) claim that to really engage in critical thinking, students must encounter the dissonance of conflicting ideas. Dissonance, as discussed by Festinger, 1957 promotes a psychological discomfort which occurs in the presence of an inconsistency and motivates students to resolve the issue.
To help students develop skills in resolving this dissonance, Frager (1984) offers a model for conducting critical thinking classes and provides samples of popular issues that promote it: for example, banning smoking in public places, the bias infused in some sports accounts, and historical incidents written from both American and Russian perspectives.
If teachers feel that their concept of thinking is instructionally useful, if they develop the materials necessary for promoting this thinking, and if they practice the procedures necessary, then the use of critical thinking activities in the classroom will produce positive results.
Matthew Lipman (1988) writes, "The improvement of student thinking--from ordinary thinking to good thinking--depends heavily upon students' ability to identify and cite good reasons for their opinions."
Training students to do critical thinking is not an easy task. Teaching which involves higher level cognitive processes, comprehension, inference, and decision making often proves problematic for students. Such instruction is often associated with delays in the progress of a lesson, with low success and completion rates, and even with direct negotiations by students to alter the demands of work (Doyle, 1985). This negotiation by students is understandable. They have made a career of passive learning. When met by instructional situations in which they may have to use some mental energies, some students resist that intellectual effort. What emerges is what Sizer (1984) calls "conspiracy for the least," an agreement by the teacher and students to do just enough to get by.
Despite the difficulties, many teachers are now promoting critical thinking in the classroom. They are nurturing this change from ordinary thinking to good thinking admirably. They are 1) promoting critical thinking by infusing instruction with opportunities for their students to read widely, to write, and to discuss; 2) frequently using course tasks and assignments to focus on an issue, question, or problem; and 3) promoting metacognitive attention to thinking so that students develop a growing awareness of the relationship of thinking to reading, writing, speaking, and listening. (See Tama, 1989.)
Another new ERIC/RCS and NCTE monograph (Neilsen, 1989) echoes similar advice, urging teachers to allow learners to be actively involved in the learning process, to provide consequential contexts for learning, to arrange a supportive learning environment that respects student opinions while giving enough direction to ensure their relevance to a topic, and to provide ample opportunities for learners to collaborate.
Action for Excellence. A Comprehensive Plan to Improve Our Nation's Schools. Denver: Education Commission of the States, 1983. 60pp. [ED 235 588]
Brown, Ann L. "Teaching students to think as they read: Implications for curriculum reform." Paper commissioned by the American Educational Research Association Task Force on Excellence in Education, October 1984. 42pp. [ED 273 567]
Doyle, Walter. "Recent research on classroom management: Implications for teacher preparation." Journal of Teacher Education, 36 (3), 1985, pp. 31-35.
Ennis, Robert. "A taxonomy of critical thinking dispositions and abilities." In Joan Baron and Robert Sternberg (Eds.) Teaching Thinking Skills: Theory and Practice. New York: W.H. Freeman, 1987.
Festinger, Leon. A Theory of Cognitive Dissonance. Evanston, Illinois: Row Peterson, 1957.
Frager, Alan. "Conflict: The key to critical reading instruction." Paper presented at annual meeting of The Ohio Council of the International Reading Association Conference, Columbus, Ohio, October 1984. 18pp. [ED 251 806]
Gardner, David P., et al. A Nation at Risk: The Imperative for Educational Reform. An Open Letter to the American People. A Report to the Nation and the Secretary of Education. Washington, DC: National Commission on Excellence in Education, 1983. 72pp. [ED 226 006]
Hayes, David A., and Alvermann, Donna E. "Video assisted coaching of textbook discussion skills: Its impact on critical reading behavior." Paper presented at the annual meeting of the American Research Association, San Francisco: April 1986. 11pp. [ED 271 734]
Johnson, David W., and Johnson, Roger T. "Conflict in the classroom: Controversy and learning," Review of Educational Research, 49, (1), Winter 1979, pp. 51-70.
Lipman, Matthew. "Critical thinking--What can it be?" Educational Leadership, 46 (1), September 1988, pp. 38-43.
Mullis, Ina V. S., and Mead, Nancy. "How well can students read and write?" Issuegram 9. Denver: Education Commission of the States, 1983. 9pp. [ED 234 352]
Neilsen, Allan R., Critical Thinking and Reading: Empowering Learners to Think and Act. Monographs on Teaching Critical Thinking, Number 2. Bloomington, Indiana: ERIC Clearinghouse on Reading and Communication Skills and The National Council of Teachers of English, Urbana, Illinois, 1989. [Available from ERIC/RCS and NCTE.]
Siegel, Marjorie, and Carey, Robert F. Critical Thinking: A Semiotic Perspective. Monographs on Teaching Critical Thinking, Number 1. Bloomington, Indiana: ERIC Clearinghouse on Reading and Communication Skills and the National Council of Teachers of English, Urbana, Illinois, 1989. [Available from ERIC/RCS and NCTE.]
Sizer, Theodore. Horace's Compromise: The Dilemma of the American High School. Boston: Houghton-Mifflin, 1984. [ED 264 171; not available from EDRS.]
Tama, M. Carrol. "Critical thinking has a place in every classroom," Journal of Reading 33 (1), October 1989.
Tama, M. Carrol. "Thinking skills: A return to the content area classroom." Paper presented at the annual meeting of the International Reading Association, 1986. 19pp. [ED 271 737]
Tobin, Kenneth. "The role of wait time in higher cognitive level learning," Review of Educational Research, 57 (1), Spring 1987, pp. 69-95. | http://www.ericdigests.org/pre-9211/critical.htm | 13 |
37 | Visualization of the quicksort algorithm. The horizontal lines are pivot values.
- Worst case performance: O(n²)
- Best case performance: O(n log n)
- Average case performance: O(n log n)
- Worst case space complexity: O(n) auxiliary (naive); O(log n) auxiliary (Sedgewick 1978)
Quicksort, or partition-exchange sort, is a sorting algorithm developed by Tony Hoare that, on average, makes O(n log n) comparisons to sort n items. In the worst case, it makes O(n²) comparisons, though this behavior is rare. Quicksort is often faster in practice than other O(n log n) algorithms. Additionally, quicksort's sequential and localized memory references work well with a cache. Quicksort is a comparison sort and, in efficient implementations, is not a stable sort. Quicksort can be implemented with an in-place partitioning algorithm, so the entire sort can be done with only O(log n) additional space used by the stack during the recursion.
The quicksort algorithm was developed in 1960 by Tony Hoare while in the Soviet Union, as a visiting student at Moscow State University. At that time, Hoare worked in a project on machine translation for the National Physical Laboratory. He developed the algorithm in order to sort the words to be translated, to make them more easily matched to an already-sorted Russian-to-English dictionary that was stored on magnetic tape.
Quicksort is a divide and conquer algorithm. Quicksort first divides a large list into two smaller sub-lists: the low elements and the high elements. Quicksort can then recursively sort the sub-lists.
The steps are:
- Pick an element, called a pivot, from the list.
- Reorder the list so that all elements with values less than the pivot come before the pivot, while all elements with values greater than the pivot come after it (equal values can go either way). After this partitioning, the pivot is in its final position. This is called the partition operation.
- Recursively apply the above steps to the sub-list of elements with smaller values and separately the sub-list of elements with greater values.
The base case of the recursion are lists of size zero or one, which never need to be sorted.
Simple version
In simple pseudocode, the algorithm might be expressed as this:
function quicksort('array')
    if length('array') ≤ 1
        return 'array'  // an array of zero or one elements is already sorted
    select and remove a pivot value 'pivot' from 'array'
    create empty lists 'less' and 'greater'
    for each 'x' in 'array'
        if 'x' ≤ 'pivot' then append 'x' to 'less'
        else append 'x' to 'greater'
    return concatenate(quicksort('less'), 'pivot', quicksort('greater'))  // two recursive calls
Notice that we only examine elements by comparing them to other elements. This makes quicksort a comparison sort. This version is also a stable sort (assuming that the "for each" method retrieves elements in original order, and the pivot selected is the last among those of equal value).
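As a concrete, runnable illustration of the simple version above, here is a minimal Python transcription; the function name and the choice of the first element as the pivot are illustrative assumptions, since the pseudocode deliberately leaves the pivot choice open.

def quicksort_simple(array):
    """Out-of-place quicksort mirroring the simple pseudocode above."""
    if len(array) <= 1:
        return array                       # zero or one elements: already sorted
    pivot, rest = array[0], array[1:]      # illustrative pivot choice: first element
    less = [x for x in rest if x <= pivot]
    greater = [x for x in rest if x > pivot]
    return quicksort_simple(less) + [pivot] + quicksort_simple(greater)

# Example: quicksort_simple([3, 1, 4, 1, 5, 9, 2, 6]) == [1, 1, 2, 3, 4, 5, 6, 9]

As the next paragraph notes, this out-of-place form uses O(n) extra storage, which is what motivates the in-place version below.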
The correctness of the partition algorithm is based on the following two arguments:
- At each iteration, all the elements processed so far are in the desired position: before the pivot if less than the pivot's value, after the pivot if greater than the pivot's value (loop invariant).
- Each iteration leaves one fewer element to be processed (loop variant).
The correctness of the overall algorithm can be proven via induction: for zero or one element, the algorithm leaves the data unchanged; for a larger data set it produces the concatenation of two parts, elements less than the pivot and elements greater than it, themselves sorted by the recursive hypothesis.
In-place version
The disadvantage of the simple version above is that it requires O(n) extra storage space, which is as bad as merge sort. The additional memory allocations required can also drastically impact speed and cache performance in practical implementations. There is a more complex version which uses an in-place partition algorithm and can achieve the complete sort using O(log n) space (not counting the input) on average (for the call stack). We start with a partition function:
// left is the index of the leftmost element of the subarray
// right is the index of the rightmost element of the subarray (inclusive)
// number of elements in subarray = right-left+1
function partition(array, left, right, pivotIndex)
    pivotValue := array[pivotIndex]
    swap array[pivotIndex] and array[right]  // Move pivot to end
    storeIndex := left
    for i from left to right - 1  // left ≤ i < right
        if array[i] <= pivotValue
            swap array[i] and array[storeIndex]
            storeIndex := storeIndex + 1
    swap array[storeIndex] and array[right]  // Move pivot to its final place
    return storeIndex
This is the in-place partition algorithm. It partitions the portion of the array between indexes left and right, inclusively, by moving all elements less than
array[pivotIndex] before the pivot, and the equal or greater elements after it. In the process it also finds the final position for the pivot element, which it returns. It temporarily moves the pivot element to the end of the subarray, so that it doesn't get in the way. Because it only uses exchanges, the final list has the same elements as the original list. Notice that an element may be exchanged multiple times before reaching its final place. Also, in case of pivot duplicates in the input array, they can be spread across the right subarray, in any order. This doesn't represent a partitioning failure, as further sorting will reposition and finally "glue" them together.
This form of the partition algorithm is not the original form; multiple variations can be found in various textbooks, such as versions not having the storeIndex. However, this form is probably the easiest to understand.
Once we have this, writing quicksort itself is easy:
function quicksort(array, left, right)
    // If the list has 2 or more items
    if left < right
        // See "Choice of pivot" section below for possible choices
        choose any pivotIndex such that left ≤ pivotIndex ≤ right
        // Get lists of bigger and smaller items and final position of pivot
        pivotNewIndex := partition(array, left, right, pivotIndex)
        // Recursively sort elements smaller than the pivot
        quicksort(array, left, pivotNewIndex - 1)
        // Recursively sort elements at least as big as the pivot
        quicksort(array, pivotNewIndex + 1, right)
Each recursive call to this quicksort function reduces the size of the array being sorted by at least one element, since in each invocation the element at pivotNewIndex is placed in its final position. Therefore, this algorithm is guaranteed to terminate after at most n recursive calls. However, since partition reorders elements within a partition, this version of quicksort is not a stable sort.
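A direct Python transcription of the two pseudocode fragments above might look as follows; the middle-index pivot choice is only an illustrative placeholder for the strategies discussed in the next section.

def partition(array, left, right, pivot_index):
    """In-place partition of array[left..right] (inclusive), as in the pseudocode above."""
    pivot_value = array[pivot_index]
    array[pivot_index], array[right] = array[right], array[pivot_index]   # move pivot to end
    store_index = left
    for i in range(left, right):                   # left <= i < right
        if array[i] <= pivot_value:
            array[i], array[store_index] = array[store_index], array[i]
            store_index += 1
    array[store_index], array[right] = array[right], array[store_index]   # pivot to final place
    return store_index

def quicksort(array, left=0, right=None):
    """Sort array[left..right] in place."""
    if right is None:
        right = len(array) - 1
    if left < right:
        pivot_index = left + (right - left) // 2   # illustrative choice; see "Choice of pivot"
        p = partition(array, left, right, pivot_index)
        quicksort(array, left, p - 1)              # elements smaller than the pivot
        quicksort(array, p + 1, right)             # elements at least as big as the pivot

# Example: data = [3, 1, 4, 1, 5, 9, 2, 6]; quicksort(data); data == [1, 1, 2, 3, 4, 5, 6, 9]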
Implementation issues
Choice of pivot
In very early versions of quicksort, the leftmost element of the partition would often be chosen as the pivot element. Unfortunately, this causes worst-case behavior on already sorted arrays, which is a rather common use-case. The problem was easily solved by choosing either a random index for the pivot, choosing the middle index of the partition or (especially for longer partitions) choosing the median of the first, middle and last element of the partition for the pivot (as recommended by R. Sedgewick).
Selecting a pivot element is also complicated by the existence of integer overflow. If the boundary indices of the subarray being sorted are sufficiently large, the naïve expression for the middle index, (left + right)/2, will cause overflow and provide an invalid pivot index. This can be overcome by using, for example, left + (right-left)/2 to index the middle element, at the cost of more complex arithmetic. Similar issues arise in some other methods of selecting the pivot element.
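A sketch of median-of-three pivot selection is shown below. Python integers do not overflow, so the left + (right - left) // 2 form is included only to mirror the overflow-safe arithmetic described above for fixed-width languages; the helper name is an assumption, not part of any standard library.

def choose_pivot_index(array, left, right):
    """Median-of-three pivot: return the index of the median of the first,
    middle, and last elements of array[left..right].

    The midpoint is written as left + (right - left) // 2 to mirror the
    overflow-safe form discussed above; in Python this is purely stylistic."""
    mid = left + (right - left) // 2
    a, b, c = array[left], array[mid], array[right]
    if a <= b <= c or c <= b <= a:     # middle element is the median
        return mid
    if b <= a <= c or c <= a <= b:     # first element is the median
        return left
    return right                       # otherwise the last element is the median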
- To make sure at most O(log N) space is used, recurse first into the smaller half of the array, and use a tail call to recurse into the other.
- Use insertion sort, which has a smaller constant factor and is thus faster on small arrays, for invocations on such small arrays (i.e. where the length is less than a threshold t determined experimentally). This can be implemented by leaving such arrays unsorted and running a single insertion sort pass at the end, because insertion sort handles nearly sorted arrays efficiently. A separate insertion sort of each small segment as they are identified adds the overhead of starting and stopping many small sorts, but avoids wasting effort comparing keys across the many segment boundaries, which keys will be in order due to the workings of the quicksort process. It also improves the cache use.
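A sketch combining the two optimizations just listed, reusing the partition function from the earlier sketch: recursing only into the smaller partition keeps the stack depth at O(log n), and short ranges are finished with insertion sort. The cutoff of 16 is an illustrative assumption to be tuned experimentally, and this sketch sorts each small range immediately rather than deferring them all to one final pass, which the text notes can be slightly faster.

CUTOFF = 16   # illustrative threshold t; to be determined experimentally

def insertion_sort(array, left, right):
    """Insertion sort of array[left..right], used to finish short ranges."""
    for i in range(left + 1, right + 1):
        value, j = array[i], i - 1
        while j >= left and array[j] > value:
            array[j + 1] = array[j]
            j -= 1
        array[j + 1] = value

def quicksort_optimized(array, left=0, right=None):
    if right is None:
        right = len(array) - 1
    while right - left + 1 > CUTOFF:
        p = partition(array, left, right, left + (right - left) // 2)
        # Recurse only into the smaller partition and loop on the larger one,
        # so the recursion depth stays O(log n) even in unbalanced cases.
        if p - left < right - p:
            quicksort_optimized(array, left, p - 1)
            left = p + 1
        else:
            quicksort_optimized(array, p + 1, right)
            right = p - 1
    insertion_sort(array, left, right)    # short (possibly empty) ranges finished here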
Like merge sort, quicksort can also be parallelized due to its divide-and-conquer nature. Individual in-place partition operations are difficult to parallelize, but once divided, different sections of the list can be sorted in parallel. The following is a straightforward approach: If we have p processors, we can divide a list of n elements into p sublists in O(n) average time, then sort each of these in O((n/p) log(n/p)) average time. Ignoring the O(n) preprocessing and merge times, this is linear speedup. If the split is blind, ignoring the values, the merge naïvely costs O(n). If the split partitions based on a succession of pivots, it is tricky to parallelize and naïvely costs O(n). Given O(log n) or more processors, only O(n) time is required overall, whereas an approach with linear speedup would achieve O(log n) time overall.
One advantage of this simple parallel quicksort over other parallel sort algorithms is that no synchronization is required, but the disadvantage is that sorting is still O(n) and only a sublinear speedup of O(log n) is achieved. A new thread is started as soon as a sublist is available for it to work on and it does not communicate with other threads. When all threads complete, the sort is done.
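The following minimal sketch shows the shape of that thread-per-sublist approach, reusing the partition and quicksort functions from the earlier sketches; the limit of two thread-spawning levels is an arbitrary illustrative choice. Note that in CPython the global interpreter lock prevents real speedup for a CPU-bound sort, so this is purely structural; a practical version would use processes or a runtime with truly parallel threads.

import threading

def parallel_quicksort(array, left=0, right=None, spawn_levels=2):
    """Thread-per-sublist parallel quicksort sketch.

    spawn_levels limits how many recursion levels start a new thread (an
    arbitrary illustrative choice); below that, the sequential quicksort
    from the earlier sketch is used. The two recursive calls touch disjoint
    index ranges, so the only synchronization needed is the final join()."""
    if right is None:
        right = len(array) - 1
    if left >= right:
        return
    if spawn_levels <= 0:
        quicksort(array, left, right)
        return
    p = partition(array, left, right, left + (right - left) // 2)
    worker = threading.Thread(
        target=parallel_quicksort, args=(array, left, p - 1, spawn_levels - 1))
    worker.start()                                              # left part in a new thread
    parallel_quicksort(array, p + 1, right, spawn_levels - 1)   # right part in this thread
    worker.join()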
Other more sophisticated parallel sorting algorithms can achieve even better time bounds. For example, in 1991 David Powers described a parallelized quicksort (and a related radix sort) that can operate in O(log n) time on a CRCW PRAM with n processors by performing partitioning implicitly.
Formal analysis
Average-case analysis using discrete probability
Quicksort takes O(n log n) time on average, when the input is a random permutation. Why? For a start, it is not hard to see that the partition operation takes O(n) time.
In the most unbalanced case, each time we perform a partition we divide the list into two sublists of size 0 and n − 1 (for example, if all elements of the array are equal). This means each recursive call processes a list of size one less than the previous list. Consequently, we can make n − 1 nested calls before we reach a list of size 1. This means that the call tree is a linear chain of n − 1 nested calls. The ith call does O(n − i) work to do the partition, and the sum of (n − i) over all calls is O(n²), so in that case Quicksort takes O(n²) time. That is the worst case: given knowledge of which comparisons are performed by the sort, there are adaptive algorithms that are effective at generating worst-case input for quicksort on-the-fly, regardless of the pivot selection strategy.
In the most balanced case, each time we perform a partition we divide the list into two nearly equal pieces. This means each recursive call processes a list of half the size. Consequently, we can make only log₂ n nested calls before we reach a list of size 1. This means that the depth of the call tree is log₂ n. But no two calls at the same level of the call tree process the same part of the original list; thus, each level of calls needs only O(n) time all together (each call has some constant overhead, but since there are only O(n) calls at each level, this is subsumed in the O(n) factor). The result is that the algorithm uses only O(n log n) time.
In fact, it's not necessary to be perfectly balanced; even if each pivot splits the elements with 75% on one side and 25% on the other side (or any other fixed fraction), the call depth is still limited to log4/3 n, which is O(log n), so the total running time is still O(n log n).
So what happens on average? If the pivot has rank somewhere in the middle 50 percent, that is, between the 25th percentile and the 75th percentile, then it splits the elements with at least 25% and at most 75% on each side. If we could consistently choose a pivot from the middle 50 percent, we would only have to split the list O(log n) times before reaching lists of size 1, yielding an O(n log n) algorithm.
When the input is a random permutation, the pivot has a random rank, and so it is not guaranteed to be in the middle 50 percent. However, when we start from a random permutation, in each recursive call the pivot has a random rank in its list, and so it is in the middle 50 percent about half the time. That is good enough. Imagine that you flip a coin: heads means that the rank of the pivot is in the middle 50 percent, tails means that it isn't. Now imagine that you are flipping a coin over and over until you get k heads. Although this could take a long time, on average only 2k flips are required, and it is highly improbable that you won't get k heads after, say, 100k flips (this can be made rigorous using Chernoff bounds). By the same argument, Quicksort's recursion will terminate on average at a call depth of only O(log n). But if its average call depth is O(log n), and each level of the call tree processes at most n elements, the total amount of work done on average is the product, O(n log n). Note that the algorithm does not have to verify that the pivot is in the middle half; if we hit it any constant fraction of the times, that is enough for the desired complexity.
Average-case analysis using recurrences
An alternative approach is to set up a recurrence relation for the T(n) factor, the time needed to sort a list of size n. In the most unbalanced case, a single Quicksort call involves O(n) work plus two recursive calls on lists of size 0 and n − 1, so the recurrence relation is T(n) = O(n) + T(0) + T(n − 1) = O(n) + T(n − 1), which solves to T(n) = O(n²).
In the most balanced case, a single quicksort call involves O(n) work plus two recursive calls on lists of size n/2, so the recurrence relation is T(n) = O(n) + 2T(n/2).
The master theorem tells us that T(n) = O(n log n).
The outline of a formal proof of the O(n log n) expected time complexity follows. Assume that there are no duplicates, as duplicates could be handled with linear time pre- and post-processing, or considered cases easier than the analyzed. When the input is a random permutation, the rank of the pivot is uniform random from 0 to n−1. Then the resulting parts of the partition have sizes i and n−i−1, and i is uniform random from 0 to n−1. So, averaging over all possible splits and noting that the number of comparisons for the partition is n − 1, the average number of comparisons over all permutations of the input sequence can be estimated accurately by solving the recurrence relation: C(n) = n − 1 + (1/n) Σ_{i=0}^{n−1} [C(i) + C(n − i − 1)] = n − 1 + (2/n) Σ_{i=0}^{n−1} C(i).
Solving the recurrence gives C(n) = 2n ln n ≈ 1.39 n log₂ n.
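One standard route to that closed form (a sketch, starting from the recurrence exactly as stated above) is to multiply the recurrence by n, subtract the same identity written for n − 1 to eliminate the running sum, and then telescope:

\begin{aligned}
nC(n) &= n(n-1) + 2\sum_{i=0}^{n-1} C(i)\\
(n-1)C(n-1) &= (n-1)(n-2) + 2\sum_{i=0}^{n-2} C(i)\\
\text{subtracting:}\quad nC(n) &= (n+1)\,C(n-1) + 2(n-1)\\
\frac{C(n)}{n+1} &= \frac{C(n-1)}{n} + \frac{2(n-1)}{n(n+1)}
  \;\xrightarrow{\ \text{telescope, } C(1)=0\ }\;
  \frac{C(n)}{n+1} = \sum_{k=2}^{n}\frac{2(k-1)}{k(k+1)} \approx 2\ln n\\
C(n) &\approx 2n\ln n \approx 1.39\, n\log_2 n.
\end{aligned}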
This means that, on average, quicksort performs only about 39% worse than in its best case. In this sense it is closer to the best case than the worst case. Also note that a comparison sort cannot use less than log₂(n!) comparisons on average to sort n items (as explained in the article Comparison sort) and, in the case of large n, Stirling's approximation yields log₂(n!) ≈ n(log₂ n − log₂ e), so quicksort is not much worse than an ideal comparison sort. This fast average runtime is another reason for quicksort's practical dominance over other sorting algorithms.
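A quick empirical sanity check of that constant: the sketch below counts the comparisons made by the simple out-of-place quicksort on random permutations and divides by the classical exact average 2(n+1)Hₙ − 4n, which is approximately 1.39 n log₂ n for large n; ratios near 1.0 are expected. The input sizes and trial counts are arbitrary.

import math
import random

def count_comparisons(array):
    """Comparisons made by the simple out-of-place quicksort above:
    partitioning m elements around a pivot costs m - 1 comparisons."""
    if len(array) <= 1:
        return 0
    pivot, rest = array[0], array[1:]
    less = [x for x in rest if x <= pivot]
    greater = [x for x in rest if x > pivot]
    return len(rest) + count_comparisons(less) + count_comparisons(greater)

def expected_comparisons(n):
    """Classical exact average for random permutations: 2(n+1)H_n - 4n."""
    harmonic = sum(1.0 / k for k in range(1, n + 1))
    return 2 * (n + 1) * harmonic - 4 * n

for n in (1_000, 10_000, 50_000):
    trials = 10
    avg = sum(count_comparisons(random.sample(range(n), n)) for _ in range(trials)) / trials
    print(n, round(avg / expected_comparisons(n), 3))   # ratios close to 1.0 expected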
Analysis of Randomized quicksort
Using the same analysis, one can show that Randomized quicksort has the desirable property that, for any input, it requires only O(n log n) expected time (averaged over all choices of pivots). However, there exists a combinatorial proof, more elegant than both the analysis using discrete probability and the analysis using recurrences.
To each execution of Quicksort corresponds the following binary search tree (BST): the initial pivot is the root node; the pivot of the left half is the root of the left subtree, the pivot of the right half is the root of the right subtree, and so on. The number of comparisons of the execution of Quicksort equals the number of comparisons during the construction of the BST by a sequence of insertions. So, the average number of comparisons for randomized Quicksort equals the average cost of constructing a BST when the values inserted form a random permutation.
Consider a BST created by insertion of a sequence x_1, x_2, ..., x_n of values forming a random permutation. Let C denote the cost of creation of the BST. We have C = Σ_i Σ_{j<i} c_{i,j}, where c_{i,j} is a binary random variable recording whether, during the insertion of x_i, there was a comparison to x_j.
By linearity of expectation, the expected value E(C) of C is E(C) = Σ_i Σ_{j<i} Pr(during the insertion of x_i there was a comparison to x_j).
Fix i and j < i. The values x_1, ..., x_j, once sorted, define j + 1 intervals. The core structural observation is that x_i is compared to x_j in the algorithm if and only if x_i falls inside one of the two intervals adjacent to x_j.
Observe that since x_1, ..., x_n is a random permutation, x_1, ..., x_j, x_i is also a random permutation, so the probability that x_i is adjacent to x_j is exactly 2/(j + 1).
We end with a short calculation: E(C) = Σ_i Σ_{j<i} 2/(j + 1) = O(Σ_i log i) = O(n log n).
Space complexity
The space used by quicksort depends on the version used.
The in-place version of quicksort has a space complexity of O(log n), even in the worst case, when it is carefully implemented using the following strategies:
- in-place partitioning is used. This unstable partition requires O(1) space.
- After partitioning, the partition with the fewest elements is (recursively) sorted first, requiring at most O(log n) space. Then the other partition is sorted using tail recursion or iteration, which doesn't add to the call stack. This idea, as discussed above, was described by R. Sedgewick, and keeps the stack depth bounded by O(log n).
Quicksort with in-place and unstable partitioning uses only constant additional space before making any recursive call. Quicksort must store a constant amount of information for each nested recursive call. Since the best case makes at most O(log n) nested recursive calls, it uses O(log n) space. However, without Sedgewick's trick to limit the recursive calls, in the worst case quicksort could make O(n) nested recursive calls and need O(n) auxiliary space.
From a bit complexity viewpoint, variables such as left and right do not use constant space; it takes O(log n) bits to index into a list of n items. Because there are such variables in every stack frame, quicksort using Sedgewick's trick requires O((log n)²) bits of space. This space requirement isn't too terrible, though, since if the list contained distinct elements, it would need at least O(n log n) bits of space.
Another, less common, not-in-place, version of quicksort uses O(n) space for working storage and can implement a stable sort. The working storage allows the input array to be easily partitioned in a stable manner and then copied back to the input array for successive recursive calls. Sedgewick's optimization is still appropriate.
Selection-based pivoting
A selection algorithm chooses the kth smallest of a list of numbers; this is an easier problem in general than sorting. One simple but effective selection algorithm works nearly in the same manner as quicksort, except instead of making recursive calls on both sublists, it only makes a single tail-recursive call on the sublist which contains the desired element. This small change lowers the average complexity to linear or O(n) time, and makes it an in-place algorithm. A variation on this algorithm brings the worst-case time down to O(n) (see selection algorithm for more information).
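A sketch of that selection variant, commonly called quickselect, reusing the partition function from the earlier sketch; it is written iteratively, which is equivalent to the single tail-recursive call described above. The zero-based convention for k and the middle-index pivot are illustrative assumptions.

def quickselect(array, k, left=0, right=None):
    """Return the k-th smallest element of array (k = 0 gives the minimum),
    rearranging array in place; average-case O(n) because only one side
    of each partition is ever visited."""
    if right is None:
        right = len(array) - 1
    while True:
        if left == right:
            return array[left]
        p = partition(array, left, right, left + (right - left) // 2)
        if k == p:
            return array[p]
        elif k < p:
            right = p - 1     # the k-th smallest lies in the left part
        else:
            left = p + 1      # the k-th smallest lies in the right part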
Conversely, once we know a worst-case O(n) selection algorithm is available, we can use it to find the ideal pivot (the median) at every step of quicksort, producing a variant with worst-case O(n log n) running time. In practical implementations, however, this variant is considerably slower on average.
There are four well known variants of quicksort:
- Balanced quicksort: choose a pivot likely to represent the middle of the values to be sorted, and then follow the regular quicksort algorithm.
- External quicksort: The same as regular quicksort except the pivot is replaced by a buffer. First, read the M/2 first and last elements into the buffer and sort them. Read the next element from the beginning or end to balance writing. If the next element is less than the least of the buffer, write it to available space at the beginning. If greater than the greatest, write it to the end. Otherwise write the greatest or least of the buffer, and put the next element in the buffer. Keep the maximum lower and minimum upper keys written to avoid resorting middle elements that are in order. When done, write the buffer. Recursively sort the smaller partition, and loop to sort the remaining partition. This is a kind of three-way quicksort in which the middle partition (buffer) represents a sorted subarray of elements that are approximately equal to the pivot.
- Three-way radix quicksort (developed by Sedgewick and also known as multikey quicksort): is a combination of radix sort and quicksort. Pick an element from the array (the pivot) and consider the first character (key) of the string (multikey). Partition the remaining elements into three sets: those whose corresponding character is less than, equal to, and greater than the pivot's character. Recursively sort the "less than" and "greater than" partitions on the same character. Recursively sort the "equal to" partition by the next character (key). Given we sort using bytes or words of length W bits, the best case is O(KN) and the worst case O(2^K N) or at least O(N²) as for standard quicksort, given for unique keys N < 2^K, and K is a hidden constant in all standard comparison sort algorithms including quicksort. This is a kind of three-way quicksort in which the middle partition represents a (trivially) sorted subarray of elements that are exactly equal to the pivot. (A Python sketch of this variant follows this list.)
- Quick radix sort (also developed by Powers as a o(K) parallel PRAM algorithm). This is again a combination of radix sort and quicksort but the quicksort left/right partition decision is made on successive bits of the key, and is thus O(KN) for N K-bit keys. Note that all comparison sort algorithms effectively assume an ideal K of O(log N), as if K is smaller we can sort in O(N) using a hash table or integer sorting, and if K >> log N but elements are unique within O(log N) bits, the remaining bits will not be looked at by either quicksort or quick radix sort, and otherwise all comparison sorting algorithms will also have the same overhead of looking through O(K) relatively useless bits but quick radix sort will avoid the worst case O(N²) behaviours of standard quicksort and quick radix sort, and will be faster even in the best case of those comparison algorithms under these conditions of uniqueprefix(K) >> log N. See Powers for further discussion of the hidden overheads in comparison, radix and parallel sorting.
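Here is a minimal Python sketch of the three-way radix (multikey) quicksort described in the list above, specialized to a list of strings; the sentinel value -1 for "past the end of the string" and the use of the first element's character as the pivot are illustrative assumptions.

def _char_at(s, d):
    return ord(s[d]) if d < len(s) else -1   # -1 makes shorter strings sort first

def multikey_quicksort(a, lo=0, hi=None, d=0):
    """Three-way radix (multikey) quicksort on a list of strings, in place."""
    if hi is None:
        hi = len(a) - 1
    if lo >= hi:
        return
    pivot = _char_at(a[lo], d)               # illustrative pivot: first string's d-th character
    lt, gt, i = lo, hi, lo + 1
    while i <= gt:                           # three-way partition on character d
        c = _char_at(a[i], d)
        if c < pivot:
            a[lt], a[i] = a[i], a[lt]
            lt += 1
            i += 1
        elif c > pivot:
            a[i], a[gt] = a[gt], a[i]
            gt -= 1
        else:
            i += 1
    multikey_quicksort(a, lo, lt - 1, d)     # "less than" partition, same character
    if pivot >= 0:
        multikey_quicksort(a, lt, gt, d + 1) # "equal to" partition, next character
    multikey_quicksort(a, gt + 1, hi, d)     # "greater than" partition, same character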
Comparison with other sorting algorithms
Quicksort is a space-optimized version of the binary tree sort. Instead of inserting items sequentially into an explicit tree, quicksort organizes them concurrently into a tree that is implied by the recursive calls. The algorithms make exactly the same comparisons, but in a different order. An often desirable property of a sorting algorithm is stability - that is the order of elements that compare equal is not changed, allowing controlling order of multikey tables (e.g. directory or folder listings) in a natural way. This property is hard to maintain for in situ (or in place) quicksort (that uses only constant additional space for pointers and buffers, and log N additional space for the management of explicit or implicit recursion). For variant quicksorts involving extra memory due to representations using pointers (e.g. lists or trees) or files (effectively lists), it is trivial to maintain stability. The more complex, or disk-bound, data structures tend to increase time cost, in general making increasing use of virtual memory or disk.
The most direct competitor of quicksort is heapsort. Heapsort's worst-case running time is always O(n log n). But, heapsort is assumed to be on average somewhat slower than standard in-place quicksort. This is still debated and in research, with some publications indicating the opposite. Introsort is a variant of quicksort that switches to heapsort when a bad case is detected to avoid quicksort's worst-case running time. If it is known in advance that heapsort is going to be necessary, using it directly will be faster than waiting for introsort to switch to it.
Quicksort also competes with mergesort, another recursive sort algorithm but with the benefit of worst-case O(n log n) running time. Mergesort is a stable sort, unlike standard in-place quicksort and heapsort, and can be easily adapted to operate on linked lists and very large lists stored on slow-to-access media such as disk storage or network attached storage. Like mergesort, quicksort can be implemented as an in-place stable sort, but this is seldom done. Although quicksort can be written to operate on linked lists, it will often suffer from poor pivot choices without random access. The main disadvantage of mergesort is that, when operating on arrays, efficient implementations require O(n) auxiliary space, whereas the variant of quicksort with in-place partitioning and tail recursion uses only O(log n) space. (Note that when operating on linked lists, mergesort only requires a small, constant amount of auxiliary storage.)
Bucket sort with two buckets is very similar to quicksort; the pivot in this case is effectively the value in the middle of the value range, which does well on average for uniformly distributed inputs.
See also
- Steven S. Skiena (27 April 2011). The Algorithm Design Manual. Springer. p. 129. ISBN 978-1-84800-069-8. Retrieved 27 November 2012.
- "Data structures and algorithm: Quicksort". Auckland University.
- Shustek, L. (2009). "Interview: An interview with C.A.R. Hoare". Comm. ACM 52 (3): 38–41. doi:10.1145/1467247.1467261.
- Sedgewick, Robert (1 September 1998). Algorithms In C: Fundamentals, Data Structures, Sorting, Searching, Parts 1-4 (3 ed.). Pearson Education. ISBN 978-81-317-1291-7. Retrieved 27 November 2012.
- Sedgewick, R. (1978). "Implementing Quicksort programs". Comm. ACM 21 (10): 847–857. doi:10.1145/359619.359631.
- qsort.c in GNU libc.
- Miller, Russ; Boxer, Laurence (2000). Algorithms sequential & parallel: a unified approach. Prentice Hall. ISBN 978-0-13-086373-7. Retrieved 27 November 2012.
- David M. W. Powers, Parallelized Quicksort and Radixsort with Optimal Speedup, Proceedings of International Conference on Parallel Computing Technologies. Novosibirsk. 1991.
- McIlroy, M. D. (1999). "A killer adversary for quicksort". Software: Practice and Experience 29 (4): 341–237. doi:10.1002/(SICI)1097-024X(19990410)29:4<341::AID-SPE237>3.3.CO;2-I.
- David M. W. Powers, Parallel Unification: Practical Complexity, Australasian Computer Architecture Workshop, Flinders University, January 1995
- Hsieh, Paul (2004). "Sorting revisited.". www.azillionmonkeys.com. Retrieved 26 April 2010.
- MacKay, David (1 December 2005). "Heapsort, Quicksort, and Entropy". users.aims.ac.za/~mackay. Retrieved 26 April 2010.
- A Java implementation of in-place stable quicksort
- Dean, B. C. (2006). "A simple expected running time analysis for randomized "divide and conquer" algorithms". Discrete Applied Mathematics 154: 1–5. doi:10.1016/j.dam.2005.07.005.
- Hoare, C. A. R. (1961). "Algorithm 63: Partition". Comm. ACM 4 (7): 321. doi:10.1145/366622.366642.
- Hoare, C. A. R. (1961). "Algorithm 64: Quicksort". Comm. ACM 4 (7): 321. doi:10.1145/366622.366644.
- Hoare, C. A. R. (1961). "Algorithm 65: Find". Comm. ACM 4 (7): 321–322. doi:10.1145/366622.366647.
- Hoare, C. A. R. (1962). "Quicksort". Comput. J. 5 (1): 10–16. doi:10.1093/comjnl/5.1.10. (Reprinted in Hoare and Jones: Essays in computing science, 1989.)
- Musser, D. R. (1997). "Introspective Sorting and Selection Algorithms". Software: Practice and Experience 27 (8): 983–993. doi:10.1002/(SICI)1097-024X(199708)27:8<983::AID-SPE117>3.0.CO;2-#.
- Donald Knuth. The Art of Computer Programming, Volume 3: Sorting and Searching, Third Edition. Addison-Wesley, 1997. ISBN 0-201-89685-0. Pages 113–122 of section 5.2.2: Sorting by Exchanging.
- Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein. Introduction to Algorithms, Second Edition. MIT Press and McGraw-Hill, 2001. ISBN 0-262-03293-7. Chapter 7: Quicksort, pp. 145–164.
- A. LaMarca and R. E. Ladner. "The Influence of Caches on the Performance of Sorting." Proceedings of the Eighth Annual ACM-SIAM Symposium on Discrete Algorithms, 1997. pp. 370–379.
- Faron Moller. Analysis of Quicksort. CS 332: Designing Algorithms. Department of Computer Science, Swansea University.
- Martínez, C.; Roura, S. (2001). "Optimal Sampling Strategies in Quicksort and Quickselect". SIAM J. Comput. 31 (3): 683–705. doi:10.1137/S0097539700382108.
- Bentley, J. L.; McIlroy, M. D. (1993). "Engineering a sort function". Software: Practice and Experience 23 (11): 1249–1265. doi:10.1002/spe.4380231105.
The Wikibook Algorithm implementation has a page on the topic of: Quicksort
- Animated Sorting Algorithms: Quick Sort – graphical demonstration and discussion of quick sort
- Animated Sorting Algorithms: 3-Way Partition Quick Sort – graphical demonstration and discussion of 3-way partition quick sort
- Interactive Tutorial for Quicksort
- Quicksort applet with "level-order" recursive calls to help improve algorithm analysis
- Open Data Structures - Section 11.1.2 - Quicksort
- Multidimensional quicksort in Java
- Literate implementations of Quicksort in various languages on LiteratePrograms
- A colored graphical Java applet which allows experimentation with initial state and shows statistics | http://en.wikipedia.org/wiki/Quicksort | 13 |
67 | Validity and Soundness
A deductive argument is said to be valid if and only if it takes a form that makes it impossible for the premises to be true and the conclusion nevertheless to be false. Otherwise, a deductive argument is said to be invalid.
A deductive argument is sound if and only if it is both valid, and all of its premises are actually true. Otherwise, a deductive argument is unsound.
According to the definition of a deductive argument (see the Deduction and Induction), the author of a deductive argument always intends that the premises provide the sort of justification for the conclusion whereby if the premises are true, the conclusion is guaranteed to be true as well. Loosely speaking, if the author’s process of reasoning is a good one, if the premises actually do provide this sort of justification for the conclusion, then the argument is valid.
In effect, an argument is valid if the truth of the premises logically guarantees the truth of the conclusion. The following argument is valid, because it is impossible for the premises to be true and the conclusion to nevertheless be false:
Either Elizabeth owns a Honda or she owns a Saturn.
Elizabeth does not own a Honda.
Therefore, Elizabeth owns a Saturn.
It is important to stress that the premises of an argument do not have actually to be true in order for the argument to be valid. An argument is valid if the premises and conclusion are related to each other in the right way so that if the premises were true, then the conclusion would have to be true as well. We can recognize in the above case that even if one of the premises is actually false, that if they had been true the conclusion would have been true as well. Consider, then an argument such as the following:
All toasters are items made of gold.
All items made of gold are time-travel devices.
Therefore, all toasters are time-travel devices.
Obviously, the premises in this argument are not true. It may be hard to imagine these premises being true, but it is not hard to see that if they were true, their truth would logically guarantee the conclusion’s truth.
It is easy to see that the previous example is not an example of a completely good argument. A valid argument may still have a false conclusion. When we construct our arguments, we must aim to construct one that is not only valid, but sound. A sound argument is one that is not only valid, but begins with premises that are actually true. The example given about toasters is valid, but not sound. However, the following argument is both valid and sound:
No felons are eligible voters.
Some professional athletes are felons.
Therefore, some professional athletes are not eligible voters.
Here, not only do the premises provide the right sort of support for the conclusion, but the premises are actually true. Therefore, so is the conclusion. Although it is not part of the definition of a sound argument, because sound arguments both start out with true premises and have a form that guarantees that the conclusion must be true if the premises are, sound arguments always end with true conclusions.
It should be noted that both invalid, as well as valid but unsound, arguments can nevertheless have true conclusions. One cannot reject the conclusion of an argument simply by discovering a given argument for that conclusion to be flawed.
Whether or not the premises of an argument are true depends on their specific content. However, according to the dominant understanding among logicians, the validity or invalidity of an argument is determined entirely by its logical form. The logical form of an argument is that which remains of it when one abstracts away from the specific content of the premises and the conclusion, i.e., words naming things, their properties and relations, leaving only those elements that are common to discourse and reasoning about any subject matter, i.e., words such as “all”, “and”, “not”, “some”, etc. One can represent the logical form of an argument by replacing the specific content words with letters used as place-holders or variables.
For example, consider these two arguments:
All tigers are mammals.
No mammals are creatures with scales.
Therefore, no tigers are creatures with scales.
All spider monkeys are elephants.
No elephants are animals.
Therefore, no spider monkeys are animals.
These arguments share the same form:
All A are B;
No B are C;
Therefore, No A are C.
All arguments with this form are valid. Because they have this form, the examples above are valid. However, the first example is sound while the second is unsound, because its premises are false. Now consider:
All basketballs are round.
The Earth is round.
Therefore, the Earth is a basketball.
All popes reside at the Vatican.
John Paul II resides at the Vatican.
Therefore, John Paul II is a pope.
These arguments also have the same form:
All A’s are F;
X is F;
Therefore, X is an A.
Arguments with this form are invalid. This is easy to see with the first example. The second example may seem like a good argument because the premises and the conclusion are all true, but note that the conclusion’s truth isn’t guaranteed by the premises’ truth. It could have been possible for the premises to be true and the conclusion false. This argument is invalid, and all invalid arguments are unsound.
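To make the claim that validity turns purely on form a bit more concrete, here is a rough Python sketch (an informal illustration only; the set-based encoding and the tiny two-element domain are my own choices) that searches for countermodels to the two forms above. Finding a countermodel demonstrates invalidity; failing to find one over such a small domain merely suggests, rather than proves, validity.

    from itertools import product, combinations

    def subsets(domain):
        # All subsets of a finite domain.
        return [set(c) for r in range(len(domain) + 1)
                for c in combinations(domain, r)]

    domain = {0, 1}

    def countermodel_form_1():
        # Form: All A are B; No B are C; therefore, No A are C.
        for A, B, C in product(subsets(domain), repeat=3):
            premises = A <= B and not (B & C)
            conclusion = not (A & C)
            if premises and not conclusion:
                return A, B, C
        return None

    def countermodel_form_2():
        # Form: All A's are F; X is F; therefore, X is an A.
        for A, F in product(subsets(domain), repeat=2):
            for x in domain:
                premises = A <= F and x in F
                conclusion = x in A
                if premises and not conclusion:
                    return A, F, x
        return None

    print(countermodel_form_1())  # None: no countermodel turns up
    print(countermodel_form_2())  # e.g. (set(), {0}, 0): true premises, false conclusion

The first search comes up empty, in keeping with the claim that the form is valid; the second immediately finds an interpretation with true premises and a false conclusion, which is exactly what invalidity amounts to.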
While it is accepted by most contemporary logicians that logical validity and invalidity is determined entirely by form, there is some dissent. Consider, for example, the following arguments:
My table is circular. Therefore, it is not square shaped.
Juan is a bachelor. Therefore, he is not married.
These arguments, at least on the surface, have the form:
x is F;
Therefore, x is not G.
Arguments of this form are not valid as a rule. However, it seems clear in these particular cases that it is, in some strong sense, impossible for the premises to be true while the conclusion is false. However, many logicians would respond to these complications in various ways. Some might insist (although this is controversial) that these arguments actually contain implicit premises such as "Nothing is both circular and square shaped" or "All bachelors are unmarried," which, while themselves necessary truths, nevertheless play a role in the form of these arguments. It might also be suggested, especially with the first argument, that while (even without the additional premise) there is a necessary connection between the premise and the conclusion, the sort of necessity involved is something other than "logical" necessity, and hence that this argument (in the simple form) should not be regarded as logically valid. Lastly, especially with regard to the second example, it might be suggested that because "bachelor" is defined as "adult unmarried male", the true logical form of the argument is the following universally valid form:
x is F and not G and H;
Therefore, x is not G.
The logical form of a statement is not always as easy to discern as one might expect. For example, statements that seem to have the same surface grammar can nevertheless differ in logical form. Take for example the two statements:
(1) Tony is a ferocious tiger.
(2) Clinton is a lame duck.
Despite their apparent similarity, only (1) has the form "x is an A that is F". From it one can validly infer that Tony is a tiger. One cannot validly infer from (2) that Clinton is a duck. Indeed, one and the same sentence can be used in different ways in different contexts. Consider the statement:
(3) The King and Queen are visiting dignitaries.
It is not clear what the logical form of this statement is. Either there are dignitaries that the King and Queen are visiting, in which case the sentence (3) has the same logical form as “The King and Queen are playing violins,” or the King and Queen are themselves the dignitaries who are visiting from somewhere else, in which case the sentence has the same logical form as “The King and Queen are sniveling cowards.” Depending on which logical form the statement has, inferences may be valid or invalid. Consider:
The King and Queen are visiting dignitaries. Visiting dignitaries is always boring. Therefore, the King and Queen are doing something boring.
Only if the statement is given the first reading can this argument be considered to be valid.
Because of the difficulty in identifying the logical form of an argument, and the potential deviation of logical form from grammatical form in ordinary language, contemporary logicians typically make use of artificial logical languages in which logical form and grammatical form coincide. In these artificial languages, certain symbols, similar to those used in mathematics, are used to represent those elements of form analogous to ordinary English words such as "all", "not", "or", "and", etc. The use of an artificially constructed language makes it easier to specify a set of rules that determine whether or not a given argument is valid or invalid. Hence, the study of which deductive argument forms are valid and which are invalid is often called "formal logic" or "symbolic logic".
In short, a deductive argument must be evaluated in two ways. First, one must ask if the premises provide support for the conclusion by examing the form of the argument. If they do, then the argument is valid. Then, one must ask whether the premises are true or false in actuality. Only if an argument passes both these tests is it sound. However, if an argument does not pass these tests, its conclusion may still be true, despite that no support for its truth is given by the argument.
Note: there are other, related, uses of these words that are found within more advanced mathematical logic. In that context, a formula (on its own) written in a logical language is said to be valid if it comes out as true (or “satisfied”) under all admissible or standard assignments of meaning to that formula within the intended semantics for the logical language. Moreover, an axiomatic logical calculus (in its entirety) is said to be sound if and only if all theorems derivable from the axioms of the logical calculus are semantically valid in the sense just described.
For a more sophisticated look at the nature of logical validity, see the articles on “Logical Consequence” in this encyclopedia. The articles on “Argument” and “Deductive and Inductive Arguments” in this encyclopedia may also be helpful.
| http://www.iep.utm.edu/val-snd/ | 13
49 | Categorical syllogisms are a special type of argument which has been studied for more than two thousand years now, since the time of Aristotle. The categorical syllogism is the central piece of Aristotelian logic, and it is still the most visible type of argument in logic courses and textbooks today.
Categorical syllogisms, no matter what they are about, have a rigorous structure:
- There are exactly three categorical propositions.
- Two of those propositions are premises; the other is the conclusion.
- There are exactly three terms, each appearing only twice.
Given these structural requirements, categorical syllogisms are rather cumbersome and unnatural. The structure, however, is transparent, and the structural properties of the syllogism (the relationships asserted between the three terms) determine whether an argument is valid or not.
Consider the following argument:
(1) All men are mortal. (All M are P)
(2) Socrates is a man. (Some S are M)
(3) Therefore, Socrates is mortal. (Some S are P)
Each of the three terms in a categorical syllogism occurs in exactly two of the propositions in the argument. For ease of identification and reference, these terms are called the major, minor and middle terms of the argument. The term which occurs in both premises is called the middle term and is usually represented by the letter M. The term which occurs as the predicate term in the conclusion is called the major term and it is usually represented by the letter P. The premise which has the major term is called the major premise. The subject of the conclusion is called the minor term, represented by the letter S, and the premise with the minor term is called the minor premise. So, in the example above, 'Socrates' is the minor term, 'mortals' is the major term, and 'men' is the middle term.
In the example above, it appears that the conclusion follows from the premises. But how can we be sure? Fortunately, there are two distinct methods available to us for testing the validity of a categorical syllogism. The first of the methods relies on an understanding of properties of categorical propositions : quantity, quality, and the distribution of terms.
Four rules apply to all valid categorical syllogisms:
Rule 1: In a valid categorical syllogism, the middle term must be distributed in at least one premise.
Rule 2: In a valid categorical syllogism, any term that is distributed in the conclusion must be distributed in the premises.
Rule 3: In a valid categorical syllogism, the number of negative premises must be equal to the number of negative conclusions.
Rule 4: In a valid categorical syllogism a particular conclusion cannot be drawn from exclusively universal premises unless one assumes existential import. We do not assume existential import, and we will refer to arguments that would be valid if we did as traditionally valid.
All and only those arguments that pass each of these tests are valid. Failure to satisfy one or more of the rules renders the argument non-valid. Applying these rules to our argument, we see that the middle term, 'men', is distributed in the first premise (the subject of an A proposition is distributed), so the argument passes the first test. Neither of the terms in the conclusion is distributed (both terms in an I proposition are undistributed), so the argument passes the second test. There are no negative premises and no negative conclusions, and 0 = 0, so the argument passes the third test. Finally, the second premise is particular, so the argument passes the fourth test even though the conclusion is particular.
Consider another example:
(1) Some logicians wear earrings.
(2) Some persons who wear earrings are not rational.
(3) Therefore, some logicians are not rational.
This, too, is a categorical syllogism. However, this argument is not valid. The reason is that even though some logicians wear earrings and some persons who wear earrings are not rational, it does not necessarily follow that some logicians are not rational. In fact, the premises could both be true while the conclusion is false. In terms of the four rules, this argument violates Rule 1: the middle term, 'those who wear earrings', is not distributed in either of the premises.
Fallacies and Rule Violations
Categorical syllogisms that violate one or more of the rules commit a fallacy in reasoning. Different violations are given specific names. An argument that violates Rule 1 commits the fallacy of the undistributed middle. If the minor term is distributed in the conclusion but not in the minor premise, the argument commits the fallacy of an illicit minor. If the major term is distributed in the conclusion but not in the major premise, the argument commits the fallacy of an illicit major. An argument with two negative premises commits the fallacy of two negatives; any other violation of Rule 3 is called the fallacy of negative terms. Finally, an argument that violates Rule 4 commits the existential fallacy.
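To see how mechanical these tests are, here is a rough Python sketch (an illustration only; the encoding of each proposition as a (type, subject, predicate) triple and the function names are my own) that applies the four rules to the two example arguments above:

    # Distribution of terms by proposition type:
    #   A: All S are P       (subject distributed)
    #   E: No S are P        (both distributed)
    #   I: Some S are P      (neither distributed)
    #   O: Some S are not P  (predicate distributed)
    DISTRIBUTION = {"A": (True, False), "E": (True, True),
                    "I": (False, False), "O": (False, True)}
    NEGATIVE = {"E", "O"}
    UNIVERSAL = {"A", "E"}

    def distributed(prop, term):
        kind, subj, pred = prop
        subj_d, pred_d = DISTRIBUTION[kind]
        return (term == subj and subj_d) or (term == pred and pred_d)

    def check(major, minor, conclusion):
        # Each proposition is (type, subject, predicate), e.g. ("A", "M", "P").
        S, P = conclusion[1], conclusion[2]
        middle = (({major[1], major[2]} & {minor[1], minor[2]}) - {S, P}).pop()
        faults = []
        # Rule 1: the middle term must be distributed in at least one premise.
        if not (distributed(major, middle) or distributed(minor, middle)):
            faults.append("undistributed middle")
        # Rule 2: a term distributed in the conclusion must be distributed in its premise.
        if distributed(conclusion, P) and not distributed(major, P):
            faults.append("illicit major")
        if distributed(conclusion, S) and not distributed(minor, S):
            faults.append("illicit minor")
        # Rule 3: negative premises must match negative conclusions in number.
        neg_premises = sum(p[0] in NEGATIVE for p in (major, minor))
        if neg_premises == 2:
            faults.append("two negatives")
        elif neg_premises != int(conclusion[0] in NEGATIVE):
            faults.append("negative terms")
        # Rule 4: no particular conclusion from two universal premises.
        if all(p[0] in UNIVERSAL for p in (major, minor)) and conclusion[0] not in UNIVERSAL:
            faults.append("existential fallacy")
        return faults or ["valid"]

    # Socrates: All M are P; Some S are M; therefore Some S are P.
    print(check(("A", "M", "P"), ("I", "S", "M"), ("I", "S", "P")))  # ['valid']
    # Earrings: Some E are not R (major); Some L are E (minor); so Some L are not R.
    print(check(("O", "E", "R"), ("I", "L", "E"), ("O", "L", "R")))  # ['undistributed middle']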
Many people find the rule tests for validity unnatural and cumbersome. Fortunately, there is another method for testing categorical syllogisms for validity that involves Venn diagrams.
| http://cstl-cla.semo.edu/hill/PL120/notes/syllogisms.htm | 13
34 |
A statistical hypothesis test is a method of making statistical decisions from and about experimental data. Null-hypothesis testing just answers the question of "how well the findings fit the possibility that chance factors alone might be responsible." This is done by asking and answering a hypothetical question. One use is deciding whether experimental results contain enough information to cast doubt on conventional wisdom.
As an example, consider determining whether a suitcase contains some radioactive material. Placed under a Geiger counter, it produces 10 counts per minute. The null hypothesis is that no radioactive material is in the suitcase and that all measured counts are due to ambient radioactivity typical of the surrounding air and harmless objects in a suitcase. We can then calculate how likely it is that we would observe 10 counts per minute if the null hypothesis were true. If that is likely, for example if the null hypothesis predicts on average 9 counts per minute with a standard deviation of 1 count per minute, we say that the suitcase is compatible with the null hypothesis (which does not imply that there is no radioactive material; we simply cannot tell from this measurement). On the other hand, if the null hypothesis predicts, for example, 1 count per minute with a standard deviation of 1 count per minute, then the suitcase is not compatible with the null hypothesis, and other factors are likely responsible for the measurements.
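As a minimal sketch of that calculation (assuming the counts can be treated as approximately normal, as the example implies, and that SciPy is available):

    from scipy.stats import norm

    observed = 10  # counts per minute measured from the suitcase

    # Scenario 1: the null hypothesis predicts a mean of 9 counts/min, std. dev. 1.
    z1 = (observed - 9) / 1
    # Scenario 2: the null hypothesis predicts a mean of 1 count/min, std. dev. 1.
    z2 = (observed - 1) / 1

    # One-sided probability of a reading at least this high under the null hypothesis.
    print(z1, norm.sf(z1))  # z = 1.0, p ~ 0.16: compatible with the null hypothesis
    print(z2, norm.sf(z2))  # z = 9.0, p ~ 1e-19: not compatible with the null hypothesis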
The test described here is more fully the null-hypothesis statistical significance test. The null hypothesis is a conjecture that exists solely to be falsified by the sample. Statistical significance is a possible finding of the test - that the sample is unlikely to have occurred by chance given the truth of the null hypothesis. The name of the test describes its formulation and its possible outcome. One characteristic of the test is its crisp decision: reject or do not reject (which is not the same as accept). A calculated value is compared to a threshold.
One may be faced with the problem of making a definite decision with respect to an uncertain hypothesis which is known only through its observable consequences. A statistical hypothesis test, or more briefly, hypothesis test, is an algorithm to choose between the alternatives (for or against the hypothesis) which minimizes certain risks.
This article describes the commonly used frequentist treatment of hypothesis testing. From the Bayesian point of view, it is appropriate to treat hypothesis testing as a special case of normative decision theory (specifically a model selection problem) and it is possible to accumulate evidence in favor of (or against) a hypothesis using concepts such as likelihood ratios known as Bayes factors.
There are several preparations we make before we observe the data.
- The null hypothesis must be stated in mathematical/statistical terms that make it possible to calculate the probability of possible samples assuming the hypothesis is correct. For example: The mean response to treatment being tested is equal to the mean response to the placebo in the control group. Both responses have the normal distribution with this unknown mean and the same known standard deviation ... (value).
- A test statistic must be chosen that will summarize the information in the sample that is relevant to the hypothesis. In the example given above, it might be the numerical difference between the two sample means, m1 − m2.
- The distribution of the test statistic is used to calculate the probabilities of sets of possible values (usually an interval or union of intervals). In this example, the difference between sample means would have a normal distribution with a standard deviation equal to the common standard deviation times the factor √(1/n1 + 1/n2), where n1 and n2 are the sample sizes.
- Among all the sets of possible values, we must choose one that we think represents the most extreme evidence against the hypothesis. That is called the critical region of the test statistic. The probability of the test statistic falling in the critical region when the null hypothesis is correct, is called the alpha value (or size) of the test.
- The probability that a sample falls in the critical region when the parameter is θ, where θ is a value consistent with the alternative hypothesis, is called the power of the test at θ. The power function of a critical region is the function that maps θ to the power of the test at θ.
After the data are available, the test statistic is calculated and we determine whether it is inside the critical region.
If the test statistic is inside the critical region, then our conclusion is one of the following:
- Reject the null hypothesis. (Therefore the critical region is sometimes called the rejection region, while its complement is the acceptance region.)
- An event of probability less than or equal to alpha has occurred.
The researcher has to choose between these logical alternatives. In the example we would say: the observed response to treatment is statistically significant.
If the test statistic is outside the critical region, the only conclusion is that there is not enough evidence to reject the null hypothesis. This is not the same as evidence in favor of the null hypothesis. That we cannot obtain using these arguments, since lack of evidence against a hypothesis is not evidence for it. On this basis, statistical research progresses by eliminating error, not by finding the truth.
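Putting the preparation and decision steps together for the treatment-versus-placebo example, a minimal sketch might look like this (the sample means, common standard deviation, and sample sizes are made-up values; SciPy is assumed to be available):

    from math import sqrt
    from scipy.stats import norm

    def two_sample_z_decision(m1, m2, sigma, n1, n2, alpha=0.05):
        # H0: the two population means are equal; sigma is the known common std. dev.
        se = sigma * sqrt(1 / n1 + 1 / n2)   # std. deviation of m1 - m2 under H0
        z = (m1 - m2) / se                   # the test statistic
        z_crit = norm.ppf(1 - alpha / 2)     # two-sided critical value (about 1.96)
        return z, abs(z) > z_crit            # True means the statistic is in the critical region

    z, reject = two_sample_z_decision(m1=10.3, m2=9.6, sigma=1.5, n1=40, n2=40)
    print(z, reject)  # z is about 2.09 > 1.96, so H0 is rejected at alpha = 0.05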
Definition of terms
Following the exposition in Lehmann and Romano, we shall make some definitions:
- Simple hypothesis
- Any hypothesis which specifies the population distribution completely.
- Composite hypothesis
- Any hypothesis which does not specify the population distribution completely.
- Statistical test
- A decision function that takes its values in the set of hypotheses.
- Region of acceptance
- The set of values for which we fail to reject the null hypothesis.
- Region of rejection / Critical region
- The set of values of the test statistic for which the null hypothesis is rejected.
- Power of a test (1 − β)
- The test's probability of correctly rejecting the null hypothesis. The complement of the false negative rate
- Size / Significance level of a test (α)
- For simple hypotheses, this is the test's probability of incorrectly rejecting the null hypothesis. The false positive rate. For composite hypotheses this is the upper bound of the probability of rejecting the null hypothesis over all cases covered by the null hypothesis.
- Most powerful test
- For a given size or significance level, the test with the greatest power.
- Uniformly most powerful test (UMP)
- A test with the greatest power for all values of the parameter being tested.
- Unbiased test
- For a specific alternative hypothesis, a test is said to be unbiased when the probability of rejecting the null hypothesis is not less than the significance level when the alternative is true and is less than or equal to the significance level when the null hypothesis is true.
- Uniformly most powerful unbiased (UMPU)
- A test which is UMP in the set of all unbiased tests.
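As an illustration of the size and power definitions above, the following sketch computes the power of a one-sided z-test with known σ (the parameter values are arbitrary; SciPy is assumed):

    from math import sqrt
    from scipy.stats import norm

    def power_one_sided_z(mu0, mu1, sigma, n, alpha=0.05):
        # Power of the test of H0: mu = mu0 against mu > mu0, evaluated at mu = mu1.
        z_crit = norm.ppf(1 - alpha)             # edge of the critical region (size alpha)
        shift = (mu1 - mu0) / (sigma / sqrt(n))  # how far the alternative sits from H0
        return norm.sf(z_crit - shift)           # P(statistic falls in critical region | mu1)

    print(power_one_sided_z(mu0=0.0, mu1=0.5, sigma=1.0, n=25))  # roughly 0.80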
Common test statistics
| Test | Test statistic | Requirements |
| One-sample z-test | z = (x̄ − μ0) / (σ/√n) | (Normal distribution or n > 30) and σ known. (z is the distance from the mean in standard deviations. It is possible to calculate a minimum proportion of a population that falls within n standard deviations; see Chebyshev's inequality.) |
| Two-sample z-test | z = ((x̄1 − x̄2) − (μ1 − μ2)) / √(σ1²/n1 + σ2²/n2) | Normal distribution and independent observations and (σ1 and σ2 known) |
| One-sample t-test | t = (x̄ − μ0) / (s/√n), df = n − 1 | (Normal population or n > 30) and σ unknown |
| Paired t-test | t = (d̄ − d0) / (sd/√n), df = n − 1 | (Normal population of differences or n > 30) and σ unknown |
| One-proportion z-test | z = (p̂ − p0) / √(p0(1 − p0)/n) | n·p > 10 and n·(1 − p) > 10 |
| Two-proportion z-test, equal variances | z = (p̂1 − p̂2) / √(p̂(1 − p̂)(1/n1 + 1/n2)), with p̂ the pooled proportion | n1·p1 > 5 and n1·(1 − p1) > 5 and n2·p2 > 5 and n2·(1 − p2) > 5 and independent observations |
| Two-proportion z-test, unequal variances | z = ((p̂1 − p̂2) − d0) / √(p̂1(1 − p̂1)/n1 + p̂2(1 − p̂2)/n2) | n1·p1 > 5 and n1·(1 − p1) > 5 and n2·p2 > 5 and n2·(1 − p2) > 5 and independent observations |
| Two-sample pooled t-test | t = ((x̄1 − x̄2) − d0) / (sp·√(1/n1 + 1/n2)), where sp² = ((n1 − 1)s1² + (n2 − 1)s2²) / (n1 + n2 − 2), df = n1 + n2 − 2 | (Normal populations or n1 + n2 > 40) and independent observations and σ1 = σ2 and (σ1 and σ2 unknown) |
| Two-sample unpooled t-test | t = ((x̄1 − x̄2) − d0) / √(s1²/n1 + s2²/n2), df = min(n1, n2) − 1 (conservative) | (Normal populations or n1 + n2 > 40) and independent observations and σ1 ≠ σ2 and (σ1 and σ2 unknown) |
Definition of symbols: n = sample size; x̄ = sample mean; μ0 = population mean; σ = population standard deviation; s = sample std. deviation; t = t statistic; df = degrees of freedom; n1 = sample 1 size; n2 = sample 2 size; s1 = sample 1 std. deviation; s2 = sample 2 std. deviation; d̄ = sample mean of differences; d0 = population mean difference; sd = std. deviation of differences; p̂ = sample proportion; p0 = hypothesized proportion; p̂1 = proportion 1; p̂2 = proportion 2; μ1 = population 1 mean; μ2 = population 2 mean; min(n1, n2) = minimum of n1 or n2.
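As a quick sanity check of one row of the table, this sketch computes the one-sample t statistic directly from the formula and compares it with SciPy's built-in test (the data are made up; SciPy is assumed to be available):

    from math import sqrt
    from statistics import mean, stdev
    from scipy import stats

    sample = [5.1, 4.9, 6.2, 5.6, 5.8, 5.0, 5.4, 6.1]  # made-up measurements
    mu0 = 5.0                                           # hypothesized population mean

    n = len(sample)
    t_manual = (mean(sample) - mu0) / (stdev(sample) / sqrt(n))  # formula from the table

    t_scipy, p_value = stats.ttest_1samp(sample, mu0)            # same test via SciPy

    print(round(t_manual, 4), round(t_scipy, 4), round(p_value, 4))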
Hypothesis testing is largely the product of Ronald Fisher, Jerzy Neyman, Karl Pearson and (son) Egon Pearson. Fisher was an agricultural statistician who emphasized rigorous experimental design and methods to extract a result from few samples assuming Gaussian distributions. Neyman (who teamed with the younger Pearson) emphasized mathematical rigor and methods to obtain more results from many samples and a wider range of distributions. Modern hypothesis testing is an (extended) hybrid of the Fisher vs Neyman/Pearson formulation, methods and terminology developed in the early 20th century.
The following example is summarized from Fisher. Fisher thoroughly explained his method in a proposed experiment to test a Lady's claimed ability to determine the means of tea preparation by taste. The article is less than 10 pages in length and is notable for its simplicity and completeness regarding terminology, calculations and design of the experiment. The example is loosely based on an event in Fisher's life. The Lady proved him wrong.
- The null hypothesis was that the Lady had no such ability.
- The test statistic was a simple count of the number of successes in 8 trials.
- The distribution associated with the null hypothesis was the binomial distribution familiar from coin flipping experiments.
- The critical region was the single case of 8 successes in 8 trials based on a conventional probability criterion (<5%).
- Fisher asserted that no alternative hypothesis was (ever) required.
If, and only if the 8 trials produced 8 successes was Fisher willing to reject the null hypothesis - effectively acknowledging the Lady's ability with >98% confidence (but without quantifying her ability). Fisher later discussed the benefits of more trials and repeated tests.
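A small sketch of the arithmetic behind that decision (SciPy assumed; the second calculation uses Fisher's actual design of four cups of each kind, rather than the simplified binomial framing above):

    from math import comb
    from scipy.stats import binom

    # Binomial framing: 8 independent trials, each a 50/50 guess under the null hypothesis.
    print(binom.pmf(8, 8, 0.5))  # 0.0039: P(8 successes out of 8 by pure guessing)

    # Fisher's design: 4 milk-first cups among 8, and the Lady must name all 4.
    print(1 / comb(8, 4))        # about 0.014, which is where the ">98% confidence" comes from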
Little criticism of the technique appears in introductory statistics texts. Criticism is of the application or of the interpretation rather than of the method.
Criticism of null-hypothesis significance testing is available in other articles (null-hypothesis and statistical significance) and their references. Attacks and defenses of the null-hypothesis significance test are collected in Harlow et al.
The original purposes of Fisher's formulation, as a tool for the experimenter, was to plan the experiment and to easily assess the information content of the small sample. There is little criticism, Bayesian in nature, of the formulation in its original context.
In other contexts, complaints focus on flawed interpretations of the results and over-dependence/emphasis on one test.
Numerous attacks on the formulation have failed to supplant it as a criterion for publication in scholarly journals. The most persistent attacks originated from the field of Psychology. After review, the American Psychological Association did not explicitly deprecate the use of null-hypothesis significance testing, but adopted enhanced publication guidelines which implicitly reduced the relative importance of such testing. The International Committee of Medical Journal Editors recognizes an obligation to publish negative (not statistically significant) studies under some circumstances. The applicability of the null-hypothesis testing to the publication of observational (as contrasted to experimental) studies is doubtful.
Some statisticians have commented that pure "significance testing" has what is actually a rather strange goal of detecting the existence of a "real" difference between two populations. In practice a difference can almost always be found given a large enough sample; what is typically the more relevant goal of science is a determination of causal effect size. The amount and nature of the difference, in other words, is what should be studied. Many researchers also feel that hypothesis testing is something of a misnomer. In practice a single statistical test in a single study never "proves" anything.
"Hypothesis testing: generally speaking, this is a misnomer since much of what is described as hypothesis testing is really null-hypothesis testing."
"Statistics do not prove anything." "Billions of supporting examples for absolute truth are outweighed by a single exception." "...in statistics, we can only try to disprove or falsify."
Even when you reject a null hypothesis, effect sizes should be taken into consideration. If the effect is statistically significant but the effect size is very small, then it is a stretch to consider the effect theoretically important.
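A short simulation along these lines (illustrative only; NumPy and SciPy assumed) shows how a trivially small effect becomes statistically significant once the sample is large enough:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    # Two populations whose means differ by only 0.02 standard deviations.
    n = 200_000
    a = rng.normal(0.00, 1.0, n)
    b = rng.normal(0.02, 1.0, n)

    t, p = stats.ttest_ind(a, b)
    cohens_d = (b.mean() - a.mean()) / np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)

    print(p)         # typically far below 0.05: "statistically significant"
    print(cohens_d)  # about 0.02: far too small to matter in practice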
Philosophical criticism
Philosophical criticism to hypothesis testing includes consideration of borderline cases.
Any process that produces a crisp decision from uncertainty is subject to claims of unfairness near the decision threshold. (Consider close election results.) The premature death of a laboratory rat during testing can impact doctoral theses and academic tenure decisions. Clotho, Lachesis and Atropos yet spin, weave and cut the threads of life under the guise of Probability.
"... surely, God loves the .06 nearly as much as the .05"
The statistical significance required for publication has no mathematical basis, but is based on long tradition.
"It is usual and convenient for experimenters to take 5% as a standard level of significance, in the sense that they are prepared to ignore all results which fail to reach this standard, and, by this means, to eliminate from further discussion the greater part of the fluctuations which chance causes have introduced into their experimental results."
Fisher, in the cited article, designed an experiment to achieve a statistically significant result based on sampling 8 cups of tea.
Ambivalence attacks all forms of decision making. A mathematical decision-making process is attractive because it is objective and transparent. It is repulsive because it allows authority to avoid taking personal responsibility for decisions.
Pedagogic criticism
Pedagogic criticism of the null-hypothesis testing includes the counter-intuitive formulation, the terminology and confusion about the interpretation of results.
"Despite the stranglehold that hypothesis testing has on experimental psychology, I find it difficult to imagine a less insightful means of transiting from data to conclusions."
Students find it difficult to understand the formulation of statistical null-hypothesis testing. In rhetoric, examples often support an argument, but a mathematical proof "is a logical argument, not an empirical one". A single counterexample results in the rejection of a conjecture. Karl Popper defined science by its vulnerability to disproof by data. Null-hypothesis testing shares the mathematical and scientific perspective rather than the more familiar rhetorical one. Students expect hypothesis testing to be a statistical tool for illumination of the research hypothesis by the sample; it is not. The test asks indirectly whether the sample can illuminate the research hypothesis.
Students also find the terminology confusing. While Fisher disagreed with Neyman and Pearson about the theory of testing, their terminologies have been blended. The blend is not seamless or standardized. While this article teaches a pure Fisher formulation, even it mentions Neyman and Pearson terminology (Type II error and the alternative hypothesis). The typical introductory statistics text is less consistent. The Sage Dictionary of Statistics would not agree with the title of this article, which it would call null-hypothesis testing. "...there is no alternate hypothesis in Fisher's scheme: Indeed, he violently opposed its inclusion by Neyman and Pearson." In discussing test results, "significance" often has two distinct meanings in the same sentence; One is a probability, the other is a subject-matter measurement (such as currency). The significance (meaning) of (statistical) significance is significant (important).
There is widespread and fundamental disagreement on the interpretation of test results.
"A little thought reveals a fact widely understood among statisticians: The null hypothesis, taken literally (and that's the only way you can take it in formal hypothesis testing), is almost always false in the real world.... If it is false, even to a tiny degree, it must be the case that a large enough sample will produce a significant result and lead to its rejection. So if the null hypothesis is always false, what's the big deal about rejecting it?" (The above criticism only applies to point hypothesis tests. If one were testing, for example, whether a parameter is greater than zero, it would not apply.)
"How has the virtually barren technique of hypothesis testing come to assume such importance in the process by which we arrive at our conclusions from our data?"
Null-hypothesis testing just answers the question of "how well the findings fit the possibility that chance factors alone might be responsible."
Null-hypothesis significance testing does not determine the truth or falseness of claims. It determines whether confidence in a claim based solely on a sample-based estimate exceeds a threshold. It is a research quality assurance test, widely used as one requirement for publication of experimental research with statistical results. It is uniformly agreed that statistical significance is not the only consideration in assessing the importance of research results. Rejecting the null hypothesis is not a sufficient condition for publication.
"Statistical significance does not necessarily imply practical significance!"
Practical criticism
Practical criticism of hypothesis testing includes the sobering observation that published test results are often contradicted. Mathematical models support the conjecture that most published medical research test results are flawed. Null-hypothesis testing has not achieved the goal of a low error probability in medical journals.
"Contradiction and initially stronger effects are not unusual in highly cited research of clinical interventions and their outcomes."
"Most Research Findings Are False for Most Research Designs and for Most Fields"
Jones and Tukey suggested a modest improvement in the original null-hypothesis formulation to formalize handling of one-tail tests. Fisher ignored the 8-failure case (equally improbable as the 8-success case) in the example tea test which altered the claimed significance by a factor of 2.
Killeen proposed an alternative statistic that estimates the probability of duplicating an experimental result. It "provides all of the information now used in evaluating research, while avoiding many of the pitfalls of traditional statistical inference."
- Comparing means test decision tree
- Confidence limits (statistics)
- Multiple comparisons
- Omnibus test
- Behrens-Fisher problem
- Bootstrapping (statistics)
- Fisher's method for combining independent tests of significance
- Null hypothesis testing
- Predictability (measurement)
- Prediction errors
- Statistical power
- Statistical theory
- Statistical significance
- Theory formulation
- Theory verification
- Type I error, Type II error
- The Sage Dictionary of Statistics, pg. 76, Duncan Cramer, Dennis Howitt, 2004, ISBN 076194138X
- Testing Statistical Hypotheses, 3E.
- Fisher, Sir Ronald A. (1956). "Mathematics of a Lady Tasting Tea". In James Roy Newman (ed.), The World of Mathematics, volume 3.
- What If There Were No Significance Tests? (Harlow, Mulaik & Steiger, 1997), ISBN 978-0-8058-2634-0
- The Tao of Statistics, pg. 91, Keller, 2006, ISBN 1-4129-2473-1
- Rosnow, R.L. & Rosenthal, R. (1989). Statistical procedures and the justification of knowledge in psychological science. American Psychologist, 44, 1276-1284.
- Loftus, G.R. 1991. On the tyranny of hypothesis testing in the social sciences. Contemporary Psychology 36: 102-105.
- Cohen, J. 1990. Things I have learned (so far). American Psychologist 45: 1304-1312.
- Introductory Statistics, Fifth Edition, 1999, pg. 521, Neil A. Weiss, ISBN 0-201-59877-9
- Ioannidis JPA (2005). Contradicted and initially stronger effects in highly cited clinical research. JAMA 294: 218-228.
- Ioannidis JPA (2005). Why most published research findings are false. PLoS Med 2(8): e124.
- A Sensible Formulation of the Significance Test, Jones and Tukey, Psychological Methods 2000, Vol. 5, No. 4, pg. 411-414
- An Alternative to Null-Hypothesis Significance Tests, Killeen, Psychol Sci. 2005 May; 16(5): 345-353.
- A Guide to Understanding Hypothesis Testing
- A good Introduction
- Bayesian critique of classical hypothesis testing
- Critique of classical hypothesis testing highlighting long-standing qualms of statisticians
- Analytical argumentations of probability and statistics
- Laws of Chance Tables - used for testing claims of success greater than what can be attributed to random chance
| http://psychology.wikia.com/wiki/Statistical_test | 13
15 | In the next few articles, I'd like to concentrate on securing data as it travels over a network. If you remember the IP packets series (see Capturing TCP Packets), most network traffic is transmitted in clear text and can be decoded by a packet sniffing utility. This can be bad for transmissions containing usernames, passwords, or other sensitive data. Fortunately, other utilities known as cryptosystems can protect your network traffic from prying eyes.
To configure a cryptosystem properly, you need a good understanding of the various terms and algorithms it uses. This article is a crash course in Cryptographic Terminology 101. Following articles will demonstrate configuring some of the cryptosystems that are available to FreeBSD.
What is a cryptosystem and why would you want to use one? A cryptosystem is a utility that uses a combination of algorithms to provide the following three components: privacy, integrity, and authenticity. Different cryptosystems use different algorithms, but all cryptosystems provide those three components. Each is important, so let's take a look at each individually.
Privacy ensures that only the intended recipient understands the network transmission. Even if a packet sniffer captures the data, it won't be able to decode the contents of the message. The cryptosystem uses an encryption algorithm, or cipher, to encrypt the original clear text into cipher text before it is transmitted. The intended recipient uses a key to decrypt the cipher text back into the original clear text. This key is shared between the sender and the recipient, and it is used to both encrypt and decrypt the data. Obviously, to ensure the privacy of the data, it is crucial that only the intended recipient has the key, for anyone with the key can decrypt the data.
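If you'd like to see the shared-key idea in action, here's a minimal sketch using the third-party Python cryptography package. Fernet is simply one convenient symmetric scheme; it stands in for whichever cipher your cryptosystem actually negotiates:

    from cryptography.fernet import Fernet  # third-party package: pip install cryptography

    key = Fernet.generate_key()   # the shared key; only the sender and recipient may hold it
    cipher = Fernet(key)

    token = cipher.encrypt(b"username=dru password=secret")
    print(token)                  # the cipher text a packet sniffer would capture

    print(cipher.decrypt(token))  # only a holder of the same key recovers the clear text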
It is possible for someone without the key to decrypt the data by cracking or guessing the key that was used to encrypt the data. The strength of the encryption algorithm gives an indication of how difficult it is to crack the key. Normally, strengths are expressed in terms of bitsize. For example, it would take less time to crack a key created by an algorithm with a 56-bit size than it would for a key created by an algorithm with a 256-bit size.
Does this mean you should always choose the algorithm with the largest bit size? Not necessarily. Typically, as bit size increases, the longer it takes to encrypt and decrypt the data. In practical terms, this translates into more work for the CPU and slower network transmissions. Choose a bit size that is suited to the sensitivity of the data you are transmitting and the hardware you have. The increase in CPU power over the years has resulted in a double-edged sword. It has allowed the use of stronger encryption algorithms, but it has also reduced the time it takes to crack the key created by those algorithms. Because of this, you should change the key periodically, before it is cracked. Many cryptosystems automate this process for you.
There are some other considerations when choosing an encryption algorithm. Some encryption algorithms are patented and require licenses or restrict their usage. Some encryption algorithms have been successfully exploited or are easily cracked. Some algorithms are faster or slower than their bit size would indicate. For example, DES and 3DES are considered to be slow; Blowfish is considered to be very fast, despite its large bit size.
Legal considerations also vary from country to country. Some countries impose export restrictions. This means that it is okay to use the full strength of an encryption algorithm within the borders of the country, but there are restrictions for encrypting data that has a recipient outside of the country. The United States used to restrict the strength of any algorithm leaving the U.S. border to 40 bits, which is why some algorithms support the very short bit size of 40 bits.
There are still countries where it is illegal to even use encryption. If you are unsure if your particular country has any legal or export restrictions, do a bit of research before you configure your FreeBSD system to use encryption.
The following table compares the encryption algorithms you are most likely to come across.
| Algorithm | Bit size | Patented | Speed/notes |
| DES | 56 | | slow, easily cracked |
| Blowfish | 32 - 448 | no | extremely fast |
| CAST | 40 - 128 | yes | |
| AES (Rijndael) | 128, 192, 256 | no | fast |
How much of the original packet is encrypted depends upon the encryption mode. If a cryptosystem uses transport mode, only the data portion of the packet is encrypted, leaving the original headers in clear text. This means that a packet sniffer won't be able to read the actual data but will be able to determine the IP addresses of the sender and recipient and which port number (or application) sent the data.
If a cryptosystem uses tunnel mode, the entire packet, data and headers, is encrypted. Since the packet still needs to be routed to its final destination, a new Layer 3 header is created. This is known as encapsulation, and it is quite possible that the new header contains totally different IP addresses than the original IP header. We will see why in a later article when we configure your FreeBSD system for IPSEC.
Integrity is the second component found in cryptosystems. This component ensures that the data received is indeed the data that was sent and that the data wasn't tampered with during transit. It requires a different class of algorithms, known as cryptographic checksums or cryptographic hashes. You may already be familiar with checksums as they are used to ensure that all of the bits in a frame or a header arrived in the order they were sent. However, frame and header checksums use a very simple algorithm, meaning that it is mathematically possible to change the bits and still use the same checksum. Cryptographic checksums need to be more tamper-resistant.
Like encryption algorithms, cryptographic checksums vary in their effectiveness. The longer the checksum, the harder it is to change the data and recreate the same checksum. Also, some checksums have known flaws. The following table summarizes the cryptographic checksums:
| Checksum | Checksum length | Known flaws |
| MD4 | 128 bits | yes |
| MD5 | 128 bits | yes |
| SHA-1 | 160 bits | no |
The order in the above chart is intentional. When it comes to cryptographic checksums, MD4 is the least secure, and SHA-1 is the most secure. Always choose the most secure checksum available in your cryptosystem.
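Here's a quick sketch, using Python's standard hashlib and hmac modules, of how a checksum changes when the data is tampered with; the message and key are made up for illustration, and the last line previews the keyed HMAC variant discussed next:

    import hashlib, hmac

    message = b"transfer $100 to account 12345"

    print(hashlib.md5(message).hexdigest())   # 128-bit digest
    print(hashlib.sha1(message).hexdigest())  # 160-bit digest

    # Any tampering with the data produces a completely different digest.
    tampered = b"transfer $900 to account 12345"
    print(hashlib.sha1(tampered).hexdigest())

    # A keyed checksum also requires the shared key to recompute.
    print(hmac.new(b"shared-secret", message, hashlib.sha1).hexdigest())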
Another term to look for in a cryptographic checksum is HMAC or Hash-based Message Authentication Code. This indicates that the checksum algorithm uses a key as part of the checksum. This is good, as it's impossible to alter the checksum without access to the key. If a cryptographic checksum uses HMAC, you'll see that term before the name of the checksum. For example, HMAC-MD4 is more secure than MD4, and HMAC-SHA is more secure than SHA. If we were to order the checksum algorithms from least secure to most secure, each plain checksum would rank below its HMAC counterpart, with HMAC-SHA at the top of the list.
So far, we've ensured that the data has been encrypted and that the data hasn't been altered during transit. However, all of that work would be for naught if the data, and more importantly, the key, were mistakenly sent to the wrong recipient. This is where the third component, or authenticity, comes into play.
Before any encryption can occur, a key has to be created and exchanged. Since the same key is used to encrypt and to decrypt the data during the session, it is known as a symmetric or session key. How do we safely exchange that key in the first place? How can we be sure that we just exchanged that key with the intended recipient and no one else?
This requires yet another class of algorithms known as asymmetric or public key algorithms. These algorithms are called asymmetric as the sender and recipient do not share the same key. Instead, both the sender and the recipient separately generate a key pair which consists of two mathematically related keys. One key, known as the public key, is exchanged. This means that the recipient has a copy of the sender's public key and vice versa. The other key, known as the private key, must be kept private. The security depends upon the fact that no one else has a copy of a user's private key. If a user suspects that his private key has been compromised, he should immediately revoke that key pair and generate a new key pair.
When a key pair is generated, it is associated with a unique string of short nonsense words known as a fingerprint. The fingerprint is used to ensure that you are viewing the correct public key. (Remember, you never get to see anyone else's private key.) In order to verify a recipient, they first need to send you a copy of their public key. You then need to double-check the fingerprint with the other person to ensure you did indeed get their public key. This will make more sense in the next article when we generate a key pair and you see a fingerprint for yourself.
The most common key generation algorithm is RSA. You'll often see the term RSA associated with digital certificates or certificate authorities, also known as CAs. A digital certificate is a signed file that contains a recipient's public key, some information about the recipient, and an expiration date. The X.509 or PKCS #9 standard dictates the information found in a digital certificate. You can read the standard for yourself at http://www.rsasecurity.com/rsalabs/pkcs or http://ftp.isi.edu/in-notes/rfc2985.txt.
Digital certificates are usually stored on a computer known as a Certificate Authority. This means that you don't have to exchange public keys with a recipient manually. Instead, your system will query the CA when it needs a copy of a recipient's public key. This provides for a scalable authentication system. A CA can store the digital certificates of many recipients, and those recipients can be either users or computers.
It is also possible to generate digital certificates using an algorithm known as DSA. However, this algorithm is patented and is slower than RSA. Here is a FAQ on the difference between RSA and DSA. (The entire RSA Laboratories' FAQ is very good reading if you would like a more in depth understanding of cryptography.)
There is one last point to make on the subject of digital certificates and CAs. A digital certificate contains an expiration date, and the certificate cannot be deleted from the CA before that date. What if a private key becomes compromised before that date? You'll obviously want to generate a new certificate containing the new public key. However, you can't delete the old certificate until it expires. To ensure that certificate won't inadvertently be used to authenticate a recipient, you can place it in the CRL or Certificate Revocation List. Whenever a certificate is requested, the CRL is read to ensure that the certificate is still valid.
Authenticating the recipient is one half of the authenticity component. The other half involves generating and exchanging the information that will be used to create the session key which in turn will be used to encrypt and decrypt the data. This again requires an asymmetric algorithm, but this time it is usually the Diffie Hellman, or DH, algorithm.
It is important to realize that Diffie Hellman doesn't make the actual session key itself, but the keying information used to generate that key. This involves a fair bit of fancy math which isn't for the faint of heart. The best explanation I've come across, in understandable language with diagrams, is Diffie-Hellman Key Exchange - A Non-Mathematician's Explanation by Keith Palmgren.
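Here's a toy sketch of the exchange in Python. The numbers are deliberately tiny so you can follow the math; a real exchange uses primes hundreds of digits long:

    # Public values: a small prime and a generator (toy parameters only).
    p, g = 23, 5

    alice_secret, bob_secret = 6, 15   # private values, never transmitted

    A = pow(g, alice_secret, p)        # Alice sends 8
    B = pow(g, bob_secret, p)          # Bob sends 19

    # Each side combines the other's public value with its own secret.
    print(pow(B, alice_secret, p))     # 2
    print(pow(A, bob_secret, p))       # 2: identical shared keying material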
It is important that the keying information is kept as secure as possible, so the larger the bit size, the better. The possible Diffie Hellman bit sizes have been divided into groups. The following chart summarizes the possible Diffie Hellman Groups:
| Group Name | Bit Size |
| Group 1 | 768 bits |
| Group 2 | 1024 bits |
| Group 5 | 1536 bits |
When configuring a cryptosystem, you should use the largest Diffie Hellman Group size that it supports.
The other term you'll see associated with the keying information is PFS, or Perfect Forward Secrecy, which Diffie Hellman supports. PFS ensures that the new keying information is not mathematically related to the old keying information. This means that if someone sniffs an old session key, they won't be able to use that key to guess the new session key. PFS is always a good thing and you should use it if the cryptosystem supports it.
Let's do a quick recap and summarize how a cryptosystem protects the data transmitted onto a network:
- Privacy: an encryption algorithm and a shared session key turn the clear text into cipher text that only the intended recipient can decrypt.
- Integrity: a cryptographic checksum, preferably one that uses HMAC, lets the recipient detect whether the data was tampered with during transit.
- Authenticity: a public key algorithm such as RSA verifies that the key exchange is taking place with the intended recipient, and Diffie Hellman generates the keying information used to create the session key.
In next week's article, you'll have the opportunity to see many of these cryptographic terms in action as we'll be configuring a cryptosystem that comes built in to your FreeBSD system: ssh.
Dru Lavigne is a network and systems administrator, IT instructor, author and international speaker. She has over a decade of experience administering and teaching Netware, Microsoft, Cisco, Checkpoint, SCO, Solaris, Linux, and BSD systems. A prolific author, she pens the popular FreeBSD Basics column for O'Reilly and is author of BSD Hacks and The Best of FreeBSD Basics.
| http://www.linuxdevcenter.com/lpt/a/2866 | 13
22 | United States Declaration of Independence
1823 facsimile of the engrossed copy
|Ratified||July 4, 1776|
|Location||Engrossed copy: National Archives; Rough draft: Library of Congress|
|Author(s)||Thomas Jefferson et al. (Engrosser: Probably Timothy Matlack)|
|Signatories||56 delegates to the Continental Congress|
|Purpose||To announce and explain separation from Great Britain|
The Declaration of Independence is a statement adopted by the Continental Congress on July 4, 1776, which announced that the thirteen American colonies, then at war with Great Britain, regarded themselves as independent states, and no longer a part of the British Empire. Instead they now formed a new nation—the United States of America. John Adams was a leader in pushing for independence, which was unanimously approved on July 2. A committee had already drafted the formal declaration, to be ready when congress voted on independence.
Adams persuaded the committee to select Thomas Jefferson to compose the original draft of the document, which Congress would edit to produce the final version. The Declaration was ultimately a formal explanation of why Congress had voted on July 2 to declare independence from Great Britain, more than a year after the outbreak of the American Revolutionary War. The national birthday, Independence Day, is celebrated on July 4, although Adams wanted July 2.
After ratifying the text on July 4, Congress issued the Declaration of Independence in several forms. It was initially published as the printed Dunlap broadside that was widely distributed and read to the public. The most famous version of the Declaration, a signed copy that is popularly regarded as the Declaration of Independence, is displayed at the National Archives in Washington, D.C. Although the wording of the Declaration was approved on July 4, the date of its signing was August 2. The original manuscript approved on July 4 has since been lost; all surviving copies were derived from that original document.
The sources and interpretation of the Declaration have been the subject of much scholarly inquiry. The Declaration justified the independence of the United States by listing colonial grievances against King George III, and by asserting certain natural and legal rights, including a right of revolution. Having served its original purpose in announcing independence, the text of the Declaration was initially ignored after the American Revolution. Since then, it has come to be considered a major statement on human rights, particularly its second sentence:
We hold these truths to be self-evident, that all men are created equal, that they are endowed by their Creator with certain unalienable Rights, that among these are Life, Liberty and the pursuit of Happiness.
This has been called "one of the best-known sentences in the English language", containing "the most potent and consequential words in American history." The passage came to represent a moral standard to which the United States should strive. This view was notably promoted by Abraham Lincoln, who considered the Declaration to be the foundation of his political philosophy, and argued that the Declaration is a statement of principles through which the United States Constitution should be interpreted. It has inspired work for the rights of marginalized people throughout the world. It provided inspiration to numerous national declarations of independence throughout the world.
Believe me, dear Sir: there is not in the British empire a man who more cordially loves a union with Great Britain than I do. But, by the God that made me, I will cease to exist before I yield to a connection on such terms as the British Parliament propose; and in this, I think I speak the sentiments of America.—Thomas Jefferson, November 29, 1775
By the time the Declaration of Independence was adopted in July 1776, the Thirteen Colonies and Great Britain had been at war for more than a year. Relations between the colonies and the mother country had been deteriorating since the end of the Seven Years' War in 1763. The war had plunged the British government deep into debt, and so Parliament enacted a series of measures to increase tax revenue from the colonies. Parliament believed that these acts, such as the Stamp Act of 1765 and the Townshend Acts of 1767, were a legitimate means of having the colonies pay their fair share of the costs to keep the colonies in the British Empire.
Many colonists, however, had developed a different conception of the empire. Because the colonies were not directly represented in Parliament, colonists argued that Parliament had no right to levy taxes upon them. This tax dispute was part of a larger divergence between British and American interpretations of the British Constitution and the extent of Parliament's authority in the colonies. The orthodox British view, dating from the Glorious Revolution of 1688, was that Parliament was the supreme authority throughout the empire, and so by definition anything Parliament did was constitutional. In the colonies, however, the idea had developed that the British Constitution recognized certain fundamental rights that no government—not even Parliament—could violate. After the Townshend Acts, some essayists even began to question whether Parliament had any legitimate jurisdiction in the colonies at all. Anticipating the arrangement of the British Commonwealth, by 1774 American writers such as Samuel Adams, James Wilson, and Thomas Jefferson were arguing that Parliament was the legislature of Great Britain only, and that the colonies, which had their own legislatures, were connected to the rest of the empire only through their allegiance to the Crown.
The issue of Parliament's authority in the colonies became a crisis after Parliament passed the Coercive Acts in 1774 to punish the Province of Massachusetts for the Boston Tea Party of 1773. Many colonists saw the Coercive Acts as a violation of the British Constitution and thus a threat to the liberties of all of British America. In September 1774, the First Continental Congress convened in Philadelphia to coordinate a response. Congress organized a boycott of British goods and petitioned the king for repeal of the acts. These measures were unsuccessful because King George III and the ministry of Prime Minister Lord North were determined not to retreat on the question of parliamentary supremacy. As the king wrote to North in November 1774, "blows must decide whether they are to be subject to this country or independent".
Even after fighting in the American Revolutionary War began at Lexington and Concord in April 1775, most colonists still hoped for reconciliation with Great Britain. When the Second Continental Congress convened at the Pennsylvania State House in Philadelphia in May 1775, some delegates hoped for eventual independence, but no one yet advocated declaring it. Although many colonists no longer believed that Parliament had any sovereignty over them, they still professed loyalty to King George, who they hoped would intercede on their behalf. They were to be disappointed: in late 1775, the king rejected Congress's second petition, issued a Proclamation of Rebellion, and announced before Parliament on October 26 that he was considering "friendly offers of foreign assistance" to suppress the rebellion. A pro-American minority in Parliament warned that the government was driving the colonists toward independence.
In January 1776, just as it became clear in the colonies that the king was not inclined to act as a conciliator, Thomas Paine's pamphlet Common Sense was published. Paine, who had only recently arrived in the colonies from England, argued in favor of colonial independence, advocating republicanism as an alternative to monarchy and hereditary rule. Common Sense introduced no new ideas, and probably had little direct effect on Congress's thinking about independence; its importance was in stimulating public debate on a topic that few had previously dared to openly discuss. Public support for separation from Great Britain steadily increased after the publication of Paine's enormously popular pamphlet.
Although some colonists still held out hope for reconciliation, developments in early 1776 further strengthened public support for independence. In February 1776, colonists learned of Parliament's passage of the Prohibitory Act, which established a blockade of American ports and declared American ships to be enemy vessels. John Adams, a strong supporter of independence, believed that Parliament had effectively declared American independence before Congress had been able to. Adams labeled the Prohibitory Act the "Act of Independency", calling it "a compleat Dismemberment of the British Empire". Support for declaring independence grew even more when it was confirmed that King George had hired German mercenaries to use against his American subjects.
Despite this growing popular support for independence, Congress lacked the clear authority to declare it. Delegates had been elected to Congress by thirteen different governments—which included extralegal conventions, ad hoc committees, and elected assemblies—and were bound by the instructions given to them. Regardless of their personal opinions, delegates could not vote to declare independence unless their instructions permitted such an action. Several colonies, in fact, expressly prohibited their delegates from taking any steps towards separation from Great Britain, while other delegations had instructions that were ambiguous on the issue. As public sentiment for separation from Great Britain grew, advocates of independence sought to have the Congressional instructions revised. For Congress to declare independence, a majority of delegations would need authorization to vote for independence, and at least one colonial government would need to specifically instruct (or grant permission for) its delegation to propose a declaration of independence in Congress. Between April and July 1776, a "complex political war" was waged to bring this about.
In the campaign to revise Congressional instructions, many Americans formally expressed their support for separation from Great Britain in what were effectively state and local declarations of independence. Historian Pauline Maier identified more than ninety such declarations that were issued throughout the Thirteen Colonies from April to July 1776. These "declarations" took a variety of forms. Some were formal, written instructions for Congressional delegations, such as the Halifax Resolves of April 12, with which North Carolina became the first colony to explicitly authorize its delegates to vote for independence. Others were legislative acts that officially ended British rule in individual colonies, such as on May 4, when the Rhode Island legislature became the first to declare its independence from Great Britain. Many "declarations" were resolutions adopted at town or county meetings that offered support for independence. A few came in the form of jury instructions, such as the statement issued on April 23, 1776, by Chief Justice William Henry Drayton of South Carolina: "the law of the land authorizes me to declare...that George the Third, King of Great Britain...has no authority over us, and we owe no obedience to him." Most of these declarations are now obscure, having been overshadowed by the declaration approved by Congress on July 4.
Some colonies held back from endorsing independence. Resistance was centered in the middle colonies of New York, New Jersey, Maryland, Pennsylvania, and Delaware. Advocates of independence saw Pennsylvania as the key: if that colony could be converted to the pro-independence cause, it was believed that the others would follow. On May 1, however, opponents of independence retained control of the Pennsylvania Assembly in a special election that had focused on the question of independence. In response, on May 10 Congress passed a resolution, which had been promoted by John Adams and Richard Henry Lee, calling on colonies without a "government sufficient to the exigencies of their affairs" to adopt new governments. The resolution passed unanimously, and was even supported by Pennsylvania's John Dickinson, the leader of the anti-independence faction in Congress, who believed that it did not apply to his colony.
May 15 preamble
As was the custom, Congress appointed a committee to draft a preamble to explain the purpose of the resolution. John Adams wrote the preamble, which stated that because King George had rejected reconciliation and was hiring foreign mercenaries to use against the colonies, "it is necessary that the exercise of every kind of authority under the said crown should be totally suppressed". Adams's preamble was meant to encourage the overthrow of the governments of Pennsylvania and Maryland, which were still under proprietary governance. Congress passed the preamble on May 15 after several days of debate, but four of the middle colonies voted against it, and the Maryland delegation walked out in protest. Adams regarded his May 15 preamble effectively as an American declaration of independence, although a formal declaration would still have to be made.
Lee's resolution and the final push
On the same day that Congress passed Adams's radical preamble, the Virginia Convention set the stage for a formal Congressional declaration of independence. On May 15, the Convention instructed Virginia's congressional delegation "to propose to that respectable body to declare the United Colonies free and independent States, absolved from all allegiance to, or dependence upon, the Crown or Parliament of Great Britain". In accordance with those instructions, Richard Henry Lee of Virginia presented a three-part resolution to Congress on June 7. The motion, which was seconded by John Adams, called on Congress to declare independence, form foreign alliances, and prepare a plan of colonial confederation. The part of the resolution relating to declaring independence read:
Resolved, that these United Colonies are, and of right ought to be, free and independent States, that they are absolved from all allegiance to the British Crown, and that all political connection between them and the State of Great Britain is, and ought to be, totally dissolved.
Lee's resolution met with resistance in the ensuing debate. Opponents of the resolution, while conceding that reconciliation with Great Britain was unlikely, argued that declaring independence was premature, and that securing foreign aid should take priority. Advocates of the resolution countered that foreign governments would not intervene in an internal British struggle, and so a formal declaration of independence was needed before foreign aid was possible. All Congress needed to do, they insisted, was to "declare a fact which already exists". Delegates from Pennsylvania, Delaware, New Jersey, Maryland, and New York were not yet authorized to vote for independence, however, and some of them threatened to leave Congress if the resolution were adopted. Congress therefore voted on June 10 to postpone further discussion of Lee's resolution for three weeks. In the meantime, Congress decided that a committee should prepare a document announcing and explaining independence in the event that Lee's resolution was approved when it was brought up again in July.
Support for a Congressional declaration of independence was consolidated in the final weeks of June 1776. On June 14, the Connecticut Assembly instructed its delegates to propose independence, and the following day the legislatures of New Hampshire and Delaware authorized their delegates to declare independence. In Pennsylvania, political struggles ended with the dissolution of the colonial assembly, and on June 18 a new Conference of Committees under Thomas McKean authorized Pennsylvania's delegates to declare independence. On June 15, the Provincial Congress of New Jersey, which had been governing the province since January 1776, resolved that Royal Governor William Franklin was "an enemy to the liberties of this country" and had him arrested. On June 21, they chose new delegates to Congress and empowered them to join in a declaration of independence.
Only Maryland and New York had yet to authorize independence. When the Continental Congress had adopted Adams's radical May 15 preamble, Maryland's delegates walked out and sent to the Maryland Convention for instructions. On May 20, the Maryland Convention rejected Adams's preamble, instructing its delegates to remain against independence, but Samuel Chase went to Maryland and, thanks to local resolutions in favor of independence, was able to get the Maryland Convention to change its mind on June 28. Only the New York delegates were unable to get revised instructions. When Congress had been considering the resolution of independence on June 8, the New York Provincial Congress told the delegates to wait. But on June 30, the Provincial Congress evacuated New York as British forces approached, and would not convene again until July 10. This meant that New York's delegates would not be authorized to declare independence until after Congress had made its decision.
Draft and adoption
While political maneuvering was setting the stage for an official declaration of independence, a document explaining the decision was being written. On June 11, 1776, Congress appointed a "Committee of Five", consisting of John Adams of Massachusetts, Benjamin Franklin of Pennsylvania, Thomas Jefferson of Virginia, Robert R. Livingston of New York, and Roger Sherman of Connecticut, to draft a declaration. Because the committee left no minutes, there is some uncertainty about how the drafting process proceeded—accounts written many years later by Jefferson and Adams, although frequently cited, are contradictory and not entirely reliable. What is certain is that the committee, after discussing the general outline that the document should follow, decided that Jefferson would write the first draft. The committee in general, and Jefferson in particular, thought Adams should write the document, but Adams persuaded the committee to choose Jefferson and promised to consult with Jefferson personally. Considering Congress's busy schedule, Jefferson probably had limited time for writing over the next seventeen days, and likely wrote the draft quickly. He then consulted the others, made some changes, and then produced another copy incorporating these alterations. The committee presented this copy to the Congress on June 28, 1776. The title of the document was "A Declaration by the Representatives of the United States of America, in General Congress assembled."
Congress ordered that the draft "lie on the table". For two days Congress methodically edited Jefferson's primary document, shortening it by a fourth, removing unnecessary wording, and improving sentence structure. Congress removed Jefferson's assertion that Britain had forced slavery on the colonies, in order to moderate the document and appease persons in Britain who supported the Revolution. Although Jefferson wrote that Congress had "mangled" his draft version, the Declaration that was finally produced, according to his biographer John Ferling, was "the majestic document that inspired both contemporaries and posterity."
On Monday, July 1, having tabled the draft of the declaration, Congress resolved itself into a committee of the whole, with Benjamin Harrison of Virginia presiding, and resumed debate on Lee's resolution of independence. John Dickinson made one last effort to delay the decision, arguing that Congress should not declare independence without first securing a foreign alliance and finalizing the Articles of Confederation. John Adams gave a speech in reply to Dickinson, restating the case for an immediate declaration.
After a long day of speeches, a vote was taken. As always, each colony cast a single vote; the delegation for each colony—numbering two to seven members—voted amongst themselves to determine the colony's vote. Pennsylvania and South Carolina voted against declaring independence. The New York delegation, lacking permission to vote for independence, abstained. Delaware cast no vote because the delegation was split between Thomas McKean (who voted yes) and George Read (who voted no). The remaining nine delegations voted in favor of independence, which meant that the resolution had been approved by the committee of the whole. The next step was for the resolution to be voted upon by the Congress itself. Edward Rutledge of South Carolina, who was opposed to Lee's resolution but desirous of unanimity, moved that the vote be postponed until the following day.
On July 2, South Carolina reversed its position and voted for independence. In the Pennsylvania delegation, Dickinson and Robert Morris abstained, allowing the delegation to vote three-to-two in favor of independence. The tie in the Delaware delegation was broken by the timely arrival of Caesar Rodney, who voted for independence. The New York delegation abstained once again, since they were still not authorized to vote for independence, although they would be allowed to do so by the New York Provincial Congress a week later. The resolution of independence had been adopted with twelve affirmative votes and one abstention. With this, the colonies had officially severed political ties with Great Britain. In a now-famous letter written to his wife on the following day, John Adams predicted that July 2 would become a great American holiday. Adams thought that the vote for independence would be commemorated; he did not foresee that Americans—including himself—would instead celebrate Independence Day on the date that the announcement of that act was finalized.
After voting in favor of the resolution of independence, Congress turned its attention to the committee's draft of the declaration. Over several days of debate, Congress made a few changes in wording and deleted nearly a fourth of the text, most notably a passage critical of the slave trade, changes that Jefferson resented. On July 4, 1776, the wording of the Declaration of Independence was approved and sent to the printer for publication.
Annotated text of the Declaration
The Declaration is not divided into formal sections, but it is often discussed as consisting of five parts: the Introduction, the Preamble, the Indictment of George III, the Denunciation of the British people, and the Conclusion.
Asserts as a matter of Natural Law the ability of a people to assume political independence; acknowledges that the grounds for such independence must be reasonable, and therefore explicable, and ought to be explained.
In CONGRESS, July 4, 1776.
The unanimous Declaration of the thirteen united States of America,
When in the Course of human events, it becomes necessary for one people to dissolve the political bands which have connected them with another, and to assume among the powers of the earth, the separate and equal station to which the Laws of Nature and of Nature's God entitle them, a decent respect to the opinions of mankind requires that they should declare the causes which impel them to the separation.
Outlines a general philosophy of government that justifies revolution when government harms natural rights.
We hold these truths to be self-evident, that all men are created equal, that they are endowed by their Creator with certain unalienable Rights, that among these are Life, Liberty and the pursuit of Happiness.
That to secure these rights, Governments are instituted among Men, deriving their just powers from the consent of the governed, That whenever any Form of Government becomes destructive of these ends, it is the Right of the People to alter or to abolish it, and to institute new Government, laying its foundation on such principles and organizing its powers in such form, as to them shall seem most likely to effect their Safety and Happiness. Prudence, indeed, will dictate that Governments long established should not be changed for light and transient causes; and accordingly all experience hath shewn, that mankind are more disposed to suffer, while evils are sufferable, than to right themselves by abolishing the forms to which they are accustomed. But when a long train of abuses and usurpations, pursuing invariably the same Object evinces a design to reduce them under absolute Despotism, it is their right, it is their duty, to throw off such Government, and to provide new Guards for their future security.
A bill of particulars documenting the king's "repeated injuries and usurpations" of the Americans' rights and liberties.
Such has been the patient sufferance of these Colonies; and such is now the necessity which constrains them to alter their former Systems of Government. The history of the present King of Great Britain is a history of repeated injuries and usurpations, all having in direct object the establishment of an absolute Tyranny over these States. To prove this, let Facts be submitted to a candid world.
He has refused his Assent to Laws, the most wholesome and necessary for the public good.
He has forbidden his Governors to pass Laws of immediate and pressing importance, unless suspended in their operation till his Assent should be obtained; and when so suspended, he has utterly neglected to attend to them.
He has refused to pass other Laws for the accommodation of large districts of people, unless those people would relinquish the right of Representation in the Legislature, a right inestimable to them and formidable to tyrants only.
He has called together legislative bodies at places unusual, uncomfortable, and distant from the depository of their Public Records, for the sole purpose of fatiguing them into compliance with his measures.
He has dissolved Representative Houses repeatedly, for opposing with manly firmness his invasions on the rights of the people.
He has refused for a long time, after such dissolutions, to cause others to be elected, whereby the Legislative Powers, incapable of Annihilation, have returned to the People at large for their exercise; the State remaining in the mean time exposed to all the dangers of invasion from without, and convulsions within.
He has endeavoured to prevent the population of these States; for that purpose obstructing the Laws for Naturalization of Foreigners; refusing to pass others to encourage their migrations hither, and raising the conditions of new Appropriations of Lands.
He has obstructed the Administration of Justice by refusing his Assent to Laws for establishing Judiciary Powers.
He has made Judges dependent on his Will alone for the tenure of their offices, and the amount and payment of their salaries.
He has erected a multitude of New Offices, and sent hither swarms of Officers to harass our people and eat out their substance.
He has kept among us, in times of peace, Standing Armies without the Consent of our legislatures.
He has affected to render the Military independent of and superior to the Civil Power.
He has combined with others to subject us to a jurisdiction foreign to our constitution, and unacknowledged by our laws; giving his Assent to their Acts of pretended Legislation:
For quartering large bodies of armed troops among us:
For protecting them, by a mock Trial from punishment for any Murders which they should commit on the Inhabitants of these States:
For cutting off our Trade with all parts of the world:
For imposing Taxes on us without our Consent:
For depriving us in many cases, of the benefit of Trial by Jury:
For transporting us beyond Seas to be tried for pretended offences:
For abolishing the free System of English Laws in a neighbouring Province, establishing therein an Arbitrary government, and enlarging its Boundaries so as to render it at once an example and fit instrument for introducing the same absolute rule into these Colonies:
For taking away our Charters, abolishing our most valuable Laws and altering fundamentally the Forms of our Governments:
For suspending our own Legislatures, and declaring themselves invested with power to legislate for us in all cases whatsoever.
He has abdicated Government here, by declaring us out of his Protection and waging War against us.
He has plundered our seas, ravaged our coasts, burnt our towns, and destroyed the lives of our people.
He is at this time transporting large Armies of foreign Mercenaries to compleat the works of death, desolation, and tyranny, already begun with circumstances of Cruelty & Perfidy scarcely paralleled in the most barbarous ages, and totally unworthy the Head of a civilized nation.
He has constrained our fellow Citizens taken Captive on the high Seas to bear Arms against their Country, to become the executioners of their friends and Brethren, or to fall themselves by their Hands.
He has excited domestic insurrections amongst us, and has endeavoured to bring on the inhabitants of our frontiers, the merciless Indian Savages whose known rule of warfare, is an undistinguished destruction of all ages, sexes and conditions.
In every stage of these Oppressions We have Petitioned for Redress in the most humble terms: Our repeated Petitions have been answered only by repeated injury. A Prince, whose character is thus marked by every act which may define a Tyrant, is unfit to be the ruler of a free people.
This section essentially finished the case for independence. The conditions that justified revolution have been shown.
Nor have We been wanting in attentions to our British brethren. We have warned them from time to time of attempts by their legislature to extend an unwarrantable jurisdiction over us. We have reminded them of the circumstances of our emigration and settlement here. We have appealed to their native justice and magnanimity, and we have conjured them by the ties of our common kindred to disavow these usurpations, which would inevitably interrupt our connections and correspondence. They too have been deaf to the voice of justice and of consanguinity. We must, therefore, acquiesce in the necessity, which denounces our Separation, and hold them, as we hold the rest of mankind, Enemies in War, in Peace Friends.
The signers assert that there exist conditions under which people must change their government, that the British have produced such conditions, and by necessity the colonies must throw off political ties with the British Crown and become independent states. The conclusion contains, at its core, the Lee Resolution that had been passed on July 2.
We, therefore, the Representatives of the united States of America, in General Congress, Assembled, appealing to the Supreme Judge of the world for the rectitude of our intentions, do, in the Name, and by Authority of the good People of these Colonies, solemnly publish and declare, That these united Colonies are, and of Right ought to be Free and Independent States; that they are Absolved from all Allegiance to the British Crown, and that all political connection between them and the State of Great Britain, is and ought to be totally dissolved; and that as Free and Independent States, they have full Power to levy War, conclude Peace, contract Alliances, establish Commerce, and to do all other Acts and Things which Independent States may of right do. And for the support of this Declaration, with a firm reliance on the protection of divine Providence, we mutually pledge to each other our Lives, our Fortunes and our sacred Honor.
The first and most famous signature on the engrossed copy was that of John Hancock, President of the Continental Congress. Two future presidents, Thomas Jefferson and John Adams, were among the signatories. Edward Rutledge (age 26) was the youngest signer, and Benjamin Franklin (age 70) was the oldest. The fifty-six signers of the Declaration represented the thirteen new states, from New Hampshire in the north to Georgia in the south.
Historians have often sought to identify the sources that most influenced the words and political philosophy of the Declaration of Independence. By Jefferson's own admission, the Declaration contained no original ideas, but was instead a statement of sentiments widely shared by supporters of the American Revolution. As he explained in 1825:
Neither aiming at originality of principle or sentiment, nor yet copied from any particular and previous writing, it was intended to be an expression of the American mind, and to give to that expression the proper tone and spirit called for by the occasion.
Jefferson's most immediate sources were two documents written in June 1776: his own draft of the preamble of the Constitution of Virginia, and George Mason's draft of the Virginia Declaration of Rights. Ideas and phrases from both of these documents appear in the Declaration of Independence. They were, in turn, directly influenced by the 1689 English Declaration of Rights, which formally ended the reign of King James II. During the American Revolution, Jefferson and other Americans looked to the English Declaration of Rights as a model of how to end the reign of an unjust king. The Scottish Declaration of Arbroath (1320) and the Dutch Act of Abjuration (1581) have also been offered as models for Jefferson's Declaration, but these models are now accepted by few scholars.
Jefferson wrote that a number of authors exerted a general influence on the words of the Declaration. The English political theorist John Locke, whom Jefferson called one of "the three greatest men that have ever lived", is usually cited as one of the primary influences. In 1922, historian Carl L. Becker wrote that "Most Americans had absorbed Locke's works as a kind of political gospel; and the Declaration, in its form, in its phraseology, follows closely certain sentences in Locke's second treatise on government." The extent of Locke's influence on the American Revolution has been questioned by some subsequent scholars, however. Historian Ray Forrest Harvey declared in 1937, as he argued for the dominant influence of the Swiss jurist Jean Jacques Burlamaqui, that Jefferson and Locke were at "two opposite poles" in their political philosophy, as evidenced by Jefferson's use in the Declaration of Independence of the phrase "pursuit of happiness" instead of "property." Other scholars emphasized the influence of republicanism rather than Locke's classical liberalism. Historian Garry Wills argued that Jefferson was influenced by the Scottish Enlightenment, particularly Francis Hutcheson, rather than Locke, an interpretation that has been strongly criticized.
Legal historian John Phillip Reid has written that the emphasis on the political philosophy of the Declaration has been misplaced. The Declaration is not a philosophical tract about natural rights, argues Reid, but is instead a legal document—an indictment against King George for violating the constitutional rights of the colonists. In contrast, historian Dennis J. Mahoney argues that the Declaration is not a legal document at all, but a philosophical document influenced by Emerich de Vattel, Jean-Jacques Burlamaqui, and Samuel Pufendorf. Historian David Armitage has argued that the Declaration is a document of international law. According to Armitage, the Declaration was strongly influenced by de Vattel's The Law of Nations, a book that Benjamin Franklin said was "continually in the hands of the members of our Congress". Armitage writes that because "Vattel made independence fundamental to his definition of statehood", the primary purpose of the Declaration was "to express the international legal sovereignty of the United States". If the United States were to have any hope of being recognized by the European powers, the American revolutionaries had first to make it clear that they were no longer dependent on Great Britain.
The handwritten copy of the Declaration of Independence that was signed by Congress is dated July 4, 1776. The signatures of fifty-six delegates are affixed; however, whether or not Congress actually signed the document on this date has long been the subject of debate. Thomas Jefferson, Benjamin Franklin, and John Adams all wrote that the Declaration had been signed by Congress on July 4. But in 1796, signer Thomas McKean disputed that the Declaration had been signed on July 4, pointing out that some signers were not then present, including several who were not even elected to Congress until after that date.
According to the 1911 record of events compiled by the U.S. State Department under Secretary Philander C. Knox, the Declaration was transposed on paper, adopted by the Continental Congress, and signed by John Hancock, President of the Congress, on July 4, 1776. On August 2, 1776, a parchment copy of the Declaration was signed by 56 persons, many of whom had not been present when the original Declaration was adopted on July 4. One signer, Matthew Thornton of New Hampshire, did not join the Continental Congress until later in the year and signed on November 4, 1776.
Historians have generally accepted McKean's version of events, arguing that the famous signed version of the Declaration was created after July 19 and was not signed by Congress until August 2. In 1986, legal historian Wilfred Ritz argued that historians had misunderstood the primary documents and given too much credence to McKean, who had not been present in Congress on July 4. According to Ritz, about thirty-four delegates signed the Declaration on July 4, and the others signed on or after August 2. Historians who reject a July 4 signing maintain that most delegates signed on August 2, and that those eventual signers who were not present added their names later.
The most famous signature on the engrossed copy is that of John Hancock, who, as President of Congress, presumably signed first. Hancock's large, flamboyant signature became iconic, and John Hancock emerged in the United States as an informal synonym for "signature". A commonly circulated but apocryphal account claims that, after signing, Hancock commented, "The British ministry can read that name without spectacles." Another apocryphal report indicates that Hancock proudly declared, "There! I guess King George will be able to read that!"
Various legends about the signing of the Declaration emerged years later, when the document had become an important national symbol. In one famous story, John Hancock supposedly said that Congress, having signed the Declaration, must now "all hang together", and Benjamin Franklin replied: "Yes, we must indeed all hang together, or most assuredly we shall all hang separately." The quote did not appear in print until more than fifty years after Franklin's death.
Publication and reaction
After Congress approved the final wording of the Declaration on July 4, a handwritten copy was sent a few blocks away to the printing shop of John Dunlap. Through the night Dunlap printed about 200 broadsides for distribution. Before long, the Declaration was read to audiences and reprinted in newspapers across the thirteen states. The first official public reading of the document was by John Nixon in the yard of Independence Hall on July 8; public readings also took place on that day in Trenton, New Jersey, and Easton, Pennsylvania. A German translation of the Declaration was published in Philadelphia by July 9.
President of Congress John Hancock sent a broadside to General George Washington, instructing him to have it proclaimed "at the Head of the Army in the way you shall think it most proper". Washington had the Declaration read to his troops in New York City on July 9, with the British forces not far away. Washington and Congress hoped the Declaration would inspire the soldiers, and encourage others to join the army. After hearing the Declaration, crowds in many cities tore down and destroyed signs or statues representing royal authority. An equestrian statue of King George in New York City was pulled down and the lead used to make musket balls.
British officials in North America sent copies of the Declaration to Great Britain. It was published in British newspapers beginning in mid-August, it had reached Florence and Warsaw by mid-September, and a German translation appeared in Switzerland by October. The first copy of the Declaration sent to France got lost, and the second copy arrived only in November 1776. It reached Portuguese America through the Brazilian medical student José Joaquim Maia e Barbalho (known as "Vendek"), who had met with Thomas Jefferson in Nîmes in 1786. Though the Spanish-American authorities banned the circulation of the Declaration, it was widely transmitted and translated: by the Venezuelan Manuel García de Sena, by the Colombian Miguel de Pombo, by the Ecuadorian Vicente Rocafuerte, and by the New Englanders Richard Cleveland and William Shaler, who distributed the Declaration and the United States Constitution among creoles in Chile and Indians in Mexico in 1821. The North Ministry did not give an official answer to the Declaration, but instead secretly commissioned pamphleteer John Lind to publish a response, which was entitled Answer to the Declaration of the American Congress. British Tories denounced the signers of the Declaration for not applying the same principles of "life, liberty, and the pursuit of happiness" to African Americans. Thomas Hutchinson, the former royal governor of Massachusetts, also published a rebuttal. These pamphlets challenged various aspects of the Declaration. Hutchinson argued that the American Revolution was the work of a few conspirators who wanted independence from the outset, and who had finally achieved it by inducing otherwise loyal colonists to rebel. Lind's pamphlet contained an anonymous attack on the concept of natural rights written by Jeremy Bentham, an argument that Bentham would repeat during the French Revolution. Both pamphlets asked how the American slaveholders in Congress could proclaim that "all men are created equal" without freeing their own slaves.
Enslaved African Americans also heard the call to liberty and freedom. Tens of thousands of slaves left plantations in the South and farms in the North to join the British lines, or to escape during the disruption of war. The British kept their promise and evacuated thousands of Black Loyalists with their troops in the closing days of the war, for resettlement as freedmen in Nova Scotia, Jamaica or England. Four to five thousand African Americans served in the Continental Army fighting for American Independence. The revolutionary government freed slaves who enlisted with the Continentals; 5% of George Washington's forces consisted of African-American troops.
William Whipple, a signer of the Declaration of Independence who had fought in the war, freed his slave, Prince Whipple, because of revolutionary ideals. In the postwar decades, so many other slaveholders also freed their slaves that from 1790 to 1810 the percentage of free blacks in the Upper South increased from less than one percent of the black population to 8.3 percent. Most Northern states abolished slavery, although because emancipation was gradual, slaves were still listed in some mid-Atlantic state censuses in 1840.
Despite having fought for independence, freedmen faced housing and job discrimination after the war, were denied voting rights in several states, and needed passes to travel between the states.
History of the documents
The copy of the Declaration that was signed by Congress is known as the engrossed or parchment copy. It was probably engrossed (that is, carefully handwritten) by clerk Timothy Matlack. Because of poor conservation of the engrossed copy through the 19th century, a facsimile made in 1823, rather than the original, has become the basis of most modern reproductions. In 1921, custody of the engrossed copy of the Declaration, along with the United States Constitution, was transferred from the State Department to the Library of Congress. After the Japanese attack on Pearl Harbor in 1941, the documents were moved for safekeeping to the United States Bullion Depository at Fort Knox in Kentucky, where they were kept until 1944. In 1952, the engrossed Declaration was transferred to the National Archives, and is now on permanent display at the National Archives in the "Rotunda for the Charters of Freedom".
Although the document signed by Congress and enshrined in the National Archives is usually regarded as the Declaration of Independence, historian Julian P. Boyd argued that the Declaration, like Magna Carta, is not a single document. Boyd considered the printed broadsides ordered by Congress to be official texts as well. The Declaration was first published as a broadside that was printed the night of July 4 by John Dunlap of Philadelphia. Dunlap printed about 200 broadsides, of which 26 are known to survive. The 26th copy was discovered in The National Archives in England in 2009. In 1777, Congress commissioned Mary Katherine Goddard to print a new broadside that, unlike the Dunlap broadside, listed the signers of the Declaration. Nine copies of the Goddard broadside are known to still exist. A variety of broadsides printed by the states are also extant.
Several early handwritten copies and drafts of the Declaration have also been preserved. Jefferson kept a four-page draft that late in life he called the "original Rough draught". How many drafts Jefferson wrote prior to this one, and how much of the text was contributed by other committee members, is unknown. In 1947, Boyd discovered a fragment of an earlier draft in Jefferson's handwriting. Jefferson and Adams sent copies of the rough draft, with slight variations, to friends.
During the writing process, Jefferson showed the rough draft to Adams and Franklin, and perhaps other members of the drafting committee, who made a few more changes. Franklin, for example, may have been responsible for changing Jefferson's original phrase "We hold these truths to be sacred and undeniable" to "We hold these truths to be self-evident". Jefferson incorporated these changes into a copy that was submitted to Congress in the name of the committee. The copy that was submitted to Congress on June 28 has been lost, and was perhaps destroyed in the printing process, or destroyed during the debates in accordance with Congress's secrecy rule.
Having served its original purpose in announcing the independence of the United States, the Declaration was initially neglected in the years immediately following the American Revolution. Early celebrations of Independence Day, like early histories of the Revolution, largely ignored the Declaration. Although the act of declaring independence was considered important, the text announcing that act attracted little attention. The Declaration was rarely mentioned during the debates about the United States Constitution, and its language was not incorporated into that document. George Mason's draft of the Virginia Declaration of Rights was more influential, and its language was echoed in state constitutions and state bills of rights more often than Jefferson's words. "In none of these documents", wrote Pauline Maier, "is there any evidence whatsoever that the Declaration of Independence lived in men's minds as a classic statement of American political principles."
Influence in other countries
Some leaders of the French Revolution admired the Declaration of Independence but were more interested in the new American state constitutions. The French Declaration of the Rights of Man and Citizen (1789) borrowed language from George Mason's Virginia Declaration of Rights and not Jefferson's Declaration, although Jefferson was in Paris at the time and was consulted during the drafting process. According to historian David Armitage, the United States Declaration of Independence did prove to be internationally influential, but not as a statement of human rights. Armitage argued that the Declaration was the first in a new genre of declarations of independence that announced the creation of new states.
Leaders in other countries, however, were directly influenced by the text of the Declaration of Independence itself. The Manifesto of the Province of Flanders (1790) was the first foreign derivation of the Declaration; others include the Venezuelan Declaration of Independence (1811), the Liberian Declaration of Independence (1847), the declarations of secession by the Confederate States of America (1860–61), and the Vietnamese Declaration of Independence (1945). These declarations echoed the United States Declaration of Independence in announcing the independence of a new state, without necessarily endorsing the political philosophy of the original.
Other countries that used the Declaration as inspiration or directly copied sections from it include the Haitian declaration of January 1, 1804, issued during the Haitian Revolution; the United Provinces of New Granada in 1811; the Argentine Declaration of Independence in 1816; the Chilean Declaration of Independence in 1818; Costa Rica, El Salvador, Guatemala, Honduras, Mexico, Nicaragua, and Peru in 1821; Bolivia and Uruguay in 1825; Ecuador in 1830; Colombia in 1831; Paraguay in 1842; the Dominican Republic in 1844; the Texas Declaration of Independence in March 1836; the California Republic in November 1836; the Hungarian Declaration of Independence in 1849; the Declaration of the Independence of New Zealand in 1835; the Czechoslovak declaration of independence of 1918, drafted in Washington, D.C., with Gutzon Borglum among the drafters; and Rhodesia's declaration on November 11, 1965.
Revival of interest
In the United States, interest in the Declaration was revived in the 1790s with the emergence of America's first political parties. Throughout the 1780s, few Americans knew, or cared, who wrote the Declaration. But in the next decade, Jeffersonian Republicans sought political advantage over their rival Federalists by promoting both the importance of the Declaration and Jefferson as its author. Federalists responded by casting doubt on Jefferson's authorship or originality, and by emphasizing that independence was declared by the whole Congress, with Jefferson as just one member of the drafting committee. Federalists insisted that Congress's act of declaring independence, in which Federalist John Adams had played a major role, was more important than the document announcing that act. But this view, like the Federalist Party, would fade away, and before long the act of declaring independence would become synonymous with the document.
A less partisan appreciation for the Declaration emerged in the years following the War of 1812, thanks to a growing American nationalism and a renewed interest in the history of the Revolution. In 1817, Congress commissioned John Trumbull's famous painting of the signers, which was exhibited to large crowds before being installed in the Capitol. The earliest commemorative printings of the Declaration also appeared at this time, offering many Americans their first view of the signed document. Collective biographies of the signers were first published in the 1820s, giving birth to what Garry Wills called the "cult of the signers". In the years that followed, many stories about the writing and signing of the document would be published for the first time.
When interest in the Declaration was revived, the sections that were most important in 1776—the announcement of the independence of the United States and the grievances against King George—were no longer relevant. But the second paragraph, with its talk of self-evident truths and unalienable rights, was applicable long after the war had ended. Because the Constitution and the Bill of Rights lacked sweeping statements about rights and equality, advocates of marginalized groups turned to the Declaration for support. Starting in the 1820s, variations of the Declaration were issued to proclaim the rights of workers, farmers, women, and others. In 1848, for example, the Seneca Falls Convention, a meeting of women's rights advocates, declared that "all men and women are created equal".
Slavery and the Declaration
The contradiction between the claim that "all men are created equal" and the existence of American slavery attracted comment when the Declaration was first published. As mentioned above, although Jefferson had included a paragraph in his initial draft that strongly indicted Britain's role in the slave trade, this was deleted from the final version. Jefferson himself was a prominent Virginia slaveholder who owned hundreds of slaves. Referring to this seeming contradiction, English abolitionist Thomas Day wrote in a 1776 letter, "If there be an object truly ridiculous in nature, it is an American patriot, signing resolutions of independency with the one hand, and with the other brandishing a whip over his affrighted slaves." In the 19th century, the Declaration took on a special significance for the abolitionist movement. Historian Bertram Wyatt-Brown wrote that "abolitionists tended to interpret the Declaration of Independence as a theological as well as a political document". Abolitionist leaders Benjamin Lundy and William Lloyd Garrison adopted the "twin rocks" of "the Bible and the Declaration of Independence" as the basis for their philosophies. "As long as there remains a single copy of the Declaration of Independence, or of the Bible, in our land," wrote Garrison, "we will not despair." For radical abolitionists like Garrison, the most important part of the Declaration was its assertion of the right of revolution: Garrison called for the destruction of the government under the Constitution, and the creation of a new state dedicated to the principles of the Declaration.
The controversial question of whether to add additional slave states to the United States coincided with the growing stature of the Declaration. The first major public debate about slavery and the Declaration took place during the Missouri controversy of 1819 to 1821. Antislavery Congressmen argued that the language of the Declaration indicated that the Founding Fathers of the United States had been opposed to slavery in principle, and so new slave states should not be added to the country. Proslavery Congressmen, led by Senator Nathaniel Macon of North Carolina, argued that since the Declaration was not a part of the Constitution, it had no relevance to the question.
With the antislavery movement gaining momentum, defenders of slavery such as John Randolph and John C. Calhoun found it necessary to argue that the Declaration's assertion that "all men are created equal" was false, or at least that it did not apply to black people. During the debate over the Kansas-Nebraska Act in 1853, for example, Senator John Pettit of Indiana argued that "all men are created equal", rather than a "self-evident truth", was a "self-evident lie". Opponents of the Kansas-Nebraska Act, including Salmon P. Chase and Benjamin Wade, defended the Declaration and what they saw as its antislavery principles.
Lincoln and the Declaration
The Declaration's relationship to slavery was taken up in 1854 by Abraham Lincoln, a little-known former Congressman who idolized the Founding Fathers. Lincoln thought that the Declaration of Independence expressed the highest principles of the American Revolution, and that the Founding Fathers had tolerated slavery with the expectation that it would ultimately wither away. For the United States to legitimize the expansion of slavery in the Kansas-Nebraska Act, thought Lincoln, was to repudiate the principles of the Revolution. In his October 1854 Peoria speech, Lincoln said:
Nearly eighty years ago we began by declaring that all men are created equal; but now from that beginning we have run down to the other declaration, that for some men to enslave others is a "sacred right of self-government." ... Our republican robe is soiled and trailed in the dust. Let us repurify it. ... Let us re-adopt the Declaration of Independence, and with it, the practices, and policy, which harmonize with it. ... If we do this, we shall not only have saved the Union: but we shall have saved it, as to make, and keep it, forever worthy of the saving.
The meaning of the Declaration was a recurring topic in the famed debates between Lincoln and Stephen Douglas in 1858. Douglas argued that "all men are created equal" in the Declaration referred to white men only. The purpose of the Declaration, he said, had simply been to justify the independence of the United States, and not to proclaim the equality of any "inferior or degraded race". Lincoln, however, thought that the language of the Declaration was deliberately universal, setting a high moral standard to which the American republic should aspire. "I had thought the Declaration contemplated the progressive improvement in the condition of all men everywhere", he said. During the seventh and last joint debate with Stephen Douglas, at Alton, Illinois, on October 15, 1858, Lincoln said of the Declaration:
I think the authors of that notable instrument intended to include all men, but they did not mean to declare all men equal in all respects. They did not mean to say all men were equal in color, size, intellect, moral development, or social capacity. They defined with tolerable distinctness in what they did consider all men created equal — equal in "certain inalienable rights, among which are life, liberty, and the pursuit of happiness." This they said, and this they meant. They did not mean to assert the obvious untruth that all were then actually enjoying that equality, or yet that they were about to confer it immediately upon them. In fact, they had no power to confer such a boon. They meant simply to declare the right, so that the enforcement of it might follow as fast as circumstances should permit. They meant to set up a standard maxim for free society which should be familiar to all, constantly looked to, constantly labored for, and even, though never perfectly attained, constantly approximated, and thereby constantly spreading and deepening its influence, and augmenting the happiness and value of life to all people, of all colors, everywhere.
According to Pauline Maier, Douglas's interpretation was more historically accurate, but Lincoln's view ultimately prevailed. "In Lincoln's hands", wrote Maier, "the Declaration of Independence became first and foremost a living document" with "a set of goals to be realized over time".
Like Daniel Webster, James Wilson, and Joseph Story before him, Lincoln argued that the Declaration of Independence was a founding document of the United States, and that this had important implications for interpreting the Constitution, which had been ratified more than a decade after the Declaration. Although the Constitution did not use the word "equality", Lincoln believed that "all men are created equal" remained a part of the nation's founding principles. He famously expressed this belief in the opening sentence of his 1863 Gettysburg Address: "Four score and seven years ago [i.e. in 1776] our fathers brought forth on this continent, a new nation, conceived in Liberty, and dedicated to the proposition that all men are created equal."
Lincoln's view of the Declaration as a moral guide to interpreting the Constitution became influential. "For most people now," wrote Garry Wills in 1992, "the Declaration means what Lincoln told us it means, as a way of correcting the Constitution itself without overthrowing it." Admirers of Lincoln, such as Harry V. Jaffa, praised this development. Critics of Lincoln, notably Willmoore Kendall and Mel Bradford, argued that Lincoln dangerously expanded the scope of the national government, and violated states' rights, by reading the Declaration into the Constitution.
Women's suffrage and the Declaration
In July 1848, the first Woman's Rights Convention was held in Seneca Falls, New York. The convention was organized by Elizabeth Cady Stanton, Lucretia Mott, Mary Ann McClintock, Martha Coffin Wright, and Jane Hunt. In their "Declaration of Sentiments," patterned on the Declaration of Independence, the convention members demanded social and political equality for women. Their motto was "All men and women are created equal", and the convention demanded suffrage for women. The suffrage movement was supported by William Lloyd Garrison and Frederick Douglass.
In popular culture
The adoption of the Declaration of Independence was dramatized in the 1969 Tony Award-winning musical play 1776, and the 1972 movie of the same name, as well as in the 2008 television miniseries John Adams.
The engrossed copy of the Declaration is central to the 2004 Hollywood film National Treasure, in which the main character steals the document because he believes it has secret clues to a treasure hidden by some of the Founding Fathers.
The Declaration is featured in The Probability Broach (1980), an alternative history novel, when one word is added to the document, to read that governments "derive their just power from the unanimous consent of the governed".
- Becker, Declaration of Independence, 5.
- "Declaring Independence", Revolutionary War, Digital History, University of Houston. From Adams' notes: "Why will you not? You ought to do it." "I will not." "Why?" "Reasons enough." "What can be your reasons?" "Reason first, you are a Virginian, and a Virginian ought to appear at the head of this business. Reason second, I am obnoxious, suspected, and unpopular. You are very much otherwise. Reason third, you can write ten times better than I can." "Well," said Jefferson, "if you are decided, I will do as well as I can." "Very well. When you have drawn it up, we will have a meeting.""
- "Did You Know...Independence Day Should Actually Be July 2?" (Press release). National Archives and Records Administration. 1 June 2005. Retrieved 2012-07-04.
- Boyd (1976), The Declaration of Independence: The Mystery of the Lost Original, 438.
- Stephen E. Lucas, "Justifying America: The Declaration of Independence as a Rhetorical Document," in Thomas W. Benson, ed., American Rhetoric: Context and Criticism, Carbondale, Illinois: Southern Illinois University Press, 1989, p. 85
- Ellis, American Creation, 55–56.
- McPherson, Second American Revolution, 126.
- Hazelton, Declaration History, 19.
- Christie and Labaree, Empire or Independence, 31.
- Bailyn, Ideological Origins, 162.
- Bailyn, Ideological Origins, 200–02.
- Bailyn, Ideological Origins, 180–82.
- Middlekauff, Glorious Cause, 241.
- Bailyn, Ideological Origins, 224–25.
- Middlekauff, Glorious Cause, 241–42. The writings in question include Wilson's Considerations on the Authority of Parliament and Jefferson's A Summary View of the Rights of British America (both 1774), as well as Samuel Adams's 1768 Circular Letter.
- Middlekauff, Glorious Cause, 168; Ferling, Leap in the Dark, 123–24.
- Hazelton, Declaration History, 13; Middlekauff, Glorious Cause, 318.
- Middlekauff, Glorious Cause, 318.
- Maier, American Scripture, 25. The text of the 1775 king's speech is online, published by the American Memory project.
- Maier, American Scripture, 25.
- Rakove, Beginnings of National Politics, 88–90.
- Christie and Labaree, Empire or Independence, 270; Maier, American Scripture, 31–32.
- Jensen, Founding, 667.
- Rakove, Beginnings of National Politics, 89; Maier, American Scripture, 33.
- Maier, American Scripture, 33–34.
- Hazelton, Declaration History, 209; Maier, American Scripture, 25–27.
- Friedenwald, Interpretation, 67.
- Friedenwald, Interpretation, 77.
- Maier, American Scripture, 30.
- Maier, American Scripture, 59.
- Jensen, Founding, 671; Friedenwald, Interpretation, 78.
- Maier, American Scripture, 48, and Appendix A, which lists the state and local declarations.
- Jensen, Founding, 678–79.
- Jensen, Founding, 679; Friedenwald, Interpretation, 92–93.
- Maier, American Scripture, 69–72, quote on 72.
- Maier, American Scripture, 48. The modern scholarly consensus is that the best-known and earliest of the local declarations, the Mecklenburg Declaration of Independence, allegedly adopted in May 1775 (a full year before other local declarations), is most likely inauthentic; Maier, American Scripture, 174.
- Jensen, Founding, 682.
- Jensen, Founding, 683.
- Jensen, Founding, 684; Maier, American Scripture, 37. For the full text of the May 10 resolve see the Journals of the Continental Congress.
- Jensen, Founding, 684.
- Burnett, Continental Congress, 159. The text of Adams's letter is online.
- Maier, American Scripture, 37; Jensen, Founding, 684. For the full text of the May 15 preamble see the Journals of the Continental Congress.
- Rakove, National Politics, 96; Jensen, Founding, 684; Friedenwald, Interpretation, 94.
- Rakove, National Politics, 97; Jensen, Founding, 685.
- Maier, American Scripture, 38.
- Boyd, Evolution, 18; Maier, American Scripture, 63. The text of the May 15 Virginia resolution is online at Yale Law School's Avalon Project.
- Maier, American Scripture, 41; Boyd, Evolution, 19.
- Jensen, Founding, 689–90; Maier, American Scripture, 42.
- Jensen, Founding, 689; Armitage, Global History, 33–34. The quote is from Jefferson's notes; Boyd, Papers of Jefferson, 1:311.
- Maier, American Scripture, 42–43; Friedenwald, Interpretation, 106.
- Jensen, Founding, 691–92.
- Friedenwald, Interpretation, 106–07; Jensen, Founding, 691.
- Jensen, Founding, 692.
- Jensen, Founding, 693.
- Jensen, Founding, 694.
- Jensen, Founding, 694–96; Friedenwald, Interpretation, 96; Maier, American Scripture, 68.
- Friedenwald, Interpretation, 118; Jensen, Founding, 698.
- Friedenwald, Interpretation, 119–20.
- Dupont and Onuf, 3.
- Maier, American Scripture, 97–105; Boyd, Evolution, 21.
- Boyd, Evolution, 22.
- Maier, American Scripture, 104.
- Becker, Declaration of Independence, 4.
- Jensen, Founding, 701.
- John E. Ferling, Setting the World Ablaze: Washington, Adams, Jefferson, and the American Revolution, Oxford University Press. ISBN 978-0-19-513409-4. OCLC 468591593, pp. 131-137
- Burnett, Continental Congress, 181.
- Jensen, Founding, 699.
- Burnett, Continental Congress, 182; Jensen, Founding, 700.
- Maier, American Scripture, 45.
- Boyd, Evolution, 19.
- Jensen, Founding, 703–04.
- Maier, American Scripture, 160–61.
- Maier, American Scripture, 146–50.
- Lucas, Stephen E. "The Stylistic Artistry of the Declaration of Independence". National Archives and Records Administration. Retrieved 2012-07-04.
- "Index of Signers by State". ushistory.org - Independence Hall Association in Philadelphia. Retrieved 2006-10-12.
- "TO HENRY LEE — Thomas Jefferson The Works, vol. 12 (Correspondence and Papers 1816-1826; 1905)". The Online Library of Liberty. May 8, 1825. Retrieved March 8, 2008.
- Malone, Jefferson the Virginian, 221; Maier, American Scripture, 125–26.
- Maier, American Scripture, 126–28.
- Maier, American Scripture, 53–57.
- Maier found no evidence that the Dutch Act of Abjuration served as a model for the Declaration and considers the argument "unpersuasive" (American Scripture, p. 264). Armitage discounts the influence of the Scottish and Dutch acts, and writes that neither was called "declarations of independence" until fairly recently (Global History, pp. 42–44). For the argument in favor of the influence of the Dutch act, see Stephen E. Lucas, "The 'Plakkaat van Verlatinge': A Neglected Model for the American Declaration of Independence", in Rosemarijn Hofte and Johanna C. Kardux, eds., Connecting Cultures: The Netherlands in Five Centuries of Transatlantic Exchange (Amsterdam, 1994), 189–207.
- Boyd, Evolution, 16–17.
- "The Three Greatest Men". Retrieved June 13, 2009. "Jefferson identified Bacon, Locke, and Newton as "the three greatest men that have ever lived, without any exception". Their works in the physical and moral sciences were instrumental in Jefferson's education and world view."
- Becker, Declaration of Independence, 27.
- Ray Forrest Harvey, Jean Jacques Burlamaqui: A Liberal Tradition in American Constitutionalism (Chapel Hill, North Carolina, 1937), 120.
- A brief, online overview of the classical liberalism vs. republicanism debate is Alec Ewald, "The American Republic: 1760-1870" (2004). In a similar vein, historian Robert Middlekauff argues that the political ideas of the independence movement took their origins mainly from the "eighteenth-century commonwealthmen, the radical Whig ideology", which in turn drew on the political thought of John Milton, James Harrington, and John Locke. See Robert Middlekauff (2005), The Glorious Cause, pp. 3-6, 51-52, 136
- Wills, Inventing America, especially chs. 11–13. Wills concludes (p. 315) that "the air of enlightened America was full of Hutcheson's politics, not Locke's."
- Hamowy, "Jefferson and the Scottish Enlightenment", argues that Wills gets much wrong (p. 523), that the Declaration seems to be influenced by Hutcheson because Hutcheson was, like Jefferson, influenced by Locke (pp. 508–09), and that Jefferson often wrote of Locke's influence, but never mentioned Hutcheson in any of his writings (p. 514). See also Kenneth S. Lynn, "Falsifying Jefferson," Commentary 66 (Oct. 1978), 66–71. Ralph Luker, in "Garry Wills and the New Debate Over the Declaration of Independence" (The Virginia Quarterly Review, Spring 1980, 244–61) agreed that Wills overstated Hutcheson's influence to provide a communitarian reading of the Declaration, but he also argued that Wills's critics similarly read their own views into the document.
- John Phillip Reid, "The Irrelevance of the Declaration", in Hendrik Hartog, ed., Law in the American Revolution and the Revolution in the Law (New York University Press, 1981), 46–89.
- Mahoney, Declaration of Independence.
- Benjamin Franklin to Charles F.W. Dumas, December 19, 1775, in The Writings of Benjamin Franklin, ed. Albert Henry Smyth (New York: 1970), 6:432.
- Armitage, Global History, 21, 38–40.
- Warren, "Fourth of July Myths", 242–43.
- Hazelton, Declaration History, 299–302; Burnett, Continental Congress, 192.
- The U.S. State Department (1911), The Declaration of Independence, 1776, pp. 10, 11.
- Warren, "Fourth of July Myths", 245–46; Hazelton, Declaration History, 208–19; Wills, Inventing America, 341.
- Ritz, "Authentication", 179–200.
- Ritz, "Authentication", 194.
- Hazelton, Declaration History, 208–19.
- Hazelton, Declaration History, 209.
- Merriam-Webster online; Dictionary.com.
- Malone, Story of the Declaration, 91.
- Maier, American Scripture, 156.
- Armitage, Global History, 72.
- Maier, American Scripture, 155.
- Maier, American Scripture, 156–57.
- Armitage, Global History, 73.
- The Declaration of Independence in World Context
- The Contagion of Sovereignty: Declarations of Independence since 1776
- Armitage, Global History, 75.
- Jessup, John J. (September 20, 1943). "America and the Future". Life: 105. Retrieved 09-03-2011.
- Hutchinson, Thomas (1776). In Eicholz, Hans. Strictures upon the Declaration of the Congress at Philadelphia in a Letter to a Noble Lord, &c.. London: Liberty Fund. Retrieved 2012-11-07.
- Armitage, Global History, 74.
- Bailyn, Ideological Origins, 155–56.
- Armitage, Global History, 79–80.
- Armitage, Global History, 76–77.
- Peter Kolchin, American Slavery, 1619-1877 (1993), pp. 77-79, 81
- Quarles, Benjamin (August 1975). "Black America at the Time of the Revolutionary War". Ebony: 44, 45, 48. Retrieved 09-03-2011.
- "The Declaration of Independence: A History". Charters of Freedom. National Archives and Records Administration. Retrieved July 1, 2011.
- Malone, Story of the Declaration, 263.
- "Charters of Freedom Re-encasement Project". National Archives and Records Administration. Retrieved July 1, 2011.
- "Rare copy of United States Declaration of Independence found in Kew". The Daily Telegraph. July 3, 2009. Retrieved July 1, 2011.
- Ann Marie Dube (May 1996). "The Declaration of Independence". A Multitude of Amendments, Alterations and Additions: The Writing and Publicizing of the Declaration of Independence, the Articles of Confederation, and the Constitution of the United States. National Park Service. Retrieved July 1, 2011.
- Boyd, "Lost Original", 446.
- Boyd, Papers of Jefferson, 1:421.
- Becker, Declaration of Independence, 142 note 1. Boyd (Papers of Jefferson, 1:427–28) casts doubt on Becker's belief that the change was made by Franklin.
- Boyd, "Lost Original", 448–50. Boyd argued that if a document was signed on July 4--which he thought unlikely--it would have been the Fair Copy, and probably would have been signed only by Hancock and Thomson.
- Ritz, "From the Here", speculates that the Fair Copy was immediately sent to the printer so that copies could be made for each member of Congress to consult during the debate. All of these copies were then destroyed, theorizes Ritz, to preserve secrecy.
- Armitage, Global History, 87–88; Maier, American Scripture, 162, 168–69.
- McDonald, "Jefferson's Reputation", 178–79; Maier, American Scripture, 160.
- Armitage, Global History, 92.
- Armitage, Global History, 90; Maier, American Scripture, 165–67.
- Maier, American Scripture, 167.
- Armitage, Global History, 82.
- Maier, American Scripture, 166–68.
- Armitage, Global History, 113.
- Armitage, Global History, 120–35.
- Armitage, Global History, 104, 113.
- McDonald, "Jefferson's Reputation", 172.
- McDonald, "Jefferson's Reputation", 172, 179.
- McDonald, "Jefferson's Reputation", 179; Maier, American Scripture, 168–71.
- McDonald, "Jefferson's Reputation", 180–84; Maier, American Scripture, 171.
- Wills, Inventing America, 348.
- Detweiler, "Changing Reputation", 571–72; Maier, American Scripture, 175–78.
- Detweiler, "Changing Reputation", 572; Maier, American Scripture, 175.
- Detweiler, "Changing Reputation", 572; Maier, American Scripture, 175–76; Wills, Inventing America, 324. See also John C. Fitzpatrick, Spirit of the Revolution (Boston 1924).
- Maier, American Scripture, 176.
- Wills, Inventing America, 90.
- Armitage, "Global History," 93.
- Maier, American Scripture, 196–97.
- Maier, American Scripture, 197. See also Philip S. Foner, ed., We, the Other People: Alternative Declarations of Independence by Labor Groups, Farmers, Woman's Rights Advocates, Socialists, and Blacks, 1829–1975 (Urbana 1976).
- Maier, American Scripture, 197; Armitage, Global History, 95.
- Cohen (1969), Thomas Jefferson and the Problem of Slavery
- Armitage, Global History, 77.
- Wyatt-Brown, Lewis Tappan, 287.
- Mayer, All on Fire, 53, 115.
- Maier, American Scripture, 198–99.
- Detweiler, "Congressional Debate", 598.
- Detweiler, "Congressional Debate", 604.
- Detweiler, "Congressional Debate", 605.
- Maier, American Scripture, 199; Bailyn, Ideological Origins, 246.
- Maier, American Scripture, 200.
- Maier, American Scripture, 200–01.
- Maier, American Scripture, 201–02.
- McPherson, Second American Revolution, 126–27.
- Maier, American Scripture, 204.
- Maier, American Scripture, 204–05.
- "Abraham Lincoln (1809–1865): Political Debates Between Lincoln and Douglas 1897, page 415.". Bartleby. Retrieved 26 January 2013.
- Maier, American Scripture, 207.
- Wills, Lincoln at Gettysburg, 100.
- Wills, Lincoln at Gettysburg, 129–31.
- Wills, Lincoln at Gettysburg, 145.
- Wills, Lincoln at Gettysburg, 147.
- Wills, Lincoln at Gettysburg, 39, 145–46. See also Harry V. Jaffa, Crisis of the House Divided (1959) and A New Birth of Freedom: Abraham Lincoln and the Coming of the Civil War (2000); Willmoore Kendall and George W. Carey, The Basic Symbols of the American Political Tradition (1970); and M.E. Bradford, "The Heresy of Equality: A Reply to Harry Jaffa" (1976), reprinted in A Better Guide than Reason (1979) and Modern Age, the First Twenty-five Years (1988).
- Norton, et al (2010), p. 301.
- Armitage, David. The Declaration Of Independence: A Global History. Cambridge, Massachusetts: Harvard University Press, 2007. ISBN 978-0-674-02282-9.
- Bailyn, Bernard. The Ideological Origins of the American Revolution. Enlarged edition. Originally published 1967. Harvard University Press, 1992. ISBN 0-674-44302-0.
- Becker, Carl. The Declaration of Independence: A Study in the History of Political Ideas. 1922. Available online from The Online Library of Liberty and Google Book Search. Revised edition New York: Vintage Books, 1970. ISBN 0-394-70060-0.
- Boyd, Julian P. The Declaration of Independence: The Evolution of the Text. Originally published 1945. Revised edition edited by Gerard W. Gawalt. University Press of New England, 1999. ISBN 0-8444-0980-4.
- Boyd, Julian P., ed. The Papers of Thomas Jefferson, vol. 1. Princeton University Press, 1950.
- Boyd, Julian P. "The Declaration of Independence: The Mystery of the Lost Original". Pennsylvania Magazine of History and Biography 100, number 4 (October 1976), 438–67.
- Burnett, Edmund Cody. The Continental Congress. New York: Norton, 1941.
- Christie, Ian R. and Benjamin W. Labaree. Empire or Independence, 1760–1776: A British-American Dialogue on the Coming of the American Revolution. New York: Norton, 1976.
- Detweiler, Philip F. "Congressional Debate on Slavery and the Declaration of Independence, 1819–1821," American Historical Review 63 (April 1958): 598–616. in JSTOR
- Detweiler, Philip F. "The Changing Reputation of the Declaration of Independence: The First Fifty Years." William and Mary Quarterly, 3rd series, 19 (1962): 557–74. in JSTOR
- Dumbauld, Edward. The Declaration of Independence And What It Means Today. Norman: University of Oklahoma Press, 1950.
- Ellis, Joseph. American Creation: Triumphs and Tragedies at the Founding of the Republic. New York: Knopf, 2007. ISBN 978-0-307-26369-8.
- Dupont, Christian Y. and Peter S. Onuf, eds. Declaring Independence: The Origin and Influence of America's Founding Document. Revised edition. Charlottesville, Virginia: University of Virginia Library, 2010. ISBN 978-0-9799997-1-0.
- Ferling, John E. A Leap in the Dark: The Struggle to Create the American Republic. New York: Oxford University Press, 2003. ISBN 0-19-515924-1.
- Friedenwald, Herbert. The Declaration of Independence: An Interpretation and an Analysis. New York: Macmillan, 1904. Accessed via the Internet Archive.
- Gustafson, Milton. "Travels of the Charters of Freedom". Prologue Magazine 34, no 4. (Winter 2002).
- Hamowy, Ronald. "Jefferson and the Scottish Enlightenment: A Critique of Garry Wills's Inventing America: Jefferson's Declaration of Independence". William and Mary Quarterly, 3rd series, 36 (October 1979), 503–23.
- Hazelton, John H. The Declaration of Independence: Its History. Originally published 1906. New York: Da Capo Press, 1970. ISBN 0-306-71987-8. 1906 edition available on Google Book Search
- Journals of the Continental Congress, 1774–1789, Vol. 5 (Library of Congress, 1904–1937)
- Jensen, Merrill. The Founding of a Nation: A History of the American Revolution, 1763–1776. New York: Oxford University Press, 1968.
- Mahoney, D. J. (1986). "Declaration of independence". Society 24: 46–48. doi:10.1007/BF02695936.
- Lucas, Stephen E., "Justifying America: The Declaration of Independence as a Rhetorical Document," in Thomas W. Benson, ed., American Rhetoric: Context and Criticism, Carbondale, Illinois: Southern Illinois University Press, 1989
- Maier, Pauline. American Scripture: Making the Declaration of Independence. New York: Knopf, 1997. ISBN 0-679-45492-6.
- Malone, Dumas. Jefferson the Virginian. Volume 1 of Jefferson and His Time. Boston: Little Brown, 1948.
- Mayer, Henry. All on Fire: William Lloyd Garrison and the Abolition of Slavery. New York: St. Martin's Press, 1998. ISBN 0-312-18740-8.
- McDonald, Robert M. S. "Thomas Jefferson's Changing Reputation as Author of the Declaration of Independence: The First Fifty Years." Journal of the Early Republic 19, no. 2 (Summer 1999): 169–95.
- McPherson, James. Abraham Lincoln and the Second American Revolution. New York: Oxford University Press, 1991. ISBN 0-19-505542-X.
- Middlekauff, Robert. The Glorious Cause: The American Revolution, 1763–1789. Revised and expanded edition. New York: Oxford University Press, 2005.
- Norton, Mary Beth, et al., A People and a Nation, Eighth Edition, Boston, Wadsworth, 2010. ISBN 0-547-17558-2.
- Rakove, Jack N. The Beginnings of National Politics: An Interpretive History of the Continental Congress. New York: Knopf, 1979. ISBN 0-8018-2864-3
- Ritz, Wilfred J. "The Authentication of the Engrossed Declaration of Independence on July 4, 1776". Law and History Review 4, no. 1 (Spring 1986): 179–204.
- Ritz, Wilfred J. "From the Here of Jefferson's Handwritten Rough Draft of the Declaration of Independence to the There of the Printed Dunlap Broadside". Pennsylvania Magazine of History and Biography 116, no. 4 (October 1992): 499–512.
- Tsesis, Alexander. For Liberty and Equality: The Life and Times of the Declaration of Independence (Oxford University Press; 2012) 397 pages; explores the impact on American politics, law, and society since its drafting.
- Warren, Charles. "Fourth of July Myths." The William and Mary Quarterly, Third Series, vol. 2, no. 3 (July 1945): 238–72. in JSTOR
- United States Department of State, "The Declaration of Independence, 1776", 1911.
- Wills, Garry. Inventing America: Jefferson's Declaration of Independence. Garden City, New York: Doubleday, 1978. ISBN 0-385-08976-7.
- Wills, Garry. Lincoln at Gettysburg: The Words That Rewrote America. New York: Simon & Schuster, 1992. ISBN 0-671-76956-1.
- Wyatt-Brown, Bertram. Lewis Tappan and the Evangelical War Against Slavery. Cleveland: Press of Case Western Reserve University, 1969. ISBN 0-8295-0146-0.
- "Declare the Causes: The Declaration of Independence" lesson plan for grades 9-12 from National Endowment for the Humanities
- Declaration of Independence at the National Archives
- Declaration of Independence at the Library of Congress
- "The Stylistic Artistry of the Declaration of Independence" by Stephen E. Lucas
- Short film released in 2002 with actors reading the Declaration, with an introduction by Morgan Freeman
- Mobile-Friendly Declaration of Independence
Loyalist Responses to the Declaration of Independence
- Strictures upon the Declaration of the Congress at Philadelphia (London 1776), Thomas Hutchinson's reaction to the Declaration
- The Loyalist Declaration of Independence published in The Royal Gazette, (New York) on November 17, 1781 (Transcription provided by Bruce Wallace and posted on The On-Line Institute for Advanced Loyalist Studies.)
An algorithm is a general solution of a problem which can be written as a verbal description of a precise, logical sequence of actions. Cooking recipes, assembly instructions for appliances and toys, or precise directions to reach a friend's house, are all examples of algorithms. A computer program is an algorithm expressed in a specific programming language. An algorithm is the key to developing a successful program.
Suppose a business office requires a program for computing its payroll. There are several people employed. They work regular hours, and sometimes overtime. The task is to compute pay for each person as well as compute the total pay disbursed.
Given the problem, we may wish to express our recipe or algorithm for solving the payroll problem in terms of repeated computations of total pay for several people. The logical modules involved are easy to see.
Repeat the following while there is more data: get data for an individual, calculate the pay for the individual from the current data, update the cumulative pay disbursed so far, and print the pay for the individual. After the data is exhausted, print the total pay disbursed.
Figure 1.5 shows a structural diagram for our task. This is a layered diagram showing the development of the steps to be performed to solve the task. Each box corresponds to some subtask which must be performed. On each layer, it is read from left to right to determine the performance order. Proceeding down one layer corresponds to breaking a task up into smaller component steps -- a refinement of the algorithm. In our example, the payroll task is at the top and that box represents the entire solution to the problem. On the second layer, we have divided the problem into two subtasks: processing a single employee's pay in a loop (to be described below), and printing the total pay disbursed for all employees. The subtask of processing an individual pay record is then further refined in the next layer. It consists of first reading data for the employee, then calculating the pay, updating a cumulative total of pay disbursed, and finally printing the pay for the employee being processed.
The structural diagram is useful in developing the steps involved in designing the algorithm. Boxes are refined until the steps within the box are ``doable''. Our diagram corresponds well with the algorithm developed above. However, this type of diagram is not very good at expressing the sequencing of steps in the algorithm. For example, the concept of looping over many employees is lost in the bottom layer of the diagram. Another diagram, called a flow chart, is useful for showing the control flow of the algorithm, and can be seen in Figure 1.6. Here the actual flow of control for repetitions is shown explicitly. We first read data since the control flow requires us to test if there is more data. If the answer is ``yes'', we proceed to the calculation of pay for an individual, updating of total disbursed pay so far, and printing of the individual pay. We then read the next set of data and loop back to the test. If there is more data, repeat the process; otherwise control passes to the printing of total disbursed pay and the program ends.
From this diagram we can write our refined algorithm as shown below. However, one module may require further attention: the one that calculates pay. Each calculation of pay may involve arithmetic expressions such as multiplying hours worked by the rate of pay. It may also involve branching to alternate computations if the hours worked indicate overtime work. Incorporating these specifics, our algorithm may be written as follows:
get (first) data, e.g., id, hours worked, rate of pay
while more data (repeat the following)
    if hours worked exceeds 40 (then)
        calculate pay using overtime pay calculation
    otherwise
        calculate pay using regular pay calculation
    calculate cumulative pay disbursed so far
    print the pay statement for this set of data
    get (next) data
print cumulative pay disbursed
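A minimal C sketch of this algorithm is given below. The input format (one line per employee with id, hours worked, and rate of pay) and the time-and-a-half overtime rate are assumptions made only for illustration; the 40-hour threshold is taken from the algorithm itself.

    #include <stdio.h>

    int main(void)
    {
        int id;
        double hours, rate, pay, total = 0.0;

        /* get data; repeat while more data */
        while (scanf("%d %lf %lf", &id, &hours, &rate) == 3) {
            if (hours > 40.0)                                     /* overtime pay calculation */
                pay = 40.0 * rate + (hours - 40.0) * 1.5 * rate;  /* assumed 1.5x overtime rate */
            else                                                  /* regular pay calculation */
                pay = hours * rate;
            total += pay;                                 /* update cumulative pay disbursed */
            printf("id %d: pay %.2f\n", id, pay);         /* print the pay statement */
        }
        printf("total pay disbursed: %.2f\n", total);
        return 0;
    }

Reading until scanf fails plays the role of the "more data" test in the flow chart; the program otherwise follows the algorithm step for step.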
The algorithm is the most important part of solving difficult problems. Structural diagrams and flow charts are tools that make the job of writing the algorithm easier, especially in complex programs. The final refined algorithm should use the same type of constructs as most programming languages. Once an algorithm is developed, the job of writing a program in a computer language is relatively easy; a simple translation of the algorithm steps into the proper statements for the language. In this text, we will use algorithms to specify how tasks will be performed. Programs that follow the algorithmic logic will then be easy to implement. Readers may wish to draw structural diagrams and flow charts as visual aids in understanding complex algorithms.
There is a common set of programming constructs provided by most languages useful for algorithm construction, including:
decision making (branching):

    if overtime hours exceed 40
        then calculate pay using overtime pay calculation
        otherwise calculate pay using regular pay calculation

repetition (looping):

    while new data repeat the following ...

input and output:

    read data
    write/print data, individual pay, disbursed pay

Languages that include the above types of constructions are called algorithmic languages, and include such languages as C, Pascal, and FORTRAN.
A program written in an algorithmic language must, of course, be translated into machine language. A utility program, called a compiler, translates source programs in algorithmic languages to object programs in machine language. One instruction in an algorithmic language, called a statement, usually translates to several machine level instructions. The work of the compiler, the translation process, is called compilation.
To summarize, program writing requires first formulating the underlying algorithm that will solve a particular problem. The algorithm is then coded into an algorithmic language by the programmer, compiled by the compiler, and loaded into memory by the operating system. Finally, the program is executed by the hardware.
In chemistry, chemical synthesis is the purposeful execution of one or more chemical reactions in order to get a product, or several products. This happens by physical and chemical manipulations usually involving one or more reactions. In modern laboratory usage, this tends to imply that the process is reproducible, reliable, and established to work in multiple laboratories.
A chemical synthesis begins by selection of compounds that are known as reagents or reactants. Various reaction types can be applied to these to synthesize the product, or an intermediate product. This requires mixing the compounds in a reaction vessel such as a chemical reactor or a simple round-bottom flask. Many reactions require some form of work-up procedure before the final product is isolated. The amount of product in a chemical synthesis is the reaction yield. Typically, chemical yields are expressed as a weight in grams or as a percentage of the total theoretical quantity of product that could be produced. A side reaction is an unwanted chemical reaction taking place that diminishes the yield of the desired product.
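Because the yield is just the ratio of the amount isolated to the theoretical amount, the arithmetic is simple. The C sketch below uses invented masses purely for illustration.

    #include <stdio.h>

    /* percent yield = 100 * (actual mass obtained) / (theoretical mass) */
    double percent_yield(double actual_g, double theoretical_g)
    {
        return 100.0 * actual_g / theoretical_g;
    }

    int main(void)
    {
        /* assumed figures: 8.2 g isolated where stoichiometry predicts 11.5 g */
        printf("yield = %.1f%%\n", percent_yield(8.2, 11.5));
        return 0;
    }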
The word synthesis in its current meaning was first used by the chemist Adolph Wilhelm Hermann Kolbe.
Many strategies exist in chemical synthesis that go beyond converting reactant A to reaction product B. In cascade reactions multiple chemical transformations take place within a single reactant, in multi-component reactions up to 11 different reactants form a single reaction product and in a telescopic synthesis one reactant goes through multiple transformations without isolation of intermediates.
Organic synthesis is a special branch of chemical synthesis dealing with the construction of organic compounds. It has developed into one of the most important components of organic chemistry. There are two main areas of research within the general area of organic synthesis: Total synthesis and methodology.
In the total synthesis of a complex product it may take multiple steps to synthesize the product of interest, and inordinate amounts of time. Skill in organic synthesis is prized among chemists and the synthesis of exceptionally valuable or difficult compounds has won chemists such as Robert Burns Woodward the Nobel Prize for Chemistry. If a chemical synthesis starts from basic laboratory compounds and yields something new, it is a purely synthetic process. If it starts from a product isolated from plants or animals and then proceeds to new compounds, the synthesis is described as a semisynthetic process.
A total synthesis is the complete chemical synthesis of complex organic molecules from simple, commercially available (petrochemical) or natural precursors. In a linear synthesis there is a series of steps which are performed one after another until the molecule is made- this is often adequate for a simple structure. The chemical compounds made in each step are usually referred to as synthetic intermediates. For more complex molecules, a convergent synthesis is often preferred. This is where several "pieces" (key intermediates) of the final product are synthesized separately, then coupled together, often near the end of the synthesis.
The "father" of modern organic synthesis is regarded as Robert Burns Woodward, who received the 1965 Nobel Prize for Chemistry for several brilliant examples of total synthesis such as his 1954 synthesis of strychnine. Some modern examples include Wender's, Holton's, Nicolaou's and Danishefsky's synthesis of Taxol.
Each step of a synthesis involves a chemical reaction, and reagents and conditions for each of these reactions need to be designed to give a good yield and a pure product, with as little work as possible. A method may already exist in the literature for making one of the early synthetic intermediates, and this method will usually be used rather than "trying to reinvent the wheel." However most intermediates are compounds that have never been made before, and these will normally be made using general methods developed by methodology researchers. To be useful, these methods need to give high yields and to be reliable for a broad range of substrates. Methodology research usually involves three main stages—discovery, optimisation, and studies of scope and limitations. The discovery requires extensive knowledge of and experience with chemical reactivities of appropriate reagents. Optimisation is where one or two starting compounds are tested in the reaction under a wide variety of conditions of temperature, solvent, reaction time, and so on, until the optimum conditions for product yield and purity are found. Then the researcher tries to extend the method to a broad range of different starting materials, to find the scope and limitations. Some larger research groups may then perform a total synthesis (see above) to showcase the new methodology and demonstrate its value in a real application.
Many complex natural products occur as one pure enantiomer. Traditionally, however, a total synthesis could only make a complex molecule as a racemic mixture, that is, as an equal mixture of both possible enantiomer forms. The racemic mixture might then be separated via chiral resolution.
In the latter half of the twentieth century, chemists began to develop methods of asymmetric catalysis and kinetic resolution whereby reactions could be directed to produce only one enantiomer rather than a racemic mixture. Early examples include Sharpless epoxidation (K. Barry Sharpless) and asymmetric hydrogenation (William S. Knowles and Ryoji Noyori), and these workers went on to share the Nobel Prize in Chemistry in 2001 for their discoveries. Such reactions gave chemists a much wider choice of enantiomerically pure molecules to start from, where previously only natural starting materials could be used. Using techniques pioneered by Robert B. Woodward and new developments in synthetic methodology, chemists became more able to take simple molecules through to more complex molecules without unwanted racemisation, by understanding stereocontrol. This allowed the final target molecule to be synthesised as one pure enantiomer without any resolution being necessary. Such techniques are referred to as asymmetric synthesis.
Elias James Corey brought a more formal approach to synthesis design, based on retrosynthetic analysis, for which he won the Nobel Prize for Chemistry in 1990. In this approach, the research is planned backwards from the product, using standard rules. The steps are shown using retrosynthetic arrows (drawn as =>), which in effect means "is made from." Other workers in this area include one of the pioneers of computational chemistry, James B. Hendrickson, who developed a computer program for designing a synthesis based on sequences of generic "half-reactions." Computer-aided methods have recently been reviewed.
The other meaning of chemical synthesis is narrow and restricted to a specific kind of chemical reaction, a direct combination reaction, in which two or more reactants combine to form a single product. The general form of a direct combination reaction is:
- A + B → AB
where A and B are elements or compounds, and AB is a compound consisting of A and B. Examples of combination reactions include:
- 2Na + Cl2 → 2 NaCl (formation of table salt)
- S + O2 → SO2 (formation of sulfur dioxide)
- 4 Fe + 3 O2 → 2 Fe2O3 (iron rusting)
- CO2 + H2O → H2CO3 (carbon dioxide dissolving and reacting with water to form carbonic acid)
There are four special synthesis rules:
- metal oxide + H2O → metal hydroxide
- nonmetal oxide + H2O → oxy acid
- metal chloride + O2 → metal chlorate
- metal oxide + CO2 → metal carbonate
- K.C. Nicolaou and E.J. Sorensen, Classics in Total Synthesis (New York: VCH, 1996).
- R.B. Woodward, M.P. Cava, W.D. Ollis, A. Hunger, H.U. Daeniker, and K. Schenker, The Total Synthesis of Strychnine, Journal of the American Chemical Society 76 (18): 4749–4751.
- J. March and D. Smith, Advanced Organic Chemistry, 5th ed. (New York: Wiley, 2001).
- E. J. Corey and Xue-Min Cheng, The Logic of Chemical Synthesis (New York: John Wiley, 1999, ISBN 0471509795).
- Matthew H. Todd, Computer-aided Organic Synthesis, Chemical Society Reviews 34: 247–266.
- Corey, E. J., and Xue-Min Cheng. 1995. The Logic of Chemical Synthesis. New York: John Wiley. ISBN 0471509795.
- McMurry, John. 2004. Organic Chemistry, 6th ed. Belmont, CA: Brooks/Cole. ISBN 0534420052.
- Solomons, T.W. Graham, and Craig B. Fryhle. 2004. Organic Chemistry, 8th ed. Hoboken, NJ: John Wiley. ISBN 0471417998.
- Vogel, A.I., et al. 1996. Vogel's Textbook of Practical Organic Chemistry, 5th Edition. Prentice Hall. ISBN 0582462363.
- Zumdahl, Steven S. 2005. Chemical Principles. New York, NY: Houghton Mifflin. ISBN 0618372067.
All links retrieved May 10, 2013.
- Natural product syntheses
- Organic Synthesis Search
- Organic Chemistry Synthesis Approach and Modern Development
In 1920, A. A. Michelson and F. G. Pease measured the angular diameter of Betelgeuse, α Orionis, with the 100-inch reflector at Mount Wilson. From its distance, they directly inferred its diameter, confirming that it was a huge star. This was the first direct measurement of stellar diameter; all other methods had been indirect and subject to uncertainty. Attempts to enlarge the phase interferometer to make the method applicable to a larger number of stars undertaken by Pease in later years were unsuccessful. In 1956 Hanbury Brown and Twiss applied a method they had devised for radio astronomy to visual astronomy and measured the angular diameter of Sirius, α Canis Majoris. This was the intensity interferometer, which removed most of the limitations of the phase interferometer, allowing measurements on a much larger sample of stars. A large interferometer was built at Narrabri, NSW, Australia, which finally provided a number of accurate stellar diameters by direct measurement after 1968.
These important and interesting developments are largely ignored in astronomy textbooks. The Michelson experiment is usually only briefly acknowledged, and the Hanbury Brown interferometer is not mentioned at all. One reason for this is the difficulty of explaining the method, especially without mathematics. Optics texts usually give a reasonable explanation, because the measurements are valuable examples of the wave nature of light, but the astronomy is slighted. For these reasons, I will attempt to give a thorough explanation of the methods of measuring stellar diameters by interferometry, together with the important dusky corners that they illuminate. First, we must review how stellar distances are found, which is a fundamental task of astronomy.
When we look at the stars through the years, we are impressed by their fixity, at least on human scales of time. Since Ptolemy looked at the stars, they have retained their places, except perhaps for slight differences in the case of a few stars such as Arcturus, which has moved about 1° since then. The lack of movement through the year, as the earth orbits the sun, and in secular time, can be considered evidence either of the fixity of the earth (if the stars are considered scattered in space) or that they are all at the same distance, or that they are extremely distant. Most old cosmologies put the stars on a spherical surface, the firmament, with heaven beyond, so their fixity was no problem. Other old cosmologies, which had stars scattered in space, interpreted the fixity as due to the fixity of the earth, though a few philosophers thought the stars greatly distant, a most unpalatable thought for most thinkers. There was no way to select between the alternatives.
As soon as the revolution of the earth about the sun was accepted, and the firmament banished, evidence for the apparent displacement of the nearer stars relative to the more distant, called parallax, was sought. Parallax is illustrated in the diagram at the right, together with some other definitions. A and B are the positions of the earth at times six months apart, so the base line AB is two astronomical units, 2a = 2 AU. The parallax is traditionally measured in arc-seconds, often written p" to emphasize the fact. The number 206,265 is the number of seconds in a radian. The distance from the sun O to the star S is d, measured in parsecs or light years. The light-year was originally for public consumption, to emphasize the inconceivably great distances involved, but remains a vivid and common unit.
Parallax, however, was not observed--a most unsatisfactory condition. William Herschel looked for it in vain. Not until 1838 was parallax finally found. Bessel thought that a star with large proper motion might be relatively close, so he chose 61 Cygni, a fifth-magnitude star in Cygnus, which appeared on a dense background of Milky Way stars that could form a good reference. By visual observation, using a special measuring instrument, he found a parallax of 0.296", which corresponded to a distance of 3.4 pc or 11 l.y. In the same year, Struve found 0.124" for Vega, α Lyrae, and Henderson 0.743" for α Centauri. These were huge distances, but were only to the apparently nearest stars, which surprised everyone. Henderson actually chose one of the nearest stars of all. Only dim, red Proxima Centauri has a larger parallax, at 0.786". It is about 4.2 l.y. distant.
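The conversion from parallax to distance is simply d = 1/p" parsecs, with about 3.26 light-years to the parsec. The short C sketch below reproduces the distances quoted above from the measured parallaxes.

    #include <stdio.h>

    int main(void)
    {
        const double ly_per_pc = 3.26;                /* light-years per parsec */
        const char *name[] = { "61 Cygni", "Vega", "alpha Centauri", "Proxima Centauri" };
        double p[] = { 0.296, 0.124, 0.743, 0.786 };  /* parallaxes, arc-seconds */

        for (int i = 0; i < 4; i++) {
            double d_pc = 1.0 / p[i];                 /* distance in parsecs */
            printf("%-17s p = %.3f\"  d = %.2f pc = %.1f l.y.\n",
                   name[i], p[i], d_pc, d_pc * ly_per_pc);
        }
        return 0;
    }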
The introduction of photography made it much easier to measure parallaxes. It was only necessary to compare plates taken 6 months apart, and special intstruments were developed to facilitate the task. When the two plates were presented alternately to the eye, nearby stars would jump back and forth, while the distant ones remained unmoved. In this way a large list of trigonometric parallaxes were determined, but they covered only the stars close to us, up to perhaps a parallax of 0.1", or a distance of 33 l.y.. These are merely our close neighbors in the vastness of space.
A star appears dimmer the farther it is away, according to M = m + 5 + 5 log p", where m is the visual magnitude of the star (smaller numbers mean brighter). Putting in p" = 0.1, we find m = M, called the absolute magnitude, the apparent magnitude if the star were viewed at a standard distance of 10 parsec. If by some magic we could infer the absolute magnitude M of any star we observed to have an apparent magnitude m, then the parallax could be determined by this formula based on the inverse-square law. There are many uncertainties here, the major ones the effect of interstellar absorption and, above all, the determination of M.
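Turned around, the relation gives the parallax once M has been estimated: log p" = (M - m - 5)/5. The C sketch below, using an invented pair of magnitudes, shows the idea behind such an estimate.

    #include <stdio.h>
    #include <math.h>

    /* parallax in arc-seconds from apparent magnitude m and absolute magnitude M,
       using M = m + 5 + 5 log p"                                                 */
    double parallax_arcsec(double m, double M)
    {
        return pow(10.0, (M - m - 5.0) / 5.0);
    }

    int main(void)
    {
        /* assumed example: a main-sequence star judged to have M = +1.0,
           observed at apparent magnitude m = +6.0                        */
        double p = parallax_arcsec(6.0, 1.0);
        printf("p = %.4f arcsec, d = %.0f pc\n", p, 1.0 / p);
        return 0;
    }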
The spectroscope shows that the spectra of stars can be classified into a regular series based principally on surface temperature: the types O, B, A, F, G, K, M and C, each quite recognizable. The Sun has a type G spectrum, and a surface temperature of 5800-6000K, for example. By classifying the spectra of the sample of stars whose distances we know by trigonometrical parallax, it was found by Hertzsprung and Russell that they roughly followed a single path on a plot of absolute magnitude M against spectral type, called the main sequence. Many stars did not follow this path, but were usually easy to recognize as different. If a star was presumed to be on the main sequence by all available evidence, then a look at its spectrum established its spectral class, and the Hertzsprung-Russell diagram gave an estimate of its absolute magnitude. Then its parallax, and so its distance, could be estimated. This is called a spectroscopic parallax, which extended our knowledge of stellar distances to really large distances, albeit approximately.
Any good astronomy text will show how larger and larger distances were estimated by other methods, such as Cepheid variables (the period gave a clue to the absolute magnitude, and these were very bright stars that could be seen a long ways) and the red shift, which extended distance knowledge far into the realm of the galaxies. However, trigonometric and spectroscopic parallaxes are sufficient for our present purposes. Accurate determination of distance is essential to accurate determination of stellar diameter.
It is worth remembering that no star has a parallax as large as 1", and that spectroscopic parallaxes are uncertain and subject to revision. For example, the parallax of Betelgeuse, α Orionis, was taken as 0.018" in 1920, but now a figure of 0.0055" is accepted. Of course, the contemplation of the vast distances in space is an essential part of the appreciation of astronomy, so very different from the views popularized by fiction, in which the Enterprise flits about space like a Portuguese trader in the Indian Ocean. Betelgeuse is 411 l.y. away. To get there about now, at the speed of light, you would have had to have departed when Shakespeare was born.
The other important preparation for our task is the understanding of inteference, interferometry and coherence. This is a big job, which requires reference to Optics texts for a thorough attack. Here we can only present the fundamentals in an abbreviated form. This should be sufficient for our purpose, however.
Observation teaches that if one candle gives a certain amount of illumination, then two candles give twice as much. The energy comes out on the light rays and just piles up where it is received, like so much snow. This reasonable and common-sense view is, of course, totally wrong, like the concept that matter is continuous, like cheese. It is useful in practice, but does not lead to understanding, only off into the weeds of speculation.
The truth is that light has an amplitude that moves on propagating wavefronts from its source. The amplitudes from different sources add at any point, and the energy received is the average value of the square of the resultant amplitude. Any effects caused by the adding of amplitudes are traditionally called interference, but amplitudes do not "interfere" with each other in the usual sense of the word, but seem blissfully independent of each other. We find that the wavelength of the amplitudes is quite small, only 500 nm for green visible light. Combined with the large velocity of light, 3 x 108 m/s, it turns out that the frequency is 0.606 x 1015 Hz, a really large value. This makes it impossible to observe the amplitude directly. All we can measure are time averages of functions of the amplitude. If A is the amplitude, then the intensity I = <AA*> is one such function. The angle brackets imply a time average over some suitable interval, much longer than the period of the oscillation of the amplitude. There are units to be considered, which introduce numerical factors, but we shall usually neglect them.
An amplitude can be represented by A(t) = Ae2πiνt aeiφe2πiνt, where the complex amplitude A has been given in polar form, with an amplitude a and phase φ. Unfortunately, we have to use the same word for what we have called a generalized amplitude and the modulus of a complex number. It would have been better to call a the modulus, but this is not usually done. The two meanings for "amplitude" are not easy to confuse, fortunately. This form of the amplitude is not typical of most light sources, and is a kind of idealization. However, it is approached closely by laser light, so it is easy for us to experience. When we do use laser illumination, the light-as-snow illusion is shattered, and there are fringes and spots everywhere. These are, of course, results of interference, and show that our amplitude picture is correct. Complex values are the easiest way to reflect the phase properties of an amplitude (in the general sense), and we need not be dismayed by their appearance.
Let's suppose we have two amplitudes, A = aei0 and B = be2πix/λ, where λ is the wavelength λ = c/ν, and x is a linear distance. When x = 0, the two amplitudes will be in phase, and the net amplitude will be A + B = a + b. The intensity I = (a + b)2, while the intensities in the unmixed beams are a2 and b2. The intensity in the mixed beam is the sum of the intensities in each beam alone, plus the amount 2ab, the interference term. If the two beams have equal amplitudes, then when the two beams fall together, the total intensity is four times the intensity of one beam, or (one candle) + (one candle) = (four candles), or 1 + 1 = 4. We never see this with candles, but we do with lasers, so the strange mathematics is quite valid. Energy is conserved, of course, so this extra intensity must come from somewhere else, where the intensity is less
If x = λ/2, then A = a and B = beiπ = -b. Now when we superimpose the two beams, the resultant amplitude is a - b, and the intensity is I = (a - b)2. The intensity is the sum of the separate intensities plus the interference term -2ab. If a = b, the intensity I = 0. Here, we have (one candle) + (one candle) = (zero candles), or 1 + 1 = 0. It is clear where the intensity came from for x = 0. As x increases steadily, the intensity forms bright fringes for x = 0, λ, 2λ, etc. and dark fringes for x = λ/2, 3λ/2, etc. If the amplitudes of the two beams are equal, the dark fringes are black, and the bright fringes are 4 times the average value. This gives the maximum contrast or visibility to the fringes. If b is less than a, the maxima are not as bright and the minima are not as dark. If b = 0, then the fringes disappear, and their visibility is zero. Michelson defined the visibility of fringes as V = (Imax - Imin)/(Imax + Imin), which ranges from 0 to 1.
Fringes had been observed since the early 17th century, when objects were illuminated by light coming from pinholes. Newton's rings are only one of the interference phenomena discovered by him. It was the explanation of the fringes that was lacking. No explanation was satisfactory until Thomas Young's experiments around 1801. Young did not discover fringes, but explained them in the current manner, which was no small accomplishment. He observed the interference of two beams, of the type just described, and measured the wavelength of light in terms of the fringe spacing. His experiment, in an abstract form, is a standard introduction to interference. It is not an easy experiment, especially when performed with a candle. Young actually used a wire, not two slits, which would have given an impossibly low illumination. When he held it before a pinhole through which a candle shone, fringes were seen in the shadow of the wire by direct observation, and their spacing could be compared with the diameter of the wire.
You can reproduce the experiment with an LED, a piece of #22 wire (diameter 0.6439 mm), and a hand lens of about 100 mm focal length, as shown in the diagram at the left. I used a yellow high-intensity LED in a clear envelope, viewed from the side where the source is practically a pinhole. The shield only reduces glare. I did not actually count the fringes (a micrometer eyepiece would make this easy) because I was holding the wire by hand, but the fine fringes were quite visible. The center fringe was a bright one.
This experiment uses some little-known characteristics of diffraction. If you look at a wire held at some distance from a pinhole, with your eye in the shadow of the wire, two short bright lines will be seen at the top and bottom edges of the wire. These act a line sources of light, producing two beams that intefere to make fringes in the shadow. There are also fringes outside the shadow, with different phase relations (the pattern is not continuous at the shadow edges), but are difficult to see in the glare. This gives a much larger intensity than two slits would in the same places, and made the experiment possible for Young. Of course, one could use two clear lines scratched carefully on a blackened photographic emulsion, as is done in schools, but the effect is not as good.
The geometry of a two-beam interference experiment is shown at the right. The source S is behind a pinhole that makes the illumination beyond the pinhole spatially coherent. That is, it all comes from the same direction and meets the two apertures with equal amplitudes. There may also be filtering that makes the light monochromatic, or temporally coherent, so that it approximates the model that we have been using. The distance D must be sufficient if the light at the apertures is to be coherent, something we will have much to say about below. However, the fringe spacing does not depend on D in any way. The screen is at a distance f from the apertures, presumed much larger than the separation a of the apertures. A lens of focal length f placed at its focal length from the screen makes the geometry exact in a short distance, and prevents too much spreading of the illumination. The fringe spacing is fλ/a, a useful relation to remember.
For the suggested Young's experiment, λ = 600 nm (roughly), f = 100 mm, and a = 0.6439 mm, giving a spacing of 0.093 mm, which seems roughly in agreement with observation. The wire will be approximately 7 fringes wide as seen through the lens. This is a way of measuring the wavelength if you know the wire diameter, or the wire diameter if you know the wavelength. Try #30 wire and observe that the fringes are not only wider, but fewer fit into the shadow.
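The same arithmetic can be checked in a few lines of C, using the values just quoted (the wavelength is only a rough figure for a yellow LED):

    #include <stdio.h>

    int main(void)
    {
        double lambda = 600e-9;      /* wavelength, m (rough value for a yellow LED) */
        double f = 0.100;            /* lens focal length, m */
        double a = 0.6439e-3;        /* #22 wire diameter, m */

        double spacing = f * lambda / a;             /* fringe spacing, m */
        printf("fringe spacing = %.3f mm\n", spacing * 1e3);
        printf("fringes across the wire = %.1f\n", a / spacing);
        return 0;
    }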
An interference problem of interest to us is what happens with a circular aperture of diameter 2a. We must now superimpose the amplitudes from each area element of the aperture, and this calls for a double integral, using polar coordinates. Such problems are called diffraction, but there is no difference in principle with interference. The amplitude from each element of area will be the same, but its phase will differ depending on the distance from the element to the screen. This integral is done in all texts on physical optics, and the result is what is most important to us. This result is I(r) = I(0)[2J1(z)/z]^2, where z = 2π[r/(2λf/d)], where d is the diameter of the aperture. We see the same factor λf/a as for the two apertures, where now a = d/2. The function J1(x) is the Bessel function of order 1, which behaves like x/2 - (1/2)(x/2)^3 + (1/12)(x/2)^5 - ... for small x. This is the famous Airy pattern, first derived by G. B. Airy, Astronomer Royal, on the basis of Fresnel's new wave theory. The intensity is strongly concentrated in the central maximum. 91% of the intensity is in the central maximum and the first bright ring surrounding it. If the intensity distribution across the disc is not uniform, the diffraction pattern will change slightly, but the general characteristics will be the same.
The Bessel function involved is zero when z = 3.83, so the radius of the first dark ring a = [(2)(3.83)/(2π)](λf/d) = 1.22λf/d. The angle (in radians) subtended by this radius at the aperture is θ = 1.22λ/d. The image of a star in a perfect telescope is an Airy disc. The d = 100" (2.54 m) Mount Wilson reflector has a Cassegrain focal length f = 40.84 m. If the effective wavelength is 575 nm, then θ = 2.76 x 10^-7 rad = 0.057". Since neither the telescope nor the seeing can be perfect, this is a limit that can only be approached more or less closely. There are now some larger telescopes, including the 200" Hale reflector and the 6 m reflector in Russia, but this does not change the situation very significantly. Photographs have been taken with the 4 m telescope at Kitt Peak that with special processing have seemed to show some details of Betelgeuse. Direct observation of stellar discs seems just beyond practicality, unfortunately.
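The diffraction limit quoted for the 100-inch telescope follows directly from θ = 1.22λ/d; the short C sketch below repeats the calculation and converts radians to arc-seconds.

    #include <stdio.h>

    int main(void)
    {
        double lambda = 575e-9;             /* effective wavelength, m */
        double d = 2.54;                    /* aperture of the 100-inch reflector, m */
        double arcsec_per_rad = 206265.0;

        double theta = 1.22 * lambda / d;   /* angular radius of first dark ring, rad */
        printf("theta = %.2e rad = %.3f arcsec\n", theta, theta * arcsec_per_rad);
        return 0;
    }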
Stellar diameters are such an important parameter in theories that some way to estimate them before they could be directly measured was sought. The usual method was to estimate the total radiated energy from the absolute magnitude. Magnitudes relative to total radiated energy are called bolometric magnitudes, and can often be obtained by adding a (negative) correction to the visual magnitude. The observed magnitudes may be affected by interstellar extinction as well. The total radiation was set equal to the known rate of radiation from a black body at the surface temperature of the star, which could be inferred from its spectrum. The effective temperature is defined in terms of Stefan's Law with unit emissivity, which bypasses the problem of emissivity without solving it. Since the emission of energy is proportional to the fourth power of the effective absolute temperature T, and the area is proportional to the square of the diameter of the star, we have the diameter D proportional to the square root of the luminosity L (an exponential function of the bolometric magnitude) and inversely proportional to the square of the effective temperature T. If D', L' and T' = 5800K are the same quantities for the Sun, then D/D' = √(L/L')(5800/T)^2. The diameter of the Sun is D' = 1.392 x 10^6 km, and its luminosity L' = 3.90 x 10^33 erg/s.
Let's consider Betelgeuse, α Orionis. This M2-spectrum red star is said to have a luminosity 13,500 times that of the sun (it is variable, but this is a typical value at maximum). Its surface temperature is about 3000K. This gives D/D' = 434, or a diameter of 6.04 x 10^8 km, or 376 x 10^6 miles. A star as bright and as cool as Betelgeuse has to be large.
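Putting the quoted luminosity and temperature into D/D' = √(L/L')(5800/T)^2 reproduces the figure of about 434 solar diameters; here is the arithmetic in C.

    #include <stdio.h>
    #include <math.h>

    int main(void)
    {
        double L_ratio  = 13500.0;       /* L / L_sun for Betelgeuse near maximum */
        double T        = 3000.0;        /* surface temperature, K */
        double T_sun    = 5800.0;
        double D_sun_km = 1.392e6;       /* solar diameter, km */

        double D_ratio = sqrt(L_ratio) * pow(T_sun / T, 2.0);
        printf("D/D_sun = %.0f, D = %.2e km\n", D_ratio, D_ratio * D_sun_km);
        return 0;
    }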
Actual light disturbances are not as simple as the sinusoidal variations with constant amplitude and phase that we have discussed above. Laser light may approximate such disturbances, but not the light from candles or stars. This light is the resultant of a multitude of amplitudes from individual atomic emissions, which take place independently. Two signals of different frequencies get out of step quickly, the more quickly the more they are different in frequency. Random phase changes between two signals of the same frequency cause interference fringes to move. The light from thermal sources--candles and stars--is in the nature of a noise signal, with a wide frequency spectrum and constantly fluctuating phase. It is no surprise that we do not observe interference fringes in the usual conditions. What is surprising is that we can devise arrangements in which fringes appear. To do this, we must arrange that the phase relations between signals coming from the same atomic sources are constant. One way of doing this was described above, where we used a pinhole to define the source, and a filter to reduce the bandwidth. When we illuminated the two apertures with this light, stable fringes were then produced.
The light from two different thermal sources cannot be made to produce fringes. Two different lasers can produce fringes, but the experiment is rather difficult even for such ideal sources. Fringes can be made in white light, but only a few colored fringes are seen near the point where the phase difference is zero.
When the light from two points can be made to form fringes, the signals are said to be coherent. When no fringes are seen, the signals are called incoherent. In the two-aperture experiment, if we make the pinhole larger and larger, the fringes lose contrast or visibility, and eventually disappear. The light falling on the apertures becomes less and less coherent as this takes place. This simple observation shows the basis for determining stellar diameters by interferometry. We only have to find the limits of the region where the light from the star is coherent, using interference, and this is directly related to the apparent angular extent of the source. We take apertures farther and farther apart, and find out where the fringes disappear.
To analyze this quantitatively, we introduce a quantity called the degree of coherence, γ12 = γ(P1,P2,τ) = γ(r12,τ). P1 and P2 are the two points considered, r12 is the distance between them, and τ is the time difference in arrival at the screen (observing position). The degree of coherence is a complex number, though we usually consider its modulus, |γ12|. The modulus of the degree of coherence varies from 0 (incoherent) to 1 (completely coherent).
To find out how γ is defined in terms of the light disturbances, we return to the two-aperture experiment. If A1 and A2 are the complex time-dependent signals from the two apertures, then the signal at the observation point Q is K1A1 + K2A2, where the K's are propagators that describe the changes in amplitude and phase as we go from an aperture to the screen. They are of the form K = ie^{2πi(t - t1)}/r, the form typically used in diffraction integrals. We will not use these expressions explicitly, so do not worry about them. The curious nonintuitive factor "i" makes the phases come out properly. To find the intensity at Q, we multiply the signal by its complex conjugate and take the time average. The intensity of beam 1 alone is I1 = <A1A1*>, with a similar expression for I2. We find I = |K1|^2 I1 + |K2|^2 I2 + 2 Re[K1K2*<A1A2*>], where Re stands for "real part." If z is a complex number, Re(z) = (z + z*)/2.
We now define Γ12 = <A1(t + τ)A2*(t)> and call it the mutual coherence of the light signal at the two points. In statistical language, it is the cross-correlation of the two signals. The complex degree of coherence is simply the normalized value of this quantity, γ12 = Γ12/√(I1I2). Using Schwartz's Inequality, we find that 0 ≤ |γ| ≤ 1.
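As a small numerical illustration of this definition (added here, not part of the original discussion), the Python sketch below models the fields at two points as partially correlated thermal noise (circular complex Gaussian, a standard simple model) and estimates γ12 as the normalized cross-correlation. The mixing parameter g is an assumed value chosen only for the demonstration.

    import numpy as np

    rng = np.random.default_rng(0)
    N = 200_000

    # Model thermal light at point P1 as circular complex Gaussian noise.
    A1 = (rng.normal(size=N) + 1j * rng.normal(size=N)) / np.sqrt(2)
    B  = (rng.normal(size=N) + 1j * rng.normal(size=N)) / np.sqrt(2)

    g = 0.6                                 # assumed degree of coherence (illustrative)
    A2 = g * A1 + np.sqrt(1 - g**2) * B     # partially correlated field at P2

    I1 = np.mean(np.abs(A1)**2)
    I2 = np.mean(np.abs(A2)**2)
    gamma12 = np.mean(A1 * np.conj(A2)) / np.sqrt(I1 * I2)
    print(f"|gamma_12| estimated = {abs(gamma12):.3f} (constructed value {g})")

The estimated modulus comes out very close to the constructed value, which is all the definition asks of it.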
Now, using the intensities at Q (including the K's) we have the interference formula I(Q) = I1 + I2 + 2√(I1I2)Re[γ12(τ)], where τ is the time difference (s2 - s1)/c between the paths P1Q and P2Q. For equal intensities I1 = I2, the visibility of the fringes is V = |γ|, so if we measure V, we then know |γ|.
To better understand what this means, let the two signals at Q be a e^(2πiνt) and b e^(2πiν(t + τ)). Then, Γ12 = ab e^(-2πiντ) and γ12 = e^(-2πiντ), so Re(γ) = cos(2πντ). This gives I(Q) = I1 + I2 + 2√(I1I2)cos(2πΔs/λ), where Δs is the path difference. This is just the formula we found earlier for the two-aperture problem. We see that the degree of coherence is unity here.
We also see that the phase of γ has a rapidly-varying part for monochromatic light, 2πντ. Ordinary narrow-band or quasimonochromatic light is very much like narrow-band noise. The frequency varies randomly over a small range centered on the average value, while the amplitude varies up and down irregularly. For such signals, it is useful to separate the rapidly varying part (at the average frequency) from the more slowly varying part. Hence, we write γ = |γ(τ)| e^(i[α(τ) - δ]), where δ is the part we have just looked at that involves the path difference, and α(τ) is the rest. The dependence on τ reflects what is often called the temporal coherence, while the dependence on the location of the source points reflects spatial coherence. In either case, we remember that coherence is the ability to produce stable interference fringes.
A signal with a frequency bandwidth Δν shows incoherence after a time interval of the order of Δτ ≈ 1/Δν. Visible light has an effective band extending from roughly 500 nm to 600 nm (not the extremes of visual sensitivity, of course), so that Δν = 1 x 10^14 Hz, and Δτ = 1 x 10^-14 s. The light has barely time to wiggle once before coherence is destroyed. It is no wonder that fringes are not seen in white light except in special cases, and even then only one or two. This is a result of temporal coherence alone. By restricting the frequency bandwidth, the coherence time Δτ can be increased to more comfortable values.
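The arithmetic behind these numbers can be checked directly; the following few lines of Python simply evaluate Δν for the 500-600 nm band and Δτ = 1/Δν.

    c = 2.998e8                  # speed of light, m/s
    nu_hi = c / 500e-9           # frequency at 500 nm, Hz
    nu_lo = c / 600e-9           # frequency at 600 nm, Hz
    delta_nu = nu_hi - nu_lo     # about 1e14 Hz
    delta_tau = 1.0 / delta_nu   # about 1e-14 s
    print(f"delta_nu = {delta_nu:.2e} Hz, coherence time = {delta_tau:.2e} s")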
This theorem, the van Cittert-Zernike theorem, tells us the dependence of the coherence on distance for an extended source, such as a pinhole or a star. We will apply it only to a circular source of uniform brightness, but it can also be used for much more general sources.
The theorem states that: "The complex degree of coherence between P1 and P2 in a plane illuminated by an extended quasi-monochromatic source is equal to the normalized complex amplitude in the diffraction pattern centered on P2 that would be obtained by replacing the source by an aperture of the same size and illuminating it by a spherical wave converging on P2, the amplitude distribution proportional to the intensity distribution across the source." We have already discussed diffraction from a circular aperture, which is precisely the diffraction pattern we require in the case of a uniformly bright disc. The coherence is 1 when P1 and P2 coincide, and decreases according to J1(z)/z as P1 moves outwards, becoming zero where the diffraction pattern has its first dark ring.
The distance for γ = 0 is, therefore, given by a = 1.22λ/θ, where θ is the whole angle subtended by the source at P2, measured in radians. If we take the effective wavelength as 575 nm, and measure the angle in seconds, we find a = 0.145/θ" m. This holds for pinholes and stars. The smaller the angle subtended by the source, the larger the radius of coherence a. A pinhole 0.1 mm in diameter subtends an angle of 69" at a screen 300 mm distant, so a = 2.1 mm. The Sun or Moon subtend an angle of about 0.5° or 1800" at the surface of the Earth, so a = 0.08 mm. Venus subtends an angle between 9" and 60" (its distance from the Earth varies greatly), so a = 2.3 mm to 14.4 mm. Venus is often not a disc, especially when closest to the Earth, but this gives an idea of its radius of coherence. Jupiter subtends an angle about the same as the maximum for Venus, so the radius of coherence of its light is 2 mm or so. On the other hand, Betelgeuse subtends an angle of 0.047" when it is at its largest, so a = 3.1 m or about 10 ft. The discs of most stars subtend much smaller angles, so for all stars a > 3 m, and often hundreds of metres.
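A short script (an illustration added here, using the a = 1.22λ/θ relation just quoted, at 575 nm) reproduces the coherence radii for the pinhole, the Sun or Moon, and Betelgeuse; small differences from the quoted values come only from rounding of the angles.

    import math

    lam = 575e-9                            # effective wavelength, m
    arcsec = math.pi / (180 * 3600)         # radians per arcsecond

    def coherence_radius(theta_arcsec):
        return 1.22 * lam / (theta_arcsec * arcsec)   # metres

    for name, theta in [("0.1 mm pinhole at 300 mm", 69),
                        ("Sun or Moon", 1800),
                        ("Betelgeuse", 0.047)]:
        print(f"{name:26s} theta = {theta:8.3f} arcsec, a = {coherence_radius(theta):.3g} m")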
We now have all the theory we need to understand stellar interferometry. It is clear that we are looking for the radius at which the illumination has zero coherence, and this radius gives us the angular diameter. The linear diameter is obtained by multiplying by the distance. If we knew the diameter to start with, inverting this method would give us the distance.
One subject we can take up before describing stellar interferometers is the familiar phenomenon of twinkling, or scintillation of celestial bodies. This is caused by slight variations in the density of the atmosphere due to turbulence, wind shear and other causes, and is closely related to the "heat shimmer" seen on summer days over hot surfaces. The stars seem to jitter in position, become brighter or dimmer, and show flashes of color. The amount of scintillation varies greatly depending on elevation and weather, and sometimes is almost absent. Scintillation is the cause of good or poor telescopic "seeing." When the seeing is bad, stellar images wobble and jump, and the resolving power is reduced.
It is often observed that the planets do not scintillate to the same degree as the stars, but the effect is variable. Venus scintillates and shows flashes of color when a thin crescent and closest to us, but mostly the planets show a serene and calm face even when nearby stars are twinkling. The reason for this difference is often ascribed to the fact that the planets show an apparent disc, while the stars do not. However, even the edges of the planet images do not wiggle and jump, so this is probably not the reason for the difference.
It is much more reasonable that since a planet's light is incoherent over any but a very small distance, interference effects do not occur. The image may still move slightly depending on its refraction by changes in density. With a star, however, the area of coherence may include whole turbulence cells, and the randomly deflected light may exhibit interference, causing the variations in brightness and the colors. So scintillation does depend on the apparent size of the body, but in a more esoteric way. Exactly the same effects occur for terrestrial light sources, though they must be quite distant to create large areas of coherence.
A. A. Michelson was led to the stellar interferometer through his experiences with using his original interferometer, now named after him, with light consisting of a narrow line or lines, such as sodium light with its D line doublet. The fringes go through cycles of visibility as the path length is varied, and from the variations the structure of the line can be unraveled. Doing this with sodium light was a popular laboratory exercise in optics.
If the end of the telescope tube is closed with a mask with two apertures, fringes are produced at the focus when the two apertures are close enough together, showing that the light is coherent over that separation. With the Sun, or Jupiter, fringes would not appear at all because of the small coherence radius. With stars, however, the decrease in fringe visibility would be evident, and from the separation of the apertures for zero visibility the angular diameter could be found.
No telescope was large enough in aperture to give γ = 0 for even the stars with the largest angular diameters, even giant Betelgeuse, so light had to be collected from a greater separation by means of mirrors mounted on a transverse beam. A 20 ft beam was selected for the initial experiments, which should be sufficient for measuring Betelgeuse. The telescope selected was the 100-inch reflector at Mount Wilson, not because of its large aperture, but because of its mechanical stability. A 20-ft steel beam of two 10-inch channels is certainly not light, although its weight was reduced as much as possible by removing superfluous metal. The two outer mirrors directed the light to two central mirrors 45 in. apart, which then sent the light toward the paraboloidal mirror.
The 100-inch (2.54 m) telescope could be used at a prime (Newtonian) focus at a focal length of 45 ft (13.72 m), the beam diverted to the side by a plane mirror near the top, or at a Cassegrain focus at a focal length of 134 ft. (40.84 m) after reflection from a hyperboloidal mirror also near the top, and diversion to the side near the bottom of the telescope. The latter was chosen to give greater magnification, 1600X with a 1" efl eyepiece. The average wavelength for Betelgeuse was taken as 575 nm. With the 45 in. separation of the apertures and 40.84 m focal length, the fringe spacing is 0.02 mm, as you can easily check from these figures. The fringes were observed visually, and were easily seen, even when the image was unsteady but could still be followed by the eye.
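The "easy check" invited above uses the standard two-beam fringe spacing λf/d; here it is as a small Python snippet with the figures just given.

    lam = 575e-9          # average wavelength, m
    f   = 40.84           # Cassegrain focal length, m
    d   = 45 * 0.0254     # separation of the inner mirrors, m
    spacing = lam * f / d
    print(f"fringe spacing = {spacing*1e3:.3f} mm")   # about 0.02 mm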
The path length from each of the outer mirrors to the center mirrors had to be carefully adjusted for equality, because of the limited temporal coherence of the light (as mentioned above; these are white-light fringes). This was done by glass wedges in one beam. A direct-vision prism to observe the fringes in a restricted bandwidth made them easier to locate. The interferometer was very difficult to align, but satisfactory fringes were seen. When the outer mirrors were moved to cause the fringes in the image of Betelgeuse to disappear, it still had to be verified that other stars gave fringes under the same conditions, in case the absence of fringes was due to some other cause. One star chosen for this test was Sirius, which gave prominent fringes.
For Betelgeuse, a separation of 121 in (3.073 m) caused disappearance of the fringes, so the angular diameter was 0.047". At the time, the parallax of Betelgeuse was thought to be 0.018", but it now seems to be closer to 0.0055", for a distance of 182 pc or 593 l.y. From these numbers we find the diameter of Betelgeuse to be 1.28 x 10^9 km, or 797 x 10^6 miles. This is about twice as large as the diameter estimated from luminosity, which implies that the star is cooler than expected, or its emissivity is lower for some reason. It is often stated that Betelgeuse would fit within the orbit of Mars. This was from the older figures; it actually would extend halfway to Jupiter, well within the asteroid belt. The angular diameter of Betelgeuse varies from 0.047" at maximum down to about 0.034" at minimum, since the star pulsates irregularly.
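For readers who want to retrace this arithmetic, the sketch below converts the angular diameter and the modern parallax quoted above into a linear diameter; the small difference from 797 million miles is just rounding.

    import math

    arcsec = math.pi / (180 * 3600)         # radians per arcsecond
    pc_km  = 3.086e13                       # kilometres per parsec

    theta    = 0.047 * arcsec               # angular diameter, rad
    distance = (1 / 0.0055) * pc_km         # km, from the parallax
    D_km = theta * distance
    print(f"distance = {1/0.0055:.0f} pc, diameter = {D_km:.2e} km = {D_km/1.609e6:.0f} million miles")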
The verification of the large size of Betelgeuse was one of the principal results of the 20-ft interferometer. This makes its average density quite small, but of course it becomes more concentrated toward the center where the thermonuclear reactions are taking place. The star long ago exhausted the hydrogen in its core, and began burning hydrogen in an expanding shell as it swelled and cooled to a red supergiant. Now the pulsations show that it is beginning to light its helium fire at the center, which is fighting with the hydrogen reactions further up. The helium will be consumed in a relatively short time, and the star will shrink and cool to a white dwarf. That seems to be the history according to the stellar theorists, at least.
The diameters of seven stars in all were measured by the 20-ft interferometer, down to an angular diameter of 0.020", where some extrapolation had to be made. All these stars were red supergiants with spectra from K1 to M6, including the remarkable ο Ceti, Mira, that pulsates more deeply and regularly than Betelgeuse. The angular diameter of Mira at maximum was 0.047", the same as Betelgeuse's, but it is five times closer, so its linear diameter is about 160 x 10^6 miles, and Venus could revolve within it.
In hopes of measuring smaller diameters, perhaps even those of main-sequence stars, a larger interferometer was designed and built by Pease, with a 50-foot beam, erected at Mount Wilson. This instrument was very difficult to operate, but the measurements on Betelgeuse and Arcturus agreed with those from the earlier instrument, while those on Antares differed significantly. Little was added by the new instrument, but it showed that a limit had been reached, largely because of the difficulty of keeping the two paths equal and the bad effects of scintillation. Modern techniques might overcome these limitations to some degree, but no great improvement is to be expected.
The correlation or intensity stellar interferometer was invented in about 1954 by two remarkable investigators, R. Hanbury Brown and R. Q. Twiss. A large interferometer was completed in 1965 at Narrabri, Australia, and by the end of the decade had measured the angular diameters of more than 20 stars, including main sequence stars, down to magnitude +2.0. This interferometer was equivalent to a 617-foot Michelson stellar interferometer, was much easier to use, and gave repeatable, accurate results.
Hanbury Brown was a radio astronomer at the University of Manchester's Jodrell Bank observatory, and Twiss was at the U.K. Services Electronics Research Laboratory at Baldock. They united the resources necessary to conceive and execute the project between them. They seem to have been not very much appreciated in their native country, but prospered in Australia, which offered them the opportunity to develop their ideas.
The intensity interferometer was introduced as a new type of interferometer for radio astronomy, but it was soon realized that it could be applied to the problem of stellar angular diameters as a successor to the Michelson interferometer of thirty years before. It works on the same fundamental principle of determining the coherence of starlight as a function of the distance between two points, but the means of finding the coherence is totally different, and relies on some esoteric properties of quasi-monochromatic light. The diameter of Sirius, the first main-sequence star whose diameter was measured, was determined in preliminary tests at Jodrell Bank in 1956, under difficult observing conditions. This was not an accurate result, but it was a milestone. To explain the interferometer, the best way to start is to look at its construction.
A diagram of the Narrabri interferometer is shown at the right. The two mirrors direct the starlight to the photomultipliers PM (RCA Type 8575, and others). Each mirror is a mosaic of 252 small hexagonal mirrors, 38 cm over flats, with a three-point support and an electrical heater to eliminate condensation. They are aluminized and coated with SiO. The focal lengths are selected from the range naturally produced by the manufacturing processes to make the large mirrors approximate paraboloids. Great accuracy is not necessary, since a good image is not required, only that the starlight be directed onto the photocathodes. The starlight is filtered through a narrow-band interference filter. The most-used filter is 443 nm ± 5 nm. The photocathode is 42 mm diameter, and the stellar image is about 25 x 25 mm. This is all of the optical part of the interferometer; all the rest is electronics.
The mirrors are mounted on two carriages that run on a circular railway of 188 m diameter and 5.5 m gauge. At the southern end of the circle is the garage where the mirrors spend the day and maintenance can be carried out. A central cabin is connected to the carriages by wires from a tower. This cabin contains the controls and the electronics. Note that the separation of the mirrors can be varied from 10 m up to 188 m. The mirrors rotate on three axes to follow the star. One of the small mirrors in each large mirror is devoted to the star-guiding system, that consists of a photocell and a chopper. This keeps the mirrors locked on the star under study, without moving the mirror carriages. The light-gathering power of the 6.5 m diameter mirrors is much greater than that of the small mirrors in the Michelson stellar interferometer, allowing the Narrabri interferometer to operate down to magnitude +2.0. The available baseline distances permit measurements of angular diameters from 0.011" to 0.0006".
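The quoted range of measurable angular diameters follows directly from θ = 1.22λ/d at the 443 nm working wavelength and the minimum and maximum mirror separations; a quick check added here in Python:

    import math

    lam = 443e-9                            # working wavelength, m
    arcsec = math.pi / (180 * 3600)

    for baseline in (10.0, 188.0):          # smallest and largest separations, m
        theta = 1.22 * lam / baseline / arcsec
        print(f"baseline {baseline:5.0f} m -> theta at first zero of coherence = {theta:.4f} arcsec")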
The photocurrent, which is about 100 μA, is a measure of the total intensity, required for normalizing the correlation coefficient. This is measured and sent to the data-handling devices (connections not shown). The photocurrent is sent to a wide-band amplifier, then through a phase-reversing switch, and then through a wide-band filter that passes 10-110 MHz. This bandwidth excludes scintillation frequencies, eliminating their effects. The signals from the two photomultipliers then are multiplied in the correlator. The phase of one of the photocurrents is reversed at a 5 kHz rate, which makes the correlation signal change sign at the same rate, but leaving the noise unchanged. A tuned 5 kHz amplifier at the output of the multiplier selects just this signal, which is then synchronously rectified. This is a standard method of increasing the signal-to-noise ratio in situations such as these. The signal-to-noise ratio in the photocurrent is about 1 to 10^5. The other channel is reversed at a much slower rate, once every 10 seconds, and the correlation for each state is separately recorded. When these values are subtracted, the changes in gain and other effects are eliminated, and the result is the desired correlation.
The electrical bandwidth of 100 MHz implies that the signal paths from the photomultipliers to the correlator must be equal to within about 1 ns to avoid loss of correlation due to temporal coherence. This seems like a very tight requirement at first view, but it is much easier to equalize electrical transmission lines than optical paths. The 1 ns corresponds to about 1 ft in length, which now does not seem as bad. In the case of the Michelson stellar interferometer, the paths must be equal to within a wavelength or so, and this was the most important factor limiting its size.
Small lamps in the photomultiplier housings can be turned on when the shutters are closed. These lamps give uncorrelated light, so any correlation that is recorded when they are on is false. In another test, perfectly correlated noise is supplied to both channels from a wide-band noise generator for measuring the gain of the correlator. These and other tests are carried out during an observing session. The correlator is the most critical part of the interferometer, and most of the effort went in to making it as accurate and reliable as possible.
Skylight is allowed for by measuring the intensity and correlation with the mirrors pointing to the sky near the star. One contribution to the correlation was anticipated, that of the Cherenkov radiation from cosmic rays. This is a faint blue streak of light (that both mirrors would see simultaneously, and would thus correlate) that is produced when the cosmic ray is moving at greater than the speed of light (c/n) in the atmosphere. This proved to be unobservable. Meteors would have the same effect, but they are so rare that this is ruled out. Observations were not carried out when the Moon increased the skylight to an unacceptable level.
The theory of how the correlation in this case is related to the degree of coherence is similar to what we explained in connection with the Michelson instrument, but happens to be more involved, so only the idea will be sketched here. The filtered starlight is a quasi-monochromatic signal, in which the closely-spaced frequency components can be considered to beat against one another to create fluctuations in intensity, <AA*>. This is a general and familiar aspect of narrow-band noise. There are also accompanying fluctuations in phase, but these are not important here. The correlation measured in the intensity interferometer is proportional to <ΔI1ΔI2>, where ΔI = I - Iav is the fluctuation in I. If expressions for the quantities are inserted in terms of the amplitudes, it is found that the normalized correlation is proportional to |γ12|^2, the square of the fringe visibility in the Michelson case. The phase information is gone, but the magnitude of the degree of coherence is still there, and that is enough for the measurement of diameters.
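This proportionality can be demonstrated with a small Monte-Carlo sketch (an illustration added here, not a model of the Narrabri signal chain): for thermal light modeled as circular Gaussian noise, the correlation coefficient of the intensity fluctuations at two points comes out equal to |γ|^2. The coherence values in the loop are assumed for the demonstration.

    import numpy as np

    rng = np.random.default_rng(1)
    N = 500_000

    def thermal_field(n):
        # circular complex Gaussian noise with unit mean intensity
        return (rng.normal(size=n) + 1j * rng.normal(size=n)) / np.sqrt(2)

    A1 = thermal_field(N)
    for g in (0.0, 0.5, 1.0):                       # assumed |gamma| values
        A2 = g * A1 + np.sqrt(1 - g**2) * thermal_field(N)
        I1, I2 = np.abs(A1)**2, np.abs(A2)**2
        dI1, dI2 = I1 - I1.mean(), I2 - I2.mean()
        corr = np.mean(dI1 * dI2) / np.sqrt(np.mean(dI1**2) * np.mean(dI2**2))
        print(f"|gamma| = {g:.1f}: intensity correlation = {corr:.3f} (|gamma|^2 = {g**2:.2f})")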
Advantages of the Brown and Twiss interferometer include: larger light-gathering capacity permitting use on dimmer stars; ease of adjusting the time delays of the channels to equality; electronic instead of visual observation; immunity to scintillation; much larger practical separations; and the elimination of the need for a large, sturdy telescope as a mount.
The photoelectric effect has long been evidence for what has been called the "particle" nature of light. Einstein demonstrated that the probability of emission of a photoelectron was proportional to the average intensity of the light, what we have represented by <AA*>, that the kinetic energy of the emitted electron was E = hν - φ, where φ is the work function, and that the emission of photoelectrons occurred instantaneously, however feeble the illumination. It was seen as a kind of collision of a "photon" with an electron, ejecting the electron as the photon was absorbed. A photocathode is called a square law detector because of its dependence on the square of the amplitude. Actually, all this is perfectly well described in quantum mechanics, and there are no surprises. What is incorrect is thinking of photons as classical particles (even classical particles obeying quantum mechanics) instead of constructs reflecting the nature of quantum transitions. Those who thought of photons as marbles, and there were many, thought Brown and Twiss were full of rubbish, since whether the light was coherent or not, the random emission of electrons by "photon collisions" would erase all correlations. The photocurrents of two separate detectors would be uncorrelated whether they were illuminated coherently or incoherently. One would simply have the well-known statistics of photoelectrons. If Brown and Twiss were correct, then quantum mechanics "would be in need of thorough revision," or so they thought.
The experiment that Brown and Twiss performed to verify that correlations could be measured between the outputs of two photomultipliers is shown at the right. The source was a mercury arc, focused on a rectangular aperture, 0.13 x 0.15 mm. The 435.8 nm line was isolated by filters. The photocathodes were 2.65 m from the source, and masked by a 9.0 x 8.5 mm aperture. Since the illumination had reasonable temporal coherence, the two light paths needed to be made equal only to within about 1 cm. A horizontal slide allowed one photomultiplier to be moved so that the cathode apertures could be superimposed or separated as seen from the source, varying the degree of coherence from 1 to 0. The electrical bandwidth, determined by the amplifiers, was 3-27 MHz. The output of the multiplier was integrated for periods of about one hour. If repeated today, the experiment could not be done with a laser, because the source incoherence is essential to the effect. The experiment clearly showed that correlation was observed when the cathodes were superimposed, which disappeared when they were separated.
A similar experiment was performed by Brannen and Ferguson in which the coincidences of photoelectrons emitted from two cathodes were observed. No extra coincidences, or correlation, were observed when the cathodes were illuminated coherently, and this, it seemed, proved that the Brown and Twiss interferometer could not work (although, of course, it confounded them by working anyway). Some thought maybe light wasn't described well by quantum mechanics at all, and that the classical theory predicted what was observed. This is very nearly true, since the amplitudes of wave theory include a lot of quantum mechanical characteristics by their very nature. However, light is quite properly and correctly described by quantum mechanics when it is done properly, and not by naive intuition.
With the concurrence of E. M. Purcell, Brown and Twiss showed that the coincidence experiment was much too insensitive to show the effect as the experiment was designed, and instead would have required years of data to show any correlation by photon counting. They later demonstrated correlations using photon counting, resolving the problem. Of course, their method using electronic correlation, as in the stellar interferometer, was much more efficient and gave much better results than photon counting.
Information on this interesting controversy can be found in the References.
The best test of the interferometer would be the measurement of a star of known diameter. However, there are no such stars. Therefore, the only tests are the consistency of repeated measurements. The interferometer measures the angular diameter directly, and the linear diameter depends on knowing the distance, which in many cases is uncertain. All astronomical data is subject to error, revision and misinterpretation, though the current quoted figures always look firm and reliable enough.
The problem with using a terrestrial source for a test is seen from the fact that a source of diameter a mm has an angular diameter of 0.2a" at a distance of 1 km. The maximum angular diameter that the Narrabri interferometer can measure is 0.011", so a source diameter of only 0.05 mm would be required at 1 km, or 5 mm at 100 km. It would be very difficult to push enough light to be seen through such a small aperture!
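The arithmetic is quickly confirmed; the lines below compute the largest source that would still present the maximum measurable angular diameter of 0.011" at 1 km and at 100 km.

    import math

    arcsec = math.pi / (180 * 3600)
    theta = 0.011 * arcsec                 # largest measurable angular diameter, rad
    for L in (1e3, 1e5):                   # distances of 1 km and 100 km, m
        d = theta * L
        print(f"at {L/1e3:.0f} km the source could be only {d*1e3:.2f} mm across")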
The angular diameter can be used directly to find the exitance (energy emitted by the stellar surface per unit area) without knowing the distance, and the exitance can be used to find the temperature. Therefore, the interferometer data has been used to refine the temperature scale of the stars, which previously was estimated only from the spectrum. The monochromatic flux F at the surface of a star is related to the monochromatic flux f received outside the Earth's atmosphere by F = 4f/θ^2, where θ is the angular diameter, as illustrated in the diagram. This does not include corrections for interstellar extinction. Then, ∫F dλ = σTe^4, where σ is Stefan's constant.
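Rearranged, these two relations give Te = (4f/σθ^2)^(1/4), with f the bolometric flux received at the Earth and θ in radians. The sketch below implements this rearrangement; the numbers in the example call are placeholders chosen only to show the scale of the result, not measured values.

    import math

    sigma  = 5.670e-8                      # Stefan's constant, W m^-2 K^-4
    arcsec = math.pi / (180 * 3600)

    def effective_temperature(f_bol, theta_arcsec):
        theta = theta_arcsec * arcsec      # angular diameter in radians
        return (4 * f_bol / (sigma * theta**2)) ** 0.25

    # Placeholder values for illustration only: f = 1e-7 W/m^2, theta = 0.005 arcsec.
    print(f"Te = {effective_temperature(1e-7, 0.005):.0f} K")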
By 1967, measurements had been made on 15 stars from spectral type B0 to F5, including a number of main sequence stars, among them Regulus (3.8), Sirius (1.76), Vega (3.03), Fomalhaut (1.56), Altair (1.65) and Procyon (2.17), for which reliable parallaxes were known. The number in parentheses is the diameter in solar diameters. Measurements could not be made on Betelgeuse, since the mirrors could not be brought closer than 10 m apart, and besides the 6.5 m mirrors would themselves resolve the star, reducing the correlation to a trifle.
E. Hecht and A. Zajac, Optics (Reading, MA: Addison-Wesley, 1974). Section 12.4 covers the application of coherence theory to stellar interferometry. This is also a good reference for the other optical matters discussed above.
M. Born and E. Wolf, Principles of Optics (London: Pergamon Press, 1959). Chapter X treats partial coherence. Section 10.4 is especially relevant to our subject. The Michelson stellar interferometer is covered in Section 7.3.6, pp 270-276, with a mention of the intensity interferometer, which was quite new when this book was written.
A. A. Michelson and F. G. Pease, "Measurement of the Diameter of α Orionis With The Interferometer," Astrophysical J. 53, 249-259 (1921).
R. Hanbury Brown and R. Q. Twiss, "A New Type of Interferometer for Use in Radio Astronomy," Philosophical Magazine (7)45, 663 (1954).
R. Hanbury Brown and R. Q. Twiss, "A Test of a New Type of Stellar Interferometer on Sirius," Nature 178, 1046-1048 (1956).
E. Brannen and H. I. S. Ferguson, "The Question of Correlation Between Photons in Coherent Light Rays," Nature 178, 481-482 (1956).
R. Hanbury Brown and R. Q. Twiss, "The Question of Correlation Between Photons in Coherent Light Rays," Nature 178, 1447-1448 (1956).
E. M. Purcell, (Same title as previous reference) Nature 178, 1449-1450 (1956).
R. Q Twiss and A. G. Little, and R. Hanbury Brown, "Correlation Between Photons, in Coherent Beams of Light, Detected by a Coincidence Counting Technique," Nature 180, 324-326 (1957).
R. Hanbury Brown, "The Stellar Interferometer at Narrabri Observatory," Sky and Telescope, 27(2) August 1964, 64-69.
R. Hanbury Brown, J. Davis and L. R. Allen, "The Stellar Interferometer at Narrabri Observatory, I and II," Monthly Notices of the Royal Astronomical Society, 137, 375-417 (1967).
R. Hanbury Brown, "Measurement of Stellar Diameters," Annual Reviews of Astronomy and Astrophysics, 1968, 13-38. With bibliography.
_________, "Star Sizes Measured," Sky and Telescope 38(3), March 1968, 1 and 155.
Composed by J. B. Calvert
Created 25 September 2002
Last revised 19 November 2008
The Nullification Crisis was a sectional crisis during the presidency of Andrew Jackson created by South Carolina's 1832 Ordinance of Nullification. This ordinance declared by the power of the State that the federal Tariffs of 1828 and 1832 were unconstitutional and therefore null and void within the sovereign boundaries of South Carolina. The controversial and highly protective Tariff of 1828 (known to its detractors as the "Tariff of Abominations") was enacted into law during the presidency of John Quincy Adams. The tariff was opposed in the South and parts of New England. Its opponents expected that the election of Jackson as President would result in the tariff being significantly reduced.
The nation had suffered an economic downturn throughout the 1820s, and South Carolina was particularly affected. Many South Carolina politicians blamed the change in fortunes on the national tariff policy that developed after the War of 1812 to promote American manufacturing over its European competition. By 1828 South Carolina state politics increasingly organized around the tariff issue. When the Jackson administration failed to take any actions to address their concerns, the most radical faction in the state began to advocate that the state itself declare the tariff null and void within South Carolina. In Washington, an open split on the issue occurred between Jackson and Vice President John C. Calhoun, the most effective proponent of the constitutional theory of state nullification.
On July 14, 1832, after Calhoun had resigned the Vice Presidency in order to run for the Senate where he could more effectively defend nullification, Jackson signed into law the Tariff of 1832. This compromise tariff received the support of most northerners and half of the southerners in Congress. The reductions were too little for South Carolina, and in November 1832 a state convention declared that the tariffs of both 1828 and 1832 were unconstitutional and unenforceable in South Carolina after February 1, 1833. Military preparations to resist anticipated federal enforcement were initiated by the state. In late February both a Force Bill, authorizing the President to use military forces against South Carolina, and a new negotiated tariff satisfactory to South Carolina were passed by Congress. The South Carolina convention reconvened and repealed its Nullification Ordinance on March 11, 1833.
The crisis was over, and both sides could find reasons to claim victory. The tariff rates were reduced and stayed low to the satisfaction of the South, but the states’ rights doctrine of nullification remained controversial. By the 1850s, the expansion of slavery into the western territories and the threat of the Slave Power had become the central issues in the nation.
Since the Nullification Crisis, the doctrine of states' rights has been asserted again by opponents of the Fugitive Slave Act of 1850, proponents of California's Specific Contract Act of 1863, (which nullified the Legal Tender Act of 1862) opponents of Federal acts prohibiting the sale and possession of marijuana in the first decade of the 21st century, and opponents of implementation of laws and regulations pertaining to firearms from the late 1900s up to 2013.
Background (1787-1816)
The historian Richard E. Ellis wrote:
“By creating a national government with the authority to act directly upon individuals, by denying to the state many of the prerogatives that they formerly had, and by leaving open to the central government the possibility of claiming for itself many powers not explicitly assigned to it, the Constitution and Bill of Rights as finally ratified substantially increased the strength of the central government at the expense of the states.”
The extent of this change and the problem of the actual distribution of powers between state and the federal governments would be a matter of political and ideological discussion up to the Civil War and beyond. In the early 1790s the debate centered on Alexander Hamilton's nationalistic financial program versus Jefferson's democratic and agrarian program, a conflict that led to the formation of two opposing national political parties. Later in the decade the Alien and Sedition Acts led to the states' rights position being articulated in the Kentucky and Virginia Resolutions. The Kentucky Resolutions, written by Thomas Jefferson, contained the following, which has often been cited as a justification for both nullification and secession:
“… that in cases of an abuse of the delegated powers, the members of the general government, being chosen by the people, a change by the people would be the constitutional remedy; but, where powers are assumed which have not been delegated, a nullification of the act is the rightful remedy: that every State has a natural right in cases not within the compact, (casus non fœderis) to nullify of their own authority all assumptions of power by others within their limits: that without this right, they would be under the dominion, absolute and unlimited, of whosoever might exercise this right of judgment for them: that nevertheless, this commonwealth, from motives of regard and respect for its co-States, has wished to communicate with them on the subject: that with them alone it is proper to communicate, they alone being parties to the compact, and solely authorized to judge in the last resort of the powers exercised under it… .”
The Virginia Resolutions, written by James Madison, hold a similar argument:
“The resolutions, having taken this view of the Federal compact, proceed to infer that, in cases of a deliberate, palpable, and dangerous exercise of other powers, not granted by the said compact, the States, who are parties thereto, have the right, and are in duty bound to interpose to arrest the evil, and for maintaining, within their respective limits, the authorities, rights, and liberties appertaining to them. ...The Constitution of the United States was formed by the sanction of the States, given by each in its sovereign capacity. It adds to the stability and dignity, as well as to the authority of the Constitution, that it rests on this solid foundation. The States, then, being parties to the constitutional compact, and in their sovereign capacity, it follows of necessity that there can be no tribunal above their authority to decide, in the last resort, whether the compact made by them be violated; and, consequently, as parties to it, they must themselves decide, in the last resort, such questions as may be of sufficient magnitude to require their interposition.”
Historians differ over the extent to which either resolution advocated the doctrine of nullification. Historian Lance Banning wrote, “The legislators of Kentucky (or more likely, John Breckinridge, the Kentucky legislator who sponsored the resolution) deleted Jefferson's suggestion that the rightful remedy for federal usurpations was a "nullification" of such acts by each state acting on its own to prevent their operation within its respective borders. Rather than suggesting individual, although concerted, measures of this sort, Kentucky was content to ask its sisters to unite in declarations that the acts were "void and of no force", and in "requesting their repeal" at the succeeding session of the Congress.” The key sentence, and the word "nullification", appeared in supplementary Resolutions passed by Kentucky in 1799.
Madison's judgment is clearer. He was chairman of a committee of the Virginia Legislature which issued a book-length Report on the Resolutions of 1798, published in 1800 after they had been decried by several states. This asserted that the state did not claim legal force. "The declarations in such cases are expressions of opinion, unaccompanied by other effect than what they may produce upon opinion, by exciting reflection. The opinions of the judiciary, on the other hand, are carried into immediate effect by force." If the states collectively agreed in their declarations, there were several methods by which it might prevail, from persuading Congress to repeal the unconstitutional law, to calling a constitutional convention, as two-thirds of the states may. When, at the time of the Nullification Crisis, he was presented with the Kentucky resolutions of 1799, he argued that the resolutions themselves were not Jefferson's words, and that Jefferson meant this not as a constitutional but as a revolutionary right.
Madison biographer Ralph Ketcham wrote:
“Though Madison agreed entirely with the specific condemnation of the Alien and Sedition Acts, with the concept of the limited delegated power of the general government, and even with the proposition that laws contrary to the Constitution were illegal, he drew back from the declaration that each state legislature had the power to act within its borders against the authority of the general government to oppose laws the legislature deemed unconstitutional.”
Historian Sean Wilentz explains the widespread opposition to these resolutions:
“Several states followed Maryland's House of Delegates in rejecting the idea that any state could, by legislative action, even claim that a federal law was unconstitutional, and suggested that any effort to do so was treasonous. A few northern states, including Massachusetts, denied the powers claimed by Kentucky and Virginia and insisted that the Sedition law was perfectly constitutional .... Ten state legislatures with heavy Federalist majorities from around the country censured Kentucky and Virginia for usurping powers that supposedly belonged to the federal judiciary. Northern Republicans supported the resolutions' objections to the alien and sedition acts, but opposed the idea of state review of federal laws. Southern Republicans outside Virginia and Kentucky were eloquently silent about the matter, and no southern legislature heeded the call to battle.”
The election of 1800 was a turning point in national politics as the Federalists were replaced by the Democratic-Republican Party led by Thomas Jefferson and James Madison, the authors of the Kentucky and Virginia Resolutions. But, the four presidential terms spanning the period from 1800 to 1817 “did little to advance the cause of states’ rights and much to weaken it.” Over Jefferson’s opposition, the power of the federal judiciary, led by Federalist Chief Justice John Marshall, increased. Jefferson expanded federal powers with the acquisition of the Louisiana Territory and his use of a national embargo designed to prevent involvement in a European war. Madison in 1809 used national troops to enforce a Supreme Court decision in Pennsylvania, appointed an “extreme nationalist” in Joseph Story to the Supreme Court, signed the bill creating the Second Bank of the United States, and called for a constitutional amendment to promote internal improvements.
Opposition to the War of 1812 was centered in New England. Delegates to a convention in Hartford, Connecticut met in December 1814 to consider a New England response to Madison’s war policy. The debate allowed many radicals to argue the cause of states’ rights and state sovereignty. In the end, moderate voices dominated and the final product was not secession or nullification, but a series of proposed constitutional amendments. Identifying the South’s domination of the government as the cause of much of their problems, the proposed amendments included “the repeal of the three-fifths clause, a requirement that two-thirds of both houses of Congress agree before any new state could be admitted to the Union, limits on the length of embargoes, and the outlawing of the election of a president from the same state to successive terms, clearly aimed at the Virginians.” The war was over before the proposals were submitted to President Madison.
After the conclusion of the War of 1812, Sean Wilentz notes:
“Madison’s speech [his 1815 annual message to Congress] affirmed that the war had reinforced the evolution of mainstream Republicanism, moving it further away from its original and localist assumptions. The war’s immense strain on the treasury led to new calls from nationalist Republicans for a national bank. The difficulties in moving and supplying troops exposed the wretchedness of the country’s transportation links, and the need for extensive new roads and canals. A boom in American manufacturing during the prolonged cessation of trade with Britain created an entirely new class of enterprisers, most of them tied politically to the Republicans, who might not survive without tariff protection. More broadly, the war reinforced feelings of national identity and connection.”
This spirit of nationalism was linked to the tremendous growth and economic prosperity of this post war era. However in 1819 the nation suffered its first financial panic and the 1820s turned out to be a decade of political turmoil that again led to fierce debates over competing views of the exact nature of American federalism. The “extreme democratic and agrarian rhetoric” that had been so effective in 1798 led to renewed attacks on the “numerous market-oriented enterprises, particularly banks, corporations, creditors, and absentee landholders”.
Tariffs (1816-1828)
The Tariff of 1816 had some protective features, and it received support throughout the nation, including that of John C. Calhoun and fellow South Carolinian William Lowndes. The first explicitly protective tariff linked to a specific program of internal improvements was the Tariff of 1824. Sponsored by Henry Clay, this tariff provided a general level of protection at 35% ad valorem (compared to 25% with the 1816 act) and hiked duties on iron, woolens, cotton, hemp, and wool and cotton bagging. The bill barely passed the federal House of Representatives by a vote of 107 to 102. The Middle states and Northwest supported the bill, the South and Southwest opposed it, and New England split its vote with a majority opposing it. In the Senate the bill, with the support of Tennessee Senator Andrew Jackson, passed by four votes, and President James Monroe, the Virginia heir to the Jefferson-Madison control of the White House, signed the bill on March 25, 1824. Daniel Webster of Massachusetts led the New England opposition to this tariff.
Protest against the prospect and the constitutionality of higher tariffs began in 1826 and 1827 with William Branch Giles, who had the Virginia legislature pass resolutions denying the power of Congress to pass protective tariffs, citing the Virginia Resolutions of 1798 and James Madison's 1800 defense of them. Madison denied both the appeal to nullification and the unconstitutionality; he had always held that the power to regulate commerce included protection. Jefferson had, at the end of his life, written against protective tariffs.
The Tariff of 1828 was largely the work of Martin Van Buren (although Silas Wright Jr. of New York prepared the main provisions) and was partly a political ploy to elect Andrew Jackson president. Van Buren calculated that the South would vote for Jackson regardless of the issues so he ignored their interests in drafting the bill. New England, he thought, was just as likely to support the incumbent John Quincy Adams, so the bill levied heavy taxes on raw materials consumed by New England such as hemp, flax, molasses, iron and sail duck. With an additional tariff on iron to satisfy Pennsylvania interests, Van Buren expected the tariff to help deliver Pennsylvania, New York, Missouri, Ohio, and Kentucky to Jackson. Over opposition from the South and some from New England, the tariff was passed with the full support of many Jackson supporters in Congress and signed by President Adams in early 1828.
As expected, Jackson and his running mate John Calhoun carried the entire South with overwhelming numbers in all the states but Louisiana where Adams drew 47% of the vote in a losing effort. However, many Southerners became dissatisfied as Jackson, in his first two annual messages to Congress, failed to launch a strong attack on the tariff. Historian William J. Cooper Jr. writes:
“The most doctrinaire ideologues of the Old Republican group [supporters of the Jefferson and Madison position in the late 1790s] first found Jackson wanting. These purists identified the tariff of 1828, the hated Tariff of Abominations, as the most heinous manifestation of the nationalist policy they abhorred. That protective tariff violated their constitutional theory, for, as they interpreted the document, it gave no permission for a protective tariff. Moreover, they saw protection as benefiting the North and hurting the South.”
South Carolina Background (1819-1828)
South Carolina had been adversely affected by the national economic decline of the 1820s. During this decade, the population decreased by 56,000 whites and 30,000 slaves, out of a total free and slave population of 580,000. The whites left for better places; they took slaves with them or sold them to traders moving slaves to the Deep South for sale.
Historian Richard E. Ellis describes the situation:
“Throughout the colonial and early national periods, South Carolina had sustained substantial economic growth and prosperity. This had created an extremely wealthy and extravagant low country aristocracy whose fortunes were based first on the cultivation of rice and indigo, and then on cotton. Then the state was devastated by the Panic of 1819. The depression that followed was more severe than in almost any other state of the Union. Moreover, competition from the newer cotton producing areas along the Gulf Coast, blessed with fertile lands that produced a higher crop-yield per acre, made recovery painfully slow. To make matters worse, in large areas of South Carolina slaves vastly outnumbered whites, and there existed both considerable fear of slave rebellion and a growing sensitivity to even the smallest criticism of “the peculiar institution.””
State leaders, led by states’ rights advocates like William Smith and Thomas Cooper, blamed most of the state’s economic problems on the Tariff of 1816 and national internal improvement projects. Soil erosion and competition from the New Southwest were also very significant reasons for the state’s declining fortunes. George McDuffie was a particularly effective speaker for the anti-tariff forces, and he popularized the Forty Bale theory. McDuffie argued that the 40% tariff on cotton finished goods meant that “the manufacturer actually invades your barns, and plunders you of 40 out of every 100 bales that you produce.” Mathematically incorrect, this argument still struck a nerve with his constituency. Nationalists such as Calhoun were forced by the increasing power of such leaders to retreat from their previous positions and adopt, in the words of Ellis, "an even more extreme version of the states' rights doctrine" in order to maintain political significance within South Carolina.
South Carolina’s first effort at nullification occurred in 1822. Its planters believed that free black sailors had assisted Denmark Vesey in his planned slave rebellion. South Carolina passed a Negro Seamen Act, which required that all black foreign seamen be imprisoned while their ships were docked in Charleston. Britain strongly objected, especially as it was recruiting more Africans as sailors. What was worse, if the captains did not pay the fees to cover the cost of jailing, South Carolina would sell the sailors into slavery. Other southern states also passed laws against free black sailors.
Supreme Court Justice William Johnson, in his capacity as a circuit judge, declared the South Carolina law as unconstitutional since it violated United States treaties with Great Britain. The South Carolina Senate announced that the judge’s ruling was invalid and that the Act would be enforced. The federal government did not attempt to carry out Johnson's decision.
Route to nullification in South Carolina (1828-1832)
Historian Avery Craven argues that, for the most part, the debate from 1828-1832 was a local South Carolina affair. The state's leaders were not united and the sides were roughly equal. The western part of the state and a faction in Charleston, led by Joel Poinsett, would remain loyal to Jackson almost to the end. Only in small part was the conflict between “a National North against a States’-right South”.
After the final vote on the Tariff of 1828, the South Carolina congressional delegation held two caucuses, the second at the home of Senator Robert Y. Hayne. They were rebuffed in their efforts to coordinate a united Southern response and focused on how their state representatives would react. While many agreed with George McDuffie that tariff policy could lead to secession at some future date, they all agreed that as much as possible, the issue should be kept out of the upcoming presidential election. Calhoun, while not at this meeting, served as a moderating influence. He felt that the first step in reducing the tariff was to defeat Adams and his supporters in the upcoming election. William C. Preston, on behalf of the South Carolina legislature, asked Calhoun to prepare a report on the tariff situation. Calhoun readily accepted this challenge and in a few weeks' time had a 35,000-word draft of what would become his “Exposition and Protest”.
Calhoun’s “Exposition” was completed late in 1828. He argued that the tariff of 1828 was unconstitutional because it favored manufacturing over commerce and agriculture. He thought that the tariff power could only be used to generate revenue, not to provide protection from foreign competition for American industries. He believed that the people of a state or several states, acting in a democratically elected convention, had the retained power to veto any act of the federal government which violated the Constitution. This veto, the core of the doctrine of nullification, was explained by Calhoun in the Exposition:
“If it be conceded, as it must be by every one who is the least conversant with our institutions, that the sovereign powers delegated are divided between the General and State Governments, and that the latter hold their portion by the same tenure as the former, it would seem impossible to deny to the States the right of deciding on the infractions of their powers, and the proper remedy to be applied for their correction. The right of judging, in such cases, is an essential attribute of sovereignty, of which the States cannot be divested without losing their sovereignty itself, and being reduced to a subordinate corporate condition. In fact, to divide power, and to give to one of the parties the exclusive right of judging of the portion allotted to each, is, in reality, not to divide it at all; and to reserve such exclusive right to the General Government (it matters not by what department to be exercised), is to convert it, in fact, into a great consolidated government, with unlimited powers, and to divest the States, in reality, of all their rights, It is impossible to understand the force of terms, and to deny so plain a conclusion.”
The report also detailed the specific southern grievances over the tariff that led to the current dissatisfaction. Calhoun was fearful that “hotheads” such as McDuffie might force the legislature into taking some drastic action against the federal government; historian John Niven describes his political purpose in the document:
“All through that hot and humid summer, emotions among the vociferous planter population had been worked up to a near-frenzy of excitement. The whole tenor of the argument built up in the “Exposition” was aimed to present the case in a cool, considered manner that would dampen any drastic moves yet would set in motion the machinery for repeal of the tariff act. It would also warn other sections of the Union against any future legislation that an increasingly self-conscious South might consider punitive, especially on the subject of slavery.”
The report was submitted to the state legislature which had 5,000 copies printed and distributed. Calhoun, who still had designs on succeeding Jackson as president, was not identified as the author but word on this soon leaked out. The legislature took no action on the report at that time.
In the summer of 1828 Robert Barnwell Rhett, soon to be considered the most radical of the South Carolinians, entered the fray over the tariff. As a state representative, Rhett called for the governor to convene a special session of the legislature. An outstanding orator, Rhett appealed to his constituents to resist the majority in Congress. Rhett addressed the danger of doing nothing:
“But if you are doubtful of yourselves – if you are not prepared to follow up your principles wherever they may lead, to their very last consequence – if you love life better than honor, -- prefer ease to perilous liberty and glory; awake not! Stir not! -- Impotent resistance will add vengeance to your ruin. Live in smiling peace with your insatiable Oppressors, and die with the noble consolation that your submissive patience will survive triumphant your beggary and despair.”
Rhett’s rhetoric about revolution and war was too radical in the summer of 1828 but, with the election of Jackson assured, James Hamilton Jr. on October 28 in the Colleton County Courthouse in Walterborough “launched the formal nullification campaign.” Renouncing his former nationalism, Hamilton warned the people that, “Your task-master must soon become a tyrant, from the very abuses and corruption of the system, without the bowels of compassion, or a jot of human sympathy.” He called for implementation of Mr. Jefferson’s “rightful remedy” of nullification. Hamilton sent a copy of the speech directly to President-elect Jackson. But, despite a statewide campaign by Hamilton and McDuffie, a proposal to call a nullification convention in 1829 was defeated by the South Carolina legislature meeting at the end of 1828. State leaders such as Calhoun, Hayne, Smith, and William Drayton were all able to remain publicly non-committal or opposed to nullification for the next couple of years.
The division in the state between radicals and conservatives continued throughout 1829 and 1830. After the failure of a state project to arrange financing of a railroad within the state to promote internal trade, the state petitioned Congress to invest $250,000 in the company trying to build the railroad. After Congress tabled the measure, the debate in South Carolina resumed between those who wanted state investment and those who wanted to work to get Congress' support. The debate demonstrated that a significant minority of the state did have an interest in Clay’s American System. The effect of the Webster-Hayne debate was to energize the radicals, and some moderates started to move in their direction.
The state election campaign of 1830 focused on the tariff issue and the need for a state convention. On the defensive, radicals underplayed the intent of the convention as pro-nullification. When voters were presented with races where an unpledged convention was the issue, the radicals generally won. When conservatives effectively characterized the race as being about nullification, the radicals lost. The October election was narrowly carried by the radicals, although the blurring of the issues left them without any specific mandate. In South Carolina, the governor was selected by the legislature, which chose James Hamilton, the leader of the radical movement, as governor and fellow radical Henry L. Pinckney as speaker of the South Carolina House. For the open Senate seat, the legislature chose the more radical Stephen Miller over William Smith.
With radicals in leading positions, in 1831, they began to capture momentum. State politics became sharply divided along Nullifier and Unionist lines. Still, the margin in the legislature fell short of the two-thirds majority needed for a convention. Many of the radicals felt that convincing Calhoun of the futility of his plans for the presidency would lead him into their ranks. Calhoun meanwhile had concluded that Martin Van Buren was clearly establishing himself as Jackson’s heir apparent. At Hamilton’s prompting, George McDuffie made a three-hour speech in Charleston demanding nullification of the tariff at any cost. In the state, the success of McDuffie’s speech seemed to open up the possibilities of both military confrontation with the federal government and civil war within the state. With silence no longer an acceptable alternative, Calhoun looked for the opportunity to take control of the anti-tariff faction in the state; by June he was preparing what would be known as his Fort Hill Address.
Published on July 26, 1831, the address repeated and expanded the positions Calhoun had made in the “Exposition”. While the logic of much of the speech was consistent with the states’ rights position of most Jacksonians, and even Daniel Webster remarked that it “was the ablest and most plausible, and therefore the most dangerous vindication of that particular form of Revolution”, the speech still placed Calhoun clearly in the nullifier camp. Within South Carolina, his gestures at moderation in the speech were drowned out as planters received word of the Nat Turner insurrection in Virginia. Calhoun was not alone in finding a connection between the abolition movement and the sectional aspects of the tariff issue. It confirmed for Calhoun what he had written in a September 11, 1830 letter:
“I consider the tariff act as the occasion, rather than the real cause of the present unhappy state of things. The truth can no longer be disguised, that the peculiar domestick [sic] institution of the Southern States and the consequent direction which that and her soil have given to her industry, has placed them in regard to taxation and appropriations in opposite relation to the majority of the Union, against the danger of which, if there be no protective power in the reserved rights of the states they must in the end be forced to rebel, or, submit to have their paramount interests sacrificed, their domestic institutions subordinated by Colonization and other schemes, and themselves and children reduced to wretchedness.”
From this point, the nullifiers accelerated their organization and rhetoric. In July 1831 the States Rights and Free Trade Association was formed in Charleston and expanded throughout the state. Unlike state political organizations in the past, which were led by the South Carolina planter aristocracy, this group appealed to all segments of the population, including non-slaveholder farmers, small slaveholders, and the Charleston non-agricultural class. Governor Hamilton was instrumental in seeing that the association, which was both a political and a social organization, expanded throughout the state. In the winter of 1831 and spring of 1832, the governor held conventions and rallies throughout the state to mobilize the nullification movement. The conservatives were unable to match the radicals in either organization or leadership.
The state elections of 1832 were “charged with tension and bespattered with violence,” and “polite debates often degenerated into frontier brawls.” Unlike the previous year’s election, the choice was clear between nullifiers and unionists. The nullifiers won, and on October 20, 1832, Governor Hamilton called the legislature into a special session to consider a convention. The legislative vote was 96-25 in the House and 31-13 in the Senate.
In November 1832 the Nullification Convention met. The convention declared that the tariffs of 1828 and 1832 were unconstitutional and unenforceable within the state of South Carolina after February 1, 1833. They said that attempts to use force to collect the taxes would lead to the state’s secession. Robert Hayne, who followed Hamilton as governor in 1833, established a 2,000-man group of mounted minutemen and 25,000 infantry who would march to Charleston in the event of a military conflict. These troops were to be armed with $100,000 in arms purchased in the North.
The enabling legislation passed by the legislature was carefully constructed to avoid clashes if at all possible and to create an aura of legality in the process. To avoid conflicts with Unionists, it allowed importers to pay the tariff if they so desired. Other merchants could pay the tariff by obtaining a paper tariff bond from the customs officer. They would then refuse to pay the bond when due, and if the customs official seized the goods, the merchant would file for a writ of replevin to recover the goods in state court. Customs officials who refused to return the goods (by placing them under the protection of federal troops) would be civilly liable for twice the value of the goods. To insure that state officials and judges supported the law, a "test oath" would be required for all new state officials, binding them to support the ordinance of nullification.
Governor Hayne in his inaugural address announced South Carolina's position:
“If the sacred soil of Carolina should be polluted by the footsteps of an invader, or be stained with the blood of her citizens, shed in defense, I trust in Almighty God that no son of hers … who has been nourished at her bosom … will be found raising a parricidal arm against our common mother. And even should she stand ALONE in this great struggle for constitutional liberty … that there will not be found, in the wider limits of the state, one recreant son who will not fly to the rescue, and be ready to lay down his life in her defense.”
Washington, D.C. (1828-1832)
When President Jackson took office in March 1829 he was well aware of the turmoil created by the “Tariff of Abominations”. While he may have abandoned some of his earlier beliefs that had allowed him to vote for the Tariff of 1824, he still felt protectionism was justified for products essential to military preparedness and did not believe that the current tariff should be reduced until the national debt was fully paid off. He addressed the issue in his inaugural address and his first three messages to Congress, but offered no specific relief. In December 1831, with the proponents of nullification in South Carolina gaining momentum, Jackson was recommending “the exercise of that spirit of concession and conciliation which has distinguished the friends of our Union in all great emergencies.” However on the constitutional issue of nullification, despite his strong beliefs in states’ rights, Jackson did not waver.
Calhoun’s “Exposition and Protest” did start a national debate over the doctrine of nullification. The leading proponents of the nationalistic view included Daniel Webster, Supreme Court Justice Joseph Story, Judge William Alexander Duer, John Quincy Adams, Nathaniel Chipman, and Nathan Dane. These people rejected the compact theory advanced by Calhoun, claiming that the Constitution was the product of the people, not the states. According to the nationalist position, the Supreme Court had the final say on the constitutionality of legislation, and the national union was perpetual and had supreme authority over individual states. The nullifiers, on the other hand, asserted that the central government was not to be the ultimate arbiter of its own power, and that the states, as the contracting entities, could judge for themselves what was or was not constitutional. While Calhoun’s “Exposition” claimed that nullification was based on the reasoning behind the Kentucky and Virginia Resolutions, an aging James Madison, in an August 28, 1830 letter to Edward Everett intended for publication, disagreed. Madison wrote, denying that any individual state could alter the compact:
“Can more be necessary to demonstrate the inadmissibility of such a doctrine than that it puts it in the power of the smallest fraction over 1/4 of the U. S. — that is, of 7 States out of 24 — to give the law and even the Constn. to 17 States, each of the 17 having as parties to the Constn. an equal right with each of the 7 to expound it & to insist on the exposition. That the 7 might, in particular instances be right and the 17 wrong, is more than possible. But to establish a positive & permanent rule giving such a power to such a minority over such a majority, would overturn the first principle of free Govt. and in practice necessarily overturn the Govt. itself.”
Part of the South’s strategy to force repeal of the tariff was to arrange an alliance with the West. Under the plan, the South would support the West’s demand for free lands in the public domain if the West would support repeal of the tariff. With this purpose, Robert Hayne took the floor of the Senate in early 1830, thus beginning “the most celebrated debate, in the Senate’s history.” Daniel Webster’s response shifted the debate, subsequently styled the Webster-Hayne debates, from the specific issue of western lands to a general debate on the very nature of the United States. Webster's position differed from Madison's: Webster asserted that the people of the United States acted as one aggregate body, while Madison held that the people of the several states had acted collectively. John Rowan spoke against Webster on that issue, and Madison wrote, congratulating Webster, but explaining his own position. The debate presented the fullest articulation of the differences over nullification, and 40,000 copies of Webster’s response, which concluded with “liberty and Union, now and forever, one and inseparable”, were distributed nationwide.
Many people expected the states’ rights Jackson to side with Hayne. However, once the debate shifted to secession and nullification, Jackson sided with Webster. On April 13, 1830, at the traditional Democratic Party celebration honoring Thomas Jefferson’s birthday, Jackson chose to make his position clear. In a battle of toasts, Hayne proposed, “The Union of the States, and the Sovereignty of the States.” Jackson’s response, when his turn came, was, “Our Federal Union: It must be preserved.” To those attending, the effect was dramatic. Calhoun would respond with his own toast, in a play on Webster’s closing remarks in the earlier debate, “The Union. Next to our liberty, the most dear.” Finally Martin Van Buren would offer, “Mutual forbearance and reciprocal concession. Through their agency the Union was established. The patriotic spirit from which they emanated will forever sustain it.”
Van Buren wrote in his autobiography of Jackson’s toast, “The veil was rent – the incantations of the night were exposed to the light of day.” Thomas Hart Benton, in his memoirs, stated that the toast “electrified the country.” Jackson would have the final words a few days later when a visitor from South Carolina asked if Jackson had any message he wanted relayed to his friends back in the state. Jackson’s reply was:
“Yes I have; please give my compliments to my friends in your State and say to them, that if a single drop of blood shall be shed there in opposition to the laws of the United States, I will hang the first man I can lay my hand on engaged in such treasonable conduct, upon the first tree I can reach.”
Issues other than the tariff were still being decided. In May 1830 Jackson vetoed an important (especially to Kentucky and Henry Clay) internal improvements program in the Maysville Road Bill and then followed this with additional vetoes of other such projects shortly before Congress adjourned at the end of May. Clay would use these vetoes to launch his presidential campaign. In 1831 the re-chartering of the Bank of the United States, with Clay and Jackson on opposite sides, reopened a long-simmering problem. This issue was featured at the December 1831 National Republican convention in Baltimore, which nominated Henry Clay for president, and the proposal to re-charter was formally introduced into Congress on January 6, 1832. The Calhoun-Jackson split took center stage when Calhoun, as vice-president presiding over the Senate, cast the tie-breaking vote to deny Martin Van Buren the post of minister to England. Van Buren was subsequently selected as Jackson’s running mate at the 1832 Democratic National Convention held in May.
In February 1832 Henry Clay, back in the Senate after a two-decade absence, made a three-day-long speech calling for a new tariff schedule and an expansion of his American System. In an effort to reach out to John Calhoun and other southerners, Clay’s proposal provided for a ten-million-dollar revenue reduction based on the amount of budget surplus he anticipated for the coming year. Significant protection was still part of the plan, as the reduction primarily came on those imports not in competition with domestic producers. Jackson proposed an alternative that reduced overall tariffs to 28%. John Quincy Adams, now in the House of Representatives, used his Committee on Manufactures to produce a compromise bill that, in its final form, reduced revenues by five million dollars, lowered duties on non-competitive products, and retained high tariffs on woolens, iron, and cotton products. In the course of the political maneuvering, George McDuffie’s Ways and Means Committee, the normal originator of such bills, prepared a bill with drastic reductions across the board. McDuffie’s bill went nowhere. Jackson signed the Tariff of 1832 on July 14, 1832, a few days after he vetoed the Bank of the United States re-charter bill. Congress adjourned after it failed to override Jackson’s veto.
With Congress in adjournment, Jackson anxiously watched events in South Carolina. The nullifiers found no significant compromise in the Tariff of 1832 and acted accordingly (see the above section). Jackson heard rumors of efforts to subvert members of the army and navy in Charleston, and he ordered the secretaries of the army and navy to begin rotating troops and officers based on their loyalty. He ordered General Winfield Scott to prepare for military operations and ordered a naval squadron in Norfolk to prepare to go to Charleston. Jackson kept lines of communication open with unionists like Joel Poinsett, William Drayton, and James L. Petigru and sent George Breathitt, brother of the Kentucky governor, to independently obtain political and military intelligence. After their defeat at the polls in October, Petigru advised Jackson that he should “Be prepared to hear very shortly of a State Convention and an act of Nullification.” On October 19, 1832 Jackson wrote to his Secretary of War, “The attempt will be made to surprise the Forts and garrisons by the militia, and must be guarded against with vestal vigilance and any attempt by force repelled with prompt and exemplary punishment.” By mid-November Jackson’s reelection was assured.
On December 3, 1832 Jackson sent his fourth annual message to Congress. The message “was stridently states’ rights and agrarian in its tone and thrust” and he disavowed protection as anything other than a temporary expedient. His intent regarding nullification, as communicated to Van Buren, was “to pass it barely in review, as a mere buble [sic], view the existing laws as competent to check and put it down.” He hoped to create a “moral force” that would transcend political parties and sections. The paragraph in the message that addressed nullification was:
“It is my painful duty to state that in one quarter of the United States opposition to the revenue laws has arisen to a height which threatens to thwart their execution, if not to endanger the integrity of the Union. What ever obstructions may be thrown in the way of the judicial authorities of the General Government, it is hoped they will be able peaceably to overcome them by the prudence of their own officers and the patriotism of the people. But should this reasonable reliance on the moderation and good sense of all portions of our fellow citizens be disappointed, it is believed that the laws themselves are fully adequate to the suppression of such attempts as may be immediately made. Should the exigency arise rendering the execution of the existing laws impracticable from any cause what ever, prompt notice of it will be given to Congress, with a suggestion of such views and measures as may be deemed necessary to meet it.”
On December 10 Jackson issued the Proclamation to the People of South Carolina, in which he characterized the positions of the nullifiers as "impractical absurdity" and "a metaphysical subtlety, in pursuit of an impractical theory." He provided this concise statement of his belief:
“I consider, then, the power to annul a law of the United States, assumed by one State, incompatible with the existence of the Union, contradicted expressly by the letter of the Constitution, unauthorized by its spirit, inconsistent with every principle on which it was founded, and destructive of the great object for which it was formed.”
The language used by Jackson, combined with the reports coming out of South Carolina, raised the spectre of military confrontation for many on both sides of the issue. A group of Democrats, led by Van Buren and Thomas Hart Benton among others, saw the only solution to the crisis in a substantial reduction of the tariff.
Negotiation and Confrontation (1833)
In apparent contradiction of his previous claim that the tariff could be enforced with existing laws, on January 16 Jackson sent his Force Bill Message to Congress. Customs houses in Beaufort and Georgetown would be closed and replaced by ships located at each port. In Charleston the customs house would be moved to either Castle Pinckney or Fort Moultrie in Charleston harbor. Direct payment rather than bonds would be required, federal jails would be established for violators that the state refused to arrest, and all cases arising under the state’s nullification act could be removed to the United States Circuit Court. In the most controversial part, the militia acts of 1795 and 1807 would be revised to permit the enforcement of the customs laws by both the militia and the regular United States military. Attempts were made in South Carolina to shift the debate away from nullification by focusing instead on the proposed enforcement.
The Force bill went to the Senate Judiciary Committee chaired by Pennsylvania protectionist William Wilkins and supported by members Daniel Webster and Theodore Frelinghuysen of New Jersey; it gave Jackson everything he asked. On January 28 the Senate defeated a motion by a vote of 30 to 15 to postpone debate on the bill. All but two of the votes to delay were from the lower South and only three from this section voted against the motion. This did not signal any increased support for nullification but did signify doubts about enforcement. In order to draw more votes, proposals were made to limit the duration of the coercive powers and restrict the use of force to suppressing, rather than preventing, civil disorder. In the House the Judiciary Committee, in a 4-3 vote, rejected Jackson’s request to use force. By the time Calhoun made a major speech on February 15 strongly opposing it, the Force Bill was temporarily stalled.
On the tariff issue, the drafting of a compromise tariff was assigned in December to the House Ways and Means Committee, now headed by Gulian C. Verplanck. Debate on the committee’s product on the House floor began in January 1833. The Verplanck tariff proposed reductions back to the 1816 levels over the course of the next two years while maintaining the basic principle of protectionism. The anti-Jackson protectionists saw this as an economic disaster that did not allow the Tariff of 1832 to even be tested and "an undignified truckling to the menaces and blustering of South Carolina." Northern Democrats did not oppose it in principle but still demanded protection for the varying interests of their own constituents. Those sympathetic to the nullifiers wanted a specific abandonment of the principle of protectionism and were willing to offer a longer transition period as a bargaining point. It was clear that the Verplanck tariff was not going to be implemented.
In South Carolina, efforts were being made to avoid an unnecessary confrontation. Governor Hayne ordered the 25,000 troops he had created to train at home rather than gathering in Charleston. At a mass meeting in Charleston on January 21, it was decided to postpone the February 1 deadline for implementing nullification while Congress worked on a compromise tariff. At the same time a commissioner from Virginia, Benjamin Watkins Leigh, arrived in Charleston bearing resolutions that criticized both Jackson and the nullifiers and offering his state as a mediator.
Henry Clay had not taken his defeat in the presidential election well and was unsure of what position he could take in the tariff negotiations. His long-term concern was that Jackson was ultimately determined to kill protectionism along with the American System. In February, after consulting with manufacturers and sugar interests in Louisiana who favored protection for the sugar industry, Clay started to work on a specific compromise plan. As a starting point, he accepted the nullifiers' offer of a transition period but extended it from seven and a half years to nine years, with a final target of a 20% ad valorem rate. After first securing the support of his protectionist base, Clay, through an intermediary, broached the subject with Calhoun. Calhoun was receptive, and after a private meeting with Clay at Clay’s boardinghouse, negotiations proceeded.
Clay introduced the negotiated tariff bill on February 12, and it was immediately referred to a select committee consisting of Clay as chairman, Felix Grundy of Tennessee, George M. Dallas of Pennsylvania, William Cabell Rives of Virginia, Webster, John M. Clayton of Delaware, and Calhoun. On February 21 the committee reported a bill to the floor of the Senate which was largely the original bill proposed by Clay. The Tariff of 1832 would continue except that all rates above 20% would be reduced by one tenth every two years, with the final reductions back to 20% coming in 1842. Protectionism as a principle was not abandoned, and provisions were made for raising the tariff if national interests demanded it.
Although not specifically linked by any negotiated agreement, it became clear that the Force Bill and Compromise Tariff of 1833 were inextricably linked. In his February 25 speech ending the debate on the tariff, Clay captured the spirit of the voices for compromise by condemning Jackson's Proclamation to South Carolina as inflammatory, admitting the same problem with the Force Bill but indicating its necessity, and praising the Compromise Tariff as the final measure to restore balance, promote the rule of law, and avoid the "sacked cities," "desolated fields," and "smoking ruins" that he said would be the product of the failure to reach a final accord. The House passed the Compromise Tariff by 119-85 and the Force Bill by 149-48. In the Senate the tariff passed 29-16 and the Force Bill by 32-1, with many of its opponents walking out rather than vote on it.
Calhoun rushed to Charleston with the news of the final compromises. The Nullification Convention met again on March 11. It repealed the November Nullification Ordinance and also, "in a purely symbolic gesture", nullified the Force Bill. While the nullifiers claimed victory on the tariff issue, even though they had made concessions, the verdict was very different on nullification. The majority had, in the end, ruled, and this boded ill for the South and its minority hold on slavery. Rhett summed this up at the convention on March 13. Warning that, "A people, owning slaves, are mad, or worse than mad, who do not hold their destinies in their own hands," he continued:
“Every stride of this Government, over your rights, brings it nearer and nearer to your peculiar policy. …The whole world are in arms against your institutions … Let Gentlemen not be deceived. It is not the Tariff – not Internal Improvement – nor yet the Force bill, which constitutes the great evil against which we are contending. … These are but the forms in which the despotic nature of the government is evinced – but it is the despotism which constitutes the evil: and until this Government is made a limited Government … there is no liberty – no security for the South.”
People reflected on the meaning of the nullification crisis and its outcome for the country. On May 1, 1833 Jackson wrote, "the tariff was only a pretext, and disunion and southern confederacy the real object. The next pretext will be the negro, or slavery question."
The final resolution of the crisis and Jackson’s leadership had appeal throughout the North and South. Robert Remini, the historian and Jackson biographer, described the opposition that nullification drew from traditionally states’ rights Southern states:
The Alabama legislature, for example, pronounced the doctrine “unsound in theory and dangerous in practice.” Georgia said it was “mischievous,” “rash and revolutionary.” Mississippi lawmakers chided the South Carolinians for acting with “reckless precipitancy.”
Forrest McDonald, describing the split over nullification among proponents of states’ rights, wrote, “The doctrine of states’ rights, as embraced by most Americans, was not concerned exclusively, or even primarily with state resistance to federal authority.” But, by the end of the nullification crisis, many southerners started to question whether the Jacksonian Democrats still represented Southern interests. The historian William J. Cooper notes that, “Numerous southerners had begun to perceive it [the Jacksonian Democratic Party] as a spear aimed at the South rather than a shield defending the South.”
In the political vacuum created by this alienation, the southern wing of the Whig Party was formed. The party was a coalition of interests united by the common thread of opposition to Andrew Jackson and, more specifically, his “definition of federal and executive power.” The party included former National Republicans with an “urban, commercial, and nationalist outlook” as well as former nullifiers. Emphasizing that “they were more southern than the Democrats,” the party grew within the South by going “after the abolition issue with unabashed vigor and glee.” With both parties arguing who could best defend southern institutions, the nuances of the differences between free soil and abolitionism, which became an issue in the late 1840s with the Mexican War and territorial expansion, never became part of the political dialogue. This failure increased the volatility of the slavery issues.
Richard Ellis argues that the end of the crisis signified the beginning of a new era. Within the states’ rights movement, the traditional desire for simply “a weak, inactive, and frugal government” was challenged. Ellis states that “in the years leading up to the Civil War the nullifiers and their pro-slavery allies used the doctrine of states’ rights and state sovereignty in such a way as to try to expand the powers of the federal government so that it could more effectively protect the peculiar institution.” By the 1850s, states’ rights had become a call for state equality under the Constitution.
Madison reacted to this incipient tendency by writing two paragraphs of "Advice to My Country," found among his papers. It said that the Union "should be cherished and perpetuated. Let the open enemy to it be regarded as a Pandora with her box opened; and the disguised one, as the Serpent creeping with his deadly wiles into paradise." Richard Rush published this "Advice" in 1850, by which time Southern spirit was so high that it was denounced as a forgery.
The first test for the South over the slavery issue began during the final congressional session of 1835. In what became known as the Gag Rule Debates, abolitionists flooded the Congress with anti-slavery petitions to end slavery and the slave trade in Washington, D.C. The debate was reopened each session as Southerners, led by South Carolinians Henry Pinckney and James Henry Hammond, prevented the petitions from even being officially received by Congress. Led by John Quincy Adams, the slavery debate remained on the national stage until late 1844, when Congress lifted all restrictions on processing the petitions.
Describing the legacy of the crisis, Sean Wilentz writes:
“The battle between Jacksonian democratic nationalists, northern and southern, and nullifier sectionalists would resound through the politics of slavery and antislavery for decades to come. Jackson’s victory, ironically, would help accelerate the emergence of southern pro-slavery as a coherent and articulate political force, which would help solidify northern antislavery opinion, inside as well as outside Jackson’s party. Those developments would accelerate the emergence of two fundamentally incompatible democracies, one in the slave South, the other in the free North.”
For South Carolina, the legacy of the crisis involved both the divisions within the state during the crisis and the apparent isolation of the state as the crisis was resolved. By 1860, when South Carolina became the first state to secede, the state was more internally united than any other southern state. Historian Charles Edward Cauthen writes:
“Probably to a greater extent than in any other Southern state South Carolina had been prepared by her leaders over a period of thirty years for the issues of 1860. Indoctrination in the principles of state sovereignty, education in the necessity of maintaining Southern institutions, warnings of the dangers of control of the federal government by a section hostile to its interests – in a word, the education of the masses in the principles and necessity of secession under certain circumstances – had been carried on with a skill and success hardly inferior to the masterly propaganda of the abolitionists themselves. It was this education, this propaganda, by South Carolina leaders which made secession the almost spontaneous movement that it was.”
See also
- Origins of the American Civil War
- American System (economic plan)
- American School (economics)
- Alexander Hamilton
- Friedrich List
- Nullification Convention
- Remini, Andrew Jackson, v2 pp. 136-137. Niven pg. 135-137. Freehling, Prelude to Civil War pg 143
- Freehling, The Road to Disunion, pg. 255. Craven pg. 60. Ellis pg. 7
- Craven pg.65. Niven pg. 135-137. Freehling, Prelude to Civil War pg 143
- Niven p. 192. Calhoun replaced Robert Y. Hayne as senator so that Hayne could follow James Hamilton as governor. Niven writes, "There is no doubt that these moves were part of a well-thought-out plan whereby Hayne would restrain the hotheads in the state legislature and Calhoun would defend his brainchild, nullification, in Washington against administration stalwarts and the likes of Daniel Webster, the new apostle of northern nationalism."
- Howe p. 410. In the Senate only Virginia and South Carolina voted against the 1832 tariff. Howe writes, "Most southerners saw the measure as a significant amelioration of their grievance and were now content to back Jackson for reelection rather than pursue the more drastic remedy such as the one South Carolina was touting."
- Freehling, Prelude to Civil War pg. 1-3. Freehling writes, “In Charleston Governor Robert Y. Hayne ... tried to form an army which could hope to challenge the forces of ‘Old Hickory.’ Hayne recruited a brigade of mounted minutemen, 2,000 strong, which could swoop down on Charleston the moment fighting broke out, and a volunteer army of 25,000 men which could march on foot to save the beleaguered city. In the North Governor Hayne’s agents bought over $100,000 worth of arms; in Charleston Hamilton readied his volunteers for an assault on the federal forts.”
- Wilentz pg. 388
- Woods pg. 78
- Tuttle, California Digest 26 pg. 47
- Ellis pg. 4
- McDonald pg. vii. McDonald wrote, “Of all the problems that beset the United States during the century from the Declaration of Independence to the end of Reconstruction, the most pervasive concerned disagreements about the nature of the Union and the line to be drawn between the authority of the general government and that of the several states. At times the issue bubbled silently and unseen between the surface of public consciousness; at times it exploded: now and again the balance between general and local authority seemed to be settled in one direction or another, only to be upset anew and to move back toward the opposite position, but the contention never went away.”
- Ellis pg. 1-2.
- For full text of the resolutions, see Kentucky Resolutions of 1798 and Kentucky Resolutions of 1799.
- James Madison, Virginia Resolutions of 1798
- Banning pg. 388
- Brant, p. 297, 629
- Brant, pp. 298.
- Brant, p.629
- Ketchum pg. 396
- Wilentz pg. 80.
- Ellis p.5. Madison called for the constitutional amendment because he believed much of the American System was unconstitutional. Historian Richard Buel Jr. notes that in preparing for the worst from the Hartford Convention, the Madison administration made preparation to intervene militarily in case of New England secession. Troops from the Canadian border were moved near Albany so that they could move into either Massachusetts or Connecticut if necessary. New England troops were also returned to their recruitment areas in order to serve as a focus for loyalists. Buel pg.220-221
- McDonald pg. 69-70
- Wilentz pg.166
- Wilentz pg. 181
- Ellis pg. 6. Wilentz pg. 182.
- Freehling, Prelude to Civil War pg. 92-93
- Wilentz pg. 243. Economic historian Frank Taussig notes “The act of 1816, which is generally said to mark the beginning of a distinctly protective policy in this country, belongs rather to the earlier series of acts, beginning with that of 1789, than to the group of acts of 1824, 1828, and 1832. Its highest permanent rate of duty was twenty per cent., an increase over the previous rates which is chiefly accounted for by the heavy interest charge on the debt incurred during the war. But after the crash of 1819, a movement in favor of protection set in, which was backed by a strong popular feeling such as had been absent in the earlier years.” http://teachingamericanhistory.org/library/index.asp?document=1136
- Remini, Henry Clay pg. 232. Freehling, The Road to Disunion, pg. 257.
- McDonald pg. 95
- Brant, p. 622
- Remini, Andrew Jackson, v2 pp. 136-137. McDonald presents a slightly different rationale. He stated that the bill would “adversely affect New England woolen manufacturers, ship builders, and shipowners” and Van Buren calculated that New England and the South would unite to defeat the bill, allowing Jacksonians to have it both ways – in the North they could claim they tried but failed to pass a needed tariff and in the South they could claim that they had thwarted an effort to increase import duties. McDonald pg. 94-95
- Cooper pg. 11-12.
- Freehling, The Road to Disunion, pg. 255. Historian Avery Craven wrote, “Historians have generally ignored the fact that the South Carolina statesmen, in the so-called Nullification controversy, were struggling against a practical situation. They have conjured up a great struggle between nationalism and States’ rights and described these men as theorists reveling in constitutional refinements for the mere sake of logic. Yet here was a clear case of commercial and agricultural depression.” Craven pg. 60
- Ellis pg. 7. Freehling notes that divisions over nullification in the state generally corresponded to the extent that the section suffered economically. The exception was the “Low country rice and luxury cotton planters” who supported nullification despite their ability to survive the economic depression. This section had the highest percentage of slave population. Freehling, Prelude to Civil War, pg. 25.
- Cauthen pg. 1
- Ellis pg. 7. Freehling, Road to Disunion, pg. 256
- Gerald Horne, Negro Comrades of the Crown: African Americans and the British Empire Fight the U.S. Before Emancipation, New York University (NYU) Press, 2012, pp. 97-98
- Freehling, Road to Disunion, p. 254
- Craven pg.65.
- Niven pg. 135-137. Freehling, Prelude to Civil War pg 143.
- South Carolina Exposition and Protest
- Niven pg. 158-162
- Niven pg. 161
- Niven pg. 163-164
- Walther pg. 123. Craven pg. 63-64.
- Freehling, Prelude to Civil War pg. 149
- Freehling, Prelude to Civil War pg. 152-155, 173-175. A two-thirds vote of each house of the legislature was required to convene a state convention.
- Freehling, Prelude to Civil War pg. 177-186
- Freehling, Prelude to Civil War, pg. 205-213
- Freehling, Prelude to Civil War, pg. 213-218
- Peterson pg. 189-192. Niven pg. 174-181. Calhoun wrote of McDuffie’s speech, “I think it every way imprudent and have so written Hamilton … I see clearly it brings matters to a crisis, and that I must meet it promptly and manfully.” Freehling in his works frequently refers to the radicals as “Calhounites” even before 1831. This is because the radicals, rallying around Calhoun’s “Exposition,” were linked ideologically, if not yet practically, with Calhoun.
- Niven pg. 181-184
- Ellis pg. 193. Freehling, Prelude to Civil War, pg. 257.
- Freehling pg. 224-239
- Freehling, Prelude to Civil War pg. 252-260
- Freehling, Prelude to Civil War pg. 1-3.
- Ellis pg. 97-98
- Remini, Andrew Jackson, v. 3 pg. 14
- Ellis pg. 41-43
- Ellis p. 9
- Ellis pg. 9
- Brant, p.627.
- Ellis pg. 10. Ellis wrote, "But the nullifiers' attempt to legitimize their controversial doctrine by claiming it was a logical extension of the principles embodied in the Kentucky and Virginia Resolutions upset him. In a private letter he deliberately wrote for publication, Madison denied many of the assertions of the nullifiers and lashed out in particular at South Carolina's claim that if a state nullified an act of the federal government it could only be overruled by an amendment to the Constitution." Full text of the letter is available at http://www.constitution.org/jm/18300828_everett.htm.
- Brant, pp. 626-7. Webster never asserted the consolidating position again.
- McDonald pg.105-106
- Remini, Andrew Jackson, v.2 pg. 233-235.
- Remini, Andrew Jackson, v.2 pg. 233-237.
- Remini, Andrew Jackson, v.2 pg. 255-256 Peterson pg. 196-197.
- Remini, Andrew Jackson, v.2 pg. 343-348
- Remini, Andrew Jackson, v.2 pg. 347-355
- Remini, Andrew Jackson, v.2 pg. 358-373. Peterson pg. 203-212
- Remini, Andrew Jackson, v.2 pg. 382-389
- Ellis pg. 82
- Remini, Andrew Jackson, v. 3 pg. 9-11. Full text of his message available at http://www.thisnation.com/library/sotu/1832aj.html
- Ellis pg 83-84. Full document available at: http://www.yale.edu/lawweb/avalon/presiden/proclamations/jack01.htm
- Ellis pg. 93-95
- Ellis pg. 160-165. Peterson pg. 222-224. Peterson differs with Ellis in arguing that passage of the Force Bill “was never in doubt.”
- Ellis pg. 99-100. Peterson pg. 217.
- Wilentz pg. 384-385.
- Peterson pg. 217-226
- Peterson pg. 226-228
- Peterson pg. 229-232
- Freehling, Prelude to Civil War, pg. 295-297
- Freehling, Prelude to Civil War, pg. 297. Wilentz pg. 388
- Jon Meacham (2009), American Lion: Andrew Jackson in the White House, New York: Random House, p. 247; Correspondence of Andrew Jackson, Vol. V, p. 72.
- Remini, Andrew Jackson, v3. pg. 42.
- McDonald pg. 110
- Cooper pg. 53-65
- Ellis pg. 198
- Brant p. 646; Rush produced a copy in Mrs. Madison's hand; the original also survives. The contemporary letter to Edward Coles (Brant, p. 639) makes plain that the enemy in question is the nullifier.
- Freehling, Prelude to Civil War pg. 346-356. McDonald (pg 121-122) saw states’ rights in the period from 1833-1847 as almost totally successful in creating a “virtually nonfunctional” federal government. This did not insure political harmony, as “the national political arena became the center of heated controversy concerning the newly raised issue of slavery, a controversy that reached the flash point during the debates about the annexation of the Republic of Texas” pg. 121-122
- Cauthen pg. 32
- Brant, Irving: The Fourth President: A Life of James Madison Bobbs Merrill, 1970.
- Buel, Richard Jr. America on the Brink: How the Political Struggle Over the War of 1812 Almost Destroyed the Young Republic. (2005) ISBN 1-4039-6238-3
- Cauthen, Charles Edward. South Carolina Goes to War. (1950) ISBN 1-57003-560-1
- Cooper, William J. Jr. The South and the Politics of Slavery 1828-1856 (1978) ISBN 0-8071-0385-3
- Craven, Avery. The Coming of the Civil War (1942) ISBN 0-226-11894-0
- Ellis, Richard E. The Union at Risk: Jacksonian Democracy, States' Rights, and the Nullification Crisis (1987)
- Freehling, William W. The Road to Disunion: Secessionists at Bay, 1776-1854 (1991), Vol. 1
- Freehling, William W. Prelude to Civil War: The Nullification Crisis in South Carolina 1816-1836. (1965) ISBN 0-19-507681-8
- Howe, Daniel Walker. What Hath God Wrought: The Transformation of America, 1815-1848. (2007) ISBN 978-0-19-507894-7
- McDonald, Forrest. States’ Rights and the Union: Imperium in Imperio 1776-1876 (2000) ISBN 0-7006-1040-5
- Niven, John. John C. Calhoun and the Price of Union (1988) ISBN 0-8071-1451-0
- Peterson, Merrill D. The Great Triumvirate: Webster, Clay, and Calhoun. (1987) ISBN 0-19-503877-0
- Remini, Robert V. Andrew Jackson and the Course of American Freedom, 1822-1832,v2 (1981) ISBN 0-06-014844-6
- Remini, Robert V. Andrew Jackson and the Course of American Democracy, 1833-1845, v3 (1984) ISBN 0-06-015279-6
- Remini, Robert V. Henry Clay: Statesman for the Union (1991) ISBN 0-393-31088-4
- Tuttle, Charles A. (Court Reporter) California Digest: A Digest of the Reports of the Supreme Court of California, Volume 26 (1906)
- Walther, Eric C. The Fire-Eaters (1992) ISBN 0-8071-1731-5
- Wilentz, Sean. The Rise of American Democracy: Jefferson to Lincoln. (2005) ISBN 0-393-05820-4
- Woods, Thomas E. Jr. Nullification (2010) ISBN 978-1-59698-149-2
Further reading
- Barnwell, John. Love of Order: South Carolina's First Secession Crisis (1982)
- Capers, Gerald M. John C. Calhoun, Opportunist: A Reappraisal (1960)
- Coit, Margaret L. John C. Calhoun: American Portrait (1950)
- Houston, David Franklin (1896). A Critical Study of Nullification in South Carolina. Longmans, Green, and Co.
- Latner, Richard B. "The Nullification Crisis and Republican Subversion," Journal of Southern History 43 (1977): 18-38, in JSTOR
- McCurry, Stephanie. Masters of Small Worlds.New York: Oxford UP, 1993.
- Pease, Jane H. and William H. Pease, "The Economics and Politics of Charleston's Nullification Crisis", Journal of Southern History 47 (1981): 335-62, in JSTOR
- Ratcliffe, Donald. "The Nullification Crisis, Southern Discontents, and the American Political Process", American Nineteenth Century History. Vol 1: 2 (2000) pp. 1–30
- Wiltse, Charles. John C. Calhoun, nullifier, 1829-1839 (1949)
- South Carolina Exposition and Protest, by Calhoun, 1828.
- The Fort Hill Address: On the Relations of the States and the Federal Government, by Calhoun, July 1831.
- South Carolina Ordinance of Nullification, November 24, 1832.
- President Jackson's Proclamation to South Carolina, December 10, 1832.
- Primary Documents in American History: Nullification Proclamation (Library of Congress)
- President Jackson's Message to the Senate and House Regarding South Carolina's Nullification Ordinance, January 16, 1833
- Nullification Revisited: An article examining the constitutionality of nullification (from a favorable aspect, and with regard to both recent and historical events). | http://en.wikipedia.org/wiki/Nullification_Crisis | 13 |
28 | When looking at the relationship between two variables taken from objects in a sample, correlation is the appropriate approach when we are interested in the strength of the association between the variables but do not assume causality (that is, we have two potentially interdependent variables, not one independent and one dependent variable). The question addressed by correlation analysis is the extent to which two variables covary. In graphical terms, this amounts to asking how closely points on a scatterplot fall to an imaginary line drawn through the long axis. Of course, it is important to remember that we are not saying anything about the line itself (slope, intercept, etc.), just where the points lie in relation to that line.
Principles of Correlation
To measure the degree of correlation, we compute one of several correlation coefficients, measures of the tendency for two variables to change together. Correlation coefficients range from -1.0 to +1.0. A correlation of +1.0 indicates that the variables always change together and in the same direction (positive correlation), while a value of -1.0 indicates a perfect negative correlation, where larger values for one variable are always associated with small values of the other, and vice versa. A correlation of 0.0 indicates that the variables vary independently of one another (are uncorrelated, or show no joint dependence). Values in between these extremes represent different degrees of positive and negative correlations.
Graphically, perfect correlations imply that the points fall along an imaginary line of some non-zero slope, whereas completely uncorrelated variables generate a scatterplot that is circular. Thus, the correlation coefficient measures the ellipticity of the scatter of points. Points falling along a line of zero slope, however, are also uncorrelated, as one of the variables shows no variance (and thus cannot covary with the other).
While there are multiple types of correlation coefficients, two are used most commonly. Both of these depend on computing the product of the deviations of X1 and X2 from their respective means.
Pearson Correlation Coefficient
The Pearson correlation coefficient is a parametric statistic, which assumes (1) a random sample, (2) both variables are interval or ratio, (3) both variables are more or less normally distributed, and (4) any relationship that exists is linear. To calculate the Pearson correlation, we must first calculate the covariance, based on the sum of the products of the deviations of the two variables from their respective means. The covariance (cov) is calculated as
cov(X1,X2) = 1/(n-1) * SUM ((X1i - X1 bar)(X2i - X2 bar))
While the covariance shows the same tendencies as the correlation, its actual value is dependent on the original units (so cov ranges from negative to positive infinity). We would like to standardize these covariances so that we can compare and compute correlations among pairs of variables even when they are measured on different scales. To do this, we divide the covariance by the standard deviations of the variables to generate the Pearson correlation coefficient (rp), as
rp = cov (X1,X2) / (SX1 * SX2)
It is important to remember that r is not a test of significance, just a measure of the degree of association.
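To make these definitions concrete, here is a minimal Python sketch of the covariance and Pearson formulas given above. The data values and variable names are hypothetical, invented purely for illustration, and only the standard library is used.

def mean(values):
    return sum(values) / len(values)

def covariance(x1, x2):
    # cov(X1,X2) = 1/(n-1) * SUM((X1i - X1 bar)(X2i - X2 bar))
    n = len(x1)
    x1_bar, x2_bar = mean(x1), mean(x2)
    return sum((a - x1_bar) * (b - x2_bar) for a, b in zip(x1, x2)) / (n - 1)

def sample_sd(values):
    # sample standard deviation, with n - 1 in the denominator
    n = len(values)
    v_bar = mean(values)
    return (sum((v - v_bar) ** 2 for v in values) / (n - 1)) ** 0.5

def pearson_r(x1, x2):
    # rp = cov(X1,X2) / (S_X1 * S_X2)
    return covariance(x1, x2) / (sample_sd(x1) * sample_sd(x2))

wing_length = [10.4, 10.8, 11.1, 10.2, 10.3, 10.2, 10.7, 10.5]  # hypothetical data
tail_length = [7.4, 7.6, 7.9, 7.2, 7.4, 7.1, 7.4, 7.2]          # hypothetical data

print(round(pearson_r(wing_length, tail_length), 3))  # a value between -1.0 and +1.0

Note that the function returns only r; testing whether that r differs significantly from zero is a separate step, described below.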
Spearman Correlation Coefficient
The Spearman correlation is nonparametric, and is also known as a rank correlation, as it is conducted on the ranks of the observations for data that are at least ordinal. Specifically, this correlation evaluates the differences in ranks of an object that is ranked for two different variables. So, the sample of objects is ranked twice (once for each of the variables for which the correlation is to be assessed), and the difference in the ranks is calculated for each object. The Spearman correlation from these data is given by
rs = 1 - (6*SUM d^2) / (n*(n^2 - 1))
where d^2 = (RX1 - RX2)^2 is the squared difference between an object's two ranks. If the rank order is identical for both variables, the correlation is a perfect +1.0; if the rank orders are exactly reversed, it is -1.0.
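Continuing the same kind of sketch, the Spearman coefficient can be computed by ranking each variable and applying the formula above. For simplicity this assumes no tied values (ties would require averaged ranks); the data are again hypothetical.

def ranks(values):
    # rank 1 = smallest value; assumes all values are distinct (no ties)
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0] * len(values)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

def spearman_rs(x1, x2):
    # rs = 1 - (6*SUM d^2) / (n*(n^2 - 1)), where d = R_X1 - R_X2
    n = len(x1)
    r1, r2 = ranks(x1), ranks(x2)
    sum_d2 = sum((a - b) ** 2 for a, b in zip(r1, r2))
    return 1 - (6 * sum_d2) / (n * (n ** 2 - 1))

body_mass = [14.2, 15.1, 13.8, 16.0, 15.5, 14.9]  # hypothetical data, no ties
egg_count = [3, 5, 2, 8, 4, 6]                    # hypothetical data, no ties

print(round(spearman_rs(body_mass, egg_count), 3))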
As with the Pearson correlation, we do not know from the value of r alone whether the observed correlation is significant. For either type of correlation, we can test the null hypothesis that the correlation is not significant by calculating
t = r * SQRT((n - 2) / (1 - r^2))
and comparing this to a critical t at the 0.05 level with n - 2 degrees of freedom (n - 2 since one degree of freedom is lost for each variable).
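A rough sketch of this test in the same style follows; the r and n values are hypothetical, and the critical value itself would still come from a t-table or a statistics package.

def correlation_t(r, n):
    # t = r * SQRT((n - 2) / (1 - r^2)), with n - 2 degrees of freedom
    return r * ((n - 2) / (1 - r ** 2)) ** 0.5

r = 0.87   # hypothetical correlation coefficient
n = 8      # hypothetical sample size
t = correlation_t(r, n)
print(round(t, 2), "with", n - 2, "degrees of freedom")
# Compare |t| with the tabled critical t at alpha = 0.05 and n - 2 df;
# if |t| exceeds the critical value, reject the null hypothesis of no correlation.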
| http://bioweb.wku.edu/faculty/McElroy/BIOL283/283lects26.htm | 13
16 | Sean D. Pitman M.D.
© December, 2006
Most scientists today believe that various places on this planet, such as Greenland and the Antarctic, among many other places, have some very old ice. The ice in these areas appears to be layered in a very distinctive annual pattern. In fact, this pattern is both visually and chemically recognizable and extends downward some 4,000 to 5,000 meters. What happens is that as the snow from a previous year is buried under a new layer of snow, it is compacted over time with the weight of each additional layer of snow above it. This compacted snow is called the “firn” layer. After several meters, this layered snowy firn turns into layers of solid ice (note that 30cm of compacted snow compresses further into about 10cm of ice). These layers are much thinner on the Antarctic ice cap as compared to the Greenland ice cap since Antarctica averages only 5cm of "water equivalent" per year while Greenland averages over 50cm of water equivalent.1,2 Since these layers get even thinner as they are buried under more and more snow and ice, due to compression and lateral flow, the thinner layers of the Antarctic ice cap become much harder to count than those of the Greenland ice cap at an equivalent depth. So, scientists feel that the most accurate historical information comes from Greenland, although much older ice comes from other drier places. Still, the ice cores drilled in the Greenland ice cap, such as the American Greenland Ice Sheet Project (GISP2) and the European Greenland Ice Core Project (GRIP), are felt to be very old indeed - upwards of 160,000 years old.
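To get a rough feel for the near-surface layer thicknesses these accumulation rates imply, here is a back-of-the-envelope sketch. The ~3:1 snow-to-ice compaction comes from the figure mentioned above; the ice density of about 0.9 g/cm3 is an assumed round number, so the results are only approximations.

ICE_DENSITY = 0.9   # g/cm^3, assumed approximate value
SNOW_TO_ICE = 3.0   # ~30cm of compacted snow -> ~10cm of ice (from the text above)

def annual_layer_cm(water_equivalent_cm):
    ice_cm = water_equivalent_cm / ICE_DENSITY  # 1cm of water ~ 1.1cm of ice
    snow_cm = ice_cm * SNOW_TO_ICE              # thickness before compaction into ice
    return ice_cm, snow_cm

for site, w_eq in [("central Greenland", 50), ("central Antarctica", 5)]:
    ice_cm, snow_cm = annual_layer_cm(w_eq)
    print(f"{site}: ~{ice_cm:.0f}cm of ice (~{snow_cm:.0f}cm of compacted snow) per year")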
The Visual Method
But how, exactly, are these layers counted? Obviously, at the surface the layers are easy to count visually – and in Greenland the layers are fairly easily distinguished at depths as great as 1,500 to 2,000m. Even here, though, there might be a few problems. How does one distinguish between a yearly layer and a sub-yearly layer of ice? For instance, it is not only possible but also likely for various large snowstorms and/or snowdrifts to lay down layers that can be mistaken for annual layers. Consider the following:
“Fundamentally, in counting any annual marker, we must ask whether it is absolutely unequivocal, or whether nonannual events could mimic or obscure a year. For the visible strata (and, we believe, for any other annual indicator at accumulation rates representative of central Greenland), it is almost certain that variability exists at the subseasonal or storm level, at the annual level, and for various longer periodicities (2-year, sunspot, etc.). We certainly must entertain the possibility of misidentifying the deposit of a large storm or a snow dune as an entire year or missing a weak indication of a summer and thus picking a 2-year interval as 1 year.” 7
Good examples of this phenomenon can be found in areas of very high precipitation, such as the more coastal regions of Greenland. It was in this area, 17 miles off the east coast of Greenland, that Bob Cardin and other members of his squadron had to ditch their six P-38’s and two B-17’s when they ran out of gas in 1942 - the height of WWII. Many years later, in 1981, several members of this original squadron decided to see if they could recover their aircraft. They flew back to the spot in Greenland where they thought they would find their planes buried under a few feet of snow. To their surprise, there was nothing there. Not even metal detectors found anything. After many years of searching, with better detection equipment, they finally found the airplanes in 1988 three miles from their original location and under approximately 260 feet of ice! They went on to actually recover one of them (“Glacier Girl” – a P-38), which was eventually restored to her former glory.20
What is most interesting about this story, at least for the purposes of this discussion, is the depth at which the planes were found (as well as the speed which the glacier moved). It took only 46 years to bury the planes in over 260 feet (~80 meters) of ice and move them some 3 miles from their original location. This translates into a little over 5 ½ feet (~1.7 meters) of ice or around 17 feet (~5 meters) of compact snow per year and about 100 meters of movement per year. In a telephone interview, Bob Cardin was asked how many layers of ice were above the recovered airplane. He responded by saying, “Oh, there were many hundreds of layers of ice above the airplane.” When told that each layer was supposed to represent one year of time, Bob said, “That is impossible! Each of those layers is a different warm spell – warm, cold, warm, cold, warm, cold.” 21 Also, the planes did not sink in the ice over time as some have suggested. Their density was less than the ice or snow since they were not filled with the snow, but remained hollow. They were in fact buried by the annual snowfall over the course of almost 50 years.
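Those figures can be checked with some quick arithmetic; the sketch below simply re-derives them (the 3:1 snow-to-ice ratio is the compaction figure quoted earlier, and everything is rounded):

FEET_PER_METER = 3.281
METERS_PER_MILE = 1609.3

ice_depth_m = 260 / FEET_PER_METER   # ~79 m of ice over the aircraft
years = 1988 - 1942                  # 46 years between the ditching and the find
drift_m = 3 * METERS_PER_MILE        # ~3 miles of glacier movement

print(round(ice_depth_m / years, 1), "m of ice per year")         # ~1.7 m/yr
print(round(3 * ice_depth_m / years, 1), "m of snow per year")    # ~5 m/yr at ~3:1 snow:ice
print(round(drift_m / years), "m of glacier movement per year")   # ~100 m/yr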
Now obviously, this example does not reflect the actual climate of central Greenland or of central Antarctica. As a coastal region, it is exposed to a great deal more storms and other sub-annual events that produce the roughly 17 feet of snow per year. However, even now, large snowstorms also drift over central Greenland. And, in the fairly recent warm Hipsithermal period (~4 degrees warmer than today) the precipitation over central Greenland, and even Antarctica, was most likely much greater than it is today. So, how do scientists distinguish between annual layers and sub-annual layers? Visual methods, by themselves, seem rather limited – especially as the ice layers get thinner and thinner as one progresses down the column of ice.
Oxygen and Other Isotopes
Well, there are many other methods that scientists use to help them identify annual layers. One such method is based on the oxygen isotope variation between 16O and 18O (and 17O) as they relate to changes in temperature. For instance, water (H2O), with the heavier 18O isotope, evaporates less rapidly and condenses more readily than water molecules that incorporate the lighter 16O isotope. Since the 18O requires more energy (warmer weather) to be evaporated and transported in the atmosphere, more 18O is deposited in the ice sheets in the summer than in the winter. Obviously then, the changing ratios of these oxygen isotopes would clearly distinguish the annual cycles of summer and winter as well as longer periods of warm and cold (such as the ice age) – right? Not quite. One major drawback with this method is that these oxygen isotopes do not stay put. They diffuse over time. This is especially true in the “firn layer” of compacted snow before it turns into ice. So, from the earliest formation of these ice layers, the ratios of oxygen isotopes as well as other isotopes are altered by gravitational diffusion and so cannot be used as reliable markers of annual layers as one moves down the ice core column. One piece of evidence given for the reality of this phenomenon is the significant oxygen isotope enrichment (versus present-day atmospheric oxygen ratios) found in 2,000 year-old ice from Camp Century, Greenland.3 Interestingly enough, this property of isotope diffusion has long been recognized as a problem. Consider the following comment made by Fred Hall back in 1989:
“The accumulating firn [ice-snow granules] acts like a giant columnar sieve through which the gravitational enrichment can be maintained by molecular diffusion. At a given borehole, the time between the fresh fall of new snow and its conversion to nascent ice is roughly the height of the firn layers in [meters] divided by the annual accumulation of new ice in meters per year. This results in conversion times of centuries for firn layers just inside the Arctic and Antarctic circles, and millennia for those well inside [the] same. Which is to say--during these long spans of time, a continuing gas-filtering process is going on, eliminating any possibility of using the presence of such gases to count annual layers over thousands of years.” 4
Lorius et al., in a 1985 Nature article, agreed, commenting that “Further detailed isotope studies showed that seasonal delta 18O variations are rapidly smoothed by diffusion indicating that reliable dating cannot be obtained from isotope stratigraphy”.29 Jaworowski (whose work is discussed further below in the "Contaminated and Biased Data" section) also notes the following:
The short-term peaks of d18O in the ice sheets have been ascribed to annual summer/winter layering of snow formed at higher and lower air temperatures. These peaks have been used for dating the glacier ice, assuming that the sample increments of ice cores represent the original mean isotopic composition of precipitation, and that the increments are in a steady-state closed system.
Experimental evidence, however, suggests that this assumption is not valid, because of dramatic metamorphosis of snow and ice in the ice sheets as a result of changing temperature and pressure. At very cold Antarctic sites, the temperature gradients were found to reach 500°C/m, because of subsurface absorption of Sun radiation. Radiational subsurface melting is common in Antarctica at locations with summer temperatures below -20°C, leading to formation of ponds of liquid water, at a depth of about 1 m below the surface. Other mechanisms are responsible for the existence of liquid water deep in the cold Antarctic ice, which leads to the presence of vast sub-sheet lakes of liquid water, covering an area of about 8,000 square kilometers in inland eastern Antarctica and near Vostok Station, at near basal temperatures of -4 to -26.2°C. The sub-surface recrystallization, sublimation, and formation of liquid water and vapor disturb the original isotopic composition of snow and ice. . .
Important isotopic changes were found experimentally in firn (partially compacted granular snow that forms the glacier surface) exposed to even 10 times lower thermal gradients. Such changes, which may occur several times a year, reflecting sunny and overcast periods, would lead to false age estimates of ice. It is not possible to synchronize the events in the Northern and Southern Hemispheres, such as, for example, CO2 concentrations in Antarctic and Greenland ice. This is, in part, the result of ascribing short-term stable isotope peaks of hydrogen and oxygen to annual summer/winter layering of ice, and using them for dating. . .
In the air from firn and ice at Summit, Greenland, deposited during the past ~200 years, the CO2 concentration ranged from 243.3 ppmv to 641.4 ppmv. Such a wide range reflects artifacts caused by sampling or natural processes in the ice sheet, rather than the variations of CO2 concentration in the atmosphere. Similar or greater range was observed in other studies of greenhouse gases in polar ice.50
(Back to Top)
Contaminated and Biased Data
According to Prof. Zbigniew Jaworowski, Chairman of the Scientific Council of the Central Laboratory for Radiological Protection in Warsaw, Poland, the ice core data is not only contaminated by procedural problems, it is also manipulated in order to fit popular theories of the day.
Jaworowski first argues that ice cores do not fulfill the essential criteria of a closed system. For example, there is liquid water in ice, which can dramatically change the chemical composition of the air bubbles trapped between ice crystals. "Even the coldest Antarctic ice (down to -73°C) contains liquid water. More than 20 physicochemical processes, mostly related to the presence of liquid water, contribute to the alteration of the original chemical composition of the air inclusions in polar ice. . . Even the composition of air from near-surface snow in Antarctica is different from that of the atmosphere; the surface snow air was found to be depleted in CO2 by 20 to 50 percent . . ."50
Beyond this, there is the problem of fractionation of gases, which, as a "result of various solubilities in water (CH4 is 2.8 times more soluble than N2 in water at 0°C; N2O, 55 times; and CO2, 73 times), starts from the formation of snowflakes, which are covered with a film of supercooled liquid."50
"[Another] one of these processes is formation of gas hydrates or clathrates. In the highly compressed deep ice all air bubbles disappear, as under the influence of pressure the gases change into the solid clathrates, which are tiny crystals formed by interaction of gas with water molecules. Drilling decompresses cores excavated from deep ice, and contaminates them with the drilling fluid filling the borehole. Decompression leads to dense horizontal cracking of cores [see illustration], by a well known sheeting process. After decompression of the ice cores, the solid clathrates decompose into a gas form, exploding in the process as if they were microscopic grenades. In the bubble-free ice the explosions form a new gas cavities and new cracks. Through these cracks, and cracks formed by sheeting, a part of gas escapes first into the drilling liquid which fills the borehole, and then at the surface to the atmospheric air. Particular gases, CO2, O2 and N2 trapped in the deep cold ice start to form clathrates, and leave the air bubbles, at different pressures and depth. At the ice temperature of –15°C dissociation pressure for N2 is about 100 bars, for O2 75 bars, and for CO2 5 bars. Formation of CO2 clathrates starts in the ice sheets at about 200 meter depth, and that of O2 and N2 at 600 to 1000 meters. This leads to depletion of CO2 in the gas trapped in the ice sheets. This is why the records of CO2 concentration in the gas inclusions from deep polar ice show the values lower than in the contemporary atmosphere, even for the epochs when the global surface temperature was higher than now."50
No study has yet demonstrated that the content of greenhouse trace gases in old ice, or even in the interstitial air from recent snow, represents the atmospheric composition.
The ice core data from various polar sites are not consistent with each other, and there is a discrepancy between these data and geological climatic evidence. One such example is the discrepancy between the classic Antarctic Byrd and the Vostok ice cores, where an important decrease in the CO2 content in the air bubbles occurred at the same depth of about 500 meters, but at which the age of the ice differs by about 16,000 years. In the approximately 14,000-year-old part of the Byrd core, a drop in the CO2 concentration of 50 ppmv was observed, but in similarly old ice from the Vostok core, an increase of 60 ppmv was found. In about 6,000-year-old ice from Camp Century, Greenland, the CO2 concentration in air bubbles was 420 ppmv, but was 270 ppmv in similarly old ice from Byrd Antarctica . . .
One can also note that the CO2 concentration in the air bubbles decreases with the depth of the ice for the entire period between the years 1891 and 1661, not because of any changes in the atmosphere, but along the increasing pressure gradient, which is probably the result of clathrate formation, and the fact that the solubility of CO2 increases with depth.
If this isn't already bad enough, Jaworowski proceeds to argue that the data, as contaminated as it is, has been manipulated to fit popular theories of the day.
Until 1985, the published CO2 readings from the air bubbles in the pre-industrial ice ranged from 160 to about 700 ppmv, and occasionally even up to 2,450 ppmv. After 1985, high readings disappeared from the publications!50
Another problem is the notion that lead levels in ice cores correlate with the increasing use of lead by successively more modern civilizations, such as the Greeks and Romans, and then during European and American industrialization. A potential problem with this notion is Jaworowski's claim to have "demonstrated that in pre-industrial period the total flux of lead into the global atmosphere was higher than in the 20th century, that the atmospheric content of lead is dominated by natural sources, and that the lead level in humans in Medieval Ages was 10 to 100 times higher than in the 20th century."50 Beyond this potential problem, there is also the problem of heavy metal contamination of the ice cores during the drilling process.
Numerous studies on radial distribution of metals in the cores reveal an excessive contamination of their internal parts by metals present in the drilling fluid. In these parts of cores from the deep Antarctic, ice concentrations of zinc and lead were higher by a factor of tens or hundreds of thousands, than in the contemporary snow at the surface of the ice sheet. This demonstrates that the ice cores are not a closed system; the heavy metals from the drilling fluid penetrate into the cores via micro- and macro-cracks during the drilling and the transportation of the cores to the surface.50
Professor Jaworowski summarizes with a most interesting statement:
It is astonishing how credulously the scientific community and the public have accepted the clearly flawed interpretations of glacier studies as evidence of anthropogenic increase of greenhouse gases in the atmosphere. Future historians can use this case as a warning about how politics can negatively influence science.50
While this statement is most certainly a scathing rebuke of the scientific community as it stands, I would argue that Jaworowski doesn't go far enough. He doesn't consider that the problems he so carefully points out as the basis for his own doubts about anthropogenic global warming may also pose significant problems for the validity of using ice cores to infer the passage of vast spans of time supposedly recorded in the layers of large ice sheets. (Back to Top)
So, it seems as though isotope ratios are severely limited, if not entirely worthless, as yearly markers for ice core dating beyond a very short period of time. However, there are several other dating methods, such as the correlation of impurities in the layers of ice with known historical events – volcanic eruptions in particular.
After a volcano erupts, the ash and other elements from the eruption fall out and are washed out of the atmosphere by precipitation. This fallout leaves “tephra” (microscopic shards of glass from the ash fallout – see picture), sulfuric acid, and other chemicals in the snow and subsequent ice from that year. Sometimes the tephra fallout can be specifically matched via physical and chemical analysis to a known historical eruption. This analysis begins when electrical conductivity measurements (ECM) are made along the entire length of the ice core. Increases in electrical conductivity indicate the presence of increased acid content. When a volcano erupts, it spews out a great deal of sulfur-rich gases. These are converted in the atmosphere to sulfuric acid aerosols, which end up in the layers of ice and increase the ECM readings. The higher the acidity, the better the conduction. Sections of ice from a region with an acidic spike are then melted and filtered through a capillary-pore membrane filter. An automated scanning electron microscope (SEM), equipped for x-ray microanalysis, is used to determine the size, shape and elemental composition of hundreds of particles on the filter. Cluster analysis, using a multivariate statistical routine that measures the elemental compositions of sodium, magnesium, aluminum, silicon, potassium, calcium, titanium and iron, is done to identify the volcanic “signature” of the tephra particles in the sample. Representative tephra particles are re-located for photomicrography and more detailed chemical analysis. Then tephra is collected from near the volcanic eruption that may have produced the fallout in the core and is ground into a fine powder, dispersed in liquid, and filtered through a capillary-pore membrane. Then automated SEM and chemical analysis is used on this known tephra sample to find its chemical signature and compare it with the unknown sample found in the ice core - to see if there is a match.22
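To make the logic of this "fingerprinting" step a bit more concrete, here is a minimal sketch of the kind of compositional comparison described above. The eight elements follow the text, but the volcano names, the numbers, and the simple distance criterion are purely illustrative – real studies use multivariate cluster analysis over many individual shards rather than a single averaged composition:

```python
import math

# Elements measured in the x-ray microanalysis described above.
ELEMENTS = ["Na", "Mg", "Al", "Si", "K", "Ca", "Ti", "Fe"]

# Hypothetical mean compositions (weight %), purely for illustration.
candidate_sources = {
    "Volcano A": [3.1, 0.4, 13.9, 73.0, 4.2, 1.3, 0.2, 1.5],
    "Volcano B": [4.0, 1.8, 15.5, 61.0, 2.1, 5.0, 0.9, 6.8],
}
ice_core_tephra = [3.0, 0.5, 14.1, 72.5, 4.0, 1.4, 0.2, 1.6]

def distance(a, b):
    """Euclidean distance between two composition vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# The candidate whose "signature" lies closest to the ice-core tephra
# is taken as the best compositional match.
best = min(candidate_sources,
           key=lambda name: distance(candidate_sources[name], ice_core_tephra))
print("Best compositional match:", best)
```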
Tephra from several well-known historical volcanoes has been analyzed in this way. For example, Crater Lake in Oregon was once a much larger mountain (Mt. Mazama) before it was destroyed in a cataclysmic eruption. In the mid-1960s scientists dated this massive explosion, with the use of radiocarbon dating methods, at between 6,500 and 7,000 years before present (BP). Then, in 1979, Scientific American published an article about a pair of sagebrush bark sandals that were found just under the Mazama tephra at Fort Rock Cave. These sandals were carbon-14 dated to around 9,000 years BP. Even though this date was several thousand years older than expected, the article went on to say that the bulk of the evidence still put the most likely eruption date of Mt. Mazama at around 7,000 years BP. 23,24 Later, a “direct count” of the layers in the ice core obtained from Camp Century, Greenland, put the date of the Mazama tephra at 6,400±110 years BP.23,25 Then, at the 16th INQUA conference held in June 2003 in Reno, Nevada (attended by over 1,000 scientists studying the Quaternary period), Kevin M. Scott noted in an abstract that the Mazama Park eruptive period had been “newly dated at 5,600-5,900 14C yrs BP.” Scott went on to note that this new date “includes collapses and eruptions previously dated throughout a range of 4,300 to 6,700 14C yrs BP.” 26 At this point it should also be noted that the carbon-14 dating method is being calibrated by the Greenland ice cores, so it is circular to argue that the Greenland ice core dates have been validated by carbon-14 analysis.26
Another famous eruption, that of the Mediterranean volcano Thera, was so large that it effectively destroyed the Minoan (Santorini) civilization. This is thought to have happened in the year 1628 B.C., since tree rings from that region show a significant disruption matching that date. Of course, such an anomaly was looked for in the ice cores. As predicted, layers in the "Dye 3" Greenland ice core showed a major eruption at 1645 B.C., plus or minus 20 years. This match was used to confirm or calibrate the ice core data as recently as 2003.
Interestingly enough though, the scientists did not have the budget at the time to do a systematic search throughout the whole ice core for other large anomalies that would also match a Thera-sized eruption. Now that such detailed searches have been done, many such sulfuric acid peaks have been found at numerous dates within the 18th, 17th, 16th, 15th, and 14th centuries B.C. 35 Beyond this, tephra analyzed from the "1620s" ice core layers did not match the volcanic material from the Thera volcano. The investigators concluded:
"Although we cannot completely rule out the possibility that two nearly coincident eruptions, including the Santorini eruption, are responsible for the 1623 BC signal in the GISP ice core, these results very much suggest that the Santorini eruption is not responsible for this signal. We believe that another eruption led not only to the 1623 BC ice core signal but also, by correlation, to the tree-ring signals at 1628/1627 BC." 36
Then, as recently as March of 2004, Pearce et al. published a paper declaring that another volcano, the Aniakchak Volcano in Alaska, was the true source of the tephra found in the GRIP ice core at the "1645 ± 4 BC layer." These researchers went on to say that, "The age of the Minoan eruption of Santorini, however, remains unresolved." 37
So, here we have a clearly erroneous match between a volcanic eruption and both tree rings and ice core signals. What is most curious, however, is that many scientists still declare that ice cores are solidly confirmed by such means. Beyond this, as flexible as the dating here seems to be, the Mt. Mazama and Thera eruptions are still about the oldest eruptions that can be identified in the Greenland ice cores. There are two reasons for this. One reason is that below 10,000 layers or so in the ice core the ice becomes too alkaline to reliably identify the acid spikes associated with volcanic eruptions.5 Another reason is that the great majority of volcanic eruptions throughout history were not able to get very much tephra into the Greenland ice sheet. So, the great majority of volcanic signals are detected via their acid signal alone.
This presents a problem. A review of four eruption chronologies constructed since 1970 illustrates this problem quite nicely. In 1970, Lamb published an eruption chronology for the years 1500 to 1969. The work recorded 380 known historical eruptions. Ten years later, Hirschboek published a revised eruption chronology that recorded 4,796 eruptions for the same period – a very significant increase from Lamb’s figure. One year later, in 1981, Simkin et al. raised the figure to 7,664 eruptions, and Newhall et al. increased the number further a year later to 7,713. It is also interesting to note that Simkin et al. recorded 3,018 eruptions between 1900 and 1969, but only 11 eruptions were recorded between 1 and 100 AD. So obviously, as one goes back through recent history, the number of known volcanic eruptions drops off dramatically, though they were most certainly still occurring – just without documentation. Based on current rates of volcanic activity, an expected eruption rate for the past several thousand years comes to around 30,000 eruptions per 1,000 years.25
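The ~30,000-per-millennium figure comes from the cited source,25 but a back-of-the-envelope extrapolation from the twentieth-century counts quoted above lands in the same ballpark; a minimal sketch:

```python
# Order-of-magnitude check using the Simkin et al. count quoted above.
eruptions_1900_1969 = 3018
years_observed = 70

per_year = eruptions_1900_1969 / years_observed  # ~43 eruptions per year
per_millennium = per_year * 1000                 # ~43,000 per 1,000 years

print(f"~{per_year:.0f} eruptions per year, ~{per_millennium:,.0f} per 1,000 years")
```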
With such a high rate of volcanic activity, including many rather large volcanoes, how are scientists so certain that a given acid spike on ECM is so clearly representative of any particular volcano – especially when the volcanic eruption in question happened more than one or two thousand years ago? The odds that at least one volcanic signal will be found in an ice core within a very small “range of error” around any supposed historical eruption are extremely good - even for large volcanoes. Really, is this all too far from a self-fulfilling prophecy? How then can the claim be made that historical eruptions validate the dating of ice cores to any significant degree?
“The desire to link such phenomena [volcanic eruptions] and the stretching of the dating frameworks involved is an attractive but questionable practice. All such attempts to link (and hence infer associations between) historic eruptions and environmental phenomena and human "impacts", rely on the accurate and precise association in time of the two events. . . A more general investigation of eruption chronologies constructed since 1970 suggest that such associations are frequently unreliable when based on eruption data gathered earlier than the twentieth century.” 25
(Back to Top)
So, if volcanic markers are generally unreliable and completely useless beyond a few thousand years, how are scientists so sure that their ice core dating methods are meaningful? Well, one of the most popular methods used to distinguish annual layers is one that measures the fluctuations in ice core dust. Dust is alkaline and shows up as a low ECM reading. During the dry northern summer, dust particles from Arctic Canada and the coastal regions of Greenland are carried by wind currents and are deposited on the Greenland ice sheet. During the winter, this area is not so dusty, so less dust is deposited during the winter as compared to the summer. This annual fluctuation of dust is thought to be the most reliable of all the methods for the marking of the annual cycle - especially as the layers start to get thinner and thinner as one moves down the column of ice.27 And, it certainly would be one of the most reliable methods if it were not for one little problem known as “post-depositional particle migration”.
Zdanowicz et al., from the University of New Hampshire, did real time studies of modern atmospheric dust deposition in the 1990’s on the Penny Ice Cap, Baffin Island, Arctic Canada. Their findings are most interesting indeed:
“After the snow deposition on polar ice sheets, not all the chemical species preserve the original concentration values in the ice. In order to obtain reliable past-environmental information by firn and ice cores, it is important to understand how post-depositional effects can alter the chemical composition of the ice. These effects can happen both in the most superficial layers and in the deep ice. In the snow surface, post-depositional effects are mainly due to re-emission in the atmosphere and we show here that chloride, nitrate, methane-sulphonic acid (MSA) and H2O2 [hydrogen peroxide] are greatly affected by this process; moreover, we show how the mean annual snow accumulation rate influences the re-emission extent. In the deep ice, post-depositional effects are mainly due to movement of acidic species and it is interesting to note the behavior of some substances (e.g. chloride and nitrate) in acidic (high concentrations of volcanic acid gases) and alkaline (high dust content) ice layers . . . We failed to identify any consistent relationship between dust concentration or size distribution, and ionic chemistry or snowpack stratigraphy.” 28
This study goes on to reveal that each yearly cycle is marked not by one distinct annual dust concentration as is normally assumed when counting ice core layers, but by two distinct dust concentration peaks – one in late winter-spring and another one in the late summer-fall. So, each year is initially marked by “two seasonal maxima of dust deposition.” By itself, this finding cuts in half those ice core dates that assume that each year is marked by only one distinct deposition of dust. This would still be a salvageable problem if the dust actually stayed put once it was deposited in the snow. But, it does not stay put – it moves!
“While some dust peaks are found to be associated with ice layers or Na [sodium] enhancements, others are not. Similarly, variations of the NMD [number mean diameter – a parameter for quantifying relative changes in particle size] and beta cannot be systematically correlated to stratigraphic features of the snowpack. This lack of consistency indicates that microparticles are remobilized by meltwater in such a way that seasonal (and stratigraphic) differences are obscured.” 28
This remobilization of the microparticles of dust in the snow was found to affect both fine and coarse particles in an uneven way. The resulting “dust profiles” displayed “considerable structure and variability with multiple well-defined peaks” for any given yearly deposit of snow. The authors hypothesized that this variability was most likely caused by a combination of factors, including “variations of snow accumulation or summer melt and numerous ice layers acting as physical obstacles against particle migration in the snow.” The authors suggest that this migration of dust and other elements limits the resolution of these methods to “multiannual to decadal averages”.28
Another interesting thing about the dust found in the layers of ice is that those layers representing the last “ice age” contain a whole lot of dust – up to 100 times more dust than is deposited on average today.19 The question is, how does one explain a hundred times as much Ice Age dust in the Greenland icecap with gradualistic, wet conditions? There simply are no unique dust sources on Earth to account for 100 times more dust during the 100,000 years of the Ice Age, particularly when this Ice Age was thought to be associated with a large amount of precipitation/rain – which would only cleanse the atmosphere more effectively. How can high levels of precipitation be associated with an extremely dusty atmosphere for such a long period of time? Isn’t this a contradiction from a uniformitarian perspective? Perhaps a more recent catastrophic model has greater explanatory value?
Other dating methods, such as those based on 14C, 36Cl, and other radiometric markers, are subject to this same problem of post-depositional diffusion as well as contamination – especially when the summer melt sends water percolating through the tens and hundreds of layers found in the snowy firn before the snow turns to ice. Then, even after the snow turns to ice, diffusion is still a big problem for these molecules. They simply do not stay put.
More recent publications by Rempel et al., in Nature (May, 2001),32 also quoted by J.W. Wettlaufer (University of Washington) in a paper entitled, "Premelting and anomalous diffusion in ancient ice",31 suggest that chemicals that have been trapped in ancient glacial or polar ice can move substantial distances within the ice (up to 50cm even in deeper ice where layers get as thin as 3 or 4 millimeters). Such mobility is felt by these scientists to be "large enough to offset the resolution at which the core was examined and alter the interpretation of the ice-core record." What happens is that, "Substances that are climate signatures - from sea salt to sulfuric acid - travel through the frozen mass along microscopic channels of liquid water between individual ice crystals, away from the ice on which they were deposited. The movement becomes more pronounced over time as the flow of ice carries the substances deeper within the ice sheet, where it is warmer and there is more liquid water between ice crystals. . . The Vostok core from Antarctica, which goes back 450,000 years, contains even greater displacement [as compared to the Greenland ice cores] because of the greater depth." That means that past analyses of historic climate changes gleaned from ice core samples might not be all that accurate. Wettlaufer specifically notes that, "The point of the paper is to suggest that the ice core community go back and redo the chemistry."31,32 Of course these scientists do not think that such problems are significant enough to destroy the usefulness of ice cores as a fairly reliable means of determining historical climate changes. But, it does make one start to wonder how much confidence one can actually have in the popular interpretations of what ancient ice really means. (Back to Top)
Adding to the problems inherent in ice core dating is the significant amount of evidence that the world was a much warmer place just a few thousand years ago. These higher temperatures of the Middle Holocene, or Hypsithermal, period are said to have begun about 9,000 years ago and to have started fading about 4,000 years ago.8,53
So, how “warm” was this warm period? Various studies suggest sustained temperatures in northerly regions, such as the Canadian Northwest Territories, of 3-4°C warmer than today. Studies on sedimentary cores carried out in the North Atlantic between Hudson Strait and Cape Hatteras indicate ocean temperatures of 18°C (versus about 8°C today in this region).54 However, not all regions experienced the same increase in temperature, and the overall average global temperature is thought to have been about 2°C warmer than it is today.55
It also seems that in the fairly recent past the vegetation zones were much closer to the poles than they are today. The remains of some plant species can be found as far as 1,000 km farther north than they are found today. Forests once extended right up to the Barents Coast and the White Sea. The European tundra zones were non-existent. In northern Asia, peat-moss was discovered on Novaya Zemlya. And, this was no short-term aberration in the weather. This warming trend seems to have lasted for quite a while.56 Consider also the very interesting suggestion of Prof. Borisov, a long-time meteorology and climatology professor at Leningrad State University:
“During the last 18,000 years, the warming was particularly appreciable during the Middle Holocene. This covered the time period of 9,000 to 2,500 years ago and culminated about 6,000 to 4,000 years ago, i.e., when the first pyramids were already being built in Egypt . . . The most perturbing questions of the stage under consideration are: Was the Arctic Basin iceless during the culmination of the optimum?”8
Professor Borisov asks a very interesting question. What would happen to the ice sheets during several thousand years of "Hypsithermal" warming if it really was some 2°C warmer than it is today? If the Arctic region around the entire globe, including the Arctic Ocean, was ice free for just a few thousand years, even episodically during the summer months, what would have happened to the ice sheet on Greenland?
Consider what would happen if the entire Arctic Ocean went without ice during the summer months owing to a warmer and therefore longer spring, summer, and fall. Certainly there would be more snowfall, but this would not be enough to prevent the warm rainfall from removing the snow cover and the ice itself from Greenland’s ice sheet. A marine climate would create a more temperate environment because water vapor over the Arctic region would act as a greenhouse gas, holding the day’s heat within the atmosphere.
Borisov goes on to point out that a 1°C increase in average global temperature results in a more dramatic increase in temperature at the poles and extreme latitudes than it does at the equator and more tropical zones. For example, between the years 1890 and 1940, there was a 1°C increase in the average global temperature. During this same time the mean annual temperature in the Arctic basin rose 7°C. This change was reflected more in warmer winters than in warmer summers. For instance, the December temperature rose almost 17°C while the summer temperature changed hardly at all. Likewise, the average winter temperature for Spitsbergen and Greenland rose between 6 and 13°C during this time.8 Along these same lines, an interesting article published in the journal Nature 30 years ago by R. L. Newson showed that, without the Arctic ice cap, winter temperatures over the Arctic Ocean would rise by 20-40ºC, and by 10-20ºC over northern Siberia and Alaska - all other factors being equal.11 M. Warshaw and R. Rapp published similar results in the Journal of Applied Meteorology - using a different circulation model.12
Of course, the real question here is, would a 2°C increase in average global temperature, over today's "global warming" temperatures, melt the ice sheets of Greenland or even Antarctica?
Borisov argued that this idea is not all that far-fetched. He notes that measurements carried out on Greenland’s northeastern glaciers as far back as the early 1950’s showed that they were losing ice far faster than it was being formed.8 The northeastern glaciers were in fact in “ablation” as a result of just a 1°C rise in average global temperature. What would be expected from another 2°C rise sustained over the course of several thousand years?
Since that time, research done by Carl Boggild of the Geological Survey of Denmark and Greenland (GEUS), involving data from a network of 10 automatic monitoring stations, has shown that large portions of the Greenland ice sheet are melting up to 10 times faster than earlier research had indicated.
In 2000, research indicated that the Greenland ice was melting at a conservative estimate of just over 50 cubic kilometers of ice per year. However, a study done by a team from the University of Texas over 18 months in 2005 and 2006, using gravity data collected by satellites, suggests that the "ice cap may be melting three times faster than indicated by previous measurements" from 1997 to 2003. Currently, the ice is melting at 239 cubic kilometers per year (measured from April 2002 to November 2005).52
Greenland covers 2,175,590 square kilometers, with about 85% of that area covered by ice averaging roughly 2 km thick – on the order of 3.7 million cubic kilometers of ice. At the current melt rate, it would take roughly 15,000 years to melt all the ice on Greenland. Of course, 15,000 years seems well outside the range of the Hypsithermal period. However, even at current temperatures, the melt rate of the Earth's glaciers, including those of Greenland, is accelerating dramatically - and we still have another 2°C to go. Towns in Greenland are already beginning to sink because of the melting permafrost. Even potatoes are starting to grow in Greenland. This has never happened before in the memory of those who have lived there all their lives.
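For what it's worth, here is the back-of-the-envelope version of that calculation, using the round figures quoted above (the ~2 km mean thickness and 85% coverage are coarse approximations, and the melt rate is assumed to stay constant - which, as the following paragraphs argue, it clearly does not):

```python
# Crude estimate of how long Greenland's ice would last at the current melt rate.
area_km2 = 2_175_590          # total area of Greenland
ice_fraction = 0.85           # fraction covered by ice (round figure from the text)
mean_thickness_km = 2.0       # assumed mean ice thickness
melt_rate_km3_per_year = 239  # 2002-2005 satellite estimate quoted above

ice_volume_km3 = area_km2 * ice_fraction * mean_thickness_km   # ~3.7 million km^3
years_to_melt = ice_volume_km3 / melt_rate_km3_per_year        # ~15,000 years

print(f"~{ice_volume_km3 / 1e6:.1f} million cubic kilometers of ice")
print(f"~{years_to_melt:,.0f} years at a constant melt rate")
```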
In April of 2000, Lars Smedsrud and Tore Furevik wrote in an article in the Cicerone magazine, published by the Norwegian Climate Research Centre (CICERO), that, "If the melting of the ice, both in thickness and surface area, does not slow, then it is an established fact that the arctic ice will disappear during this century." This is based on the fact that the Arctic ice thinned by some 40% between the years 1980 and 2000. In the summer of 2006, explorers Lonnie Dupre and Eric Larsen made a very dangerous and most interesting trek to the North Pole. As they approached the Pole they found open water, a lot of icy slush, and ice so thin it wouldn't support their weight.
"We expected to see the ice get better, get flatter, as we got closer to the pole. But the ice was busted up," Dupre said. "As we got closer to the pole, we had to paddle our canoes more and more."51
Walt Meier, a researcher at the U.S. National Snow and Ice Data Center in Boulder, Colorado, commented on these interesting findings, noting that the melting of the Arctic ice cap in summer is progressing more rapidly than satellite images alone have shown. Given recent data such as this, climate researchers at the U.S. Naval Postgraduate School in California predict the complete absence of summer ice on the Arctic Ocean by 2030 or sooner.51 That prediction is dramatically different from what scientists were predicting just a few years ago - that the ice would still be there by the end of the century. Consider how a complete loss of Arctic ice, with an average temperature increase over the Arctic Ocean of upwards of 20-40ºC, would affect the temperature of surrounding regions - like Greenland. Could Greenland long retain its ice without the Arctic polar ice?
If this is not convincing enough, consider that since the year 2000, glaciers around the world have continued melting at greater and greater rates - exponentially greater rates. Alaska's glaciers are receding at twice the rate previously thought, according to a study published in the July 19, 2002 issue of Science. Around the globe, sea level is about 6 inches higher than it was just 100 years ago, and the rate of rise is increasing quite dramatically. Leading glaciologist Dr. Mark Meier remarked in February of 2002 that the accepted estimates of sea level rise were too low, due to the rapid retreat of mountain glaciers.44
At the American Association for the Advancement of Science (AAAS) meeting in San Francisco on February 25, 2001, Professor Lonnie Thompson, from Ohio State University's Department of Geological Sciences, presented a paper entitled, "Disappearing Glaciers - Evidence of a Rapidly Changing Earth." Dr. Thompson has completed 37 expeditions since 1978 to collect and study perhaps the world's largest archive of glacial ice cored from the Himalayas, Mount Kilimanjaro in Africa, the Andes in South America, the Antarctic and Greenland.
Prof. Thompson reported to the AAAS that at least one-third of the massive ice field on top of Tanzania's Mount Kilimanjaro had melted in just the preceding twelve years. Further, since the first mapping of the mountain's ice in 1912, the ice field has shrunk by 82%. By 2015, he predicted, there would be no more "snows of Kilimanjaro." In Peru, the Quelccaya ice cap in the Southern Andes Mountains is at least 20% smaller than it was in 1963. One of the main glaciers there, Qori Kalis, has been melting at the astonishing rate of 1.3 feet per day. Back in 1963, the glacier covered 56 square kilometers. By 2000, it was down to less than 44 square kilometers, and now there is a new ten-acre lake. Its melt rate has been increasing exponentially and, at its current rate, it will be entirely gone between 2010 and 2015, the same time that Kilimanjaro dries.
The exponential nature of this worldwide melt is dramatically illustrated by aerial photographs taken of various glaciers. A series of photographs of the Qori Kalis glacier in Peru is available going back to 1963. Between 1963 and 1978 the rate of melt was 4.9 meters per year. Between 1978 and 1983 it was 8 meters per year. This increased to 14 meters per year by 1993, to 30 meters per year by 1995, to 49 meters per year by 1998, and to a shocking 155 meters per year by 2000. By 2001 it was up to about 200 meters per year. That's almost 2 feet per day. Dr. Thompson exclaimed, "You can literally sit there and watch it retreat."
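To see why this retreat is described as exponential rather than merely fast, one can simply compare each reported rate with the one before it; a minimal sketch using exactly the numbers quoted above:

```python
# Reported retreat rates of the Qori Kalis glacier (end year, meters per year).
rates = [(1978, 4.9), (1983, 8), (1993, 14), (1995, 30),
         (1998, 49), (2000, 155), (2001, 200)]

# For a steady, linear retreat these ratios would hover near 1;
# instead the rate keeps multiplying over ever-shorter intervals.
for (y0, r0), (y1, r1) in zip(rates, rates[1:]):
    print(f"{y0}-{y1}: rate grew {r1 / r0:.1f}x in {y1 - y0} years")
```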
Then, in 2001, NASA scientists published a major study, based on satellite and aircraft observations, showing that large portions of the Greenland ice sheet, especially around its margins, were thinning at a rate of roughly 1 meter per year. Other scientists, such as Carl Boggild and his team, have recorded thinning Greenland glaciers at rates as fast as 10 or even 12 meters per year. It is quite a shock to scientists to realize that the data from satellite images show that various Greenland glaciers are thinning and retreating in an exponential manner - by an "astounding" 150 meters in thickness in just the last 15 years.43
In both 2002 and 2003, the Northern Hemisphere registered record low ocean ice cover. NASA's satellite data show the Arctic region warmed more during the 1990s than during the 1980s, with Arctic Sea ice now melting by up to 15 percent per decade. Satellite images show the ice cap covering the Northern pole has been shrinking by 10 percent per decade over the past 25 years.45
On the opposite end of the globe, sea ice floating near Antarctica has shrunk by some 20 percent since 1950. One of the world's largest icebergs, named B-15, which measured nearly 10,000 square kilometers (4,000 square miles), or half the size of New Jersey, calved off the Ross Ice Shelf in March 2000. The Larsen Ice Shelf has largely disintegrated within the last decade, shrinking to 40 percent of its previously stable size.45 Then, in 2002, the Larsen B ice shelf collapsed. Almost immediately after, researchers observed that nearby glaciers started flowing a whole lot faster - up to 8 times faster! This marked increase in glacial flow also resulted in dramatic drops in glacial elevations, lowering them by as much as 38 meters (124 feet) in just 6 months.48
Scientists monitoring a glacier in Greenland, the Kangerdlugssuaq glacier, have found that it is moving into the sea 3 times faster than just 10 years ago. Measurements taken in 1988 and in 1996 show the glacier was moving at a rate of between 3.1 and 3.7 miles per year. The latest measurements, taken the summer of 2005, showed that it is now moving at 8.7 miles a year. Satellite measurements of the Kangerdlugssuaq glacier show that, as well as moving more rapidly, the glacier's boundary is shrinking dramatically. Kangerdlugssuaq is about 1,000 meters (3,280ft) thick, about 4.5 miles wide, extends for more than 20 miles into the ice sheet and drains about 4 per cent of the ice from the Greenland ice sheet. The realization of the rapid melting of such a massive glacier, which was fairly stable until quite recently, came as quite a shock to the scientific community. Professor Hamilton expressed this general surprise in the following comment:
"This is a dramatic discovery. There is concern that the acceleration of this and similar glaciers and the associated discharge of ice is not described in current ice-sheet models of the effects of climate change. These new results suggest the loss of ice from the Greenland ice sheet, unless balanced by an equivalent increase in snowfall, could be larger and faster than previously estimated. As the warming trend migrates north, glaciers at higher latitudes in Greenland might also respond in the same way as Kangerdlugssuaq glacier. In turn, that could have serious implications for the rate of sea-level rise."46
The exponential increase in glacial speed is now thought to be due to increased surface melting. The liquid water formed on the surface during summer melts collects into large lakes. The water pressure generated by these surface lakes forces water down through the icy layers all the way to the underlying bedrock. It then spreads out, lifting up the glacier off the bedrock on a lubricating film of liquid water. Obviously, with such lubrication, the glacier can then flow at a much faster rate - exponentially faster. This increase in speed also makes for a thinner glacier since the glacier becomes more stretched out.46
For example, the giant Jakobshavn glacier - at four miles wide and 1,000 feet thick the biggest on the landmass of Greenland - is now moving towards the sea at a rate of 113 feet a day; the "normal" speed of a glacier is closer to one foot a day. Until now, scientists believed the ice sheet would take 1,000 years to melt entirely, but Ian Howat, who is working with Professor Tulaczyk, says the new developments could "easily" cut this time "in half". 49 Again, even the longer estimate falls well within the span of the Hypsithermal - which would have been long enough to melt such ice many times over.
It seems that no one predicted this. No one thought it possible, and scientists are quite shocked by these facts. The amazingly fast rate of glacial retreat simply goes against all the prevailing models of glacial development and change - changes which generally involve many thousands of years. Who would have thought that such changes could happen in mere decades?
Beyond this, there are many other evidences of a much warmer climate in Greenland and the Arctic basin in the fairly recent past. For example, in deposits laid down when Greenland’s seas were 10 meters higher than they are today (during the last Hypsithermal), the remains of warm-water mollusks can be found belonging to species that today live over 500 to 750 miles farther south. Also, the remains of land vertebrates, such as various reptiles, are found in Denmark and Scandinavia, when they live only in Mediterranean areas today.13
“Additional evidence is given by...peats and relics in Greenland--the northern limits may have been displaced northward through several degrees of latitude...and [by] other plants in Novaya Zemlya, and by peat and ripe fruit stones [fruit pits]...in Spitsbergen that no longer ripen in these northern lands. Various plants were more generally distributed in Ellesmere [Island and] birch grew more widely in Iceland....” 13
The point is that these types of plants and these types of large trees should never be able to grow on islands north of the Arctic Circle. Back in 1962 Ivan T. Sanderson noted that, “Pieces of large tree trunks of the types [found] . . . do not and cannot live at those latitudes today for purely biological reasons. The same goes for huge areas of Siberia.”14 Also, as previously noted, fruit does not ripen during short autumns at these high latitudes. Therefore, the spring and summer seasons had to be much longer for any seeds from these temperate trees to germinate and grow. Likewise, the peats that have been found on Greenland require temperate, humid climates to form. Peat formation requires climates that allow for the partial decomposition of vegetable remains under conditions of deficient drainage.13 Also, peat formations require at least 40 inches of rainfall a year and a mean temperature above 32°F.15 In addition, there were temperate forests on the Seward Peninsula, in Alaska, and the Tuktoyaktuk Peninsula, in Canada’s frigid Inuvik Region, facing the Beaufort Sea and the Arctic Ocean, and at Dubawnt Lake, in Canada’s frozen Keewatin Region, west of the Hudson Bay.16 And yet, somehow, it is believed that Greenland’s icecap survived several thousand years in such a recently temperate climate, but how?
What we have are temperate forests and warm waters near and within the Arctic Circle and Ocean all across the northern boundary from Siberia to Norway and from Alaska to the Hudson Bay. These temperate conditions existed for thousands of years both east and west of Greenland and at all the Greenland latitudes around the world - and these conditions had not yet ended by the time the Egyptians were building their pyramids! This, of course, would explain why mammoths and other large animals were able to live, during this period, throughout these northerly regions. (Back to Top)
Mammoths are especially interesting since millions of them recently lived (within the last 10-20 thousand years according to mainstream science) well within the Arctic Circle. Although popularly portrayed as living in cold barren environments and occasionally dying in local events, such as mudslides or entrapment in soft riverbanks, the evidence may actually paint a very different picture if studied from a different perspective.
The well-preserved "mummified" remains of many mammoths have been found, along with those of many other types of warmer-weather animals such as the horse, lion, tiger, leopard, bear, antelope, camel, reindeer, giant beaver, musk sheep, musk ox, donkey, ibex, badger, fox, wolverine, voles, squirrels, bison, rabbit and lynx, as well as a host of temperate plants, all jumbled together within the Arctic Circle - along the same latitudes as Greenland all around the globe.39
The problem with the popular belief that millions of mammoths lived in very northerly regions around the entire globe, with estimates of up to 5 million living along a 600-mile stretch of Siberian coastline alone,39 is that these mammoths were still living in these regions within the past 10,000 to 20,000 years. Carbon-14 dating of Siberian mammoths has returned dates as recent as 9,670 ± 40 years before present (BP).41 An even more recent study (1995) carried out on mammoth remains located on Wrangel Island (on the border of the East-Siberian and Chukchi Seas) showed that woolly mammoths persisted on Wrangel Island into the mid-Holocene, from 7,390 to 3,730 years ago (i.e., till about 2,000 B.C.).57
So, why is this a problem?
Contrary to popular imagination, these creatures were not surrounded by the extremely cold, harsh environments that exist in these northerly regions today. Rather, they lived in rather lush steppe-type conditions, with evidence of large fruit-bearing trees, abundant grasslands, and the very large numbers and types of grazing animals already mentioned – only to be quickly and collectively annihilated over huge areas by rapid weather changes. Clearly, the present is far different from what even the relatively recent past must have been. Sound too far-fetched?
Consider that the last meal of the famous Berezovka mammoth (see picture above), found north of the Arctic Circle, consisted of "twenty-four pounds of undigested vegetation" 39 including over 40 types of plants, many no longer found in such northerly regions.43 The enormous quantity of food it takes to feed an elephant of this size (~300 kg per day) is, by itself, very good evidence for a much different climate in these regions than exists today.39 Consider the following comment by Zazula et al., published in the June 2003 issue of Nature:
"This vegetation [Beringia: Includes an area between Siberia and Alaska as well as the Yukon Territory of Canada] was unlike that found in modern Arctic tundra, which can sustain relatively few mammals, but was instead a productive ecosystem of dry grassland that resembled extant subarctic steppe communities . . .
Abundant sage (Artemisia frigida) leaves, flowers from Artemisia sp., and seeds of bluegrass (Poa), wild-rye grass (Elymus), sedge (Carex) and rushes (Juncus/Luzula) . . . Seeds of cinquefoil (Potentilla), goosefoot (Chenopodium), buttercup (Ranunculus), mustard (Draba), poppy (Papaver), fairy-candelabra (Androsace septentrionalis), chickweed (Cerastium) and campion (Silene) are indicative of diverse forbs growing on dry, open, disturbed ground, possibly among predominantly arid steppe vegetation. Such an assemblage has no modern analogue in Arctic tundra. Local habitat diversity is indicated by sedge and moss peat from deposits that were formed in low-lying wet areas . . .
[This region] must have been covered with vegetation even during the coldest part of the most recent ice age (some 24,000 years ago) because it supported large populations of woolly mammoth, horses, bison and other mammals during a time of extensive Northern Hemisphere glaciation." 42
Now, does it really make sense for this region to be so warm, all year round, while the same latitudes on other parts of the globe were covered with extensive glaciers? Siberia, Alaska, Northern Europe, and parts of northwestern Canada were all toasty warm while much of the remaining North American continent and Greenland were covered with huge glaciers? Really?
Beyond this, consider that the mammoths didn't have hair erector muscles that enable an animal's fur to be "fluffed up", creating insulating air pockets. They also lacked oil glands to protect against wetness and increased heat loss in extremely cold and damp environments. Animals currently living in Arctic regions have both oil glands and erector muscles. Of course, the mammoth did have a certain number of cold-weather adaptations compared to its living cousins, the elephants – such as smaller ears, trunk and tail, fine woolly under-fur and long outer "protective" hair, and a thick layer of insulating fat39 – but these would by no means be enough to survive the extremes of cold, ice and snow found in these same regions today - not to mention the lack of an adequate food supply. It seems very much as Sir Henry Howorth concluded back in the late 19th century:
"The instances of the soft parts of the great pachyderms being preserved are not mere local and sporadic ones, but they form a long chain of examples along the whole length of Siberia, from the Urals to the land of the Chukchis [the Bering Strait], so that we have to do here with a condition of things which prevails, and with meteorological conditions that extend over a continent.
When we find such a series ranging so widely preserved in the same perfect way, and all evidencing a sudden change of climate from a comparatively temperate one to one of great rigour, we cannot help concluding that they all bear witness to a common event. We cannot postulate a separate climate cataclysm for each individual case and each individual locality, but we are forced to the conclusion that the now permanently frozen zone in Asia became frozen at the same time from the same cause."40
Actually, northern portions of Asia, Europe, and North America contain the remains of extinct species of the elephant [mammoth] and rhinoceros, together with those of horses, oxen, deer, and other large quadrupeds.39 Even though the evidence speaks against the "instant" catastrophic freeze that some have suggested,39 the weather change was still a real and relatively sudden change to a much colder and much harsher environment compared to the relatively temperate and abundant conditions that once existed in these northerly regions around much of the globe. Is it not then at least reasonable to hypothesize that Greenland also had such a temperate climate in the recent past, losing its icecap completely and growing lush vegetation? If not, how was the Greenland ice sheet able to be so resistant to the temperate climate surrounding it on all sides for hundreds, much less thousands, of years? (Back to Top)
A Recently Green Greenland?
Interestingly enough, crushed plant parts have been found in the ice sheets of northeastern Greenland – from a dike ridge of a glacier. This silty plant material was said to give off a powerful odor, like that of decaying organic matter.17 This material was examined for fossils by Esa Hyyppa of the Geological Survey of Finland, who noted the following:
“The silt examined contained two whole leaves, several leaf fragments and two fruits of Dryas octopetala; [also] a small, partly decayed leaf of a shrub species not definitely determinable . . . and an abundance of much decayed, small fragments of plant tissues, mostly leaf veins and root hairs . . . " 17
It is most interesting that scientists think that this plant material must have originated from some superficial deposit in a distant valley floor of Greenland and that this material was squeezed up from the base of the ice. Some scientists have even suggested that, “The modern aspect of the flora precludes a preglacial time of origin for it.” 17 Note also that the northeastern corner of Greenland is actually its coldest region. It has a “continental climate that is remote from the influence of the sea.” 18 The ocean dramatically affects climate. That is why regions like the north central portions of the United States have such long, cold winters when compared to equal latitudes along the eastern seaboard. Northeastern Greenland, therefore, would have the coldest climate of the entire island.
Also, consider that in July of 2004, plant material consisting of probable grass or pine needles and bark was discovered at the bottom of the Greenland ice sheet under about 10,400 feet of ice. Although the material is thought to be several million years old, Dorthe Dahl-Jensen, a professor at the University of Copenhagen's Niels Bohr Institute and NGRIP project leader, noted that finding such plant material under about 10,400 feet of ice indicates the Greenland Ice Sheet "formed very fast."38
Beyond the obvious fact that such organic material suggests an extremely rapid climatic change and burial by ice, the question is: why hasn't such organic material been stripped completely off Greenland by now by the flowing ice sheets? For instance, we know how fast these ice sheets move - up to 100 meters per year in central regions and up to 10 miles per year for several of Greenland's major glaciers. Given several hundred thousand to over a few million years of such scrubbing by moving ice sheets, how could significant amounts of such organic material remain on the land surface beneath the ice?
In just the last 100 years Glacier National Park has gone from having over 150 glaciers to just 35 today. And, those that remain have already lost over 90% of the volume that they had 100 years ago. For instance, the Qori Kalis Glacier in Peru is shrinking at a rate of 200 meters per year, 40 times as fast as in 1978 when the rate was only 5 meters per year. And it's just one of the hundreds of glaciers that are vanishing.
Ice is also disappearing from the Arctic Ocean and Greenland at an astounding rate that has taken scientists completely off guard. More than a hundred species of animals have been spotted moving to more northerly regions than they usually occupy. Many kinds of temperate plants are also growing much farther north and at higher elevations. Given this surprisingly rapid turn of events, even mainstream scientists are presenting some rather interesting scenarios as to what will happen to massive ice sheets like that found on Greenland in the near future. In some scenarios, the ice on Greenland eventually melts, causing sea levels to rise some 18 feet (~6 meters). Melt just the West Antarctic ice sheet as well, and sea levels jump another 18 feet.34 The speed of glacial demise is only recently being appreciated by scientists, who are "stunned" to realize that glaciers all around the world, like those of Mt. Kilimanjaro, the Himalayas just beneath Mt. Everest, the high Andes, the Swiss Alps, and even Iceland, will be completely gone within just 30 years.33 The same thing happened to the Langjokull Ice Cap, in Iceland, during the Hypsithermal, based on benthic diatom data: "Langjokull must have disappeared in the early Holocene for such a diverse, benthic dominated diatom assemblage to flourish."58 It's about to happen again.
Of course, this raises the question: how could the ice sheets on Greenland and elsewhere, which are currently melting much faster than they are forming with just a 1°C rise in global temperature, have survived for several thousand years during the very recent Hypsithermal period, when global temperatures were another 2°C warmer than today and temperatures within the Arctic Circle were perhaps 20 to 40ºC warmer?
(Back to Top)
First-glance intuition is often very helpful in coming up with a good hypothesis to explain a given phenomenon, such as the hundreds of thousands of layers of ice found in places like Greenland and Antarctica. It seems downright intuitive that each layer found in these ice sheets should represent an annual cycle. After all, this seems to fit the uniformitarian paradigm so well. However, a closer inspection of the data seems to favor a much more recent and catastrophic model of ice sheet formation. Violent weather disturbances with large storms, a sudden cold snap, and high precipitation rates could very reasonably give rise to all the layers, dust bands, isotope variations, etc. that we find in the various ice sheets today.
So, which hypothesis carries more predictive power? Is there more evidence for a much warmer climate all around Greenland in the recent past, or for the survival of the Greenland ice sheet, without melting, for hundreds of thousands to millions of years? Both positions cannot be right. One of them has to be wrong. Can all the frozen temperate plants and animals within the Arctic Circle trump the interpretation of ocean core sediments, coral dating, radiometric dating, sedimentation rate extrapolations, isotope matches between ocean and ice cores, and Milankovitch cycles? Most scientists don't think so. Personally, I don't see why not. For me, the evidence of warm-weather animals and plants living all around Greenland and around the entire Arctic Circle is especially overwhelming.
D.A., Gow, A.J., Alley, R.B., Zielinski, G.A., Grootes, P.M., Ram, K., Taylor, K.C., Mayewski, P.A. and Bolzan, J.F., “The Greenland Ice Sheet Project 2 depth-age scale: Methods and results”, Journal of Geophysical Research.
Craig H., Horibe Y., Sowers T., “Gravitational Separation of Gases and Isotopes in Polar Ice Caps”, Science, 242(4885), 1675-1678, Dec. 23, 1988.
Hall, Fred, “Ice Cores Not That Simple”, AEON II: 1, 1989: 199.
P.M. and Stuiver, M., “Oxygen 18/16 variability in Greenland snow and ice with 10^-3 to 10^5 year time resolution”, Journal of Geophysical Research.
R.B. et al., “Visual-stratigraphic dating of the GISP2 ice core: Basis, reproducibility, and application”, Journal of Geophysical Research.
Borisov P., Can Man Change the Climate?, trans. V. Levinson (Moscow, U.S.S.R.), 1973.
"Santorini Volcano Ash, Traced Afar, Gives a Date of 1623 BC," The New York Times [New York] (June 7, 1994): C8.
Britannica, Macropaedia, 19 vols., "Etna (Mount)," (Chicago, Illinois, 1982), Vol. 6, p. 1017.
R. L. Newson, "Response of a General Circulation Model of the Atmosphere to Removal of the Arctic Icecap," Nature (1973): 39-40.
M. Warshaw and R. R. Rapp, "An Experiment on the Sensitivity of a Global Circulation Model," Journal of Applied Meteorology 12 (1973).
B., The Quaternary Era, London, England, 1957, Vol. II, p. 1494.
Sanderson, The Dynasty of ABU, New York, 1962, p. 80.
Brooks C. E. P., Climate Through the Ages, 2nd ed., New York, 1970, p. 297.
Pielou E. C., After the Ice Age, Chicago, Illinois, 1992, p. 279.
Boyd, Louise A., The Coast of Northeast Greenland, American Geological Society Special Publication No. 30, New York, 1948: p. 132.
"Glaciology (1): The Balance Sheet or the Mass Balance," Venture to the Arctic, ed. R. A. Hamilton, Baltimore, Maryland, 1958, p. 175 and Table I.
Hammer et al., "Continuous Impurity Analysis Along the Dye 3 Deep Core," American Geophysical Union Monograph 33 (1985): 90.
Laurence R. Kittleman, "Tephra," Scientific American, p. 171, New York, December, 1979.
Zdanowicz CM, Zielinski GA, Wake CP, “Characteristics of modern atmospheric dust deposition in snow on the Penny Ice Cap, Baffin Island, Arctic Canada”, Climate Change Research Center, Institute for the Study of Earth, Oceans and Space, University of New Hampshire, Tellus, 50B, 506-520, 1998. (http://www.ccrc.sr.unh.edu/~cpw/Zdano98/Z98_paper.html)
Lorius C., Jouzel J., Ritz C., Merlivat L., Barkov N. I., Korotkevitch Y. S. and Kotlyakov V. M., “A 150,000-year climatic record from Antarctic ice”, Nature, 316, 1985, 591-596.
Barbara Stenni, Valerie Masson-Delmotte, Sigfus Johnsen, Jean Jouzel, Antonio Longinelli, Eric Monnin, Regine Röthlisberger, Enrico Selmo, “An Oceanic Cold Reversal During the Last Deglaciation”, Nature 280:644, 1979.
Wettlaufer, J.W., Premelting and anomalous diffusion in ancient ice, FOCUS session, March 16, 2001.
Rempel, A., Wettlaufer, J., Waddington E., Worster, G., "Chemicals in ancient ice move, affecting ice cores", Nature, May 31, 2001. (http://unisci.com/stories/20012/0531012.htm) (http://www.washington.edu/newsroom/news/2001archive/05-01archive/k053001.html)
The Olympian, "National Park's Famous Glaciers Rapidly Disappearing", Sunday, November 24, 2002. (http://www.theolympian.com/home/news/20021124/northwest/14207.shtml)
John Carey, Global Warming - Special Report, BusinessWeek, August 16, 2004, pp 60-69. ( http://www.businessweek.com )
Zielinski et al., "Record of Volcanism Since 7000 B.C. from the GISP2 Greenland Ice Core and Implications for the Volcano-Climate System", Science Vol. 264 pp. 948-951, 13 May 1994
Zielinski and Germani, "New Ice-Core Evidence Challenges the 1620s BC Age for the Santorini (Minoan) Eruption", Journal of Archaeological Science 25 (1998), pp. 279-289
"Identification of Aniakchak (Alaska) tephra in Greenland ice core challenges the 1645 BC date for Minoan eruption of Santorini", Geochem. Geophys. Geosyst., 5, Q03005, doi:10.1029/2003GC000672, March, 2004 ( http://www.agu.org/pubs/crossref/2004/2003GC000672.shtml )
Jim Scott, "Greenland ice core project yields probable ancient plant remains", University of Colorado Press Release, 13 August 2004 ( http://www.eurekalert.org/pub_releases/2004-08/uoca-gic081304.php )
Michael J. Oard, "The extinction of the woolly mammoth: was it a quick freeze?" ( http://www.answersingenesis.org/Home/Area/Magazines/tj/docs/tj14_3-mo_mammoth.pdf )
Henry H. Howorth, The Mammoth and the Flood (London: Samson Low, Marston, Searle, and Rivington, 1887), pp. 96
Mol, Y. Coppens, A.N. Tikhonov, L.D. Agenbroad, R.D.E. Macphee, C. Flemming, A. Greenwood, B Buigues, C. De Marliave, B. van Geel, G.B.A. van Reenen, J.P. Pals, D.C. Fisher, D. Fox, "The Jarkov Mammoth: 20,000-Year-Old carcass of Siberian woolly mammoth Mammuthus Primigenius" (Blumenbach, 1799), The World of Elephants - International Congress, Rome 2001 ( http://www.cq.rm.cnr.it/elephants2001/pdf/305_309.pdf )
Grant D. Zazula, Duane G. Froese, Charles E. Schweger, Rolf W. Mathewes, Alwynne B. Beaudoin, Alice M. Telka, C. Richard Harington, John A Westgate, "Palaeobotany: Ice-age steppe vegetation in east Beringia", Nature 423, 603 (05 June 2003) ( http://www.sfu.ca/~qgrc/zazula_2003b.pdf )
Shukman, David, Greenland Ice-Melt 'Speeding Up', BBC News, UK Edition, 28 July, 2004. ( http://news.bbc.co.uk/1/hi/world/europe/3922579.stm )
Gary Braasch, Glaciers and Glacial Warming, Receding Glaciers, 2005. ( http://www.worldviewofglobalwarming.org/pages/glaciers.html )
Jerome Bernard, Polar Ice Cap Melting at Alarming Rate, COOLSCIENCE, Oct. 24, 2003 ( http://cooltech.iafrica.com/science/280851.htm )
Steve Connor, Melting Greenland Glacier May Hasten Rise in Sea Level, Independent - Common Dreams News Center, July 25, 2005 ( http://www.commondreams.org/headlines05/0725-02.htm )
Animation of Eastern Alp Glacial Retreat, Institut für Fernerkundung und Photogrammetrie Technische Universität Graz, Last accessed, September, 2005 ( Play Video )
Lynn Jenner, Glaciers Surge When Ice Shelf Breaks Up, National Aeronautics and Space Administration (NASA), September 21, 2004. ( Link )
Geoffrey Lean, The Big Thaw, Znet, accessed 2/06 (Link)
Zbigniew Jaworowski, Another Global Warming Fraud Exposed: Ice Core Data Show No Carbon Dioxide Increase, 21st Century, Spring 1997. ( Link ) and in a Statement written for a Hearing before the US Senate Committee on Commerce, Science, and Transportation, Climate Change: Incorrect information on pre-industrial CO2, March 19, 2004 ( Link )
Don Behm, Into the spotlight: Leno, scientists alike want to hear explorer's findings, Journal Sentinel, July 21, 2006 ( Link )
Kelly Young, Greenland ice cap may be melting at triple speed, NewScientist.com News Service, August 10, 2006 ( Link )
L. D. Keigwin, J. P. Sachs, Y. Rosenthal, and E. A. Boyle, The 8200 year B.P. event in the slope water system, western subpolar North Atlantic, Paleoceanography, Vol. 20, PA2003, doi:10.1029/2004PA001074, 2005 ( Link )
Nicole Petit-Maire, Philippe Bouysse, and others, Geological records of the recent past, a key to the near future world environments, Episodes, Vol. 23, no. 4, December 2000 ( Link )
Harvey Nichols, Open Arctic Ocean Commentary, Climate Science: Roger Pielke Sr. Research Group Weblog, July 12, 2006 ( Link )
Vartanyan S.L., Kh. A. Arslanov, T.V. Tertychnaya, and S.B. Chernov, Radiocarbon Dating Evidence for Mammoths on Wrangel Island, Arctic Ocean, until 2000 BC, Radiocarbon Volume 37, Number 1, 1995, pp. 1-6. ( Link )
Black, J.L., Miller, G.H., and Geirsdottir, A., Diatoms as Proxies for a Fluctuating Ice Cap Margin, Hvitarvatn, Iceland, AGU Meeting - abstract, 2005 ( Link )
The following is from an E-mail exchange with C. Leroy Ellenberger, best known as a one-time advocate, but now a prolific critic of controversial writer/catastrophist Immanuel Velikovsky. My response follows:
July 26, 2007:
Talbott STILL does not GET IT concerning the ability of the ice at high altitude at the summit of the Greenland ice cap to have survived the global warming that occurred during the Hypsithermal period. Just because it was six or so degrees warmer at sea level during that time DOES NOT AUTOMATICALLY mean that it was six degrees warmer at the high altitude at Greenland's summit, due to adiabatic cooling; or even if it were six degrees warmer at the summit does not mean that the summer temperature necessarily got above freezing. As I said in July 1994, we can ski Hawaii and Chile even while the folks at sea level are basking almost naked in the sun. And besides, the cores contain NO INDICATION that such wholesale melting, draining away an untold number of annual layers, has even happened at the summit of Greenland in the past 110,000 years. Period. As Paul Simon sez in "The Boxer": "The man hears what he wants to hear and disregards the rest." That is Dave Talbott, "clueless in the mythosphere".
I would also like to point out what Robert Grumbine told me earlier today: if, say, 10,000 annual layers were melted, as Talbott would like to believe, then it would have been impossible for Bob Bass to get the high correlation he did between the signals in ice core profiles and Milankovitch cycles.
Leroy

High altitude doesn't seem to be a helpful argument when it comes to explaining the preservation of the Greenland ice sheet during the thousands of years (6-7 kyr) of Hypsithermal (Middle Holocene) warming. Why? Because what supports the high altitude of the ice in Greenland? Obviously, it is the ice itself.

I mean really, note that the altitude of the ice sheet in Greenland is about 2,135 meters. Now, consider that about 2,000 meters of this altitude is made up of the thickness of the ice itself. If you warm up this region so that the lower altitudes start to melt, the edges are going to start receding at a rate that is faster than the replacement of the total ice lost. In short, the total volume of the ice will decrease and the ice sheet will become thinner as it flows peripherally. This will reduce the altitude of the ice sheet and increase the total amount of surface area exposed to the warmer temperatures. This cycle will only intensify the longer the warm conditions persist.

Consider this in the light of what is happening to the ice sheet in Greenland today with only a one degree increase in the average global temperature over the past 100 years or so. Currently, the ice is melting at 239 cubic kilometers per year (measured from April 2002 to November 2005). And, we aren't yet close to the average global warmness thought to have been sustained during the Hypsithermal (another 3 to 5 degrees warmer). If that's not a problem, I don't know what is. But, as you pointed out, "A man hears what he wants to hear and disregards the rest" - but I suppose you are immune to this sort of human bias?

As far as Milankovitch Cycles and the fine degree of correlation achieved, ever hear of "tuning"? If not, perhaps it might be interesting to look into just a bit. Milankovitch cycles seem, to me at least, to have a few other rather significant problems as well.

Sean Pitman
Sean . . .
While your logic is unassailable, it is based on a false premise concerning how much warming occurred during the mid-Holocene Hypsithermal period. (1) Contrary to what you and Charles Ginenthal claim (coincidentally or not), there was no ca. 5 degree rise in average global temperature during the Hypsithermal, more generally known as the Atlantic period, from ca. 6000 B.C. to 3000 B.C. This 5 degrees is a figure that was derived for the rise in temperature in Europe, according to the source cited by Ginenthal. The consensus among climatologists is that the average global rise in temperature in the Hypsithermal/Atlantic period was about one degree, which we are seeing now. (2) However, regardless of what the temperature rise might have been, another line of evidence contradicts your and Ginenthal's position. The hundreds of sediment cores extracted from the bottom of the Arctic Ocean indicate that during the past 70,000 years the Arctic Ocean has never been ice free and therefore never warm enough for all the melting that you, Ginenthal, and Talbott claim happened. I urge you to read Mewhinney's Part 2 of "Minds in Ablation" and see if your dissertation on ancient ice does not need some revision or dismantling. It would appear that Dave Talbott's gloating in his email to this list at Thu, 26 Jul 2007 20:20:19-0400 (EDT), was not only premature, but totally unjustified.
Richard Alley [author of The Two Mile Time Machine] received my email while he was en route to Greenland, but he took the time to send the following reply, for which I am most grateful and which is above my request:
"Modelers such as P. Huybrechts have looked into this. In the models, there exist solutions in which somewhat smaller and steeper ice sheets are stable; warming causes melting back of the margins but not enough melting across the cold top of the ice sheet to generate abundant meltwater runoff. Averaged over the last few decades, iceberg calving has removed about half of the snow accumulation on Greenland, and melting the other half. Warming causing retreat would pull the ice largely or completely out of the ocean, thus reducing or eliminating the loss by calving; without losing icebergs, less snowfall is required to maintain the ice sheet, so stability is possible with more melting. Too much warming and the ice sheet no longer has a steady solution. The model results shown in our review paper are relevant." - Alley, R.B., P.U. Clark, P. Huybrechts and I. Joughin. Ice-sheet and sea-level changes. Science 310: 456-460 (2005).
On 7/26/07 8:17 PM, "Leroy Ellenberger" wrote:
I appreciate your response. It seems to me though that you are now simply throwing out anything that comes to mind to see if it will fly. First you argue that the altitude of Greenland would preserve the ice sheet in a warm environment for thousands of years. But, now that you see that this argument is untenable, you have now decided that it must not have been that warm during the Hypsithermal?
I've read through Mewhinney's "Minds in Ablation" several times now in my consideration of this topic. To be frank, I don't see where Mewhinney convincingly deals with the topic of the Hypsithermal warm period. For example, you argue that there was only a significant rise in temperature (relative to today) in Europe. Beyond this, you suggest that the overall average global temperature during the Hypsithermal was about the same as the average global temperature today.
Well, it seems to me like there are at least a few potential problems here. The first problem is that even at current global temperatures, the Greenland Ice sheet is in ablation at a rate that would easily melt it well within the time frame of the Hypsithermal period - several times over. Also, the notion that only Europe experienced significantly increased temperatures doesn't seem to gel quite right with the available facts.
Harvey Nichols, back in the late 60s, published a study of the history of the "Canadian Boreal forest-tundra ecotone". This study "suggested that the arctic tree-line had moved northwards 350 to 400 km beyond its modern position (extending soils evidence collected by Irving and Larsen, in Bryson et al. 1965, ref. 6) during the mid-Holocene warm period, the Hypsithermal. The climatic control of the modern arctic tree-line indicated that prolonged summer temperature anomalies of ~ + 3 to 4 C were necessary for this gigantic northward shift of the tree-line, thus fulfilling Budyko's temperature requirement for the melting of Arctic Ocean summer ice pack. A more extensive peat stratigraphic and palynological study (Nichols, 1975, ref. 7) confirmed and extended the study throughout much of the Canadian Northwest Territories of Keewatin and Mackenzie, with a paleo-temperature graph based on fossil pollen and peat and timber macrofossil analyses. This solidified the concept of a +3.5 to 4 degree (+/- 0.5) C summer warming, compared to modern values, for the Hypsithermal episode 3500 BP back at least to 7000 before present, again suggesting that by Budyko's (1966) calculations there should have been widespread summer loss of Arctic Ocean pack ice. By this time J.C. Ritchie and F. K. Hare (1971, ref.8) had also reported timber macrofossils from the far northwest of Canada's tundra from even earlier in the Hypsithermal."
Harvey Nichols (1967a) "The post-glacial history of vegetation and climate at Ennadai Lake, Keewatin, and Lynn Lake, Manitoba (Canada)", Eiszeitalter und Gegenwart, vol. 18, pp. 176 - 197.
H. Nichols (1967b) "Pollen diagrams from sub-arctic central Canada", Science 155, 1665 - 1668.
These "warm" features are not limited to Canada or Europe, but can be seen around the entire Arctic Circle. Large trees as well as fruit bearing trees and peat bogs, all of which have been dated as being no older than a few tens of thousands of years, are found along the northern most coasts of Russia, Canada, and Europe - often well within the boundaries of the Arctic Circle. Millions of Wholly Mammoth along with horse, lion, tiger, leopard, bear, antelope, camel, reindeer, giant beaver, musk sheep, musk ox, donkey, ibex, badger, fox, wolverine, voles, squirrels, bison, rabbit and lynx as well as a host of temperate plants are still being found all jumbled together within the Artic Circle - along the same latitudes as Greenland all around the globe. Again, the remains of many of these plants and animals date within a few tens of thousands of years ago. Yet, their presence required much warmer conditions within the Arctic Circle than exist today - as explained by Nichols above.
And, this problem isn't limited to the Hypsithermal period. Speaking of the area between Siberia and Alaska as well as the Yukon Territory of Canada, Zazula et al. said, "[This region] must have been covered with vegetation even during the coldest part of the most recent ice age (some 24,000 years ago) because it supported large populations of woolly mammoth, horses, bison and other mammals during a time of extensive Northern Hemisphere glaciation."
Grant D. Zazula, Duane G. Froese, Charles E. Schweger, Rolf W. Mathewes, Alwynne B. Beaudoin, Alice M. Telka, C. Richard Harington, John A Westgate, "Palaeobotany: Ice-age steppe vegetation in east Beringia", Nature 423, 603 (05 June 2003) ( http://www.sfu.ca/~qgrc/zazula
I don't get it. Was it much warmer than today all the way around the Arctic Circle, everywhere, and still cold in Greenland? How is such a feat achieved?
As for your "other lines of evidence," they all seem shaky to me in comparison to the overwhelming evidence of warm-weather plants and animals living within the Arctic Circle within the last 20 kyr or so.
The patterns of sedimentary cores are, by the way, subject to the very subjective process of "tuning" - as noted in my essay on Milankovitch Theory.
Richard Alley's argument that smaller "steeper" ice sheets are more stable during warm periods doesn't make any sense to me. Ice sheets flow. They don't remain "steep" or all humped up like Half-Dome. To significantly increase the "steepness" or "slope" of the Greenland ice sheet, the overall size of the sheet would have to be reduced from over 2,000,000 square kilometers to just a few thousand to make a significant difference in the overall "steepness" of the Greenland ice sheet.
Even today the Greenland ice is melting quite rapidly across most of "the top" as well as the sides. It is also melting in such a way that the surface meltwater percolates down through the entire ice sheet to create vast lakes at the bottom - lubricating the ice sheet and making it flow even faster. Just because it doesn't reach the ocean before it melts and turns into water doesn't mean that less ice is melting than before - i.e., just because it is flowing as liquid water instead of "calving" into the ocean.
No, I'm afraid you, Mewhinney, and Alley have a long way to go to explain some of these interesting problems - at least to my own satisfaction. It seems like you all accept certain interpretations based on a limited data set while failing to seriously consider a significant amount of evidence that seems to fundamentally counter your position in a very convincing manner.
Thanks again for your thoughts. I did find them very interesting.
You raise many points here in your rejoinder, some of which distort what I wrote. I have neither the resources nor the time to explore all the points you raise, if indeed they need to be explored considering Jim Oberg's remark to Warner Sizemore in a late 1978 letter about not needing to chase every hare Velikovsky set loose to know that Worlds in Collision is bogus. And for all the points you raise, many of them interesting about exotic Arctic conditions and so forth, you do not, as I see it, come to grips with the testimony from the Arctic Ocean sediment cores which indicate that body of water has never been ice free in the past 70,000 years, as would be the case if climate were as warm as you claim. This has to be a boundary condition on your speculations despite all the
botanical and faunal activity in the Arctic during that time.
Sure, the Pleistocene and early Holocene were interesting environments whose conditions we have difficulty understanding, and doubly so as we project our own experiences on that extinct epoch. As an example of a distortion of what I wrote, I did not claim Europe was the only area that warmed five degrees during the Hypsithermal, merely that it was the area mentioned in the source Ginenthal used to justify his claim of a global warming that large. As for the demise of the Pleistocene megafauna that seems to interest you so much, I can do no better than R. Dale Guthrie's Frozen Fauna of the Mammoth Steppe (U. Chi. Press, 1990) and William White's three part critique in Kronos XI: 1-3, which focuses on the extravagant claims made over the years about the catastrophic demise and preservation of the frozen mammoths. White was rebutting my defense of the Sanderson-Velikovsky school of mammoth extinction earlier in Kronos. Oh yes, and do not forget William R. Farrand's 1961 classic "Frozen Mammoths and Modern Geology", SCIENCE 133, 729-35. I leave you with the closing quote of my previous email: "Mundus vult decipi ergo decipiatur".
Sincerely, C. Leroy Ellenberger
Dear Leroy,

If you aren't interested in seriously considering some of the main points I've raised, that's up to you. It is just that so far I haven't seen anyone present any significantly cogent arguments against the evidence for a very warm and iceless Arctic Circle and Ocean in the recent past.

You say I've not considered the evidence of the ocean cores, but I have considered this evidence. It is just that your interpretation of the ocean sediment cores seems to fly in the face of the overwhelming interpretation of the existence of warm-weather animal and plant life within the entire Arctic Circle in the recent past. Both interpretations simply can't be right. One has to win out over the other. It all boils down to which perspective carries with it the greatest degree of predictive power. Consider this in the light of the following interpretation of ocean cores taken from the Barents Sea (i.e., part of the Arctic Ocean):

"Marine sediment cores [taken in the Barents Sea] representing the entire Holocene yielded foraminifera which showed that a temperature optimum (the early Hypsithermal) developed between 7800 and 6800 BP, registering prolonged seasonal (summer) ice free conditions, and progressing to 3700 BP with temperatures similar to those of today, after which a relatively abrupt cooling occurred." [emphasis mine]

J-C Duplessy, E. Ivanova, I. Murdmaa, M. Paterne, and L. Labeyrie (2001): "Holocene paleoceanography of the northern Barents Sea and variations of the northward heat transport by the Atlantic Ocean" in "Boreas" vol. 30, # 1, pp. 2-16.

So, there you have it. How then can you argue that ocean core sediments conclusively support your contention that the Arctic Ocean has "never been ice free for the past 70,000 years"? Now, is that really true, given the above reference?

Also, I don't see that it matters what killed off the mammoths for the purposes of this discussion - catastrophic or otherwise. That has nothing to do with the fact that these creatures and many others lived in lush warm environments for a long period of time above and around the significant majority of the Arctic Circle in the recent past. This is an overwhelming fact with an equally overwhelming conclusion that makes it very hard to imagine how Greenland could have remained frozen the whole time.

If you have something as far as real evidence or a reasonable explanation, I'd be quite interested. Otherwise, I'm not into a discussion that is mostly about who can list off the most pejoratives. That might be fun, but I'm really not up for that sort of thing . . .

Sean Pitman
Pulling a couple of points out of Pitman's latest 2 notes.
And the ultimate argument -- it doesn't make sense to Pitman. Yet even though he quotes Alley's argument, he doesn't see the effect. Half the ice that is lost from Greenland today is lost by calving of icebergs. Icebergs aren't meltwater. Meltwater is the other half of the mass loss.
The size of the ice sheet depends on the balance between income (accumulation) and outgo (melting, iceberg calving). No iceberg calving halves the outgo, letting the income side win out until the ice sheet gets so large that it starts calving again.
Ice sheets do indeed flow. What Pitman has missed is covered well in the Paterson reference I made earlier. Namely, the flow rate depends on the temperature of the ice. Colder ice doesn't flow as fast as warm. A second feature he missed is that ice is an excellent insulator. It takes time for warmer conditions at the surface to warm the temperatures in the interior of the ice sheet. Enough time that parts of the Greenland ice sheet still 'remember' glacial maximum temperatures. Much more of the sheet would have remembered the glacial maximum conditions several kya than currently do so, and the ice would have been correspondingly stiffer, leading to more steeply-sloped sides.
All the preceding, though, is aside. The real point is that in talking about the Greenland ice sheet's melting away during the hypsithermal, Pitman is making a _prediction_, not an observation. Yet he and some others are taking his prediction as observed fact. The preceding merely sheds a little light on what quality of prediction he made.
More to the point is that if he were engaging in science, the thing he'd do following his prediction of the obliteration of the Greenland
ice sheet is look for evidence that it had actually happened. Forests and mammoths don't show obliteration of ice sheets, so all that is irrelevant except as clues to what motivated the prediction.
One good way of determining that Greenland had melted away is to find those extra 6 meters of sea level that it represents. Yet, in fact, the sea levels are higher now than any time in the last ~100 ky, including during the hypsithermal.
In the second note:
The Barents Sea today is ice-free in the summer, yet there is a perennial Arctic sea ice pack. It was also ice free -- in summer -- during the 'little ice age'. The Barents Sea is marginal for sea ice packs, so it doesn't carry a perennial ice cover. William Chapman, at the University of Illinois, has a nice web site on sea ice conditions called 'the cryosphere today'. The National Snow and Ice Data Center carries more data, some Scandinavian records back several centuries included.
Robert,

Thank you for your thoughts. However, they still don't seem to solve the problem - as I understand it anyway.

You present the seemingly obvious argument that the total ice lost from the Greenland ice sheet is the result of half melting and half calving. Obviously then, if the ice melts to a point where there is no more calving, half the loss is removed and the accumulation rate can keep up. Superficially this seems like an obvious conclusion. The problem here is that this argument does not take into account the etiology of calving - i.e., the flow of ice all the way to the ocean. If the ice sheet melts to such an extent that it no longer reaches the ocean, the ice sheet itself would have to be quite thin. Thick ice creates a lot of pressure on itself and flows over time at a rate that is fast enough to reach the ocean before it melts. The ice isn't going to be cold enough to make it "stiff" enough so that it doesn't flow at at least its current rate of flow (contrary to your suggestion). Also, the flow rate is only going to increase over the current rate with increased areas of surface melting. This is due to the increased lubrication of the ice sheet from the percolation of liquid water from the surface to the base of the ice sheet.

Also, the ice sheet isn't going to get much "steeper" than it already is. Why not? Because the thickness of the Greenland ice sheet is about 2 km, but it is around 1500 x 1500 km (2,175,590 sq km) in surface area. How does one create a relatively "steep" ice sheet unless the ice sheet one is thinking about is less than a few tens of km in maximum diameter?

In short, it seems to me that it is the flow rate that is key, not the calving rate. Ice is lost at the flow rate regardless of the calving rate. Therefore, if the melting of the ice is so great that the flow rate cannot keep up in a way that allows calving into the ocean, this does not indicate a "halving" of the rate at which ice is lost from the sheet at all. It simply indicates that the flow rate cannot keep up with the increased melt rate. At this point the ice sheet would have become so thin that a much greater surface area would be exposed to summer melt - dramatically increasing the average yearly loss of ice as well as the flow rate (due to the lubrication effect).

This sort of thing is already happening today. In the illustration below, note the significant increase in the area of summer melt in Greenland between 1992 and 2005, contributing to about 240 cubic kilometers of ice lost, per year, by 2005.
This feature will only be enhanced as the Arctic region continues to warm - still well shy of the warmth experienced in this region during the thousands of years of the Hypsithermal. Pretty soon the entire sheet will be subject to summer melting. I'm sorry, but this increased melt rate over the entire sheet isn't going to be overcome by a decline in calving rate. That just doesn't happen. There simply is no example of such a thing as far as I am aware. But, I'd be very interested in any reference to such an observation or model to the contrary.

Also, the notion that the Arctic Ocean was covered with ice during the Hypsithermal is significantly undermined by current melt rates of the Arctic Ocean ice. At current rates the ice will be pretty much gone well within 50 years (see figure below). In fact, some, like Walt Meier, a researcher at the U.S. National Snow and Ice Data Center in Boulder, Colorado, have commented on these interesting findings, noting that the melting of the Arctic ice cap in summer is progressing more rapidly than satellite images alone have shown. Given recent data such as this, climate researchers at the U.S. Naval Postgraduate School in California predict the complete absence of summer ice on the Arctic Ocean by 2030 or sooner.

Don Behm, Into the spotlight: Leno, scientists alike want to hear explorer's findings, Journal Sentinel, July 21, 2006 ( Link )

That's only about 20 years away. And you think all the evidence that the Hypsithermal was even warmer within the entire Arctic region isn't enough to suspect that the Arctic Ocean was probably ice free then, just as it is going to be in very short order today? It stretches one's credulity to think otherwise - does it not? Yet, you argue that forests, mammoths, peat bogs, and warm-water forams are "irrelevant" to this question - even when they appear within the Arctic Circle? Really?

Thanks for your efforts though. But, I must say . . . I for one still don't "get it".

Sean Pitman

Consider also that fairly recent evidence has come to light that mammoths survived on Wrangel Island (located on the border of the East-Siberian and Chukchi Seas) until 2,000 B.C. That's right. This is no joke.
Robert Grumbine presented an interesting challenge: "One good way of determining that Greenland had melted away is to find those extra 6 meters of sea level that it represents. Yet, in fact, the sea levels are higher now than any time in the last ~100 ky, including during the hypsithermal."
Well, as it turns out, this observation has been made and reported by several scientists, including Nguyễn Văn Bách and Phạm Việt Nga of the Institute of Oceanography, NCNST. These authors report the following findings:
The study results of depositional environments provide information to reconstruct the sea-level positions in the last 6,000 years. Here, it must be admitted that in the time of 6,000 years or so before present in the Trường Sa region, the sea-level was higher than the present by 5 - 6m. That's why several coral reefs have top surfaces of 5m in height. Nowadays, most of the scientific works touching upon Holocene sea-level changes support the conclusion that the sea-level was at +5m dated 6,000 years BP [1, 6]. Thus, in the Trường Sa Sea for the last 6,000 years BP sea-level has moved up and down 4 times (Fig.6) in a declining trend. The curve in Fig.6 is deduced from the study results of sedimentary sequence and stratigraphic, pollen-spores, chemical analysis and sedimentary basin analysis.
Nguyễn Văn Bách, Phạm Việt Nga, Holocene sea-level changes in Trường Sa archipelago area, Institute of Oceanography, NCNST, Hoàng Quốc Việt, Cầu Giấy, Hà Nội, July 9, 2001 ( Link )
Robert Grumbine responds:
Obliterating Greenland is a matter of global sea level, not merely local, so let's see [whether] the paper is about global sea level: . . .
Worse, w.r.t. Pitman's cherry-picking, is that the same paper does include global sea level curves which do show that global sea level has not been several meters greater than present any time in the past 10ky (one stops there, the one that goes farther back shows no such higher sea level for the past 125 ky -- its limit).
Not content to cherry-pick only a single local curve, it also turns out that he cherry-picked _which_ local curve. Figure 3 shows (and labels it so) a Regional Sea Level curve, one curve for 'data scattered along the Vietnamese coast', and one for the Hoang So area. The latter accords with the global curve and isn't mentioned by Pitman. Fig 4 shows a curve for the Malaysia peninsula, which shows less sea level change than the scattered Vietnamese stations (4 m peak, vs the 5-6 Pitman quotes for the Vietnamese -- someone unknowledgeable but curious about the science would wonder why there were such very large differences over such small areas: 0, 4, and 5 meters in three nearby areas).
The authors, even in translation, are clear about what they were doing and what they found. They found some interesting features in their local area. What they did not do, or attempt, or challenge, was to construct a global sea level curve. What's interesting, for their work, is that while global sea level was flat the last 6 ky, their area has been oscillating up and down.
Robert,

And that's the whole point. "Global" sea level curves for the Holocene seem to be extrapolations from regional sea level curves - curves that can vary widely, with all kinds of theories as to the reasons for the regional differences. Some argue that:

"The probability is strong that mid-Holocene eustatic sea level was briefly a meter or two higher than the present sea level, although separating isostatic and eustatic effects remains an impediment to conclusively demonstrating how much global ice volume was reduced. . . In summary, when relative sea-level records are reconstructed from paleoclimatological methods, all coastlines exemplify to one degree or another the complex processes confronting inhabitants of coastlines of Scandinavia, Chesapeake Bay, and Louisiana. Multiple processes can cause observed sea-level changes along any and all coastlines and uncertainty remains when attributing cause to reconstructed sea-level trends. Only through additional relative sea-level records (including sorely needed records from the LGM and early deglaciation), better glaciological budgets, and improved geophysical and glacial models will the many factors that control sea-level change be fully decoupled."

Also note the Fairbridge curve with its multiple Holocene blips several meters above today's "global" levels (Fig. 2).

This hardly sounds to me like conclusive science - something upon which one can make very definitive negative or positive statements concerning the likely melt or non-melt of Greenland's ice. There's just not sufficient positive or negative predictive power. In short, it seems rather difficult to use such evidence, "cherry pick" if you will, and pretty much ignore the very strong evidence for a much warmer Arctic in the recent past than exists today - and the implications of this evidence for the survival of the Greenland ice sheet for thousands of years.

Sean Pitman
Current concepts of late Pleistocene sea level history, generally referred to the 14C time scale, differ considerably1. Some authors2,3 assume that the sea level at about 30,000 BP was comparable with that of the present and others4,5 assume a considerably lower sea level at that time. We have now obtained 14C dates from in situ roots and peat which indicate that the sea level was lowered eustaticly to at least 40−60 m below the present level between 36,000 and 10,000 BP. The sea level rose from -13 m to about +5 m from 8,000 to 4,000 BP and then approached its present level. [emphasis added]
M. A. GEYH*, H. STREIF* & H.-R. KUDRASS, Sea-level changes during the late Pleistocene and Holocene in the Strait of Malacca, Nature 278, 441 - 443 (29 March 1979); doi:10.1038/278441a0 ( Link )
Since June 1, 2002 | http://www.detectingdesign.com/ancientice.html | 13 |
22 | Fprintf places output on the named output stream f (see fopen(2)).
Printf places output on the standard output stream stdout. Sprintf
places output followed by the null character (\0) in consecutive
bytes starting at s; it is the user's responsibility to ensure
that enough storage is available. Snprintf is like sprintf but
writes at most n bytes (including the null character) into s.
Vfprintf, vprintf, vsnprintf, and vsprintf are the same, except
the args argument is the argument list of the calling function,
and the effect is as if the calling function's argument list from
that point on is passed to the printf routines.
Each function returns the number of characters transmitted (not
including the \0 in the case of sprintf and friends), or a negative
value if an output error was encountered.
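As a brief illustration of the return values described above, here is a minimal hosted-C sketch using the standard <stdio.h> routines rather than the Plan 9 library itself. Note one assumption: under ISO C, snprintf returns the length the full result would have had (excluding the null), which is what makes truncation detectable; the text above describes the count actually transmitted.

	#include <stdio.h>

	int main(void)
	{
		char buf[8];

		/* printf returns the number of characters transmitted. */
		int n = printf("pi is roughly %.2f\n", 3.14159);
		fprintf(stderr, "printf wrote %d characters\n", n);

		/* snprintf writes at most sizeof(buf) bytes, including the
		   terminating null, so a too-long result is safely truncated.
		   Under ISO C the return value is the untruncated length. */
		int m = snprintf(buf, sizeof buf, "%s", "a fairly long string");
		if (m >= (int)sizeof buf)
			fprintf(stderr, "truncated to \"%s\" (needed %d bytes)\n", buf, m + 1);

		return 0;
	}

The same return value can be used to size a buffer: in ISO C, calling snprintf with a size of 0 yields the required length, after which m + 1 bytes can be allocated.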
These functions convert, format, and print their trailing arguments
under control of a format string. The format contains two types
of objects: plain characters, which are simply copied to the output
stream, and conversion specifications, each of which results in
fetching of zero or more arguments. The results are
undefined if there are arguments of the wrong type or too few
arguments for the format. If the format is exhausted while arguments
remain, the excess are ignored.
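The conversion syntax described below is easiest to see by example. The following sketch uses the standard C library's printf (the Plan 9-specific %P verb is omitted, and the exact column alignment depends on the implementation's defaults); the comments describe what each flag, field width, and precision does:

	#include <stdio.h>

	int main(void)
	{
		printf("[%-10s]\n", "left");        /* '-' left-justifies within a 10-character field */
		printf("[%+06d]\n", 42);            /* '+' forces a sign; '0' pads with leading zeros  */
		printf("[%#x] [%#o]\n", 255, 8);    /* '#' selects the alternate form: 0x / leading 0  */
		printf("[%8.3f]\n", 3.14159);       /* field width 8, three digits after the point     */
		printf("[%.*s]\n", 5, "truncated"); /* '*' takes the precision from an int argument    */
		return 0;
	}
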
Each conversion specification is introduced by the character %.
After the %, the following appear in sequence:

Zero or more flags, which modify the meaning of the conversion specification.

An optional decimal digit string specifying a minimum field width. If the converted value has fewer characters than the field width, it will be padded with spaces on the left (or right, if the left adjustment flag, described later, has been given) to the field width.

An optional precision that gives the minimum number of digits to appear for the d, i, o, u, x, and X conversions, the number of digits to appear after the decimal point for the e, E, and f conversions, the maximum number of significant digits for the g and G conversions, or the maximum number of characters to be written from a string in s conversion. The precision takes the form of a period (.) followed by an optional decimal integer; if the integer is omitted, it is treated as zero.

An optional h specifying that a following d, i, o, u, x or X conversion specifier applies to a short int or unsigned short argument (the argument will have been promoted according to the integral promotions, and its value shall be converted to short or unsigned short before printing); an optional h specifying that a following n conversion specifier applies to a pointer to a short argument; an optional l (ell) specifying that a following d, i, o, u, x, or X conversion character applies to a long or unsigned long argument; an optional l specifying that a following n conversion specifier applies to a pointer to a long int argument; or an optional L specifying that a following e, E, f, g, or G conversion specifier applies to a long double argument. If an h, l, or L appears with any other conversion specifier, the behavior is undefined.

A character that indicates the type of conversion to be applied.

A field width or precision, or both, may be indicated by an asterisk (*) instead of a digit string. In this case, an int arg supplies the field width or precision. The arguments specifying field width or precision, or both, shall appear (in that order) before the argument (if any) to be converted. A negative field width argument is taken as a – flag followed by a positive field width. A negative precision is taken as if it were missing.

The flag characters and their meanings are:

– The result of the conversion is left–justified within the field.

+ The result of a signed conversion always begins with a sign (+ or –).

blank If the first character of a signed conversion is not a sign, or a signed conversion results in no characters, a blank is prefixed to the result. This implies that if the blank and + flags both appear, the blank flag is ignored.

# The result is to be converted to an ``alternate form.'' For o conversion, it increases the precision to force the first digit of the result to be a zero. For x or X conversion, a non–zero result has 0x or 0X prefixed to it. For e, E, f, g, and G conversions, the result always contains a decimal point, even if no digits follow the point (normally, a decimal point appears in the result of these conversions only if a digit follows it). For g and G conversions, trailing zeros are not removed from the result as they normally are. For other conversions, the behavior is undefined.

0 For d, i, o, u, x, X, e, E, f, g, and G conversions, leading zeros (following any indication of sign or base) are used to pad the field width; no space padding is performed. If the 0 and – flags both appear, the 0 flag will be ignored. For d, i, o, u, x, and X conversions, if a precision is specified, the 0 flag will be ignored. For other conversions, the behavior is undefined.

The conversion characters and their meanings are:

d,i,o,u,x,X The integer arg is converted to signed decimal (d or i), unsigned octal (o), unsigned decimal (u), or unsigned hexadecimal notation (x or X); the letters abcdef are used for x conversion and the letters ABCDEF for X conversion. The precision specifies the minimum number of digits to appear; if the value being converted can be represented in fewer digits, it is expanded with leading zeros. The default precision is 1. The result of converting a zero value with a precision of zero is no characters.

f The double argument is converted to decimal notation in the style [–]ddd.ddd, where the number of digits after the decimal point is equal to the precision specification. If the precision is missing, it is taken as 6; if the precision is explicitly 0, no decimal point appears.

e,E The double argument is converted in the style [–]d.ddde±dd, where there is one digit before the decimal point and the number of digits after it is equal to the precision; when the precision is missing, it is taken as 6; if the precision is zero, no decimal point appears. The E format code produces a number with E instead of e introducing the exponent. The exponent always contains at least two digits.

g,G The double argument is printed in style f or e (or in style E in the case of a G conversion specifier), with the precision specifying the number of significant digits. If an explicit precision is zero, it is taken as 1. The style used depends on the value converted: style e is used only if the exponent resulting from the conversion is less than –4 or greater than or equal to the precision. Trailing zeros are removed from the fractional portion of the result; a decimal point appears only if it is followed by a digit.

c The int argument is converted to an unsigned char, and the resulting character is written.

s The argument is taken to be a string (character pointer) and characters from the string are printed until a null character (\0) is encountered or the number of characters indicated by the precision specification is reached. If the precision is missing, it is taken to be infinite, so all characters up to the first null character are printed. A zero value for the argument yields undefined results.

P The void* argument is printed in an implementation–defined way (for Plan 9: the address as hexadecimal number).

n The argument shall be a pointer to an integer into which is written the number of characters written to the output stream so far by this call to fprintf. No argument is converted.

% Print a %; no argument is converted.

If a conversion specification is invalid, the behavior is undefined. If any argument is, or points to, a union or an aggregate (except for an array of character type using %s conversion, or a pointer cast to be a pointer to void using %P conversion), the behavior is undefined.

In no case does a nonexistent or small field width cause truncation of a field; if the result of a conversion is wider than the field width, the field is expanded to contain the conversion result. | http://www.quanstro.net/magic/man2html/2/fprintf | 13
50 | for an activity according to grade level and/or core democratic value.
Civics Online hopes to not only provide a rich array of multi-media
primary sources, but to also give teachers ideas on using those
sources in the classroom. Explore the general activities
below, or investigate the activities created
for specific grade levels and core democratic values.
The following strategies are based on the classroom use of primary documents
and the incorporation of interactive learning. As far as possible,
these strategies should be integrated into the social studies classroom
with the goal of placing students in learning situations that will
promote critical thinking and application of knowledge. These strategies
are intended as a springboard for dialogue and discussion. Teachers
are encouraged to adapt and modify the strategies for their own classrooms.
Close textual reading and analysis of historical documents should
be a regular feature of the social studies classroom. Considerations
in this activity are: establishing historical context, targeting
the purpose of the document, identifying the social-political bias,
and recognizing what is at stake in the issue. Language analysis
should consider key words, tone, and intent.
As far as possible, students should be placed in collaborative groups
to dialogue their responses to documents (print, on line, video).
An inductive method should be the framework for these discussions.
At times the teacher may lead Socratic discussions or might conduct
a debriefing. However, it is desirable to establish an ongoing framework
for analysis and evaluation.
Such analysis should be a component of other classroom activities, with the goal of developing a more articulate and well informed civic discourse.
This role playing activity calls upon students to take various roles associated with
an historical, social, or civics related issue. Students will research
the point of view of their assigned character and will participate
in one of a number of short or long term activities that might include:
simulated media interviews, a debate, a court case, a panel discussion,
an historical simulation, or a simple debriefing.
Another activity might involve a short term, one period pro-con debate on
a focused historical or constitutional issue. It could also be part
of a longer project based on more extensive research used by students
working in teams resulting in a more formal closure project.
One such closure project might be a Thesis-Antithesis-Synthesis writing activity.
Starting from a set of documents and information, the class (working in collaborative
groups) must design an activity in which their group recreates an
event and analyzes it from multiple perspectives. The event could
be an historical situation, a current event, or a constitutional
issue. Through dramatic scripting and role playing, the group will
prepare for class presentation a briefing on their topic. This activity
could be part of a larger unit as a major project or could be utilized
as part of a daily activity on a short term basis.
As a unit closure activity or long term project, the simulation could
take the form of a One Act Play.
Students would work together in news teams digesting a number of documentary
materials. The materials would be presented in print packets or
online. Each student would be assigned a job simulating a team
of reporters or television news magazine staffers. At the end of
the activity (single period or longer term), the teams will present
their findings to the class. Many creative options may be utilized
for the reports (video, online, role playing, etc.).
Social and historical issues should, from time to time, be considered
in the form of classroom court. The court should engage every student
in the class in some formal role (judge, court reporter, expert
witness, etc.). The case may be worked up from on line or print
documents. If desired, an appeals process may be used. Student "reporters"
will debrief the class in a "Court TV" simulation.
The class might consider an issue or social problem through the simulation
of a congressional hearing. The class might prepare by watching
one of the many such hearings shown on C-Span.
Research (print as well as online) should be part of the preparation
for this activity. Teams of students representing various sides
of the issue should collaborate to produce testimony.
A group of students would role play the Senate or Congressional subcommittee.
They would share their conclusions with the class. Another group
would role play a news team covering the hearings for a television news magazine.
A closure activity might be the writing of legislation based on the findings
of the hearings.
This activity is a longer term project that would involve a broader social
or constitutional issue (such as freedom of speech, civil rights,
or the right to bear arms). Role playing might be used or students
could work up their own points of view.
The symposium discussion would center around a few (3-5) central questions.
These questions could be posed by the teacher or worked up by a
student committee. The class would do its research (print and online) in teams which would work together to digest their findings
in light of the key questions.
The symposium itself could be set up in round table fashion with the
entire class or could be focused on a smaller group of representatives
selected from the research teams. Input from the teams could be
rotated among members.
Perspective segments could be researched and pretaped by students.
Classroom link ups
Social, historical, or constitutional issues could be considered
by individually paired classrooms in different districts. Ideally,
these online linkages would pair diverse districts such as an urban
classroom with a rural or suburban one.
The classrooms would work together on commonly accessed documents, or
they could focus on the ways in which each classroom perceives important
civic questions differently.
Long term or short term projects could be worked up including many of
the formats outlined above. Interactive links (video conferences,
URL exchanges, etc) would provide opportunities for ongoing dialogue.
Actual on site student exchanges could be arranged for further collaboration.
Students would be assigned roles of representatives to the Constitutional
Convention of 1787, only as citizens of contemporary American society.
Their task would be to revise and rewrite the original constitution
to better reflect the civic needs and demands of a 21st century society.
Each student would receive an index card with a brief description of
his assigned role. Diverse teams of students would be created to
act as revision committees to reexamine the original document section
by section. The Bill of Rights would be the only part of the original
document not subject to change.
After completing their reexamination, the teams will report their findings
to the class. Then through debate and compromise, the committee
of the whole will decide on what (if any) revisions in the original
document should be made. The entire class must reach consensus on
the wording of any changes.
As much as possible, writing activities should connect to the standards,
especially the Core Democratic Values. Also, each activity should
emphasize stating a position clearly and using specific evidence
to support the position.
A citizen's journal (an ongoing free response collection of personal
reactions to historical documents, films, online materials, class
speakers, and artifacts.) The journal would allow complete freedom
of expression and would be "graded" only as a required activity.
The journal would encourage students to identify issues of interest
and to react to them informally.
A jackdaw collection (role playing a number of documents focusing
on a civic or constitutional issue. May include letters, editorials,
public documents, broadsides, pictures, cartoons, and artifacts).
The collection may be a closure activity for a research project
or connected to a debate or other activity.
Role specific position papers (written in response to a series of
documents, these letters may be written from the viewpoint of roles
assigned by the teacher). By providing a point of view, the assignment
will encourage students to look at issues from multiple perspectives.
A legislative draft (after consideration of a local, state, or national
issue, small collaborative groups will brainstorm new legislation
in response to the issue. They will act as subcommittees for their
town council, county commission, zoning board, state legislature,
or congress. The final product will be presented to the whole for
debate and a final vote).
This activity could be a closure activity in a legislative decision making
unit, or it could be part of a current events unit or a social problems unit.
A citizen's letter (this is assigned in response to a local, state,
or national issue.)
After researching the issue, the student must draft a letter to the appropriate
governmental agency or elected official. Before the letters are
mailed, they must be presented in small feedback groups of peers
for evaluation. They should be critiqued for logic, clarity, and
persuasive use of language and historical-legal precedent.
The letters and responses to them should be posted on the classroom
bulletin board. A variation of this activity would be a request
for information or a clarification of a policy from a governmental
agency or official.
An interview (in response to a guest speaker or outside subject). The
assignment would include framing several focused questions based
on some research, notes summarizing the responses, and a synthesis
paragraph summarizing the results of the interview. The interview
could be keyed to a topic determined by the teacher or a focus group.
A descriptive response (to an artifact or photograph). As part of
an array or collection, the artifact becomes the subject of a descriptive
narrative in which the student connects the content of the artifact
to larger democratic issues and values.
Using a dialectical approach, the student breaks down an issue into three
components starting with its main thesis, its opposing thesis, and
a future focus synthesis fusing the two issues in a new way.
This method can be used by discussion groups or as a three paragraph
format for written expression. The synthesis would include creative
solutions that point toward building a consensus. It encourages
the development of multiple perspectives.
Life: This activity could be done in connection with the values of diversity or justice. The concept of the right to life can be presented in the context of The Golden Rule and the right of everyone to be respected as an individual. The Golden Rule can be taught as both a classroom rule and as a legal right. The teacher might display (in poster form) and discuss the Golden Rule as interpreted by various world religions. The teacher might supplement with a story time selection such as The Rag Coat by Lauren Mills (Little Brown, 1991), which addresses issues of fair treatment. Also, Chicken Sunday by Patricia Polacco (Paperstar, 1992) is a fine multicultural treatment.
Life: At this level, it might be appropriate to introduce multicultural pictures of children from a variety of societies. The purpose of this activity is to compare and contrast the economic, social, educational, and physical well-being of children in a worldwide perspective. Questions can be raised about whether all children have equal access to food, shelter, education, and family support. After the discussion, a writing and art activity would allow students to express their thoughts about quality of life issues as they confront children. Students could choose a picture and write a story about the young person depicted and what they might be facing. Students should be encouraged to put themselves in the place of the child in their chosen photograph. The papers may be illustrated as part of a parallel art activity.
Life: The right to life is a motif in many adolescent novels. However, the issue might be most memorably presented in The Diary of Anne Frank, which is often required reading in middle school. The Holocaust is a horrific example of what happens when a political regime is based on the systematic disenfranchisement of citizens who have no constitutional protection of their basic rights. Whether presented in a literary or social studies context, the right to life is a key to understanding the difference between the rights of citizens in a constitutional democracy and the fate of the victims of totalitarian genocide. After a debriefing discussion, the class might write essays on the Right to Life that would be shared in small groups and posted in the classroom.
Life: The concept of life as a constitutional precept should be established in its full historical sense. Advanced classes might benefit from an overview of the Enlightenment philosophers such as John Locke and Jean-Jacques Rousseau who influenced Thomas Jefferson. The students should fully understand what "natural rights" means as a basis for the American constitutional and legal system. The connection should then be made to a contemporary issue that is meaningful to students. These might include: cloning, bioethics, or capital punishment. Discussion, debate, and writing should follow. This is a good issue for outside interviews or class speakers from the professional community.
Liberty: Post a large color photograph of the restored Statue of Liberty. Discuss the location, meaning, and details of the statue and why so many Americans contributed so much for the restoration of Lady Liberty. Additional photographs of immigrants at Ellis Island may be added to the discussion. Art supplies could be furnished to allow the class to create their own poster of the statue and what it means to them. The finished posters should be displayed and discussed by the class.
Liberty: Announce an individual "liberty" period of class time (15-20 minutes) in which each student will be able to spend the time at his/her own discretion on an activity of their own design. Limits should be broad and choices unlimited (within the teacher's classroom code). A debriefing would follow to discuss the individual choices and productivity of the period.
On a subsequent day, another “liberty” period would be held, this time with a democratic decision making model to determine the activity for the entire class. Again, a debriefing would analyze the success of the period.
On a third day, the teacher would dictate the activity of the special period with no liberty or democratic choice. The debriefing of this activity would include a comparison and contrast of the three different experiences. The teacher at this time would introduce the concepts of liberty, democracy, and tyranny to describe each of the three experiences. Follow-up activities could include planning more student-designed activities based on the liberty and democracy models. Writing activities could include the creation of "definitions" posters.
Liberty: The class will read and consider the explicit and implicit meaning of "The Pledge of Allegiance", especially the phrase "...with liberty and justice for all." The teacher will introduce some selected documents and case studies into follow-up discussions. These cases should focus on the ways in which the concepts of liberty and justice interface. Are they the same? Are there times when liberty and justice conflict? Are there limits on personal freedom in a democratic society? How do the Constitution and the rule of law help to determine these limits? Sample cases could include the Elian Gonzalez matter, free speech issues, or the American Revolution.
Liberty: After reading and discussing several seminal documents that address the concept of liberty in American democracy, students should write a personal essay in which they define and defend their own ideas about liberty and personal freedom as citizens. The essay must address the problem of how to adjudicate disputes between individual "liberties" and whether our constitution places limits on personal freedom. Grading rubrics for the essay should include citation of historical examples and references to the constitution and court cases. A good place to begin the class discussion is the 1919 Schenck v. U.S. case and the famous Holmes opinion on free speech ("clear and present danger"). Also, the writings of Henry David Thoreau and Ralph Waldo Emerson may prove provocative.
The Pursuit of Happiness: Create a classroom collection of pictures showing Americans at work and at play. The collection should reflect racial, ethnic, regional, economic, and gender diversity. The phrase "pursuit of happiness" should head the collection. After discussing the concept in plain terms and looking at the pictures, the class could do an art project in which each student would create a collage of fun things that his or her family does to pursue happiness. Students could draw, paint, or make a montage of happiness from their own personal perspective.
The Pursuit of Happiness: After discussing the preamble to The Declaration of Independence, especially the phrase "life, liberty, and the pursuit of happiness", the class will be assigned an interview questionnaire. The questions will constitute a simple survey to be given to people at home asking them to try to define "pursuit of happiness" in a variety of ways. Categories might include: economic, educational, personal, family, political, and travel interests. After bringing their survey results back, the class will create a colorful statistical and graphic chart on the bulletin board. This chart will act as a working definition of the varied ways Americans pursue happiness.
The activity could also be part of a basic statistics introduction and part of a graphic design project. Also, it could be part of a history unit involving the American Revolution and the ideas that motivated our fight for independence from English rule.
The Pursuit of Happiness: The class might consider "pursuit of happiness" from the standpoint of immigration. The teacher might compile a packet of documents consisting of first-hand testimony from first-generation immigrants. Letters, interviews, and oral history sources should be included. Ellis Island, the "New" immigration of the late 19th century, and The Great Migration of southern Blacks to northern industrial cities should be considered. In each case, the conditions facing each group prior to migration should be detailed. Also, comparisons should be drawn between their old and new condition.
Evaluative activities might include: writing, role playing, enactments of historic scenarios, and graphic design of bulletin boards.
The Pursuit of Happiness: The class will conduct a debate on the subject of gun control. After researching and discussing Amendment II of The Constitution and the intended meaning of "the right to bear arms", the class will be divided into two teams to prepare their debate. One significant aspect of the debate should include whether gun ownership should be included in a citizen's right to "pursue happiness" if the owner uses his firearm for hunting, competitive shooting, collecting or other peaceful activity. Are there times when "pursuit of happiness" might conflict with other rights such as "life" or "liberty"? How should such conflicts be resolved in our democratic system?
Part of the research for the debate could include interviews with guest speakers representing both sides of the issue. A closure activity might be a position paper defending one side of the argument and pointing toward possible solutions.
Common Good: Display and discuss pictures of significant American historic moments in which the country came together for the common good. These might include: the Pilgrims at Plymouth, the minutemen at Concord, the signing of The Constitution, Martin Luther King and the 1963 March on Washington, a World War II victory parade, Earth Day, Habitat For Humanity, and a Red Cross blood drive. Appropriate holidays might be selected to consider pictures that reflect the commitment of citizens to the greater good of all.
Common Good: A suitable holiday or week of observance might be chosen to develop a class service project. For example, Earth Week could be a good time to begin a class recycling project. The class could build recycling centers for the school and surrounding neighborhood. Plastic, aluminum, and paper could be collected with the goals of beautification and contribution of profits toward a charitable purpose.
The project should begin with a definition of "common good" and the design of a poster symbolizing the concept. The class could then brainstorm and develop a service project of its own design.
Common Good: In connection with its American history study, the class should focus on the development and purpose of several Utopian communities. Examples might include: Brook Farm, New Harmony, Amana, or the Shakers. An alternate focus could be the development of the Israeli Kibbutz system or the many efforts to forge the common good by the pioneers. John Smith's efforts to save the Jamestown settlement or the Lewis and Clark expedition would work well. Twentieth Century examples might include ways that American citizens responded to The Great Depression or World War II.
Supplemental research should include biographies of key figures and a tally list of specific ways that individual citizens contributed to the greater benefit of society to meet a common threat or need.
Project ideas include: skits, poster-charts, essays, and oral presentations. Debriefing should include discussion of contemporary parallels to the responsibility of citizenship in today’s world.
Common Good: Discuss the concept of "common good" as a basic tenet of civic responsibility alongside the concept of individualism. The class should then be presented with a question: "How should a society of individuals dedicated to the notion of pursuing their own happiness also meet its commitments to work together for the greater benefit of others?"
The class will brainstorm the question by working in small groups to fill out a dichotomy sheet listing individual, contemporary, and historical examples of individualism on the one side and common good on the other. Then, each group will select and research one example of a situation in which the needs of both the individual and the common good were met at the same time.
Library time should be provided. The groups will creatively demonstrate their findings.
Justice: The class will examine the Golden Rule as the basis for understanding the concept of justice. The focus for this should be the classroom rules about respecting others, waiting your turn to speak, and being a good listener. While the concept of justice as a constitutional principle might be too advanced for this grade level, it can be embedded in a discussion of respect, cooperation, and fair treatment through class procedures.
Justice: Most children know the meaning of the phrases "that's not fair" or "no fair". Fairness, a key component of the broader idea of "justice", is a daily feature of playground ethics. In this way, the teacher could approach the idea of justice through reviewing the rules of a particular sport (like baseball or basketball) and the role that the umpire or referee plays in adjudicating disputes. This could be done in a class discussion of situations involving rules violations, or it could be introduced in an actual playing situation where one side might be given unfair advantage (say unlimited double dribbles or 5 outs) over the other. The class would then debrief after the game to discuss the "unfair" or "unjust" nature of the rules and the impact that those uneven rules have on the outcome of the game. Analogies to real life situations and the rule of law would follow.
Justice: The formal concept of justice as a constitutional and legal principle should be introduced. The teacher could use a case study like the Elian Gonzalez situation or another current situation such as school violence, pollution cases, or the Diallo shooting and trial in New York. After some research, consideration of pertinent documents, and study of possible redress, the class would then debate whether justice (as they understand it) has been achieved. Does justice involve changing laws, providing material compensation, or formal apologies? How is justice ultimately achieved in a democracy?
Justice: The class will do a comparative study of three historical events which involve racial injustice and the constitutional process of redress. These are: Indian removal, slavery, and Japanese-American internment. The class will research the historical context, constitutional issues, and documentation of legal redress in each case. Then the class will be divided into debate groups to define and argue key issues that cut across all three cases. The ultimate question to be determined is whether justice was finally meted out to all three oppressed groups. The groups must compare and contrast the constitutional, economic, legislative, and legal redress in each historical case. A good closure activity would be a position paper defending a position on the nature of justice and legal redress involving minorities in American democracy.
Equality: Equality in America is not about sameness. Each person in our society is a unique individual who is encouraged to reach their full potential with equal protection under The Constitution. Therefore, a good activity to demonstrate this equality of opportunity for individuals is the creation of a classroom display of pictures showcasing each member of the class. The display should be organized around an American flag or other patriotic symbol. Students can bring a picture from home, or school pictures may be used if available. A connected activity might be the creation of self-portraits by the students in an art lesson. After completing the display, the class might discuss the display in connection with the ideals of equality, fairness, tolerance for others, and individualism.
Students might share a personal interest, hobby, pet, or favorite toy in discussing their self portrait. The teacher should endeavor to link the presentations to the importance of the individual in the American system and how our Constitution guarantees equality of opportunity for all to pursue their interests within the law.
Equality: The tricky relationship between equality and individualism might be demonstrated through a class "olympics" competition. Set up a series of competitions that test a variety of physical and mental skills. Some ideas include: a softball throw, stationary jump, walking a line, free throw shooting, bird identification, spelling contest, geography competition, vocabulary definitions, math skills test, etc. Be sure to select a variety of safe games that will allow each student to be successful in one or two areas. Also, create enough challenges that each student may not be successful in some areas.
After tabulating the results, ask the class to discuss the fairness of the competition. Were all students given an equal chance to compete? What determined the success of the winners?
What factors influenced the outcomes of the competitions? Were the games chosen so that each student might have a chance to succeed? How might individual students improve their results if the events were held again? The debriefing might involve creation of a chart showing ways that students might improve their performance. Analogies might be drawn from history that show how equality of opportunity has not always existed. How have these inequities been addressed? A good case study might be the integration of major league baseball by Jackie Robinson in 1947 or the opportunities opened for women in the space program by astronaut Sally Ride. Research into celebrities who overcame initial failure or disadvantage to eventually succeed through individual initiative will complete the unit. The class might choose individual subjects from a list that includes such names as: Michael Jordan, Roger Staubach, Jim Abbott, Glenn Cunningham, Mildred "Babe" Zaharias, Oprah Winfrey, Gloria Estefan, Albert Einstein, Thomas Edison, Selena, and Colin Powell.
Equality: Read and discuss The Declaration of Independence. What did Jefferson mean when he wrote that "all men are created equal"? What exceptions to this statement existed in 1776? How long did it take women, slaves, Native Americans, and non-property owners to achieve "equality"? Does equality mean equality of condition or equality of opportunity?
A good brainstorming activity is to make a chart of the ways people are and aren't equal.
Then compare this chart to The Bill of Rights. What inequities does The Constitution address? What inequities are a function of individualism and lie outside our constitutional system? This discussion might be the focus of small groups.
After reporting the results of their discussions to the whole, the class might be assigned an impromptu essay on the relationship between equality and individualism in America. How can we promote equality while protecting the rights of individual citizens? An alternate topic might be to define equality as an American value. Essays should include concrete examples from history or current events.
Equality: Break the class into several study groups. Assign each one of the following fairness and equity laws: the Civil Rights Act of 1964 (Public Law 88-352), the Voting Rights Act of 1965 (Public Law 89-110), Title VII of the Civil Rights Act of 1964,
Title IX of the Education Amendments of 1972, the Rehabilitation Act of 1973 and the Americans With Disabilities Act of 1990, and The Equal Rights Amendment (ERA) written by Alice Paul in 1923.
After researching the assignment, the groups should report to the class orally. The report should outline the conditions that led to the legislation and the specific ways that the legislation was designed to remediate an inequity. The presentations might include a creative component: a skit, a debate, a comic book, a poster, or a series of role playing interviews.
A follow up activity would assign the same groups the task of researching a current social
inequity that might be addressed by new legislation. After more research and planning, the groups would write a proposal for new laws that would remedy the inequity. Each proposal must show either constitutional precedent or demonstrate the need for a constitutional amendment. A formal written proposal should be submitted by each group.
Some good web sites are:
The Southern Poverty Law Center - http://www.splcenter.org/teaching tolerance/tt-index.html
Other organizations are:
Anti-Defamation League - http://www.adl.org
NAACP - http://www.naacp.org
National Organization for Women - http://www.now.org
Diversity: The class can create a diversity map of The United States. The teacher will put a large outline map of The United States (6-8 feet long) on one of the bulletin boards. The class will collect colorful pictures (from magazines) of Americans doing different jobs.
Each day, the class will discuss the different jobs that Americans do and will add cut out pictures of these diverse Americans to a collage inside the map.
When finished, the class should discuss their impressions of all of the different jobs and kinds of people who help to make America work. The pictures should include a diversity of professions, jobs, and kinds of people.
Diversity: The class can create a number of "diversity circles". The teacher will help the class decide the number and kinds of circles they want to create. The circles may include: sports, science, American history, politics, entertainment, etc. The circles will be posted in large spaces on classroom walls or bulletin boards (3 feet or more in diameter). The circles will consist of pictures and biographical blurbs researched and written by teams of students.
The circles may be set up by chronology, important contributions, or other criteria determined by the teams. Ideally, 4-6 different themes should be traced with 4-6 students in each team. The only general rule for each circle is that of diversity. Each team must strive to find and include the widest possible range of important contributors to their thematic circle as possible.
This activity combines historical research and cross disciplinary thinking.
Diversity: The class will undertake a study of American immigration. The teacher should organize the statistical and historical documentation of the key phases of American growth.
The activity should begin with the introduction of "the melting pot" metaphor coined by Hector St. Jean de Crevecoeur. After studying the documentary evidence, students will look into their own connection to immigration by interviewing family members to determine the facts of their own "coming to America".
As a group activity, the class will jointly create a "living" time line by tracing the history of immigration patterns from colonial times to the present. They will then connect immigration to other key trends and events in American history. Finally, each student will add his/her own family's immigration story to the line as specifically as they can. The student stories will be in the form of pictograms and written blurbs.
The time line should be large enough to include each student's story and several concurrent broad historical trend lines.
The debriefing discussion should pose the question of whether the "melting pot" really works as a way to describe immigration. Does diversity imply that our differences really "melt" away? Would a "stew pot" or "tossed salad" be a better metaphor?
The debriefing could culminate in a written essay response.
Diversity: After studying the Declaration of Independence, in particular the second paragraph regarding the precepts of equality that it presents, the class will look at documents from 3 or 4 subsequent historical situations that call into question the idea that "all men are created equal" in our society. The teacher may select these situations from such examples as: Indian removal, Asian exclusion, anti-immigrant nativism, gender exclusion, the Jim Crow era, integration and civil rights, etc.
The class will be divided up into 3-4 teams to study the historical context of their assigned topic and packets (or online) documents pertaining to their topic. Each group will create a one act play or series of dramatic vignettes that will be presented to the rest of the class. Each presentation must show how subsequent history resolved their situation.
A follow up debriefing should address the following questions. Was justice achieved? Has America always lived up to its ideal of equality? Is America a more diverse society today? Why has diversity in our population caused so many problems? Are the concepts of equality and diversity compatible? How has the constitution grown to make America more diverse since 1787? What does population growth and increasing diversity mean for America's future?
The debriefing could take the form of a panel discussion, a debate, or a written response.
Truth: Just as there is a bond between citizens and the government, there is a bond between students and a teacher. Thus, it might be emphasized that telling the truth and refraining from lying is an important ethical rule that must be followed in the classroom. To illustrate this principle, the teacher might choose an appropriate selection for reading time from a trade publication. Aesop's Fables or "The Boy Who Cried Wolf" might be effective in illustrating the point.
Truth: Free speech is not a license to lie, cheat, or deceive. The class might benefit from creating a list of ten great reasons to tell the truth. To prepare for the activity, the teacher might display pictures of people who have been known for their honesty. The class could create an honesty mural to go with their list. Other related activities might include: a school-wide survey, brainstorming some case studies involving moral reasoning, and researching how other cultures, past and present, view honesty. There are some excellent ideas in What Do You Stand For?: A Kid's Guide to Building Character by Barbara Lewis (Free Spirit Publishing, 1998).
Truth: The relationship of trust between a government and its citizens is based on the free flow of information and public discussion of issues based on reliable facts. After considering some key historical cases involving governmental attempts to suppress the truth (e.g. the Peter Zenger case, deceptions involving the Vietnam war, the Watergate cover-up, the Clinton impeachment, and human rights violations in the People's Republic of China), the class could conduct a survey of local political leaders and government officials. The teacher and a class committee could invite a panel of community leaders and journalists to participate in a question and answer session and discussion of truth in government. The class might select a controversial current local issue as the focus of the discussion. Closure activities might include a class debriefing, writing editorials on the topic, and creating a video news program on the guest speakers and their comments.
Truth: A unit on consumerism might prove effective in studying the relationship between truth and the government. Ralph Nader's Unsafe at Any Speed, Upton Sinclair's muckraking classic The Jungle, or a recent 20/20 expose might kick off the unit. The teacher might prepare a packet of cases involving government action based on social research (e.g. the Triangle Shirtwaist fire, fire-retardant child sleepwear, the DDT ban, the tobacco litigation and settlement).
Students would then work in investigative teams researching recent legislation, the history of research behind the law, and current enforcement. The teams will present a brief on their findings to the class. The teacher should prepare an initial list of possible topics for the project. An option would include videotaped "news magazine" presentations. Students should provide a list of sources used in their research.
Popular Sovereignty: As an integral part of class procedure, the teacher might consider "voting time" as a weekly activity. The decisions should involve simple choices that the students will have an interest in: treats on Friday, quiet time music selections, book selections for reading time, or recess activities. Former Speaker of the House "Tip" O'Neill has written that "all politics is local". By learning to exercise free choice at the grassroots, students may develop a lifelong appreciation of democratic choice.
Popular Sovereignty: It might be fun and instructive at election time to build and decorate a voting booth and ballot box for class decision making. The class could view pictures of voting and, if the school itself is a polling place, could visit the polls to witness democracy in action (if officials permit). The class voting booth could be used during the year for special decision making events (special activities, class government, student of the week, special class rules, etc.) or a mock election. Student groups could make the ballots, establish the choices, and count the results. The goal, of course, is to establish an understanding of majority rule and the collective power of the people in a democracy. As a supplemental unit, the class could learn about the history of elections, tracing the results of local, state, or federal elections over time. The presidency might be a useful focus for tracing the evolution of political parties, the evolution of the popular vote, and voter participation. A class project could involve the creation of an extensive bulletin board display on the history of presidential elections.
Popular Sovereignty: Middle school "mock elections" could be held, especially during the fall general elections. A full slate of candidates should be developed, after some research into the issues (local, state, and federal). As many students as possible should take roles as candidates preparing platforms and speeches. Other students might act as news reporters to conduct interviews. Others would act as election officials to supervise voting and to count ballots. Leading up to election day, art activities might include poster, bumper sticker, and banner making. Bulletin boards could display an array of election memorabilia and campaign art. If feasible, the election could invite other classes to participate in a campaign rally and the election itself. Results of the election could be published in a classroom newspaper written by the entire class. Video could be a part of the experience with interviews, speeches, and news coverage of the election.
Popular Sovereignty: Voting patterns could be studied by criteria such as: age, race, education and gender. A good historical case is the Lincoln-Douglas debate regarding the extension of slavery. A related issue is the problem of redistricting congressional boundaries along more equitable lines for minorities. A statistical comparison of voting in redistricted areas might provoke good discussion and debate about the impact of popular sovereignty in local areas. Another vital aspect of popular sovereignty is the constitutional recourse available to citizens when their wishes are violated by elected officials. Cases of initiative, referendum, and recall might be studied (especially those available on the local level). Discussion, debate, and writing activities should follow.
Patriotism: Students will learn the Pledge of Allegiance and the "Star-Spangled Banner" in group recitation and singing. In addition, the class should learn about the history of the American flag and its proper display. To support this activity, the class can create a collection of flag art and pictures in the classroom. Other patriotic songs like "My Country 'Tis of Thee" and "America the Beautiful" could also be sung and discussed, with art projects developed around the lyrics.
Patriotism: Students will study a variety of patriotic images and art (from The New England Patriots football logo to Uncle Sam to Norman Rockwell to World War II posters). They will then consider the word patriotism in brainstorming an inductive definition of the concept by determining what each of the images and paintings has in common. This definition will be compared to the formal dictionary definition. Both definitions and a visual display will be posted in the classroom.
Patriotism: The class will read and consider some traditional patriotic stories like those of Nathan Hale or Barbara Fritchie. Then, after discussing the stories and the qualities of individual patriotism, the class will brainstorm and research ways in which they (as individual citizens) might act patriotically. The teacher should encourage students to think broadly about patriotism as good citizenship in showing love and devotion to their country and its values. The class will then decide on a "good citizenship" project which enacts the values of patriotism that they have learned. This could involve writing letters of appreciation to war veterans, cleaning up a park memorial, or establishing a patriotic window display for a downtown business. The class could invite a veteran or elected official as guest speaker for the dedication of their project.
Patriotism: The class will respond to the question "My Country, Right or Wrong?" in a debate/discussion of whether patriotism and love of one's country is always blind and unconditional. To prepare for the debate, the class should consider a series of historical cases in which the actions of the American government might be questioned on moral or ethical grounds. Examples might include: Indian Removal, the Spanish-American War, the My Lai massacre, use of Agent Orange, the Gulf of Tonkin Resolution, the Alien and Sedition Acts, conscientious objectors, Thoreau's night in jail, etc.
The purpose of the debate is to provoke higher level thinking about patriotism and its connection to moral and ethical values. For example, is it possible to be both a dissenter and a patriot? What separates a patriot from a zealot? How do our traditions of individualism and free speech interface with our value for patriotism and love of country?
The activity could involve cluster groups which nominate representatives to the class debate. The debate could involve role playing of historical figures from the cases studied. Class moderators and questioners would supervise the debate. The teacher would conduct a debriefing. An essay assignment on the question would follow as a closure activity.
Rule of Law: Traffic signs and pedestrian rules provide opportunities for an introduction to the rule of law. In reviewing the traffic signs, stop lights, and crossing lanes around the school, the teacher should stress the importance of knowing what traffic signs mean and why it is important to obey them. Posters of traffic signs should be posted in the classroom. Students should be able to explain their rights as pedestrians and why traffic laws exist for the good of all. Art activities might include drawings of important traffic signs, stop lights, and mapping of each student's route home from school with street crossings and signs included.
Rule of Law: The evolution of law (in civil rights cases for example) might prove useful in learning about how law evolves from a living constitution. Starting with the 3/5 clause and moving through the Fugitive Slave Law, the constitutional amendments, Reconstruction, the Jim Crow era, Plessy v. Ferguson, Brown v. Board of Education, the Civil Rights Act of 1964, and the Voting Rights Act of 1965, the class will create an illustrated chart of the changes in civil rights law. The chart should contain a section for noting "causes" in recognition of the fact that each change in law is the result of a demonstrated need or omission in the existing law.
A future focus activity would be to brainstorm and research areas of the law that might need to be changed to meet new problems (e.g. cloning, parent rights, the rights of children). The teacher might supply news stories for further research. The class might write their own legal codes to address these social problems.
Rule of Law: Middle school is a good place to introduce a comparative study of how the rule of law is or is not implemented in countries around the world. Cases of free speech and human rights violations in China, Latin America, Africa, and in the former Soviet Bloc countries are well documented. Executions, illegal searches, political imprisonment, and genocide might be contrasted to how political problems are handled in a constitutional system as in the United States. Even in the United States, there are cases like Japanese Internment or the Red Scare in which the rule of law has been violated for political and perceived national security reasons. After doing the reading and the research, student groups will prepare panel discussions and drama groups to enact the scenarios under study. Student news groups will prepare "60 Minutes" segments briefing the class on various cases and their resolution. An important segment would be tracing the rule of law in the American Constitutional system.
Rule of Law: Depending upon whether the group is a history class or a government class, several cases might prove stimulating in reaching a deep understanding of the rule of law in our constitutional system. The Watergate story with an emphasis on the documentation of President Nixon's violation of law is a classic study of how elected officials are not above the constitution. Another approach might be to look at the evolution of the rights of the accused in the Brown/Miranda/Gideon cases. Also, a study of the conditions of women and African Americans before and after "protective" laws might prove useful. In addition, government classes might do comparative studies of constitutions (current and historic) from other countries. The emphasis should be on close study of primary documents. Small group discussion should be followed by large group debriefing. A writing activity on a critical question might provide closure.
Separation of Powers: The three branches of government may be introduced through large pictures of public buildings in Washington, D.C. Pictures of The White House, The Supreme Court, and The Capitol Building may be placed on a bulletin board. The display might also include photographs of the president, Supreme Court justices, and members of the House of Representatives and the Senate. Above the three-part display, an American flag might be displayed to symbolize the unifying quality of The Constitution and the way in which the three branches make up our federal government. In discussing the display, the teacher should explain in broad terms the role that each branch of government plays. As an activity, the students might create drawings or posters expressing their impressions of each branch of government.
Separation of Powers: The separation of powers can be studied through an interdisciplinary presentation of Sir Isaac Newton's Third Law, "For every action, there is an opposed and equal reaction." By using a balance scale and weights, the teacher can demonstrate not only an important physical law but a key principle of democracy, the separation of powers. Once the scientific principle is understood, the teacher might introduce Montesquieu's idea of "checks and balances" which is based on Newtonian thinking. A useful focus might be to consider the power to wage war in The Constitution. Why must the president ask Congress for a "declaration of war"? What power does Congress have to check the president's use of the military? What powers does the Supreme Court have to check Congress and the president? As a case study, President Roosevelt's December 8, 1941 speech to a joint session of Congress asking for a declaration of war against Japan might be considered. What would the response of The Supreme Court have been if the president had declared war without the approval of Congress? How could the Congress have checked the president if he had acted without their approval?
After consideration of the case and related questions, the class could create news headlines and brief stories reporting each scenario. An alternative project would be to do simulated interviews and CNN style news briefs.
Separation of Powers: With the assistance of the teacher, the class should read Articles I, II, and III of The Constitution. After discussion, the class may be divided into three teams (legislative, executive, and judicial) to create a chart that outlines the defined powers of their assigned branch of government. The charts should be posted on the class bulletin board.
Next, the teacher should distribute packets on a simulated case scenario. Some examples might be: "The President Sends U.S. Troops into Battle", "Congress Votes to Jail Political Dissidents", and "Supreme Court To Decide on Free Speech Rights of Middle School Students". The packets should detail the scenario and action taken or recommended by the respective branch of government. For evaluative purposes, the teacher should draw up a rubric referencing the case to Articles I, II, or III of The Constitution.
After studying each case, the student teams should analyze the three cases from the point of view of their assigned branch of government. Their report to the class should point out the constitutional problems in each case and should recommend action as justified in The Constitution. A final activity might be an essay assignment focusing on the proper sharing of power among the three branches of government and what the separation of power means to average citizens.
Separation of Powers: The class should read and review Articles I, II, and III of The Constitution. Then, using the Legal Information Institute web site (http://supct.law.cornell.edu/supct/cases/historic.htm), students should study briefs of the Marbury v Madison (1803) and McCulloch v Maryland (1819) cases to fully understand the concepts of judicial review and broad congressional authority "within the scope of the constitution." Now, the class might do an in-depth study of one or more cases involving questions of the separation of power between the three branches.
Suggested cases are: President Jackson's war against The Bank of The United States (1832-36), President Roosevelt's handling of The Northern Securities Trust (1902), Plessy v Ferguson (1896), and The War Powers Act (1973). The class could be divided into four research/study groups, each taking one of the cases. The groups would prepare a brief tracing the history of the case and the constitutional issues at stake. Their presentation should also identify the resolution of the case and link the resolution to issues of separation of power.
Some key discussion topics: How might these cases be resolved today? Does the balance of power among the three branches shift over time? How do politics and social change affect the balance? Is there equilibrium among the branches or does power shift over time? What are some issues today that reveal the shifting balance? Can we trace the history of the shifting balance of power? Which of the three branches seems to be in ascendance today?
A good closure activity might be an impromptu position paper or take home essay based on some of the issues raised by the presentations and discussion.
Representative Government: Basic representative democracy might start in the early elementary years through a regular series of classroom elections. The elections could involve weekly choices such as story time material, recess games, bulletin board themes, or class colors. In addition, class elections could be held each month for teacher assistant. Students can help design and build a classroom ballot box. Art projects might include designing and creating ballots, election day banners, and a voting booth. Student committees can tabulate results and post them on a special election bulletin board.
Representative Government: The teacher might organize a class "council" (or "senate") at the beginning of the year. The purpose of the council would be to represent the students in the class in making decisions that would affect the entire class during the school year ahead. The teacher should seek input from the class in establishing the "constitution" for the council. Such matters as term of office, size of constituency, powers of the council, meeting dates, and qualifications for office should be determined through class discussion. Before the elections are held, a brainstorming activity should explore the desired traits for leaders elected to the council.
A parallel biographical reading and research project could be assigned on the topic of "Great Leaders of Democracy, Yesterday and Today". The result would be a bulletin board display showing the final list of traits and several historical examples demonstrating each trait of good leadership. Election speeches for nominees are optional. For more participation and rotation in office, elections could be held once each semester. A recommended web site for stories about community leaders and activism is: The American Promise - http://www.pbs.org/ap/
Representative Government: Middle school American history is a good place to research the policies of Alexander Hamilton and Thomas Jefferson concerning representative government. The class might be divided into "Hamiltonians" and "Jeffersonians". Each group would research the position of their leader on the topic of the powers of the central government vs. the power of the citizens. The activity would fit nicely into a unit on The Constitution and the compromises that resulted in our bicameral system. Closure might include a debate between members of each group on key points, the creation of a comparative chart, and brainstorming how different our system might be today if either Jefferson's or Hamilton's ideas had prevailed. Related activities might include: essays, editorials, news stories, video interviews, and role playing.
Representative Government: High school students might benefit from a comparative study of several different constitutions from around the world to measure the depth and effectiveness of representative government in The Constitution of The United States. The constitutions of the former U.S.S.R. and The Union of South Africa would be useful. The class might also be divided into study groups to determine the powers allotted to elected representatives in such bodies as the Japanese Diet, the Israeli Knesset, and the British House of Commons.
For background, the class should review Article I of The Constitution and the writings of John Locke and Jean-Jacques Rousseau. With the "pure democracy" of the New England town meeting at one end and totalitarian dictatorship on the other, where does the American republic stand in comparison to other countries in empowering its citizens?
Freedom of Religion: A simple but effective activity is the posting of religious holidays from a diversity of world religions on a class calendar. The calendar should be inclusive of all of the major world faiths. As a supplementary activity, a pictorial glossary of key concepts from each religion might be posted. In addition, students might create art depicting the major holidays and high holy days as they study them on the class calendar.
Freedom of Religion: Students might begin to understand the concept of religious "pluralism" by studying the early colonies and the many religious groups that migrated to America during the colonial period. A map tracing the religious influences in the early colonies might emphasize the point graphically. The project could be expanded to trace the subsequent migration of new religious influences during the nineteenth and twentieth centuries. Consideration of religious pluralism should include an understanding of the principles of religious liberty, freedom of conscience, and separation of church and state found in the First Amendment of the Constitution. The study should also acknowledge those who do not profess a religious belief and their equal protection under the Constitution.
Freedom of Religion: Religious liberty under The Constitution might be presented in a comparative study of how religious persecution has been legalized in other political systems. The German treatment of Jews during the Holocaust, the conflict between Catholics and Protestants in Northern Ireland, and the Spanish Inquisition are opportune historical subjects to develop. More recently, the "ethnic cleansing" in the Balkans and the slaughter in Rwanda address the theme in graphic terms. Student research groups might work on different topics. As a supplement, the teacher might compile documents packets with primary sources, pictures, news stories, and eyewitness accounts for class discussion. As a closure discussion, students might address the question of how religious conflicts like those in their study have been avoided in America. How does The Constitution provide equal protection for the beliefs of all of its citizens? An essay assignment on the subject might be part of a language arts activity.
Freedom of Religion: A good debate-discussion topic might be to address the relationship between religion and politics in American life. In what ways has religious belief shaped the political and social views of millions of American citizens? The class might undertake a comparative study of the history of recent American elections (say going back to the 1960s) to see how religious affiliation has influenced the outcome. Voting statistics indicating party loyalty, religious affiliation, financial contributions, economic status, educational level, and ethnicity could be researched. The teacher might provide a packet with historical perspective from Machiavelli to William Jennings Bryan to Madalyn Murray O'Hair to the South Carolina primary race between John McCain and George W. Bush.
Class activities include a debate, small group consideration of the documents packets, and an essay taking a position on the relationship between politics and religion in America.
Federalism: Discuss the idea of a government and how it begins with people and their needs. Have students draw or create a pictogram of their concept of what government does for its citizens.
Federalism: Learn the original 13 states and their relationship to each other in a federal system. Learn about westward expansion and how new land meant the creation of new states. Learn the current 50 states and their relationship to the federal government. Begin to conceptualize how federal and state governments share power and serve different functions for their citizens. Have students create posters indicating the differences between the federal government and their state.
Federalism: Organize student groups to debate and discuss the different roles of federal and state governments. Using the Constitution and a mock Supreme Court, the class could conduct a debate over a selected issue (gun control, civil rights, etc.) and decide whether the issue should fall under state or federal control. Some research required.
Federalism: After consideration of the documents, the class should be divided into two groups (Federalists vs. Anti-Federalists). Each group will prepare for a symposium-debate on the question of Federalism and the sharing of political power in a democracy. Students will play historical roles based on the major historical figures representing the evolution of their group's position and philosophy. Representatives from both groups will meet with the teacher to determine the 3-5 key questions that will be the focus of the symposium. Each student in both groups must prepare a role and stay in that role for the duration of the debate. The discussion will stay focused on the preselected questions. Each student will submit a position paper (with historical examples) representing his character's hypothetical position on the selected questions. Extensive research required.
Civilian Control of the Military: Through pictures and photographs, the class should observe the differences between the civilian and the military functions of government. A bulletin board display, divided into two large areas, would demonstrate various important jobs and services that the federal government provides to its citizens. The display will be a graphic introduction to the many ways that government serves the people. Also, it will introduce the differences between the military and the civilian roles.
Civilian Control of the Military: Display an array of pictures of the United States military in action including ships, planes, missiles, and vehicles. Also, include photographs of the President reviewing troops and photographs of other world leaders who wear a military uniform (e.g. President Pinochet of Chile and Fidel Castro of Cuba). Discuss with the class the relationship between the president and the military in our democratic system. Ask them to observe that the President wears civilian clothes and never wears a military uniform, while in other countries there are sometimes generals who take over the civilian government.
Brainstorm a list of reasons why, in our constitution, the military is under the control of the executive branch. Why would a military dictatorship be attractive to some countries? As a project, the class could create a flow chart tracing the military organization of The United States and the relationship of each branch of service to the Department of Defense and the president. Pictures and information for the chart can be obtained from internet sites.
Civilian Control of the Military: The class will read Article II, section 2 of The Constitution. After a discussion of the reasons for making the President, a civilian, the commander in chief of the military, the class will study a case involving a challenge to presidential authority by the military. The case might include: the firing of General MacArthur by President Truman in 1951 or the fictional situation posed by the films Seven Days in May or Fail Safe (available for rental). After considering the documents or viewing the films, the teacher should conduct a debriefing on the situation and the constitutional implications. As a closure activity, a mock court martial of the military figures in the case could be held with students preparing roles. An essay could also be assigned discussing the merits of civilian control of the military.
Civilian Control of the Military: After reviewing Article II, section 2 of The Constitution, the class will consider President Eisenhower's remarks in 1961 concerning the "unwarranted influence" of the "military-industrial complex." The teacher should prepare a packet which includes Eisenhower's speech, remarks by military leaders like General Curtis LeMay, and other documents concerning the control and use of nuclear weapons. The focus of these documents will prepare discussion of the issue of "The Constitution in a Nuclear Age". After consideration of the documents, the class will be divided into two groups: one representing support for civilian control, the other representing the military point of view. After preparing several discussion points provided by the teacher, the class will engage in a round table discussion defending their assigned point of view. As a supplemental case, the teacher could provide a documents packet on the 1945 decision to use atomic weapons by President Truman and the various options facing him and the military perspective at the time.
This activity may be an extended term project and could involve additional research and writing. A shorter activity would involve group discussion of the packet and questions. | http://www.civics-online.org/teachers/activities.php | 13 |
15 | Chapter 1 The Parliament and the role of the House
The Commonwealth Parliament is composed of three distinct elements, the Queen,1 the Senate and the House of Representatives.2 These three elements together characterise the nation as being a constitutional monarchy, a parliamentary democracy and a federation. The Constitution vests in the Parliament the legislative power of the Commonwealth. The legislature is bicameral, which is the term commonly used to indicate a Parliament of two Houses.
Although the Queen is nominally a constituent part of the Parliament, the Constitution immediately provides that she appoint a Governor-General to be her representative in the Commonwealth.3 The Queen’s role is little more than titular, as the legislative and executive powers and functions of the Head of State are vested in the Governor-General by virtue of the Constitution.4 However, while in Australia, the Sovereign has performed duties of the Governor-General in person,5 and in the event of the Queen being present to open Parliament, references to the Governor-General in the relevant standing orders6 are read as references to the Queen.7
The Royal Style and Titles Act provides that the Queen shall be known in Australia and its Territories as:
Elizabeth the Second, by the Grace of God Queen of Australia and Her other Realms and Territories, Head of the Commonwealth.8
The Governor-General is covered in this chapter as a constituent part of the Parliament. However, it is a feature of the Westminster system of government that the Head of State is part of both the Executive Government and the legislature. The relationship between these two bodies and the role of Governor-General as the Head of the Executive Government are discussed in the Chapter on ‘House, Government and Opposition’.
There have been 24 Governors-General of Australia9 since the establishment of the Commonwealth, ten of whom have been Australian born.
Letters Patent and Instructions were issued by Her Majesty Queen Elizabeth as Queen of Australia on 21 August 1984.10 These greatly simplified earlier provisions, and sought to reflect the proper constitutional position and to remove the archaic way in which the old Letters Patent referred to and expressed the Governor-General’s powers.11 The Letters Patent deal with the appointment of a person to the office of Governor-General, the appointment of a person as Administrator of the Commonwealth, and the appointment of a person as a Deputy of the Governor-General.
The Governor-General’s official title is Governor-General of the Commonwealth of Australia. The additional title of Commander-in-Chief of the Defence Force was not used in the 1984 Letters Patent, it being considered that the command in chief of the naval and military forces vested in the Governor-General by the Constitution was not a separate office but a function held ex officio.12
The Governor-General is appointed by the Crown, in practice on the advice of Australian Ministers of the Crown.13 The Governor-General holds office during the Crown’s pleasure, appointments normally being for five years, but some Governors-General have had extended terms of office, and others have resigned or have been recalled. The method of appointment was changed as a result of the 1926 and 1930 Imperial Conferences.14 Appointments prior to 1924 were made by the Crown on the advice of the Crown’s Ministers in the United Kingdom (the Governor-General being also the representative or agent of the British Government15) in consultation with Australian Ministers. The Balfour Report stated that the Governor-General should be the representative of the Crown only, holding the same position in the administration of public affairs in Australia as the Crown did in the United Kingdom. The 1930 report laid down certain criteria for the future appointments of Governors-General. Since then Governors-General have been appointed by the Crown after informal consultation with and on the formal advice of Australian Ministers.
The Letters Patent of 21 August 1984 provide that the appointment of a person as Governor-General shall be by Commission which must be published in the official gazette of the Commonwealth. They also provide that a person appointed to be Governor-General shall take the oath or affirmation of allegiance. These acts are to be performed by the Chief Justice or another justice of the High Court. The ceremonial swearing-in of a new Governor-General has traditionally taken place in the Senate Chamber.
Administrator and Deputies
The Letters Patent relating to the office and the Constitution16 make provision for the appointment of an Administrator to administer the Government of the Commonwealth in the event of the death, incapacity, removal, or absence from Australia of the Governor-General (in effect an Acting Governor-General). As with the Governor-General, the Administrator is required to take the oath or affirmation of allegiance before the commission takes effect. The Crown’s commission is known as a dormant commission,17 only being invoked when necessary. An Administrator is not entitled to receive any salary from the Commonwealth in respect of any other office during the period of administration.18 More than one commission may exist at any one time. The Administrator may perform all the duties of the Governor-General under the Letters Patent and the Constitution during the Governor-General’s absence.19 A reference to the Governor-General in the standing orders includes an Administrator of the Commonwealth.20 There is a precedent for an Administrator calling Parliament together for a new session: Administrator Brooks did so in respect of the Third Session of the 23rd Parliament on 7 March 1961.21
The Constitution empowers the Crown to authorise the Governor-General to appoint Deputies to exercise, during the Governor-General’s pleasure, such powers and functions as the Governor-General thinks fit.22 The Letters Patent concerning the office contain more detailed provisions on the appointment of Deputies. State Governors considered to be more readily available in cases of urgency have been appointed as Deputies of the Governor-General with authority to exercise a wide range of powers and functions, including the making of recommendations with respect to the appropriation of revenues or moneys, the giving of assent to proposed laws and the making, signing or issuing of proclamations, orders, etc. on the advice of the Federal Executive Council.23 It is understood that these arrangements were introduced to ensure that urgent matters could be attended to in situations where, even though the Governor-General was in Australia, he or she was unavailable. The Governor-General traditionally also appoints a Deputy (usually the Chief Justice) to declare open a new Parliament. The same judge is also authorised to administer the oath or affirmation of allegiance to Members.24 Sometimes, when there are Senators to be sworn in as well, two judges may be commissioned with the authority to administer the oath or affirmation to Members and Senators.25
The Governor-General issues to a Speaker, once elected, a commission to administer the oath of allegiance to Members during the course of a Parliament.26 The Governor-General normally appoints the Vice-President of the Executive Council to be the Governor-General’s Deputy to summon meetings of the Executive Council and, in the Governor-General’s absence, to preside over meetings.27
In 1984 the Governor-General Act was amended to provide for the establishment of the statutory office of Official Secretary to the Governor-General.28 Annual reports of the Official Secretary have been presented to both Houses since 1985.29
Bagehot described the Crown’s role in England in the following classic statement:
To state the matter shortly, the sovereign has, under a constitutional monarchy such as ours, three rights—the right to be consulted, the right to encourage, the right to warn.30
In Australia, for all practical purposes, it is the Constitution which determines the nature and the exercise of the Governor-General’s powers and functions. In essence these powers can be divided into three groups—prerogative, legislative and executive.
Although since Federation it has been an established principle that the Governor-General in exercising the powers and functions of the office should only do so with the advice of his or her Ministers of State, the principle has not always been followed. This principle of responsible government is discussed further in the Chapter on ‘House, Government and Opposition’. The Constitution provides definite and limited powers, although in some cases the ways in which these powers may be exercised is not specified. The identification and range of prerogative powers are somewhat uncertain and have on occasions resulted in varying degrees of political and public controversy.
Quick and Garran defines prerogative powers as:
. . . matters connected with the Royal prerogative (that body of powers, rights, and privileges, belonging to the Crown at common law, such as the prerogative of mercy), or to authority vested in the Crown by Imperial statute law, other than the law creating the Constitution of the Commonwealth. Some of these powers and functions are of a formal character; some of them are purely ceremonial; others import the exercise of sovereign authority in matters of Imperial interests.31
To some extent this definition may be regarded as redundant or superfluous in modern times. However, the fact that the Constitution states, in some of its provisions, that the Governor-General may perform certain acts without any explicit qualification, while other provisions state that the Governor-General shall act ‘in Council’, suggests an element of discretion in exercising certain functions—that is, those in the first category. Quick and Garran states:
The first group includes powers which properly or historically belong to the prerogatives of the Crown, and survive as parts of the prerogative; hence they are vested in the Governor-General, as the Queen’s representative. The second group includes powers either of purely statutory origin or which have, by statute or custom, been detached from the prerogative; and they can, therefore, without any constitutional impropriety, be declared to be vested in the Governor-General in Council. But all those powers which involve the performance of executive acts, whether parts of the prerogative or the creatures of statute, will, in accordance with constitutional practice, as developed by the system known as responsible government, be performed by the Governor-General, by and with the advice of the Federal Executive Council . . . parliamentary government has well established the principle that the Crown can perform no executive act, except on the advice of some minister responsible to Parliament. Hence the power nominally placed in the hands of the Governor-General is really granted to the people through their representatives in Parliament. Whilst, therefore, in this Constitution some executive powers are, in technical phraseology, and in accordance with venerable customs, vested in the Governor-General, and others in the Governor-General in Council, they are all substantially in pari materia, on the same footing, and, in the ultimate resort, can only be exercised according to the will of the people.32
Modern references relating to the prerogative or discretionary powers of the Governor-General clarify this view in the interests of perspective. Sir Paul Hasluck made the following observations in a lecture given during his term as Governor-General:
The duties of the Governor-General are of various kinds. Some are laid on him by the Constitution, some by the Letters Patent and his Commission. Others are placed on him by Acts of the Commonwealth Parliament. Others come to him by conventions established in past centuries in Great Britain or by practices and customs that have developed in Australia.33
All of these duties have a common characteristic. The Governor-General is not placed in a position where he can run the Parliament, run the Courts or run any of the instrumentalities of government; but he occupies a position where he can help ensure that those who conduct the affairs of the nation do so strictly in accordance with the Constitution and the laws of the Commonwealth and with due regard to the public interest. So long as the Crown has the powers which our Constitution now gives to it, and so long as the Governor-General exercises them, Parliament will work in the way the Constitution requires, the Executive will remain responsible to Parliament, the Courts will be independent, the public service will serve the nation within the limits of the law and the armed services will be subject to civil authority.34
The dissolution of Parliament is an example of one of the matters in which the Constitution requires the Governor-General to act on his own. In most matters, the power is exercised by the Governor-General-in-Council, that is with the advice of the Federal Executive Council (in everyday language, with the advice of the Ministers meeting in Council).35
The Governor-General acts on advice, whether he is acting in his own name or as Governor-General-in-Council. He has the responsibility to weigh and evaluate the advice and has the opportunity of discussion with his advisers. It would be precipitate and probably out of keeping with the nature of his office for him to reject advice outright but he is under no compulsion to accept it unquestioningly. He has a responsibility for seeing that the system works as required by the law and conventions of the Constitution but he does not try to do the work of Ministers. For him to take part in political argument would both be overstepping the boundaries of his office and lessening his own influence.36
On 12 November 1975, following the dismissal of Prime Minister Whitlam, Speaker Scholes wrote to the Queen asking her to intervene and restore Mr Whitlam to office as Prime Minister in accordance with the expressed resolution of the House the previous day.37 On 17 November, the Queen’s Private Secretary, at the command of Her Majesty, replied, in part:
The Australian Constitution firmly places the prerogative powers of the Crown in the hands of the Governor-General as the representative of The Queen of Australia. The only person competent to commission an Australian Prime Minister is the Governor-General, and The Queen has no part in the decisions which the Governor-General must take in accordance with the Constitution. Her Majesty, as Queen of Australia, is watching events in Canberra with close interest and attention, but it would not be proper for her to intervene in person in matters which are so clearly placed within the jurisdiction of the Governor-General by the Constitution Act.38
Other than by recording the foregoing statements and discussing the question of dissolution (see below), it is not the intention of this text to detail the various constitutional interpretations as to the Governor-General’s discretionary powers. Based on informed opinion, the exercise of discretionary power by the Governor-General can be interpreted and regarded as conditional upon the following principal factors:
- the maintenance of the independent and impartial nature of the office is paramount;
- in the view of Quick and Garran the provisions of the Constitution vesting powers in the Governor-General are best read as being exercised ‘in Council’;
- the provisions of sections 61 and 62 of the Constitution (Federal Executive Council to advise the Governor-General in the government of the Commonwealth) are of significance and are interpreted to circumscribe discretions available to the Governor-General;
- the Statute of Westminster diminished to some extent the prerogative powers of the Crown in Australia;
- the reality that so many areas of power are directly or indirectly provided for in the Constitution;
- where discretions are available they are generally governed by constitutional conventions established over time as to how they may be exercised; and
- it is either a constitutional fact or an established constitutional convention that the Governor-General acts on the advice of Ministers in all but exceptional circumstances.
The act of dissolution brings to an end, at the same time, the duration of the House of Representatives and ipso facto the term of the Parliament.39 This alone means that the question of dissolution and how the power of dissolution is exercised is of considerable parliamentary importance because of the degree of uncertainty as to when and on what grounds dissolution may occur.40
The critical provision of the Constitution, in so far as its intention is concerned, is found in the words of section 28 ‘Every House of Representatives shall continue for three years from the first meeting of the House, and no longer’41 to which is added the proviso ‘but may be sooner dissolved by the Governor-General’. The actual source of the Governor-General’s power to dissolve is found in section 5, the effect and relevant words of which are that ‘The Governor-General may . . . by Proclamation or otherwise . . . dissolve the House of Representatives’.
While the Constitution vests in the Governor-General the power to dissolve the House, the criteria for taking this action are not prescribed and, therefore, they are matters generally governed by constitutional convention. In a real sense the exercise of the Crown’s power of dissolution is central to an understanding of prerogative powers and the nature of constitutional conventions.
As described earlier in this chapter, while it is the prerogative of the Crown to dissolve the House of Representatives, the exercise of the power is subject to the constitutional convention that it does so only on the advice and approval of a Minister of State, in practice the Prime Minister, directly responsible to the House of Representatives. The granting of dissolution is an executive act, the ministerial responsibility for which can be easily established.42
The nature of the power to dissolve and some of the historical principles, according to which the discretion is exercised, are illustrated by the following authoritative statements:
Of the legal power of the Crown in this matter there is of course no question. Throughout the Commonwealth . . . the King or his representative may, in law, grant, refuse or force dissolution of the Lower House of the Legislature . . . In legal theory the discretion of the Crown is absolute (though of course any action requires the consent of some Minister), but the actual exercise of the power is everywhere regulated by conventions.43
If a situation arises, however, in which it is proposed that the House be dissolved sooner than the end of its three-year term, the Governor-General has to reassure himself on other matters. This is an area for argument among constitutional lawyers and political historians and is a matter where the conventions and not the text of the Constitution are the chief guide. It is the function of the Prime Minister to advise that the House be dissolved. The most recent practices in Australia support the convention that he will make his proposal formally in writing supported by a written case in favour of the dissolution. It is open to the Governor-General to obtain advice on the constitutional question from other quarters—perhaps from the Chief Justice, the Attorney-General or eminent counsel—and then . . . a solemn responsibility rests on [the Governor-General] to make a judgment on whether a dissolution is needed to serve the purposes of good government by giving to the electorate the duty of resolving a situation which Parliament cannot resolve for itself.44
The right to dissolve the House of Representatives is reserved to the Crown. This is one of the few prerogatives which may be exercised by the Queen’s representative, according to his discretion as a constitutional ruler, and if necessary, a dissolution may be refused to responsible ministers for the time being.45
It is clear that it is incumbent on the Prime Minister to establish sufficient grounds for the need for dissolution, particularly when the House is not near the end of its three year term. The Governor-General makes a judgment on the sufficiency of the grounds. It is in this situation where it is generally recognised that the Governor-General may exercise a discretion not to accept the advice given.46
The grounds on which the Governor-General has accepted advice to dissolve the House of Representatives have not always been made public. It is reasonable to presume that no special reasons may be given to the Governor-General, or indeed are necessary, for a dissolution of the House if the House is near the end of its three year term.47
Table 1.1 Early Dissolutions of the House of Representatives
Dissolution date (a) | Parliament and duration | Reasons (b)
26 March 1917 | 6th: 2 years 5 months 19 days | To synchronise election of the House with election for half the Senate and to gain a mandate from the people prior to the forthcoming Imperial War Conference (H.R. Deb. (6.3.17) 10 993–11 000).
3 November 1919 | 7th: 2 years 4 months 21 days | Not given to House.
16 September 1929 | 11th: 7 months 11 days | The House amended the Maritime Industries Bill against the wishes of the Government. The effect of the amendment was that the bill should not be brought into operation until submitted to a referendum or an election. Prime Minister Bruce based his advice on the following: ‘The Constitution makes no provision for a referendum of this description, and the Commonwealth Parliament has no power to pass effective legislation for the holding of such a referendum. The Government is, however, prepared to accept the other alternative—namely a general election’ (H.R. Deb. (12.9.29) 873–4; correspondence read to House).
27 November 1931 | 12th: 2 years 8 days | The Government was defeated on a formal motion for the adjournment of the House. The Governor-General took into consideration ‘the strength and relation of various parties in the House of Representatives and the probability in any case of an early election being necessary’ (H.R. Deb. (26.11.31) 1926–7; correspondence read to House).
7 August 1934 | 13th: 2 years 5 months 22 days | Not given to House.
4 November 1955 | 21st: 1 year 3 months 1 day | To synchronise elections of the House with elections for half the Senate; the need to avoid conflict with State election campaigns mid-way through the ensuing year; the impracticability of elections in January or February; authority (mandate) to deal with economic problems (H.R. Deb. (26.10.55) 1895–6; Sir John Kerr, Matters for Judgment, pp. 153, 412).
1 November 1963 | 24th: 1 year 8 months 13 days | Prime Minister Menzies referred to the fact that the Government had gone close to defeat on five occasions; the need to obtain a mandate on policies concerning North West Cape radio station, the defence of Malaysia and the proposed southern hemisphere nuclear free zone (H.R. Deb. (15.10.63) 1790–5).
10 November 1977 | 30th: 1 year 8 months 25 days | To synchronise House election with election for half the Senate; to provide an opportunity to end election speculation and the resulting uncertainty and to enable the Government to seek from the people an expression of their will; to conform with the pattern of elections taking place in the latter months of a calendar year (H.R. Deb. (27.10.77) 2476–7; Kerr, pp. 403–15; Dissolution of the House of Representatives by His Excellency the Governor-General on 10 November 1977, PP 16 (1979)).
26 October 1984 | 33rd: 1 year 6 months 6 days | To synchronise elections for the House with election for half the Senate; claimed business community concerns that if there were to be an election in the spring it should be held as early as possible ending electioneering atmosphere etc., and to avoid two of seven Senators to be elected (because of the enlargement of Parliament) being elected without knowledge of when they might take their seats (as the two additional Senators for each State would not take their seats until the new and enlarged House had been elected and met) (H.R. Deb. (8.10.84) 1818–1820; correspondence tabled 9.10.84, VP 1983–84/954).
31 August 1998 | 38th: 2 years 4 months 1 day | Not given to House.
(a) A dissolution of the House of Representatives is counted as ‘early’ if the dissolution occurs six months or more before the date the House of Representatives is scheduled to expire by effluxion of time. The table does not include simultaneous dissolutions of both Houses granted by the Governor-General under s. 57 of the Constitution (see Ch. on ‘Disagreements between the Houses’).
(b) The reasons stated in the table may not be the only reasons advised or upon which dissolution was exclusively granted. On three occasions dissolution ended Parliaments of less than two years six months duration where reasons, if any, were not given to the House—for example, the House may not have been sitting at the time.
As far as is known, the majority of dissolutions have taken place in circumstances which presented no special features. Where necessary, it is a normal feature for the Governor-General to grant a dissolution on the condition and assurance that adequate provision, that is, parliamentary appropriation, is made for the Administration in all its branches to be carried on until the new Parliament meets.48
The precedents in Table 1.1 represent those ‘early’ dissolutions where the grounds, available from the public record, were sufficient for the Governor-General to grant a request for a dissolution. A feature of the precedents is that in 1917, 1955, 1977 and 1984 the grounds given included a perceived need to synchronise the election of the House of Representatives with a periodic election for half the Senate.
On 10 January 1918, following the defeat of a national referendum relating to compulsory military service overseas, Prime Minister Hughes informed the House that the Government had considered it its duty to resign unconditionally and to offer no advice to the Governor-General. A memorandum from the Governor-General setting out his views was tabled in the House:
On the 8th of January the Prime Minister waited on the Governor-General and tendered to him his resignation. In doing so Mr. Hughes offered no advice as to who should be asked to form an Administration. The Governor-General considered that it was his paramount duty (a) to make provision for carrying on the business of the country in accordance with the principles of parliamentary government, (b) to avoid a situation arising which must lead to a further appeal to the country within twelve months of an election resulting in the return of two Houses of similar political complexion, which are still working in unison. The Governor-General was also of the opinion that in granting a commission for the formation of a new Administration his choice must be determined solely by the parliamentary situation. Any other course would be a departure from constitutional practice, and an infringement of the rights of Parliament. In the absence of such parliamentary indications as are given by a defeat of the Government in Parliament, the Governor-General endeavoured to ascertain what the situation was by seeking information from representatives of all sections of the House with a view to determining where the majority lay, and what prospects there were of forming an alternative Government.
As a result of these interviews, in which the knowledge and views of all those he consulted were most freely and generously placed at his service, the Governor-General was of the opinion that the majority of the National Party was likely to retain its cohesion, and that therefore a Government having the promise of stability could only be formed from that section of the House. Investigations failed to elicit proof of sufficient strength in any other quarter. It also became clear to him that the leader in the National Party, who had the best prospect of securing unity among his followers and of therefore being able to form a Government having those elements of permanence so essential to the conduct of affairs during war, was the Right Honourable W. M. Hughes, whom the Governor-General therefore commissioned to form an Administration.49
A further case which requires brief mention is that of Prime Minister Fadden who resigned following a defeat in the House on 3 October 1941. According to Crisp the Prime Minister ‘apparently relieved the Governor-General from determining the issue involved in the request of a defeated Prime Minister by advising him, not a dissolution, but sending for the Leader of the Opposition, Curtin’.50
The Governor-General is known to have refused to accept advice to grant a dissolution on three occasions:51
- August 1904.52 The 2nd Parliament had been in existence for less than six months. On 12 August 1904 the Watson Government was defeated on an important vote in the House.53 On the sitting day following the defeat, Mr Watson informed the House that following the vote he had offered the Governor-General ‘certain advice’ which was not accepted. He had thereupon tendered the resignation of himself and his colleagues which the Governor-General accepted.54 Mr Reid was commissioned by the Governor-General to form a new Government.
- July 1905. The 2nd Parliament had been in existence for less than 16 months. On 30 June 1905 the Reid Government was defeated on an amendment to the Address in Reply.55 At the next sitting Mr Reid informed the House that he had requested the Governor-General to dissolve the House. The advice was not accepted and the Government resigned.56 Mr Deakin was commissioned by the Governor-General to form a new Government.
- June 1909. The 3rd Parliament had been in existence for over two years and three months. On 27 May 1909 the Fisher Government was defeated on a motion to adjourn debate on the Address in Reply.57 Mr Fisher subsequently informed the House that he had advised the Governor-General to dissolve the House and the Governor-General on 1 June refused the advice and accepted Mr Fisher’s resignation.58 Mr Deakin was commissioned by the Governor-General to form a new Government. In 1914 Mr Fisher, as Prime Minister, tabled the reasons for his 1909 application for a dissolution.
The advice of Prime Minister Fisher in the 1909 case consisted of a lengthy Cabinet minute which contained the following summary of reasons:
Your Advisers venture to submit, after careful perusal of the principles laid down by Todd and other writers on Constitutional Law, and by leading British statesmen, and the precedents established in the British Parliament and followed throughout the self-governing Dominions and States, that a dissolution may properly be had recourse to under any of the following circumstances:
(1) When a vote of ‘no confidence’, or what amounts to such, is carried against a Government which has not already appealed to the country.
(2) When there is reasonable ground to believe that an adverse vote against the Government does not represent the opinions and wishes of the country, and would be reversed by a new Parliament.
(3) When the existing Parliament was elected under the auspices of the opponents of the Government.
(4) When the majority against a Government is so small as to make it improbable that a strong Government can be formed from the Opposition.
(5) When the majority against the Government is composed of members elected to oppose each other on measures of first importance, and in particular upon those submitted by the Government.
(6) When the elements composing the majority are so incongruous as to make it improbable that their fusion will be permanent.
(7) When there is good reason to believe that the people earnestly desire that the policy of the Government shall be given effect to.
All these conditions, any one of which is held to justify a dissolution, unite in the present instance.59
According to Crisp ‘The Governor-General was unmoved by considerations beyond ‘‘the parliamentary situation’’ ’.60 Evatt offers the view that ‘certainly the action of the Governor-General proceeded upon a principle which was not out of accord with what had until then been accepted as Australian practice, although the discretion may not have been wisely exercised’.61
Functions in relation to the Parliament
The functions of the Governor-General in relation to the legislature are discussed in more detail elsewhere in the appropriate parts of the text. In summary the Governor-General’s constitutional duties (excluding functions of purely Senate application) are:
- appointing the times for the holding of sessions of Parliament (s. 5);
- proroguing and dissolving Parliament (s. 5);
- issuing writs for general elections of the House (in terms of the Constitution, exercised ‘in Council’) (s. 32);
- issuing writs for by-elections in the absence of the Speaker (in terms of the Constitution, exercised ‘in Council’) (s. 33);
- recommending the appropriation of revenue or money (s. 56);
- dissolving both Houses simultaneously (s. 57);
- convening a joint sitting of both Houses (s. 57);
- assenting to bills, withholding assent or reserving bills for the Queen’s assent (s. 58);
- recommending to the originating House amendments in proposed laws (s. 58); and
- submitting to electors proposed laws to alter the Constitution in cases where the two Houses cannot agree (s. 128).
The Crown in its relations with the legislature is characterised by formality, ceremony and tradition. For example, tradition dictates that the Sovereign should not enter the House of Representatives. Traditionally the Mace is not taken into the presence of the Crown.
It is the practice of the House to agree to a condolence motion on the death of a former Governor-General,62 but on recent occasions the House has not usually followed the former practice of suspending the sitting until a later hour as a mark of respect.63 In the case of the death of a Governor-General in office the sitting of the House has been adjourned as a mark of respect.64 An Address to the Queen has been agreed to on the death of a former Governor-General who was a member of the Royal Family,65 and references have been made to the death of a Governor-General’s close relative.66
During debate in the House no Member may use the name of the Queen, the Governor-General (or a State Governor) disrespectfully, or for the purpose of influencing the House in its deliberations.67 The practice of the House is that, unless the discussion is based upon a substantive motion which admits of a distinct vote of the House, reflections (opprobrious references) must not be cast in debate concerning the conduct of the Sovereign or the Governor-General, including a Governor-General designate. It is acceptable for a Minister to be questioned, without criticism or reflection on conduct, regarding matters relating to the public duties for which the Governor-General is responsible. (For more detail and related rulings see Chapter on ‘Control and conduct of debate’.)
On 2 March 1950 a question was directed to Speaker Cameron concerning a newspaper article alleging that during the formal presentation of the Address in Reply to the Governor-General’s Speech, the Speaker showed discourtesy to the Governor-General. Speaker Cameron said:
I am prepared to leave the judgment of my conduct at Government House to the honourable members who accompanied me there.68
Later, Speaker Cameron made a further statement to the House stating certain facts concerning the personal relationship between himself and the Governor-General. In view of this relationship, the Speaker had decided, on the presentation of the Address, to:
. . . treat His Excellency with the strict formality and respect due to his high office, and remove myself from his presence as soon as my duties had been discharged.69
In a previous ruling Speaker Cameron stated that ‘the name of the Governor-General must not be brought into debate either in praise or in blame’.70 Several Members required the Speaker to rule on this previous ruling in the light of his statement as to his conduct at Government House. Speaker Cameron replied that in his statement he had:
. . . made a statement of fact. I have made no attack upon His Excellency. I have simply stated the facts of certain transactions between us, and if the House considers that a reflection has been made on the Governor-General it has its remedy.71
Dissent from the Speaker’s ruling was moved and negatived after debate. Two sitting days later, the Leader of the Opposition moved that, in view of the Speaker’s statement, the House ‘is of opinion that Mr Speaker merits its censure’. The motion was negatived.72
Functions in relation to the Executive Government
The executive power of the Commonwealth is vested in the Queen, and is exercisable by the Governor-General as the Queen’s representative,73 the Queen’s role being essentially one of name only. Section 61 of the Constitution states two principal elements of executive power which the Governor-General exercises, namely, the execution and maintenance of the Constitution, and the execution and maintenance of the laws passed (by the Parliament) in accordance with the Constitution.
The Constitution, however, immediately provides that in the government of the Commonwealth, the Governor-General is advised by a Federal Executive Council,74 effecting the concept of responsible government. The Governor-General therefore does not perform executive acts alone but ‘in Council’, that is, acting with the advice of the Federal Executive Council.75 The practical effect of this is, as stated in Quick and Garran:
. . . that the Executive power is placed in the hands of a Parliamentary Committee, called the Cabinet, and the real head of the Executive is not the Queen but the Chairman of the Cabinet, or in other words the Prime Minister.76
Where the Constitution prescribes that the Governor-General (without reference to ‘in Council’) may perform certain acts, it can be said that these acts are also performed in practice with the advice of the Federal Executive Council in all but exceptional circumstances.
As Head of the Executive Government,77 in pursuance of the broad scope of power contained in section 61, the constitutional functions of the Governor-General, excluding those of historical interest, are summarised as follows:
- choosing, summoning and dismissing Members of the Federal Executive Council (s. 62);
- establishing Departments of State and appointing (or dismissing) officers to administer Departments of State (these officers are Members of the Federal Executive Council and known as Ministers of State) (s. 64);
- directing, in the absence of parliamentary provision, what offices shall be held by Ministers of State (s. 65);
- appointing and removing other officers of the Executive Government (other than Ministers of State or as otherwise provided by delegation or as prescribed by legislation) (s. 67); and
- acting as Commander-in-Chief of the naval and military forces (s. 68).
Functions in relation to the judiciary
The judicial power of the Commonwealth is vested in the High Court of Australia, and such other federal courts that the Parliament creates or other courts it invests with federal jurisdiction.78
The judiciary is the third element of government in the tripartite division of Commonwealth powers. The Governor-General is specifically included as a constituent part of the legislative and executive organs of power but is not part of the judiciary. While the legislature and the Executive Government have common elements which tend to fuse their respective roles, the judiciary is essentially independent. Nevertheless in terms of its composition it is answerable to the Executive (the Governor-General in Council) and also to the Parliament. The Governor-General in Council appoints justices of the High Court, and of other federal courts created by Parliament. Justices may only be removed by the Governor-General in Council on an address from both Houses praying for such removal on the ground of proved misbehaviour or incapacity.79
See also ‘The Courts and the Parliament’ at page 18.
Powers and Jurisdiction of the Houses
While the Constitution states that the legislative power of the Commonwealth is vested in the Queen, a Senate and a House of Representatives80 and, subject to the Constitution, that the Parliament shall make laws for the ‘peace, order, and good government of the Commonwealth’,81 the Parliament has powers and functions other than legislative. The legislative function is paramount but the exercise of Parliament’s other powers, which are of historical origin, are important to the understanding and essential to the working of Parliament.
Section 49 of the Constitution states:
The powers, privileges, and immunities of the Senate and of the House of Representatives, and of the members and the committees of each House, shall be such as are declared by the Parliament, and until declared shall be those of the Commons House of Parliament of the United Kingdom, and of its members and committees, at the establishment of the Commonwealth.
In 1987 the Parliament enacted comprehensive legislation under the head of power constituted by section 49. The Parliamentary Privileges Act 1987 provides that, except to the extent that the Act expressly provides otherwise, the powers, privileges and immunities of each House, and of the Members and the committees of each House, as in force under section 49 of the Constitution immediately before the commencement of the Act, continue in force. The provisions of the Act are described in detail in the Chapter on ‘Parliamentary privilege’ . In addition, the Parliament has enacted a number of other laws in connection with specific aspects of its operation, for example, the Parliamentary Precincts Act, the Parliamentary Papers Act and the Parliamentary Proceedings Broadcasting Act.
The significance of these provisions is that they give to both Houses considerable authority in addition to the powers which are expressly stated in the Constitution. The effect on the Parliament is principally in relation to its claim to the ‘ancient and undoubted privileges and immunities’ which are necessary for the exercise of its constitutional powers and functions.82
It is important to note that in 1704 it was established that the House of Commons (by itself) could not create any new privilege;83 but it could expound the law of Parliament and vindicate its existing privileges. Likewise neither House of the Commonwealth Parliament could create any new privilege for itself, although the Parliament could enact legislation to such an end. The principal powers, privileges and immunities of the House of Commons at the time of Federation (thus applying in respect of the Commonwealth Parliament until the Parliament ‘otherwise provided’) are summarised in Quick and Garran.
It should be noted that some of the traditional rights and immunities enjoyed by virtue of s. 49 have been modified since 1901—for instance, warrants for the committal of persons must specify the particulars determined by the House to constitute an offence, neither House may expel its members, and the duration of the immunity from arrest in civil causes has been reduced.84
Section 50 of the Constitution provides that:
Each House of the Parliament may make rules and orders with respect to:
(i.) The mode in which its powers, privileges, and immunities may be exercised and upheld:
(ii.) The order and conduct of its business and proceedings either separately or jointly with the other House.
The first part of this section enables each House to deal with procedural matters relating to its powers and privileges and, accordingly, the House has adopted a number of standing orders relating to the way in which its powers, privileges and immunities are to be exercised and upheld. These cover such matters as the:
- procedure in matters of privilege (S.O.s 51–53);
- power to order attendance or arrest (S.O.s 93, 96);
- power to appoint committees (S.O.s 214–224);
- power of summons (S.O.s 236, 249, 254);
- issues to do with evidence (S.O.s 236, 237, 242, 255); and
- protection of witnesses (S.O. 256).
The second part enables each House to make rules and orders regulating the conduct of its business. A comprehensive set of standing orders has been adopted by the House and these orders may be supplemented from time to time by way of sessional orders and special resolutions.
Section 50 confers on each House the absolute right to determine its own procedures and to exercise control over its own internal proceedings. The House has in various areas imposed limits on itself—for example, by the restrictions placed on Members in its rules of debate. Legislation has been enacted to remove the power of the House to expel a Member.
The legislative function of the Parliament is its most important and time-consuming. The principal legislative powers of the Commonwealth exercised by the Parliament are set out in sections 51 and 52 of the Constitution. However, the legislative powers of these sections cannot be regarded in isolation as other constitutional provisions extend, limit, restrict or qualify their provisions.
The distinction between the sections is that section 52 determines areas within the exclusive jurisdiction of the Parliament, while the effect of section 51 is that the itemised grant of powers includes a mixture of exclusive powers and powers exercised concurrently with the States. For example, some of the powers enumerated in section 51:
- did not belong to the States prior to 1901 (for example, fisheries in Australian waters beyond territorial limits) and for all intents and purposes may be regarded as exclusive to the Federal Parliament;
- were State powers wholly vested in the Federal Parliament (for example, bounties on the production or export of goods); or
- are concurrently exercised by the Federal Parliament and the State Parliaments (for example, taxation, except customs and excise).
In keeping with the federal nature of the Constitution, powers in areas of government activity not covered by section 51, or elsewhere by the Constitution, have been regarded as remaining within the jurisdiction of the States, and have been known as the ‘residual powers’ of the States.
It is not the purpose of this text to detail the complicated nature of the federal legislative power under the Constitution.85 However, the following points are useful for an understanding of the legislative role of the Parliament:
- as a general rule, unless a grant of power is expressly exclusive under the Constitution, the powers of the Commonwealth are concurrent with the continuing powers of the States over the same matters;
- sections, other than sections 51 and 52, grant exclusive power to the Commonwealth—for example, section 86 (customs and excise duties);
- section 51 operates ‘subject to’ the Constitution—for example, section 51(i) (Trade and Commerce) is subject to the provisions of section 92 (Trade within the Commonwealth to be free);
- section 51 must be read in conjunction with sections 106, 107, 108 and 109—for example, section 109 prescribes that in the case of any inconsistency between a State law and a Commonwealth law the Commonwealth law shall prevail to the extent of the inconsistency;
- the Commonwealth has increasingly used section 96 (Financial assistance to States) to extend its legislative competence—for example, in areas such as education, health and transport. This action has been a continuing point of contention and has led to changing concepts of federalism;
- section 51(xxxvi) recognises Commonwealth jurisdiction over 22 sections of the Constitution which include the provision ‘until the Parliament otherwise provides’—for example, section 29 (electoral matters). Generally they are provisions relating to the parliamentary and executive structure and, in most cases, the Parliament has taken action to alter these provisions;86
- section 51(xxxix) provides power to the Parliament to make laws on matters incidental to matters prescribed by the Constitution. This power, frequently and necessarily exercised, has been put to some significant uses—for example, jurisdictional powers and procedure of the High Court, and legislation concerning the operation of the Parliament;87
- section 51(xxix), the ‘external affairs power’, has been relied on effectively to extend the reach of the Commonwealth Parliament’s legislative power into areas previously regarded as within the responsibility of the States (in the Tasmanian Dams Case (1983) the High Court upheld a Commonwealth law enacted to give effect to obligations arising from a treaty entered into by the Federal Government);88
- section 51 itself has been altered on two occasions, namely, in 1946 when paragraph (xxiiiA) was inserted and in 1967 when paragraph (xxvi) was altered;89
- the Commonwealth has been granted exclusive (as against the States) legislative power in relation to any Territory by section 122, read in conjunction with section 52;
- the Federal Parliament on the other hand is specifically prohibited from making laws in respect of certain matters—for example, in respect of religion by section 116; and
- in practice Parliament delegates much of its legislative power to the Executive Government. Acts of Parliament frequently delegate to the Governor-General (that is, the Executive Government) a regulation making power for administrative purposes. However, regulations and other legislative instruments must be laid before Parliament, which exercises ultimate control by means of its power of disallowance.90
The Courts and Parliament
The Constitution deliberately confers great independence on the federal courts of Australia. At the same time the Parliament plays a considerable role in the creation of courts, investing other courts with federal jurisdiction, prescribing the number of justices to be appointed to a particular court, and so on. In the scheme of the Constitution, the courts and the Parliament provide checks and balances on each other.
With the exception of the High Court which is established by the Constitution, federal courts depend on Parliament for their creation.91 The Parliament may provide for the appointment of justices to the High Court additional to the minimum of a Chief Justice and two other justices.92 As prescribed by Parliament, the High Court now consists of a Chief Justice and six other justices.93
The appointment of justices of the High Court and of other courts created by the Parliament is made by the Governor-General in Council. Justices of the High Court may remain in office until they attain the age of 70 years. The maximum age for justices of any court created by the Parliament is 70 years, although the Parliament may legislate to reduce this maximum.94 Justices may only be removed from office by the Governor-General in Council, on an address from both Houses of the Parliament in the same session, praying for such removal on the ground of proved misbehaviour or incapacity95 (for discussion of the meaning of ‘misbehaviour’ and ‘incapacity’ see p. 20). A joint address under this section may originate in either House although Quick and Garran suggests that it would be desirable for the House of Representatives to take the initiative.96 There is no provision for appeal against removal.97 There has been no case in the Commonwealth Parliament of an attempt to remove a justice of the High Court or other federal court. However, the conduct of a judge has been investigated by Senate committees and a Parliamentary Commission of Inquiry (see below). It may be said that, in such matters, as in cases of an alleged breach of parliamentary privilege or contempt, the Parliament may engage in a type of judicial procedure.
The appellate jurisdiction (i.e. the hearing and determining of appeals) of the High Court is laid down by the Constitution but is subject to such exceptions and regulations as the Parliament prescribes,98 providing that:
. . . no exception or regulation prescribed by the Parliament shall prevent the High Court from hearing and determining any appeal from the Supreme Court of a State in any matter in which at the establishment of the Commonwealth an appeal lies from such Supreme Court to the Queen in Council.99
The Parliament may make laws limiting the matters in which leave of appeal to Her Majesty in Council (the Privy Council) may be asked.100 Laws have been enacted to limit appeals to the Privy Council from the High Court101 and to exclude appeals from other federal courts and the Supreme Courts of Territories.102 Special leave of appeal to the Privy Council from a decision of the High Court may not be asked in any matter except where the decision of the High Court was given in a proceeding that was commenced in a court before the date of commencement of the Privy Council (Appeals from the High Court) Act on 8 July 1975, other than an inter se matter (as provided by section 74). The possibility of such an appeal has been described as ‘a possibility so remote as to be a practical impossibility’.103 Section 11 of the Australia Act 1986 provided for the termination of appeals to the Privy Council from all ‘Australian courts’ defined as any court other than the High Court.
The Constitution confers original jurisdiction on the High Court in respect of certain matters104 with which the Parliament may not interfere other than by definition of jurisdiction.105 The Parliament may confer additional original jurisdiction on the High Court106 and has done so in respect of ‘all matters arising under the Constitution or involving its interpretation’ and ‘trials of indictable offences against the laws of the Commonwealth’.107
Sections 77–80 of the Constitution provide Parliament with power to:
- define the jurisdiction of the federal courts (other than the High Court);
- define the extent to which the jurisdiction of any federal court (including the High Court) shall be exclusive of the jurisdiction of State courts;
- invest any State court with federal jurisdiction;
- make laws conferring rights to proceed against the Commonwealth or a State;
- prescribe the number of judges to exercise the federal jurisdiction of any court; and
- prescribe the place of any trial against any law of the Commonwealth where the offence was not committed within a State.
Parliamentary Commission of Inquiry
The Parliament established, by legislation, a Parliamentary Commission of Inquiry in May 1986.108 The commission’s function was to inquire and advise the Parliament whether any conduct of the Honourable Lionel Keith Murphy (a High Court judge) had been such as to amount, in its opinion, to proved misbehaviour within the meaning of section 72 of the Constitution.
The Act provided for the commission to consist of three members to be appointed by resolutions of the House and the Senate. A person could not be a member unless he or she was or had been a judge, and the resolutions had to provide for one member to be the Presiding Member. Three members were appointed, one as the Presiding Member.109 Staff were appointed under the authority of the Presiding Officers.
Accounts of the 1984 Senate committee inquiries leading to the establishment of the Commission, and of the operation of the Commission and the course of its inquiry are given at pages 21–26 of the second edition.
In August 1996, following a special report to the Presiding Officers relating to the terminal illness of the judge,110 the inquiry was discontinued and the Act establishing the Commission repealed. The repealing Act also contained detailed provisions for the custody of documents in the possession of the commission immediately before the commencement of the repeal Act.
The meaning of ‘misbehaviour’ and ‘incapacity’
Prior to the matters arising in 1984–86, little had been written about the meaning of section 72. Quick and Garran had stated:
Misbehaviour includes, firstly, the improper exercise of judicial functions; secondly, wilful neglect of duty, or non-attendance; and thirdly, a conviction for any infamous offence, by which, although it be not connected with the duties of his office, the offender is rendered unfit to exercise any office or public franchise. (Todd, Parl. Gov. in Eng., ii. 857, and authorities cited.)
‘Incapacity’ extends to incapacity from mental or bodily infirmity, which has always been held to justify the termination of an office held during good behaviour . . . The addition of the word does not therefore alter the nature of the tenure of good behaviour, but merely defines it more accurately.
No mode is prescribed for the proof of misbehaviour or incapacity, and the Parliament is therefore free to prescribe its own procedure. Seeing, however, that proof of definite legal breaches of the conditions of tenure is required, and that the enquiry is therefore in its nature more strictly judicial than in England, it is conceived that the procedure ought to partake as far as possible of the formal nature of a criminal trial; that the charges should be definitely formulated, the accused allowed full opportunities of defence, and the proof established by evidence taken at the Bar of each House.111
In an opinion published with the report of the Senate Select Committee on the Conduct of a Judge, the Commonwealth Solicitor-General stated, inter alia:
Misbehaviour is limited in meaning in section 72 of the Constitution to matters pertaining to—
(1) judicial office, including non-attendance, neglect of or refusal to perform duties; and
(2) the commission of an offence against the general law of such a quality as to indicate that the incumbent is unfit to exercise the office.
Misbehaviour is defined as breach of condition to hold office during good behaviour. It is not limited to conviction in a court of law. A matter pertaining to office or a breach of the general law of the requisite seriousness in a matter not pertaining to office may be found by proof, in appropriate manner, to the Parliament in proceedings where the offender has been given proper notice and opportunity to defend himself.112
Mr C. W. Pincus QC, in an opinion also published by the committee, stated on the other hand:
As a matter of law, I differ from the view which has previously been expressed as to the meaning of section 72. I think it is for Parliament to decide whether any conduct alleged against a judge constitutes misbehaviour sufficient to justify removal from office. There is no ‘technical’ relevant meaning of misbehaviour and in particular it is not necessary, in order for the jurisdiction under section 72 to be enlivened, that an offence be proved.113
The Presiding Officers presented a special report from the Parliamentary Commission of Inquiry containing reasons for a ruling on the meaning of ‘misbehaviour’ for the purposes of section 72.114 Sir George Lush stated, inter alia:
. . . my opinion is that the word ‘misbehaviour’ in section 72 is used in its ordinary meaning, and not in the restricted sense of ‘misconduct in office’. It is not confined, either, to conduct of a criminal nature.
The view of the meaning of misbehaviour which I have expressed leads to the result that it is for Parliament to decide what is misbehaviour, a decision which will fall to be made in the light of contemporary values. The decision will involve a concept of what, again in the light of contemporary values, are the standards to be expected of the judges of the High Court and other courts created under the Constitution. The present state of Australian jurisprudence suggests that if a matter were raised in addresses against a judge which was not on any view capable of being misbehaviour calling for removal, the High Court would have power to intervene if asked to do so.115
Sir Richard Blackburn stated:
All the foregoing discussion relates to the question whether ‘proved misbehaviour’ in section 72 of the Constitution must, as a matter of construction, be limited as contended for by counsel. In my opinion the reverse is correct. The material available for solving this problem of construction suggests that ‘proved misbehaviour’ means such misconduct, whether criminal or not, and whether or not displayed in the actual exercise of judicial functions, as, being morally wrong, demonstrates the unfitness for office of the judge in question. If it be a legitimate observation to make, I find it difficult to believe that the Constitution of the Commonwealth of Australia should be construed so as to limit the power of the Parliament to address for the removal of a judge, to grounds expressed in terms which in one eighteenth-century case were said to apply to corporations and their officers and corporators, and which have not in or since that case been applied to any judge.116
Mr Wells stated:
. . . the word ‘misbehaviour’ must be held to extend to conduct of the judge in or beyond the execution of his judicial office, that represents so serious a departure from standards of proper behaviour by such a judge that it must be found to have destroyed public confidence that he will continue to do his duty under and pursuant to the Constitution.
. . . Section 72 requires misbehaviour to be ‘proved’. In my opinion, that word naturally means proved to the satisfaction of the Houses of Parliament whose duty it is to consider whatever material is produced to substantiate the central allegations in the motion before them. The Houses of Parliament may act upon proof of a crime, or other unlawful conduct, represented by a conviction, or other formal conclusion, recorded by a court of competent jurisdiction; but, in my opinion, they are not obliged to do so, nor are they confined to proof of that kind. Their duty, I apprehend, is to evaluate all material advanced; to give to it, as proof, the weight it may reasonably bear; and to act accordingly.
According to entrenched principle, there should, in my opinion, be read into section 72 the requirement that natural justice will be administered to a judge accused of misbehaviour . . .117
The courts as a check on the power of Parliament
In the constitutional context of the separation of powers, the courts, in their relationship to the Parliament, provide the means whereby the Parliament may be prevented from exceeding its constitutional powers. Wynes writes:
The Constitution and laws of the Commonwealth being, by covering Cl. V. of the Constitution Act, ‘binding on the Courts, judges and people of every State and of every part of the Commonwealth’, it is the essential function and duty of the Courts to adjudicate upon the constitutional competence of any Federal or State Act whenever the question falls for decision before them in properly constituted litigation.118
Original jurisdiction in any matter arising under the Constitution or involving its interpretation has been conferred on the High Court by an Act of Parliament,119 pursuant to section 76(i) of the Constitution. The High Court does not in law have any power to veto legislation and it does not give advisory opinions120 but in deciding between litigants in a case it may determine that a legislative enactment is unconstitutional and of no effect in the circumstances of the case. On the assumption that in subsequent cases the court will follow its previous decision (not always the case121) a law deemed ultra vires becomes a dead letter.
The power of the courts to interpret the Constitution and to determine the constitutionality and validity of legislation gives the judiciary the power to determine certain matters directly affecting the Parliament and its proceedings. The range of High Court jurisdiction in these matters can be seen from the following cases:122
- Petroleum and Minerals Authority case123—The High Court ruled that the passage of the Petroleum and Minerals Authority Bill through Parliament had not satisfied the provisions of section 57 of the Constitution and was consequently not a bill upon which the joint sitting of 1974 could properly deliberate and vote, and thus that it was not a valid law of the Commonwealth.124
- McKinlay’s case125—The High Court held that (1) sections 19, 24 and 25 of the Commonwealth Electoral Act 1918, as amended, did not contravene section 24 of the Constitution and (2) whilst sections 3, 4 and 12(a) of the Representation Act 1905, as amended, remained in their present form, the Representation Act was not a valid law by which the Parliament otherwise provides within the meaning of the second paragraph of section 24 of the Constitution.
- McKellar’s case126—The High Court held that a purported amendment to section 10 of the Representation Act 1905, contained in the Representation Act 1964, was invalid because it offended the precepts of proportionality and the nexus with the size of the Senate as required by section 24 of the Constitution.
- Postal allowance case127—The High Court held that the operation of section 4 of the Parliamentary Allowances Act 1952 and provisions of the Remuneration Tribunals Act 1973 denied the existence of an executive power to increase the level of a postal allowance—a ministerial decision to increase the allowance was thus held to be invalid.
It should be noted that the range of cases cited is not an indication that either House has conceded any role to the High Court, or other courts, in respect of its ordinary operations or workings. In Cormack v. Cope the High Court refused to grant an injunction to prevent a joint sitting convened under section 57 from proceeding (there was some division as to whether a court had jurisdiction to intervene in the legislative process before a bill had been assented to). The joint sitting proceeded, and later the Court considered whether, in terms of the Constitution, one Act was validly enacted.128
Jurisdiction of the courts in matters of privilege
By virtue of section 49 of the Constitution the powers, privileges and immunities of the House of Representatives were, until otherwise declared by the Parliament, the same as those of the House of Commons as at 1 January 1901. The Parliamentary Privileges Act 1987 constituted a declaration of certain ‘powers, privileges and immunities’, but section 5 provided that, except to the extent that the Act expressly provided otherwise, the powers, privileges and immunities of each House, and the members and committees of each House, as in force under section 49 of the Constitution immediately before the commencement of the Act, continued in force.
As far as the House of Commons is concerned, the origin of its privileges lies in either the privileges of the ancient High Court of Parliament (before the division into Commons and Lords) or in later law and statutes; for example, Article 9 of the Bill of Rights of 1688129 declares what is perhaps the basic privilege:
That the freedom of speech and debates or proceedings in Parliament ought not to be impeached or questioned in any court or place out of Parliament.
This helped establish the basis of the relationship between the House of Commons and the courts. However a number of grey areas remained, centring on the claim of the House of Commons to be the sole and exclusive judge of its own privilege, an area of law which it maintained was outside the ambit of the ordinary courts and which the courts could not question. The courts maintained, on the contrary, that the lex et consuetudo parliamenti (the law and custom of Parliament) was part of the law of the land and that they were bound to decide any question of privilege arising in a case within their jurisdiction and to decide it according to their own interpretation of the law. Although there is a wide field of agreement between the House of Commons and the courts on the nature and principles of privilege, questions of jurisdiction are not wholly resolved.130
In the Commonwealth Parliament, the raising, consideration and determination of complaints of breach of privilege or contempt occurs in each House. The Houses are able to impose penalties for contempt, although some recourse to the courts could be possible. Section 9 of the Parliamentary Privileges Act 1987 requires that where a House imposes a penalty of imprisonment for an offence against that House, the resolution imposing the penalty and the warrant committing the person to custody must set out the particulars of the matters determined by the House to constitute the offence. The effect of this provision is that a person committed to prison could seek a court determination as to whether the offence alleged to constitute a contempt was in fact capable of constituting a contempt.
These matters are dealt with in more detail in the Chapter on ‘Parliamentary privilege’.
The right of Parliament to the service of its Members in priority to the claims of the courts
This is one of the oldest of parliamentary privileges, from which derive Members’ immunity from arrest in civil proceedings and their exemption from attendance as witnesses and from jury service.
Members of Parliament are immune from arrest or detention in a civil cause on sitting days of the House of which the person is a Member, on days on which a committee of which the person is a member meets and on days within five days before and after such days.131
Section 14 of the Parliamentary Privileges Act also grants an immunity to Senators and Members from attendance before courts or tribunals for the same periods as the immunity from arrest in civil causes. In the House of Commons it has been held on occasions that the service of a subpoena on a Member to attend as a witness was a breach of privilege.132 When such matters have arisen the Speaker has sometimes written to court authorities asking that the Member be excused. An alternative would be for the House to grant leave to a Member to attend.
By virtue of the Jury Exemption Act, Members of Parliament are not liable, and may not be summoned, to serve as jurors in any Federal, State or Territory court.133
For a more detailed treatment of this subject see Chapter on ‘Parliamentary privilege’.
Attendance of parliamentary employees in court or their arrest
Section 14 of the Parliamentary Privileges Act provides that an officer of a House shall not be required to attend before a court or tribunal, or arrested or detained in a civil cause, on a day on which a House or a committee upon which the officer is required to attend meets, or within five days before or after such days.
Standing order 253 provides that an employee of the House, or other staff employed to record evidence before the House or any of its committees, may not give evidence relating to proceedings or the examination of a witness without the permission of the House.
A number of parliamentary employees are exempted from attendance as jurors in Federal, State and Territory courts.134 Exemption from jury service has been provided on the basis that certain employees have been required to devote their attention completely to the functioning of the House and its committees.
See also Chapters on ‘Documents’, ‘Parliamentary committees’ and ‘Parliamentary privilege’.
Parliament and the courts—other matters
Other matters involving the relationship between Parliament and the courts which require brief mention are:
- Interpretation of the Constitution. In 1908, the Speaker ruled:
. . . the obligation does not rest upon me to interpret the Constitution . . . the only body fully entitled to interpret the Constitution is the High Court . . . Not even this House has the power finally to interpret the terms of the Constitution.135
This ruling has been generally followed by all subsequent Speakers.
- The sub judice rule. It is the practice of the House that matters awaiting or under adjudication in a court of law should not be brought forward in debate. This rule is sometimes applied to restrict discussion on current proceedings before a royal commission, depending on its terms of reference and the particular circumstances. In exercising a discretion in applying the sub judice rule the Speaker makes decisions which involve the inherent right of the House to inquire into and debate matters of public importance while at the same time ensuring that the House does not set itself up as an alternative forum to the courts or permit the proceedings of the House to interfere with the course of justice.136
- Reflections on the judiciary. Standing order 89 provides, inter alia, that a Member must not use offensive words against a member of the judiciary.137
- The legal efficacy of orders and resolutions of the House. This is discussed in the Chapter on ‘Motions’.
There is no limit to the power to amend the Constitution provided that the restrictions applying to the mode of alteration are met.138 However, there is considerable room for legal dispute as to whether the power of amendment extends to the preamble and the preliminary clauses of the Constitution Act itself.139
The Constitution, from which Parliament obtains its authority, cannot be changed by Parliament alone, although some provisions, such as sections 46–49, while setting out certain detail, are qualified by phrases such as ‘until the Parliament otherwise provides’, thus allowing the Parliament to modify, supplement or alter the initial provision. To change the Constitution itself, a majority vote of the electors of the Commonwealth, and of the electors in a majority of the States, at a referendum is also required. The Constitution itself, expressing as it does the agreement of the States to unite into a Federal Commonwealth, was originally agreed to by the people of the States at referendum.140 The process of constitutional alteration commences with the Houses of Parliament.
A proposal to alter the Constitution may originate in either House of the Parliament by means of a bill. Normally, the bill must be passed by an absolute majority of each House but, in certain circumstances, it need only be passed by an absolute majority of one House.141 Subject to the absolute majority provision, the passage of the bill is the same as for an ordinary bill.142 (The House procedures for the passage of constitution alteration bills are covered in the Chapter on ‘Legislation’.)
The short title of a bill proposing to alter the Constitution, in contradistinction to other bills, does not contain the word ‘Act’ during its various stages, for example, the short title is in the form Constitution Alteration (Establishment of Republic) 1999. While the proposed law is converted to an ‘Act’ after approval at referendum and at the point of assent, in a technical sense it is strictly a constitution alteration and its short title remains unchanged.
Constitution alteration bills passed by one House only
If a bill to alter the Constitution passes one House and the other House rejects or fails to pass it, or passes it with any amendment to which the originating House will not agree, the originating House, after an interval of three months in the same or next session, may again pass the bill in either its original form or in a form which contains any amendment made or agreed to by the other House on the first occasion. If the other House again rejects or fails to pass the bill or passes it with any amendment to which the originating House will not agree, the Governor-General may submit the bill as last proposed by the originating House, either with or without any amendments subsequently agreed to by both Houses, to the electors in each State and Territory. The words ‘rejects or fails to pass, etc.’ are considered to have the same meaning as those in section 57 of the Constitution.143
In June 1914 six constitution alteration bills which had been passed by the Senate in December 1913, but not by the House of Representatives, were again passed by the Senate.144 The bills were sent to the House, which took no further action after the first reading.145 After seven days the Senate, by means of an Address, requested the Governor-General to submit the proposed laws to the electors.146 Acting on the advice of his Ministers, the Governor-General refused the request.147
Odgers put the view that, after only a short period following the sending of the bills to the House of Representatives, the Senate felt competent to declare that they had failed to pass the other House.148 The view of Lumb and Moens has been that, as there had been no ‘rejection’ or ‘amendment’ of the bills in the House of Representatives, the only question was whether there had been a failure to pass them; that there had been no ‘failure to pass’ by the House; and that therefore the condition precedent for holding a referendum had not been fulfilled.149
The circumstances of this case were unusual as a proposed double dissolution had been announced,150 and the Prime Minister had made it clear that the bills would be opposed and their discussion in the House of Representatives would not be facilitated.151 It was also significant that referendums had been held in May 1913 on similar proposals and were not approved by the electors.
Similar bills were again introduced in 1915 and on this occasion passed both Houses.152 Writs for holding referendums were issued on 2 November 1915. The Government subsequently decided not to proceed with the referendums (see below).
During 1973 a similar situation arose in respect of four bills passed by the House of Representatives. Three of them were not passed by the Senate and the fourth was laid aside by the House when the Senate insisted on amendments which were not acceptable to the House.153 After an interval of three months (in 1974), the House again passed the bills which were rejected by the Senate.154 Acting on the advice of his Ministers, the Governor-General, in accordance with section 128 of the Constitution, submitted the bills to the electors where they failed to gain approval.155
Constitution alteration bills not submitted to referendum
In some cases constitution alteration bills have not been submitted to the people, despite having satisfied the requirements of the ‘parliamentary stages’ of the necessary process. The history of the seven constitution alteration bills of 1915 is outlined above. These were passed by both Houses and submitted to the Governor-General, and writs were issued. When it was decided not to proceed with the proposals, a bill was introduced and passed to provide for the withdrawal of the writs and for other necessary actions.156 In 1965 two constitution alteration proposals, having been passed by both Houses, were deferred, but on this occasion writs had not been issued. When a question was raised as to whether the Government was not ‘flouting . . . the mandatory provisions of the Constitution’, the Prime Minister stated, inter alia, ‘. . . the advice of our own legal authorities was to the effect that it was within the competence of the Government to refrain from the issue of the writ’.157 In 1983 five constitution alteration bills were passed by both Houses, but the proposals were not proceeded with.158 Section 7 of the Referendum (Machinery Provisions) Act 1984 now provides that whenever a proposed law for the alteration of the Constitution is to be submitted to the electors, the Governor-General may issue a writ for the submission of the proposed law.
In the case of a bill having passed through both Houses, if a referendum is to be held the bill must be submitted to the electors in each State and Territory159 not less than two nor more than six months after its passage. The bill is presented to the Governor-General for the necessary referendum arrangements to be made.160 Voting is compulsory. If convenient, a referendum is held jointly with an election for the Senate and/or the House of Representatives. The question put to the people for approval is the constitutional alteration as expressed in the long title of the bill.161
The Referendum (Machinery Provisions) Act 1984 contains detailed provisions relating to the submission to the electors of constitution alteration proposals. It covers, inter alia, the form of a ballot paper and writ, the distribution of arguments for and against proposals, voting, scrutiny, the return of writs, disputed returns and offences. The Act places responsibility for various aspects of the conduct of a referendum on the Electoral Commissioner, State Electoral Officers and Divisional Returning Officers. The interpretation of provisions of the Referendum (Machinery Provisions) Act came before the High Court in 1988, when a declaration was made that the expenditure of public moneys on two advertisements was, or would be, a breach of subsection 11(4) of the Act. Arguments were accepted that certain words used in two official advertisements, which were said to be confined to an encouragement to the electors to be aware of the issues in the impending referendums, in fact promoted aspects of the argument in favour of the proposed laws, that is, in favour of the ‘yes’ case.162
If the bill is approved by a majority of the electors in a majority of the States, that is, at least four of the six States, and also by a majority of all the electors who voted, it is presented to the Governor-General for assent.163 However, if the bill proposes to alter the Constitution by diminishing the proportionate representation of any State in either House, or the minimum number of representatives of a State in the House of Representatives, or altering the limits of the State,164 the bill shall not become law unless the majority of electors voting in that State approve the bill. This means that the State affected by the proposal must be one of the four (or more) States which approve the bill.
An Act to alter the Constitution comes into operation on the day on which it receives assent, unless the contrary intention appears in the Act.165
Distribution to electors of arguments for and against proposed constitutional alterations
The Referendum (Machinery Provisions) Act makes provision for the distribution to electors, by the Australian Electoral Commission, of arguments for and against proposed alterations. The ‘Yes’ case is required to be authorised by a majority of those Members of the Parliament who voted in favour of the proposed law and the ‘No’ case by a majority of those Members of the Parliament who voted against it.166 In the case of the four constitution alteration bills of 1974, which were passed by the House of Representatives only and before the enactment of the Referendum (Machinery Provisions) Act provisions, the Government provided by administrative arrangement for ‘Yes’ and ‘No’ cases to be distributed, the ‘No’ case being prepared by the Leader of the Opposition in the House of Representatives.167
Dispute over validity of referendum
The validity of any referendum or of any return or statement showing the voting on any referendum may be disputed by the Commonwealth, by any State or by the Northern Territory, by petition addressed to the High Court within a period of 40 days following the gazettal of the referendum results.168 The Electoral Commission may also file a petition disputing the validity of a referendum.169 Pending resolution of the dispute or until the expiration of the period of 40 days, as the case may be, the bill is not presented for assent.
Of the 44 referendums170 submitted to the electors since Federation, eight have been approved. Of those which were not approved, 31 received neither a favourable majority of electors in a majority of States nor a favourable majority of all electors, while the remaining five achieved a favourable majority of all electors but not a favourable majority of electors in a majority of States.
The eight constitution alterations which gained the approval of the electors were submitted in 1906, 1910, 1928, 1946, 1967 and 1977 (three). The successful referendums were approved by majorities in every State, with the exception that New South Wales alone rejected the Constitution Alteration (State Debts) Bill submitted in 1910.
The proposals of 1906, 1910, 1946, 1974 and 1984 were submitted to the electors concurrently with general elections.
Successful referendums relating to the electoral and parliamentary processes have been:
- Constitution Alteration (Senate Elections) 1906. This was the first constitutional referendum. It altered section 13 to cause Senators’ terms to commence in July instead of January.
- Constitution Alteration (Senate Casual Vacancies) 1977. This provided that, where possible, a casual vacancy in the Senate should be filled by a person of the same political party as the Senator chosen by the people and for the balance of the Senator’s term.
- Constitution Alteration (Referendums) 1977. This provided for electors in the Territories to vote at referendums on proposed laws to alter the Constitution.
The Constitution Alteration (Mode of Altering the Constitution) Bill 1974 sought to amend section 128 in order to facilitate alterations to the Constitution but was rejected by the electors. The intention of the amendment was to alter the provision that a proposed law has to be approved by a majority of electors ‘in a majority of the States’ (four States) and, in its stead, provide that a proposed law has to be approved by a majority of electors ‘in not less than one-half of the States’ (three States). The further requirement that a proposed law has to be approved by ‘a majority of all the electors voting’ was to be retained.
Proposals rejected by the electors which have specifically related to the parliamentary and electoral processes have included:
- Constitution Alteration (Parliament) 1967. This proposal was intended to amend section 24 by removing the requirement that the number of Members shall be, as nearly as practicable, twice the number of Senators. Other than by breaking this ‘nexus’, an increase in the number of Members can only be achieved by a proportionate increase in the number of Senators, regardless of existing representational factors applying to the House of Representatives only.
- Constitution Alteration (Simultaneous Elections) 1974 and 1977. These proposals were intended to ensure that at least half of the Senate should be elected at the same time as an election for the House of Representatives. It was proposed that the term of a Senator should expire upon the expiration, or dissolution, of the second House of Representatives following the first election of the Senator. The effective result of this proposal was that a Senator’s term of office, without facing election, would be for a period less than the existing six years.
- Constitution Alteration (Democratic Elections) 1974. This proposal was intended to write into the Constitution provisions aimed at ensuring that Members of the House and of the State Parliaments are elected directly by the people, and that representation is more equal and on the basis of population and population trends.
- Constitution Alteration (Terms of Senators) 1984. This proposal sought to make Senators’ terms equal to two terms of the House and to ensure that Senate and House elections were held on the same day.
- Constitution Alteration (Parliamentary Terms) 1988. This proposal sought to extend the maximum term of the House of Representatives from three years to four years, beginning with the 36th Parliament. It also proposed that the terms of all Senators would expire upon the expiry or dissolution of the House of Representatives, that is, the ‘continuity’ achieved from the half-Senate election cycle would have been ended, and Senators would have been elected as for a double dissolution election. The practical effect of the bill was to establish a maximum four-year term and elections for both Houses of Parliament on the same day.
- Constitution Alteration (Fair Elections) 1988. This proposal sought, inter alia, to incorporate in the Constitution a requirement concerning a maximum ten percent tolerance (above or below the relevant average) in the number of electors at elections for the Commonwealth and State Parliaments and for mainland Territory legislatures.
- Constitution Alteration (Establishment of Republic) 1999. This proposal sought to establish the Commonwealth of Australia as a republic with the Queen and Governor-General being replaced by a President appointed by a two-thirds majority of the members of the Commonwealth Parliament.
Referendums for other purposes
Referendums, other than for purposes of constitution alteration, were held in 1916 and 1917. These referendums related to the introduction of compulsory military service and were rejected by the people. The first was authorised by an Act of Parliament171 and the second was held pursuant to regulations made under the War Precautions Act.172
In May 1977, concurrent with the constitution alteration referendums then being held, electors were asked, in a poll as distinct from a referendum,173 to express on a voluntary basis their preference for the tune of a national song to be played on occasions other than Regal and Vice-Regal occasions.
Review of the Constitution
In August 1927 the Government appointed a royal commission to inquire into and report upon the powers of the Commonwealth under the Constitution and the working of the Constitution since Federation. The report was presented to Parliament in November 1929174 but did not bring any positive results. In 1934 a Conference of Commonwealth and State Ministers on Constitutional Matters was held but little came of it.175 In 1942 a Convention of Government and Opposition Leaders and Members from both Commonwealth and State Parliaments met in Canberra to discuss certain constitutional matters in relation to post-war reconstruction. They made significant progress and approved a draft bill transferring certain State powers, including control of labour, marketing, companies, monopolies and prices, from the States to the Commonwealth Government. However only two of the State Parliaments were prepared to approve the bill.176
The next major review of the Constitution was conducted by a joint select committee of the Parliament, first appointed in 1956.177 The committee presented its first report in 1958178 and a final report in 1959.179 The report made many significant recommendations, but no constitutional amendments resulted in the short term.
Recommendations of the committee which were submitted some years later to the people at referendum were:
- to enable the number of Members of the House to be increased without necessarily increasing the number of Senators (1967);
- to enable Aboriginals to be counted in reckoning the population (1967);
- to ensure that Senate elections are held at the same time as House of Representatives elections (1974 and 1977);
- to facilitate alterations to the Constitution (1974);
- to ensure that Members of the House are chosen directly and democratically by the people (1974); and
- to ensure, so far as practicable, that a casual vacancy in the Senate is filled by a person of the same political party as the Senator chosen by the people (1977).
In 1970 the Victorian Parliament initiated a proposal to convene an Australian Constitutional Convention. Following agreement by the States to the proposal and the inclusion of the Commonwealth in the proposed convention, the first meeting took place at Sydney in 1973 and was followed by further meetings of the convention at Melbourne (1975), Hobart (1976) and Perth (1978). The convention agreed to a number of proposals for the alteration of the Constitution, some of which were submitted to the people at the referendums of 1977. The referendums on Simultaneous Elections, Referendums, and the Retirement of Judges were the subject of resolutions of the convention at meetings held in Melbourne and Hobart.
In 1985 the Commonwealth Government announced the establishment of a Constitutional Commission to report on the revision of the Constitution. It consisted of five members (a sixth resigning upon appointment to the High Court) and it operated by means of five advisory committees, covering the Australian judicial system, the distribution of powers, executive government, individual and democratic rights, and trade and national economic management. A series of background papers was published by the commission and papers and reports were prepared by the advisory committees.180 The commission’s first report was presented on 10 May 1988, and a summary was presented on 23 May 1988.181 The commission’s review and report preceded the presentation of four constitution alteration bills, dealing respectively with parliamentary terms, elections, local government, and rights and freedoms.182
In 1991 the Constitutional Centenary Foundation was established with the purposes of encouraging education and promoting public discussion, understanding and review of the Australian constitutional system in the decade leading to the centenary of the Constitution.183
In 1993 Prime Minister Keating established the Republic Advisory Committee with the terms of reference of producing an options paper describing the minimum constitutional changes necessary to achieve a republic, while maintaining the effect of existing conventions and principles of government. The committee’s report An Australian republic—the options was tabled in the House on 6 October 1993.184
In February 1998 the Commonwealth Government convened a Constitutional Convention to consider whether Australia should become a republic and models for choosing a head of state. Delegates (152—half elected, half appointed by the Government) met for two weeks in Canberra in Old Parliament House. The Convention also debated related issues, including proposals for a new preamble to the Constitution. The Convention supported an in-principle resolution that Australia should become a republic, and recommended that the model, and other related changes, supported by the Convention be put to the Australian people at a referendum. Constitution alteration bills for the establishment of a republic and for the insertion of a preamble followed in 1999, with those concerning the proposed republic being referred to a joint select committee for an advisory report. All the proposals were unsuccessful at referendum.
Aspects of the Role of the House of Representatives
The bicameral nature of the national legislature reflects the federal nature of the Commonwealth. The House of Representatives was seen by Quick and Garran in 1901 as embodying the national aspect and the Senate the State aspect of the federal duality.185
It has been said that the federal part of the Australian Parliament is the Senate which, being the organ of the States, links them together as integral parts of the federal union. Thus, the Senate is the Chamber in which the States, considered as separate entities and corporate parts of the Commonwealth, are represented.186 The (original) States have equal representation in the Senate, irrespective of great discrepancies in population size.
On the other hand the House of Representatives is the national branch of the Federal Parliament in which the people are represented in proportion to their numbers—that is, each Member represents an (approximately) equal number of voters. In this sense the House may be said to be not only the national Chamber but also the democratic Chamber.187 Quick and Garran stated ‘its operation and tendency will be in the direction of unification and consolidation of the people into one integrated whole, irrespective of State boundaries, State rights or State interests’.188 Thus, the House of Representatives is the people’s House and the inheritance of responsible government, through the Cabinet system, is the most significant characteristic attaching to it.
The framers of our Constitution, almost as a matter of course, took the Westminster model of responsible government (influenced by the colonial experience and by the experience of the United States of America189) and fitted it into the federal scheme. Thus the role and functions of the House of Representatives are direct derivatives of the House of Commons, principal features being the system of Cabinet Government and the traditional supremacy of the lower House in financial matters.
The notion of responsible government is embodied in the structure and functions of the House of Representatives.190 That party or coalition of parties which commands a majority in the House is entitled to form the Government. From this group emerge the Prime Minister and the major portion of the Ministry, usually more than 75 per cent. This fact, and certain provisions of the Constitution concerning legislation, mean that most legislation originates in the House of Representatives, and this emphasises its initiating and policy roles as distinct from the review role of the Senate.
In Australia the legal power to initiate legislation is vested in the legislature and nowhere else. In practice the responsibility falls overwhelmingly to one group within the legislature—the Ministry. However there are checks and balances and potential delays (which may sometimes be regarded as obstruction) in the legislative process because of the bicameral nature of the legislature, and these have particular importance when the party or coalition with a majority in the House does not have a majority in the Senate.
The Ministry is responsible for making and defending government decisions and legislative proposals. There are few important decisions made by the Parliament which are not first considered by the Government. However, government proposals are subject to parliamentary scrutiny which is essential in the concept of responsible government. The efficiency and effectiveness of a parliamentary democracy is in some measure dependent on the effectiveness of the Opposition; the more effective the Opposition, the more responsible and thorough the Government must become in its decision making.
The nature of representation in the Senate, the voting system used to elect Senators and the fact that only half the Senators are elected each third year may cause the Senate to reflect a different electoral opinion from that of the House. The House reflects, in its entirety, the most recent political view of the people and is the natural vehicle for making or unmaking governments. Jennings emphasises the role of the lower House in the following way:
The fact that the House of Commons is representative, that most of the ministers and most of the leading members of Opposition parties are in that House, and that the Government is responsible to that House alone, gives the Commons a great preponderance of authority. The great forum of political discussion is therefore in the Lower House.191
In Parliaments in the Westminster tradition the greater financial power is vested in the lower House. The modern practice in respect of the House of Commons’ financial privileges is based upon principles expressed in resolutions of that House as long ago as 1671 and 1678:
That in all aids given to the King by the Commons, the rate or tax ought not to be altered by the Lords; . . .
That all aids and supplies, and aids to his Majesty in Parliament, are the sole gift of the Commons; and all bills for the granting of any such aids and supplies ought to begin with the Commons; and that it is the undoubted and sole right of the Commons to direct, limit, and appoint in such bills the ends, purposes, considerations, conditions, limitations, and qualifications of such grants, which ought not to be changed or altered by the House of Lords.192
These principles are reflected in a modified way in the Australian Constitution. The Constitution was framed to express the traditional right of the lower House, the representative House, to initiate financial matters,193 to prevent the Senate from amending certain financial bills and to prevent the Senate from amending any proposed law so as to increase any proposed charge or burden on the people.194 In all other respects the Constitution gives to the Senate equal power with the House of Representatives in respect of all proposed laws, including the power of rejection.
Independence of the Houses
Each House functions as a distinct and independent unit within the framework of the Parliament. The right inherent in each House to exclusive cognisance of matters arising within it has evolved through centuries of parliamentary history195 and is made clear in the provisions of the Constitution.
The complete autonomy of each House, within the constitutional and statutory framework existing at any given time, is recognised in regard to:
- its own procedure;
- questions of privilege and contempt; and
- control of finance, staffing, accommodation and services.196
This principle of independence characterises the formal nature of inter-House communication. Communication between the Houses may be by message,197 by conference,198 or by committees conferring with each other.199 The two Houses may also agree to appoint a joint committee operating as a single body and composed of members of each House.200
Contact between the Houses reaches its ultimate point in the merging of both in a joint sitting. In respect of legislative matters this can occur only under conditions prescribed by the Constitution and when the two Houses have failed to reach agreement.201
The standing orders of both the House and the Senate contain particular provisions with respect to the attendance of Members and employees before the other House or its committees. Should the Senate request by message the attendance of a Member before the Senate or any committee of the Senate, the House may immediately authorise the Member to attend, if the Member thinks fit. If a similar request is received in respect of an employee, the House may, if it thinks fit, instruct the employee to attend.202 In practice, there have been instances of Members and employees appearing as witnesses before Senate committees, in a voluntary capacity, without the formality of a message being sent to the House.203 Senators have appeared before the House Committee of Privileges, the Senate having given leave for them to appear, after having received a message from the House on the matter.204 In 2001 the Senate authorised Senators to appear before the committee ‘subject to the rule, applied in the Senate by rulings of the President, that one House of the Parliament may not inquire into or adjudge the conduct of a member of the other House’.205
As an expression of the principle of independence of the Houses, the Speaker took the view in 1970 that it would be parliamentarily and constitutionally improper for a Senate estimates committee to seek to examine the financial needs or commitments of the House of Representatives.206 In similar manner the House of Representatives estimates committees, when they operated, did not examine the proposed appropriations for the needs of the Senate.
As a further expression of the independence of the Houses it had been a traditional practice of each House not to refer to its counterpart by name but as ‘another place’ or ‘Members of another place’. The House agreed to remove the restriction on direct reference to the Senate and Senators in 1970 following a recommendation by the Standing Orders Committee.207 The standing orders prescribe however that a Member must not use offensive words against either House of the Parliament or a Member of the Parliament.208
Functions of the House
The principal functions of the House, and the way in which they are expressed and carried out, can be summarised under the following headings.209
The Government—Making and unmaking
It is accepted that the House of Representatives, which reflects the current opinion of the people at an election, is the appropriate House in which to determine which party or coalition of parties should form government. Thus the party or coalition of parties which commands a majority in the House assumes the Government and the largest minority party (or coalition of parties) the Opposition.
Within this framework resides the power to ‘unmake’ a Government should it not retain the confidence and support of a majority of the Members of the House. To enable a Government to stay in office and have its legislative program supported (at least in the House), it is necessary that Members of the government party or parties support the Government, perhaps not uncritically, but support it on the floor of the House on major issues. Party discipline is therefore an important factor in this aspect of the House’s functions.
A principal role of the House is to examine and criticise, where necessary, government action, with the knowledge that the Government must ultimately answer to the people for its decisions. It has been a Westminster convention and a necessary principle of responsible government that a Government defeated on the floor of the House on a major issue should resign or seek a dissolution of the House. Such a defeat would indicate prima facie that a Government had lost the confidence of the House, but there is no fixed definition of what is a matter of confidence. If a defeat took place on a major matter, modern thinking is that the Government would be entitled to seek to obtain a vote on a motion of confidence in order to test whether in fact it still had the confidence of the House. Defeat on a minor or procedural matter may be acknowledged, but not lead to further action, the Government believing that it still possessed the confidence of the House.
The Government has been defeated on the floor of the House of Representatives on a major issue on eight occasions since Federation, following which either the Government resigned or the House was dissolved. The most recent cases were in 1929 (the Bruce–Page Government), 1931 (the Scullin Government), and 1941 (the Fadden Government).210 On 11 November 1975, immediately following the dismissal of the Whitlam Government, the newly appointed caretaker Government was defeated on a motion which expressed a want of confidence in Prime Minister Fraser and requested the Speaker to advise the Governor-General to call the majority leader (Mr Whitlam) to form a government. However, within the next hour and a half both Houses were dissolved and the resolution of the House could not be acted on.211
The fact that the power of the House to ‘unmake’ a Government is rarely exercised does not lessen the significance of that power. Defeat of the Government in the House has always been and still is possible. It is the ultimate sanction of the House in response to unacceptable policies and performance. In modern times, given the strength of party discipline, defeat of a Government on a major issue in the House would most likely indicate a split within a party or a coalition, or in a very finely balanced House the withdrawal of key support.
The initiation and consideration of legislation
Section 51 of the Constitution provides that the Parliament has the power to make laws for the peace, order, and good government of the Commonwealth with respect to specified matters. The law-making function of Parliament is one of its most basic functions. The Senate and the House have substantially similar powers in respect of legislation, and the consideration of proposed laws occupies a great deal of the time of each House. Because of the provisions of the Constitution with respect to the initiation of certain financial legislation and the fact that the majority of Ministers are Members of the House of Representatives, the vast majority of bills introduced into the Parliament originate in the House of Representatives.
The right to govern carries with it the right to propose legislation. Private Members of the Government may be consulted on legislative proposals either in the party room or through the system of party committees. The result of these consultations may determine the extent to which the Government is willing to proceed on a policy issue or a course of executive action. In addition, the Opposition plays its role in suggesting changes to existing and proposed legislation. Some suggestions may be accepted by the Government immediately or taken up either in the Senate or at a later date.
Seeking information on and clarification of government policy
The accountability of the Government to Parliament is pursued principally through questions, on and without notice, directed to Ministers concerning the administration of their departments, during debates of a general nature—for example, the Budget and Address in Reply debates—during debates on specific legislation, or by way of parliamentary committee inquiry.
The aim of parliamentary questioning and inquiry is to seek information, to bring the Government to account for its actions, and to bring into public view possible errors or failings or areas of incompetence or maladministration.
Surveillance, appraisal and criticism of government administration
Debate takes place on propositions on particular subjects, on matters of public importance, and on motions to take note of documents including those moved in relation to ministerial statements dealing with government policy or matters of ministerial responsibility. Some of the major policy debates, such as on defence, foreign affairs and the economy, take place on motions of this kind. Historically, opportunities for private Members to raise matters and initiate motions which may seek to express an opinion of the House on questions of administration were limited, but these increased significantly in 1988.212
It is not possible for the House to oversee every area of government policy and executive action. However the House may be seen as an essential safeguard and a corrective means over excessive, corrupt or extravagant use of executive power.213 From time to time the Opposition may move a specific motion expressing censure of or no confidence in the Government. If a motion of no confidence were carried, the Government would be expected to resign. A specific motion of censure of or no confidence in a particular Minister or Ministers may also be moved. The effect of carrying such a motion against a Minister may be inconclusive as far as the House is concerned as any further action would be in the hands of the Prime Minister. However a vote against the Prime Minister, depending on circumstances, would be expected to have serious consequences for the Government.214
Consideration of financial proposals and examination of public accounts
In accordance with the principle of the financial initiative of the Executive, the Government has the right to initiate or move to increase appropriations and taxes, but it is for the House to make decisions on government proposals and the House has the right to make amendments which will reduce a proposed appropriation or tax or to reject a proposal. Amendments to certain financial proposals may not be made by the Senate, but it may request the House to make amendments.
The appropriation of revenue and moneys is dependent on a recommendation by the Governor-General to the House of Representatives. Traditionally the Treasurer has been a Member of the House. Reflecting this, the government front bench in the House, now commonly known as the ministerial bench, was in past times referred to as the Treasury bench.
It is the duty of the House to ensure that public money is spent in accordance with parliamentary approval and in the best interests of the taxpayer. The responsibility for scrutinising expenditure is inherent in the consideration of almost any matter which comes before the House. The most significant means by which the Government is held to account for its expenditure occurs during the consideration of the main Appropriation Bill each year. However the examination of public administration and accounts has to some extent been delegated to committees215 which have the means and time available for closer and more detailed scrutiny (and see below).
The consideration of specific matters by a selected group of Members of the House is carried out by the use of standing and select committees, which is now an important activity of a modern Parliament and a principal means by which the House performs some of its functions, such as the examination of government administration. In 1987 the House took a significant step in establishing a comprehensive system of general purpose standing committees, empowered to inquire into and report upon any matter referred to them by either the House or a Minister, including any pre-legislation proposal, bill, motion, petition, vote or expenditure, other financial matter, report or document (see Chapter on ‘Parliamentary committees’).
The Public Accounts and Audit Committee, a joint statutory committee, is required to examine the accounts of the receipts and expenditure of the Commonwealth and each statement and report made by the Auditor-General. As is the case with other committees, inquiries undertaken by the committee result in the presentation of reports to the Parliament. The Public Works Committee, also a joint statutory committee, considers and reports on whether proposed public works referred to it for investigation should be approved, taking into account, inter alia, the financial aspects.
Ventilation of grievances and matters of interest or concern
The provision of opportunities for the raising by private Members of particular matters—perhaps affecting the rights and liberties of individuals, or perhaps of a more general nature—is an important function of the House. Opportunities for raising these matters occur principally during periods for private Members’ business, Members’ statements, grievance debates, debates on the motion for the adjournment of the House or the Main Committee, and during debates on the Budget and the Address in Reply. Outside the House Members may make personal approaches to Ministers and departments regarding matters raised by constituents or other matters on which they require advice or seek attention.216
Petitions from citizens requesting action by the House are lodged by Members with the Clerk of the House who announces a summary of their content to the House, or may be presented directly by Members themselves. The subject matter of a petition is then referred to the appropriate Minister for information. Any ministerial response is also reported to the House.217
Examination of delegated legislation
Regulations and other forms of subordinate legislation made by the Government pursuant to authority contained in an Act of the Parliament must be tabled in both Houses. A notice of motion for the disallowance of any such delegated legislation may be submitted to the House by any Member. Disallowance is then automatic after a certain period, unless the House determines otherwise. The Senate Standing Committee on Regulations and Ordinances plays a major role in overseeing delegated legislation.218
Prerequisites for fulfilling functions
The exigencies of politics, the needs of the Government in terms of time, and its power of control of the House, have resulted in the evolution of a parliamentary system which reflects the fact that, while the will of the Government of the day will ultimately prevail in the House, the House consists of representatives of the people who will not hesitate to speak for the people and communities they represent. A responsible Government will keep the House informed of all major policy and administrative decisions it takes. A responsible Opposition will use every available means to ensure that it does. However, the effective functioning of the House requires a continual monitoring and review of its own operations and procedure. The forms of procedure and the way in which they are applied have an important effect on the relationship between the Government and the House. The Procedure Committee has presented reports on many aspects of the work of the House and its committees and has dealt with the issue of community involvement. It has sought to contribute to the maintenance and strengthening of the House’s capacity to perform its various functions.
Back to top | http://www.aph.gov.au/About_Parliament/House_of_Representatives/Powers_practice_and_procedure/practice/chapter1 | 13 |
20 | Posted on 29 March 2013.
First grade students imagined that they were Christopher Columbus. As an explorer, they needed a way to sail across the ocean to discover new lands. They were instructed to use limited resources to design, plan, and construct a ship. Then they would test their ships in an actual water race where they selected the type of force they would use to get the ship to move. At the start of the project, the students were told to find 1-2 partners. They did research in the library using books and websites to find out how ships are constructed and how they move as well as answering other questions they had. Next, they conducted an experiment on Sinking/Floating to see which materials would work best for constructing their ship. Once they had selected a design and their materials, they drew a labeled diagram using Pixie. Then they built their ship with the help of the art teacher. They tested their ships in the water table to see if they would float and if they could achieve straight motion with the type of force they had chosen. They made adjustments to their ships based on their reflections and comparisons with the other voyagers’ experiments. Finally, we had the Great Ship Race where students raced their ships in a gutter full of water. They documented the race with the iPad video cameras. They rated their own and each others’ ships using a rubric. They also shared their ships with other first grade classes.
This project scores in the Ideal/Target range of Research & Information Fluency.
-Students created their own questions to assist with research.
-Students used a variety of resources: books, websites, etc.
-Students evaluated the resources based on appropriateness and quality to their project using a rubric.
-Students used various types of experts to expand their questioning and revise their projects.
-Students conducted experiments to test the designs of their ships and made adjustments accordingly.
This project scores in the Ideal/Target range of Communication & Collaboration.
-Students chose their own groups and assigned each other roles.
-iPad videos and interviews were used to get ideas from other groups.
-Students collaborated to research (using websites/videos), design (using Pixie), and construct their ships.
-Students’ reflections were communicated through Pixie, iPads, and rating themselves with rubrics.
-Students were able to use the media specialist and art teacher as expert sources.
-Videos and pictures of their projects were posted to the classroom blog and their diagrams were posted to Comemories.
This project scores in the Ideal/Target range of Critical Thinking/Problem Solving.
-Students conducted a floating/sinking experiment prior to construction to plan their design.
-Students regularly revised their ideas based on updated information.
-Students were presented with a real-life problem related to Social Studies and Science.
-Students were required to choose from a limited number of materials (or they could trade with each other).
-Students reflected on their experiences individually and with their partners using a rubric.
-The class reflected on the entire project and reviewed the answers to the questions posed.
This project scores in the Ideal/Target range of Creativity/Innovation.
-Students created their own boats using materials they chose and decided what type of force to use to make their ships move.
-Students were encouraged to take risks and try new things that would help their ship succeed in the race.
-Students were creative in managing their resources since they could only select 5 or they could trade with other students.
-Students evaluated the creative process afterwards using a rubric.
- Lesson Plan (Word)
- Reflection Questions (PDF)
- Resource Evaluation Rubric (PDF)
- Supply List (PDF)
- Ship Race Evaluation Rubric (PDF)
- Copies of student boat diagrams
Posted in Comm/Collab - Target, Creativity - Target, Critical Thinking - Target, Elementary School, Info Fluency - Target, Math, Project, Science, Social Studies
Posted on 20 March 2013.
This school is implementing the “Leader in Me” character education program, so for this project, students studied a famous American and predicted how that person would show the 7 Habits at their school. Students were grouped into pairs and decided which famous American they wanted to research (Helen Keller, Jackie Robinson, Martin L. King Jr., Abraham Lincoln, George Washington or Susan B. Anthony). The students then used PebbleGo, BrainPop Jr., biographies, or any other source to gather information. They notated and evaluated their sources, then they took their research and completed a four square planning sheet for their presentation. Next, partners decided what digital program to use to present their research online. Pixie and Comic Life were their top choices since those were the two programs they had learned so far this year. The students took turns putting their research into the comic strip or Pixie. They had to include the famous American’s contribution, one new fact, how that person would be a leader at the school, and any other interesting facts of their choice. The finished projects were presented to the class and published online via Flipsnack. Afterwards they evaluated how well they worked as partners by filling out a Partner Work Reflection Sheet.
This project scores in the Approaching range of Research and Information Fluency. The teacher modeled how to read a book and gather research about a Famous American. Students worked together to gather research from multiple sources (online and print) to fill out a four square organizer in order to make sure they had all of the necessary information. They also recorded and rated their sources.
This project scores in the Approaching range of Communication & Collaboration. The students worked together in pairs choosing what famous American to research and what type of digital program to use to show their research. Their work was published online for others outside the classroom to access. Students reflected on their roles using the partner work reflection worksheet.
This project scores in the Developing range of Critical Thinking and Problem Solving. Students had to think of how the 7 Habits from the “Leader in Me” program were displayed in the life of their famous American. They had to apply what they have been learning in character education to a new historical situation and predict how their person would be a leader at their school. Their project was authentic because reinforces the school-wide “Leader in Me” program, and it will be used as an example of how the school is implementing the “Leader in Me” program.
This project scores in the Developing range of Creativity and Innovation. Students could choose which digital tool they wanted to use to display their information. They were able to select their own pictures and special effects. They predicted how their character would respond in a new situation.
Posted in Comm/Collab - App, Creativity - Dev, Critical Thinking - Dev, Elementary School, Info Fluency - App, Social Studies
Posted on 20 March 2013.
The objective of this lesson is for small groups of students to collaborate to create layered rhythm vocal ostinatos (repeating rhythm phrases) using puppets, based on a topic of their choice. Students will create ostinatos using repeated words or sounds in synchronization with puppet movements to create layers of sound patterns that compliment and contrast with the others in their group. During the course of this activity, students will become more confident and creative in their performances as shown through better voice projection, increased complexity of rhythms with movement, and increased sharing of ideas within their groups. They will plan, practice, perform and share these performances using various technology devices. Their performances will also be posted to Vimeo.
This project scores in the Developing range of Research & Information Fluency. The students used a reliable professional source (the symphony website) as well as each others’ projects to get ideas and improve upon them. The lesson builds on their research of rhythm using the symphony website and prior experience reading and playing rhythm patterns in instrument groups. Students have the choice of using various technology tools to record their performances, including iPads, Flip cameras and laptops.
This project scores in the Approaching range of Communication & Collaboration. Students worked in collaborative groups to create their ostinato vocal rhythms and puppet movements. They evaluated their performances and revised them in order to make them more complex. They chose what digital tool to use to record their performance and they posted the video of their projects online.
This project scores in the Developing range of Critical Thinking & Problem Solving. Students had to think about ways to make their ostinatos more complex by adding additional layers, turning simple words into longer phrases, and adjusting their puppets’ movements. By studying the recordings of their performances, students evaluated each other and themselves based on many applicable criteria: creativity, balance, contrast, rhythm, teamwork.
This project scores in the Developing range of Creativity & Innovation. All groups created the same product (puppet shows), but they were encouraged to think of new rhythms and topics for their ostinatos, making more than just one. This lesson provides multiple opportunities to plan, create, perform and share. It synthesizes the talents of students with different learning styles and abilities to create a new group experience. Each group performs for the class numerous times resulting in significant growth in the areas of rhythm, technology, teamwork and creativity.
Posted in Comm/Collab - App, Creativity - Dev, Critical Thinking - Dev, Elementary School, Info Fluency - Dev, Music, Project
Posted on 19 March 2013.
In groups of three, students chose a topic from the science curriculum that was taught during the first semester. Students individually researched their sub-topics, developed a plan using 4-Square, and wrote an expository paper. Students then collaborated with their group to plan their presentations on their topics. Students recorded the information that they would share during their presentations on note cards and worked with their peers to develop an appropriate visual display. They then presented their information to the class. They evaluated their projects and presentations with a rubric.
This project scores in the Approaching range of Research & Information Fluency. Students constructed questions to guide their research, they selected their own research tools, they rated their sources and research skills using a rubric, and organized their own information in a meaningful way.
This project scores in the Developing range of Communication & Collaboration. Students worked in groups selected by teachers. They chose their topics (within the scope of our science curriculum for the first semester) and worked together to create a presentation for the class using a digital tool of their choice.
This project scores in the Developing range of Critical Thinking & Problem Solving. The students used technology to come up with a new and creative way to present their information to the class. Several students used more than one program to create the visual components of their presentations. For example, students used Garageband to add sounds to their Keynotes. Students also had to decide which information was important to share and what order the group members would share their facts. Students evaluated their presentations using a rubric.
This project scores in the Developing range of Creativity & Innovation. Students chose the information they wanted to share with their classmates, and they also chose the tools they would use for their presentation. They created an interesting, entertaining way to review the semester science curriculum with their class. They also rated their creative process using a rubric.
- Lesson Plan (Word)
- Assignment Guidelines (Word)
- Research Guide (Word)
- 2 Evaluation Rubrics (PDF)
- 3 Examples of Student Papers (PDF)
- 3 Examples of Student Projects (Keynote)
Posted in Comm/Collab - Dev, Creativity - Dev, Critical Thinking - Dev, Elementary School, Info Fluency - App, Science
Posted on 19 March 2013.
Students review various computer programs. Students read various traditional fairy tales and fractured fairy tales. Students get in groups to write their own fractured fairy tales, following the steps of the writing process. Groups choose which program they would like to use to present their writings. Groups create storyboards on construction paper to plan out their presentation. Students use their chosen program to create a presentation for their writings.
This project scores in the Developing range of Research & Information Fluency. Students researched fractured fairy tales that had been selected by the teacher and the school media specialist. They analyzed and extended the ideas in those stories to create their own fractured fairy tales. They also evaluated and rated the stories using a rubric
This project scores in the Approaching range of Communication & Collaboration. Students worked in collaborative groups to decide which type of fairy tale to create and which digital tool to use to effectively communicate their fairy tale to an audience. Fairy tales were published online and classmates provided feedback. Students reflected on their group work using a rubric afterwards.
This project scores in the Developing range of Critical Thinking & Problem Solving. Students had to work together to decide which elements of the fairy tale could be changed and which ones needed to remain in order to keep the core narrative recognizable. Students evaluated their own, as well their classmates,’ fractured fairy tales using a rubric.
This project scores in the Developing range of Creativity and Innovation. Students were given the opportunity to choose which fairy tale to adapt and which digital tool to use. They were given creative license to change whatever aspect of the fairy tale they wanted as long as their new story retained certain recognizable features. They published their creations online and evaluated their creativity with a rubric.
Posted in Comm/Collab - App, Creativity - Dev, Critical Thinking - Dev, Elementary School, Info Fluency - Dev, Language Arts
Posted on 14 March 2013.
Families visiting Three Lakes Park have an interactive way to learn more about the animals they see in the nature center thanks to third graders down the street at Chamberlayne Elementary. The third graders researched native animals in the park and created virtual guides that can be accesses via QR codes at the park’s exhibit. The students were required to include a description and facts, but were then given the choice of what other technologies to use to help the public learn more about the animals. The self-guided group work resulted in content rich InstaBlogg sites that include creative movies, keynotes, quia games, polls, thinglinks, beeclips and pixie projects. Students were required to use the background knowledge developed during our animal studies unit to create a product that encourages the community to learn more.
The students were working in the Ideal range for Resarch and Information Fluency. This project was a culmination of our animal studies unit so students were already familiar with terms and megafauna. They were challenged to put their knowledge of animal relationships and adaptations to use in a relevant way so that others could benefit from their learning. Students were given a guide sheet and worked in groups to research the animal. They chose their own groups based on what animal they were interested in researching. The students used books from the library and Internet search sites such as OneSearch, DuckDuckGo and Pebble Go to find information about their animals. Because students were already familiar with content vocabulary and concepts from learning about the world’s various environments, they were able to hit the ground running. Most groups finished the required research and continued to find additional facts beyond the requirements. The facts that came from their own curiosity proved to be the most interesting for them and the ones they highlighted the most in their final product. One group, for example, learned that the large mouth bass has an amazing sense of smell. They were so proud of this fact in their video that their enthusiasm seemed to better engage the rest of the class when they watched the video. Groups also did some field research when they visited Three Lakes Park to view the animals up close and figure out where the best place was to put their QR codes.
Students worked in the Ideal range of Communication & Collaboration as they took on new roles in this activity. They became the experts and needed to create an interesting site to engage community members and encourage them to learn more about the animals in their back yards. Students were asked to teach about their animals in the most interactive way that they could using a blog which could be accessed by visitors to Three Lakes Park via a QR code. With that goal in mind, some groups worked on a video, others created “fact or fiction” games that reflected a fun way that they like to learn, and others created pixie pictures to illustrate life cycles. Many groups delegated tasks and were able to create more than one technology project to enhance their site. They used what they liked from their favorite websites to make their blog more interesting for others. Because they had the editing link, their sites would often look different in the morning. This was because students were going over to each others’ houses and working on their sites at home. They continue to make edits to improve their sites and better serve the community!
Students worked in the Ideal range of Critical Thinking and Problem Solving. With so many choices regarding their information and how to present it, students had to decide which facts were best to include and how to effectively communicate those ideas to the public. As they worked on their projects, the students needed less and less teacher assistance. They were taking advantage of shortcuts on the keyboard, dropping photos and videos into their folders for future use, and applying their knowledge from former Keynote lessons to perform advanced skills, like transitions and builds, on their own. They gained a better understanding of the pros and cons of each type of digital tool and made decisions based upon those insights. One of the goals of the project was to persuade visitors to protect the animals and preserve their environment, so students had think of ways to do that as well. This challenged them to apply the facts they learned to a new and specific situation at Three Lakes Park. Students also had the chance to help design the QR code poster that was displayed at the park. As a class they named important elements to include on the sign. Since they wanted to get people’s attention and make it easy to read, they realized the importance of font and color choice. Each group worked on a design and voted on the final poster as a class. Throughout the project, students were presented with challenges that had more than one solution. At the end of the project, they evaluated how well they performed each step of the process using a rubric.
Students worked in the Ideal range of Creativity and Innovation. They enjoyed making things that were different from their classmates and that would “WOW” their audience. One student really wanted to make a game. Each day he would ask how to make a game, so the teacher introduced him to Quia and gave him a brief overview of how to program the game. He produced an amazing game and the questions reflected a strong understanding and ability to extend the knowledge. He created “distractor” choices that were tricky, unless you read his group’s site. That’s just one example of how students went beyond the basic requirements for the assignment and took risks. It was exciting to see what they came up with. Everything was left up to individual groups and each page reflects the diverse ideas in the class.
Posted in Comm/Collab - Target, Creativity - Target, Critical Thinking - Target, Elementary School, Info Fluency - Target, Project, Science
Posted on 14 March 2013.
Students created their own blogs about an animal of their choice using Instablogg and a variety of other web tools. Topics included on the blog were the animal’s adaptations, habitat, diet, and fun facts. The students had 2 class periods to complete their research using the Internet and books from the library. As part of their research, students found an online photo of their animal to import into Thinglink. Thinglink allows users to create an interactive image with hotspots that can be clicked for more information. The students created a Thinglink image of their animal incorporating 3 adaptations (hotspots) and telling why they were beneficial to their animal. Students then created a video about the animal’s habitat using Photobooth and a background they selected so they appeared to be standing in the animal’s habitat. The video also included a variety of “fun” interesting facts. Videos were uploaded to Vimeo so students could post them to their blogs. Next, the students used Audacity to record an audio description of the diet of their animal. Those audio files were uploaded to Blabberize along with another photo of their animal, so it appeared like the animal was talking about its diet. The final step was to create a poll that asks visitors a question about their animal. Students embedded the Thinglink, the video, the Blabberize, and the poll onto their blogs. All blog links were posted to one page for easy access.
This project scores in the Approaching range of Research & Information Fluency. The students chose an animal they were interested in and used Duck Duck Go (Safe Internet) search engine to acquire their information. They also searched for and selected their own books from the library. The accuracy and reliability of the sources was discussed. To guide their research, we talked about what people might want to know about the animals and the class developed 3 categories that they needed to research.
This project scores in the Developing range of Communication & Collaboration. Students did not work in groups to conduct their research, but they did collaborate to produce the final product using a variety of digital tools. Their blog posts are online for others to view and interact with outside the classroom.
This project scores in the Developing range of Critical Thinking & Problem Solving. Students worked together to figure out what questions they had about their animals and what categories they wanted to learn more about. They collaborated to answer some of the essential questions that they came up with together. They determined what was important to include on their blogs and how to divide the information based on which web tool would best convey that information.
This project scores in the Approaching range of Creativity & Innovation. The students chose their animal, the information, and the pictures they wanted to use. They were introduced to a variety of web tools and were able to choose 3 of them to use to present their animal information on their blogs. They created useful, interactive, and entertaining sites for other people to learn more about their animals.
Posted in Comm/Collab - Dev, Creativity - App, Critical Thinking - Dev, Elementary School, Info Fluency - App, Project, Science
Posted on 13 March 2013.
This project requires students to research an inventor or scientist, create a multimedia presentation to share their research findings, design and create an invention or experiment in collaboration with other students to answer a specific question about helping an egg do something “eggstraordinary”, graph their trials/data, and write a story based on their “Eggs” adventure. Students were encouraged to use a variety of resources to research, develop, and present their projects.
This project scores in the Approaching range of Research & Information Fluency. Students were challenged to use multiple digital resources to learn about an important scientist or inventor of their choice and create a digital multimedia presentation. They were provided with a list of possible websites to use as well as questions to answer about their person. They cited and evaluated the helpfulness of each website they used. For the experiment phase they conducted their own research by testing their designs and recording the data.
This project scores in the Approaching range of Communication & Collaboration. Students chose their own groups according to teacher’s guidelines and expectations. As a group, students had to decide on one of six questions for the focus of their experiment. Groups had to communicate to plan what materials to bring in to create or design their final products, and they worked together to run trials and record data. At the end of their project they evaluated how well they worked together.
This project scores in the Approaching range of Critical Thinking & Problem Solving. Working in groups, students had to choose an egg challenge (such as making it bounce, roll, float, drop, or hold weight). Then they had to design and create an invention that would solve the challenge. During the trial and error stage, groups had to generate new questions regarding the outcome of their “eggs,” if they were unsuccessful. Students were given the choice to present their findings using a variety of digital tools such as graphs which require them to interpret their data. At the end of the experiment, students were asked to individually reflect upon their experience throughout this process.
This project scores in the Approaching range of Creativity & Innovation. Students were given many choices including the inventor they wanted to research, the digital tool they wanted to use, and the egg problem they wanted to solve. They were also encouraged to take risks with their invention and develop original ways to solve the problem. Their creative writing assignment was also open-ended to encourage original ideas. Finally they reflected on the creative process at the end.
- Lesson Plan (Word)
- Project Guidelines for Students (PDF)
- Student Creative Writing Sample (Pages)
- Student Comic Life Sample
- Student Keynote Sample
- Five photos (PNG) of student projects
Posted in Comm/Collab - App, Creativity - App, Critical Thinking - App, Elementary School, Info Fluency - App, Language Arts, Math, Project, Science
Posted on 11 March 2013.
This lesson was designed to enhance cross curriculum skills. Oral Language, Reading, Writing, Science, and Technology SOLs were all met throughout this class project. The students chose an animal to research in the library by using the Internet, encyclopedias, and nonfiction text. They used a graphic organizer to record the facts they found. Each student then used those facts, in two lessons with our school’s ITRT, to create a slide presentation using Keynote or a comic using Comic Life. In the end, the students presented their finished projects to their peers in class, and they were published online.
This lesson scores in the Developing range in Research & Information Fluency. Students chose an animal to research. Classroom teachers and the librarian helped students pick appropriate animals. Students learned about The Big6 Research method. The school librarian guided students through a Promethean Board lesson on organizing their research. Classroom teachers modeled how to complete the Animal Facts Graphic Organizer. Students used resources in the library to record facts about their animal on their graphic organizer. Classroom teachers and the school librarian monitored and supported students with finding facts and completing the Animal Facts Graphic Organizer. The students were required to find certain facts but they were also asked to find one “fun fact” of their choice.
This project scores in the Developing range of Communication & Collaboration. Although students worked individually on this project, they did present their projects to the class, they evaluated their presentations using a rubric, and they published their work online for others to see outside of their classroom.
This project scores in the Approaching range of Critical Thinking & Problem Solving. Students chose type of digital tool they thought would best convey their information (movie, slideshow, comic). They were also asked to solve one of two real-life problems: (1) Your animal is endangered and becoming extinct. How will you inform others about your animal. How will you encourage people to help? (2) If you were a zookeeper, how would you create a habitat for your animal to survive at your zoo? Finally, students evaluated their own work using a rubric.
This project scores in the Approaching range of Creativity & Innovation. Students could choose whether to make a movie, slideshow, or comic to present their animal research. They added their own ideas to solve an authentic problem. They also reflected on the creative process through their self-evaluation rubric.
Posted in Comm/Collab - Dev, Creativity - App, Critical Thinking - App, Elementary School, Info Fluency - Dev, Science
Posted on 11 March 2013.
After completing our unit on Virginia’s native people, students chose a current Indian tribe of Virginia to research in self-selected groups. This correlates with VS.2g: The student will demonstrate knowledge of the physical geography and native peoples, past and present, of Virginia by identifying and locating the current state-recognized tribes. They found answers to given essential questions through the use of printed materials and online resources. They also came up with at least three additional questions that they felt would be beneficial to answer in their presentation. The students had the option to create a Keynote, iMovie, or website to compile their research. If they felt they could best present their research in another way, they were given the opportunity to submit their ideas to the teacher for approval. Using the digital tool of their choosing, students created a resource for other students to use in learning about these Virginia tribes. The projects were published on the class blog after completion.
This project scores in the Developing range in Research & Information Fluency. Students were given questions to guide their research, but they also came up with their own questions. They were provided with several books to use if they chose to do so. While two websites were recommended, the students were also given time to use the search engine “Go Duck Go” to do their own searches.
This project scores in the Approaching range in Communication & Collaboration. Students were able to choose their own groups and the Indian tribe that they decided on together to research. They had a choice of three tools to compile their research. They also were required to do a reflection at the end of the project evaluating their involvement in the group and how the group worked together.
This project scores in the Developing range in Critical Thinking & Problem Solving. Students were expected to answer a set of questions to guide their research. If they choose to do so, they could also add additional information that they thought was of value. To inform others about the tribe they researched, they choose the product they felt would best help them communicate their findings.
This project scores in the Developing range in Creativity & Innovation. Students were asked to design a product that informs others about the tribe they researched. The teacher recommended three different types of products which they could choose from. However, if the students thought another way would help them better communicate their information, they could present it to the teacher to be approved.
Posted in Comm/Collab - App, Creativity - Dev, Critical Thinking - Dev, Elementary School, Info Fluency - Dev, Social Studies | http://blogs.henrico.k12.va.us/21/author/dhclough/ | 13 |
18 | What is the theme of this workshop? The theme of Workshop Seven
is "the excitement of student reasoning."
Whom do we see in the video? Terez Waldoch, a fourth grade teacher,
decides to try a new approach to teaching and bases her instruction on student
ideas. Her colleagues, administrators, students and their parents, however,
have different expectations for the classroom. With her reputation on the
line, Terez must decide if her new methods are more effective than her former
What happens in the video? A class of fourth-grade students poses
questions they want answered about the decomposition of matter. Guided by
their teacher, the students propose and execute experiments, presenting
and debating their results.
What problem does this workshop address? Should we just teach
the "facts"? Teachers agree that the "new" approaches
are compelling, yet few practice them. There are reasons for this avoidance:
a lack of time, the need for classroom control, pressure from parents to
teach the "basics," demands by administrators for more "coverage,"
and preparation for standardized tests. Can teachers afford to take the
risks of resisting these pressures to teach in a new way?
What teaching strategy does this workshop offer? How can a teacher
create a safe environment in the classroom and allow students to take risks?
Supporting this goal requires educated parents and administrators.
Section 2 - "Taking a Risk"
A. The Goals for Workshop Seven
Workshop Seven: "Taking A Risk" is for any teacher interested
in experiencing the excitement of student reasoning. To foster reasoning
by students and allow teachers to experiment with different learning methods,
it is important to make the classroom a safe place for both students and
teachers to innovate. There are obstacles to change, however, and we will
examine these in this workshop.
Discuss the differences between constructivist educational practices
and the traditional approach to learning and teaching.
Discuss ways in which teachers can change to a more constructivist
mode of teaching. How did you supplement your own science training?
What are some of the risks you are concerned about when inviting
students to direct their own investigations?
When presented on video, new teaching strategies based on children's
ideas might seem like more of the same old thing: cooperative learning,
hands-on experience, student-centered classroom, etc. The differences in
approach between constructivist learning and traditional approaches are
often subtle, yet extremely important.
How can you solve the following problem:
Some of the new constructivist learning concepts we depict in
our videos can easily be mistaken for more traditional and familiar techniques.
For instance, the approach taken by Terez Waldoch, the teacher in Workshop
Seven, focuses on her students' prior ideas. The actual activities in which
her students engage, however, could be confused with "hands-on"
activities that have a more traditional scope.
As teachers, discuss the subtle differences between the older
familiar strategies and the newer constructivist learning concepts presented
in our video.
Section 3 - Exercises
A. Exercises: Responding to Workshop Seven
Workshop Discussion After Viewing Video
What would you, as teachers, do next in this lesson?
The following is an activity that teachers might try in their classrooms.
Workshop and Post-Workshop Activity
Devise an approach similar to that depicted in the video for teaching
a subject of your choice. First, determine (by a method of your choice)
what the students already know or believe about the subject. Second, determine
what questions the students would like to answer about the subject. Finally,
have students work in small groups to devise, execute, and interpret experiments
or activities to answer their questions.
B. Exercise: Preparing for Workshop Eight
You will get the greatest benefit from Workshop Eight if you think about,
write down, and be prepared to discuss the following questions prior to
In the previous seven videos we have explored a variety of educational
strategies, including interviewing, journal keeping, concept mapping, metaphor
building, discussions, posters, and bridging analogies.
For discussion: What are the most effective uses of each approach?
What are some other approaches you have used or witnessed that are also
Section 4 - Educational Strategy
A. Environmental Action: A Debate
Affective teaching approaches attempt to involve the students' emotions
in the instructional process. Such involvement can result from the teacher
causing the students to care about the subject being taught, either by presenting
the content via characters and drama, or by involving the students in a
situation that has an effect on their lives and on which they can have an
effect, such as current events. Debating the social impact of a science
idea is an example of an affective approach to science teaching.
1. Choosing a Topic to Debate
Have the students research current issues in environmental science by
reading newspapers and news magazines, watching or listening to news workshops
on television or radio, and asking relatives and friends for their opinions
about recent environmental news topics. Dramatic local or global news events
with an environmental component make good debating topics. These might include
a recent oil spill, plans to build or close a local toxic waste dump or
incinerator, governmental initiatives to limit water use, efforts to save
endangered species of animals or plants, or local plans to convert unused
or preserved land to commercial use.
Working alone or in groups, the children can propose one or more topics
for debate. Make a list in class of all of the proposed debate topics; then
have the students vote or in some other way decide which topic or topics
to choose. During the topic selection process, children can make a case
for why particular topics are important or interesting. In this way, the
teacher will begin to determine the students' prior knowledge and beliefs
about the subject.
The final debate questions should be specific and involve clear issues.
A topic such as "Is it good to save water?" is not specific enough.
"Should our town approve the bond issue to build a new waste-water
treatment plant?" is specific and has clear issues of environmental
management, quality of life, and economics. With the issue of an oil spill,
the question should concern who, if anyone, should be blamed and fined.
Clear-cutting of forests can become an issue of jobs versus endangered species.
2. Preparing for the Debate
Once one or more topics are chosen, children can be assigned to teams.
Each team will prepare a case either for or against a particular topic.
Explain to the students that they need not start out with an opinion that
agrees with the side for which they are arguing. The point of a debate is
to amass evidence and construct a convincing argument for or against a proposition
Students can prepare for the debate by gathering evidence in a number
of ways. Each member of a team should be assigned specific aspects of the
issue(s) to be researched. The member could investigate the scientific background
of the issue, question parents and friends about their understanding of
the issue, conduct a survey of public opinion about the issue (on weekends
at a mall or other safe public place), clip news articles about the issue,
or interview people involved in the issue from government, citizens' groups,
3. Conducting the Debate
Students from each team can present evidence and arguments for or against
the debate question. The winner of each debate can be chosen by class vote.
Make sure each student has a chance to contribute to the debate.
4. After the Debate
When the debate is over, ask students about what they learned during
the process. Did preparing an argument change their ideas in any way? What
sorts of evidence did they find most convincing? Why? By listening to students'
ideas and comparing them to their ideas before the activity, the teacher
can begin to assess their understanding of the science content of the topic
and how those ideas have changed. Issues raised during the debate process
can be referred to during later science lessons.
Section 5 - Resources for Workshop Seven
Companies, publications, and organizations named in this guide represent
a cross-section of such entities. We do not endorse any companies, publications,
or organizations, nor should any endorsement be inferred from a listing
in this guide. Descriptions of such entities are for reference purposes
only. We have provided this information to help the reader locate materials
A. Related Resources on Decomposition
Leach, John, R. Driver. et al. Progression in Understanding Ecological
Concepts by Pupils Aged 5-16, CLIS (Children Learning In Science) Publications.
CSSME (Center for Studies in Science and Mathematics Education)
University of Leeds
Leeds, UK LS2 9JT
Campbell, Stu. 1990. Let It Rot! The Gardener's Guide to Composting,
Pownal, VT: Storey Communications, Inc.
Soil: We can't grow without it. 1985. Educators' Packet.
National Wildlife Federation
1412 16th Street NW
Washington, D.C. 20036
B. Further Reading
Bailey, Donna. 1991. Recycling Garbage. New York: F. Watts.
Condon, Judith. 1990. Recycling Paper. New York: F. Watts.
Brody, Michael. Understanding Pollution among 4th, 8th, and 11th graders.
Journal of Environmental Education 22(2): 24-33 (Winter 1990-91).
Campbell, Stu. 1990. Let It Rot! The Gardner's Guide to Composting.
Pownal, Vermont: Storey Communications, Inc.
Carin, Arthur. 1975. Teaching Science Through Discovery (7th Edition).
Columbus, OH: C.E. Merrill Publishing Company.
Devito, Alfred and G. Krockover. 1980. Creative Sciencing: A Practical
Approach. Creative Ventures, Inc.
Duckworth, E., J. Easley, D. Hawkins and A. Henriques. 1990. Science
Education: A Minds-On Approach for the Elementary Years. Hillsale, NJ:
Fisher, Kathleen M. and Selene Schumpert. 1995. Process & Inquiry
in Life Sciences Laboratory Manual, Parts 1-4. San Diego, CA: SemNet
Foster, Joanna. 1991. Carton, Can, and Orange Peels: Where does your
garbage go? New York: Clarion Books.
Kallen, Stuart. 1990. Recycle It! Once is not enough. Minnesota:
Abdo and Daughters.
Lavies, Bianca. 1993. Compost Critters. New York: Dutton Children's
Leach, John T., R.D. Konicek and B.L. Shapiro. The ideas used by British
and North American School children to interpret the phenomenon of decay:
a cross-cultural study. Paper presented to the Annual Meeting of the
American Educational Research Association. San Francisco. April 1992.
Milne, Lorus J. and M. Milne. 1987. A Shovel Full of Earth. New
York: Henry Holt & Co.
Osborne and Freyberg, eds. 1985. Learning in Science: The implications
of children's science. Auckland, NZ: Heinemann.
Schwartz, George I. and S. Bernice. 1974. Food Chains and Ecosystems:
Ecology for Young Experimenters. New York: Doubleday.
Silver, Donald M. 1993. One Small Square Backyard. New York: W.H.
Freeman & Co.
C. Bibliography on Decomposition
Adeniyi, E.O. 1985. Misconceptions of selected ecological concepts held
by some Nigerian students. Journal of Biological Education 19: 311-316.
Campbell, D., R. Konicek, B. Koscher, B. LaCorte, W.S. Laffond and T.
Waldoch. 1993. Children's alternative conceptions about decomposition and
the cycling of matter. Paper presented at the Annual Conference of the New
England Educational Research Organization. Portsmouth, NH. April 1993.
Griffiths, A.K., and B.A.C. Grant. 1985. High school students' understanding
of food webs: Identification of a learning hierarchy and related misconceptions.
Journal of Research in Science Teaching 22: 421-436.
Leach, J.T., R.D. Konicek and B.L. Shapiro. 1992. The ideas used by British
and North American school children to interpret the phenomenon of decay:
A cross-cultural study. Paper presented at the annual meeting of the American
Educational Research Association. San Francisco, CA. April 1992.
Sequeira, M. and M. Freitas. 1986. Death and decomposition of living
organisms: Children's alternative frameworks. Paper presented at the
eleventh conference of the Association for Teacher Education in Europe.
Toulouse, France. September 1986.
Sequeira, M. and M. Freitas. 1987. Children's alternative conceptions
about mould and copper oxide. In Proceedings of the Second International
Seminar: Misconceptions and Educational Strategies in Science and Mathematics,
J.D. Novak, ed. Ithaca, NY: Department of Education, Cornell University.
Smith, E.L. and C.W. Anderson. 1986. Alternative student conceptions
of matter cycling in ecosytems. Paper presented at the annual conference
of the North American Research in Science Teaching conference. San Franciso,
CA. April 1986. | http://www.learner.org/workshops/privuniv/pup07.html | 13 |
31 | A quantum computer is a device for computation that makes direct use of quantum mechanical phenomena, such as superposition and entanglement, to perform operations on data. Quantum computers are different from digital computers based on transistors. Whereas digital computers require data to be encoded into binary digits (bits), quantum computation utilizes quantum properties to represent data and perform operations on these data. A theoretical model is the quantum Turing machine, also known as the universal quantum computer. Quantum computers share theoretical similarities with non-deterministic and probabilistic computers, like the ability to be in more than one state simultaneously. The field of quantum computing was first introduced by Richard Feynman in 1982.
Although quantum computing is still in its infancy, experiments have been carried out in which quantum computational operations were executed on a very small number of qubits (quantum bits). Both practical and theoretical research continues, and many national government and military funding agencies support quantum computing research to develop quantum computers for both civilian and national security purposes, such as cryptanalysis.
Large-scale quantum computers could solve certain problems much faster than any classical computer using the best currently known algorithms, like integer factorization using Shor's algorithm or the simulation of quantum many-body systems. There exist quantum algorithms, such as Simon's algorithm, which run faster than any possible probabilistic classical algorithm. Given unlimited resources, a classical computer can simulate an arbitrary quantum algorithm, so quantum computation does not violate the Church–Turing thesis. However, in practice infinite resources are never available, and the computational basis of 500 qubits, for example, would already be too large to be represented on a classical computer because it would require 2^500 complex values to be stored. (For comparison, a terabyte of digital information stores only 2^43 discrete on/off values.) Nielsen and Chuang point out that "Trying to store all these complex numbers would not be possible on any conceivable classical computer."
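To make that storage claim concrete, the following back-of-the-envelope sketch (illustrative Python, not from the original article, assuming 16 bytes per complex amplitude) shows how quickly the memory needed for a full classical state-vector representation grows with the number of qubits.

```python
def state_vector_bytes(n_qubits, bytes_per_amplitude=16):
    # An n-qubit state vector has 2**n complex amplitudes;
    # each is assumed to take 16 bytes (two 64-bit floats).
    return (2 ** n_qubits) * bytes_per_amplitude

for n in (10, 30, 50):
    print(n, "qubits:", state_vector_bytes(n), "bytes")
# 10 qubits fit in about 16 KB, 30 qubits already need about 17 GB,
# and 50 qubits would require roughly 18 petabytes of classical memory.
```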
A classical computer has a memory made up of bits, where each bit represents either a one or a zero. A quantum computer maintains a sequence of qubits. A single qubit can represent a one, a zero, or, crucially, any quantum superposition of these two qubit states; moreover, a pair of qubits can be in any quantum superposition of 4 states, and three qubits in any superposition of 8. In general, a quantum computer with n qubits can be in an arbitrary superposition of up to 2^n different states simultaneously (this compares to a normal computer that can only be in one of these 2^n states at any one time). A quantum computer operates by setting the qubits in a controlled initial state that represents the problem at hand and by manipulating those qubits with a fixed sequence of quantum logic gates. The sequence of gates to be applied is called a quantum algorithm. The calculation ends with measurement of all the states, collapsing each qubit into one of the two pure states, so the outcome can be at most n classical bits of information.
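As a small illustration of the state-vector picture and of applying a quantum logic gate, the sketch below (hypothetical Python with NumPy, not part of the original article) represents a two-qubit register as a length-4 complex vector and applies a Hadamard gate to the first qubit by matrix multiplication.

```python
import numpy as np

# A two-qubit register starts in the state |00>, i.e. the length-4 vector (1, 0, 0, 0).
state = np.zeros(4, dtype=complex)
state[0] = 1.0

# Single-qubit Hadamard gate; acting on the first qubit of a two-qubit register
# means taking the Kronecker product with the identity on the second qubit.
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
I = np.eye(2, dtype=complex)
gate = np.kron(H, I)

# Applying a gate is a unitary matrix-vector multiplication.
state = gate @ state
print(state)  # amplitudes (1/sqrt(2), 0, 1/sqrt(2), 0): a superposition of |00> and |10>
```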
An example of an implementation of qubits for a quantum computer could start with the use of particles with two spin states: "down" and "up" (typically written |↓⟩ and |↑⟩, or |0⟩ and |1⟩). But in fact any system possessing an observable quantity A which is conserved under time evolution and such that A has at least two discrete and sufficiently spaced consecutive eigenvalues is a suitable candidate for implementing a qubit. This is true because any such system can be mapped onto an effective spin-1/2 system.
A quantum computer with a given number of qubits is fundamentally different from a classical computer composed of the same number of classical bits. For example, to represent the state of an n-qubit system on a classical computer would require the storage of 2^n complex coefficients. Although this fact may seem to indicate that qubits can hold exponentially more information than their classical counterparts, care must be taken not to overlook the fact that the qubits are only in a probabilistic superposition of all of their states. This means that when the final state of the qubits is measured, they will only be found in one of the possible configurations they were in before measurement. Moreover, it is incorrect to think of the qubits as only being in one particular state before measurement since the fact that they were in a superposition of states before the measurement was made directly affects the possible outcomes of the computation.
For example: Consider first a classical computer that operates on a three-bit register. The state of the computer at any time is a probability distribution over the different three-bit strings 000, 001, 010, 011, 100, 101, 110, 111. If it is a deterministic computer, then it is in exactly one of these states with probability 1. However, if it is a probabilistic computer, then there is a possibility of it being in any one of a number of different states. We can describe this probabilistic state by eight nonnegative numbers A,B,C,D,E,F,G,H (where A = probability computer is in state 000, B = probability computer is in state 001, etc.). There is a restriction that these probabilities sum to 1.
The state of a three-qubit quantum computer is similarly described by an eight-dimensional vector (a,b,c,d,e,f,g,h), called a ket. However, instead of adding to one, the sum of the squares of the coefficient magnitudes, |a|^2 + |b|^2 + ... + |h|^2, must equal one. Moreover, the coefficients are complex numbers. Since the probability amplitudes of the states are represented with complex numbers, the phase between any two states is a meaningful parameter, which is a key difference between quantum computing and probabilistic classical computing.
If you measure the three qubits, you will observe a three-bit string. The probability of measuring a given string is the squared magnitude of that string's coefficient (i.e., the probability of measuring 000 = |a|^2, the probability of measuring 001 = |b|^2, etc.). Thus, measuring a quantum state described by complex coefficients (a,b,...,h) gives the classical probability distribution (|a|^2, |b|^2, ..., |h|^2), and we say that the quantum state "collapses" to a classical state as a result of making the measurement.
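The Born rule just described, outcome probabilities equal to squared amplitude magnitudes, can be mimicked classically by sampling, as in this illustrative Python sketch (assuming a normalized three-qubit state vector; not part of the original article):

```python
import numpy as np

# A normalized three-qubit state (a, b, ..., h); here an equal superposition of |000> and |111>.
amplitudes = np.zeros(8, dtype=complex)
amplitudes[0] = amplitudes[7] = 1 / np.sqrt(2)

# Born rule: outcome probabilities are the squared magnitudes of the amplitudes.
probabilities = np.abs(amplitudes) ** 2

# "Measuring" collapses the register to a single classical three-bit string.
outcome = np.random.choice(8, p=probabilities)
print(format(int(outcome), "03b"))  # prints either 000 or 111, each with probability 1/2
```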
Note that an eight-dimensional vector can be specified in many different ways depending on what basis is chosen for the space. The basis of bit strings (e.g., 000, 001, ..., 111) is known as the computational basis. Other possible bases are unit-length, orthogonal vectors and the eigenvectors of the Pauli-x operator. Ket notation is often used to make the choice of basis explicit. For example, the state (a,b,c,d,e,f,g,h) in the computational basis can be written as:
a|000⟩ + b|001⟩ + c|010⟩ + d|011⟩ + e|100⟩ + f|101⟩ + g|110⟩ + h|111⟩
The computational basis for a single qubit (two dimensions) is |0⟩ = (1,0) and |1⟩ = (0,1).
Using the eigenvectors of the Pauli-x operator, a single qubit is |+⟩ = (1/√2)(|0⟩ + |1⟩) and |−⟩ = (1/√2)(|0⟩ - |1⟩).
While a classical three-bit state and a quantum three-qubit state are both eight-dimensional vectors, they are manipulated quite differently for classical or quantum computation. For computing in either case, the system must be initialized, for example into the all-zeros string, |000⟩, corresponding to the vector (1,0,0,0,0,0,0,0). In classical randomized computation, the system evolves according to the application of stochastic matrices, which preserve that the probabilities add up to one (i.e., preserve the L1 norm). In quantum computation, on the other hand, allowed operations are unitary matrices, which are effectively rotations (they preserve that the sum of the squares add up to one, the Euclidean or L2 norm). (Exactly which unitaries can be applied depends on the physics of the quantum device.) Consequently, since rotations can be undone by rotating backward, quantum computations are reversible. (Technically, quantum operations can be probabilistic combinations of unitaries, so quantum computation really does generalize classical computation. See quantum circuit for a more precise formulation.)
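The contrast between stochastic and unitary evolution is easy to check numerically. The short sketch below (illustrative Python, not from the original article) verifies that a column-stochastic matrix preserves the L1 norm of a probability vector while a unitary matrix preserves the L2 norm of an amplitude vector.

```python
import numpy as np

# A column-stochastic matrix: each column is a probability distribution.
S = np.array([[0.9, 0.2],
              [0.1, 0.8]])
p = np.array([0.3, 0.7])            # a classical probability vector (entries sum to 1)
print(np.sum(S @ p))                # still 1.0 (up to rounding): the L1 norm is preserved

# A unitary matrix: here the Hadamard gate.
U = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
psi = np.array([0.6, 0.8j])         # an amplitude vector with |0.6|^2 + |0.8|^2 = 1
print(np.linalg.norm(U @ psi))      # still 1.0 (up to rounding): the L2 norm is preserved
```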
Finally, upon termination of the algorithm, the result needs to be read off. In the case of a classical computer, we sample from the probability distribution on the three-bit register to obtain one definite three-bit string, say 000. Quantum mechanically, we measure the three-qubit state, which is equivalent to collapsing the quantum state down to a classical distribution (with the coefficients in the classical state being the squared magnitudes of the coefficients for the quantum state, as described above), followed by sampling from that distribution. Note that this destroys the original quantum state. Many algorithms will only give the correct answer with a certain probability. However, by repeatedly initializing, running and measuring the quantum computer, the probability of getting the correct answer can be increased.
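Putting these steps together, initialization, unitary evolution, and measurement, the following sketch (illustrative Python, not tied to any real quantum hardware or library) repeatedly runs a trivial one-qubit circuit consisting of a single Hadamard gate and tallies the outcomes, which is how probabilistic quantum algorithms are read out in practice.

```python
import numpy as np
from collections import Counter

H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)  # Hadamard gate (unitary)

def run_once():
    state = np.array([1, 0], dtype=complex)   # initialize to |0>
    state = H @ state                         # apply the circuit (here: a single gate)
    probs = np.abs(state) ** 2                # Born rule
    return int(np.random.choice(2, p=probs))  # measure: collapse to 0 or 1

counts = Counter(run_once() for _ in range(1000))
print(counts)  # roughly 500 zeros and 500 ones
```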
For more details on the sequences of operations used for various quantum algorithms, see universal quantum computer, Shor's algorithm, Grover's algorithm, Deutsch-Jozsa algorithm, amplitude amplification, quantum Fourier transform, quantum gate, quantum adiabatic algorithm and quantum error correction.
Integer factorization is believed to be computationally infeasible with an ordinary computer for large integers if they are the product of few prime numbers (e.g., products of two 300-digit primes). By comparison, a quantum computer could efficiently solve this problem using Shor's algorithm to find its factors. This ability would allow a quantum computer to decrypt many of the cryptographic systems in use today, in the sense that there would be a polynomial time (in the number of digits of the integer) algorithm for solving the problem. In particular, most of the popular public key ciphers are based on the difficulty of factoring integers (or the related discrete logarithm problem, which can also be solved by Shor's algorithm), including forms of RSA. These are used to protect secure Web pages, encrypted email, and many other types of data. Breaking these would have significant ramifications for electronic privacy and security.
However, other existing cryptographic algorithms do not appear to be broken by these algorithms. Some public-key algorithms are based on problems other than the integer factorization and discrete logarithm problems to which Shor's algorithm applies, like the McEliece cryptosystem based on a problem in coding theory. Lattice-based cryptosystems are also not known to be broken by quantum computers, and finding a polynomial time algorithm for solving the dihedral hidden subgroup problem, which would break many lattice-based cryptosystems, is a well-studied open problem. It has been proven that applying Grover's algorithm to break a symmetric (secret key) algorithm by brute force requires roughly 2^(n/2) invocations of the underlying cryptographic algorithm, compared with roughly 2^n in the classical case, meaning that symmetric key lengths are effectively halved: AES-256 would have the same security against an attack using Grover's algorithm that AES-128 has against classical brute-force search (see Key size). Quantum cryptography could potentially fulfill some of the functions of public key cryptography.
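The key-length halving mentioned above is simple exponent arithmetic: searching 2^n keys costs on the order of 2^(n/2) Grover iterations. A brief illustrative snippet (not a statement about any concrete attack):

```python
# Classical brute force tries about 2**n keys; Grover's algorithm needs
# on the order of 2**(n/2) evaluations of the cipher.
for n in (128, 256):
    print(f"{n}-bit key: classical ~2^{n}, Grover ~2^{n // 2}")
# AES-256 under a Grover attack (~2^128 evaluations) offers security comparable
# to AES-128 under a classical brute-force search (~2^128 trials).
```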
Besides factorization and discrete logarithms, quantum algorithms offering a more than polynomial speedup over the best known classical algorithm have been found for several problems, including the simulation of quantum physical processes from chemistry and solid state physics, the approximation of Jones polynomials, and solving Pell's equation. No mathematical proof has been found that shows that an equally fast classical algorithm cannot be discovered, although this is considered unlikely. For some problems, quantum computers offer a polynomial speedup. The most well-known example of this is quantum database search, which can be solved by Grover's algorithm using quadratically fewer queries to the database than are required by classical algorithms. In this case the advantage is provable. Several other examples of provable quantum speedups for query problems have subsequently been discovered, such as for finding collisions in two-to-one functions and evaluating NAND trees.
Consider a problem that has these four properties:
- The only way to solve it is to guess answers repeatedly and check them,
- The number of possible answers to check is the same as the number of inputs,
- Every possible answer takes the same amount of time to check, and
- There are no clues about which answers might be better: generating possibilities randomly is just as good as checking them in some special order.
For problems with all four properties, the time for a quantum computer to solve this will be proportional to the square root of the number of inputs. That can be a very large speedup, reducing some problems from years to seconds. It can be used to attack symmetric ciphers such as Triple DES and AES by attempting to guess the secret key.
Since chemistry and nanotechnology rely on understanding quantum systems, and such systems are impossible to simulate in an efficient manner classically, many believe quantum simulation will be one of the most important applications of quantum computing.
There are a number of technical challenges in building a large-scale quantum computer, and thus far quantum computers have yet to solve a problem faster than a classical computer. David DiVincenzo, of IBM, listed the following requirements for a practical quantum computer:
- scalable physically to increase the number of qubits;
- qubits that can be initialized to arbitrary values;
- quantum gates that are faster than the decoherence time;
- a universal gate set;
- qubits that can be read easily.
One of the greatest challenges is controlling or removing quantum decoherence. This usually means isolating the system from its environment, as interactions with the external world cause the system to decohere. This effect is irreversible, as it is non-unitary, and is usually something that should be highly controlled, if not avoided. Decoherence times for candidate systems, in particular the transverse relaxation time T2 (for NMR and MRI technology, also called the dephasing time), typically range between nanoseconds and seconds at low temperature.
These issues are more difficult for optical approaches as the timescales are orders of magnitude shorter and an often-cited approach to overcoming them is optical pulse shaping. Error rates are typically proportional to the ratio of operating time to decoherence time, hence any operation must be completed much more quickly than the decoherence time.
If the error rate is small enough, it is thought to be possible to use quantum error correction, which corrects errors due to decoherence, thereby allowing the total calculation time to be longer than the decoherence time. An often cited figure for the required error rate in each gate is 10⁻⁴. This implies that each gate must be able to perform its task in one 10,000th of the decoherence time of the system.
Meeting this scalability condition is possible for a wide range of systems. However, the use of error correction brings with it the cost of a greatly increased number of required qubits. The number required to factor integers using Shor's algorithm is still polynomial, and thought to be between L and L², where L is the number of bits in the number to be factored; error correction algorithms would inflate this figure by an additional factor of L. For a 1000-bit number, this implies a need for about 10⁴ qubits without error correction. With error correction, the figure would rise to about 10⁷ qubits. Note that the computation time is about L², or about 10⁷ steps, which at 1 MHz is about 10 seconds.
A very different approach to the stability-decoherence problem is to create a topological quantum computer with anyons, quasi-particles whose world lines can be braided, relying on braid theory to form stable logic gates.
There are a number of quantum computing models, distinguished by the basic elements in which the computation is decomposed. The four main models of practical importance are:
- the quantum gate array (computation decomposed into a sequence of few-qubit quantum gates),
- the one-way quantum computer (computation decomposed into a sequence of single-qubit measurements applied to a highly entangled initial state, or cluster state),
- the adiabatic quantum computer or quantum annealer (computation decomposed into a slow continuous transformation of an initial Hamiltonian into a final Hamiltonian whose ground state encodes the solution), and
- the topological quantum computer (computation decomposed into the braiding of anyons in a two-dimensional lattice).
The Quantum Turing machine is theoretically important but direct implementation of this model is not pursued. All four models of computation have been shown to be equivalent to each other in the sense that each can simulate the other with no more than polynomial overhead.
For physically implementing a quantum computer, many different candidates are being pursued, among them (distinguished by the physical system used to realize the qubits): superconducting circuits, trapped ions, neutral atoms in optical lattices, quantum dots, nuclear magnetic resonance on molecules in solution, nitrogen-vacancy centers in diamond, and linear optics with photons as qubits.
The large number of candidates demonstrates that the topic, in spite of rapid progress, is still in its infancy. But at the same time, there is also a vast amount of flexibility.
In 2005, researchers at the University of Michigan built a semiconductor chip that functioned as an ion trap. Such devices, produced by standard lithography techniques, may point the way to scalable quantum computing tools. An improved version was made in 2006.
In 2009, researchers at Yale University created the first rudimentary solid-state quantum processor. The two-qubit superconducting chip was able to run elementary algorithms. Each of the two artificial atoms (or qubits) were made up of a billion aluminum atoms but they acted like a single one that could occupy two different energy states.
Another team, working at the University of Bristol, also created a silicon-based quantum computing chip, based on quantum optics. The team was able to run Shor's algorithm on the chip. Further developments were made in 2010. Springer publishes a journal ("Quantum Information Processing") devoted to the subject.
In April 2011, a team of scientists from Australia and Japan made a breakthrough in quantum teleportation, successfully transferring a complex set of quantum data with full transmission integrity. The qubits were destroyed in one place and recreated in another without their superpositions being affected.
In 2011, D-Wave Systems announced the first commercial quantum annealer on the market by the name D-Wave One. The company claims this system uses a 128 qubit processor chipset. On May 25, 2011 D-Wave announced that Lockheed Martin Corporation entered into an agreement to purchase a D-Wave One system. Lockheed Martin and the University of Southern California (USC) reached an agreement to house the D-Wave One Adiabatic Quantum Computer at the newly formed USC Lockheed Martin Quantum Computing Center, part of USC's Information Sciences Institute campus in Marina del Rey.
During the same year, researchers working at the University of Bristol created an all-bulk optics system able to run an iterative version of Shor's algorithm. They successfully managed to factorize 21.
In April 2012, a multinational team of researchers from the University of Southern California, Delft University of Technology, the Iowa State University of Science and Technology, and the University of California, Santa Barbara, constructed a two-qubit quantum computer on a doped diamond crystal that operates at room temperature and can, in principle, be scaled up in size and functionality. The two logical qubits were encoded in an electron spin and a nitrogen nuclear spin. Microwave pulses of carefully chosen duration and shape were used to protect the qubits against decoherence. Using this computer, Grover's algorithm over four search alternatives produced the correct answer on the first try in 95% of cases.
The class of problems that can be efficiently solved by quantum computers is called BQP, for "bounded error, quantum, polynomial time". Quantum computers only run probabilistic algorithms, so BQP on quantum computers is the counterpart of BPP ("bounded error, probabilistic, polynomial time") on classical computers. It is defined as the set of problems solvable with a polynomial-time algorithm, whose probability of error is bounded away from one half. A quantum computer is said to "solve" a problem if, for every instance, its answer will be right with high probability. If that solution runs in polynomial time, then that problem is in BQP.
BQP is suspected to be disjoint from NP-complete and a strict superset of P, but that is not known. Both integer factorization and discrete log are in BQP. Both of these problems are NP problems suspected to be outside BPP, and hence outside P. Both are suspected to not be NP-complete. There is a common misconception that quantum computers can solve NP-complete problems in polynomial time. That is not known to be true, and is generally suspected to be false.
The capacity of a quantum computer to accelerate classical algorithms has firm limits, in the form of upper bounds on the complexity of quantum computation. The overwhelming majority of classical computations cannot be accelerated on a quantum computer. A similar statement holds for particular computational tasks, like the search problem, for which Grover's algorithm is optimal.
Although quantum computers may be faster than classical computers, those described above can't solve any problems that classical computers can't solve, given enough time and memory (however, those amounts might be practically infeasible). A Turing machine can simulate these quantum computers, so such a quantum computer could never solve an undecidable problem like the halting problem. The existence of "standard" quantum computers does not disprove the Church–Turing thesis. It has been speculated that theories of quantum gravity, such as M-theory or loop quantum gravity, may allow even faster computers to be built. Currently, defining computation in such theories is an open problem due to the problem of time, i.e. there currently exists no obvious way to describe what it means for an observer to submit input to a computer and later receive output.
| http://dictionary.sensagent.com/Quantum_computer/en-en/ | 13 |
18 | Science Fair Project Encyclopedia
In computer science, a selection algorithm is an algorithm for finding the kth smallest (or kth largest) number in a list. This includes the simple common cases of finding the minimum, maximum, and median elements.
One simple and widely used selection algorithm is to use a sort algorithm on the list, and then extract the kth element. This is particularly useful when we wish to make many different selections from a single list, in which case only one initial, expensive sort is needed, followed by many cheap extraction operations. When we need only to perform one selection, however, or when we need to strongly mutate the list between selections, this method can be costly, typically requiring at least O(n log n) time, where n is the length of the list.
Linear minimum/maximum algorithms
Worst-case linear algorithms to find minimums or maximums are obvious; we keep two variables, one referring to the index of the minimum/maximum element seen so far, and one holding its value. As we scan through the list, we update these whenever we encounter a more extreme element:
function minimum(a[1..n])
    minIndex := 1
    minValue := a[1]
    for i from 2 to n
        if a[i] < minValue
            minIndex := i
            minValue := a[i]
    return minValue
function maximum(a[1..n])
    maxIndex := 1
    maxValue := a[1]
    for i from 2 to n
        if a[i] > maxValue
            maxIndex := i
            maxValue := a[i]
    return maxValue
Note that there may be multiple minimum or maximum elements. Because the comparisons above are strict, these algorithms find the minimum element with minimum index. By using non-strict comparisons (<= and >=) instead, we would find the minimum element with maximum index; this approach also has some small performance benefits.
Using the same idea, we can construct a simple, but inefficient general algorithm for finding the kth smallest or kth largest item in a list, requiring O(kn) time, which is effective when k is small. To accomplish this, we simply find the most extreme value and move it to the beginning until we reach our desired index. This can be seen as an incomplete selection sort. Here is the minimum-based algorithm:
function select(a[1..n], k)
    for i from 1 to k
        minIndex := i
        minValue := a[i]
        for j from i+1 to n
            if a[j] < minValue
                minIndex := j
                minValue := a[j]
        swap a[i] and a[minIndex]
    return a[k]
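For concreteness, here is a direct Python translation of the pseudocode above. It is an illustrative sketch rather than production code; the function name select_kth_smallest is ours.

def select_kth_smallest(a, k):
    """Return the k-th smallest element (1-based k) by partial selection sort.

    Runs in O(k*n) time, mirroring the pseudocode above; works on a copy.
    """
    a = list(a)                     # copy so the caller's list is untouched
    n = len(a)
    for i in range(k):              # move the (i+1)-th smallest into position i
        min_index = i
        for j in range(i + 1, n):
            if a[j] < a[min_index]:
                min_index = j
        a[i], a[min_index] = a[min_index], a[i]
    return a[k - 1]

print(select_kth_smallest([7, 2, 9, 4, 1, 8], 3))   # -> 4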
Other advantages of this method are:
- After locating the jth smallest element, it requires only O(j + (k-j)²) time to find the kth smallest element, or only O(k) for k ≤ j.
- It can be done with linked list data structures, whereas the one based on partition requires random access.
Linear general selection algorithms
Finding a worst-case linear algorithm for the general case of selecting the kth largest element is a much more difficult problem, but one does exist, and was published by Blum, Floyd, Pratt, Rivest, and Tarjan in their 1973 paper Time bounds for selection. It uses concepts based on those used in the quicksort sort algorithm, along with an innovation of its own.
In quicksort, there is a subprocedure called partition which can, in linear time, group the list into two parts, those less than a certain value, and those greater than or equal to a certain value. Here is pseudocode which performs a partition about the value a[pivotIndex]:
function partition(a, left, right, pivotIndex)
    pivotValue := a[pivotIndex]
    swap a[pivotIndex] and a[right]  // Move pivot to end
    storeIndex := left
    for i from left to right-1
        if a[i] ≤ pivotValue
            swap a[storeIndex] and a[i]
            storeIndex := storeIndex + 1
    swap a[right] and a[storeIndex]  // Move pivot to its final place
    return storeIndex
In quicksort, we recursively sort both branches, leading to best-case Ω(n log n) time. However, when doing selection, we already know which partition our desired element lies in, since the pivot is in its final sorted position, with all those preceding it in sorted order preceding it and all those following it in sorted order following it. Thus a single recursive call locates the desired element in the correct partition:
function select(a, k, left, right)
    select a pivot value a[pivotIndex]
    pivotNewIndex := partition(a, left, right, pivotIndex)
    if k = pivotNewIndex
        return a[k]
    else if k < pivotNewIndex
        return select(a, k, left, pivotNewIndex-1)
    else
        return select(a, k, pivotNewIndex+1, right)
Note the resemblance to quicksort; indeed, just as the minimum-based selection algorithm is a partial selection sort, this is a partial quicksort, generating and partitioning only O(log n) of its O(n) partitions. This simple procedure has expected linear performance, and, like quicksort, has quite good performance in practice. It is also an in-place algorithm, requiring only constant memory overhead, since the tail recursion can be eliminated with a loop like this:
function select(a, k, left, right)
    loop
        select a pivot value a[pivotIndex]
        pivotNewIndex := partition(a, left, right, pivotIndex)
        if k = pivotNewIndex
            return a[k]
        else if k < pivotNewIndex
            right := pivotNewIndex-1
        else
            left := pivotNewIndex+1
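A compact Python version of this loop, using a random pivot (a common practical choice, as discussed below) and the same Lomuto-style partition as above. This is a sketch, not a hardened library routine; the names partition and quickselect are ours.

import random

def partition(a, left, right, pivot_index):
    """Partition a[left..right] around a[pivot_index]; return the pivot's final index."""
    pivot_value = a[pivot_index]
    a[pivot_index], a[right] = a[right], a[pivot_index]   # move pivot to end
    store = left
    for i in range(left, right):
        if a[i] <= pivot_value:
            a[store], a[i] = a[i], a[store]
            store += 1
    a[right], a[store] = a[store], a[right]               # move pivot to its final place
    return store

def quickselect(a, k):
    """Return the k-th smallest element (1-based k) in expected linear time; works on a copy."""
    a = list(a)
    left, right, target = 0, len(a) - 1, k - 1
    while True:
        pivot_index = random.randint(left, right)
        p = partition(a, left, right, pivot_index)
        if p == target:
            return a[p]
        elif target < p:
            right = p - 1
        else:
            left = p + 1

print(quickselect([7, 2, 9, 4, 1, 8], 3))   # -> 4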
Like quicksort, the performance of the algorithm is sensitive to the pivot that is chosen. If bad pivots are consistently chosen, this degrades to the minimum-based selection described previously. However, there is a way to consistently find very good pivots; this is the key to making the algorithm worst-case linear.
To accomplish this, we begin by dividing the list into groups of five elements. Any left over are ignored for now. Then, for each of these, we find the median of the group of five, an operation that can be made very fast by loading all five values into the register set and comparing them. We move all these medians into one contiguous block in the list, and proceed to invoke select recursively on this sublist of n/5 elements to find its median value. Then, we use this "median of medians" for our pivot.
What makes this a good pivot? Note that it is greater than half of the elements in our list of medians and less than the other half, that is, about n/10 medians on each side. Moreover, each of those medians is itself the median of a group of five, and so is greater than 2 and less than 2 further elements of its group, contributing another 2(n/10) elements on each side. Thus the median we chose splits the elements somewhere between 30%/70% and 70%/30%, which is more than good enough to assure worst-case linear behavior of the algorithm.
The trick, however, is ensuring that the added recursive call does not destroy this worst-case linear behavior. This is because the list of medians is 20% of the size of the list, and the other recursive call recurses on at most 70% of the list, so we have this recurrence relation for the running time:
T(n) ≤ T(n/5) + T(7n/10) + O(n)
The O(n) term accounts for the partitioning work. Because n/5 + 7n/10 = 9n/10 < n, one can guess a linear bound of the form T(n) ≤ c·n and verify by substitution that it satisfies the recurrence, so the running time is linear.
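The following Python sketch illustrates just the pivot-selection step described above (group by fives, take each group's median, recurse on the medians) and checks the 30%/70% guarantee on a random input. It is illustrative only and omits the surrounding partition-and-recurse machinery; the name median_of_medians is ours.

import random

def median_of_medians(a):
    """Recursively pick the 'median of medians' pivot value from the list a."""
    if len(a) <= 5:
        return sorted(a)[len(a) // 2]
    # median of each group of (at most) five elements
    medians = [sorted(a[i:i + 5])[len(a[i:i + 5]) // 2] for i in range(0, len(a), 5)]
    return median_of_medians(medians)

data = [random.random() for _ in range(1000)]
pivot = median_of_medians(data)
smaller = sum(x < pivot for x in data)
print(f"{smaller} of {len(data)} elements fall below the pivot")   # roughly 30-70% on each side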
In practice, although this approach optimizes quite well, it is still typically outperformed by a considerable margin by the expected linear algorithm with fairly naive pivot choices, such as choosing randomly. The worst-case algorithm still has importance, however; it can be used, for example, to construct a worst-case O(n log n) quicksort algorithm, by using it to find the median at every step.
Selection as incremental sorting
One of the advantages of the sort-and-index approach, as mentioned, is its ability to amortize the sorting cost over many subsequent selections. However, sometimes the number of selections that will be done is not known in advance, and may be either small or large. In these cases, we can adapt the algorithms given above to simultaneously select an element while partially sorting the list, thus accelerating future selections.
Both the selection procedure based on minimum-finding and the one based on partitioning can be seen as a form of partial sort. The minimum-based algorithm sorts the list up to the given index, and so clearly speeds up future selections, especially of smaller indexes. The partition-based algorithm does not achieve the same behaviour automatically, but can be adapted to remember its previous pivot choices and reuse them wherever possible, avoiding costly partition operations, particularly the top-level one. The list becomes gradually more sorted as more partition operations are done incrementally; no pivots are ever "lost." If desired, this same pivot list could be passed on to quicksort to reuse, again avoiding many costly partition operations.
Using data structures to select in sublinear time
Given an unorganized list of data, linear time (Ω(n)) is required to find the minimum element, because we have to examine every element (otherwise, we might miss it.) If we organize the list, for example by keeping it sorted at all times, then selecting the kth largest element is trivial, but then insertion requires linear time, as do other operations such as combining two lists.
When only the minimum (or maximum) is needed, a better approach is to use a priority queue, which is typically able to find the minimum element in constant time, while all other operations, including insertion, are O(log n). More generally, a self-balancing binary search tree can easily be augmented to make it possible to both insert an element and find the kth largest element in O(log n) time.
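In Python, for example, the standard-library heapq module already provides heap-based selection. The sketch below (illustrative only) finds the minimum in constant time after an O(n) heapify, and the k-th smallest element via heapq.nsmallest, without fully sorting the list.

import heapq

data = [7, 2, 9, 4, 1, 8]

heap = list(data)
heapq.heapify(heap)                   # O(n) build; the minimum is then heap[0]
print(heap[0])                        # -> 1

k = 3
print(heapq.nsmallest(k, data)[-1])   # k-th smallest via a bounded heap -> 4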
- M. Blum, R.W. Floyd, V. Pratt, R. Rivest and R. Tarjan, "Time bounds for selection," J. Comput. System Sci. 7 (1973) 448–461.
| http://www.all-science-fair-projects.com/science_fair_projects_encyclopedia/Selection_algorithm | 13 |
18 | Statistical correlation is a statistical technique which tells us if two variables are related.
For example, consider the variables family income and family expenditure. It is well known that income and expenditure increase or decrease together. Thus they are related in the sense that change in any one variable is accompanied by change in the other variable.
Again, price and demand of a commodity are related variables; when price increases, demand will tend to decrease and vice versa.
If the change in one variable is accompanied by a change in the other, then the variables are said to be correlated. We can therefore say that family income and family expenditure, price and demand are correlated.
Relationship Between Variables
Correlation can tell you something about the relationship between variables. It is used to understand:
- whether the relationship is positive or negative
- the strength of relationship.
Correlation is a powerful tool that provides these vital pieces of information.
In the case of family income and family expenditure, it is easy to see that they both rise or fall together in the same direction. This is called positive correlation.
In case of price and demand, change occurs in the opposite direction so that increase in one is accompanied by decrease in the other. This is called negative correlation.
Coefficient of Correlation
Statistical correlation is measured by what is called coefficient of correlation (r). Its numerical value ranges from +1.0 to -1.0. It gives us an indication of the strength of relationship.
In general, r > 0 indicates positive relationship, r < 0 indicates negative relationship while r = 0 indicates no relationship (or that the variables are independent and not related). Here r = +1.0 describes a perfect positive correlation and r = -1.0 describes a perfect negative correlation.
The closer the coefficient is to +1.0 or -1.0, the greater the strength of the relationship between the variables.
As a rule of thumb, the following guidelines on strength of relationship are often useful (though many experts would somewhat disagree on the choice of boundaries).
|Value of r||Strength of relationship|
|-1.0 to -0.5 or 1.0 to 0.5||Strong|
|-0.5 to -0.3 or 0.3 to 0.5||Moderate|
|-0.3 to -0.1 or 0.1 to 0.3||Weak|
|-0.1 to 0.1||None or very weak|
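As a concrete illustration of computing and interpreting r, the short Python sketch below calculates Pearson's coefficient for a small income/expenditure data set and can be read against the rule-of-thumb table above. The numbers and the helper name pearson_r are invented for illustration.

import math

income      = [20, 30, 40, 50, 60]    # hypothetical family incomes
expenditure = [18, 26, 35, 45, 52]    # hypothetical family expenditures

def pearson_r(x, y):
    """Pearson correlation coefficient of two equal-length numeric lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sx = math.sqrt(sum((xi - mx) ** 2 for xi in x))
    sy = math.sqrt(sum((yi - my) ** 2 for yi in y))
    return cov / (sx * sy)

print(round(pearson_r(income, expenditure), 3))   # -> about 0.999, a strong positive correlation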
Correlation is only appropriate for examining the relationship between meaningful quantifiable data (e.g. air pressure, temperature) rather than categorical data such as gender, favorite color etc.
While 'r' (correlation coefficient) is a powerful tool, it has to be handled with care.
- The most commonly used correlation coefficients only measure a linear relationship. It is therefore perfectly possible that, while there is a strong nonlinear relationship between the variables, r is close to or even equal to 0. In such a case, a scatter diagram can roughly indicate the existence or otherwise of a nonlinear relationship.
- One has to be careful in interpreting the value of 'r'. For example, one could compute 'r' between shoe size and intelligence of individuals, or between heights and incomes. Irrespective of the value of 'r', such a result makes no sense and is hence termed a chance or nonsense correlation.
- 'r' should not be used to say anything about cause and effect relationship. Put differently, by examining the value of 'r', we could conclude that variables X and Y are related. However the same value of 'r' does not tell us if X influences Y or the other way round. Statistical correlation should not be the primary tool used to study causation, because of the problem with third variables.
| http://explorable.com/statistical-correlation | 13 |
16 | Ribonucleic acid or RNA is a nucleic acid, consisting of many nucleotides that form a polymer. Each nucleotide consists of a nitrogenous base, a ribose sugar, and a phosphate. RNA plays several important roles in the processes of translating genetic information from deoxyribonucleic acid (DNA) into proteins. One type of RNA acts as a messenger between DNA and the protein synthesis complexes known as ribosomes, others form vital portions of the structure of ribosomes, act as essential carrier molecules for amino acids to be used in protein synthesis, or change which genes are active.
RNA is very similar to DNA, but differs in a few important structural details: RNA is usually single stranded, while DNA is usually double stranded. RNA nucleotides contain ribose while DNA contains deoxyribose (a type of ribose that lacks one oxygen atom), and RNA uses the nucleotide uracil in its composition, instead of thymine which is present in DNA. RNA is transcribed from DNA by enzymes called RNA polymerases and is generally further processed by other enzymes, some of them guided by non-coding RNAs.
Chemical and stereochemical structure
Each nucleotide in RNA contains a ribose, whose carbons are numbered 1' through 5'. The base – often adenine, cytosine, guanine or uracil – is attached to the 1' position. A phosphate group is attached to the 3' position of one ribose and the 5' position of the next. The phosphate groups have a negative charge each at physiological pH, making RNA a charged molecule.
The bases often form hydrogen bonds between adenine and uracil and between cytosine and guanine, but other interactions are possible, such as a group of adenine bases binding to each other in a bulge.
There are also numerous modified bases and sugars found in RNA that serve many different roles. Pseudouridine (Ψ), in which the linkage between uracil and ribose is changed from a C–N bond to a C–C bond, and ribothymidine (T), are found in various places (most notably in the TΨC loop of tRNA).
Another notable modified base is hypoxanthine, a deaminated guanine base whose nucleoside is called inosine. Inosine plays a key role in the Wobble Hypothesis of the genetic code. There are nearly 100 other naturally occurring modified nucleosides, of which pseudouridine and nucleosides with 2'-O-methylribose are the most common.
The specific roles of many of these modifications in RNA are not fully understood. However, it is notable that in ribosomal RNA, many of the post-transcriptional modifications occur in highly functional regions, such as the peptidyl transferase center and the subunit interface, implying that they are important for normal function.
The most important structural feature of RNA, that distinguishes it from DNA is the presence of a hydroxyl group at the 2'-position of the ribose sugar. The presence of this functional group enforces the C3'-endo sugar conformation (as opposed to the C2'-endo conformation of the deoxyribose sugar in DNA) that causes the helix to adopt the A-form geometry rather than the B-form most commonly observed in DNA. This results in a very deep and narrow major groove and a shallow and wide minor groove.
A second consequence of the presence of the 2'-hydroxyl group is that in conformationally flexible regions of an RNA molecule (that is, not involved in formation of a double helix), it can chemically attack the adjacent phosphodiester bond to cleave the backbone.
Comparison with DNA
RNA and DNA differ in three main ways. First, unlike DNA, which is double-stranded, RNA is a single-stranded molecule in most of its biological roles and has a much shorter chain of nucleotides. Second, while DNA contains deoxyribose, RNA contains ribose (there is no hydroxyl group attached to the pentose ring in the 2' position in DNA, whereas RNA has two hydroxyl groups). These hydroxyl groups make RNA less stable than DNA because it is more prone to hydrolysis. Third, the complementary base to adenine is not thymine, as it is in DNA, but rather uracil, which is an unmethylated form of thymine.
Like DNA, most biologically active RNAs including tRNA, rRNA, snRNAs and other non-coding RNAs are extensively base paired to form double stranded helices. Structural analysis of these RNAs have revealed that they are highly structured. Unlike DNA, this structure is not just limited to long double-stranded helices but rather collections of short helices packed together into structures akin to proteins. In this fashion, RNAs can achieve chemical catalysis, like enzymes. For instance, determination of the structure of the ribosome – an enzyme that catalyzes peptide bond formation – revealed that its active site is composed entirely of RNA.
Synthesis of RNA is usually catalyzed by an enzyme - RNA polymerase, using DNA as a template. Initiation of synthesis begins with the binding of the enzyme to a promoter sequence in the DNA (usually found "upstream" of a gene). The DNA double helix is unwound by the helicase activity of the enzyme. The enzyme then progresses along the template strand in the 3’ -> 5’ direction, synthesizing a complementary RNA molecule with elongation occurring in the 5’ -> 3’ direction. The DNA sequence also dictates where termination of RNA synthesis will occur.
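As a toy illustration of the base-pairing rules involved (not of the enzymology itself), the following Python sketch builds the RNA sequence complementary to a DNA template strand, pairing A with U, T with A, G with C and C with G. The names PAIRING and transcribe are ours; a template written 3' to 5' yields an RNA written 5' to 3'.

# Complementary base pairing used during transcription (DNA template -> RNA).
PAIRING = {"A": "U", "T": "A", "G": "C", "C": "G"}

def transcribe(template_3to5):
    """Return the RNA strand (5'->3') complementary to a DNA template given 3'->5'."""
    return "".join(PAIRING[base] for base in template_3to5)

print(transcribe("TACGGT"))   # -> "AUGCCA"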
There are also a number of RNA-dependent RNA polymerases as well that use RNA as their template for synthesis of a new strand of RNA. For instance, a number of RNA viruses (such as poliovirus) use this type of enzyme to replicate their genetic material. Also, it is known that RNA-dependent RNA polymerases are required for the RNA interference pathway in many organisms.
Messenger RNA (mRNA)
Messenger RNA is RNA that carries information from DNA to the ribosome sites of protein synthesis in the cell. In eukaryotic cells, once mRNA has been transcribed from DNA, it is "processed" before being exported from the nucleus into the cytoplasm, where it is bound to ribosomes and translated into its corresponding protein form with the help of tRNA. In prokaryotic cells, which do not have nucleus and cytoplasm compartments, mRNA can bind to ribosomes while it is being transcribed from DNA. After a certain amount of time the message degrades into its component nucleotides, usually with the assistance of ribonucleases.
- Further information: Non-coding RNA
RNA genes (also known as non-coding RNA or small RNA) are genes that encode RNA which is not translated into a protein. The most prominent examples of RNA genes are transfer RNA (tRNA) and ribosomal RNA (rRNA), both of which are involved in the process of translation. Two other groups of non-coding RNA are microRNAs (miRNA) which regulate the gene expression and small nuclear RNAs (snRNA), a diverse class that includes for example the RNAs that form spliceosomes that excise introns from pre-mRNA.
Ribosomal RNA is the catalytic component of the ribosomes, the protein synthesis factories in the cell. Eukaryotic ribosomes contain four different rRNA molecules: 18S, 5.8S, 28S, and 5S rRNA. Three of the rRNA molecules are synthesized in the nucleolus, and one is synthesized elsewhere. rRNA molecules are extremely abundant and make up at least 80% of the RNA molecules found in a typical eukaryotic cell. In the cytoplasm, ribosomal RNA and protein combine to form a nucleoprotein called a ribosome. The ribosome binds mRNA and carries out protein synthesis. Several ribosomes may be attached to a single mRNA at any time.
Transfer RNA is a small RNA chain of about 74-95 nucleotides that transfers a specific amino acid to a growing polypeptide chain at the ribosomal site of protein synthesis, during translation. It has sites for amino-acid attachment and an anticodon region for codon recognition that binds to a specific sequence on the messenger RNA chain through hydrogen bonding.
Several types of RNA can downregulate gene expression by being complementary to a part of a gene. MicroRNAs (miRNA; 21-22 nt) are found in eukaryotes and act through RNA interference (RNAi), where an effector complex of miRNA and enzymes can break down mRNA which the miRNA is complementary to, block the mRNA from being translated, or cause the promoter to be methylated, which generally downregulates the gene. Some miRNAs upregulate genes instead (RNA activation). While small interfering RNAs (siRNA; 20-25 nt) are often produced by breakdown of viral RNA, there are also endogenous sources of siRNAs. siRNAs act through RNA interference in a fashion similar to miRNAs. Animals have Piwi-interacting RNAs (piRNA; 29-30 nt), which are active in germline cells and are thought to be a defense against transposons and to play a role in gametogenesis. X chromosome inactivation in female mammals is caused by Xist, an RNA which coats one X chromosome, inactivating it. Antisense RNAs are widespread among bacteria; most downregulate a gene, but a few are activators of transcription.
- Further information: RNA editing
RNA can be modified after transcription not only by splicing, but also by having its bases modified to other bases than adenine, cytosine, guanine and uracil.
In eukaryotes, modifications of RNA bases are generally directed by small nucleolar RNAs (snoRNA), found in the nucleolus and cajal bodies. snoRNAs associate with enzymes and guide them to a spot on an RNA by basepairing to that RNA. These enzymes then perform the base modification. rRNA and tRNA are extensively modified, but snRNA and mRNA can also be the target of base modification.
Double-stranded RNA (dsRNA) is RNA with two complementary strands, similar to the DNA found in all cells. dsRNA forms the genetic material of some viruses called double-stranded RNA viruses. In eukaryotes, long double-stranded RNA such as viral RNA can trigger RNA interference, where short dsRNA molecules called siRNAs (small interfering RNAs) can cause enzymes to break down specific mRNAs or silence the expression of genes. siRNA can also increase the transcription of a gene, a process called RNA activation.
RNA world hypothesis
The RNA world hypothesis proposes that the earliest forms of life relied on RNA both to carry genetic information (like DNA does now) and to catalyze biochemical reactions like an enzyme. According to this hypothesis, descendants of these early lifeforms gradually integrated DNA and proteins into their metabolism.
RNA secondary structures
The functional form of single stranded RNA molecules, just like proteins, frequently requires a specific tertiary structure. The scaffold for this structure is provided by secondary structural elements which are hydrogen bonds within the molecule. This leads to several recognizable "domains" of secondary structure like hairpin loops, bulges and internal loops. The secondary structure of RNA molecules can be predicted computationally by calculating the minimum free energies (MFE) structure for all different combinations of hydrogen bondings and domains. There has been a significant amount of research directed at the RNA structure prediction problem.
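Real minimum-free-energy predictors use detailed thermodynamic parameters, but the flavor of the dynamic programming involved can be shown with a much simpler stand-in. The sketch below (not one of the tools named here) merely maximizes the number of Watson-Crick and G-U wobble base pairs, Nussinov-style, subject to a minimum hairpin-loop length; the names max_base_pairs, PAIRS and MIN_LOOP are ours.

# Toy RNA secondary-structure prediction: maximize base pairs with a Nussinov-style DP.
PAIRS = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"), ("G", "U"), ("U", "G")}
MIN_LOOP = 3   # minimum number of unpaired bases enclosed by a hairpin

def max_base_pairs(seq):
    n = len(seq)
    best = [[0] * n for _ in range(n)]            # best[i][j] = max pairs in seq[i..j]
    for span in range(MIN_LOOP + 1, n):           # increasing subsequence lengths
        for i in range(n - span):
            j = i + span
            score = best[i + 1][j]                # option 1: leave base i unpaired
            for k in range(i + MIN_LOOP + 1, j + 1):
                if (seq[i], seq[k]) in PAIRS:     # option 2: pair base i with base k
                    left = best[i + 1][k - 1]
                    right = best[k + 1][j] if k + 1 <= j else 0
                    score = max(score, 1 + left + right)
            best[i][j] = score
    return best[0][n - 1]

print(max_base_pairs("GGGAAAUCC"))   # -> 3 pairs for this small hairpin-like sequence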
Comparative studies of conserved RNA structures are significantly more accurate and provide evolutionary information. Computationally reasonable and accurate online tools for alignment folding are provided by KNetFold, RNAalifold and Pfold.
A package of RNA structure prediction programs is also available for Windows: RNAstructure.
List of RNA types
|Messenger RNA||mRNA||Codes for protein||All cells|
|Ribosomal RNA||rRNA||Translation||All cells|
|Transfer RNA||tRNA||Translation||All cells|
|Small nuclear RNA||snRNA||Various||Eukaryotes and archaea |
|Small nucleolar RNA||snoRNA||RNA editing||Eukaryotes and archaea |
|Piwi-interacting RNA||piRNA||Gene regulation||Animal germline cells|
|Small interfering RNA||siRNA||Gene regulation||Eukaryotes|
|Antisense RNA||aRNA||Gene regulation||Bacteria|
|Transfer-messenger RNA||tmRNA||Terminating translation||Bacteria|
|Signal recognition particle RNA||SRP RNA||mRNA tagging for export||All cells|
|Guide RNA||gRNA||RNA editing||Kinetoplastid mitochondria |
In addition, the genome of many types of viruses consists of RNA, namely double-stranded RNA viruses, positive-sense RNA viruses, negative-sense RNA viruses and most satellite viruses and reverse transcribing viruses.
Nucleic acids were discovered in 1868 by Friedrich Miescher, who called the material 'nuclein' since it was found in the nucleus. It was later discovered that prokaryotic cells, which do not have a nucleus, also contain nucleic acids. The role of RNA in protein synthesis had been suspected since 1939, based on experiments carried out by Torbjörn Caspersson, Jean Brachet and Jack Schultz. Gerard Marbaix isolated the first messenger RNA, for rabbit hemoglobin, and found it induced the synthesis of hemoglobin after injection into oocytes. Severo Ochoa discovered an enzyme that can synthesize RNA in the laboratory, winning him the 1959 Nobel Prize in Medicine. The sequence of the 77 nucleotides of a yeast tRNA was found by Robert W. Holley in 1965, winning Holley the 1968 Nobel Prize in Medicine. In 1976, Walter Fiers and his team at the University of Ghent determined the first complete nucleotide sequence of an RNA virus genome, that of bacteriophage MS2.
While ribosomal RNA and transfer RNA were found early, many new RNA genes have been discovered since the late 1990s, and thus RNA genes may play a much more significant role than previously thought. In the late 1990s and early 2000s, there was persistent evidence of more complex transcription occurring in mammalian cells (and possibly others). This could point towards a more widespread use of RNA in biology, particularly in gene regulation. A particular class of non-coding RNA, microRNA, has been found in many eukaryotes and clearly plays an important role in regulating other genes, through RNA interference.
- ↑ Westhof E, Fritsch V (2000). "RNA folding: beyond Watson–Crick pairs". Structure 8 (3): R55–R65.
- ↑ Barciszewski J, Frederic B, Clark C (1999). RNA biochemistry and biotechnology. Springer, 73–87. ISBN 0792358627.
- ↑ Yu Q, Morrow CD (2001). "Identification of critical elements in the tRNA acceptor stem and TΨC loop necessary for human immunodeficiency virus type 1 infectivity". J Virol. 75 (10): 4902–06.
- ↑ Elliott MS, Trewyn RW (1983). "Inosine biosynthesis in transfer RNA by an enzymatic insertion of hypoxanthine". J. Biol. Chem. 259 (4): 2407–10.
- ↑ Söll D, RajBhandary U (1995). tRNA: Structure, biosynthesis, and function. ASM Press, 165. ISBN 155581073X.
- ↑ Kiss T (2001). "Small nucleolar RNA-guided post-transcriptional modification of cellular RNAs". The EMBO Journal 20: 3617–22.
- ↑ King TH, Liu B, McCully RR, Fournier MJ (2002). "Ribosome structure and activity are altered in cells lacking snoRNPs that form pseudouridines in the peptidyl transferase center". Molecular Cell 11 (2): 425–435.
- ↑ Salazar M, Fedoroff OY, Miller JM, Ribeiro NS, Reid BR (1992). "The DNA strand in DNAoRNA hybrid duplexes is neither B-form nor A-form in solution". Biochemistry 1993 (32): 4207–15.
- ↑ Hermann T, Patel DJ (2000). "RNA bulges as architectural and recognition motifs". Structure 8 (3): R47–R54.
- ↑ Mikkola S, Nurmi K, Yousefi-Salakdeh E, Strömberg R, Lönnberg H (1999). "The mechanism of the metal ion promoted cleavage of RNA phosphodiester bonds involves a general acid catalysis by the metal aquo ion on the departure of the leaving group". Perkin transactions 2: 1619–1626.
- ↑ Nudler E, Gottesman ME (2002). "Transcription termination and anti-termination in E. coli". Genes to Cells 7: 755–768.
- ↑ Jeffrey L Hansen, Alexander M Long, Steve C Schultz (1997). "Structure of the RNA-dependent RNA polymerase of poliovirus". Structure 5 (8): 1109-1122.
- ↑ Ahlquist P (2002). "RNA-Dependent RNA Polymerases, Viruses, and RNA Silencing". Science 296 (5571): 1270–1273.
- ↑ Berg JM, Tymoczko JL, Stryer L (2002). Biochemistry, 5th edition, WH Freeman and Company, 781–808. ISBN 0-7167-4684-0.
- ↑ Matzke MA, Matzke AJM (2004). "Planting the seeds of a new paradigm". PLoS Biology 2 (5): e133.
- ↑ Check E (2007). "RNA interference: hitting the on switch". Nature 448 (7156): 855–858.
- ↑ Vazquez F, Vaucheret H, Rajagopalan R, Lepers C, Gasciolli V, Mallory AC, Hilbert J, Bartel DP, Crété P (2004). "Endogenous trans-acting siRNAs regulate the accumulation of Arabidopsis mRNAs". Molecular Cell 16 (1): 69–79.
- ↑ Horwich MD, Li C Matranga C, Vagin V, Farley G, Wang P, Zamore PD (2007). "The Drosophila RNA methyltransferase, DmHen1, modifies germline piRNAs and single-stranded siRNAs in RISC". Current Biology 17: 1265–72.
- ↑ Girard A, Sachidanandam R, Hannon GJ, Carmell MA (2006). "A germline-specific class of small RNAs binds mammalian Piwi proteins". Nature 442: 199–202.
- ↑ Heard E, Mongelard F, Arnaud D, Chureau C, Vourc'h C, Avner P (1999). "Human XIST yeast artificial chromosome transgenes show partial X inactivation center function in mouse embryonic stem cells". Proc. Natl. Acad. Sci. USA 96 (12): 6841–6846.
- ↑ Wagner EG, Altuvia S, Romby P (2002). "Antisense RNAs in bacteria and their genetic elements". Adv Genet. 46: 361–98.
- ↑ Covello PS, Gray MW (1989). "RNA editing in plant mitochondria". Nature 341: 662–666.
- ↑ Omer AD, Ziesche S, Decatur WA, Fournier MJ, Dennis PP (2003). "RNA-modifying machines in archaea". Molecular Microbiology 48 (3): 617–629.
- ↑ Doran G (2007). "RNAi – Is one suffix sufficient?". Journal of RNAi and Gene Silencing 3 (1): 217-219.
- ↑ Mathews DH, Disney MD, Childs JL, Schroeder SJ, Zuker M, Turner DH (2004). "Incorporating chemical modification constraints into a dynamic programming algorithm for prediction of RNA secondary structure". Proc. Natl. Acad. Sci. U. S. A. 101 (19): 7287–7292.
- ↑ Thore S, Mayer C, Sauter C, Weeks S, Suck D (2003). "Crystal Structures of the Pyrococcus abyssi Sm Core and Its Complex with RNA". J. Biol. Chem. 278 (2): 1239–47.
- ↑ Kiss T (2001). "Small nucleolar RNA-guided post-transcriptional modification of cellular RNAs". The EMBO Journal 20: 3617–22.
- ↑ Alfonzo JD, Thiemann O, Simpson L (1997). "The mechanism of U insertion/deletion RNA editing in kinetoplastid mitochondria". Nucleic Acids Research 25 (19): 3751–59.
- ↑ Dahm R (2005). "Friedrich Miescher and the discovery of DNA". Developmental Biology 278 (2): 274–88. PMID 15680349.
- ↑ Nierhaus KH, Wilson DN (2004). Protein Synthesis and Ribosome Structure. Wiley-VCH, 3. ISBN 3527306382.
- ↑ Carlier M (2003). L'ADN, cette "simple" molécule. Esprit libre.
- ↑ Ochoa S (1959). Enzymatic synthesis of ribonucleic acid. Nobel Lecture.
- ↑ Holley RW et al (1965). "Structure of a ribonucleic acid". Science 147 (1664): 1462–1465.
- ↑ Fiers W et al (1976). "Complete nucleotide-sequence of bacteriophage MS2-RNA: primary and secondary structure of replicase gene". Nature 260: 500–507.
- RNA World website
- Nucleic Acid Database Images of DNA, RNA and complexes.
- RNAJunction Database: Extracted atomic models of RNA junction and kissing loop structures.
| http://www.wikidoc.org/index.php/RNA | 13 |
31 |
The scientific method is a body of techniques for investigating phenomena, acquiring new knowledge, or correcting and integrating previous knowledge. To be termed scientific, a method of inquiry must be based on empirical and measurable evidence subject to specific principles of reasoning. The Oxford English Dictionary defines the scientific method as: "a method or procedure that has characterized natural science since the 17th century, consisting in systematic observation, measurement, and experiment, and the formulation, testing, and modification of hypotheses."
The chief characteristic which distinguishes the scientific method from other methods of acquiring knowledge is that scientists seek to let reality speak for itself, supporting a theory when a theory's predictions are confirmed and challenging a theory when its predictions prove false. Although procedures vary from one field of inquiry to another, identifiable features distinguish scientific inquiry from other methods of obtaining knowledge. Scientific researchers propose hypotheses as explanations of phenomena, and design experimental studies to test these hypotheses via predictions which can be derived from them. These steps must be repeatable, to guard against mistake or confusion in any particular experimenter. Theories that encompass wider domains of inquiry may bind many independently derived hypotheses together in a coherent, supportive structure. Theories, in turn, may help form new hypotheses or place groups of hypotheses into context.
Scientific inquiry is generally intended to be as objective as possible in order to reduce biased interpretations of results. Another basic expectation is to document, archive and share all data and methodology so they are available for careful scrutiny by other scientists, giving them the opportunity to verify results by attempting to reproduce them. This practice, called full disclosure, also allows statistical measures of the reliability of these data to be established (when data is sampled or compared to chance).
Scientific method has been practiced in some form for at least one thousand years and is the process by which science is carried out. Because science builds on previous knowledge, it consistently improves our understanding of the world. The scientific method also improves itself in the same way, meaning that it gradually becomes more effective at generating new knowledge. For example, the concept of falsification (first proposed in 1934) reduces confirmation bias by formalizing the attempt to disprove hypotheses rather than prove them.
The overall process involves making conjectures (hypotheses), deriving predictions from them as logical consequences, and then carrying out experiments based on those predictions to determine whether the original conjecture was correct. There are difficulties in a formulaic statement of method, however. Though the scientific method is often presented as a fixed sequence of steps, they are better considered as general principles. Not all steps take place in every scientific inquiry (or to the same degree), and not always in the same order. As noted by William Whewell (1794–1866), "invention, sagacity, [and] genius" are required at every step:
- Formulation of a question: The question can refer to the explanation of a specific observation, as in "Why is the sky blue?", but can also be open-ended, as in "Does sound travel faster in air than in water?" or "How can I design a drug to cure this particular disease?" This stage also involves looking up and evaluating previous evidence from other scientists, including experience. If the answer is already known, a different question that builds on the previous evidence can be posed. When applying the scientific method to scientific research, determining a good question can be very difficult and affects the final outcome of the investigation.
- Hypothesis: An hypothesis is a conjecture, based on the knowledge obtained while formulating the question, that may explain the observed behavior of a part of our universe. The hypothesis might be very specific, e.g., Einstein's equivalence principle or Francis Crick's "DNA makes RNA makes protein", or it might be broad, e.g., unknown species of life dwell in the unexplored depths of the oceans. A statistical hypothesis is a conjecture about some population. For example, the population might be people with a particular disease. The conjecture might be that a new drug will cure the disease in some of those people. Terms commonly associated with statistical hypotheses are null hypothesis and alternative hypothesis. A null hypothesis is the conjecture that the statistical hypothesis is false, e.g., that the new drug does nothing and that any cures are due to chance effects. Researchers normally want to show that the null hypothesis is false. The alternative hypothesis is the desired outcome, e.g., that the drug does better than chance. A final point: a scientific hypothesis must be falsifiable, meaning that one can identify a possible outcome of an experiment that conflicts with predictions deduced from the hypothesis; otherwise, it cannot be meaningfully tested.
- Prediction: This step involves determining the logical consequences of the hypothesis. One or more predictions are then selected for further testing. The less likely that the prediction would be correct simply by coincidence, the stronger evidence it would be if the prediction were fulfilled; evidence is also stronger if the answer to the prediction is not already known, due to the effects of hindsight bias (see also postdiction). Ideally, the prediction must also distinguish the hypothesis from likely alternatives; if two hypotheses make the same prediction, observing the prediction to be correct is not evidence for either one over the other. (These statements about the relative strength of evidence can be mathematically derived using Bayes' Theorem.)
- Testing: This is an investigation of whether the real world behaves as predicted by the hypothesis. Scientists (and other people) test hypotheses by conducting experiments. The purpose of an experiment is to determine whether observations of the real world agree with or conflict with the predictions derived from an hypothesis. If they agree, confidence in the hypothesis increases; otherwise, it decreases. Agreement does not assure that the hypothesis is true; future experiments may reveal problems. Karl Popper advised scientists to try to falsify hypotheses, i.e., to search for and test those experiments that seem most doubtful. Large numbers of successful confirmations are not convincing if they arise from experiments that avoid risk. Experiments should be designed to minimize possible errors, especially through the use of appropriate scientific controls. For example, tests of medical treatments are commonly run as double-blind tests. Test personnel, who might unwittingly reveal to test subjects which samples are the desired test drugs and which are placebos, are kept ignorant of which are which. Such hints can bias the responses of the test subjects. Failure of an experiment does not necessarily mean the hypothesis is false. Experiments always depend on several hypotheses, e.g., that the test equipment is working properly, and a failure may be a failure of one of the auxiliary hypotheses. (See the Duhem-Quine thesis.) Experiments can be conducted in a college lab, on a kitchen table, at CERN's Large Hadron Collider, at the bottom of an ocean, on Mars (using one of the working rovers), and so on. Astronomers do experiments, searching for planets around distant stars. Finally, most individual experiments address highly specific topics for reasons of practicality. As a result, evidence about broader topics is usually accumulated gradually.
- Analysis: This involves determining what the results of the experiment show and deciding on the next actions to take. The predictions of the hypothesis are compared to those of the null hypothesis, to determine which is better able to explain the data. In cases where an experiment is repeated many times, a statistical analysis such as a chi-squared test may be required. If the evidence has falsified the hypothesis, a new hypothesis is required; if the experiment supports the hypothesis but the evidence is not strong enough for high confidence, other predictions from the hypothesis must be tested. Once a hypothesis is strongly supported by evidence, a new question can be asked to provide further insight on the same topic. Evidence from other scientists and experience are frequently incorporated at any stage in the process. Many iterations may be required to gather sufficient evidence to answer a question with confidence, or to build up many answers to highly specific questions in order to answer a single broader question.
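As a concrete (invented) illustration of the analysis step just described, the Python sketch below applies a chi-squared test to a hypothetical drug trial in which 60 of 100 treated patients recovered versus the 50 expected under the null hypothesis of no effect. The data are made up, and the critical value 3.841 is the usual threshold for one degree of freedom at the 5% significance level.

# Toy chi-squared goodness-of-fit test for a hypothetical drug trial.
observed = [60, 40]        # recovered, not recovered (invented data for the treated group)
expected = [50, 50]        # what the null hypothesis ("the drug does nothing") predicts

chi_squared = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
critical_value = 3.841     # chi-squared threshold, 1 degree of freedom, alpha = 0.05

print(f"chi-squared = {chi_squared:.2f}")
if chi_squared > critical_value:
    print("Reject the null hypothesis: the data favor the drug having an effect.")
else:
    print("Insufficient evidence to reject the null hypothesis.")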
This model underlies the scientific revolution. One thousand years ago, Alhazen demonstrated the importance of forming questions and subsequently testing them, an approach which was advocated by Galileo in 1638 with the publication of Two New Sciences. The current method is based on a hypothetico-deductive model formulated in the 20th century, although it has undergone significant revision since first proposed (for a more formal discussion, see below).
The basic elements of the scientific method are illustrated by the following example from the discovery of the structure of DNA:
- Question: Previous investigation of DNA had determined its chemical composition (the four nucleotides) and the structure of each individual nucleotide, but not how the nucleotides fit together to form the overall structure of DNA.
- Hypothesis: Watson and Crick hypothesized that DNA had a helical structure.
- Prediction: If DNA had a helical structure, its X-ray diffraction pattern would be X-shaped.
- Test and analysis: Rosalind Franklin's X-ray diffraction images of DNA showed the predicted X-shape, and Watson and Crick used this and other data to construct their double-helix model.
The discovery became the starting point for many further studies involving the genetic material, such as the field of molecular genetics, and it was awarded the Nobel Prize in 1962. Each step of the example is examined in more detail later in the article.
The scientific method also includes other components required even when all the iterations of the steps above have been completed:
- Replication: If an experiment cannot be repeated to produce the same results, this implies that the original results were in error. As a result, it is common for a single experiment to be performed multiple times, especially when there are uncontrolled variables or other indications of experimental error. For significant or surprising results, other scientists may also attempt to replicate the results for themselves, especially if those results would be important to their own work.
- External review: The process of peer review involves evaluation of the experiment by experts, who give their opinions anonymously to allow them to give unbiased criticism. It does not certify correctness of the results, only that the experiments themselves were sound (based on the description supplied by the experimenter). If the work passes peer review, which may require new experiments requested by the reviewers, it will be published in a peer-reviewed scientific journal. The specific journal that publishes the results indicates the perceived quality of the work.
- Data recording and sharing: Scientists must record all data very precisely in order to reduce their own bias and aid in replication by others, a requirement first promoted by Ludwik Fleck (1896–1961) and others. They must supply this data to other scientists who wish to replicate any results, extending to the sharing of any experimental samples that may be difficult to obtain.
The goal of a scientific inquiry is to obtain knowledge in the form of testable explanations that can predict the results of future experiments. This allows scientists to gain an understanding of reality, and later use that understanding to intervene in its causal mechanisms (such as to cure disease). The better an explanation is at making predictions, the more useful it is, and the more likely it is to be correct. The most successful explanations, which explain and make accurate predictions in a wide range of circumstances, are called scientific theories.
Most experimental results do not result in large changes in human understanding; improvements in theoretical scientific understanding is usually the result of a gradual synthesis of the results of different experiments, by various researchers, across different domains of science. Scientific models vary in the extent to which they have been experimentally tested and for how long, and in their acceptance in the scientific community. In general, explanations become accepted by a scientific community as evidence in favor is presented, and as presumptions that are inconsistent with the evidence are falsified.
Properties of scientific inquiry
Scientific knowledge is closely tied to empirical findings, and always remains subject to falsification if new experimental observation incompatible with it is found. That is, no theory can ever be considered completely certain, since new evidence falsifying it might be discovered. If such evidence is found, a new theory may be proposed, or (more commonly) it is found that minor modifications to the previous theory are sufficient to explain the new evidence. The strength of a theory is related to how long it has persisted without falsification of its core principles.
Confirmed theories are also subject to subsumption by more accurate theories. For example, thousands of years of scientific observations of the planets were explained almost perfectly by Newton's laws. However, these laws were then determined to be special cases of a more general theory (relativity), which explained both the (previously unexplained) exceptions to Newton's laws as well as predicting and explaining other observations such as the deflection of light by gravity. Thus independent, unconnected, scientific observations can be connected to each other, unified by principles of increasing explanatory power.
Since every new theory must explain even more than the previous one, any successor theory capable of subsuming it must meet an even higher standard, explaining both the larger, unified body of observations explained by the previous theory and unifying that with even more observations. In other words, as scientific knowledge becomes more accurate with time, it becomes increasingly harder to produce a more successful theory, simply because of the great success of the theories that already exist. For example, the Theory of Evolution explains the diversity of life on Earth, how species adapt to their environments, and many other patterns observed in the natural world; its most recent major modification was unification with genetics to form the modern evolutionary synthesis. In subsequent modifications, it has also subsumed aspects of many other fields such as biochemistry and molecular biology.
Beliefs and biases
Scientific methodology directs that hypotheses be tested in controlled conditions which can be reproduced by others. The scientific community's pursuit of experimental control and reproducibility diminishes the effects of cognitive biases.
For example, pre-existing beliefs can alter the interpretation of results, as in confirmation bias; this is a heuristic that leads a person with a particular belief to see things as reinforcing their belief, even if another observer might disagree (in other words, people tend to observe what they expect to observe).
A historical example is the conjecture that the legs of a galloping horse are splayed at the point when none of the horse's legs touches the ground, to the point of this image being included in paintings by its supporters. However, the first stop-action pictures of a horse's gallop by Eadweard Muybridge showed this to be false, and that the legs are instead gathered together.
In contrast to the requirement for scientific knowledge to correspond to reality, beliefs based on myth or stories can be believed and acted upon irrespective of truth, often taking advantage of the narrative fallacy that when narrative is constructed its elements become easier to believe. Myths intended to be taken as true must have their elements assumed a priori, while science requires testing and validation a posteriori before ideas are accepted.
Elements of the scientific method
There are different ways of outlining the basic method used for scientific inquiry. The scientific community and philosophers of science generally agree on the following classification of method components. These methodological elements and organization of procedures tend to be more characteristic of natural sciences than social sciences. Nonetheless, the cycle of formulating hypotheses, testing and analyzing the results, and formulating new hypotheses, will resemble the cycle described below.
- Four essential elements of the scientific method are iterations, recursions, interleavings, or orderings of the following:
- Characterizations (observations, definitions, and measurements of the subject of inquiry)
- Hypotheses (theoretical, hypothetical explanations of observations and measurements of the subject)
- Predictions (reasoning including logical deduction from the hypothesis or theory)
- Experiments (tests of all of the above)
Each element of the scientific method is subject to peer review for possible mistakes. These activities do not describe all that scientists do (see below) but apply mostly to experimental sciences (e.g., physics, chemistry, and biology). The elements above are often taught in the educational system as "the scientific method".
The scientific method is not a single recipe: it requires intelligence, imagination, and creativity. In this sense, it is not a mindless set of standards and procedures to follow, but is rather an ongoing cycle, constantly developing more useful, accurate and comprehensive models and methods. For example, when Einstein developed the Special and General Theories of Relativity, he did not in any way refute or discount Newton's Principia. On the contrary, if the astronomically large, the vanishingly small, and the extremely fast are removed from Einstein's theories — all phenomena Newton could not have observed — Newton's equations are what remain. Einstein's theories are expansions and refinements of Newton's theories and, thus, increase our confidence in Newton's work.
A linearized, pragmatic scheme of the four points above is sometimes offered as a guideline for proceeding:
- Define a question
- Gather information and resources (observe)
- Form an explanatory hypothesis
- Test the hypothesis by performing an experiment and collecting data in a reproducible manner
- Analyze the data
- Interpret the data and draw conclusions that serve as a starting point for a new hypothesis
- Publish results
- Retest (frequently done by other scientists)
The iterative cycle inherent in this step-by-step method goes from point 3 through point 6 and back to point 3 again, as sketched in the toy code below.
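To make the loop from steps 3 to 6 concrete, here is a deliberately trivial sketch in Python. The hidden rule, the candidate hypotheses, and the function names (`run_experiment`, `inquire`) are all invented for illustration; real inquiry is, of course, not an exhaustive search over integers.

```python
# Toy, self-contained illustration of the hypothesis/testing loop (steps 3-6 above).
# The "phenomenon" is a hidden rule y = 3*x; hypotheses are candidate multipliers k.

def run_experiment(x):
    """Stand-in for observation: query the phenomenon at input x and record the result."""
    return 3 * x          # the hidden regularity the inquirer is trying to discover

def inquire(max_rounds=10):
    observations = [(1, run_experiment(1))]                 # step 2: gather information
    for k in range(max_rounds):                             # step 3: hypothesize y = k*x
        prediction = [(x, k * x) for x in (2, 5, 7)]        # step 4: deduce predictions
        data = [(x, run_experiment(x)) for x in (2, 5, 7)]  # steps 4-5: experiment, collect data
        observations.extend(data)
        if prediction == data:                              # step 6: interpret and conclude
            return k                                        # provisionally accepted; retest later
    return None                                             # no tenable hypothesis found in range

print(inquire())   # prints 3: the hypothesis y = 3*x survives the test
```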
While this schema outlines a typical hypothesis/testing method, it should also be noted that a number of philosophers, historians and sociologists of science (perhaps most notably Paul Feyerabend) claim that such descriptions of scientific method have little relation to the ways science is actually practiced.
Another way of characterizing the process of inquiry identifies the following elements:
- Operation - Some action done to the system being investigated
- Observation - What happens when the operation is done to the system
- Model - A fact, hypothesis, theory, or the phenomenon itself at a certain moment
- Utility Function - A measure of the usefulness of the model to explain, predict, and control, and of the cost of using it. One of the elements of any scientific utility function is the refutability of the model. Another is its simplicity, following the principle of parsimony, more commonly known as Occam's Razor.
The scientific method depends upon increasingly sophisticated characterizations of the subjects of investigation. (The subjects can also be called unsolved problems or the unknowns.) For example, Benjamin Franklin conjectured, correctly, that St. Elmo's fire was electrical in nature, but it has taken a long series of experiments and theoretical changes to establish this. While seeking the pertinent properties of the subjects, careful thought may also entail some definitions and observations; the observations often demand careful measurements and/or counting.
The systematic, careful collection of measurements or counts of relevant quantities is often the critical difference between pseudo-sciences, such as alchemy, and science, such as chemistry or biology. Scientific measurements are usually tabulated, graphed, or mapped, and statistical manipulations, such as correlation and regression, performed on them. The measurements might be made in a controlled setting, such as a laboratory, or made on more or less inaccessible or unmanipulatable objects such as stars or human populations. The measurements often require specialized scientific instruments such as thermometers, spectroscopes, particle accelerators, or voltmeters, and the progress of a scientific field is usually intimately tied to their invention and improvement.
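As an illustration of the kind of statistical manipulation mentioned above (a generic sketch, not an example drawn from this article's sources), the following Python snippet computes a Pearson correlation and a least-squares line for a small set of made-up paired measurements, using only the standard library.

```python
# Generic illustration: given paired measurements, compute the Pearson correlation
# and a least-squares fit y = a + b*x. The example numbers are invented.

import math

x = [1.0, 2.0, 3.0, 4.0, 5.0]          # e.g. applied voltage (V)
y = [2.1, 3.9, 6.2, 7.8, 10.1]         # e.g. measured current (mA)

n = len(x)
mx, my = sum(x) / n, sum(y) / n
sxx = sum((xi - mx) ** 2 for xi in x)
syy = sum((yi - my) ** 2 for yi in y)
sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))

r = sxy / math.sqrt(sxx * syy)         # Pearson correlation coefficient
b = sxy / sxx                          # slope of the least-squares line
a = my - b * mx                        # intercept

print(f"r = {r:.4f}, fit: y = {a:.3f} + {b:.3f} x")
```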
|"I am not accustomed to saying anything with certainty after only one or two observations."—Andreas Vesalius (1546)|
Measurements in scientific work are also usually accompanied by estimates of their uncertainty. The uncertainty is often estimated by making repeated measurements of the desired quantity. Uncertainties may also be calculated by consideration of the uncertainties of the individual underlying quantities used. Counts of things, such as the number of people in a nation at a particular time, may also have an uncertainty due to data collection limitations. Or counts may represent a sample of desired quantities, with an uncertainty that depends upon the sampling method used and the number of samples taken.
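A minimal sketch of the first approach described above: estimating a quantity and its uncertainty from repeated measurements, reporting the mean plus or minus the standard error of the mean. The readings are invented for demonstration.

```python
# Sketch: uncertainty from repeated measurements of the same quantity.

import math
import statistics

readings = [9.79, 9.82, 9.81, 9.80, 9.83, 9.78]   # made-up repeated measurements

mean = statistics.mean(readings)
s = statistics.stdev(readings)                    # sample standard deviation
sem = s / math.sqrt(len(readings))                # standard error of the mean

print(f"{mean:.3f} +/- {sem:.3f} (n = {len(readings)})")
```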
Measurements demand the use of operational definitions of relevant quantities. That is, a scientific quantity is described or defined by how it is measured, as opposed to some more vague, inexact or "idealized" definition. For example, electrical current, measured in amperes, may be operationally defined in terms of the mass of silver deposited in a certain time on an electrode in an electrochemical device that is described in some detail. The operational definition of a thing often relies on comparisons with standards: the operational definition of "mass" ultimately relies on the use of an artifact, such as a particular kilogram of platinum-iridium kept in a laboratory in France.
The scientific definition of a term sometimes differs substantially from its natural language usage. For example, mass and weight overlap in meaning in common discourse, but have distinct meanings in mechanics. Scientific quantities are often characterized by their units of measure which can later be described in terms of conventional physical units when communicating the work.
New theories are sometimes developed after realizing certain terms have not previously been sufficiently clearly defined. For example, Albert Einstein's first paper on relativity begins by defining simultaneity and the means for determining length. These ideas were skipped over by Isaac Newton with, "I do not define time, space, place and motion, as being well known to all." Einstein's paper then demonstrates that they (viz., absolute time and length independent of motion) were approximations. Francis Crick cautions us that when characterizing a subject, however, it can be premature to define something when it remains ill-understood. In Crick's study of consciousness, he found it easier to study awareness in the visual system than to study free will, for example. His cautionary example was the gene: the gene was much more poorly understood before Watson and Crick's pioneering discovery of the structure of DNA, and it would have been counterproductive to spend much time on its definition before then.
The history of the discovery of the structure of DNA is a classic example of the elements of the scientific method: in 1950 it was known that genetic inheritance had a mathematical description, starting with the studies of Gregor Mendel, and that DNA contained genetic information (Oswald Avery's transforming principle). But the mechanism of storing genetic information (i.e., genes) in DNA was unclear. Researchers in Bragg's laboratory at Cambridge University made X-ray diffraction pictures of various molecules, starting with crystals of salt, and proceeding to more complicated substances. Using clues painstakingly assembled over decades, beginning with its chemical composition, it was determined that it should be possible to characterize the physical structure of DNA, and the X-ray images would be the vehicle.
Another example: precession of Mercury
The characterization element can require extended and extensive study, even centuries. It took thousands of years of measurements, from the Chaldean, Indian, Persian, Greek, Arabic and European astronomers, to fully record the motion of planet Earth. Newton was able to subsume those measurements into the consequences of his laws of motion. But the perihelion of the planet Mercury's orbit exhibits a precession that cannot be fully explained by Newton's laws of motion, though it took quite some time to realize this. The discrepancy between Newtonian theory and the observed precession of Mercury was one of the things that occurred to Einstein as a possible early test of his theory of General Relativity. His relativistic calculations matched observation much more closely than did Newtonian theory; the unexplained residual precession is approximately 43 arc-seconds per century.
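As a rough numerical check of the figure quoted above, the standard first-order general-relativistic formula for perihelion advance per orbit, 6*pi*G*M / (c^2 * a * (1 - e^2)), can be evaluated with approximate published values for Mercury's orbit. The sketch below is for illustration only; the constants are rounded and smaller corrections are ignored.

```python
# Rough check of the ~43 arc-seconds-per-century figure for Mercury, using the
# first-order general-relativistic perihelion-advance formula
#   delta_phi = 6*pi*G*M / (c**2 * a * (1 - e**2))   [radians per orbit]
# Constants are approximate published values, used only for illustration.

import math

GM_sun = 1.32712e20      # m^3/s^2, standard gravitational parameter of the Sun
c = 2.99792458e8         # m/s, speed of light
a = 5.7909e10            # m, semi-major axis of Mercury's orbit
e = 0.2056               # orbital eccentricity of Mercury
period_days = 87.969     # Mercury's orbital period in days

dphi = 6 * math.pi * GM_sun / (c**2 * a * (1 - e**2))   # radians per orbit
orbits_per_century = 36525.0 / period_days
arcsec_per_century = math.degrees(dphi * orbits_per_century) * 3600

print(f"{arcsec_per_century:.1f} arc-seconds per century")   # roughly 43
```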
An hypothesis is a suggested explanation of a phenomenon, or alternately a reasoned proposal suggesting a possible correlation between or among a set of phenomena.
Normally hypotheses have the form of a mathematical model. Sometimes, but not always, they can also be formulated as existential statements (stating that some particular instance of the phenomenon being studied has some characteristic) or as causal explanations, which have the general form of universal statements (stating that every instance of the phenomenon has a particular characteristic).
Scientists are free to use whatever resources they have — their own creativity, ideas from other fields, induction, Bayesian inference, and so on — to imagine possible explanations for a phenomenon under study. Charles Sanders Peirce, borrowing a page from Aristotle (Prior Analytics, 2.25) described the incipient stages of inquiry, instigated by the "irritation of doubt" to venture a plausible guess, as abductive reasoning. The history of science is filled with stories of scientists claiming a "flash of inspiration", or a hunch, which then motivated them to look for evidence to support or refute their idea. Michael Polanyi made such creativity the centerpiece of his discussion of methodology.
William Glen observes that
- the success of a hypothesis, or its service to science, lies not simply in its perceived "truth", or power to displace, subsume or reduce a predecessor idea, but perhaps more in its ability to stimulate the research that will illuminate … bald suppositions and areas of vagueness.
In general scientists tend to look for theories that are "elegant" or "beautiful". In contrast to the usual English use of these terms, they here refer to a theory in accordance with the known facts, which is nevertheless relatively simple and easy to handle. Occam's Razor serves as a rule of thumb for choosing the most desirable amongst a group of equally explanatory hypotheses.
Linus Pauling proposed that DNA might be a triple helix. This hypothesis was also considered by Francis Crick and James D. Watson but discarded. When Watson and Crick learned of Pauling's hypothesis, they understood from existing data that Pauling was wrong and that Pauling would soon admit his difficulties with that structure. So, the race was on to figure out the correct structure (except that Pauling did not realize at the time that he was in a race—see the section on "DNA-predictions" below).
Predictions from the hypothesis
Any useful hypothesis will enable predictions, by reasoning including deductive reasoning. It might predict the outcome of an experiment in a laboratory setting or the observation of a phenomenon in nature. The prediction can also be statistical and deal only with probabilities.
It is essential that the outcome of testing such a prediction be currently unknown. Only in this case does a successful outcome increase the probability that the hypothesis is true. If the outcome is already known, it is called a consequence and should have already been considered while formulating the hypothesis.
If the predictions are not accessible by observation or experience, the hypothesis is not yet testable and so will remain to that extent unscientific in a strict sense. A new technology or theory might make the necessary experiments feasible. Thus, much scientifically based speculation might convince one (or many) that the hypothesis that other intelligent species exist is true. But since there is no experiment now known which can test this hypothesis, science itself can have little to say about the possibility. In the future, some new technique might lead to an experimental test, and the speculation would then become part of accepted science.
James D. Watson, Francis Crick, and others hypothesized that DNA had a helical structure. This implied that DNA's X-ray diffraction pattern would be 'x shaped'. This prediction followed from the work of Cochran, Crick and Vand (and independently by Stokes). The Cochran-Crick-Vand-Stokes theorem provided a mathematical explanation for the empirical observation that diffraction from helical structures produces x shaped patterns.
In their first paper, Watson and Crick also noted that the double helix structure they proposed provided a simple mechanism for DNA replication, writing "It has not escaped our notice that the specific pairing we have postulated immediately suggests a possible copying mechanism for the genetic material".
Another example: general relativity
Einstein's theory of General Relativity makes several specific predictions about the observable structure of space-time, such as that light bends in a gravitational field, and that the amount of bending depends in a precise way on the strength of that gravitational field. Arthur Eddington's observations made during a 1919 solar eclipse supported General Relativity rather than Newtonian gravitation.
Once predictions are made, they can be sought by experiments. If the test results contradict the predictions, the hypotheses which entailed them are called into question and become less tenable. Sometimes the experiments are conducted incorrectly or are not very well designed, when compared to a crucial experiment. If the experimental results confirm the predictions, then the hypotheses are considered more likely to be correct, but might still be wrong and continue to be subject to further testing. The experimental control is a technique for dealing with observational error. This technique uses the contrast between multiple samples (or observations) under differing conditions to see what varies or what remains the same. We vary the conditions for each measurement, to help isolate what has changed. Mill's canons can then help us figure out what the important factor is. Factor analysis is one technique for discovering the important factor in an effect.
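The idea of contrasting samples under differing conditions can be illustrated with a simple permutation test (a generic statistical device, not something prescribed by the sources above): how often would a difference of means at least as large as the observed one arise if the treatment labels were assigned at random? All numbers below are made up for demonstration.

```python
# Illustration of the control/contrast idea: compare a treated sample against a
# control using a one-sided permutation test on the difference of means.

import random
import statistics

control = [4.8, 5.1, 4.9, 5.3, 5.0, 4.7]
treated = [5.6, 5.9, 5.4, 6.1, 5.8, 5.7]

observed = statistics.mean(treated) - statistics.mean(control)

pooled = control + treated
n_treated = len(treated)
rng = random.Random(0)          # fixed seed so the sketch is reproducible
trials = 10000
count = 0
for _ in range(trials):
    rng.shuffle(pooled)         # randomly reassign "treated"/"control" labels
    diff = statistics.mean(pooled[:n_treated]) - statistics.mean(pooled[n_treated:])
    if diff >= observed:
        count += 1

print(f"observed difference = {observed:.2f}, one-sided p ~= {count / trials:.4f}")
```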
Depending on the predictions, the experiments can have different shapes. It could be a classical experiment in a laboratory setting, a double-blind study or an archaeological excavation. Even taking a plane from New York to Paris is an experiment which tests the aerodynamical hypotheses used for constructing the plane.
Scientists assume an attitude of openness and accountability on the part of those conducting an experiment. Detailed record keeping is essential, to aid in recording and reporting on the experimental results, and supports the effectiveness and integrity of the procedure. They will also assist in reproducing the experimental results, likely by others. Traces of this approach can be seen in the work of Hipparchus (190-120 BCE), when determining a value for the precession of the Earth, while controlled experiments can be seen in the works of Jābir ibn Hayyān (721-815 CE), al-Battani (853–929) and Alhazen (965-1039).
Watson and Crick showed an initial (and incorrect) proposal for the structure of DNA to a team from King's College: Rosalind Franklin, Maurice Wilkins, and Raymond Gosling. Franklin immediately spotted the flaws, which concerned the water content. Later Watson saw Franklin's detailed X-ray diffraction images, which showed an X-shape, and was able to confirm the structure was helical. This rekindled Watson and Crick's model building and led to the correct structure.
Evaluation and improvement
The scientific method is iterative. At any stage it is possible to refine its accuracy and precision, so that some consideration will lead the scientist to repeat an earlier part of the process. Failure to develop an interesting hypothesis may lead a scientist to re-define the subject under consideration. Failure of a hypothesis to produce interesting and testable predictions may lead to reconsideration of the hypothesis or of the definition of the subject. Failure of an experiment to produce interesting results may lead a scientist to reconsider the experimental method, the hypothesis, or the definition of the subject.
Other scientists may start their own research and enter the process at any stage. They might adopt the characterization and formulate their own hypothesis, or they might adopt the hypothesis and deduce their own predictions. Often the experiment is not done by the person who made the prediction, and the characterization is based on experiments done by someone else. Published results of experiments can also serve as a hypothesis predicting their own reproducibility.
After considerable fruitless experimentation, being discouraged by their superior from continuing, and numerous false starts, Watson and Crick were able to infer the essential structure of DNA by concrete modeling of the physical shapes of the nucleotides which comprise it. They were guided by the bond lengths which had been deduced by Linus Pauling and by Rosalind Franklin's X-ray diffraction images.
Science is a social enterprise, and scientific work tends to be accepted by the scientific community when it has been confirmed. Crucially, experimental and theoretical results must be reproduced by others within the scientific community. Researchers have given their lives for this vision; Georg Wilhelm Richmann was killed by ball lightning (1753) when attempting to replicate the 1752 kite-flying experiment of Benjamin Franklin.
To protect against bad science and fraudulent data, government research-granting agencies such as the National Science Foundation, and science journals including Nature and Science, have a policy that researchers must archive their data and methods so other researchers can test the data and methods and build on the research that has gone before. Scientific data archiving can be done at a number of national archives in the U.S. or in the World Data Center.
Models of scientific inquiry
The classical model of scientific inquiry derives from Aristotle, who distinguished the forms of approximate and exact reasoning, set out the threefold scheme of abductive, deductive, and inductive inference, and also treated the compound forms such as reasoning by analogy.
In 1877, Charles Sanders Peirce (pronounced like "purse"; 1839–1914) characterized inquiry in general not as the pursuit of truth per se but as the struggle to move away from irritating, inhibitory doubts born of surprises, disagreements, and the like, and to reach a secure belief, belief being that on which one is prepared to act. He framed scientific inquiry as part of a broader spectrum and as spurred, like inquiry generally, by actual doubt, not mere verbal or hyperbolic doubt, which he held to be fruitless. He outlined four methods of settling opinion, ordered from least to most successful:
- The method of tenacity (policy of sticking to initial belief) — which brings comforts and decisiveness but leads to trying to ignore contrary information and others' views as if truth were intrinsically private, not public. It goes against the social impulse and easily falters since one may well notice when another's opinion is as good as one's own initial opinion. Its successes can shine but tend to be transitory.
- The method of authority — which overcomes disagreements but sometimes brutally. Its successes can be majestic and long-lived, but it cannot operate thoroughly enough to suppress doubts indefinitely, especially when people learn of other societies present and past.
- The method of the a priori — which promotes conformity less brutally but fosters opinions as something like tastes, arising in conversation and comparisons of perspectives in terms of "what is agreeable to reason." Thereby it depends on fashion in paradigms and goes in circles over time. It is more intellectual and respectable but, like the first two methods, sustains accidental and capricious beliefs, destining some minds to doubt it.
- The scientific method — the method wherein inquiry regards itself as fallible and purposely tests itself and criticizes, corrects, and improves itself.
Peirce held that slow, stumbling ratiocination can be dangerously inferior to instinct and traditional sentiment in practical matters, and that the scientific method is best suited to theoretical research, which in turn should not be trammeled by the other methods and practical ends; reason's "first rule" is that, in order to learn, one must desire to learn and, as a corollary, must not block the way of inquiry. The scientific method excels the others by being deliberately designed to arrive — eventually — at the most secure beliefs, upon which the most successful practices can be based. Starting from the idea that people seek not truth per se but instead to subdue irritating, inhibitory doubt, Peirce showed how, through the struggle, some can come to submit to truth for the sake of belief's integrity, seek as truth the guidance of potential practice correctly to its given goal, and wed themselves to the scientific method.
For Peirce, rational inquiry implies presuppositions about truth and the real; to reason is to presuppose (and at least to hope), as a principle of the reasoner's self-regulation, that the real is discoverable and independent of our vagaries of opinion. In that vein he defined truth as the correspondence of a sign (in particular, a proposition) to its object and, pragmatically, not as actual consensus of some definite, finite community (such that to inquire would be to poll the experts), but instead as that final opinion which all investigators would reach sooner or later but still inevitably, if they were to push investigation far enough, even when they start from different points. In tandem he defined the real as a true sign's object (be that object a possibility or quality, or an actuality or brute fact, or a necessity or norm or law), which is what it is independently of any finite community's opinion and, pragmatically, depends only on the final opinion destined in a sufficient investigation. That is a destination as far, or near, as the truth itself to you or me or the given finite community. Thus his theory of inquiry boils down to "Do the science." Those conceptions of truth and the real involve the idea of a community both without definite limits (and thus potentially self-correcting as far as needed) and capable of definite increase of knowledge. As inference, "logic is rooted in the social principle" since it depends on a standpoint that is, in a sense, unlimited.
Paying special attention to the generation of explanations, Peirce outlined the scientific method as a coordination of three kinds of inference in a purposeful cycle aimed at settling doubts, as follows (in §III–IV in "A Neglected Argument" except as otherwise noted):
1. Abduction (or retroduction). Guessing, inference to explanatory hypotheses for selection of those best worth trying. From abduction, Peirce distinguishes induction as inferring, on the basis of tests, the proportion of truth in the hypothesis. Every inquiry, whether into ideas, brute facts, or norms and laws, arises from surprising observations in one or more of those realms (and for example at any stage of an inquiry already underway). All explanatory content of theories comes from abduction, which guesses a new or outside idea so as to account in a simple, economical way for a surprising or complicative phenomenon. Oftenest, even a well-prepared mind guesses wrong. But the modicum of success of our guesses far exceeds that of sheer luck and seems born of attunement to nature by instincts developed or inherent, especially insofar as best guesses are optimally plausible and simple in the sense, said Peirce, of the "facile and natural", as by Galileo's natural light of reason and as distinct from "logical simplicity". Abduction is the most fertile but least secure mode of inference. Its general rationale is inductive: it succeeds often enough and, without it, there is no hope of sufficiently expediting inquiry (often multi-generational) toward new truths. Coordinative method leads from abducing a plausible hypothesis to judging it for its testability and for how its trial would economize inquiry itself. Peirce calls his pragmatism "the logic of abduction". His pragmatic maxim is: "Consider what effects that might conceivably have practical bearings you conceive the objects of your conception to have. Then, your conception of those effects is the whole of your conception of the object". His pragmatism is a method of reducing conceptual confusions fruitfully by equating the meaning of any conception with the conceivable practical implications of its object's conceived effects — a method of experimentational mental reflection hospitable to forming hypotheses and conducive to testing them. It favors efficiency. The hypothesis, being insecure, needs to have practical implications leading at least to mental tests and, in science, lending themselves to scientific tests. A simple but unlikely guess, if uncostly to test for falsity, may belong first in line for testing. A guess is intrinsically worth testing if it has instinctive plausibility or reasoned objective probability, while subjective likelihood, though reasoned, can be misleadingly seductive. Guesses can be chosen for trial strategically, for their caution (for which Peirce gave as example the game of Twenty Questions), breadth, and incomplexity. One can hope to discover only that which time would reveal through a learner's sufficient experience anyway, so the point is to expedite it; the economy of research is what demands the leap, so to speak, of abduction and governs its art.
2. Deduction. Two stages:
- i. Explication. Unclearly premissed, but deductive, analysis of the hypothesis in order to render its parts as clear as possible.
- ii. Demonstration: Deductive Argumentation, Euclidean in procedure. Explicit deduction of hypothesis's consequences as predictions, for induction to test, about evidence to be found. Corollarial or, if needed, Theorematic.
3. Induction. The long-run validity of the rule of induction is deducible from the principle (presuppositional to reasoning in general) that the real is only the object of the final opinion to which adequate investigation would lead; anything to which no such process would ever lead would not be real. Induction involving ongoing tests or observations follows a method which, sufficiently persisted in, will diminish its error below any predesignate degree. Three stages:
- i. Classification. Unclearly premissed, but inductive, classing of objects of experience under general ideas.
- ii. Probation: direct Inductive Argumentation. Crude (the enumeration of instances) or Gradual (new estimate of proportion of truth in the hypothesis after each test). Gradual Induction is Qualitative or Quantitative; if Qualitative, then dependent on weightings of qualities or characters; if Quantitative, then dependent on measurements, or on statistics, or on countings.
- iii. Sentential Induction. "...which, by Inductive reasonings, appraises the different Probations singly, then their combinations, then makes self-appraisal of these very appraisals themselves, and passes final judgment on the whole result".
Many subspecialties of applied logic and computer science, such as artificial intelligence, machine learning, computational learning theory, inferential statistics, and knowledge representation, are concerned with setting out computational, logical, and statistical frameworks for the various types of inference involved in scientific inquiry. In particular, they contribute to hypothesis formation, logical deduction, and empirical testing. Some of these applications draw on measures of complexity from algorithmic information theory to guide the making of predictions from prior distributions of experience; for example, see the complexity measure called the speed prior, from which a computable strategy for optimal inductive reasoning can be derived.
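As one concrete example of such a statistical framework (a generic illustration, not drawn from the works cited here), the sketch below performs a simple Bayesian update over three made-up hypotheses about a coin's bias as flips are observed; the hypothesis with the highest posterior probability is the one best supported by the evidence so far.

```python
# Generic illustration of statistical inference: Bayesian updating over three
# candidate hypotheses about a coin's bias. Priors, hypotheses, and data are
# invented for demonstration.

hypotheses = {"fair": 0.5, "biased_heads": 0.8, "biased_tails": 0.2}
posterior = {name: 1.0 / len(hypotheses) for name in hypotheses}   # uniform prior

observed_flips = "HHTHHHTH"                                        # H = heads, T = tails

for flip in observed_flips:
    for name, p_heads in hypotheses.items():
        likelihood = p_heads if flip == "H" else 1.0 - p_heads
        posterior[name] *= likelihood                              # Bayes: prior x likelihood
    total = sum(posterior.values())
    posterior = {name: p / total for name, p in posterior.items()} # renormalize

for name, p in sorted(posterior.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {p:.3f}")
```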
Communication and community
Frequently the scientific method is employed not only by a single person, but also by several people cooperating directly or indirectly. Such cooperation can be regarded as one of the defining elements of a scientific community. Various techniques have been developed to ensure the integrity of scientific methodology within such an environment.
Peer review evaluation
Scientific journals use a process of peer review, in which scientists' manuscripts are submitted by editors of scientific journals to (usually one to three) fellow (usually anonymous) scientists familiar with the field for evaluation. The referees may recommend publication, publication with suggested modifications, or, sometimes, publication in another journal, or they may recommend rejection. This serves to keep the scientific literature free of unscientific or pseudoscientific work, to help cut down on obvious errors, and generally otherwise to improve the quality of the material. The peer review process can have limitations when considering research outside the conventional scientific paradigm: problems of "groupthink" can interfere with open and fair deliberation of some new research.
Documentation and replication
Sometimes experimenters may make systematic errors during their experiments, unconsciously veer from scientific method (Pathological science) for various reasons, or, in rare cases, deliberately report false results. Consequently, it is a common practice for other scientists to attempt to repeat the experiments in order to duplicate the results, thus further validating the hypothesis.
As a result, researchers are expected to practice scientific data archiving in compliance with the policies of government funding agencies and scientific journals. Detailed records of their experimental procedures, raw data, statistical analyses and source code are preserved in order to provide evidence of the effectiveness and integrity of the procedure and assist in reproduction. These procedural records may also assist in the conception of new experiments to test the hypothesis, and may prove useful to engineers who might examine the potential practical applications of a discovery.
When additional information is needed before a study can be reproduced, the author of the study is expected to provide it promptly. If the author refuses to share data, appeals can be made to the journal editors who published the study or to the institution which funded the research.
Since it is impossible for a scientist to record everything that took place in an experiment, facts selected for their apparent relevance are reported. This may lead, unavoidably, to problems later if some supposedly irrelevant feature is questioned. For example, Heinrich Hertz did not report the size of the room used to test Maxwell's equations, which later turned out to account for a small deviation in the results. The problem is that parts of the theory itself need to be assumed in order to select and report the experimental conditions. The observations are hence sometimes described as being 'theory-laden'.
Dimensions of practice
The primary constraints on contemporary science are:
- Publication, i.e. Peer review
- Resources (mostly funding)
It has not always been like this: in the old days of the "gentleman scientist", funding (and, to a lesser extent, publication) were far weaker constraints.
Both of these constraints indirectly require scientific method — work that violates the constraints will be difficult to publish and difficult to get funded. Journals require submitted papers to conform to "good scientific practice", and this is mostly enforced by peer review. Originality, importance and interest often matter even more; see, for example, the author guidelines for Nature.
Philosophy and sociology of science
Philosophy of science looks at the underpinning logic of the scientific method, at what separates science from non-science, and the ethic that is implicit in science. There are basic assumptions derived from philosophy that form the base of the scientific method - namely, that reality is objective and consistent, that humans have the capacity to perceive reality accurately, and that rational explanations exist for elements of the real world. These assumptions from methodological naturalism form the basis on which science is grounded. Logical Positivist, empiricist, falsificationist, and other theories have claimed to give a definitive account of the logic of science, but each has in turn been criticized.
Thomas Kuhn examined the history of science in his The Structure of Scientific Revolutions, and found that the actual method used by scientists differed dramatically from the then-espoused method. His observations of science practice are essentially sociological and do not speak to how science is or can be practiced in other times and other cultures.
Norwood Russell Hanson, Imre Lakatos and Thomas Kuhn have done extensive work on the "theory laden" character of observation. Hanson (1958) first coined the term for the idea that all observation is dependent on the conceptual framework of the observer, using the concept of gestalt to show how preconceptions can affect both observation and description. He opens Chapter 1 with a discussion of the Golgi bodies and their initial rejection as an artefact of staining technique, and a discussion of Brahe and Kepler observing the dawn and seeing a "different" sun rise despite the same physiological phenomenon. Kuhn and Feyerabend acknowledge the pioneering significance of his work.
Kuhn (1961) said the scientist generally has a theory in mind before designing and undertaking experiments so as to make empirical observations, and that the "route from theory to measurement can almost never be traveled backward". This implies that the way in which theory is tested is dictated by the nature of the theory itself, which led Kuhn (1961, p. 166) to argue that "once it has been adopted by a profession ... no theory is recognized to be testable by any quantitative tests that it has not already passed".
Paul Feyerabend similarly examined the history of science, and was led to deny that science is genuinely a methodological process. In his book Against Method he argues that scientific progress is not the result of applying any particular method. In essence, he says that for any specific method or norm of science, one can find a historic episode where violating it has contributed to the progress of science. Thus, if believers in scientific method wish to express a single universally valid rule, Feyerabend jokingly suggests, it should be 'anything goes'. Criticisms such as his led to the strong programme, a radical approach to the sociology of science.
The postmodernist critiques of science have themselves been the subject of intense controversy. This ongoing debate, known as the science wars, is the result of conflicting values and assumptions between the postmodernist and realist camps. Whereas postmodernists assert that scientific knowledge is simply another discourse (note that this term has special meaning in this context) and not representative of any form of fundamental truth, realists in the scientific community maintain that scientific knowledge does reveal real and fundamental truths about reality. Many books have been written by scientists which take on this problem and challenge the assertions of the postmodernists while defending science as a legitimate method of deriving truth.
Role of chance in discovery
Somewhere between 33% and 50% of all scientific discoveries are estimated to have been stumbled upon, rather than sought out. This may explain why scientists so often express that they were lucky. Louis Pasteur is credited with the famous saying that "Luck favours the prepared mind", but some psychologists have begun to study what it means to be 'prepared for luck' in the scientific context. Research is showing that scientists are taught various heuristics that tend to harness chance and the unexpected. This is what professor of economics Nassim Nicholas Taleb calls "Anti-fragility"; while some systems of investigation are fragile in the face of human error, human bias, and randomness, the scientific method is more than resistant or tough - it actually benefits from such randomness in many ways (it is anti-fragile). Taleb believes that the more anti-fragile the system, the more it will flourish in the real world.
Psychologist Kevin Dunbar says the process of discovery often starts with researchers finding bugs in their experiments. These unexpected results lead researchers to try and fix what they think is an error in their method. Eventually, the researcher decides the error is too persistent and systematic to be a coincidence. The highly controlled, cautious and curious aspects of the scientific method are thus what make it well suited for identifying such persistent systematic errors. At this point, the researcher will begin to think of theoretical explanations for the error, often seeking the help of colleagues across different domains of expertise.
The development of the scientific method is inseparable from the history of science itself. Ancient Egyptian documents describe empirical methods in astronomy, mathematics, and medicine. The ancient Greek philosopher Thales in the 6th century BC refused to accept supernatural, religious or mythological explanations for natural phenomena, proclaiming that every event had a natural cause. The development of deductive reasoning by Plato was an important step towards the scientific method. Empiricism seems to have been formalized by Aristotle, who believed that universal truths could be reached via induction.
There are hints of experimental methods from the Classical world (e.g., those reported by Archimedes in a report recovered early in the 20th century from an overwritten manuscript), but the first clear instances of an experimental scientific method seem to have been developed by Islamic scientists who introduced the use of experimentation and quantification within a generally empirical orientation. For example, Alhazen performed optical and physiological experiments, reported in his manifold works, the most famous being Book of Optics (1021).
By the late 15th century, the physician-scholar Niccolò Leoniceno was finding errors in Pliny's Natural History. As a physician, Leoniceno was concerned about these botanical errors propagating to the materia medica on which medicines were based. To counter this, a botanical garden was established at Orto botanico di Padova, University of Padua (in use for teaching by 1546), in order that medical students might have empirical access to the plants of a pharmacopoeia. The philosopher and physician Francisco Sanches was led by his medical training at Rome, 1571–73, and by the philosophical skepticism recently placed in the European mainstream by the publication of Sextus Empiricus' "Outlines of Pyrrhonism", to search for a true method of knowing (modus sciendi), as nothing clear can be known by the methods of Aristotle and his followers — for example, syllogism fails upon circular reasoning. Following the physician Galen's method of medicine, Sanches lists the methods of judgement and experience, which are faulty in the wrong hands, and we are left with the bleak statement That Nothing is Known (1581). This challenge was taken up by René Descartes in the next generation (1637), but at the least, Sanches warns us that we ought to refrain from the methods, summaries, and commentaries on Aristotle, if we seek scientific knowledge. In this, he is echoed by Francis Bacon, also influenced by skepticism; Sanches cites the humanist Juan Luis Vives who sought a better educational system, as well as a statement of human rights as a pathway for improvement of the lot of the poor.
The modern scientific method crystallized no later than in the 17th and 18th centuries. In his work Novum Organum (1620) — a reference to Aristotle's Organon — Francis Bacon outlined a new system of logic to improve upon the old philosophical process of syllogism. Then, in 1637, René Descartes established the framework for scientific method's guiding principles in his treatise, Discourse on Method. The writings of Alhazen, Bacon and Descartes are considered critical in the historical development of the modern scientific method, as are those of John Stuart Mill.
Grosseteste was "the principal figure" in bringing about "a more adequate method of scientific inquiry" by which "medieval scientists were able eventually to outstrip their ancient European and Muslim teachers" (Dales 1973:62). ... His thinking influenced Roger Bacon, who spread Grosseteste's ideas from Oxford to the University of Paris during a visit there in the 1240s. From the prestigious universities in Oxford and Paris, the new experimental science spread rapidly throughout the medieval universities: "And so it went to Galileo, William Gilbert, Francis Bacon, William Harvey, Descartes, Robert Hooke, Newton, Leibniz, and the world of the seventeenth century" (Crombie 1962:15). So it went to us also.— Hugh G. Gauch, 2003.
In the late 19th century, Charles Sanders Peirce proposed a schema that would turn out to have considerable influence in the development of current scientific methodology generally. Peirce accelerated the progress on several fronts. Firstly, speaking in broader context in "How to Make Our Ideas Clear" (1878), Peirce outlined an objectively verifiable method to test the truth of putative knowledge in a way that goes beyond mere foundational alternatives, focusing upon both deduction and induction. He thus placed induction and deduction in a complementary rather than competitive context (the latter of which had been the primary trend at least since David Hume, who wrote in the mid-to-late 18th century). Secondly, and of more direct importance to modern method, Peirce put forth the basic schema for hypothesis/testing that continues to prevail today. Extracting the theory of inquiry from its raw materials in classical logic, he refined it in parallel with the early development of symbolic logic to address the then-current problems in scientific reasoning. Peirce examined and articulated the three fundamental modes of reasoning that, as discussed above in this article, play a role in inquiry today, the processes that are currently known as abductive, deductive, and inductive inference. Thirdly, he played a major role in the progress of symbolic logic itself — indeed this was his primary specialty.
Beginning in the 1930s, Karl Popper argued that there is no such thing as inductive reasoning. All inferences ever made, including in science, are purely deductive according to this view. Accordingly, he claimed that the empirical character of science has nothing to do with induction—but with the deductive property of falsifiability that scientific hypotheses have. Contrasting his views with inductivism and positivism, he even denied the existence of the scientific method: "(1) There is no method of discovering a scientific theory (2) There is no method for ascertaining the truth of a scientific hypothesis, i.e., no method of verification; (3) There is no method for ascertaining whether a hypothesis is 'probable', or probably true". Instead, he held that there is only one universal method, a method not particular to science: The negative method of criticism, or colloquially termed trial and error. It covers not only all products of the human mind, including science, mathematics, philosophy, art and so on, but also the evolution of life. Following Peirce and others, Popper argued that science is fallible and has no authority. In contrast to empiricist-inductivist views, he welcomed metaphysics and philosophical discussion and even gave qualified support to myths and pseudosciences. Popper's view has become known as critical rationalism.
Although science in a broad sense existed before the modern era, and in many historical civilizations (as described above), modern science is so distinct in its approach and successful in its results that it now defines what science is in the strictest sense of the term.
Relationship with mathematics
Science is the process of gathering, comparing, and evaluating proposed models against observables. A model can be a simulation, mathematical or chemical formula, or set of proposed steps. Science is like mathematics in that researchers in both disciplines can clearly distinguish what is known from what is unknown at each stage of discovery. Models, in both science and mathematics, need to be internally consistent and also ought to be falsifiable (capable of disproof). In mathematics, a statement need not yet be proven; at such a stage, that statement would be called a conjecture. But when a statement has attained mathematical proof, that statement gains a kind of immortality which is highly prized by mathematicians, and to which some mathematicians devote their lives.
Mathematical work and scientific work can inspire each other. For example, the technical concept of time arose in science, and timelessness was a hallmark of a mathematical topic. But today, the Poincaré conjecture has been proven using time as a mathematical concept in which objects can flow (see Ricci flow).
Nevertheless, the connection between mathematics and reality (and so science to the extent it describes reality) remains obscure. Eugene Wigner's paper, The Unreasonable Effectiveness of Mathematics in the Natural Sciences, is a very well known account of the issue from a Nobel Prize-winning physicist. In fact, some observers (including some well known mathematicians such as Gregory Chaitin, and others such as Lakoff and Núñez) have suggested that mathematics is the result of practitioner bias and human limitation (including cultural ones), somewhat like the post-modernist view of science.
George Pólya's work on problem solving, the construction of mathematical proofs, and heuristic show that the mathematical method and the scientific method differ in detail, while nevertheless resembling each other in using iterative or recursive steps.
|   | Mathematical method | Scientific method |
|---|---------------------|-------------------|
| 1 | Understanding       | Characterization from experience and observation |
| 2 | Analysis            | Hypothesis: a proposed explanation |
| 3 | Synthesis            | Deduction: prediction from the hypothesis |
| 4 | Review/Extend       | Test and experiment |
In Pólya's view, understanding involves restating unfamiliar definitions in your own words, resorting to geometrical figures, and questioning what we know and do not know already; analysis, which Pólya takes from Pappus, involves free and heuristic construction of plausible arguments, working backward from the goal, and devising a plan for constructing the proof; synthesis is the strict Euclidean exposition of step-by-step details of the proof; review involves reconsidering and re-examining the result and the path taken to it.
Notes
- Goldhaber & Nieto 2010, p. 940
- " Rules for the study of natural philosophy", Newton 1999, pp. 794–6, from Book 3, The System of the World.
- Oxford English Dictionary - entry for scientific.
- "How does light travel through transparent bodies? Light travels through transparent bodies in straight lines only.... We have explained this exhaustively in our Book of Optics. But let us now mention something to prove this convincingly: the fact that light travels in straight lines is clearly observed in the lights which enter into dark rooms through holes.... [T]he entering light will be clearly observable in the dust which fills the air. —Alhazen, translated into English from German by M. Schwarz, from "Abhandlung über das Licht", J. Baarmann (ed. 1882) Zeitschrift der Deutschen Morgenländischen Gesellschaft Vol 36 as quoted in Sambursky 1974, p. 136.
- He demonstrated his conjecture that "light travels through transparent bodies in straight lines only" by placing a straight stick or a taut thread next to the light beam, as quoted in Sambursky 1974, p. 136 to prove that light travels in a straight line.
- David Hockney, (2001, 2006) in Secret Knowledge: rediscovering the lost techniques of the old masters ISBN 0-14-200512-6 (expanded edition) cites Alhazen several times as the likely source for the portraiture technique using the camera obscura, which Hockney rediscovered with the aid of an optical suggestion from Charles M. Falco. Kitab al-Manazir, which is Alhazen's Book of Optics, at that time denoted Opticae Thesaurus, Alhazen Arabis, was translated from Arabic into Latin for European use as early as 1270. Hockney cites Friedrich Risner's 1572 Basle edition of Opticae Thesaurus. Hockney quotes Alhazen as the first clear description of the camera obscura in Hockney, p. 240.
- Morris Kline (1985) Mathematics for the nonmathematician. Courier Dover Publications. p. 284. ISBN 0-486-24823-2
- Shapere, Dudley (1974). Galileo: A Philosophical Study. University of Chicago Press. ISBN 0-226-75007-8.
- Peirce, C. S., Collected Papers v. 1, paragraph 74.
- " The thesis of this book, as set forth in Chapter One, is that there are general principles applicable to all the sciences." __ Gauch 2003, p. xv
- Peirce (1877), "The Fixation of Belief", Popular Science Monthly, v. 12, pp. 1–15. Reprinted often, including (Collected Papers of Charles Sanders Peirce v. 5, paragraphs 358–87), (The Essential Peirce, v. 1, pp. 109–23). Peirce.org Eprint. Wikisource Eprint.
- Gauch 2003, p. 1: This is the principle of noncontradiction.
- Peirce, C. S., Collected Papers v. 5, in paragraph 582, from 1898:
... [rational] inquiry of every type, fully carried out, has the vital power of self-correction and of growth. This is a property so deeply saturating its inmost nature that it may truly be said that there is but one thing needful for learning the truth, and that is a hearty and active desire to learn what is true.
- Taleb contributes a brief description of anti-fragility, http://www.edge.org/q2011/q11_3.html
- Karl R. Popper (1963), The Logic of Scientific Discovery, pp. 17–20, 249–252, 437–438, and elsewhere.
- Leon Lederman, for teaching physics first, illustrates how to avoid confirmation bias: Ian Shelton, in Chile, was initially skeptical that supernova 1987a was real, but possibly an artifact of instrumentation (null hypothesis), so he went outside and disproved his null hypothesis by observing SN 1987a with the naked eye. The Kamiokande experiment, in Japan, independently observed neutrinos from SN 1987a at the same time.
- Peirce (1908), "A Neglected Argument for the Reality of God", Hibbert Journal v. 7, pp. 90-112. s:A Neglected Argument for the Reality of God with added notes. Reprinted with previously unpublished part, Collected Papers v. 6, paragraphs 452-85, The Essential Peirce v. 2, pp. 434-50, and elsewhere.
- Gauch 2003, p. 3
- William Whewell, History of the Inductive Sciences (1837), and Philosophy of the Inductive Sciences (1840)
- Schuster and Powers (2005), Translational and Experimental Clinical Research, Ch. 1. Link. This chapter also discusses the different types of research questions and how they are produced.
- This phrasing is attributed to Marshall Nirenberg.
- Karl R. Popper, Conjectures and Refutations: The Growth of Scientific Knowledge, Routledge, 2003 ISBN 0-415-28594-1
- Lindberg 2007, pp. 2–3: "There is a danger that must be avoided. ... If we wish to do justice to the historical enterprise, we must take the past for what it was. And that means we must resist the temptation to scour the past for examples or precursors of modern science. ...My concern will be with the beginnings of scientific theories, the methods by which they were formulated, and the uses to which they were put; ... "
- Galilei, Galileo (M.D.C.XXXVIII), Discorsi e Dimonstrazioni Matematiche, intorno a due nuoue scienze, Leida: Apresso gli Elsevirri, ISBN 0-486-60099-8, Dover reprint of the 1914 Macmillan translation by Henry Crew and Alfonso de Salvio of Two New Sciences, Galileo Galilei Linceo (1638). Additional publication information is from the collection of first editions of the Library of Congress surveyed by Bruno 1989, pp. 261–264.
- Godfrey-Smith 2003 p. 236.
- October 1951, as noted in McElheny 2004, p. 40:"That's what a helix should look like!" Crick exclaimed in delight (This is the Cochran-Crick-Vand-Stokes theory of the transform of a helix).
- June 1952, as noted in McElheny 2004, p. 43: Watson had succeeded in getting X-ray pictures of TMV showing a diffraction pattern consistent with the transform of a helix.
- Watson did enough work on Tobacco mosaic virus to produce the diffraction pattern for a helix, per Crick's work on the transform of a helix. pp. 137-138, Horace Freeland Judson (1979) The Eighth Day of Creation ISBN 0-671-22540-5
- Cochran W, Crick FHC and Vand V. (1952) "The Structure of Synthetic Polypeptides. I. The Transform of Atoms on a Helix", Acta Cryst., 5, 581-586.
- Friday, January 30, 1953. Tea time, as noted in McElheny 2004, p. 52: Franklin confronts Watson and his paper - "Of course it [Pauling's pre-print] is wrong. DNA is not a helix." However, Watson then visits Wilkins' office, sees photo 51, and immediately recognizes the diffraction pattern of a helical structure. But additional questions remained, requiring additional iterations of their research. For example, the number of strands in the backbone of the helix (Crick suspected 2 strands, but cautioned Watson to examine that more critically), the location of the base pairs (inside the backbone or outside the backbone), etc. One key point was that they realized that the quickest way to reach a result was not to continue a mathematical analysis, but to build a physical model.
- "The instant I saw the picture my mouth fell open and my pulse began to race." —Watson 1968, p. 167 Page 168 shows the X-shaped pattern of the B-form of DNA, clearly indicating crucial details of its helical structure to Watson and Crick.
- McElheny 2004 p.52 dates the Franklin-Watson confrontation as Friday, January 30, 1953. Later that evening, Watson urges Wilkins to begin model-building immediately. But Wilkins agrees to do so only after Franklin's departure.
- Saturday, February 28, 1953, as noted in McElheny 2004, pp. 57–59: Watson found the base pairing mechanism which explained Chargaff's rules using his cardboard models.
- Fleck 1979, pp. xxvii-xxviii
- "NIH Data Sharing Policy."
- Stanovich, Keith E. (2007). How to Think Straight About Psychology. Boston: Pearson Education. p. 123
- Brody 1993, pp. 44–45
- Hall, B. K.; Hallgrímsson, B., eds. (2008). Strickberger's Evolution (4th ed.). Jones & Bartlett. p. 762. ISBN 0-7637-0066-5.
- Cracraft, J.; Donoghue, M. J., eds. (2005). Assembling the tree of life. Oxford University Press. p. 592. ISBN 0-19-517234-5.
- Needham & Wang 1954 p.166 shows how the 'flying gallop' image propagated from China to the West.
- "A myth is a belief given uncritical acceptance by members of a group ..." —Weiss, Business Ethics p. 15, as cited by Ronald R. Sims (2003) Ethics and corporate social responsibility: why giants fall p.21
- Imre Lakatos (1976), Proofs and Refutations. Taleb 2007, p. 72 lists ways to avoid narrative fallacy and confirmation bias.
- For more on the narrative fallacy, see also Fleck 1979, p. 27: "Words and ideas are originally phonetic and mental equivalences of the experiences coinciding with them. ... Such proto-ideas are at first always too broad and insufficiently specialized. ... Once a structurally complete and closed system of opinions consisting of many details and relations has been formed, it offers enduring resistance to anything that contradicts it."
- "Invariably one came up against fundamental physical limits to the accuracy of measurement. ... The art of physical measurement seemed to be a matter of compromise, of choosing between reciprocally related uncertainties. ... Multiplying together the conjugate pairs of uncertainty limits mentioned, however, I found that they formed invariant products of not one but two distinct kinds. ... The first group of limits were calculable a priori from a specification of the instrument. The second group could be calculated only a posteriori from a specification of what was done with the instrument. ... In the first case each unit [of information] would add one additional dimension (conceptual category), whereas in the second each unit would add one additional atomic fact.", —Pages 1-4: MacKay, Donald M. (1969), Information, Mechanism, and Meaning, Cambridge, MA: MIT Press, ISBN 0-262-63-032-X
- See the hypothetico-deductive method, for example, Godfrey-Smith 2003, p. 236.
- Jevons 1874, pp. 265–6.
- pp. 65, 73, 92, 398 —Andrew J. Galambos, Sic Itur ad Astra ISBN 0-88078-004-5 (AJG learned scientific method from Felix Ehrenhaft)
- Galileo 1638, pp. v-xii,1–300
- Brody 1993, pp. 10–24 calls this the "epistemic cycle": "The epistemic cycle starts from an initial model; iterations of the cycle then improve the model until an adequate fit is achieved."
- Iteration example: Chaldean astronomers such as Kidinnu compiled astronomical data. Hipparchus was to use this data to calculate the precession of the Earth's axis. Fifteen hundred years after Kidinnu, Al-Batani, born in what is now Turkey, would use the collected data and improve Hipparchus' value for the precession of the Earth's axis. Al-Batani's value, 54.5 arc-seconds per year, compares well to the current value of 49.8 arc-seconds per year (26,000 years for Earth's axis to round the circle of nutation).
- Recursion example: the Earth is itself a magnet, with its own North and South Poles William Gilbert (in Latin 1600) De Magnete, or On Magnetism and Magnetic Bodies. Translated from Latin to English, selection by Moulton & Schifferes 1960, pp. 113–117. Gilbert created a terrella, a lodestone ground into a spherical shape, which served as Gilbert's model for the Earth itself, as noted in Bruno 1989, p. 277.
- "The foundation of general physics ... is experience. These ... everyday experiences we do not discover without deliberately directing our attention to them. Collecting information about these is observation." —Hans Christian Ørsted("First Introduction to General Physics" ¶13, part of a series of public lectures at the University of Copenhagen. Copenhagen 1811, in Danish, printed by Johan Frederik Schulz. In Kirstine Meyer's 1920 edition of Ørsted's works, vol.III pp. 151-190. ) "First Introduction to Physics: the Spirit, Meaning, and Goal of Natural Science". Reprinted in German in 1822, Schweigger's Journal für Chemie und Physik 36, pp.458-488, as translated in Ørsted 1997, p. 292
- "When it is not clear under which law of nature an effect or class of effect belongs, we try to fill this gap by means of a guess. Such guesses have been given the name conjectures or hypotheses." —Hans Christian Ørsted(1811) "First Introduction to General Physics" as translated in Ørsted 1997, p. 297.
- "In general we look for a new law by the following process. First we guess it. ...", —Feynman 1965, p. 156
- "... the statement of a law - A depends on B - always transcends experience."—Born 1949, p. 6
- "The student of nature ... regards as his property the experiences which the mathematician can only borrow. This is why he deduces theorems directly from the nature of an effect while the mathematician only arrives at them circuitously." —Hans Christian Ørsted(1811) "First Introduction to General Physics" ¶17. as translated in Ørsted 1997, p. 297.
- Salviati speaks: "I greatly doubt that Aristotle ever tested by experiment whether it be true that two stones, one weighing ten times as much as the other, if allowed to fall, at the same instant, from a height of, say, 100 cubits, would so differ in speed that when the heavier had reached the ground, the other would not have fallen more than 10 cubits." Two New Sciences (1638) —Galileo 1638, pp. 61–62. A more extended quotation is referenced by Moulton & Schifferes 1960, pp. 80–81.
- In the inquiry-based education paradigm, the stage of "characterization, observation, definition, …" is more briefly summed up under the rubric of a Question
- "To raise new questions, new possibilities, to regard old problems from a new angle, requires creative imagination and marks real advance in science." —Einstein & Infeld 1938, p. 92.
- Crawford S, Stucki L (1990), "Peer review and the changing research record", "J Am Soc Info Science", vol. 41, pp 223-228
- See, e.g., Gauch 2003, esp. chapters 5-8
- Cartwright, Nancy (1983), How the Laws of Physics Lie. Oxford: Oxford University Press. ISBN 0-19-824704-4
- Andreas Vesalius, Epistola, Rationem, Modumque Propinandi Radicis Chynae Decocti (1546), 141. Quoted and translated in C.D. O'Malley, Andreas Vesalius of Brussels, (1964), 116. As quoted by Bynum & Porter 2005, p. 597.
- Crick, Francis (1994), The Astonishing Hypothesis ISBN 0-684-19431-7 p.20
- McElheny 2004 p.34
- Glen 1994, pp. 37–38.
- "The structure that we propose is a three-chain structure, each chain being a helix" — Linus Pauling, as quoted on p. 157 by Horace Freeland Judson (1979), The Eighth Day of Creation ISBN 0-671-22540-5
- McElheny 2004, pp. 49–50: January 28, 1953 - Watson read Pauling's pre-print, and realized that in Pauling's model, DNA's phosphate groups had to be un-ionized. But DNA is an acid, which contradicts Pauling's model.
- June 1952. as noted in McElheny 2004, p. 43: Watson had succeeded in getting X-ray pictures of TMV showing a diffraction pattern consistent with the transform of a helix.
- McElheny 2004 p.68: Nature April 25, 1953.
- In March 1917, the Royal Astronomical Society announced that on May 29, 1919, the occasion of a total eclipse of the sun would afford favorable conditions for testing Einstein's General theory of relativity. One expedition, to Sobral, Ceará, Brazil, and Eddington's expedition to the island of Principe yielded a set of photographs, which, when compared to photographs taken at Sobral and at Greenwich Observatory showed that the deviation of light was measured to be 1.69 arc-seconds, as compared to Einstein's desk prediction of 1.75 arc-seconds. — Antonina Vallentin (1954), Einstein, as quoted by Samuel Rapport and Helen Wright (1965), Physics, New York: Washington Square Press, pp 294-295.
- Mill, John Stuart, "A System of Logic", University Press of the Pacific, Honolulu, 2002, ISBN 1-4102-0252-6.
- al-Battani, De Motu Stellarum translation from Arabic to Latin in 1116, as cited by "Battani, al-" (c.858-929) Encyclopaedia Britannica, 15th. ed. Al-Battani is known for his accurate observations at al-Raqqah in Syria, beginning in 877. His work includes measurement of the annual precession of the equinoxes.
- McElheny 2004 p.53: The weekend (January 31-February 1) after seeing photo 51, Watson informed Bragg of the X-ray diffraction image of DNA in B form. Bragg gave them permission to restart their research on DNA (that is, model building).
- McElheny 2004 p.54: On Sunday February 8, 1953, Maurice Wilkins gave Watson and Crick permission to work on models, as Wilkins would not be building models until Franklin left DNA research.
- McElheny 2004 p.56: Jerry Donohue, on sabbatical from Pauling's lab and visiting Cambridge, advises Watson that textbook form of the base pairs was incorrect for DNA base pairs; rather, the keto form of the base pairs should be used instead. This form allowed the bases' hydrogen bonds to pair 'unlike' with 'unlike', rather than to pair 'like' with 'like', as Watson was inclined to model, on the basis of the textbook statements. On February 27, 1953, Watson was convinced enough to make cardboard models of the nucleotides in their keto form.
- "Suddenly I became aware that an adenine-thymine pair held together by two hydrogen bonds was identical in shape to a guanine-cytosine pair held together by at least two hydrogen bonds. ..." —Watson 1968, pp. 194–197.
- McElheny 2004 p.57 Saturday, February 28, 1953, Watson tried 'like with like' and admitted these base pairs didn't have hydrogen bonds that line up. But after trying 'unlike with unlike', and getting Jerry Donohue's approval, the base pairs turned out to be identical in shape (as Watson stated in his 1968 Double Helix memoir quoted above). Watson now felt confident enough to inform Crick. (Of course, 'unlike with unlike' increases the number of possible codons, if this scheme were a genetic code.)
- See, e.g., Physics Today, 59(1), p42. Richmann electrocuted in St. Petersburg (1753)
- Aristotle, "Prior Analytics", Hugh Tredennick (trans.), pp. 181-531 in Aristotle, Volume 1, Loeb Classical Library, William Heinemann, London, UK, 1938.
- "What one does not in the least doubt one should not pretend to doubt; but a man should train himself to doubt," said Peirce in a brief intellectual autobiography; see Ketner, Kenneth Laine (2009) "Charles Sanders Peirce: Interdisciplinary Scientist" in The Logic of Interdisciplinarity). Peirce held that actual, genuine doubt originates externally, usually in surprise, but also that it is to be sought and cultivated, "provided only that it be the weighty and noble metal itself, and no counterfeit nor paper substitute"; in "Issues of Pragmaticism", The Monist, v. XV, n. 4, pp. 481-99, see p. 484, and p. 491. (Reprinted in Collected Papers v. 5, paragraphs 438-63, see 443 and 451).
- Peirce (1898), "Philosophy and the Conduct of Life", Lecture 1 of the Cambridge (MA) Conferences Lectures, published in Collected Papers v. 1, paragraphs 616-48 in part and in Reasoning and the Logic of Things, Ketner (ed., intro.) and Putnam (intro., comm.), pp. 105-22, reprinted in Essential Peirce v. 2, pp. 27-41.
- " ... in order to learn, one must desire to learn ..."—Peirce (1899), "F.R.L." [First Rule of Logic], Collected Papers v. 1, paragraphs 135-40, Eprint
- Peirce (1877), "How to Make Our Ideas Clear", Popular Science Monthly, v. 12, pp. 286–302. Reprinted often, including Collected Papers v. 5, paragraphs 388–410, Essential Peirce v. 1, pp. 124–41. ArisbeEprint. Wikisource Eprint.
- Peirce (1868), "Some Consequences of Four Incapacities", Journal of Speculative Philosophy v. 2, n. 3, pp. 140–57. Reprinted Collected Papers v. 5, paragraphs 264–317, The Essential Peirce v. 1, pp. 28–55, and elsewhere. Arisbe Eprint
- Peirce (1878), "The Doctrine of Chances", Popular Science Monthly v. 12, pp. 604-15, see pp. 610-11 via Internet Archive. Reprinted Collected Papers v. 2, paragraphs 645-68, Essential Peirce v. 1, pp. 142-54. "...death makes the number of our risks, the number of our inferences, finite, and so makes their mean result uncertain. The very idea of probability and of reasoning rests on the assumption that this number is indefinitely great. .... ...logicality inexorably requires that our interests shall not be limited. .... Logic is rooted in the social principle."
- Peirce (c. 1906), "PAP (Prolegomena for an Apology to Pragmatism)" (Manuscript 293, not the like-named article), The New Elements of Mathematics (NEM) 4:319-320, see first quote under "Abduction" at Commens Dictionary of Peirce's Terms.
- Peirce, Carnegie application (L75, 1902), New Elements of Mathematics v. 4, pp. 37-38:
For it is not sufficient that a hypothesis should be a justifiable one. Any hypothesis which explains the facts is justified critically. But among justifiable hypotheses we have to select that one which is suitable for being tested by experiment.
- Peirce (1902), Carnegie application, see MS L75.329-330, from Draft D of Memoir 27:
Consequently, to discover is simply to expedite an event that would occur sooner or later, if we had not troubled ourselves to make the discovery. Consequently, the art of discovery is purely a question of economics. The economics of research is, so far as logic is concerned, the leading doctrine with reference to the art of discovery. Consequently, the conduct of abduction, which is chiefly a question of heuretic and is the first question of heuretic, is to be governed by economical considerations.
- Peirce (1903), "Pragmatism — The Logic of Abduction", Collected Papers v. 5, paragraphs 195-205, especially 196. Eprint.
- Peirce, "On the Logic of Drawing Ancient History from Documents", Essential Peirce v. 2, see pp. 107-9. On Twenty Questions, p. 109:
Thus, twenty skillful hypotheses will ascertain what 200,000 stupid ones might fail to do.
- Peirce (1878), "The Probability of Induction", Popular Science Monthly, v. 12, pp. 705-18, see 718 Google Books; 718 via Internet Archive. Reprinted often, including (Collected Papers v. 2, paragraphs 669-93), (The Essential Peirce v. 1, pp. 155-69).
- Peirce (1905 draft "G" of "A Neglected Argument"), "Crude, Quantitative, and Qualitative Induction", Collected Papers v. 2, paragraphs 755–760, see 759. Find under "Induction" at Commens Dictionary of Peirce's Terms.
- Brown, C. (2005) Overcoming Barriers to Use of Promising Research Among Elite Middle East Policy Groups, Journal of Social Behaviour and Personality, Select Press.
- Hanson, Norwood (1958), Patterns of Discovery, Cambridge University Press, ISBN 0-521-05197-5
- Kuhn 1962, p. 113 ISBN 978-1-4432-5544-8
- Feyerabend, Paul K (1960) "Patterns of Discovery" The Philosophical Review (1960) vol. 69 (2) pp. 247-252
- Kuhn, Thomas S., "The Function of Measurement in Modern Physical Science", ISIS 52(2), 161–193, 1961.
- Feyerabend, Paul K., Against Method, Outline of an Anarchistic Theory of Knowledge, 1st published, 1975. Reprinted, Verso, London, UK, 1978.
- Higher Superstition: The Academic Left and Its Quarrels with Science, The Johns Hopkins University Press, 1997
- Fashionable Nonsense: Postmodern Intellectuals' Abuse of Science, Picador; 1st Picador USA Pbk. Ed edition, 1999
- The Sokal Hoax: The Sham That Shook the Academy, University of Nebraska Press, 2000 ISBN 0-8032-7995-7
- A House Built on Sand: Exposing Postmodernist Myths About Science, Oxford University Press, 2000
- Intellectual Impostures, Economist Books, 2003
- Dunbar, K., & Fugelsang, J. (2005). Causal thinking in science: How scientists and students interpret the unexpected. In M. E. Gorman, R. D. Tweney, D. Gooding & A. Kincannon (Eds.), Scientific and Technical Thinking (pp. 57-79). Mahwah, NJ: Lawrence Erlbaum Associates.
- Oliver, J.E. (1991) Ch2. of The incomplete guide to the art of discovery. New York:NY, Columbia University Press.
- Riccardo Pozzo (2004) The impact of Aristotelianism on modern philosophy. CUA Press. p.41. ISBN 0-8132-1347-9
- The ancient Egyptians observed that heliacal rising of a certain star, Sothis (Greek for Sopdet (Egyptian), known to the West as Sirius), marked the annual flooding of the Nile river. See Neugebauer, Otto (1969) , The Exact Sciences in Antiquity (2 ed.), Dover Publications, ISBN 978-0-486-22332-2, p.82, and also the 1911 Britannica, "Egypt".
- The Rhind papyrus lists practical examples in arithmetic and geometry —1911 Britannica, "Egypt".
- The Ebers papyrus lists some of the 'mysteries of the physician', as cited in the 1911 Britannica, "Egypt"
- R. L. Verma (1969). Al-Hazen: father of modern optics.
- Niccolò Leoniceno (1509), De Plinii et aliorum erroribus liber apud Ferrara, as cited by Sanches, Limbrick & Thomson 1988, p. 13
- 'I have sometimes seen a verbose quibbler attempting to persuade some ignorant person that white was black; to which the latter replied, "I do not understand your reasoning, since I have not studied as much as you have; yet I honestly believe that white differs from black. But pray go on refuting me for just as long as you like." '— Sanches, Limbrick & Thomson 1988, p. 276
- Sanches, Limbrick & Thomson 1988, p. 278.
- Bacon, Francis Novum Organum (The New Organon), 1620. Bacon's work described many of the accepted principles, underscoring the importance of empirical results, data gathering and experiment. Encyclopaedia Britannica (1911), "Bacon, Francis" states: [In Novum Organum, we ] "proceed to apply what is perhaps the most valuable part of the Baconian method, the process of exclusion or rejection. This elimination of the non-essential, ..., is the most important of Bacon's contributions to the logic of induction, and that in which, as he repeatedly says, his method differs from all previous philosophies."
- "John Stuart Mill (Stanford Encyclopedia of Philosophy)". plato.stanford.edu. Retrieved 2009-07-31.
- Gauch 2003, pp. 52–53
- George Sampson (1970). The concise Cambridge history of English literature. Cambridge University Press. p.174. ISBN 0-521-09581-6
- Logik der Forschung, new appendices *XVII–*XIX (not yet available in the English edition Logic of scientific discovery)
- Logic of Scientific discovery, p. 20
- Karl Popper: On the non-existence of scientific method. Realism and the Aim of Science (1983)
- Karl Popper: Science: Conjectures and Refutations. Conjectures and Refutations, section VII
- Karl Popper: On knowledge. In search of a better world, section II
- "The historian ... requires a very broad definition of "science" — one that ... will help us to understand the modern scientific enterprise. We need to be broad and inclusive, rather than narrow and exclusive ... and we should expect that the farther back we go [in time] the broader we will need to be." — David Pingree (1992), "Hellenophilia versus the History of Science" Isis 83 554-63, as cited on p.3, David C. Lindberg (2007), The beginnings of Western science: the European Scientific tradition in philosophical, religious, and institutional context, Second ed. Chicago: Univ. of Chicago Press ISBN 978-0-226-48205-7
- "When we are working intensively, we feel keenly the progress of our work; we are elated when our progress is rapid, we are depressed when it is slow." — the mathematician Pólya 1957, p. 131 in the section on 'Modern heuristic'.
- "Philosophy [i.e., physics] is written in this grand book--I mean the universe--which stands continually open to our gaze, but it cannot be understood unless one first learns to comprehend the language and interpret the characters in which it is written. It is written in the language of mathematics, and its characters are triangles, circles, and other geometrical figures, without which it is humanly impossible to understand a single word of it; without these, one is wandering around in a dark labyrinth." —Galileo Galilei, Il Saggiatore (The Assayer, 1623), as translated by Stillman Drake (1957), Discoveries and Opinions of Galileo pp. 237-8, as quoted by di Francia 1981, p. 10.
- Pólya 1957 2nd ed.
- George Pólya (1954), Mathematics and Plausible Reasoning Volume I: Induction and Analogy in Mathematics,
- George Pólya (1954), Mathematics and Plausible Reasoning Volume II: Patterns of Plausible Reasoning.
- Pólya 1957, p. 142
- Pólya 1957, p. 144
- Mackay 1991 p.100
- See the development, by generations of mathematicians, of Euler's formula for polyhedra as documented by Lakatos, Imre (1976), Proofs and refutations, Cambridge: Cambridge University Press, ISBN 0-521-29038-4
- Born, Max (1949), Natural Philosophy of Cause and Chance, Peter Smith, also published by Dover, 1964. From the Waynflete Lectures, 1948. On the web. N.B.: the web version does not have the 3 addenda by Born, 1950, 1964, in which he notes that all knowledge is subjective. Born then proposes a solution in Appendix 3 (1964)
- Brody, Thomas A. (1993), The Philosophy Behind Physics, Springer Verlag, ISBN 0-387-55914-0. (Luis De La Peña and Peter E. Hodgson, eds.)
- Bruno, Leonard C. (1989), The Landmarks of Science, ISBN 0-8160-2137-6
- Bynum, W.F.; Porter, Roy (2005), Oxford Dictionary of Scientific Quotations, Oxford, ISBN 0-19-858409-1.
- di Francia, G. Toraldo (1981), The Investigation of the Physical World, Cambridge University Press, ISBN 0-521-29925-X.
- Einstein, Albert; Infeld, Leopold (1938), The Evolution of Physics: from early concepts to relativity and quanta, New York: Simon and Schuster, ISBN 0-671-20156-5
- Feynman, Richard (1965), The Character of Physical Law, Cambridge: M.I.T. Press, ISBN 0-262-56003-8.
- Fleck, Ludwik (1979), Genesis and Development of a Scientific Fact, Univ. of Chicago, ISBN 0-226-25325-2. (written in German, 1935, Entstehung und Entwickelung einer wissenschaftlichen Tatsache: Einführung in die Lehre vom Denkstil und Denkkollectiv) English translation, 1979
- Galileo (1638), Two New Sciences, Leiden: Lodewijk Elzevir, ISBN 0-486-60099-8 Translated from Italian to English in 1914 by Henry Crew and Alfonso de Salvio. Introduction by Antonio Favaro. xxv+300 pages, index. New York: Macmillan, with later reprintings by Dover.
- Gauch, Hugh G., Jr. (2003), Scientific Method in Practice, Cambridge University Press, ISBN 0-521-01708-4 435 pages
- Glen, William (ed.) (1994), The Mass-Extinction Debates: How Science Works in a Crisis, Stanford, CA: Stanford University Press, ISBN 0-8047-2285-4.
- Godfrey-Smith, Peter (2003), Theory and Reality: An introduction to the philosophy of science, University of Chicago Press, ISBN 0-226-30063-3.
- Goldhaber, Alfred Scharff; Nieto, Michael Martin (January–March 2010), "Photon and graviton mass limits", Rev. Mod. Phys. (American Physical Society) 82: 939, doi:10.1103/RevModPhys.82.939. pages 939-979.
- Jevons, William Stanley (1874), The Principles of Science: A Treatise on Logic and Scientific Method, Dover Publications, ISBN 1-4304-8775-5. 1877, 1879. Reprinted with a foreword by Ernst Nagel, New York, NY, 1958.
- Kuhn, Thomas S. (1962), The Structure of Scientific Revolutions, Chicago, IL: University of Chicago Press. 2nd edition 1970. 3rd edition 1996.
- Lindberg, David C. (2007), The Beginnings of Western Science, University of Chicago Press 2nd edition 2007.
- Mackay, Alan L. (ed.) (1991), Dictionary of Scientific Quotations, London: IOP Publishing Ltd, ISBN 0-7503-0106-6
- McElheny, Victor K. (2004), Watson & DNA: Making a scientific revolution, Basic Books, ISBN 0-7382-0866-3.
- Moulton, Forest Ray; Schifferes, Justus J. (eds., Second Edition) (1960), The Autobiography of Science, Doubleday.
- Needham, Joseph; Wang, Ling (王玲) (1954), Science and Civilisation in China, 1 Introductory Orientations, Cambridge University Press
- Newton, Isaac (1687, 1713, 1726), Philosophiae Naturalis Principia Mathematica, University of California Press, ISBN 0-520-08817-4, Third edition. From I. Bernard Cohen and Anne Whitman's 1999 translation, 974 pages.
- Ørsted, Hans Christian (1997), Selected Scientific Works of Hans Christian Ørsted, Princeton, ISBN 0-691-04334-5. Translated to English by Karen Jelved, Andrew D. Jackson, and Ole Knudsen, (translators 1997).
- Peirce, C. S. — see Charles Sanders Peirce bibliography.
- Poincaré, Henri (1905), Science and Hypothesis Eprint
- Pólya, George (1957), How to Solve It, Princeton University Press, ISBN 0-691-08097-6
- Popper, Karl R., The Logic of Scientific Discovery, 1934, 1959.
- Sambursky, Shmuel (ed.) (1974), Physical Thought from the Presocratics to the Quantum Physicists, Pica Press, ISBN 0-87663-712-8.
- Sanches, Francisco; Limbrick, Elaine. Introduction, Notes, and Bibliography; Thomson, Douglas F.S. Latin text established, annotated, and translated. (1988), That Nothing is Known, Cambridge: Cambridge University Press, ISBN 0-521-35077-8 Critical edition.
- Taleb, Nassim Nicholas (2007), The Black Swan, Random House, ISBN 978-1-4000-6351-2
- Watson, James D. (1968), The Double Helix, New York: Atheneum, Library of Congress card number 68-16217.
- Bauer, Henry H., Scientific Literacy and the Myth of the Scientific Method, University of Illinois Press, Champaign, IL, 1992
- Beveridge, William I. B., The Art of Scientific Investigation, Heinemann, Melbourne, Australia, 1950.
- Bernstein, Richard J., Beyond Objectivism and Relativism: Science, Hermeneutics, and Praxis, University of Pennsylvania Press, Philadelphia, PA, 1983.
- Brody, Baruch A. and Capaldi, Nicholas, Science: Men, Methods, Goals: A Reader: Methods of Physical Science, W. A. Benjamin, 1968
- Brody, Baruch A., and Grandy, Richard E., Readings in the Philosophy of Science, 2nd edition, Prentice Hall, Englewood Cliffs, NJ, 1989.
- Burks, Arthur W., Chance, Cause, Reason — An Inquiry into the Nature of Scientific Evidence, University of Chicago Press, Chicago, IL, 1977.
- Alan Chalmers. What is this thing called science?. Queensland University Press and Open University Press, 1976.
- Crick, Francis (1988), What Mad Pursuit: A Personal View of Scientific Discovery, New York: Basic Books, ISBN 0-465-09137-7.
- Dewey, John, How We Think, D.C. Heath, Lexington, MA, 1910. Reprinted, Prometheus Books, Buffalo, NY, 1991.
- Earman, John (ed.), Inference, Explanation, and Other Frustrations: Essays in the Philosophy of Science, University of California Press, Berkeley & Los Angeles, CA, 1992.
- Fraassen, Bas C. van, The Scientific Image, Oxford University Press, Oxford, UK, 1980.
- Franklin, James (2009), What Science Knows: And How It Knows It, New York: Encounter Books, ISBN 1-59403-207-6.
- Gadamer, Hans-Georg, Reason in the Age of Science, Frederick G. Lawrence (trans.), MIT Press, Cambridge, MA, 1981.
- Giere, Ronald N. (ed.), Cognitive Models of Science, vol. 15 in 'Minnesota Studies in the Philosophy of Science', University of Minnesota Press, Minneapolis, MN, 1992.
- Hacking, Ian, Representing and Intervening, Introductory Topics in the Philosophy of Natural Science, Cambridge University Press, Cambridge, UK, 1983.
- Heisenberg, Werner, Physics and Beyond, Encounters and Conversations, A.J. Pomerans (trans.), Harper and Row, New York, NY 1971, pp. 63–64.
- Holton, Gerald, Thematic Origins of Scientific Thought, Kepler to Einstein, 1st edition 1973, revised edition, Harvard University Press, Cambridge, MA, 1988.
- Kuhn, Thomas S., The Essential Tension, Selected Studies in Scientific Tradition and Change, University of Chicago Press, Chicago, IL, 1977.
- Latour, Bruno, Science in Action, How to Follow Scientists and Engineers through Society, Harvard University Press, Cambridge, MA, 1987.
- Losee, John, A Historical Introduction to the Philosophy of Science, Oxford University Press, Oxford, UK, 1972. 2nd edition, 1980.
- Maxwell, Nicholas, The Comprehensibility of the Universe: A New Conception of Science, Oxford University Press, Oxford, 1998. Paperback 2003.
- McCarty, Maclyn (1985), The Transforming Principle: Discovering that genes are made of DNA, New York: W. W. Norton, pp. 252 , ISBN 0-393-30450-7. Memoir of a researcher in the Avery–MacLeod–McCarty experiment.
- McComas, William F., ed., chapter (PDF, 189 KB) from The Nature of Science in Science Education, pp. 53–70, Kluwer Academic Publishers, Netherlands 1998.
- Misak, Cheryl J., Truth and the End of Inquiry, A Peircean Account of Truth, Oxford University Press, Oxford, UK, 1991.
- Piattelli-Palmarini, Massimo (ed.), Language and Learning, The Debate between Jean Piaget and Noam Chomsky, Harvard University Press, Cambridge, MA, 1980.
- Popper, Karl R., Unended Quest, An Intellectual Autobiography, Open Court, La Salle, IL, 1982.
- Putnam, Hilary, Renewing Philosophy, Harvard University Press, Cambridge, MA, 1992.
- Rorty, Richard, Philosophy and the Mirror of Nature, Princeton University Press, Princeton, NJ, 1979.
- Salmon, Wesley C., Four Decades of Scientific Explanation, University of Minnesota Press, Minneapolis, MN, 1990.
- Shimony, Abner, Search for a Naturalistic World View: Vol. 1, Scientific Method and Epistemology, Vol. 2, Natural Science and Metaphysics, Cambridge University Press, Cambridge, UK, 1993.
- Thagard, Paul, Conceptual Revolutions, Princeton University Press, Princeton, NJ, 1992.
- Ziman, John (2000). Real Science: what it is, and what it means. Cambridge, UK: Cambridge University Press.
- Wikibooks has a book on the topic of: The Scientific Method
- Scientific method at PhilPapers
- Scientific method at the Indiana Philosophy Ontology Project
- An Introduction to Science: Scientific Thinking and a scientific method by Steven D. Schafersman.
- Introduction to the scientific method at the University of Rochester
- Theory-ladenness by Paul Newall at The Galilean Library
- Lecture on Scientific Method by Greg Anderson
- Using the scientific method for designing science fair projects
- SCIENTIFIC METHODS an online book by Richard D. Jarrard
- Richard Feynman on the Key to Science (one minute, three seconds), from the Cornell Lectures.
- Lectures on the Scientific Method by Nick Josh Karean, Kevin Padian, Michael Shermer and Richard Dawkins | http://en.wikipedia.org/wiki/Scientific_research | 13 |
21 | The Knuth-Morris-Pratt algorithm ensures that a string search will never require more than N character comparisons (once some precomputation is performed).
Imagine that we are performing a string search, and the pattern is "AAAB" and the text is "AAAXAAAAA". If we use the brute-force algorithm, then the first test will fail when the "B" in the pattern fails to match the fourth character in the text, which is an "X". At this point, the brute-force algorithm will shift the pattern by one position and start over.
However, this seems silly. In our first attempt to match by brute force, we learned some key information about the text: because we managed to successfully match three characters, we know what those three characters are and can reuse this information. More importantly, we know implicitly what the first two characters are of the next substring that we will try, so there is no need to explicitly check them.
The Knuth-Morris-Pratt string searching algorithm uses this information to reduce the number of times it compares each character in the text to a character in the pattern. If properly implemented, the Knuth-Morris-Pratt algorithm only looks at each character in the text once.
The Knuth-Morris-Pratt algorithm, stated in terms of the brute-force algorithm, is given in algorithm 8.3.
The basic form of the matching algorithm is very similar to the brute-force algorithm- except that instead of always incrementing i and setting j back to zero at the end of every failed substring test, we set j back only as far as the beginning of the possible match, and increment i by the corresponding amount.
The difficulty is in figuring out how far to skip when a mismatch occurs. The trick is to precompute a table of skips for each prefix ahead of time, so that in the second step of the algorithm we know immediately how much to increment i and decrement j. Note that this precomputation is based entirely upon P, so the cost of this precomputation is not related to the length of the text.
Before writing the code, however, we must clarify exactly what this table of skips will contain. If we have successfully matched x characters in the text against x characters in the pattern, and a mismatch is detected at the next character, how far should we skip? The goal is to skip as far as possible without missing any potential matches.
This turns out to be the length of the longest prefix of fewer than x characters in the pattern that also appears as a suffix of the first x characters in the pattern. If there is no such prefix, then we know that we can skip over x characters.
It is extremely important to note that the text itself is not considered- if we have already matched x characters in the text, then we know what that text is in terms of the pattern.
If x = 1, then the pattern is immaterial- the skip must always be at least one (or else we won't ever make any progress), so we abandon the matched character, skip over it, and begin again at the mismatch.
A table of the skips for several short strings is shown in figure 8.7.
Figure 8.7: The skipArray for several patterns. The Prefix/Suffix column shows the substring that is both a prefix and suffix of the first j characters of the pattern. A Prefix/Suffix of nil denotes the empty string.
It is sometimes difficult to think concretely about this sort of problem, because humans have an enormous innate pattern-matching ability, and therefore we have the ability to use this kind of algorithm without really understanding it! Algorithms that make a great deal of intuitive sense can sometimes be difficult to implement elegantly (or even correctly). Figure 8.8 shows code that computes the skip array for a pattern.
Figure 8.8: Code to compute the prefix lengths for the Knuth-Morris-Pratt algorithm.
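The code from figure 8.8 is not reproduced in this text-only version. As a rough sketch only, a C routine along the following lines would compute the table just described; the names computeSkipArray, pattern, patLen and skipArray are illustrative and need not match the original figure, and this version stores the prefix/suffix length itself rather than a shift distance (the two are interchangeable, since the shift after x matched characters is simply x minus the stored length).

void computeSkipArray(const char *pattern, int patLen, int *skipArray)
{
    /* skipArray[x] (1 <= x <= patLen) is the length of the longest
     * proper prefix of the first x pattern characters that is also a
     * suffix of them.  skipArray must have room for patLen + 1 entries. */
    int x;
    int k = 0;                  /* current prefix/suffix length */

    skipArray[0] = 0;
    if (patLen > 0)
        skipArray[1] = 0;       /* a single character has no proper prefix */

    for (x = 2; x <= patLen; x++) {
        while (k > 0 && pattern[k] != pattern[x - 1])
            k = skipArray[k];   /* fall back to a shorter prefix */
        if (pattern[k] == pattern[x - 1])
            k++;                /* the prefix/suffix grows by one */
        skipArray[x] = k;
    }
}

For the pattern "AAAB" used earlier this produces the lengths 0, 1, 2 and 0 for x = 1 through 4, and for "ACAC" it produces 0, 0, 1 and 2, matching the rule stated above.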
Once the skipArray is computed, however, the code that actually implements the algorithm is very straightforward. A C implementation of algorithm 8.3 is shown in figure 8.9.
Figure 8.9: The Knuth-Morris-Pratt algorithm, implemented in C.
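Since the figure itself is missing here, the following is one plausible shape for such an implementation; it is a sketch, not necessarily the code of figure 8.9, and the name kmpSearch is invented for this sketch. Note that in this version i indexes the text directly rather than recording how far the pattern has been shifted (as in the statement of algorithm 8.3), but the effect is the same: i never moves backwards, and after a mismatch j is set back only as far as the known-good prefix length.

int kmpSearch(const char *text, int textLen,
              const char *pattern, int patLen, const int *skipArray)
{
    /* Return the index in text of the first occurrence of pattern, or
     * -1 if there is none.  skipArray must already have been filled in
     * by computeSkipArray. */
    int i = 0;          /* current position in the text */
    int j = 0;          /* number of pattern characters matched so far */

    if (patLen == 0)
        return 0;

    while (i < textLen) {
        if (text[i] == pattern[j]) {
            i++;
            j++;
            if (j == patLen)
                return i - patLen;      /* complete match found */
        }
        else if (j > 0) {
            j = skipArray[j];           /* keep the reusable prefix; text[i] is re-tested */
        }
        else {
            i++;                        /* nothing matched here; move on in the text */
        }
    }
    return -1;
}

The branch that resets j without advancing i is exactly where a mismatched text character gets examined a second time, which is the point taken up in the next paragraph.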
The careful reader will note that this algorithm does not fulfill the promise that we will look at each character in the text only once. Characters in the midst of a match will only be looked at once, and characters that fail immediately to match the first character in the pattern are looked at only once, but we will look at mismatched characters twice. When a match fails somewhere in the midst of a string, the character that caused the failure is tested again in the next attempt. Although we know what this character is and can make use of this information, the only information we are using is what it isn't- it isn't a match.
However, although we may have to look at some characters more than once, the algorithm looks at each character at most twice and so is O(N) (see the exercises for more detail).
We can improve upon the performance of our implementation by extending our table to refine our information about the prefix lengths to include the character that caused the mismatch. This can result in a much larger table, since instead of a one-dimensional table indexed by the number of matching characters, we now need to have a two-dimensional array indexed by both the number of matching characters and the identity of the character that caused the mismatch. Although the table is often sparse, and can be represented in relatively little space, this optimization adds considerably to the memory requirements and/or complexity of the implementation. Whether the added performance is worth the extra memory and complexity is a design decision.
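As an illustration of the kind of two-dimensional table just described, the sketch below (the names computeSkip2, skip2 and ALPHABET are invented here, and a 7-bit ASCII alphabet is assumed) builds the table densely for every possible character, not just the mismatching ones; the sparse representation mentioned above would be more economical but also more involved.

#define ALPHABET 128            /* assume 7-bit ASCII text */

void computeSkip2(const char *pattern, int patLen,
                  const int *skipArray, int skip2[][ALPHABET])
{
    /* skip2[j][c] is the number of pattern characters considered
     * matched after reading character c when j characters had matched
     * before it.  skipArray is the one-dimensional table from above. */
    int j, c;

    for (j = 0; j < patLen; j++) {
        for (c = 0; c < ALPHABET; c++) {
            if (c == (unsigned char) pattern[j])
                skip2[j][c] = j + 1;                    /* match: extend */
            else if (j == 0)
                skip2[j][c] = 0;                        /* nothing to reuse */
            else
                skip2[j][c] = skip2[skipArray[j]][c];   /* reuse a shorter prefix */
        }
    }
}

With such a table the search loop collapses to j = skip2[j][text[i]] followed by a test for j == patLen, so each text character really is examined exactly once; the cost is the extra memory and complexity already noted.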
This section illustrates the Knuth-Morris-Pratt algorithm with an example.
Let us begin with an example where the pattern is ACAC and the text is ABACABACAC. In figure 8.10, j and i are shown, along with the pattern shifted by i.
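Figure 8.10 is not included in this text-only version, but the example can be reproduced with the two sketches given earlier, assuming they appear above the following driver in the same file; the first match of ACAC in ABACABACAC begins at index 6 (counting from zero).

#include <stdio.h>
#include <string.h>

int main(void)
{
    const char *text    = "ABACABACAC";
    const char *pattern = "ACAC";
    int patLen = (int) strlen(pattern);
    int skipArray[8];                   /* needs at least patLen + 1 entries */

    computeSkipArray(pattern, patLen, skipArray);
    printf("match at index %d\n",
           kmpSearch(text, (int) strlen(text), pattern, patLen, skipArray));
    /* prints: match at index 6 */
    return 0;
}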
Figure 8.10: An illustration of the Knuth-Morris-Pratt algorithm. | http://ellard.org/dan/www/Q-97/HTML/root/node44.html | 13 |
188 | Chapter 14 Control and conduct of debate
The term ‘debate’ is a technical one meaning the argument for and against a question. The proceedings between a Member moving a motion and the ascertainment by the Chair of the decision of the House constitute a debate. A decision may be reached without debate. In addition, many speeches by Members which are part of the normal routine of the House are excluded from the definition of debate, because there is no motion before the House. These include the asking and answering of questions, ministerial statements, matters of public importance, and personal explanations. However, the word ‘debate’ is often used more loosely, to cover all words spoken by Members during House proceedings.
It is by debate that the House performs one of its more important roles, as emphasised by Redlich:
Without speech the various forms and institutions of parliamentary machinery are destitute of importance and meaning. Speech unites them into an organic whole and gives to parliamentary action self-consciousness and purpose. By speech and reply expression and reality are given to all the individualities and political forces brought by popular election into the representative assembly. Speaking alone can interpret and bring out the constitutional aims for which the activity of parliament is set in motion, whether they are those of the Government or those which are formed in the midst of the representative assembly. It is in the clash of speech upon speech that national aspirations and public opinion influence these aims, reinforce or counteract their strength. Whatever may be the constitutional and political powers of a parliament, government by means of a parliament is bound to trust to speech for its driving power, to use it as the main form of its action.1
The effectiveness of the debating process in Parliament has been seen as very much dependent on the principle of freedom of speech. Freedom of speech in the Parliament is guaranteed by the Constitution,2 and derives ultimately from the United Kingdom Bill of Rights of 1688.3 The privilege of freedom of speech was won by the British Parliament only after a long struggle to gain freedom of action from all influence of the Crown, courts of law and Government. As Redlich said:
. . . it was never a fight for an absolute right to unbridled oratory . . . From the earliest days there was always strict domestic discipline in the House and strict rules as to speaking were always enforced . . . the principle of parliamentary freedom of speech is far from being a claim of irresponsibility for members; it asserts a responsibility exclusively to the House where a member sits, and implies that this responsibility is really brought home by the House which is charged with enforcing it.4
The Speaker plays an important role in the control and conduct of debate through the power and responsibilities vested in the Chair by the House in its rules and practice. The difficulties of maintaining control of debate, and reconciling the need for order with the rights of Members, ‘requires a conduct, on the part of the Speaker, full of resolution, yet of delicacy . . .’.5
Manner and right of speech
When Members may speak
A Member may speak to any question before the Chair which is open to debate, when moving a motion which will be open to debate, and when moving an amendment.
A Member may speak during a discussion of a matter of public importance; he or she may make a statement to the House on the presentation of a committee or delegation report, during the periods for Members’ 90 second statements in the House and three minute statements in the Main Committee, and when introducing a private Member’s bill—in none of these instances is there a question before the Chair.
A Member may also speak when asking or answering a question, when raising a point of order or on a matter of privilege, to explain matters of a personal nature, to explain some material part of his or her speech which has been misquoted or misunderstood, when granted leave of the House to make a statement, and by indulgence of the Chair.
Matters not open to debate
Pursuant to standing order 78, the following questions and motions are not open to debate, must be moved without comment and must be put immediately and resolved without amendment:
- motion that a Member’s time be extended (S.O. 1);
- motion that the business of the day be called on (S.O. 46(e));
- motion that a Member be heard now (S.O. 65(c));
- motion that a Member be further heard (S.O. 75(b));
- motion that debate be adjourned (S.O. 79);
- motion that a Member be no longer heard (S.O. 80);
- motion that the question be now put (S.O. 81);
- question that the bill or motion be considered urgent, following a declaration of urgency (S.O.s 82–83);
- motion that a Member be suspended (S.O. 94);
- question that amendments made by the Main Committee be agreed to (S.O. 153);
- question that a bill reported from the Main Committee be agreed to (S.O. 153);
- motion that further proceedings on a bill be conducted in the House (S.O. 197); and
- question in the Main Committee that a bill be reported to the House (S.O. 198).
In addition:
- if required by a Minister, the question for the adjournment of the House under the automatic adjournment provisions must be put immediately and without debate (S.O. 31(c)); and
- if required by a Member, the question for the adjournment of the Main Committee must be put immediately and without debate (S.O. 191(b)).
Mover and seconder of motions and amendments
A Member may speak when moving a motion which is open to debate but loses the right to speak to the motion, except in reply, if he or she does not speak immediately. Similarly, a Member who moves an amendment must speak to it immediately, if wishing to speak to it at all. This rule does not apply during the consideration in detail stage of bills or during the consideration of Senate amendments and requests.
A Member who seconds a motion or amendment before the House may speak to it immediately or at a later period during the debate.6 It is common practice for seconders not wishing to speak immediately to state that they reserve the right to speak later. However, such action does not ensure that a Member will be able to speak later in the debate (if, for example, the debate is limited by time, or curtailed by the closure).
Question on motion or amendment before the House or Main Committee
A Member may not speak a second or further time to a question before the House except:
- during consideration in detail of a bill;
- during consideration of amendments to a bill made or requested by the Senate;
- having moved a substantive motion or the second or third reading of a bill, the Member is allowed a reply confined to matters raised during the debate;
- during an adjournment debate, if no other Member rises; or
- to explain some material part of his or her speech which has been misquoted or misunderstood. In making this explanation the Member may not interrupt another Member addressing the House, debate the matter, or introduce any new matter.7
Members may speak for an unlimited number of periods during consideration in detail of a bill or consideration of Senate amendments and requests.8 In special circumstances, a Member may speak again by leave—see below ‘Leave to speak again’.
The general rule that each Member may speak only once to each question places restrictions on Members moving and speaking to amendments (other than during consideration in detail or consideration of Senate amendments and requests). A Member who speaks to a question and then sits without moving an amendment that he or she intended to propose cannot subsequently move the amendment, having already spoken to the question before the House. If a Member has already spoken to a question, or has moved an amendment to it, the Member may not be called to move a further amendment or the adjournment of the debate, but may speak to any further amendment which is proposed by another Member. A Member who moves or seconds an amendment cannot speak again on the original question after the amendment has been disposed of, because he or she has already spoken while the original question was before the Chair and before the question on the amendment has been proposed. When an amendment has been moved, and the question on the amendment proposed by the Chair, any Member speaking subsequently is considered to be speaking to both the original question and the amendment and cannot speak again to the original question after the amendment has been disposed of. A Member who has already spoken to the original question prior to the moving of an amendment may speak to the question on the amendment, but the remarks must be confined to the amendment.9 A Member who has spoken to neither the motion nor the amendment may speak to the original question after the amendment has been disposed of. A Member who has spoken to the original question and the amendment may speak to the question on a further amendment, but must confine any remarks to the further amendment.10
Leave to speak again
In special circumstances, a Member may be granted leave to speak again.11 This most frequently occurs in a situation where a Member has moved but not spoken to a motion, but wishes to speak at a later time without closing the debate.12 A similar situation sometimes occurs when a Member’s earlier speech has been interrupted and he or she has not been present to continue the speech when the debate has been resumed. Leave to speak again in such cases in effect restores a lost opportunity rather than provides an additional one. The granting of leave to speak again in other circumstances is highly unusual.
The mover of a substantive motion or the second or third reading of a bill may speak on a second occasion in reply, but must confine any remarks to matters raised during the debate.13 The mover of an amendment has no right of reply as an amendment is not a substantive motion. The reply of the mover of the original question closes the debate. However, the mover may speak to any amendment moved without closing the debate, but his or her remarks must be confined to the amendment.14 The speech of a Minister acting on behalf of the mover of the original motion does not close the debate.15 The right of reply of the mover has been exercised even though the original question has been rendered meaningless by the omission of words and the rejection of proposed insertions.16
The Chair has ruled that a reply is permitted to the mover of a motion of dissent from a ruling of the Chair.17
The mover of a motion is not entitled to the call to close the debate while any other Member is seeking the call.18 When a mover received the call and stated that he was not speaking to an amendment before the House but to the motion generally and wished to close debate, he was directed by the Chair to speak to the amendment only, in order that the rights of others to be heard were not interfered with.19 In the absence of such circumstances a Minister speaking after an amendment has been proposed closes the debate.20 A Member closing the debate by reply cannot propose an amendment.21
The mover of a motion may speak a second time but avoid closing a debate by seeking ‘leave to speak again without closing the debate’22 (see above ‘Leave to speak again’). Such action is most appropriate in relation to a motion to take note of a document, which is moved as a vehicle to enable debate rather than with the intention of putting a matter to the House for decision.
A Member may speak again to explain some material part of his or her speech which has been misquoted or misunderstood. In making this explanation the Member may not interrupt another Member addressing the House, debate the matter, or introduce any new matter.23 No debate may arise following such an explanation. The correct procedure to be followed by a Member is to rise after the Member speaking has concluded and to inform the Chair that he or she has been misrepresented. The Chair will then permit the Member to proceed with the explanation. It helps in the conduct of the proceedings if Members notify the Chair in advance that they intend to rise to make an explanation. The Chair will seek to ensure that the Member confines himself or herself to correcting any misrepresentation and will not allow wider matters to be canvassed.
Pursuant to standing order 68, a Member may explain how he or she has been misrepresented or explain another matter of a personal nature whether or not there is a question before the House. The Member seeking to make an explanation must rise and seek permission from the Speaker, must not interrupt another Member who is addressing the House, and the matter must not be debated.
Although in practice the Speaker’s permission is freely given, Members have no right to expect it to be granted automatically.24 It is the practice of the House that any Member wishing to make a personal explanation should inform the Speaker beforehand.25 The Speaker has refused to allow a Member to make a personal explanation when prior notice has not been given.26
Personal explanations may be made at any time with the permission of the Chair, provided that no other Member is addressing the House.27 However, recent practice has been for them to be made soon after Question Time.28 Personal explanations claiming misrepresentation may arise from reports in the media, Senate debates, the preceding Question Time, and so on.29 A Minister has presented a list correcting statements made about him in the Senate, rather than go through all the details orally.30 One of the reasons for personal explanations being sought soon after Question Time is that, when a personal explanation is made in rebuttal of a statement made in a question or answer, the question and answer are excluded from any rebroadcast of Question Time. This exclusion is subject to the discretion that the Speaker has to refer a particular case to the Joint Committee on the Broadcasting of Parliamentary Proceedings.31
The fact that a Member has made a personal explanation about a matter does not prevent another Member from referring to the matter even if, for example, the Member has refuted views attributed to him or her.32
In making a personal explanation, a Member must not debate the matter, and may not deal with matters affecting his or her party or, in the case of a Minister, the affairs of the Minister’s department—the explanation must be confined to matters affecting the Member personally.33 A Member cannot make charges or attacks upon another Member under cover of making a personal explanation.34
A personal explanation may be made in the Main Committee,35 or it may be made in the House regarding events in the Main Committee. In making such an explanation the Member may not reflect on the Chair of the Committee.36 The indulgence granted by the Chair for a personal explanation may be withdrawn if the Member uses that indulgence to enter into a general debate.37 A Member has been permitted to make a personal explanation on behalf of a Member who was overseas.38
A personal explanation is not restricted to matters of misrepresentation. For example, Members have used the procedure to explain an action or remark, apologise to the House, clarify a possible misunderstanding, state why they had voted in a particular way, and correct a statement made in debate.39
If the Speaker refuses permission to a Member to make a personal explanation, or directs a Member to resume his or her seat during the course of an explanation, a motion ‘That the Member be heard now’ is not in order, nor may the Member move a motion of dissent from the Speaker’s ‘ruling’ as there is no ruling.40
Other matters by indulgence of the Chair
Although the standing orders make provision for Members to speak with permission of the Chair only in respect of a matter of a personal nature (see above), the practice of the House is that, from time to time, the Speaker or Chair grants indulgence for Members to deal with a variety of other matters. The term ‘indulgence’, used to cover the concept of permission or leave from the Chair as distinct from leave of the House,41 is a reminder that its exercise is completely at the Chair’s discretion. It is, as the term suggests, a special concession. Indulgence has been granted, for example, to permit:
- A Minister to correct42 or add to43 an earlier answer to a question without notice;
- the Prime Minister to add to an answer given by another Minister to a question without notice;44
- the Prime Minister to answer a question without notice ruled out of order;45
- Members to put their views on a ruling by the Speaker relating to the sub judice convention;46
- Members to comment on a privilege matter;47
- a Member to seek information on a matter not raised in a second reading speech;48
- Members to speak to a document presented by the Speaker;49
- a Minister to correct a figure given in an earlier speech;50
- a Member to comment on or raise a matter concerning the conduct of proceedings or related matters;51
- the Prime Minister and Leader of the Opposition to congratulate athletes representing Australia;52
- the Prime Minister and Leader of the Opposition to welcome visiting foreign dignitaries present in the gallery;53
- the Prime Minister and Leader of the Opposition to pay tribute to a retiring Governor-General;54
- Members to extend good wishes to persons present in the gallery;55
- questions to56 and statements by57 the Leader of the House relating to the order of business, the Government’s legislative program, etc;
- a Member to ask a question of the Speaker or raise a matter for the Speaker’s consideration;58
- Members to comment in the House on the operations of the Main Committee;59
- Members to extend good wishes to a Member about to retire,60 or to comment on significant achievements by colleagues;61
- the Prime Minister and Leader of the Opposition to make valedictory remarks;62 and
- the Prime Minister and Leader of the Opposition to make statements in relation to natural63 or other64 disasters, in tribute to deceased persons,65 or to speak on matters of significance.66
When the Prime Minister makes a statement by indulgence on an issue, the Leader of the Opposition is commonly also granted indulgence to speak on the same matter. On occasion, indulgence may be extended to a series of Members—for example, after a Member has made a statement to the House announcing his intention to resign, other Members have spoken to pay tribute to the Member or offer their best wishes for the future.67
A frequently used practice is to seek the leave of the House—that is, permission without objection from any Member present68—to make a statement when there is no question before the House. This procedure is used, in the main, by Ministers to announce domestic and foreign policies and other actions or decisions of the Government. A period is provided in the order of business for ministerial statements following Question Time and the presentation of documents on Tuesdays, Wednesdays and Thursdays.69 However, Ministers may make statements at other times as well—in all cases leave is required. Leave is also required for a Member to make a statement when presenting a committee or delegation report outside the period set aside for that purpose on Mondays.70
In the case of a ministerial statement, it is usual for a copy of the proposed statement to be supplied to the Leader of the Opposition or the appropriate shadow minister some minimum time before the statement is made. At the conclusion of the Minister’s speech, he or she may present a copy of the statement and a motion ‘That the House take note of the document’ may be moved. The shadow minister or opposition spokesperson may then speak to that motion, with, commonly, standing orders being suspended to permit a speaking time equal to that taken by the Minister. If a motion to take note is not moved it is usual for leave to be given for the opposition spokesperson to speak on the same subject.
Members seeking leave to make statements must indicate the subject matter in order that the House can make a judgment as to whether or not to grant leave. When a Member has digressed from the subject for which leave was granted, the Chair has:
- directed the Member to confine himself to the subject for which leave was granted;71
- directed the Member to resume his or her seat;72 and
- expressed the opinion that a Member should not take advantage of leave granted to make a statement (in response to another) to raise matters that had no direct relationship to that statement.73
If a Member does not indicate the subject matter of a proposed statement when responding to a statement just made, difficulties may arise for the Chair, as the following case exemplifies. A Member had been granted leave to respond to a statement made by a Minister, and the point was made that he should remain relevant to the Minister’s statement. The Chair stated that, whilst it might be argued that in spirit the leave to respond was related to the Minister’s statement, this had not been specifically stated; the Chair therefore had no authority to require the Member to be any more relevant than he saw fit, it being in the hands of the House, through the standing orders, to take the steps necessary to bring the Member’s remarks to a conclusion.74 Greater control over relevancy can be preserved if, where Members rise to seek leave to make statements following, for example, a ministerial statement, the Chair asks ‘Is the honourable Member seeking leave to make a statement on the same matter?’.
A request for leave cannot be debated, nor can leave be granted conditionally, for example, on the condition that another Member is allowed to make a statement on the same subject.
If leave is not granted, a Minister or Member, on receiving the call, may move ‘That so much of the standing (and sessional) orders be suspended as would prevent the Minister for . . . [the Member for . . . ] making a statement’. This motion must be agreed to by an absolute majority of Members. Alternatively, in the case of a Minister, the printed statement may be presented.
The fact that leave is granted or standing orders are suspended to enable a Member to make a statement only affords the Member an opportunity to do that which would not be ordinarily permissible under the standing orders—that is, make a statement without leave. The normal rules of debate, and the provisions of the standing orders generally, still apply so that if, for example, the automatic adjournment interrupts the Member’s speech, the speech is then terminated unless the adjournment proposal is negatived.
A Member cannot be given leave to make a statement on the next day of sitting in reply to a statement just made, but must ask for such leave on the next day of sitting.75 It is not in order for a motion to be moved that a Member ‘have leave to make a statement’76 or, when leave to make a statement is refused, to move that the Member ‘be heard now’,77 as the latter motion can only be moved to challenge the call of the Chair during debate.78 When a statement is made by leave, there is no time limit on the speech,79 but a motion may be made at any time that the Member speaking ‘be no longer heard’.80 Once granted, leave cannot be withdrawn.81
In the House of Commons leave is not required to make a ministerial statement. In 1902 Prime Minister Barton claimed that it was the inherent right of a leader of a Government to make a statement on any public subject without leave of the House. The Speaker ruled that no Minister had such a right under the standing orders of the House of Representatives.82
The requirement for leave has had the practical effect noted above, that traditionally an advance copy of a proposed ministerial statement is supplied to the Opposition, allowing its spokesperson time to prepare a considered response.
Allocation of the call
The Member who moved the motion for the adjournment of a debate is entitled to speak first on the resumption of the debate.83 If the Member does not take up that entitlement on the resumption of the debate, this does not impair his or her right to speak later in the debate.84 However, when a Member is granted leave to continue his or her remarks and the debate is then adjourned, the Member must take the entitlement to pre-audience on the resumption of the debate, otherwise he or she loses the right to continue.
Although the Chair is not obliged to call any particular Member, except for a Member entitled to the first call as indicated above, it is the practice for the Chair, as a matter of courtesy, to give priority to:
- the Prime Minister or a Minister over other government Members85 but not if he or she proposes to speak in reply;86 and
- the leader or deputy leader of opposition parties over other non-government Members.87
A Minister (or Parliamentary Secretary) in charge of business during the consideration in detail of a bill or consideration of Senate amendments (when any Member may speak as many times as he or she wishes) would usually receive priority over other government Members whenever wishing to speak.88 This enables the Minister to explain or comment upon details of the legislation as they arise from time to time in the debate. Speakers have also taken the view that in respect of business such as consideration of Senate messages, the call should, in the first instance, be given to the Minister or Parliamentary Secretary expected to have responsibility for the matter.89
If two or more Members rise to speak, the Speaker calls on the Member who, in the Speaker’s opinion, rose first.90 The Chair’s selection may be challenged by a motion that a Member who was not called ‘be heard now’, and the question must be put immediately and resolved without amendment or debate.91 A Member may move such a motion in respect of himself or herself.92 It is not in order to challenge the Chair’s decision by way of moving that the Member who received the call ‘be no longer heard’.93 A motion of dissent from the Chair’s allocation of the call should not be accepted, as the Chair is exercising a discretion, not making a ruling.
Standing order 78 provides, among other things, that if a motion that a Member be heard now is negatived, no similar proposal shall be received if the Chair is of the opinion that it is an abuse of the orders or forms of the House or is moved for the purpose of obstructing business.94
Although the allocation of the call is a matter for the discretion of the Chair, it is usual, as a principle, to call Members from each side of the House, government and non-government, alternately. Within this principle minor parties and any independents are given reasonable opportunities to express their views.95 Because of coalition arrangements between the Liberal and National Parties, the allocation of the call between them has varied—for example, in the 30th Parliament, with the respective party numbers 68 and 23, the call was allocated on the basis of a 3:1 ratio; in the 38th Parliament, with the party numbers 76 and 18, the ratio was 4:1; and in the 41st Parliament, with the party numbers 75 and 12, the ratio was 6:1. Independent Members have been called with regard to their numbers as a proportion of the House.
Throughout the history of the House of Representatives a list of intending speakers has been maintained to assist the Chair in allocating the call. As early as 1901 the Speaker noted that, although it was not the practice for Members to send names to him and to be called in the order in which they supplied them, on several occasions when a group of Members had risen together and had then informed the Chair that they wished to speak in a certain order, they had been called in that order so that they might know when they were likely to be called on.96
By the 1950s the Chair was allocating the call with the assistance of a list of speakers provided by the party whips. Speaker Cameron saw this as a perfectly logical and very convenient method of conducting debates. He added that, if the lists were not adhered to or Members objected to the practice, the House would revert to a system under which there was no list whatsoever and the Chair would call the Member he thought had first risen in his place. He saw this procedure as awkward, as some Members were more alert than others, and for that reason he thought it better that the Chair be made aware of the intentions of the parties, each party having some idea of their Members best able to deal with particular subjects.97
It is the responsibility of Members listed to speak to follow proceedings in order to ensure that they will be available at the appropriate time. It is discourteous to the Member speaking, and to the Chair and other participants in the debate, for the next speaker to leave his or her entry to the Chamber to the last minute. If no Member rises to speak there can be no pause in proceedings, and the Chair is obliged to put the question before the House to a vote. In practice, the whips or the duty Minister or shadow minister at the Table assume responsibility for chasing up errant speakers from their respective parties,99 and alert the Chair to any changes to the list.
Remarks addressed to Chair
A Member wishing to speak rises and, when recognised by the Speaker, addresses the Speaker.100 If a Member is unable to rise, he or she is permitted to speak while seated.101 It is regarded as disorderly for a Member to address the House in the second person and Members have often been admonished when they have lapsed into this form of address.102 As remarks must be addressed to the Chair, it is not in order for a Member to turn his or her back to the Chair and address party colleagues.103 A Member should not address the listening public while the proceedings of the House are being broadcast.104
Place of speaking
Standing order 65(c) provides that when two or more Members rise to speak the Speaker shall call upon the Member who, in the Speaker’s opinion, rose first, and standing order 62(a) requires every Member, when in the Chamber, to ‘take his or her seat’. The implication is that a Member should address the House from his or her own seat. Ministers and shadow ministers speak from the Table. Parliamentary Secretaries are allowed to speak from the Table when in charge of the business before the House but at other times are required to speak from their allocated places. The same practice applies in respect of opposition Parliamentary Secretaries.105 An opposition Member who is not a member of the opposition shadow ministry but who is leading for the Opposition in a particular debate, is permitted to speak either from his or her allotted seat or from the Table.106
There is no longer a prohibition on Members reading their speeches. Until 1965 the standing orders provided that ‘A Member shall not read his speech’. In 1964, the Standing Orders Committee recommended that:
As Parliamentary practice recognizes and accepts that, whenever there is reason for precision of statement such as on the second reading of a bill, particularly those of a complex or technical nature, or in ministerial or other statements, it is reasonable to allow the reading of speeches and, as the difficulty of applying the rule against the reading of speeches is obvious, e.g. ‘reference to copious notes’, it is proposed to omit the standing order.107
The recommendation of the committee was subsequently adopted by the House.108
Language of debate
Although there is no specific rule set down by standing order, the House follows the practice of requiring Members’ speeches to be in English. Other Members and those listening to proceedings are entitled to be able to follow the course of a debate, and it is unlikely that the Chair would know whether a speech is in order unless it is delivered in English. It is in order, however, for a Member to use or quote phrases or words in another language during the course of a speech.
In 2003 a meeting of the two Houses in the House of Representatives Chamber was addressed by the President of China in Chinese. Members and Senators used headphones to hear the simultaneous translation into English.109
Incorporation of unread material into Hansard
In one form or another the House has always had procedures for the incorporation of unread material into Hansard but there were, until recent years, considerable variations in practice and the Chair from time to time expressed unease at the fact that the practice was allowed and in respect of some of the purposes for which it was used.
Answers to questions in writing are required to be printed in Hansard110 and Budget tables were in the past permitted to be included unread in Hansard.111 The terms of petitions have been incorporated since 1972,112 and the terms of notices not given openly in the House have been included since 1978; in more recent years all notices have been included. The terms of amendments moved are also printed in Hansard, despite the common practice being for Members moving them to refer to previously circulated texts of proposed amendments rather than to read them out in full.
Underlying the attitude of the Chair and the House over the years has been the consistent aim of keeping the Hansard record as a true record of what is said in the House. Early occupants of the Chair saw the practice of including unread matter in Hansard as fraught with danger113 and later Speakers have voiced more specific objections.114 For example, a ‘speech’ may be lengthened beyond a Member’s entitlement under the standing orders, or the incorporated material may contain irrelevant or defamatory matter or unparliamentary language; other Members will not be aware of the contents of the material until production of the daily Hansard next morning when a speech may be discovered to have matter not answered in debate and so appear more authoritative. Similarly, a succeeding Member’s speech may appear to be less relevant and informed than it would have been if he or she had known of the unspoken material before speaking.115
The modern practice of the House on the incorporation of other material, defined by Speakers Snedden and Jenkins in statements on the practice, is based on the premise that Hansard, being as accurate a record as possible of what is said in the House, should not incorporate unspoken material other than items such as tables which need to be seen in visual form for comprehension.116 It is not in order for Members to hand in their speeches as is done in the Congress of the United States of America,117 even when they have been prevented from speaking on a question before the House,118 nor can they have the balance of an unfinished speech incorporated.119 Ministerial statements may not be incorporated,120 nor may Ministers’ second reading speeches121 or explanatory memoranda to bills.122 Matter irrelevant to the question before the House is not permitted to be incorporated.123
Apart from offending against the principle that Hansard is a report of the spoken word, items may also be excluded on technical grounds. Thus, for example, photographs, drawings, tabulated material of excessive length and other documents of a nature or quality not acceptable for printing or which would present technical problems and unduly delay the production of the daily Hansard are not able to be incorporated. In cases where permission has been granted for such an item to be incorporated (usually with the proviso from the Chair that the incorporation would occur only if technically possible), it has been the practice for a note to appear in the Hansard text explaining that the proposed incorporation was omitted for technical reasons. However, in recent years developments in printing technology have made possible the incorporation of a wider range of material—for example, graphs, charts and maps—than was previously the case.
A Minister or Member seeking leave to incorporate material should first show the matter to the Member leading for the Opposition or to the Minister or Parliamentary Secretary at the Table, as the case may be,124 and leave may be refused if this courtesy is not complied with.125 Members must provide a copy of the material they propose to include at the time leave is sought,126 and copies of non-read material intended for incorporation must be lodged with Hansard as early as possible.127
The general rule is not interpreted inflexibly by the Chair. For example, exceptions have been made to enable the incorporation of schedules showing the progress of government responses to committee reports.128 Although other exceptions may be made from time to time, this is not a frequent occurrence and it is the common practice of the Chair in such circumstances to remark on, and justify, the departure from the general rule, or to stress that the action should not be regarded as a precedent. The main category of such exceptions in recent years has been in relation to documents whose incorporation has provided information from the Government to the House.129 Other exceptions have been made to facilitate business of the House,130 or to allow the incorporation of material which in other circumstances could have been incorporated as a matter of routine.131 The contents of a letter stick from Aboriginal peoples of the Northern Territory have been incorporated.132
The House has ordered that matter be incorporated.133 Matter has been authorised to be incorporated by a motion moved pursuant to contingent notice, after leave for incorporation had been refused.134 A motion to allow incorporation has also been moved and agreed to following suspension of standing orders.135
On two occasions in 1979 standing orders were suspended to enable certain documents to be incorporated in Hansard, after leave had been refused.136 This action was procedurally defective. The incorporation of unspoken matter in Hansard is, by practice, authorised by the House by its unanimous consent. The unanimous consent is obtained by asking for leave of the House. If leave is refused the authority of the House can only be obtained by moving a positive motion. In order to move a motion without leave it is necessary to suspend the standing orders. The suspension of standing orders opens the way to move a motion for incorporation; it does not of itself allow incorporation as there is no standing order relating to the incorporation of matter in Hansard.
The fact that the House authorises the incorporation of unread matter does not affect the rule that the final decision rests with the Speaker.
Display of articles to illustrate speeches
Members have been permitted to display articles to illustrate speeches. The Chair has been of the opinion that unless the matter in question had some relation to disloyalty or was against the standing orders the Chair was not in a position to act but hoped that Members would use some judgment and responsibility in their actions.137 However, the general attitude from the Chair has been that visual props are ‘tolerated but not encouraged’.138 In 1980 the Chair ruled that the display of a handwritten sign containing an unparliamentary word by a seated Member was not permitted.139 Since then the Chair has more than once ruled that the displaying of signs was not permitted.140 Scorecards held up following a Member’s speech have been ordered to be removed.141 In 1985 the Speaker ordered a Member to remove two petrol cans he had brought into the Chamber for the purpose of illustrating his speech.142 It is not in order to display a weapon143 or play a tape recorder.144
The wide range of items which have been allowed to be displayed has included items as diverse as a flag,145 photographs and journals,146 plants,147 a gold nugget,148 a bionic ear,149 a silicon chip,150 a flashing marker for air/sea rescue,151 a synthetic quartz crystal,152 superconducting ceramic,153 hemp fibres,154 a heroin ‘cap’,155 a gynaecological instrument,156 a sporting trophy,157 and ugh boots.158 Although newspaper headlines have been displayed for the purpose of illustrating a speech (but not if they contain unparliamentary language),159 more recent practice has been not to permit this.
Citation of documents not before the House
If a Minister quotes from a document relating to public affairs, a Member may ask for it to be presented to the House. The document must be presented unless the Minister states that it is of a confidential nature.160 This rule does not apply to private Members.
A Member may quote from documents not before the House, but the quotation must be relevant to the question before the Chair.161 It is not in order to quote words debarred by the rules of the House.162 It is not necessary for a Member to vouch for the accuracy of a statement in a document quoted from or referred to,163 but a Member quoting certain unestablished facts concerning another Member contained in a report has been ordered not to put those findings in terms of irrefutable facts.164 It is not necessary for a Member to disclose the source of a quotation165 or the name of the author of a letter from which he or she has quoted.166 The Chair has always maintained that Members themselves must accept responsibility for material they use in debate, and there is no need for them to vouch for its authenticity. Whether the material is true or false will be judged according to events, and if a Member uses material of whose origin he or she is unsure, the responsibility rests with the Member.167
Subject to the rules applying to relevance and unparliamentary expressions, it is not within the province of the Chair to judge whether a document declared to be confidential should be restricted in its use in the House. As the matter is not governed by standing orders, it must be left to the good sense and discretion of a Member to determine whether to use material in his or her possession.168 However, the Chair has ruled that confidential documents submitted to Cabinet in a previous Government must, in the public interest, remain entirely confidential.169
Rules governing content of speeches
Relevancy in debate
General principles and exceptions
Of fundamental importance to the conduct of debate in the House is the rule that a Member must speak only on the subject matter of a question under discussion.170 At the same time the standing orders and practice of the House make provision for some important exceptions to this principle when debates of a general nature may take place. These exceptions are:
- on the question for the adjournment of the House to end the sitting, or for the adjournment of the Main Committee;171
- on the debate of the address in reply to the Governor-General’s speech;172
- on the motion for the second reading of the Main Appropriation Bill, and Appropriation or Supply Bills for the ordinary annual services of government, when public affairs may be debated;173 and
- on the question that grievances be noted, a wide debate is permitted.174
The scope of a debate may also be widened by means of an amendment. There may also be a digression from the rule of relevancy during a cognate debate, when two or more items are debated together even though technically only one of the items is the subject of the question before the House.
When two or more related orders of the day are on the Notice Paper,175 it frequently meets the convenience of the House, when debating the first of the orders, to allow reference to the other related orders so that one cognate debate takes place.176 Cognate debates are usually agreed to by the Government and the Opposition as part of the programming process and the orders of the day are then linked accordingly on the Daily Program. The Chair formally seeks the agreement of the House to the proposal when the first of the orders so linked is called on for debate.177 Upon the conclusion of the debate separate questions are then put as required on each of the orders of the day as they are called on.
Almost all cognate debates occur on bills—for further discussion of cognate debate in relation to bills see Chapter on ‘Legislation’. However, motions are on occasion debated cognately. A bill has been debated cognately with a motion to take note of documents on a related subject.178 A cognate debate has taken place on three committee reports on unrelated subjects (by the same committee).179
The purpose of a cognate debate is to save the time of the House, but technically Members may still speak to the questions proposed when the other orders of the day encompassed in the cognate debate are called on.180 However, this action is contrary to the spirit of a cognate debate and is an undesirable practice except in special circumstances, for example, when a Member desires to move an amendment to one of the later cognate orders.
Persistent irrelevance or tedious repetition
The Speaker, after having called attention to the conduct of a Member who has persisted in irrelevance or tedious repetition, either of his or her own arguments or of the arguments used by other Members in debate, may direct the Member to discontinue his or her speech. The Speaker’s action may be challenged by the Member concerned who has the right to ask the Speaker to put the question that he or she be further heard. This question must be put immediately and resolved without debate.181 The action of the Chair in requiring a Member to discontinue a speech cannot be challenged by a motion of dissent from a ruling, as the Chair has not given a ruling but a direction under the standing orders.182 The Chair is the judge of the relevancy or otherwise of remarks and it is the duty of the Chair to require Members to keep their remarks relevant.183 Only the Member who has been directed to discontinue a speech has the right to move that he or she be further heard and must do so before the call is given to another Member.184
On only two occasions has a Member been directed to discontinue a speech on the ground of tedious repetition185 but on a number of occasions on the ground of persistent irrelevance. A Member has been directed to discontinue his speech following persistent irrelevance while moving a motion,186 and in the former committee of the whole (although later the Member took his second turn, under the then prevailing standing orders, to speak to the question).187 On two occasions the direction of the Chair has been successfully challenged by a motion that the Member be further heard.188
Anticipation rule
The so-called anticipation rule involves two standing orders—one applying generally and one applying specifically to questions:
- A Member may not anticipate the discussion of a subject which appears on the Notice Paper. In determining whether a discussion is out of order the Speaker must consider the probability of the anticipated matter being brought before the House within a reasonable time. (S.O. 77);
- Questions must not anticipate discussion on an order of the day or other matter (S.O. 100(f)).
The intention behind the rule is to protect matters which are on the agenda for deliberative consideration and decision by the House from being pre-empted by unscheduled debate. The Speaker’s ‘reasonable time’ discretion is to prevent the rule being used mischievously to block debate on a matter.
The words ‘a subject which appears on the Notice Paper’ are taken as applying only to the business section of the Notice Paper and not to matters listed elsewhere—for example, under questions in writing or as subjects of committee inquiry.
A notice of motion has been held to prevent its subject matter being discussed by means of an amendment to a motion or by means of a matter of public importance. A notice of motion has been withdrawn prior to discussion of a matter of public importance on the same subject.189 The rule has been applied to a personal explanation,190 a motion of censure or no confidence,191 the adjournment debate192 and grievance debate.193 During the course of a grievance debate the Chair has prevented a Member from debating a certain matter because it related to the subject of a notice of motion appearing on the Notice Paper in the Member’s name. On the basis that the notice had only been given three weeks previously, the Chair was not in a position at that stage to determine whether or not the matter would be brought before the House within a reasonable time.194
There has been a tendency in recent years for rulings concerning anticipation to be more relaxed. After a long period of sittings the Notice Paper may contain notices and orders of the day on many aspects of government responsibility, with the result that an overly strict application of the rule could rule out a large proportion of subjects raised in debate, Members’ statements or questions without notice, or topics proposed for discussion as matters of public importance. In a statement relating to matters of public importance Speaker Child, who had at the previous sitting accepted a matter which dealt with a subject covered in legislation listed for debate as an order of the day, indicated that, in her view, the discretion available to the Speaker should be used in a very wide sense.195
In general, the approach taken by the Chair has been that it is not in order while debating a question before the House to go into detailed discussion of other business on the Notice Paper. However, incidental reference is permissible.196 Where the topic of a matter of public importance has been very similar to the subject matter of a bill due for imminent debate, the discussion has been permitted, subject to the proviso that the debate on the bill should not be canvassed,197 or that the bill not be referred to in detail.198
The application of the anticipation rule was reviewed by the Procedure Committee in early 2005. The House adopted the committee’s recommendation that, as a trial for the remainder of the 41st Parliament, standing order 77 be amended to read as follows:
During a debate, a Member may not anticipate the discussion of a subject listed on the Notice Paper and expected to be debated on the same or next sitting day. In determining whether a discussion is out of order the Speaker should not prevent incidental reference to a subject.199
The House also adopted the recommendation that standing order 100(f), applying to the asking of questions, be suspended for the same period. The effect of standing order 100(f) is discussed in the Chapter on ‘Questions’.
Allusion to previous debate or proceedings
Unless the reference is relevant to the discussion, a Member must not refer to debates or proceedings of the current session of the House.200 This rule is not extended to the different stages of a bill. In practice, mere allusion to another debate is rarely objected to. However, debate on a matter already decided by the House should not be reopened. The Chair has stated that the basis of the rule is that, when a subject has been debated and a determination made upon it, it must not be discussed by any means at a later stage.201 The relevant standing order was far more strict in the past, the relevancy proviso being included when permanent standing orders were adopted in 1950. A previous restriction on allusions to speeches made in committee was omitted in 1963 on the recommendation of the Standing Orders Committee ‘as it appeared to be out of date and unnecessarily restrictive’.202
The application of this standing order most often arises when the question before the House is ‘That the House do now adjourn’ or ‘That grievances be noted’. The scope of debate on these questions is very wide ranging and in some instances allusion to previous debate has been allowed,203 although the Chair has sometimes intervened to prevent it.204 Members may be able to overcome the restriction by referring to a subject or issue of concern without alluding to any debate which may have taken place on it. The problem of enforcing the standing order is accentuated by the fact that a session may extend over a three year period.
References to committee proceedings
Members may not disclose in debate evidence taken by any committee of the House or the proceedings and reports of those committees which have not been reported to the House, unless disclosure or publication has been authorised by the House or by the committee or subcommittee.205 Members have thus been prevented from referring to evidence not disclosed to the House or basing statements on matters disclosed to the committee.206 However, Members have, from time to time, made statements on the activities of a committee by leave of the House.207 The Chair has permitted reference in debate to committee proceedings which (although unreported) had been relayed throughout Parliament House on the monitoring system.208
References to the Senate and Senators
Offensive words cannot be used against either the Senate or Senators.209 It is important that the use of offensive words should be immediately reproved in order to avoid complaints and dissension between the two Houses. Leave has been granted to a Member to make a statement in reply to allegations made in the Senate,210 and to make a personal explanation after having been ruled out of order in replying in debate to remarks made about him in the Senate.211
The former restriction on allusion in debate to proceedings of the Senate212 was omitted from the revised standing orders in 2004. The Senate had not had an equivalent standing order for many years.213 As the House Standing Orders Committee observed in 1970, it was probable that the principal reason for the rule was the understanding that the debates of the one House were not known to the other and could therefore not be noticed, but that the daily publication of debates had changed the situation.214
Offensive or disorderly words
Good temper and moderation are the characteristics of parliamentary language. Parliamentary language is never more desirable than when a Member is canvassing the opinions and conduct of his opponents in debate.215
The standing orders contain prohibitions against the use of words which are considered to be offensive (the two Houses of the Parliament, Members and Senators and members of the judiciary being specifically protected—see below).216 The determination as to whether words used in the House are offensive or disorderly rests with the Chair, and the Chair’s judgment depends on the nature of the word and the context in which it is used.
A Member is not allowed to use unparliamentary words by the device of putting them in somebody else’s mouth,217 or in the course of a quotation.218
It is the duty of the Chair to intervene when offensive or disorderly words are used either by the Member addressing the House or any Member present. When attention is drawn to a Member’s conduct (including his or her use of words), the Chair determines whether or not it is offensive or disorderly.219
Once the Chair determines that offensive or disorderly words have been used, the Chair asks that the words be withdrawn. It has been considered that a withdrawal implies an apology220 and need not be followed by an apology unless specifically demanded by the Chair.221 The Chair may ask the Member concerned to explain the sense in which the words were used and upon such explanation the offensive nature of the words may be removed. If there is some uncertainty as to the words complained of, for the sake of clarity, the Chair may ask exactly what words are being questioned. This action avoids confusion and puts the matter clearly before the Chair and Members involved.
The Chair has ruled that any request for the withdrawal of a remark or an allusion considered offensive must come from the Member reflected upon, if present222 and that any request for a withdrawal must be made at the time the remark was made. This latter practice was endorsed by the House in 1974 when it negatived a motion of dissent from a ruling that a request for the withdrawal of a remark should be made at the time the remark was made.223 However, the Speaker has later drawn attention to remarks made and called on a Member to apologise, or to apologise and withdraw.224 Having been asked to withdraw a remark a Member may not do so ‘in deference to the Chair’, must not leave the Chamber225 and must withdraw the remark immediately,226 in a respectful manner,227 unreservedly228 and without conditions229 or qualifications.230 Traditionally Members have been expected to rise in their places to withdraw a remark.231 If a Member refuses to withdraw or prevaricates, the Chair may name the Member for disregarding the authority of the Chair. The Speaker has also directed, in special circumstances, that offensive words be omitted from the Hansard record.232
References to and reflections on Members
In the Chamber and the Main Committee Members may not be referred to by name, but by the name of their electoral division, or by the title of their parliamentary or ministerial office.233 The purpose of this rule is to make debate less personal and avoid the direct confrontation of Members addressing one another as ‘you’.234 A degree of formality helps the House remain more dignified and tolerant when political views clash and passions may be inflamed. However, it is the practice of the House that, when appointments to committees or organisations are announced by the Speaker or a Minister, the name of a Member is used.
Offensive words may not be used against any Member235 and all imputations of improper motives to a Member and all personal reflections on other Members are considered to be highly disorderly.236 The practice of the House, based on that of the House of Commons,237 is that Members can only direct a charge against other Members or reflect upon their character or conduct upon a substantive motion which admits of a distinct vote of the House.238 Although a charge or reflection upon the character or conduct of a Member may be made by substantive motion, in expressing that charge or reflection a Member may not use unparliamentary words.239 This practice does not necessarily preclude the House from discussing the activities of any of its Members.240 It is not in order to use offensive words against, make imputations against, or reflect on another Member by means of a quotation or by putting words in someone else’s mouth.
In judging offensive words the following explanation given by Senator Wood as Acting Deputy President of the Senate in 1955 is a useful guide:
. . . in my interpretation of standing order 418 [similar to House of Representatives standing order 90 in relation to Members], offensive words must be offensive in the true meaning of that word. When a man is in political life it is not offensive that things are said about him politically. Offensive means offensive in some personal way. The same view applies to the meaning of ‘improper motives’ and ‘personal reflections’ as used in the standing order. Here again, when a man is in public life and a member of this Parliament, he takes upon himself the risk of being criticised in a political way.241
It has also been regarded as disorderly to refer to the lack of sobriety of a Member,242 to imitate the voice or manner of a Member243 and to make certain remarks in regard to a Member’s stature244 or physical attributes.245 Although former Members are not protected by the standing orders,246 the Chair has required a statement relating to a former Member to be withdrawn247 and on another occasion has regarded it as most unfair to import into debate certain actions of a Member then deceased.248
May classifies examples of expressions which are unparliamentary and call for prompt interference as:
- the imputation of false or unavowed motives;
- the misrepresentation of the language of another and the accusation of misrepresentation;
- charges of uttering a deliberate falsehood; and
- abusive and insulting language of a nature likely to create disorder.249
Australian Speakers have followed a similar approach. An accusation that a Member has lied or deliberately misled is clearly an imputation of an improper motive. Such words are ruled out of order and Members making them ordered to withdraw their remarks. The deliberate misleading of the House is a serious matter which could be dealt with as a contempt, and a charge that a Member has done so should only be made by way of a substantive motion.250
In accordance with House of Commons practice, for many years it was ruled that remarks which would be held to be offensive, and so required to be withdrawn, when applied to an identifiable Member, did not have to be withdrawn when applied to a group where individual Members could not be identified. This rule was upheld by distinct votes of the House.251 This did not mean, however, that there were no limits to remarks which could be made reflecting on unidentified Members. For example, a statement that it would be unwise to entrust certain unnamed Members with classified information was required to be withdrawn,252 and Speaker Aston stated that exception would be taken to certain charges, the more obvious of which were those of sedition, treason, corruption or deliberate dishonesty.253 Speaker Snedden supported this practice when he required the withdrawal of the term ‘a bunch of traitors’254 and later extended it:
The consequence is that I have ruled that even though such a remark may not be about any specified person the nature of the language [the Government telling lies] is unparliamentary and should not be used at all.255
In the past there has been a ruling that it was not unparliamentary to make an accusation against a group as distinct from an individual. That is not a ruling which I will continue. I think that if an accusation is made against members of the House which, if made against any one of them, would be unparliamentary and offensive, it is in the interests of the comity of this House that it should not be made against all as it could not be made against one. Otherwise, it may become necessary for every member of the group against whom the words are alleged to stand up and personally withdraw himself or herself from the accusation . . . I ask all honourable members to cease using unparliamentary expressions against a group or all members which would be unparliamentary if used against an individual.256
This practice has been followed by succeeding Speakers.
The use of offensive gestures has been deprecated by the Speaker. It would be open to the Speaker to direct a Member to leave the Chamber or to name a Member for such behaviour.257
References to the Queen, the Governor-General and State Governors
A Member must not refer disrespectfully to the Queen, the Governor-General, or a State Governor, in debate or for the purpose of influencing the House in its deliberations.258 According to May the reasons for the rule are:
The irregular use of the Queen’s name to influence a decision of the House is unconstitutional in principle and inconsistent with the independence of Parliament. Where the Crown has a distinct interest in a measure, there is an authorized mode of communicating Her Majesty’s recommendation or consent, through one of her Ministers; but Her Majesty cannot be supposed to have a private opinion, apart from that of her responsible advisers; and any attempt to use her name in debate to influence the judgment of Parliament is immediately checked and censured. This rule extends also to other members of the royal family, but it is not strictly applied when one of its members has made a public statement on a matter of current interest so long as comment is made in appropriate terms.259
Members have been prevented from introducing the name of the sovereign to influence debate,260 canvassing what the sovereign may think of legislation introduced in the Parliament261 and referring to the sovereign in a way intended to influence the reply to a question.262 The rule does not exclude a statement of facts by a Minister concerning the sovereign,263 or debate on the constitutional position of the Crown.
In 1976 Speaker Snedden prohibited in debate any reference casting a reflection upon the Governor-General, unless discussion was based upon a substantive motion drawn in proper terms. He made the following statement to the House based on an assessment of previous rulings:
Some past rulings have been very narrow. It has, for instance, been ruled that the Governor-General must not be either praised or blamed in this chamber and, indeed, that the name of the Governor-General must not be brought into debate at all. I feel such a view is too restrictive. I think honourable members should have reasonable freedom in their remarks. I believe that the forms of the House will be maintained if the Chair permits words of praise or criticism provided such remarks are free of any words which reflect personally on His Excellency or which impute improper motives to him. For instance, to say that in the member’s opinion the Governor-General was right or wrong and give reasons in a dispassionate way for so thinking would in my view be in order. To attribute motive to the Governor-General’s actions would not be in order.264
Some previous rulings have been:
- it is acceptable for a Minister to be questioned regarding matters relating to the public duties for which the Governor-General is responsible, without being critical or reflecting on his conduct;265
- restrictions applying to statements disrespectful to or critical of the conduct of the Governor-General apply equally to the Governor-General designate;266
- reflections must not be cast on past occupants of the position or the office as such;267
- the Governor-General’s name should not be introduced in debate in a manner implying threats;268
- statements critical of and reflecting on the Governor-General’s role in the selection of a Ministry are out of order;269 and
- it is considered as undesirable to introduce into debate the names of the Governor-General’s household.270
Petitions have been presented praying for the House to call on the Governor-General to resign,271 and remarks critical of a Governor-General made in respect of responsibilities he had held before assuming the office, and matters arising from such responsibilities, have been raised.272
Reflections on members of the judiciary
Both standing orders and the practice of the House place certain constraints upon references in debate to members of the judiciary. Under the standing orders a Member may not use offensive words against a member of the judiciary.273 This provision was not included in the standing orders until 1950 but prior to then the practice, based on that of the House of Commons, was that, unless discussion was based upon a substantive motion, reflections could not be cast in debate upon the conduct, including a charge of a personal character, of a member of the judiciary. This practice still continues. Decisions as to whether words are offensive or cast a reflection rest with the Chair.
Rulings of the Chair have been wide ranging on the matter, perhaps the most representative being one given in 1937 that ‘From time immemorial, the practice has been not to allow criticism of the judiciary; the honourable member may discuss the judgments of the court, but not the judges’.274 In defining members of the judiciary, the Chair has included the following:
- a Public Service Arbitrator;275
- an Australian judge who had been appointed to the international judiciary;276
- a Conciliation and Arbitration Commissioner;277 and
The Chair has also ruled that an electoral distribution commission is not a judicial body and that a judge acting as a commissioner is not acting in a judicial capacity.278 When judges lead royal commissions or special commissions, they are exercising executive power, not judicial power, and therefore do not attract the protection of standing order 89. The rule has not prevented criticism of the conduct of a person before becoming a judge.279
Judges are expected, by convention, to refrain from politically partisan activities and to be careful not to take sides in matters of political controversy. If a judge breaks this convention, a Member may feel under no obligation to remain mute on the matter in the House.280
Reflections on the House and votes of the House
The standing orders provide that offensive words may not be used against the House of Representatives.281 It has been considered unbecoming to permit offensive expressions against the character and conduct of the House to be used by a Member without rebuke, as such expressions may serve to degrade the legislature in the eyes of the people. Thus, the use of offensive words against the institution by one of its Members should not be overlooked by the Chair.
A Member must not reflect adversely on a vote of the House, except on a motion that it be rescinded.282 Under this rule a proposed motion of privilege, in relation to the suspension of two Members from the House in one motion, was ruled out of order as the vote could not be reflected upon except for the purpose of moving a rescission motion.283 A Member, speaking to the question that a bill be read a third time, has been ordered not to reflect on votes already taken during consideration of the bill,284 and a Member has been ordered not to canvass decisions of the House of the same session.285 This rule is not interpreted in such a way as to prevent a reasonable expression of views on matters of public concern.
References to other governments and their representatives
Although there is no provision in the standing orders prohibiting opprobrious references to countries with which Australia is in a state of amity or to their leaders, governments or their representatives in Australia, the Chair has intervened to prevent such references being made, on the basis that the House was guided by House of Commons usage286 on the matter.287 However, from time to time, much latitude has been shown by the Chair and on the one occasion when the House has voted on the matter it rejected the proposed inclusion of this rule into the standing orders. In 1962 the Standing Orders Committee recommended amendments to the standing orders to give effect to the House of Commons practice that questions should not contain discourteous references to a friendly country or its representative.288 The House rejected the recommendation.289
In more recent years the Chair has declined to interfere with the terms of a notice of motion asking the House to censure an ambassador to Australia ‘for his arrogant and contemptuous attitude towards Australia and . . . his provocative public statements’.290 A notice of motion asking the House to condemn a diplomatic representative for ‘lying to the Australian public’ has also been allowed to appear on the Notice Paper.291
In 1986 the Procedure Committee recommended that restrictions relating to reflections in debate on governments or heads of governments, other than the Queen or her representatives in Australia, be discontinued.292 In practice, the latitude referred to earlier has continued to be evident, even though the Procedure Committee recommendation has not been acted upon formally.
The standing orders and practice of the House do not prevent a Member from reflecting on a State Government or Member of a State Parliament, no matter how much such a reference may be deprecated by the Chair.293
The sub judice convention
Notwithstanding its fundamental right and duty to consider any matter if it is thought to be in the public interest, the House imposes a restriction on itself in the case of matters awaiting or under adjudication in a court of law. This is known as the sub judice convention. The convention is that, subject to the right of the House to legislate on any matter, matters awaiting adjudication in a court of law should not be brought forward in debate, motions or questions. Having no standing order relating specifically to sub judice matters the House has been guided by its own practice. Regard has also been had to that of the House of Commons as declared by resolutions of that House in 1963 and 1972.294
The origin of the convention appears to have been the desire of Parliament to prevent comment and debate from exerting an influence on juries and from prejudicing the position of parties and witnesses in court proceedings.295 It is by this self-imposed restriction that the House not only prevents its own deliberations from prejudicing the course of justice but prevents reports of its proceedings from being used to do so.
The basic features of the practice of the House of Representatives are as follows:
- The application of the sub judice convention is subject to the discretion of the Chair at all times. The Chair should always have regard to the basic rights and interests of Members in being able to raise and discuss matters of concern in the House. Regard needs to be had to the interests of persons who may be involved in court proceedings and to the separation of responsibilities between the Parliament and the judiciary.
- As a general rule, matters before the criminal courts should not be referred to from the time a person is charged until a sentence, if any, has been announced; and the restrictions should again apply if an appeal is lodged and remain until the appeal is decided.
- As a general rule, matters before civil courts should not be referred to from the time they are set down for trial or otherwise brought before the court and, similarly, the restriction should again be applied from the time an appeal is lodged until the appeal is decided.
- In making decisions as to whether the convention should be invoked in particular cases, the Chair should have regard to the likelihood of prejudice to proceedings being caused as a result of references in the House.296
The convention has also been applied in respect of royal commissions. The key feature is that decisions are made on a case by case basis, in light of the circumstances applying.297 The principal distinctions that have been recognised have been that:
- Matters before royal commissions or other similar bodies which are concerned with the conduct of particular persons should not be referred to in proceedings if, in the opinion of the Chair, there is a likelihood of prejudice being caused as a result of the references in the House.
- Matters before royal commissions or similar bodies dealing with broader issues of national importance should be able to be referred to in proceedings unless, in the opinion of the Chair, there are circumstances which would justify the convention being invoked to restrict reference in the House298 (and see below).
The sub judice convention can also be invoked in respect of committee inquiries, although, having the ability to take evidence in private, committees are able to guard against any risk of prejudice to proceedings as a result of evidence given or the reporting of such evidence by the media. During the Transport, Communications and Infrastructure Committee inquiry into aviation safety in 1994–95, for example, the committee decided that it should not receive evidence in public concerning two particular matters, one being the subject of a coronial inquiry and the other the subject of a judicial inquiry.
Right to legislate and discuss matters
The right of the House to debate and legislate on matters without outside interference or hindrance is self-evident. Circumstances could be such, for example, that the Parliament decides to consider a change to the law to remedy a situation which is before a court or subject to court action.
Discretion of the Chair
The discretion exercised by the Chair must be considered against the background of the inherent right and duty of the House to debate any matter considered to be in the public interest. Freedom of speech is regarded as a fundamental right without which Members would not be able to carry out their duties. Imposed on this freedom is the voluntary restraint of the sub judice convention, which recognises that the courts are the proper place to judge alleged breaches of the law. It is a restraint born out of respect by Parliament for the judicial arm of government, a democratic respect for the rule of law and the proper upholding of the law by fair trial proceedings. Speaker Snedden stated in 1977:
The question of the sub judice rule is difficult. Essentially it remains in the discretion of the presiding officer. Last year I made a statement in which I expanded on the interpretation of the sub judice rule which I would adopt. I was determined that this national Parliament would not silence itself on issues which would be quite competent for people to speak about outside the Parliament. On the other hand, I was anxious that there should be no prejudice whatever to persons faced with criminal action. Prejudice can also occur in cases of civil action. But I was not prepared to allow the mere issue of a writ to stop discussion by the national Parliament of any issues. Therefore I adopted a practice that it would not be until a matter was set down for trial that I would regard the sub judice rule as having arisen and necessarily stifle speeches in this Parliament. There is a stricter application in the matter of criminal proceedings.299
The major area for the exercise of the Chair’s discretion lies in the Chair’s assessment of the likelihood of prejudice to proceedings.
The Select Committee on Procedure of the House of Commons put the following view as to what is implied by the word ‘prejudice’:
In using the word ‘‘prejudice’’ Your Committee intend the word to cover possible effect on the members of the Court, the jury, the witnesses and the parties to any action. The minds of magistrates, assessors, members of a jury and of witnesses might be influenced by reading in the newspapers comment made in the House, prejudicial to the accused in a criminal case or to any of the parties involved in a civil action.300
It is significant that this view did not include judges but referred only to magistrates, as it could be less likely that a judge would be influenced by anything said in the House. In 1976 Speaker Snedden commented:
. . . I am concerned to see that the parties to the court proceedings are not prejudiced in the hearing before the court. That is the whole essence of the sub judice rule; that we not permit anything to occur in this House which will be to the prejudice of litigants before a court. For that reason my attitude towards the sub judice rule is not to interpret the sub judice rule in such a way as to stifle discussion in the national Parliament on issues of national importance. I have so ruled on earlier occasions. That is only the opposite side of the coin to what is involved here. If I believed that in any way the discussion of this motion or the passage of the motion would prejudice the parties before the court, then I would rule the matter sub judice and refuse to allow the motion to go on; but there is a long line of authority from the courts which indicates that the courts and judges of the courts do not regard themselves as such delicate flowers that they are likely to be prejudiced in their decisions by a debate that goes on in this House. I am quite sure that is true, especially in the case of a court of appeal or, if the matter were to go beyond that, the High Court. I do not think those justices would regard themselves as having been influenced by the debate that may occur here.301
The Chair has permitted comments to be made pertaining to a matter subject to an appeal to the High Court, a decision perhaps reflecting the view that High Court judges would be unlikely to be influenced by references in the House.302
The Speaker has allowed a matter of public importance critical of the Government’s handling of an extradition process to be discussed, despite objection from the Attorney-General on sub judice grounds, on the basis that Members refrain from any comment as to the guilt or innocence of the person named in the proposed matter.303
A matter before the courts has been brought before the House as an item of private Member’s business, the Speaker having concluded that the sub judice rule should not be invoked so as to restrict debate.304 It was noted that the matter was a civil one and that a jury was not involved.
Debate relating to the subject matter of a royal commission has been permitted on the grounds that the commissioner would not be in the least influenced by such remarks (and see below).305
Civil or criminal matter
A factor which the Chair must take into account in making a judgment on the application of the sub judice convention is whether the matter is of a criminal or a civil nature. The practice of the House provides for greater caution in the case of criminal matters. First, there is an earlier time for exercising restraint in debate in the House, namely, ‘from the moment a charge is made’ as against ‘from the time the case is set down for trial or otherwise brought before the court’ in the case of a civil matter. In the case of a civil matter it is a sensible provision that the rule should not apply ‘from the time a writ is issued’ as many months can intervene between the issue of a writ and the actual court proceedings. The House should not allow its willingness to curtail debate so as to avoid prejudice to be converted into a curtailment of debate by the issue of a ‘stop writ’, namely, a writ the purpose of which is not to bring the matter to trial but to limit discussion of the issue, a step sometimes taken in defamation and other cases. Secondly, there is the greater weight which should be given to criminal rather than civil proceedings. The use of juries in criminal cases and not in civil matters and the possibility of members of a jury being influenced by House debate are also relevant to the differing attitudes taken as between civil and criminal matters.
Chair’s knowledge of the case
A significant practical difficulty which sometimes faces the Chair when application of the sub judice convention is suggested is a lack of knowledge of the particular court proceeding or at least details of its state of progress. If present in the Chamber, the Attorney-General can sometimes help, but often it is a matter of the Chair using its judgment on the reliability of the information given; for example, the Chair has accepted a Minister’s assurance that a matter was not before a court.306
Matters before royal commissions and other bodies
Although it is clear that royal commissions do not exercise judicial authority, and that persons involved in royal commissions are not on trial in a legal sense, the proceedings have a quasi-judicial character. The findings of a royal commission can have very great significance for individuals, and the view has been taken that in some circumstances the sub judice convention should be applied to royal commissions.
In 1954 Speaker Cameron took the view that he would be failing in his duty if he allowed any discussion of matters which had been deliberately handed to a royal commission for investigation.307 The contemporary view is that a general prohibition of discussion of the proceedings of a royal commission is too broad and restricts the House unduly. It is necessary for the Chair to consider the nature of the inquiry. Where the proceedings are concerned with issues of fact or findings relating to the propriety of the actions of specific persons the House should be restrained in its references.308 Where, however, the proceedings before a royal commission are intended to produce advice as to future policy or legislation they assume a national interest and importance, and restraint of comment in the House cannot be justified. In 1978 Speaker Snedden drew a Member’s attention to the need for restraint in his remarks about the evidence before a royal commission. Debate was centred on a royal commission appointed by the Government to inquire into a sensitive matter relating to an electoral re-distribution in Queensland involving questions of fact and the propriety of actions of Cabinet Ministers and others.309 The Speaker said:
I interrupt the honourable gentleman to say that a Royal Commission is in course. The sub judice rules adopted by the Parliament and by myself are such that I do not believe that the national Parliament should be deprived of the opportunity of debating any major national matter. However, before the honourable gentleman proceeds further with what he proposes to say I indicate to him that in my view if he wishes to say that evidence ABC has been given he is free to do so. The Royal Commissioner would listen to the evidence and make his judgment on the evidence and not on what the honourable gentleman says the evidence was. But I regard it as going beyond the bounds of our sub judice rules if the honourable gentleman puts any construction on the matter for the simple reason that if the Royal Commissioner in fact concluded in a way which was consistent with the honourable gentleman’s construction it may appear that the Commissioner was influenced, whereas in fact he would not have been. So I ask the honourable gentleman not to put constructions on the matter.310
The question as to whether the proceedings before a royal commission are sub judice is therefore treated with some flexibility to allow for variations in the subject matter, the varying degree of national interest and the degree to which proceedings might be or appear to be prejudiced.
The application of the convention became an issue in 1995 in connection with a royal commission appointed by the Government of Western Australia. In this case, although the terms of reference did not identify persons, the Royal Commissioner subsequently outlined issues which included references to the propriety of the actions of a Minister at the time she had been Premier of Western Australia. In allowing Members to continue to refer to the commission’s proceedings, the Speaker referred to the fact that the terms of reference did not require the royal commission to inquire into whether there had been any breach of a law of the Commonwealth, to the fact that the issues had a highly political element, to the publicity already given to the matter and to the purpose of the convention. Nevertheless the Speaker rejected the view that the convention should not continue to be applied to royal commissions, and stated that each case should be judged on its merits.311
When other bodies have a judicial or quasi-judicial function in relation to specific persons the House needs to be conscious of the possibility of prejudicing, or appearing to prejudice, their case. When the judicial function is wider than this—for example, a matter for arbitration or determination by the Industrial Relations Commission—there would generally be no reason for restraint of comment in the House. To disallow debate on such issues would be contrary to one of the most important functions of the House, and the view is held that anything said in the House would be unlikely to influence the commissioners, who make their determinations on the facts as placed before them.
The discretion of the Chair, and the need to recognise the competing considerations, is always at the core of these matters.
Interruptions to Members speaking
A Member may only interrupt another Member to:
- call attention to a point of order;
- call attention to a matter of privilege suddenly arising;
- call attention to the lack of a quorum;
- call attention to the unwanted presence of visitors;
- move that the Member be no longer heard;
- move that the question be now put;
- move that the business of the day be called on; or
- make an intervention as provided in the standing orders.312
Also if the Speaker stands during a debate, any Member then speaking or seeking the call shall sit down and the House shall be silent, so the Speaker may be heard without interruption.313 A Member has been directed to leave the Chamber for an hour for having interjected a second time after having been reminded that the Speaker had risen.314 Members may also be interrupted by the Chair on matters of order and at the expiration of time allotted to debate. It is not in order to interrupt another Member to move a motion, except as outlined above.315
It has not been the practice of the House for Members to ‘give way’ in debate to allow another Member to intervene.316 However, following a Procedure Committee recommendation,317 in September 2002 the House agreed to a trial of a new procedure which permitted Members to intervene in debate in the Main Committee to ask brief questions of the Member speaking.318 The standing order, now adopted permanently, provides as follows:
66A During consideration of any order of the day in the Main Committee a Member may rise and, if given the call, ask the Chair whether the Member speaking is willing to give way. The Member speaking will either indicate his or her:
(a) refusal and continue speaking, or
(b) acceptance and allow the other Member to ask a short question immediately relevant to the Member's speech—
Provided that, if, in the opinion of the Chair, it is an abuse of the orders or forms of the House, the intervention may be denied or curtailed.
In December 2003 the Procedure Committee recommended a further trial involving a five minute period of questions and answers at the end of and in relation to each Member’s speech in second reading debates.319
When a Member is speaking, no Member may converse aloud or make any noise or disturbance to interrupt the Member.320 Should Members wish to refute statements made in debate they have the opportunity to do so when they themselves address the House on the question or, in certain circumstances, by informing the Chair that they have been misrepresented (see p. 483).
In order to facilitate debate the Chair may regard it as wise not to take note of interjections.321 Deputy Speaker Chanter commented in 1920:
I call attention to a rule which is one of the most stringent that we have for the guidance of business [now S.O. 66]. I may say that an ordinary interjection here and there is not usually taken notice of by the Chair, but a constant stream of interjections is decidedly disorderly.322
The Chair, although recognising all interjections as disorderly, has also been of the opinion that it should not interfere as long as they were short and did not interrupt the thread of the speech being delivered.323 The fact that an interjection has been directly invited by the remarks of the Member speaking in no way justifies the interruption of a speech,324 and the Chair has suggested that Members refrain from adopting an interrogatory method of speaking which provokes interjections.325 It is not uncommon for the Chair, when ordering interjectors to desist, to urge the Member speaking to address his or her remarks through the Chair and not to invite or respond to interjections.326 Interjections which are not replied to by the Member with the call or which do not lead to any action or warning by the Chair are not recorded in Hansard.
It may be accepted that, as the House is a place of thrust and parry, the Chair need not necessarily intervene in the ordinary course of debate when an interjection is made. Intervention would be necessary if interjections were, in the opinion of the Chair, too frequent or such as to interrupt the flow of a Member’s speech or were obviously upsetting the Member who had the call. The Chair has a duty to rebuke the person who interjects rather than chastise the Member speaking for replying to an interjection.
Curtailment of speeches and debate
Curtailment of speeches
A speech is terminated when a Member resumes his or her seat at the conclusion of his or her remarks, when the time allowed for a speech under the standing orders expires, or when the House agrees to the question ‘That the Member be no longer heard’. Speeches may also be terminated when the time allotted to a particular debate expires, when the House agrees to the question ‘That the question be now put’, or when the House agrees to a motion ‘That the business of the day be called on’ during discussion of a matter of public importance.
Time limits for speeches
Time limits for speeches in the House were first adopted in 1912.327 Following a recommendation from the Standing Orders Committee that the House adopt a specific standing order limiting the time of speeches,328 the House agreed to a motion that ‘in order to secure the despatch of business and the good government of the Commonwealth’ the standing orders be immediately amended in the direction of placing a time limit on the speeches delivered in the House and in committee.329 The standing order, as amended, is now standing order 1 and, unless the House otherwise orders, time limits now apply to all speeches with the exceptions of the main Appropriation Bill for the year, where there is no time limit for the mover of the second reading and for the Leader of the Opposition or one Member deputed by the Leader of the Opposition when speaking to the second reading.
The House may agree to vary, for a specific purpose, time limits provided by standing order 1. As examples of variations in time limits for speeches on bills see Appropriation Bill (No. 1) 1978–79,330 and a package of bills considered together in 1998 to provide for new taxation arrangements.331 Time limits have also been varied for debate on a motion to suspend standing orders and other debates.332
In relation to committee and private Members’ business on Mondays the Selection Committee may allot lesser speaking times than provided by the standing order (see Chapter on ‘Non-government business’). Time limits do not apply when statements are made by leave of the House.333 It is the practice of the House that time limits are not enforced during debate on motions of condolence or on valedictory speeches made at the end of a period of sittings.
The timing clocks are set according to the times prescribed in the standing orders or other orders of the House, even in cases, not uncommon, where informal agreements have been reached between the parties for shorter speaking times for a particular item.334
The period of time allotted for a Member’s speech is calculated from the moment the Member is given the call335 (unless the call is disputed by a motion under standing order 65(c)336) and includes time taken up by interruptions such as divisions337 (but not suspensions of Main Committee proceedings caused by divisions in the House), quorum calls,338 points of order,339 motions of dissent from rulings of the Chair,340 and proceedings on the naming and suspension of a Member.341 The time allotted is not affected by a suspension of the sitting and the clocks are stopped for the duration of the suspension.
It is not unusual before or during important debates for the standing orders to be suspended to grant extended or unlimited time to Ministers and leading Members of the Opposition.342 Sometimes in these circumstances a simple motion for extension of time may be more suitable.
After the maximum period allowed for a Member’s speech has expired the standing order provides that, with the consent of the majority of the House or Main Committee, the Member may be allowed to continue a speech for one period not exceeding 10 minutes, provided that no extension shall exceed half of the original period allotted.343 The motion that a Member’s time be extended may be moved without notice by the Member concerned or by another Member, and must be put immediately and resolved without amendment.344 An extension of time for a specified period, less than the time provided by the standing order, has been granted on a motion moved by leave.345 It has been held that the granting of a second extension requires the suspension of the standing order,346 but the House has granted leave for a Member to continue his speech in this circumstance.347 The Main Committee cannot suspend standing orders but the Committee may grant leave for the time of a speech to be extended. A Member cannot be granted an extension on the question for the adjournment of the House.348 If there is a division on the question that a Member’s time be extended, the extension of time is calculated from the time the Member is called by the Chair.349 Where a Member’s time expires during the counting of a quorum, after a quorum has been formed a motion may be moved to grant the Member an extension of time.350 Where a Member’s time has expired during more protracted proceedings, standing orders have been suspended, by leave, to grant additional time.351
Despite Selection Committee determinations in relation to private Members’ business, Members have spoken again, by leave,352 or spoken by leave after the time allocated for the debate had expired.353 Similarly, despite Selection Committee determinations on times for the consideration of committee and delegation reports, extensions of time have been granted to Members speaking on these items354 and Members have also been given leave to speak again.
With the exceptions stated below, any Member may move at any time that a Member who is speaking ‘be no longer heard’ and the question must be put immediately and resolved without amendment or debate.355 The standing order was introduced at a time when there were no time limits on speeches and, in moving for its adoption, Prime Minister Deakin said:
The . . . new standing order need rarely, if ever, be used for party purposes, and never, I trust, will its application be dictated by partisan motives.356
The motion cannot be moved when a Member is giving a notice of motion or moving the terms of a motion,357 or if, when the same question has been negatived, the Chair is of the opinion that the further motion is an abuse of the orders or forms of the House, or is moved for the purpose of obstructing business.358
The motion is not necessarily accepted by the Chair when a Member is speaking with the Chair’s indulgence; or when a Member is taking or speaking to a point of order or making a personal explanation, as these matters are within the control of the Chair. In respect of a point of order the matter awaits the Chair’s adjudication, and in respect of a personal explanation the Member is speaking with the permission of the Chair under standing order 68. Thus, in both cases the discretion of the Chair may be exercised.359 The Speaker has declined to accept the motion while a Member who had moved a motion of dissent from the Chair’s ruling was speaking, as he desired to hear the basis of the motion of dissent.360 The Chair is not bound to put the question on the motion if the Member speaking resumes his or her seat having completed the speech, the question having been effectively resolved by that action.361 A closure of Member motion may be withdrawn, by leave.362 The motion has been moved in respect of a Member making a statement by leave.363
When the motion has been agreed to, the closured Member has again spoken, by leave.364 The standing order has been interpreted as applying to the speech currently in progress—a closured Member has not been prevented from speaking again on the same question where the standing orders allow this (for example, during the detail stage of a bill).365
Notice has been given of a motion to suspend the operation of the standing order for a period except when the motion was moved by a Minister.366
Adjournment and curtailment of debate
Motion for adjournment of debate
Only a Member who has not spoken to the question or who has the right of reply may move the adjournment of a debate. The question must be put immediately and resolved without amendment or debate.367 The motion cannot be moved while another Member is speaking. It can only be moved by a Member who is called by the Speaker in the course of the debate. There is no restriction on the number of times an individual Member may move the motion in the same debate. A motion for the adjournment of the debate on the question ‘That the House do now adjourn’ is not in order.
Unless a Member requests that separate questions be put, the time for the resumption of the debate may be included in the adjournment question,368 and when a Member moves the motion ‘That the debate be now adjourned’ the Chair puts the question in the form ‘That the debate be now adjourned and the resumption of the debate be made an order of the day for . . .’. The time fixed for the resumption of debate is either ‘the next day of sitting’, ‘a later hour this day’, or a specific day. It is only when there is opposition to the adjournment of the debate or to the time for its resumption that the two questions are put separately. When the question to fix a time for the resumption of the debate is put separately, the question is open to amendment and debate. Both debate and any amendment are restricted, by the rule of relevancy, to the question of the time or date when the debate will be resumed. For example, an amendment must be in the form to omit ‘the next sitting’ in order to substitute a specific day.369
If the motion for the adjournment of debate is agreed to, the mover is entitled to speak first when the debate is resumed370 (see p. 487). If the motion is negatived, the mover may speak at a later time during the debate371—this provision has been interpreted as allowing the Member to speak immediately after a division on the motion for the adjournment.372 If the motion is negatived, no similar proposal may be received by the Chair if the Chair is of the opinion that it is an abuse of the orders or forms of the House or is moved for the purpose of obstructing business.373
If the Selection Committee has determined that consideration of an item of private Members’ business should continue on a future day, at the time set for interruption of the item of business or if debate concludes earlier, the Speaker interrupts proceedings and the matter is listed on the Notice Paper for the next sitting.374 The Chair will also do this even if the time available has not expired but where there are no other Members wishing to speak.375
Standing order 39 allows a Member who has presented a committee or delegation report (after any statements allowed have been made), to move a specific motion in relation to the report. Debate on the question must then be adjourned until a future day.376
In the Main Committee, if no Member is able to move adjournment of debate, the Chair can announce the adjournment when there is no further debate on a matter, or at the time set for the adjournment of the Committee.377 In the House, if there is no Member available qualified to move the motion—that is, when all Members present have already spoken in the debate—the Chair may also, without the motion being moved, simply declare that the debate has been adjourned and that the resumption of the debate will be made an order of the day for the next sitting.378
Leave to continue remarks
If a Member speaking to a question asks leave of the House to continue his or her remarks when the debate is resumed, this request is taken to be an indication that the Member wishes the debate to be adjourned. If leave is granted, the Chair proposes the question that the debate be adjourned and the resumption of the debate be made an order of the day for an indicated time.379 If leave is refused, the Member may continue speaking until the expiration of the time allowed.380
A Member granted leave to continue his or her remarks is entitled to the first call when the debate is resumed, and may then speak for the remainder of his or her allotted time. If the Member does not speak first when the debate is resumed the entitlement to continue is lost.381
Closure of question
After a question has been proposed from the Chair a Member may move ‘That the question be now put’ and the motion must be put immediately and resolved without amendment or debate.382 No notice is required of the motion and it may be moved irrespective of whether or not another Member is addressing the Chair. When the closure is moved, it applies only to the immediate question before the House.
The requirement for the closure motion to be put immediately and resolved without amendment or debate means that, until the question on this motion has been decided, there is no opportunity for a point of order to be raised or a dissent motion to be moved in respect of the putting of the motion. The closure thus takes precedence over other opportunities or rights allowed by the standing orders.383
The provision for the closure of a question, commonly known as ‘the gag’, was incorporated in the standing orders in 1905384 but was not used until 7 September 1909.385 Since then it has been utilised more frequently.386 The closure has been moved as many as 41 times in one sitting387 and 29 times on one bill.388
If a motion for the closure is negatived, the Chair shall not receive the same proposal again if of the opinion that it is an abuse of the orders or forms of the House or moved for the purpose of obstructing business.389 The closure of a question cannot be moved in respect of any proceedings for which time has been allotted under the guillotine procedure.390 This restriction has been held not to apply to a motion, moved after the second reading of a bill, to refer the bill to a select committee when that proposal had not been included in the allotment of time for the various stages of the bill.391 The closure cannot be moved on a motion in relation to which the Selection Committee has determined that debate should continue on a future day,392 as such matters cannot be brought to a vote without the suspension of standing orders.393 The Chair has declined to accept the closure on a motion of dissent from the Chair’s ruling.394
If a division on the closure motion is in progress or just completed when the time for the automatic adjournment is reached, and the motion is agreed to, a decision is then taken on the main question(s) before the House before the automatic adjournment procedure is invoked.395
When the closure is agreed to, the question is then put on the immediate question by the Chair. If the immediate question is an amendment to the original question, debate may then continue on the original question, or the original question as amended.396 From time to time interruptions have occurred between the agreement to the closure and the putting of the question to which the closure related.397
If the closure is moved and agreed to while a Member is moving or seconding (where necessary) an amendment—that is, before the question on the amendment is proposed from the Chair—the amendment is superseded, and the question on the original question is put immediately.398 However, the Chair has declined to accept the closure at the point when a Member was formally seconding an amendment, and then proceeded to propose the question on the amendment.399 Similarly, a motion to suspend standing orders moved during debate of another item of business is superseded by a closure moved before the question on the suspension motion is proposed from the Chair, as the closure applies to the question currently before the House.400
Any Member may move the closure of a question in possession of the House, including a Member who has already spoken to the question.401 It may be moved by a Member during, or at the conclusion of, his or her speech, but no reasons may be given for so moving,402 nor may a Member take advantage of the rules for personal explanations to give reasons.403 If the seconder of a motion has reserved the right to speak, the closure overrides this right.404
Notice has been given of a motion to suspend the operation of the standing order for a period except when the motion was moved by a Minister.405
From time to time the Government may limit debate on a bill, motion, or a proposed resolution for customs or excise tariff by use of the guillotine.406 This procedure is described in detail in the Chapter on ‘Legislation’.
Other provisions for the interruption and conclusion of debates
The standing orders provide for the period of certain debates to be limited in time or to be concluded by procedures not yet dealt with in this chapter. Time limits407 apply to debates on:
- the question ‘That the House do now adjourn’ (S.O. 31);
- the question ‘That grievances be noted’ (S.O. 44);
- a motion for the suspension of standing orders when moved without notice under standing order 47 (S.O. 1);
- a motion for allotment of time under the guillotine procedures (S.O. 84);
- private Members’ business (S.O. 41);
- proceedings on committee and delegation reports on Mondays (S.O. 39); and
- matters of public importance (S.O. 46).
A debate (or discussion) may also be concluded:
- at the expiration of the time allotted under the guillotine procedure (S.O. 85(b));
- on withdrawal of a motion relating to a matter of special interest (S.O. 50);
- at the conclusion of the time determined by the Selection Committee (S.O. 222(d));
- by the closure motion ‘That the question be now put’ (S.O. 81); or
- by a motion ‘That the business of the day be called on’ in respect of a matter of public importance (S.O. 46(e)).
A debate may be interrupted:
- by the automatic adjournment (S.O. 31);
- when the time fixed for the conclusion of certain proceedings under the guillotine procedure has been reached (S.O. 85(a)); or
- at the conclusion of the time determined by the Selection Committee (S.O.s 41, 222(d)).
In all these cases the standing orders make provision as to how the question before the House is to be disposed of (where necessary).
A debate in the Main Committee may be interrupted by:
- the adjournment of the House (S.O. 190(c));
- the motion for the adjournment of the sitting of the Committee (S.O. 190(e)); or
- the motion that further proceedings be conducted in the House (S.O. 197).
The Committee may resume proceedings at the point at which they were interrupted following any suspension or adjournment of the Committee (S.O. 196).
Powers of Chair to enforce order
The Speaker or the occupier of the Chair at the time is responsible for the maintenance of order in the House.408 This responsibility is derived specifically from standing order 60 but also from other standing orders and the practice and traditions of the House.
Sanctions against disorderly conduct
Under standing order 91, a Member’s conduct is considered disorderly if the Member has:
- persistently and wilfully obstructed the House;
- used objectionable words, which he or she has refused to withdraw;
- persistently and wilfully refused to conform to a standing order;
- wilfully disobeyed an order of the House;
- persistently and wilfully disregarded the authority of the Speaker; or
- been considered by the Speaker to have behaved in a disorderly manner.
While specific offences are listed, it is not uncommon for a Member to be disciplined for an offence which is not specifically stated in the terms of the standing order but which is considered to be encompassed within its purview. For example, in regard to conduct towards the Chair, Members have been named for imputing motives to, disobedience to, defying, disregarding the authority of, reflecting upon, insolence to, and using expressions insulting or offensive to, the Chair. Since 1905 an unnecessary quorum call has been dealt with as a wilful obstruction of the House.409
When the Speaker’s attention is drawn to the conduct of a Member, the Speaker determines whether or not it is offensive or disorderly.410 The standing orders give the Speaker the power to intervene411 and take action against disorderly conduct by a Member, and to impose a range of sanctions, including directing the Member to leave the Chamber for one hour, or naming the Member.412
Before taking such action the Chair will generally first call a Member to order and sometimes warn the Member, but there is no obligation on the Chair to do so.413 Sometimes the Chair will issue a ‘general warning’, not aimed at any Member specifically.414 Members ignoring a warning may expect quick action by the Chair.
Direction to leave the Chamber
Pursuant to standing order 94(a), if the Speaker considers a Member’s conduct to be disorderly he or she may direct the Member to leave the Chamber415 for one hour. This action is taken as an alternative to naming the Member—the decision as to whether a naming or a direction to leave is more appropriate is a matter for the Speaker’s discretion. The direction to leave is not open to debate or dissent. When so directed, a Member failing to leave the Chamber immediately416 or continuing to behave in a disorderly manner may be named.417
The Speaker has not proceeded with a direction to leave the Chamber after the Member concerned (the Leader of the Opposition) had apologised for interjecting in a disorderly manner.418 Six Members (including a Minister) have been directed to leave on a single day.419
This procedure was introduced in 1994 following a recommendation by the Procedure Committee. The committee, noting the seriousness of a suspension and that the process was time-consuming and itself disruptive, considered that order in the House would be better maintained if the Speaker were to have available a disciplinary procedure of lesser gravity, but of greater speed of operation. The committee saw its proposed mechanism as a means of removing a source of disorder rather than as a punishment, enabling a situation to be defused quickly before it deteriorated, and without disrupting proceedings to any great extent.420
A Member directed to leave the Chamber for an hour is also excluded during that period from the Chamber galleries and the room in which the Main Committee is meeting.421
The naming of a Member is, in effect, an appeal to the House to support the Chair in maintaining order. Its first recorded use in the House of Commons was in 1641.422 The first recorded naming in the House of Representatives was on 21 November 1901 (Mr Conroy). Mr Conroy apologised to the Chair and the naming was withdrawn.423 The first recorded suspension was in respect of Mr Catts on 18 August 1910.424 A Member is usually named by the name of his or her electoral division, the Chair stating ‘I name the honourable Member for . . .’. Office holders have been named by their title.425 In 1927, when it was put to the Speaker that he should have named a Member by his actual name the Speaker replied:
It is a matter of identification, and the identity of the individual affected is not questioned. I named him as member for the constituency which he represents, and by which he is known in this Parliament.426
Office holders have been named, including Ministers,427 Leaders of the Opposition428 and party leaders.429 Members have been named together, but, except in the one instance, separate motions have been moved and questions put for the suspension of each Member.430 No Member has been named twice on the one occasion, but the Chair has threatened to take this action.431
The naming of a Member usually occurs immediately an offence has been committed but this is not always possible. For example, Members have been named at the next sitting as a result of incidents that occurred at the adjournment of the previous sitting of the House.432 A Member has been named for refusing to withdraw words which the Chair had initially ruled were not unparliamentary. When that ruling was reversed by a successful dissent motion and the Chair then demanded the withdrawal of the words, the Member refused to do so.433
The Chair has refused to accept a dissent motion to the action of naming a Member on the quite correct ground that, in naming a Member, the Chair has not made a ruling.434
Proceedings following the naming of a Member
Following the naming of a Member, the Speaker must immediately put the question, on a motion being made, ‘That the Member [for . . . ] be suspended from the service of the House’. No amendment, adjournment, or debate is allowed on the question.435
It is not uncommon for the Chair to withdraw the naming of a Member or for the matter not to be proceeded with after other Members have addressed the Chair on the matter and the offending Member has apologised.436 Such interventions are usually made by a Minister or a member of the opposition executive before the motion for suspension is moved, as it was put on one occasion ‘to give him a further opportunity to set himself right with the House’.437 The motion for suspension has not been proceeded with at the request of the Speaker,438 when the Speaker stated that no further action would be taken if the Member (who had left the Chamber) apologised immediately on his return,439 when a Member’s explanation was accepted by the Chair,440 when the Chair thought it better if the action proposed in naming a Member were forgotten,441 when the Chair accepted an assurance by the Leader of the Opposition that the Member named had not interjected,442 when the Chair acceded to a request by the Leader of the Opposition not to proceed with the matter,443 when the Member withdrew the remark which led to his naming and apologised to the Chair444 and when the Member apologised to the Chair.445 A suspension motion has not been proceeded with, the Speaker instead directing the Member to leave the Chamber for one hour.446 On one occasion the motion for a Member’s suspension was moved but, with disorder in the House continuing, the Speaker announced that to enable the House to proceed he would not put the question on the motion.447
A motion for the suspension of a Member has been moved at the commencement of a sitting following his naming during a count out of the previous sitting.448 Although the Chair has ruled that there is nothing in the standing orders which would prevent the House from proceeding with business between the naming of a Member and the subsequent submission of a motion for his suspension,449 the intention of the standing order, as borne out by practice, is presumably that the matter be proceeded with immediately without extraneous interruption.
Following the naming of a Member it is usually the Leader of the House or the Minister leading for the Government at the particular time who moves the motion for the suspension of the Member450 and the Chair has seen it as within its right at any time to call on the Minister leading the House to give effect to its rules and orders.451 The motion for the suspension of a Member has been negatived on two occasions, the first when the Government did not have sufficient Members present to ensure that the motion was agreed to,452 and the second when the Government, for the only time, did not support the Speaker and the motion for the suspension of the Member was moved by the Opposition and negatived. The Speaker resigned on the same day because of this unprecedented lack of support.453
A suspension on the first occasion is for 24 hours; on the second occasion in the same year, for three consecutive sittings; and on the third and any subsequent occasion in the same year, for seven consecutive sittings.454 Suspensions for three and seven sittings are exclusive of the day of suspension. A suspension in a previous session or a direction to leave the Chamber for one hour is disregarded and a ‘year’ means a year commencing on 1 January and ending on 31 December.455 There is only one instance of a Member having been suspended on a third occasion.456
A Member has been suspended from the service of the House ‘Until he returns, with the Speaker’s consent, and apologises to the Speaker’.457 The relevant standing order at that time had a proviso that ‘nothing herein shall be taken to deprive the House of power of proceeding against any Member according to ancient usages’. Members have also been suspended for varying periods in other circumstances—that is, not following a naming by the Chair— see ‘Punishment of Members’ in the Chapter on ‘Privilege’.
Once the House has ordered that a Member be suspended he or she must immediately leave the Chamber. If a Member refuses to leave, the Chair may order the Serjeant-at-Arms to remove the Member.458 On one occasion, the Speaker having ordered the Serjeant-at-Arms to direct a suspended Member to leave, the Member still refused to leave and grave disorder arose which caused the Speaker to suspend the sitting. When the sitting was resumed, the Member again refused to leave the Chamber. Grave disorder again arose and the sitting was suspended until the next day, when the Member then expressed regret and withdrew from the Chamber.459
A Member suspended from the service of the House is excluded from the Chamber, its galleries and the room in which the Main Committee is meeting,460 and may not participate in Chamber related activities. Thus petitions, notices of motion and matters of public importance are not accepted from a Member under suspension. A suspended Member is not otherwise affected in the performance of his or her duties. In earlier years notices of questions have been accepted from a Member after his suspension,461 although this has not been the recent practice, and notices of motions standing in the name of a suspended Member have been called on, and, not being moved or postponed, have been lost, as have matters of public importance.462 Suspension from the service of the House does not exempt a Member from serving on a committee of the House.463 The payment of a Member’s salary and allowances is not affected by a suspension.
Members have been prevented from subsequently raising the subject of a suspension as a matter of privilege as the matter has been seen as one of order, not privilege,464 and because a vote of the House could not be reflected upon except for the purpose of moving that it be rescinded.465 Members have also been prevented from subsequently referring to the naming of a Member once the particular incident was closed.466
A Member, by indulgence of the Speaker, has returned to the Chamber, withdrawn a remark unreservedly and expressed regret. The Speaker then stated that he had no objection to a motion being moved to allow the Member to resume his part in the proceedings, and standing orders were suspended to allow the Member to do so.467 On other occasions Members have returned and apologised following suspension of the standing orders468 and following the House’s agreement to a motion, moved by leave, that ‘he be permitted to resume his seat upon tendering an apology to the Speaker and the House’.469
Gross disorder by a Member
If the Speaker determines that there is an urgent need to protect the dignity of the House, he or she can order a grossly disorderly Member to leave the Chamber immediately. When the Member has left, the Speaker must immediately name the Member and put the question for suspension without a motion being necessary. If the question is resolved in the negative, the Member may return to the Chamber.470
This standing order has never been invoked but its pre-1963 predecessor was used on a number of occasions. The standing order was amended in 1963 to make it quite clear that its provisions would apply only in cases which are so grossly offensive that immediate action was imperative and that it could not be used for ordinary offences. In addition, provision was made for the House to judge the matter by requiring the Chair to name the Member immediately after he or she had left the Chamber.471
Grave disorder in the House
In the event of grave disorder occurring in the House, the Speaker, without any question being put, can suspend the sitting and state the time at which he or she will resume the Chair; or adjourn the House to the next sitting.472 On four occasions when grave disorder has arisen the Chair has adjourned the House until the next sitting.473 The Chair has also suspended the sitting in such circumstances on six occasions.474
Disorder in the Main Committee
The Deputy Speaker, or the occupier of the Chair at the time, is responsible for keeping order in the Main Committee. The House may address disorder in the Committee after receiving a report from the Deputy Speaker.475
In the Main Committee the Deputy Speaker has the same responsibility for the preservation of order as the Speaker has in the House.476 However, the Chair of the Main Committee does not have the power to name a Member or to direct a Member to leave. If sudden disorder occurs in the Committee the Deputy Speaker may, or on motion moved without notice by any Member must, suspend or adjourn the sitting immediately. If the sitting is adjourned, any business under discussion and not disposed of at the time of the adjournment is set down on the Notice Paper for the next sitting.477 Following the suspension or adjournment the Deputy Speaker must report the disorder to the House. Any subsequent action against a Member under standing order 94 may only be taken in the House.478
Sittings of the Main Committee have been suspended because of disorder arising. On the first occasion, in reporting the suspension to the House the Main Committee Chair further reported that a Member had disregarded the authority of and reflected on the Chair. Following the report the Member concerned was named by the Speaker and was suspended.479 On a later occasion the Member concerned was named and suspended after the Main Committee Chair reported that the Member had defied the Chair by continuing to interject after having been called to order.480 In 2002 disorder arose when a Member defied the Chair by refusing to withdraw a remark. Instead of suspending the sitting481 the Deputy Speaker requested another Member to move that the Committee adjourn.482 On another occasion, the offending Member having, by indulgence, withdrawn and apologised when the matter was reported to the House, the Speaker stated that he had discussed the matter with the Deputy Speaker and no further action was taken.483 In such cases the matter considered in the House is the defiance of the Chair, rather than any matter which gave rise to it.
Other matters of order relating to Members
The Speaker can intervene to prevent any personal quarrel between Members during proceedings.484 This standing order has only once been invoked to prevent the prosecution of a quarrel485 but the Chair has cited the standing order in admonishing Members for constantly interjecting in order to irritate or annoy others.486
A Member who wilfully disobeys an order of the House may be ordered to attend the House to answer for his or her conduct. A motion to this effect can be moved without notice.487
| http://www.aph.gov.au/About_Parliament/House_of_Representatives/Powers_practice_and_procedure/practice/chapter14 | 13
15 | Algorithms lie at the heart of computing. When you surf the web, you open up a web browser. When you view a website, your web browser takes the HTML file, which contains code, and then the browser uses an algorithm to convert the code into a page that actually makes sense to humans. When you listen to music on iTunes, the music player takes the MP3 file, which encodes the sound as 0′s and 1′s, and it uses an algorithm to read the 0′s and 1′s and tell your computer speakers to output sound. When you’re looking for a particular song in your music library, you might want to search alphabetically, so you click on the Name column to sort your songs alphabetically. Your computer doesn’t magically know how to sort your library. It follows a precise set of instructions every time it’s told to sort something. That set of instructions is an algorithm!
Modern electronic computers actually follow a theoretical model invented by Alan Turing in the 1930s. This theoretical model, now called a Turing machine, can be informally described quite simply. A Turing machine is composed of three parts: states, a tape (consisting of blocks or cells), and a tape head, which points to a cell. The modern computer counterpart to the tape is memory, and the counterpart to the tape head is the program counter. In addition, the Turing machine has a transition function, which tells it which state to go to and what to do on the tape, given its current state and tape contents. Think of it this way: you have a set of states such as “hungry”, “sad”, “happy”, etc. and you have a row of infinitely many sheets of paper laid side by side on the ground, but they’re so big that you can only see one of them at a time by walking over it, and each sheet can only contain one character (you have to write them so large so that aliens from space can see them). A transition function would be like a look-up table you carry in your pocket, which says stuff like “if you’re hungry and the paper you’re on contains a 1, then you become angry, change the 1 to a 0 on your sheet of paper, and move to the right by one sheet of paper.” Moreover, one of the states of the Turing machine is designated the initial state (the state which it starts in every time it’s run), some of the states are designated accepting states, and some of them are designated rejecting states. A Turing machine is completely specified by its states along with the above designations and its transition function. When you think Turing machine, think “theoretical computer.” Turing machines don’t have to be simulated by electronic computers. Anything that can do what a Turing machine does can act like a computer. Instead of electronic circuits acting as logic gates, you can use billiard-balls bouncing inside a box instead. This model has actually been proposed.
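To make the "look-up table" idea concrete, here is a minimal Turing-machine simulator written in Python. This sketch is my own illustration, not part of the original post: the state names, symbols and transition table are invented purely for demonstration.

    def run_turing_machine(transitions, accept_states, reject_states, tape, start_state):
        # tape maps cell positions to symbols; unwritten cells hold the blank symbol '_'.
        state = start_state
        head = 0
        while state not in accept_states and state not in reject_states:
            symbol = tape.get(head, '_')
            # The transition function: (current state, symbol read) ->
            # (next state, symbol to write, direction to move).
            state, write, move = transitions[(state, symbol)]
            tape[head] = write
            head += 1 if move == 'R' else -1
        return state in accept_states

    # A toy machine that accepts binary strings ending in 0 (i.e. even numbers).
    transitions = {
        ('scan', '0'): ('scan', '0', 'R'),     # keep moving right over the input
        ('scan', '1'): ('scan', '1', 'R'),
        ('scan', '_'): ('check', '_', 'L'),    # fell off the end; step back to the last bit
        ('check', '0'): ('accept', '0', 'R'),
        ('check', '1'): ('reject', '1', 'R'),
        ('check', '_'): ('reject', '_', 'R'),  # empty input
    }

    tape = {i: bit for i, bit in enumerate('1111011')}   # 123 in binary
    print(run_turing_machine(transitions, {'accept'}, {'reject'}, tape, 'scan'))   # False: 123 is odd

The while loop plays the role of the machine's clock: at each step it consults the table, writes a symbol, and moves the head, exactly as in the informal description above.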
Now, every program in your computer acts like a Turing machine, except for one difference: the tape is not infinitely long, i.e., you have a finite amount of work space. For now, let’s only consider programs which answer Yes/No questions. For example, consider a program which decides if a given positive integer is prime. Many graphing calculators, such as the TI-89 graphing calculator, have such a function; on the TI-89, the function is called isPrime(). The calculator doesn’t magically know if a number is prime or not. When you tell it to do isPrime(123), it follows a specific set of instructions, or algorithm, which was programmed into it. The algorithm corresponds to the transition function of the Turing machine. When you tell the calculator or Turing machine to do isPrime(123), it first converts the input, 123, into the language with which it operates, for example, binary numbers. In this case, the Turing machine reads as its input the binary of 123, which is 1111011 and writes that down on the first 7 cells of its tape. Then it faithfully follows its transition function and performs steps and finally halts in either an accepting state or rejecting state. If it ends on an accepting state, it spits out the answer True, and if it ends on a rejecting state, it spits out the answer False. In our case, the machine will end up on a rejecting state, since 123 = 3 x 41.
However, programmers nowadays don’t have to worry about the nitty-gritty details of coming up with an appropriate transition function for a Turing machine that will correctly decide if its input is prime and then putting together an electronic circuit that will simulate the Turing machine. Thankfully, all that hard work has been done decades ago, and now everything’s been abstracted. We can write programs in programming languages like C++ or Java, and the compiler does all the work of translating our code into machine code, which the machine then reads in and performs the necessary steps to simulate the appropriate Turing machine.
Now, how does a computer actually check if a number is prime? That is, what is a sequence of instructions such that, when given a positive integer n, it will always output the correct answer as to whether n is prime or not? Often one can write an algorithm by thinking about how one would do it by hand. If you were asked to determine if a number is prime or not, how would you do it? Well, one can directly use the definition of primality. The number n is prime if and only if no integer d with 1 < d < n evenly divides n. So, one way would be to start from 2 and check whether any number up to n-1 divides n. If none do, then n is prime, otherwise n is not. This may seem like a stupid way to do it, and in some sense it is, but it works and you have to keep in mind that computers crunch numbers a lot faster than humans do. The algorithm just described can be written in Python as follows:
def isPrime(n): if n < 2: return False else: for d in range(2, n): if n % d == 0: return False return True
In case you’re not familiar with Python syntax, this is what the code does: it defines a function isPrime(n) which takes n as its input. Then, it first checks if n < 2. If so, then it’ll spit out False, since there are no primes less than 2. If not, then the code goes to the else branch. The for loop basically iteratively sets d to be 2, 3, 4, etc. all the way until n-1. For each value, it’ll test if d divides n. The n % d means taking the remainder when dividing n by d, and == 0 is checking if the result is equal to 0. If this condition is true, then d divides n, so n is not prime, so we return False. Finally, after running through all these values of d, if no divisors were found, then return True.
You may notice that this algorithm takes a linear amount of time in n. That is, if you double n, then the number of divisors the algorithm checks roughly doubles as well. We say that this is an -time algorithm, where is the input. Note that we can modify the algorithm so that it only checks divisors d up to . We can do this because if n had a divisor , then it would have to have a corresponding divisor . Therefore we could make the algorithm an -time algorithm. Making algorithms more efficient is a huge area of study in computer science, and deserves at least a post on its own.
To recap, you should have learned the following from this post:
- An intuitive idea of what a Turing machine is, and how modern electronic computers relate to them
- What an algorithm is
- The relationship between algorithms and Turing machines
- An algorithm for determining the primality of positive integers | http://nolaymanleftbehind.wordpress.com/2011/07/11/algorithms-what-are-they/ | 13 |
30 | Go to the previous, next section.
A Lisp program is composed mainly of Lisp functions. This chapter explains what functions are, how they accept arguments, and how to define them.
In a general sense, a function is a rule for carrying on a computation given several values called arguments. The result of the computation is called the value of the function. The computation can also have side effects: lasting changes in the values of variables or the contents of data structures.
Here are important terms for functions in Emacs Lisp and for other function-like objects.
append. These functions are also called built-in functions or subrs. (Special forms are also considered primitives.)
Usually the reason that a function is a primitives is because it is fundamental, or provides a low-level interface to operating system services, or because it needs to run fast. Primitives can be modified or added only by changing the C sources and recompiling the editor. See section Writing Emacs Primitives.
command-executecan invoke; it is a possible definition for a key sequence. Some functions are commands; a function written in Lisp is a command if it contains an interactive declaration (see section Defining Commands). Such a function can be called from Lisp expressions like other functions; in this case, the fact that the function is a command makes no difference.
Strings are commands also, even though they are not functions. A symbol is a command if its function definition is a command; such symbols can be invoked with M-x. The symbol is a function as well if the definition is a function. See section Command Loop Overview.
Function: subrp object
This function returns
t if object is a built-in function
(i.e. a Lisp primitive).
(subrp 'message) ;
messageis a symbol, => nil ; not a subr object. (subrp (symbol-function 'message)) => t
Function: byte-code-function-p object
This function returns
t if object is a byte-code
function. For example:
(byte-code-function-p (symbol-function 'next-line)) => t
A function written in Lisp is a list that looks like this:
(lambda (arg-variables...) [documentation-string] [interactive-declaration] body-forms...)
(Such a list is called a lambda expression for historical reasons, even though it is not really an expression at all--it is not a form that can be evaluated meaningfully.)
The first element of a lambda expression is always the symbol
lambda. This indicates that the list represents a function. The
reason functions are defined to start with
lambda is so that
other lists, intended for other uses, will not accidentally be valid as
The second element is a list of argument variable names (symbols). This is called the lambda list. When a Lisp function is called, the argument values are matched up against the variables in the lambda list, which are given local bindings with the values provided. See section Local Variables.
The documentation string is an actual string that serves to describe the function for the Emacs help facilities. See section Documentation Strings of Functions.
The interactive declaration is a list of the form
code-string). This declares how to provide arguments if the
function is used interactively. Functions with this declaration are called
commands; they can be called using M-x or bound to a key.
Functions not intended to be called in this way should not have interactive
declarations. See section Defining Commands, for how to write an interactive
The rest of the elements are the body of the function: the Lisp code to do the work of the function (or, as a Lisp programmer would say, "a list of Lisp forms to evaluate"). The value returned by the function is the value returned by the last element of the body.
Consider for example the following function:
(lambda (a b c) (+ a b c))
We can call this function by writing it as the CAR of an expression, like this:
((lambda (a b c) (+ a b c)) 1 2 3)
The body of this lambda expression is evaluated with the variable
a bound to 1,
b bound to 2, and
c bound to 3.
Evaluation of the body adds these three numbers, producing the result 6;
therefore, this call to the function returns the value 6.
Note that the arguments can be the results of other function calls, as in this example:
((lambda (a b c) (+ a b c)) 1 (* 2 3) (- 5 4))
Here all the arguments
(* 2 3), and
(- 5 4) are
evaluated, left to right. Then the lambda expression is applied to the
argument values 1, 6 and 1 to produce the value 8.
It is not often useful to write a lambda expression as the CAR of
a form in this way. You can get the same result, of making local
variables and giving them values, using the special form
(see section Local Variables). And
let is clearer and easier to use.
In practice, lambda expressions are either stored as the function
definitions of symbols, to produce named functions, or passed as
arguments to other functions (see section Anonymous Functions).
However, calls to explicit lambda expressions were very useful in the
old days of Lisp, before the special form
let was invented. At
that time, they were the only way to bind and initialize local
Our simple sample function,
(lambda (a b c) (+ a b c)),
specifies three argument variables, so it must be called with three
arguments: if you try to call it with only two arguments or four
arguments, you get a
It is often convenient to write a function that allows certain arguments
to be omitted. For example, the function
substring accepts three
arguments--a string, the start index and the end index--but the third
argument defaults to the end of the string if you omit it. It is also
convenient for certain functions to accept an indefinite number of
arguments, as the functions
To specify optional arguments that may be omitted when a function
is called, simply include the keyword
&optional before the optional
arguments. To specify a list of zero or more extra arguments, include the
&rest before one final argument.
Thus, the complete syntax for an argument list is as follows:
(required-vars... [&optional optional-vars...] [&rest rest-var])
The square brackets indicate that the
clauses, and the variables that follow them, are optional.
A call to the function requires one actual argument for each of the
required-vars. There may be actual arguments for zero or more of the
optional-vars, and there cannot be any more actual arguments than
&rest exists. In that case, there may be any number of
extra actual arguments.
If actual arguments for the optional and rest variables are omitted,
then they always default to
nil. However, the body of the function
is free to consider
nil an abbreviation for some other meaningful
value. This is what
nil as the third argument
means to use the length of the string supplied. There is no way for the
function to distinguish between an explicit argument of
an omitted argument.
Common Lisp note: Common Lisp allows the function to specify what default value to use when an optional argument is omitted; GNU Emacs Lisp always uses
For example, an argument list that looks like this:
(a b &optional c d &rest e)
b to the first two actual arguments, which are
required. If one or two more arguments are provided,
d are bound to them respectively; any arguments after the first
four are collected into a list and
e is bound to that list. If
there are only two arguments,
nil; if two or three
nil; if four arguments or fewer,
There is no way to have required arguments following optional
ones--it would not make sense. To see why this must be so, suppose
c in the example were optional and
d were required.
If three actual arguments are given; then which variable would the third
argument be for? Similarly, it makes no sense to have any more
arguments (either required or optional) after a
Here are some examples of argument lists and proper calls:
((lambda (n) (1+ n)) ; One required: 1) ; requires exactly one argument. => 2 ((lambda (n &optional n1) ; One required and one optional: (if n1 (+ n n1) (1+ n))) ; 1 or 2 arguments. 1 2) => 3 ((lambda (n &rest ns) ; One required and one rest: (+ n (apply '+ ns))) ; 1 or more arguments. 1 2 3 4 5) => 15
A lambda expression may optionally have a documentation string just after the lambda list. This string does not affect execution of the function; it is a kind of comment, but a systematized comment which actually appears inside the Lisp world and can be used by the Emacs help facilities. See section Documentation, for how the documentation-string is accessed.
It is a good idea to provide documentation strings for all commands, and for all other functions in your program that users of your program should know about; internal functions might as well have only comments, since comments don't take up any room when your program is loaded.
The first line of the documentation string should stand on its own,
apropos displays just this first line. It should consist
of one or two complete sentences that summarize the function's purpose.
The start of the documentation string is usually indented, but since these spaces come before the starting double-quote, they are not part of the string. Some people make a practice of indenting any additional lines of the string so that the text lines up. This is a mistake. The indentation of the following lines is inside the string; what looks nice in the source code will look ugly when displayed by the help commands.
You may wonder how the documentation string could be optional, since there are required components of the function that follow it (the body). Since evaluation of a string returns that string, without any side effects, it has no effect if it is not the last form in the body. Thus, in practice, there is no confusion between the first form of the body and the documentation string; if the only body form is a string then it serves both as the return value and as the documentation.
In most computer languages, every function has a name; the idea of a
function without a name is nonsensical. In Lisp, a function in the
strictest sense has no name. It is simply a list whose first element is
lambda, or a primitive subr-object.
However, a symbol can serve as the name of a function. This happens when you put the function in the symbol's function cell (see section Symbol Components). Then the symbol itself becomes a valid, callable function, equivalent to the list or subr-object that its function cell refers to. The contents of the function cell are also called the symbol's function definition. When the evaluator finds the function definition to use in place of the symbol, we call that symbol function indirection; see section Symbol Function Indirection.
In practice, nearly all functions are given names in this way and
referred to through their names. For example, the symbol
as a function and does what it does because the primitive subr-object
#<subr car> is stored in its function cell.
We give functions names because it is more convenient to refer to them
by their names in other functions. For primitive subr-objects such as
#<subr car>, names are the only way you can refer to them: there
is no read syntax for such objects. For functions written in Lisp, the
name is more convenient to use in a call than an explicit lambda
expression. Also, a function with a name can refer to itself--it can
be recursive. Writing the function's name in its own definition is much
more convenient than making the function definition point to itself
(something that is not impossible but that has various disadvantages in
Functions are often identified with the symbols used to name them. For
example, we often speak of "the function
car", not distinguishing
between the symbol
car and the primitive subr-object that is its
function definition. For most purposes, there is no need to distinguish.
Even so, keep in mind that a function need not have a unique name. While
a given function object usually appears in the function cell of only
one symbol, this is just a matter of convenience. It is easy to store
it in several symbols using
fset; then each of the symbols is
equally well a name for the same function.
A symbol used as a function name may also be used as a variable; these two uses of a symbol are independent and do not conflict.
We usually give a name to a function when it is first created. This
is called defining a function, and it is done with the
defun special form.
Special Form: defun name argument-list body-forms
defun is the usual way to define new Lisp functions. It
defines the symbol name as a function that looks like this:
(lambda argument-list . body-forms)
This lambda expression is stored in the function cell of name.
The value returned by evaluating the
defun form is name,
but usually we ignore this value.
As described previously (see section Lambda Expressions),
argument-list is a list of argument names and may include the
&rest. Also, the first two forms
in body-forms may be a documentation string and an interactive
Note that the same symbol name may also be used as a global variable, since the value cell is independent of the function cell.
Here are some examples:
(defun foo () 5) => foo (foo) => 5 (defun bar (a &optional b &rest c) (list a b c)) => bar (bar 1 2 3 4 5) => (1 2 (3 4 5)) (bar 1) => (1 nil nil) (bar) error--> Wrong number of arguments. (defun capitalize-backwards () "Upcase the last letter of a word." (interactive) (backward-word 1) (forward-word 1) (backward-char 1) (capitalize-word 1)) => capitalize-backwards
Be careful not to redefine existing functions unintentionally.
defun redefines even primitive functions such as
without any hesitation or notification. Redefining a function already
defined is often done deliberately, and there is no way to distinguish
deliberate redefinition from unintentional redefinition.
Defining functions is only half the battle. Functions don't do anything until you call them, i.e., tell them to run. This process is also known as invocation.
The most common way of invoking a function is by evaluating a list. For
example, evaluating the list
(concat "a" "b") calls the function
concat. See section Evaluation, for a description of evaluation.
When you write a list as an expression in your program, the function
name is part of the program. This means that the choice of which
function to call is made when you write the program. Usually that's
just what you want. Occasionally you need to decide at run time which
function to call. Then you can use the functions
Function: funcall function &rest arguments
funcall calls function with arguments, and returns
whatever function returns.
funcall is a function, all of its arguments, including
function, are evaluated before
funcall is called. This
means that you can use any expression to obtain the function to be
called. It also means that
funcall does not see the expressions
you write for the arguments, only their values. These values are
not evaluated a second time in the act of calling function;
funcall enters the normal procedure for calling a function at the
place where the arguments have already been evaluated.
The argument function must be either a Lisp function or a
primitive function. Special forms and macros are not allowed, because
they make sense only when given the "unevaluated" argument
funcall cannot provide these because, as we saw
above, it never knows them in the first place.
(setq f 'list) => list (funcall f 'x 'y 'z) => (x y z) (funcall f 'x 'y '(z)) => (x y (z)) (funcall 'and t nil) error--> Invalid function: #<subr and>
Compare this example with that of
Function: apply function &rest arguments
apply calls function with arguments, just like
funcall but with one difference: the last of arguments is a
list of arguments to give to function, rather than a single
argument. We also say that this list is appended to the other
apply returns the result of calling function. As with
funcall, function must either be a Lisp function or a
primitive function; special forms and macros do not make sense in
(setq f 'list) => list (apply f 'x 'y 'z) error--> Wrong type argument: listp, z (apply '+ 1 2 '(3 4)) => 10 (apply '+ '(1 2 3 4)) => 10 (apply 'append '((a b c) nil (x y z) nil)) => (a b c x y z)
An interesting example of using
apply is found in the description
mapcar; see the following section.
It is common for Lisp functions to accept functions as arguments or
find them in data structures (especially in hook variables and property
lists) and call them using
that accept function arguments are often called functionals.
Sometimes, when you call such a function, it is useful to supply a no-op function as the argument. Here are two different kinds of no-op function:
Function: identity arg
This function returns arg and has no side effects.
Function: ignore &rest args
This function ignores any arguments and returns
A mapping function applies a given function to each element of a
list or other collection. Emacs Lisp has three such functions;
mapconcat, which scan a list, are described
here. For the third mapping function,
section Creating and Interning Symbols.
Function: mapcar function sequence
mapcar applies function to each element of sequence in
turn. The results are made into a
The argument sequence may be a list, a vector or a string. The result is always a list. The length of the result is the same as the length of sequence.
For example: (mapcar 'car '((a b) (c d) (e f))) => (a c e) (mapcar '1+ [1 2 3]) => (2 3 4) (mapcar 'char-to-string "abc") => ("a" "b" "c") ;; Call each function in
my-hooks. (mapcar 'funcall my-hooks) (defun mapcar* (f &rest args) "Apply FUNCTION to successive cars of all ARGS, until one ends. Return the list of results." ;; If no list is exhausted, (if (not (memq 'nil args)) ;; Apply function to CARs. (cons (apply f (mapcar 'car args)) (apply 'mapcar* f ;; Recurse for rest of elements. (mapcar 'cdr args))))) (mapcar* 'cons '(a b c) '(1 2 3 4)) => ((a . 1) (b . 2) (c . 3))
Function: mapconcat function sequence separator
mapconcat applies function to each element of
sequence: the results, which must be strings, are concatenated.
Between each pair of result strings,
mapconcat inserts the string
separator. Usually separator contains a space or comma or
other suitable punctuation.
The argument function must be a function that can take one argument and returns a string.
(mapconcat 'symbol-name '(The cat in the hat) " ") => "The cat in the hat" (mapconcat (function (lambda (x) (format "%c" (1+ x)))) "HAL-8000" "") => "IBM.9111"
In Lisp, a function is a list that starts with
alternatively a primitive subr-object); names are "extra". Although
usually functions are defined with
defun and given names at the
same time, it is occasionally more concise to use an explicit lambda
expression--an anonymous function. Such a list is valid wherever a
function name is.
Any method of creating such a list makes a valid function. Even this:
(setq silly (append '(lambda (x)) (list (list '+ (* 3 4) 'x)))) => (lambda (x) (+ 12 x))
This computes a list that looks like
(lambda (x) (+ 12 x)) and
makes it the value (not the function definition!) of
Here is how we might call this function:
(funcall silly 1) => 13
(It does not work to write
(silly 1), because this function
is not the function definition of
silly. We have not given
silly any function definition, just a value as a variable.)
Most of the time, anonymous functions are constants that appear in
your program. For example, you might want to pass one as an argument
to the function
mapcar, which applies any given function to each
element of a list. Here we pass an anonymous function that multiplies
a number by two:
(defun double-each (list) (mapcar '(lambda (x) (* 2 x)) list)) => double-each (double-each '(2 11)) => (4 22)
In such cases, we usually use the special form
of simple quotation to quote the anonymous function.
Special Form: function function-object
This special form returns function-object without evaluating it.
In this, it is equivalent to
quote. However, it serves as a
note to the Emacs Lisp compiler that function-object is intended
to be used only as a function, and therefore can safely be compiled.
See section Quoting, for comparison.
function instead of
quote makes a difference
inside a function or macro that you are going to compile. For example:
(defun double-each (list) (mapcar (function (lambda (x) (* 2 x))) list)) => double-each (double-each '(2 11)) => (4 22)
If this definition of
double-each is compiled, the anonymous
function is compiled as well. By contrast, in the previous definition
quote is used, the argument passed to
mapcar is the precise list shown:
(lambda (arg) (+ arg 5))
The Lisp compiler cannot assume this list is a function, even though it
looks like one, since it does not know what
mapcar does with the
mapcar will check that the CAR of the third
element is the symbol
+! The advantage of
that it tells the compiler to go ahead and compile the constant
We sometimes write
function instead of
quoting the name of a function, but this usage is just a sort of
(function symbol) == (quote symbol) == 'symbol
documentation in section Access to Documentation Strings, for a
realistic example using
function and an anonymous function.
The function definition of a symbol is the object stored in the function cell of the symbol. The functions described here access, test, and set the function cell of symbols.
Function: symbol-function symbol
This returns the object in the function cell of symbol. If the
symbol's function cell is void, a
void-function error is
This function does not check that the returned object is a legitimate function.
(defun bar (n) (+ n 2)) => bar (symbol-function 'bar) => (lambda (n) (+ n 2)) (fset 'baz 'bar) => bar (symbol-function 'baz) => bar
If you have never given a symbol any function definition, we say that
that symbol's function cell is void. In other words, the function
cell does not have any Lisp object in it. If you try to call such a symbol
as a function, it signals a
Note that void is not the same as
nil or the symbol
void. The symbols
void are Lisp objects,
and can be stored into a function cell just as any other object can be
(and they can be valid functions if you define them in turn with
void is an object. A
void function cell contains no object whatsoever.
You can test the voidness of a symbol's function definition with
fboundp. After you have given a symbol a function definition, you
can make it void once more using
Function: fboundp symbol
t if the symbol has an object in its function cell,
nil otherwise. It does not check that the object is a legitimate
Function: fmakunbound symbol
This function makes symbol's function cell void, so that a
subsequent attempt to access this cell will cause a
error. (See also
makunbound, in section Local Variables.)
(defun foo (x) x) => x (fmakunbound 'foo) => x (foo 1) error--> Symbol's function definition is void: foo
Function: fset symbol object
This function stores object in the function cell of symbol. The result is object. Normally object should be a function or the name of a function, but this is not checked.
There are three normal uses of this function:
defun. See section Classification of List Forms, for an example of this usage.
defunwere not a primitive, it could be written in Lisp (as a macro) using
Here are examples of the first two uses:
firstthe same definition
carhas. (fset 'first (symbol-function 'car)) => #<subr car> (first '(1 2 3)) => 1 ;; Make the symbol
carthe function definition of
xfirst. (fset 'xfirst 'car) => car (xfirst '(1 2 3)) => 1 (symbol-function 'xfirst) => car (symbol-function (symbol-function 'xfirst)) => #<subr car> ;; Define a named keyboard macro. (fset 'kill-two-lines "\^u2\^k") => "\^u2\^k"
When writing a function that extends a previously defined function, the following idiom is often used:
(fset 'old-foo (symbol-function 'foo)) (defun foo () "Just like old-foo, except more so." (old-foo) (more-so))
This does not work properly if
foo has been defined to autoload.
In such a case, when
old-foo, Lisp attempts
old-foo by loading a file. Since this presumably
foo rather than
old-foo, it does not produce the
proper results. The only way to avoid this problem is to make sure the
file is loaded before moving aside the old definition of
See also the function
indirect-function in section Symbol Function Indirection.
You can define an inline function by using
defun. An inline function works just like an ordinary
function except for one thing: when you compile a call to the function,
the function's definition is open-coded into the caller.
Making a function inline makes explicit calls run faster. But it also has disadvantages. For one thing, it reduces flexibility; if you change the definition of the function, calls already inlined still use the old definition until you recompile them.
Another disadvantage is that making a large function inline can increase the size of compiled code both in files and in memory. Since the advantages of inline functions are greatest for small functions, you generally should not make large functions inline.
It's possible to define a macro to expand into the same code that an
inline function would execute. But the macro would have a limitation:
you can use it only explicitly--a macro cannot be called with
mapcar and so on. Also, it takes some work to
convert an ordinary function into a macro. (See section Macros.) To convert
it into an inline function is very easy; simply replace
Inline functions can be used and open coded later on in the same file, following the definition, just like macros.
Emacs versions prior to 19 did not have inline functions.
Here is a table of several functions that do things related to function calling and function definitions. They are documented elsewhere, but we provide cross references here.
Go to the previous, next section. | http://www.slac.stanford.edu/comp/unix/gnu-info/elisp_12.html | 13 |
20 | Boolean satisfiability problem
In computer science, satisfiability (often written in all capitals or abbreviated SAT) is the problem of determining if there exists an interpretation that satisfies a given Boolean formula. In other words, it establishes if the variables of a given Boolean formula can be assigned in such a way as to make the formula evaluate to TRUE. Equally important is to determine whether no such assignments exist, which would imply that the function expressed by the formula is identically FALSE for all possible variable assignments. In this latter case, we would say that the function is unsatisfiable; otherwise it is satisfiable. For example, the formula a AND b is satisfiable because one can find the values a = TRUE and b = TRUE, which make (a AND b) = TRUE. To emphasize the binary nature of this problem, it is frequently referred to as Boolean or propositional satisfiability.
SAT was the first known example of an NP-complete problem. That briefly means that there is no known algorithm that efficiently solves all instances of SAT, and it is generally believed (but not proven, see P versus NP problem) that no such algorithm can exist. Further, a wide range of other naturally occurring decision and optimization problems can be transformed into instances of SAT. A class of algorithms called SAT solvers can efficiently solve a large enough subset of SAT instances to be useful in various practical areas such as circuit design and automatic theorem proving, by solving SAT instances made by transforming problems that arise in those areas. Extending the capabilities of SAT solving algorithms is an ongoing area of progress. However, no current such methods can efficiently solve all SAT instances.
Basic definitions, terminology and applications
In complexity theory, the satisfiability problem (SAT) is a decision problem, whose instance is a Boolean expression written using only AND, OR, NOT, variables, and parentheses. The question is: given the expression, is there some assignment of TRUE and FALSE values to the variables that will make the entire expression true? A formula of propositional logic is said to be satisfiable if logical values can be assigned to its variables in a way that makes the formula true. The Boolean satisfiability problem is NP-complete. The propositional satisfiability problem (PSAT), which decides whether a given propositional formula is satisfiable, is of central importance in various areas of computer science, including theoretical computer science, algorithmics, artificial intelligence, hardware design, electronic design automation, and verification.
A literal is either a variable or the negation of a variable (the negation of an expression can be reduced to negated variables by De Morgan's laws). For example, is a positive literal and is a negative literal.
A clause is a disjunction of literals. For example, is a clause (read as "x-sub-one or not x-sub-2").
There are several special cases of the Boolean satisfiability problem in which the formula are required to be conjunctions of clauses (i.e. formulae in conjunctive normal form). Determining the satisfiability of a formula in conjunctive normal form where each clause is limited to at most three literals is NP-complete; this problem is called "3SAT", "3CNFSAT", or "3-satisfiability". Determining the satisfiability of a formula in which each clause is limited to at most two literals is NL-complete; this problem is called "2SAT". Determining the satisfiability of a formula in which each clause is a Horn clause (i.e. it contains at most one positive literal) is P-complete; this problem is called Horn-satisfiability.
The Cook–Levin theorem states that the Boolean satisfiability problem is NP-complete, and in fact, this was the first decision problem proved to be NP-complete. However, beyond this theoretical significance, efficient and scalable algorithms for SAT that were developed over the last decade have contributed to dramatic advances in our ability to automatically solve problem instances involving tens of thousands of variables and millions of constraints. Examples of such problems in electronic design automation (EDA) include formal equivalence checking, model checking, formal verification of pipelined microprocessors, automatic test pattern generation, routing of FPGAs, and so on. A SAT-solving engine is now considered to be an essential component in the EDA toolbox.
Complexity and restricted versions
SAT was the first known NP-complete problem, as proved by Stephen Cook in 1971 and independently by Leonid Levin in 1973. Until that time, the concept of an NP-complete problem did not even exist. The problem remains NP-complete even if all expressions are written in conjunctive normal form with 3 variables per clause (3-CNF), yielding the 3SAT problem. This means the expression has the form:
- (x11 OR x12 OR x13) AND
- (x21 OR x22 OR x23) AND
- (x31 OR x32 OR x33) AND ...
where each x is a variable or a negation of a variable, and each variable can appear multiple times in the expression.
A useful property of Cook's reduction is that it preserves the number of accepting answers. For example, if a graph has 17 valid 3-colorings, the SAT formula produced by the reduction will have 17 satisfying assignments.
NP-completeness only refers to the run-time of the worst case instances. Many of the instances that occur in practical applications can be solved much more quickly. See runtime behavior below.
SAT is easier if the formulas are restricted to those in disjunctive normal form, that is, they are disjunction (OR) of terms, where each term is a conjunction (AND) of literals (possibly negated variables). Such a formula is indeed satisfiable if and only if at least one of its terms is satisfiable, and a term is satisfiable if and only if it does not contain both x and NOT x for some variable x. This can be checked in polynomial time. Furthermore, if they are restricted to being in full disjunctive normal form, in which every variable appears exactly once in every conjunction, they can be checked in constant time (each conjunction represents one satisfying assignment). But it can take exponential time and space to convert a general SAT problem to disjunctive normal form.
SAT is also easier if the number of literals in a clause is limited to 2, in which case the problem is called 2SAT. This problem can also be solved in polynomial time, and in fact is complete for the class NL. Similarly, if we limit the number of literals per clause to 2 and change the OR operations to XOR operations, the result is exclusive-or 2-satisfiability, a problem complete for SL = L.
One of the most important restrictions of SAT is HORNSAT, where the formula is a conjunction of Horn clauses. This problem is solved by the polynomial-time Horn-satisfiability algorithm, and is in fact P-complete. It can be seen as P's version of the Boolean satisfiability problem.
Here is an example, where ¬ indicates negation:
E has two clauses (denoted by parentheses), four variables (x1, x2, x3, x4), and k=3 (three literals per clause).
To solve this instance of the decision problem we must determine whether there is a truth value (TRUE or FALSE) we can assign to each of the variables (x1 through x4) such that the entire expression is TRUE. In this instance, there is such an assignment (x1 = TRUE, x2 = TRUE, x3=TRUE, x4=TRUE), so the answer to this instance is YES. This is one of many possible assignments, with for instance, any set of assignments including x1 = TRUE being sufficient. If there were no such assignment(s), the answer would be NO.
3-SAT is NP-complete and it is used as a starting point for proving that other problems are also NP-hard. This is done by polynomial-time reduction from 3-SAT to the other problem. An example of a problem where this method has been used is the Clique problem. 3-SAT can be further restricted to One-in-three 3SAT, where we ask if exactly one of the literals in each clause is true, rather than at least one. This restriction remains NP-complete.
There is a simple randomized algorithm due to Schöning (1999) that runs in time where n is the number of clauses and succeeds with high probability to correctly decide 3-Sat.
The exponential time hypothesis is that no algorithm can solve 3-Sat faster than .
A variant of the 3-satisfiability problem is the one-in-three 3SAT (also known variously as 1-in-3 SAT and exactly-1 3SAT) is an NP-complete problem. The problem is a variant of the 3-satisfiability problem (3SAT). Like 3SAT, the input instance is a collection of clauses, where each clause consists of exactly three literals, and each literal is either a variable or its negation. The one-in-three 3SAT problem is to determine whether there exists a truth assignment to the variables so that each clause has exactly one true literal (and thus exactly two false literals). (In contrast, ordinary 3SAT requires that every clause has at least one true literal.)
One-in-three 3SAT is listed as NP-complete problem LO4 in the standard reference, Computers and Intractability: A Guide to the Theory of NP-Completeness by Michael R. Garey and David S. Johnson. It was proved to be NP-complete by Thomas J. Schaefer as a special case of Schaefer's dichotomy theorem, which asserts that any problem generalizing Boolean satisfiability in a certain way is either in the class P or is NP-complete.
Schaefer gives a construction allowing an easy polynomial-time reduction from 3SAT to one-in-three 3SAT. Let "(x or y or z)" be a clause in a 3CNF formula. Add six new boolean variables a, b, c, d, e, and f, to be used to simulate this clause and no other. Let R(u,v,w) be a predicate that is true if and only if exactly one of the booleans u, v, and w is true. Then the formula "R(x,a,d) and R(y,b,d) and R(a,b,e) and R(c,d,f) and R(z,c,false)" is satisfiable by some setting of the new variables if and only if at least one of x, y, or z is true. We may thus convert any 3SAT instance with m clauses and n variables into a one-in-three 3SAT instance with 5m clauses and n + 6m variables.
Another reduction involves only four new variables and three clauses: .
To prove that must exist, one first express as product of maxterms, then show that
Note the left side is evaluated true if and only if the right hand side is one-in-three 3SAT satisfied. The other variables are newly added variables that don't exist in any expression.
The one-in-three 3SAT problem is often used in the literature as a known NP-complete problem in a reduction to show that other problems are NP-complete.
A clause is Horn if it contains at most one positive literal. Such clauses are of interest because they are able to express implication of one variable from a set of other variables. Indeed, one such clause can be rewritten as , that is, if are all true, then y needs to be true as well.
The problem of deciding whether a set of Horn clauses is satisfiable is in P. This problem can indeed be solved by a single step of the Unit propagation, which produces the single minimal model of the set of Horn clauses (w.r.t. the set of literal assigned to true).
A generalization of the class of Horn formulae is that of renamable-Horn formulae, which is the set of formulae that can be placed in Horn form by replacing some variables with their respective negation. Checking the existence of such a replacement can be done in linear time; therefore, the satisfiability of such formulae is in P as it can be solved by first performing this replacement and then checking the satisfiability of the resulting Horn formula.
Another special case is the class of problems where each clause only contains exclusive or operators. This is in P, since an XOR-SAT formula is a system of linear equations mod 2, and can be solved by Gaussian elimination.
Schaefer's dichotomy theorem
The restrictions above (CNF, 2CNF, 3CNF, Horn, XOR-SAT) bound the considered formulae to be conjunction of subformulae; each restriction states a specific form for all subformulae: for example, only binary clauses can be subformulae in 2CNF.
Schaefer's dichotomy theorem states that, for any restriction to Boolean operators that can be used to form these subformulae, the corresponding satisfiability problem is in P or NP-complete. The membership in P of the satisfiability of 2CNF, Horn, and XOR-SAT formulae are special cases of this theorem.
As mentioned briefly above, though the problem is NP-complete, many practical instances can be solved much more quickly. Many practical problems are actually "easy", so the SAT solver can easily find a solution, or prove that none exists, relatively quickly, even though the instance has thousands of variables and tens of thousands of constraints. Other much smaller problems exhibit run-times that are exponential in the problem size, and rapidly become impractical. Unfortunately, there is no reliable way to tell the difficulty of the problem without trying it. Therefore, almost all SAT solvers include time-outs, so they will terminate even if they cannot find a solution. Finally, different SAT solvers will find different instances easy or hard, and some excel at proving unsatisfiability, and others at finding solutions. All of these behaviors can be seen in the SAT solving contests.
Extensions of SAT
An extension that has gained significant popularity since 2003 is Satisfiability modulo theories (SMT) that can enrich CNF formulas with linear constraints, arrays, all-different constraints, uninterpreted functions, etc. Such extensions typically remain NP-complete, but very efficient solvers are now available that can handle many such kinds of constraints.
The satisfiability problem becomes more difficult (PSPACE-complete) if we allow both "for all" and "there exists" quantifiers to bind the Boolean variables. An example of such an expression would be:
SAT itself uses only quantifiers. If we allow only quantifiers, it becomes the Co-NP-complete tautology problem. If we allow both, the problem is called the quantified Boolean formula problem (QBF), which can be shown to be PSPACE-complete. It is widely believed that PSPACE-complete problems are strictly harder than any problem in NP, although this has not yet been proved.
A number of variants deal with the number of variable assignments making the formula true. Ordinary SAT asks if there is at least one such assignment. MAJSAT, which asks if the majority of all assignments make the formula true, is complete for PP, a probabilistic class. The problem of how many variable assignments satisfy a formula, not a decision problem, is in #P. UNIQUE-SAT is the problem of determining whether a formula has exactly one assignment, is complete for US. When it is known that the problem has at least one assignment or no assignments, the problem is called UNAMBIGOUS-SAT. Although this problem seems easier, it has been shown that if there is a practical (randomized polynomial-time) algorithm to solve this problem, then all problems in NP can be solved just as easily.
The maximum satisfiability problem, an FNP generalization of SAT, asks for the maximum number of clauses which can be satisfied by any assignment. It has efficient approximation algorithms, but is NP-hard to solve exactly. Worse still, it is APX-complete, meaning there is no polynomial-time approximation scheme (PTAS) for this problem unless P=NP.
An algorithm which correctly answers if an instance of SAT is solvable can be used to find a satisfying assignment. First, the question is asked on formula . If the answer is "no", the formula is unsatisfiable. Otherwise, the question is asked on , i.e. the first variable is assumed to be 0. If the answer is "no", it is assumed that , otherwise . Values of other variables are found subsequently.
This property is used in several theorems in complexity theory:
Algorithms for solving SAT
There are two classes of high-performance algorithms for solving instances of SAT in practice: the conflict-driven clause learning algorithm, which can be viewed as a modern variant of the DPLL algorithm (well known implementation include Chaff, GRASP) and stochastic local search algorithms, such as WalkSAT.
A DPLL SAT solver employs a systematic backtracking search procedure to explore the (exponentially sized) space of variable assignments looking for satisfying assignments. The basic search procedure was proposed in two seminal papers in the early 60s (see references below) and is now commonly referred to as the Davis–Putnam–Logemann–Loveland algorithm ("DPLL" or "DLL"). Theoretically, exponential lower bounds have been proved for the DPLL family of algorithms.
In contrast, randomized algorithms like the PPSZ algorithm by Paturi, Pudlak, Saks, and Zani set variables in a random order according to some heuristics, for example bounded-width resolution. If the heuristic can't find the correct setting, the variable is assigned randomly. The PPSZ algorithm has a runtime of for 3-SAT with a single satisfying assignment. Currently this is the best known runtime for this problem. In the setting with many satisfying assignments the randomized algorithm by Schöning has a better bound.
Modern SAT solvers (developed in the last ten years) come in two flavors: "conflict-driven" and "look-ahead". Conflict-driven solvers augment the basic DPLL search algorithm with efficient conflict analysis, clause learning, non-chronological backtracking (aka backjumping), as well as "two-watched-literals" unit propagation, adaptive branching, and random restarts. These "extras" to the basic systematic search have been empirically shown to be essential for handling the large SAT instances that arise in electronic design automation (EDA). Look-ahead solvers have especially strengthened reductions (going beyond unit-clause propagation) and the heuristics, and they are generally stronger than conflict-driven solvers on hard instances (while conflict-driven solvers can be much better on large instances which actually have an easy instance inside).
Modern SAT solvers are also having significant impact on the fields of software verification, constraint solving in artificial intelligence, and operations research, among others. Powerful solvers are readily available as free and open source software. In particular, the conflict-driven MiniSAT, which was relatively successful at the 2005 SAT competition, only has about 600 lines of code. An example for look-ahead solvers is march_dl, which won a prize at the 2007 SAT competition.
Certain types of large random satisfiable instances of SAT can be solved by survey propagation (SP). Particularly in hardware design and verification applications, satisfiability and other logical properties of a given propositional formula are sometimes decided based on a representation of the formula as a binary decision diagram (BDD).
Propositional satisfiability has various generalisations, including satisfiability for quantified Boolean formula problem, for first- and second-order logic, constraint satisfaction problems, 0-1 integer programming, and maximum satisfiability problem.
- Schaefer, Thomas J. (1978). "The complexity of satisfiability problems". Proceedings of the 10th Annual ACM Symposium on Theory of Computing. San Diego, California. pp. 216–226. doi:10.1145/800133.804350.
- "The international SAT Competitions web page". Retrieved 2007-11-15.
- "An improved exponential-time algorithm for k-SAT", Paturi, Pudlak, Saks, Zani
||This article includes a list of references, but its sources remain unclear because it has insufficient inline citations. (December 2010)|
References are ordered by date of publication:
- Davis, M.; Putnam, H. (1960). "A Computing Procedure for Quantification Theory". Journal of the ACM 7 (3): 201. doi:10.1145/321033.321034.
- Davis, M.; Logemann, G.; Loveland, D. (1962). "A machine program for theorem-proving". Communications of the ACM 5 (7): 394–397. doi:10.1145/368273.368557.
- Cook, S. A. (1971). "The complexity of theorem-proving procedures". Proceedings of the 3rd Annual ACM Symposium on Theory of Computing: 151–158. doi:10.1145/800157.805047.
- Michael R. Garey and David S. Johnson (1979). Computers and Intractability: A Guide to the Theory of NP-Completeness. W.H. Freeman. ISBN 0-7167-1045-5. A9.1: LO1 – LO7, pp. 259 – 260.
- Marques-Silva, J. P.; Sakallah, K. A. (1999). "GRASP: a search algorithm for propositional satisfiability". IEEE Transactions on Computers 48 (5): 506. doi:10.1109/12.769433.
- Marques-Silva, J.; Glass, T. (1999). "Combinational equivalence checking using satisfiability and recursive learning". Design, Automation and Test in Europe Conference and Exhibition, 1999. Proceedings (Cat. No. PR00078). p. 145. doi:10.1109/DATE.1999.761110. ISBN 0-7695-0078-1.
- R. E. Bryant, S. M. German, and M. N. Velev, Microprocessor Verification Using Efficient Decision Procedures for a Logic of Equality with Uninterpreted Functions, in Analytic Tableaux and Related Methods, pp. 1–13, 1999.
- Schoning, T. (1999). A probabilistic algorithm for k-SAT and constraint satisfaction problems. p. 410. doi:10.1109/SFFCS.1999.814612.
- Moskewicz, M. W.; Madigan, C. F.; Zhao, Y.; Zhang, L.; Malik, S. (2001). "Chaff". Proceedings of the 38th conference on Design automation - DAC '01. p. 530. doi:10.1145/378239.379017. ISBN 1581132972.
- Clarke, E.; Biere, A.; Raimi, R.; Zhu, Y. (2001). Formal Methods in System Design 19: 7. doi:10.1023/A:1011276507260.
- Gi-Joon Nam; Sakallah, K. A.; Rutenbar, R. A. (2002). "A new FPGA detailed routing approach via search-based Boolean satisfiability". IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems 21 (6): 674. doi:10.1109/TCAD.2002.1004311.
- Giunchiglia, E.; Tacchella, A. (2004). Giunchiglia, Enrico; Tacchella, Armando, eds. 2919. doi:10.1007/b95238. Missing or empty
- Babic, D.; Bingham, J.; Hu, A. J. (2006). "B-Cubing: New Possibilities for Efficient SAT-Solving". IEEE Transactions on Computers 55 (11): 1315. doi:10.1109/TC.2006.175.
- Rodriguez, C.; Villagra, M.; Baran, B. (2007). "Asynchronous team algorithms for Boolean Satisfiability". 2007 2nd Bio-Inspired Models of Network, Information and Computing Systems. p. 66. doi:10.1109/BIMNICS.2007.4610083.
- Carla P. Gomes, Henry Kautz, Ashish Sabharwal, Bart Selman (2008). "Satisfiability Solvers". In Frank Van Harmelen, Vladimir Lifschitz, Bruce Porter. Handbook of knowledge representation. Foundations of Artificial Intelligence 3. Elsevier. pp. 89–134. doi:10.1016/S1574-6526(07)03002-7. ISBN 978-0-444-52211-5.
More information on SAT:
- WinSAT v2.04: A Windows-based SAT application made particularly for researchers.
- The MiniSAT Solver
- Fast SAT Solver - simple but fast implementation of SAT solver based on genetic algorithms
International Conference on Theory and Applications of Satisfiability Testing:
SAT solving in general:
Evaluation of SAT solvers: | http://en.wikipedia.org/wiki/Boolean_satisfiability_problem | 13 |
Decision making is the cognitive process of selecting a course of action from among multiple alternatives. Common examples include shopping and deciding what to eat. Decision making is said to be a psychological construct. This means that although we can never "see" a decision, we can infer from observable behaviour that a decision has been made; therefore we conclude that a psychological event we call "decision making" has occurred. It is a construction that imputes commitment to action: based on observable actions, we assume that people have made a commitment to effect the action.
In general there are three ways of analysing consumer buying decisions. They are:
- Economic models - These models are largely quantitative and are based on assumptions of rationality and near-perfect knowledge. The consumer is assumed to maximize their utility (see consumer theory); game theory can also be used in some circumstances.
- Psychological models - These models concentrate on psychological and cognitive processes such as motivation and need reduction. They are qualitative rather than quantitative and build on sociological factors like cultural influences and family influences.
- Consumer behaviour models - These are practical models used by marketers. They typically blend both economic and psychological models.
Nobel laureate Herbert Simon sees economic decision making as a vain attempt to be rational. He claims (in 1947 and 1957) that a complete rational analysis of a decision would be immensely complex, and that people's information-processing ability is very limited. The assumption of a perfectly rational economic actor is therefore unrealistic. Often we are influenced by emotional and non-rational considerations, and when we try to be rational we are at best only partially successful.
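To make the contrast concrete, here is a minimal sketch (in Python) of the two views of the decision maker: the idealised "economic" actor who evaluates every alternative and picks the utility maximum, versus a bounded actor who, in the spirit of Simon's wider work on bounded rationality, stops at the first option that is "good enough". The options, their utility values, and the aspiration threshold are invented purely for illustration.

```python
import random

# Two stylised decision rules over the same set of alternatives. The options,
# their utility values, and the aspiration threshold are illustrative only.
random.seed(1)
OPTIONS = [("option_%d" % i, random.uniform(0, 1)) for i in range(1000)]

def maximise(options):
    """The idealised economic actor: evaluate everything, return the best."""
    return max(options, key=lambda o: o[1])

def satisfice(options, aspiration=0.9):
    """A bounded actor: return the first option clearing the aspiration level."""
    for i, (name, utility) in enumerate(options, start=1):
        if utility >= aspiration:
            print(f"satisficer stopped after examining {i} of {len(options)} options")
            return name, utility
    return maximise(options)  # fall back to the overall best if nothing clears the bar

if __name__ == "__main__":
    print("maximiser chooses: ", maximise(OPTIONS))
    print("satisficer chooses:", satisfice(OPTIONS))
```

The only point of the sketch is that the bounded rule examines far fewer alternatives than the exhaustive one; it is not a model of any particular empirical finding.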
Models of buyer decision making
In an early study of the buyer decision process literature, Frank Nicosia (Nicosia, F., 1966; pp. 9-21) identified three types of buyer decision making models: the univariate model (which he called the "simple scheme"), in which only one behavioural determinant is allowed in a stimulus-response type of relationship; the multi-variate model (a "reduced form scheme"), in which numerous independent variables are assumed to determine buyer behaviour; and the system of equations model (a "structural scheme" or "process scheme"), in which numerous functional relations (either univariate or multi-variate) interact in a complex system of equations. He concluded that only this third type of model is capable of expressing the complexity of buyer decision processes. In chapter 7, Nicosia builds a comprehensive model involving five modules. The encoding module includes determinants like "attributes of the brand", "environmental factors", "consumer's attributes", "attributes of the organization", and "attributes of the message". Other modules in the system include consumer decoding, search and evaluation, decision, and consumption.
A general model of the buyer decision process consists of the following steps:
- Want recognition;
- Search of information on products that could satisfy the needs of the buyer;
- Alternative selection;
- Decision-making on buying the product;
- Post-purchase behavior.
There is a range of alternative models, but that of AIUAPR, which most directly links to the steps in the marketing/promotional process, is often seen as the most generally useful. Its stages are:
- AWARENESS - before anything else can happen, the potential customers must become aware that the product or service exists. Thus, the first task must be to gain the attention of the target audience. All the different models agree on this first step: if the audience never hears the message, they will not act on it, no matter how powerful it is.
- INTEREST - it is not sufficient to grab their attention. The message must interest them and persuade them that the product or service is relevant to their needs. The content of the message(s) must therefore be meaningful and clearly relevant to that target audience's needs, and this is where marketing research can come into its own.
- UNDERSTANDING - once an interest is established, the prospective customer must be able to appreciate how well the offering may meet his or her needs, again as revealed by the marketing research. This may be no mean achievement where the copywriter has just fifty words, or ten seconds, to convey everything there is to say about it.
- ATTITUDES - the message must go even further, persuading the reader to adopt a sufficiently positive attitude towards the product or service that he or she will purchase it, albeit as a trial. There is no adequate way of describing how this may be achieved; it is simply down to the magic of the copywriter's art, based on the strength of the product or service itself.
- PURCHASE - all the above stages might happen in a few minutes while the reader is considering the advertisement, in the comfort of his or her favourite armchair. The final buying decision, on the other hand, may take place some time later, perhaps weeks later, when the prospective buyer actually tries to find a shop which stocks the product.
- REPEAT PURCHASE - in most cases this first purchase is best viewed as just a trial purchase. Only if the experience is a success for the customer will it be turned into repeat purchases. These repeats, not the single purchase which is the focus of most models, are where the vendor's focus should be, for these are where the profits are generated. The earlier stages are merely a necessary prerequisite for this.
This is a very simple model, and as such it applies quite generally. Its lesson is that you cannot obtain repeat purchasing without going through the stages of building awareness and then obtaining trial use, which has to be successful. It is a pattern which applies to all repeat-purchase products and services: industrial goods just as much as baked beans. This simple theory is rarely taken any further, to look at the series of transactions which such repeat purchasing implies. The consumer's growing experience over a number of such transactions is often the determining factor in later purchases. All the succeeding transactions are thus interdependent, and the overall decision-making process may accordingly be much more complex than most models allow for.
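As a rough illustration of why the repeat-purchase stage dominates, the following sketch pushes a hypothetical audience through the AIUAPR stages. Every number in it (audience size, conversion rates, repeat rate, margin) is an assumption made up for the example, not data taken from the model.

```python
# Hypothetical AIUAPR funnel. Stage names follow the model described above;
# all figures are invented for illustration.
AUDIENCE = 100_000          # people exposed to the message
CONVERSION = {              # assumed share of people passing each stage
    "awareness": 0.60,
    "interest": 0.40,
    "understanding": 0.70,
    "attitudes": 0.50,
    "purchase": 0.25,
}
REPEAT_PURCHASES_PER_BUYER = 6   # assumed average repeats by a satisfied buyer
MARGIN_PER_PURCHASE = 2.0        # assumed profit per purchase

def funnel(audience: int) -> float:
    """Return the number of first-time (trial) buyers produced by the funnel."""
    remaining = float(audience)
    for stage, rate in CONVERSION.items():
        remaining *= rate
        print(f"{stage:>13}: {remaining:,.0f} people")
    return remaining

if __name__ == "__main__":
    first_buyers = funnel(AUDIENCE)
    trial_profit = first_buyers * MARGIN_PER_PURCHASE
    repeat_profit = first_buyers * REPEAT_PURCHASES_PER_BUYER * MARGIN_PER_PURCHASE
    print(f"Profit from trial purchases:  {trial_profit:,.0f}")
    print(f"Profit from repeat purchases: {repeat_profit:,.0f}")
```

Under these made-up numbers the repeat purchases generate several times the profit of the initial trial purchases, which is the point the paragraph above is making.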
Decision making style
According to Myers (1962), a person's decision making process depends to a significant degree on their cognitive style. Starting from the work of Carl Jung, Myers developed a set of four bi-polar dimensions, the Myers-Briggs Type Indicator. The terminal points on these dimensions are: thinking and feeling; extraversion and introversion; judgement and perception; and sensing and intuition. She claimed that a person's decision making style is based largely on how they score on these four dimensions. For example, someone who scored near the thinking, extraversion, sensing, and judgement ends of the dimensions would tend to have a logical, analytical, objective, critical, and empirical decision making style.
Cognitive and personal biases in decision making
It is generally agreed that biases can creep into our decision making processes, calling into question the correctness of a decision. Below is a list of some of the more common cognitive biases.
- Selective search for evidence - We tend to be willing to gather facts that support certain conclusions but disregard other facts that support different conclusions.
- Premature termination of search for evidence - We tend to accept the first alternative that looks like it might work.
- Conservatism and inertia - Unwillingness to change thought patterns that we have used in the past in the face of new circumstances.
- Experiential limitations - Unwillingness or inability to look beyond the scope of our past experiences; rejection of the unfamiliar.
- Selective perception - We actively screen-out information that we do not think is salient.
- Wishful thinking or optimism - We tend to want to see things in a positive light and this can distort our perception and thinking.
- Recency - We tend to place more attention on more recent information and either ignore or forget more distant information.
- Repetition bias - A willingness to believe what we have been told most often and by the greatest number of different sources.
- Anchoring - Decisions are unduly influenced by initial information that shapes our view of subsequent information.
- Group think - Peer pressure to conform to the opinions held by the group.
- Source credibility bias - We reject something if we have a bias against the person, organization, or group to which the person belongs; we are inclined to accept a statement by someone we like.
- Incremental decision making and escalating commitment - We look at a decision as a small step in a process and this tends to perpetuate a series of similar decisions. This can be contrasted with zero-based decision making.
- Inconsistency - The unwillingness to apply the same decision criteria in similar situations.
- Attribution asymmetry - We tend to attribute our success to our abilities and talents, but we attribute our failures to bad luck and external factors. We attribute others' success to good luck, and their failures to their mistakes.
- Role fulfilment - We conform to the decision making expectations that others have of someone in our position.
- Underestimating uncertainty and the illusion of control - We tend to underestimate future uncertainty because we tend to believe we have more control over events than we really do.
- Faulty generalizations - In order to simplify an extremely complex world, we tend to group things and people. These simplifying generalizations can bias decision making processes.
- Ascription of causality - We tend to ascribe causation even when the evidence only suggests correlation. Just because birds fly to the equatorial regions when the trees lose their leaves, does not mean that the birds migrate because the trees lose their leaves.
- Myers, I. (1962) Introduction to Type: A description of the theory and applications of the Myers-Briggs type indicator, Consulting Psychologists Press, Palo Alto, CA.
- Nicosia, F. (1966) Consumer Decision Processes, Prentice Hall, Englewood Cliffs.
- Simon, H. (1947) Administrative Behaviour, Macmillan, New York (also 2nd edition, 1957).
| http://psychology.wikia.com/wiki/Buyer_decision_processes | 13
28 | Are you confused by terms that educators use? The ASCD Lexicon of Learning might be what you need.
NCLB mandated that states and districts adopt programs and policies supported by scientifically based research. Drawing upon research and an extensive collection of evidence from multiple sources, the Common Core State Standards were developed to reflect the knowledge and skills that young people need for success in college and careers. Those standards impact teachers in several ways, including guiding them "toward curricula and teaching strategies that will give students a deep understanding of the subject and the skills they need to apply their knowledge" (Common Core State Standards Initiative, FAQ section). For many, the standards require changes in how mathematics is taught; thus, they will influence the instructional strategies that educators use. In a standards-based classroom four instructional strategies are key:
Math Methodology is a three-part series on instruction, assessment, and curriculum. Each section contains relevant essays and resources:
Part 1: Math Methodology: Instruction
The Instruction Essay (Page 1 of 3) contains the following subsections:
The Instruction Essay (Page 2 of 3) contains the following subsections:
The Instruction Essay (Page 3 of 3) addresses the needs of students with math difficulties and contains the following subsections:
Math Methodology Instruction Resources also includes resources for special needs students (e.g., hearing and visually impaired, learning disabilities, English language learners).
What does it mean to be mathematically literate and proficient?
According to the National Research Council (2012), "Deeper learning is the process through which a person becomes capable of taking what was learned in one situation and applying it to new situations – in other words, learning for “transfer.” Through deeper learning, students develop expertise in a particular discipline or subject area" (p. 1). As mathematics educators, we want our learners ultimately to be mathematically literate and proficient in mathematics. To achieve this, educators will need to focus on deeper learning and learning for understanding.
Volker Ulm (2011) noted that mathematical literacy involves several competencies:
Developing proficiency, as the National Research Council (2001) pointed out, embodies "expertise, competence, knowledge, and facility in mathematics" and the term mathematical proficiency entails what is "necessary for anyone to learn mathematics successfully" (p. 116). It has five interwoven and interdependent strands:
conceptual understanding—comprehension of mathematical concepts, operations, and relations
procedural fluency—skill in carrying out procedures flexibly, accurately, efficiently, and appropriately
strategic competence—ability to formulate, represent, and solve mathematical problems
adaptive reasoning—capacity for logical thought, reflection, explanation, and justification
productive disposition—habitual inclination to see mathematics as sensible, useful, and worthwhile, coupled with a belief in diligence and one’s own efficacy. (National Research Council, 2001, p. 116)
However, becoming mathematically literate and proficient are ongoing processes. Writing in IAE-pedia, David Moursund and Dick Ricketts (2010) noted that becoming proficient is a matter of developing math maturity, which certainly varies among students and which involves how well they learn and understand the math, how well they can apply their knowledge and skills in a variety of math-related problem-solving situations, and how well they retain that learning over the long term.
While teachers have a role to play in helping students to develop understanding, students also have a role to play in the process, which cannot be overlooked. They must have intrinsic motivation, as in Eric Booth's (2013) words: "Learning can be transformed into understanding only with intrinsic motivation. Learners must make an internal shift; they must choose to invest themselves to truly learn and understand" (p. 23). This kind of motivation involves fulfilling their need for creative engagement, which is where the teacher's role in the design of instruction comes into play.
But why focus on understanding?
Consider the following examples of students' reasoning and misconceptions. Have you ever heard students say (or have you as the teacher said), "To multiply by 10, just add a zero after the number"? Or, "The product of two numbers is always bigger than either one"? How about, "The number with the most digits is the biggest"? Teachers Magazine, with the help of Tim Coulson of the National Numeracy Strategy in England, provided 10 such Maths misconceptions (2006) and suggestions to correct the situation. That article sets the tone for the need to teach mathematics right the first time with a focus on understanding.
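Each of these misconceptions collapses under a single counterexample. The lines below are an illustrative sketch (the particular numbers are ours, not drawn from the Coulson article) of the kind of quick check a teacher might write on the board:

```latex
% Illustrative counterexamples to three common misconceptions
\begin{align*}
  3.6 \times 10 &= 36,    &&\text{whereas ``just adding a zero'' gives } 3.60 = 3.6;\\
  0.5 \times 0.4 &= 0.2,  &&\text{a product smaller than either factor;}\\
  0.1234 &< 2,            &&\text{even though } 0.1234 \text{ has more digits.}
\end{align*}
```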
A focus on understanding is among six key instructional shifts for implementing the Common Core State Standards. Certainly, it is an element of proficiency and literacy. While fluency is among those shifts, with students being "expected to have speed and accuracy with simple calculations," for deep understanding, teachers will be expected to "teach more than “how to get the answer” and instead support students’ ability to access concepts from a number of perspectives so that students are able to see math as more than a set of mnemonics or discrete procedures." Further, teachers will need to ensure that students "demonstrate deep conceptual understanding of core math concepts by applying them to new situations, as well as writing and speaking about their understanding" (EngageNY, 2011).
This is not to negate the role of some memorization in mathematics. Moursund and Ricketts (2010) also noted, "It is well recognized that some rote memory learning is quite important in math education. However, most of this rote learning suffers from a lack of long term retention and from the learner’s inability to transfer this learning to new, challenging problem situation both within the discipline of math and to math-related problem situations outside the discipline of math. Thus, math education (as well as education in other disciplines) has moved in the direction of placing much more emphasis on learning for understanding. There is substantial emphasis on learning some “big ideas” that will last a lifetime" (section 1.1: Math Maturity, para. 1-2).
What does literacy look like in the mathematics classroom?
A central strategy for developing mathematical literacy is "enabling students to find their own independent approaches to learning" (Ulm, 2011, p. 5). According to the Ohio Department of Education (2012), there are multiple ways for developing literacy in the mathematics classroom:
Carpenter, Blanton, Cobb, Franke, Kaput, and McClain (2004) proposed that "there are four related forms of mental activity from which mathematical and scientific understanding emerges: (a) constructing relationships, (b) extending and applying mathematical and scientific knowledge, (c) justifying and explaining generalizations and procedures, and (d) developing a sense of identity related to taking responsibility for making sense of mathematical and scientific knowledge" (pp. 2-3). "Placing students' reasoning at the center of instructional decision making... represents a fundamental challenge to core educational practice" (p. 14).
According to Steve Leinwand and Steve Fleishman (2004), since the 1980s research results "consistently point to the importance of using relational practices for teaching mathematics" (p. 88). Such practices involve explaining, reasoning, and relying on multiple representations that help students develop their own understanding of content. Unfortunately, much instruction begins with instrumental practices involving memorizing and routinely applying procedures and formulas. "In existing research, students who learn rules before they learn concepts tend to score lower than do students who learn concepts first" (p. 88).
The importance of addressing misconceptions using relational practices and multiple representations was made clear when a teacher recently voiced concern about being unable to convince a beginning algebra student that (A + B)² is not A² + B². The following visual helped clarify that (A + B)(A + B) = A² + 2AB + B².
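One way to convey the reasoning behind such a visual is the area model sketched below; it assumes a square of side A + B cut into four rectangular regions, so that the four partial areas account for every term of the expansion:

```latex
% Area model: a square of side A + B split into four rectangular regions
\begin{align*}
  (A + B)^2 &= (A + B)(A + B)\\
            &= A \cdot A + A \cdot B + B \cdot A + B \cdot B &&\text{(one term per region)}\\
            &= A^2 + 2AB + B^2.
\end{align*}
```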
This same discussion brought up a comparison to using such a visual for understanding the typical multiplication algorithm, in which students have been taught to "leave off the zeroes and move each successive row of digits one place to the left when multiplying." Students often have no idea as to why they are doing that. Consider the multiplication problem 31 x 25 and how the distributive property plays a role in the algorithm:
The visual suggests that 31 x 25 = (30 + 1)(20 + 5) = (30 x 20) + (30 x 5) + (1 x 20) + (1 x 5), and that there will be four values (600 + 150 + 20 + 5) to add together after the products are found. As addition can be done in any order, the above might make the transition to the traditional vertical presentation of the algorithm easier to understand.
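The following is a sketch of that vertical layout, assuming the usual alignment of partial products; the left array records each of the four products on its own row, and the right array compresses them into the familiar two-row form:

```latex
% 31 x 25: four partial products (left) and the compressed two-row algorithm (right)
\[
\begin{array}{r}
        31 \\
\times\ 25 \\ \hline
         5 \\  % 1 x 5
       150 \\  % 30 x 5
        20 \\  % 1 x 20
       600 \\  % 30 x 20
\hline
       775
\end{array}
\qquad
\begin{array}{r}
        31 \\
\times\ 25 \\ \hline
       155 \\  % 31 x 5
       620 \\  % 31 x 20, the row usually described as 62 ``moved one place left''
\hline
       775
\end{array}
\]
```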
What are the avenues to understanding?
What goes on in the classroom on a daily basis and over the course of a unit of instruction is key to processing information for understanding. Robert Marzano (2009) identified five avenues to understanding: chunking information into small bites, scaffolding, interacting, pacing, and monitoring. Of those, scaffolding is key to the entire process, as it involves the content of those chunks and their presentation in a logical order. After presenting a chunk of reasonable length, it is important for teachers to pause and allow students to interact with each other. A high rate of interaction among learners is a necessary component for understanding. Monitoring enables teachers to determine if a chunk has been understood before moving on. Pacing, how fast or slow to move through chunks, is not easily pre-determined. It depends on being able to read students' understanding and engagement with the content.
Within the classroom, how teaching is organized also matters. Spacing out learning over time with review and quizzing helps learners retain information over the course of the school year and beyond. According to research, such spacing and exposure to concepts and facts should occur on at least two occasions, separated by several weeks or months. Students will learn more when teachers alternate their demonstration of a worked problem with a similar problem that students do for practice. This helps students to learn problem solving strategies, enables them to transfer those strategies more easily, and to solve problems faster. Student learning is improved if teachers connect abstract ideas and concrete contexts via stories, simulations, hands-on activities, visual representations, real-world problem solving, and so on. Teachers can also enhance learning by using higher order questioning and providing opportunities for students to develop explanations. This ranges from creating units of study that provoke question-asking and discussion to simply having students explain their thinking after solving a problem (Pashler, Bain, Bottge, et al., 2007).
Marzano, Pickering, and Pollock (2001) included nine research-based instructional strategies that have a high probability of enhancing student achievement for all students in all subject areas at all grade levels. The authors caution, however, that instructional strategies are only tools and "should not be expected to work equally well in all situations." They are grouped together into three categories, suggested by Pitler, Hubbell, Kuhn, and Malenoski (2007).
Strategies that provide evidence of learning:
Setting objectives and providing feedback--set a unit goal and help students personalize that goal; use contracts to outline specific goals students should attain and grade they will receive if they meet those goals; use rubrics to help with feedback; provide timely, specific, and corrective feedback; consider letting students lead some feedback sessions.
Reinforcing effort and providing recognition--you might have students keep a weekly log of efforts and achievements with periodic reflections of those. They might even mathematically analyze their data. Find ways to personalize recognition, such as giving individualized awards for accomplishments.
Strategies that help students acquire and integrate learning:
Cues, questions, and advance organizers--these should be highly analytical, should focus on what is important, and are most effective when used before a learning experience.
Nonlinguistic representation--incorporate words and images using symbols to show relationships; use physical models and physical movement to represent information
Summarizing and note taking--provide guidelines for creating a summary; give time to students to review and revise notes; use a consistent format when note taking
Cooperative learning--consider common experiences or interests; vary group sizes and objectives. Core components include positive interdependence, group processing, appropriate use of social skills, face-to-face interaction, and individual and group accountability.
Note: Reinforcing effort from the first category also fits into this category to help students.
Strategies that help students practice, review, and apply learning:
Identifying similarities and differences--graphic forms, such as Venn diagrams or charts, are useful
Homework and practice--vary homework by grade level; keep parent involvement to a minimum; provide feedback on all homework; establish a homework policy; be sure students know the purpose of the homework
- Generating and testing hypotheses--a deductive (e.g., predict what might happen if ...), rather than an inductive, approach works best.
Learn more about how to teach for mathematical literacy.
Each month you can freely download an issue in the series Towards New Teaching in Mathematics from SINUS International (Germany). These are in English and great for middle and high school. Issues 1-8 address:
So what can you do to put research into practice?
Educators should have one goal in mind in everything they do: achievement of learners, which includes their ability to transfer knowledge to new situations. To this end, research-based instructional strategies focusing on deep learning should be used, as suggested by the National Research Council (2012):
According to Douglas Reeves (2006), "Schools that have improved achievement and closed the equity gap engage in holistic accountability, extensive nonfiction writing, frequent common assessments, decisive and immediate interventions, and constructive use of data" (p. 90). Such "accountability includes actions of adults, not merely the scores of students" (p. 83). Among those actions of adults is to assist students with gaining proficiency in a range of their own academic learning skills and behaviors. Writing in relation to the new Common Core State Standards, David Conley (2011) emphasized:
These behaviors include goal setting; study skills, both individually and in groups; self-reflection and the ability to gauge the quality of one's work; persistence with difficult tasks; a belief that effort trumps aptitude; and time-management skills. These behaviors may not be tested directly on common assessments, but without them, students are unlikely to be able to undertake complex learning tasks or take control of their own learning. (p. 20)
Assessments are not just summative, but also formative, occurring at least quarterly with immediate feedback. Beyond a score, feedback contains detailed item and cluster analysis, and is used to inform future instruction. While individual class teachers might not be able to change student schedules to provide double classes in math or literacy for students in need, they can provide such interventions as homework supervision, breaking projects down into incremental steps, time-management strategies, project-management strategies, study skills, and help with reading the textbook, all of which are among immediate and decisive intervention strategies. An analysis of data in a constructive manner would reveal effective professional practices and lead to discussion on how they might be replicated (Reeves, 2006).
Educators in all instructional settings who put research into practice should apply "The Seven Principles of Good Practice in Undergraduate Education." Such practice emphasizes "active learning, time management, student-faculty contact, prompt feedback, high expectations, diverse learning styles, and cooperation among students" (Garon, 2000, para. 1). However, to reach an entire class, educators need to create an opportunity for full participation and cooperation among students.
Putting research into practice also involves building a community of learners who can dialogue effectively about mathematics, and "do" mathematics. Much depends on the teacher's ability to assist learners with developing thinking skills, which includes incorporating writing and journaling in math classes as a way to demonstrate thinking, and their ability to question, provide feedback, use varied instructional approaches, assist learners with reading math texts and doing homework, and use tools and manipulatives, all of which help concept development. Elaboration of those follows.
Embed Thinking Skills within the Curriculum
Consider learning some basic facts about the brain and the geography of thinking.
Visit The National Institute of Neurological Disorders and Stroke for an introduction to the brain and how it works.
See the table of Thinking and Learning Characteristics of Young People with suggested teaching strategies, presented at PUMAS, the online journal of practical uses of math and science. The table is subdivided into sections for grades K-2, 3-5, and 6-8.
Teaching critical thinking is very hard to do, but there are strategies consistent with research to help learners acquire the ability to think critically. According to Daniel Willingham (2007), a professor of cognitive psychology, "the mental activities that are typically called critical thinking are actually a subset of three types of thinking: reasoning, making judgments and decisions, and problem solving" (p. 11). Studies have revealed that:
First, critical thinking (as well as scientific thinking and other domain-based thinking) is not a skill. There is not a set of critical thinking skills that can be acquired and deployed regardless of context. Second, there are metacognitive strategies that, once learned, make critical thinking more likely. Third, the ability to think critically (to actually do what the metacognitive strategies call for) depends on domain knowledge and practice. (p. 17)
Rupert Wegerif (2002) noted, “[t]he emerging consensus, supported by some research evidence, is that the best way to teach thinking skills is not as a separate subject but through ‘infusing’ thinking skills into the teaching of content areas” (p. 3). In agreement, Willingham (2007) added that when learners "don't have much subject matter knowledge, introducing a concept by drawing on student experiences can help" (p. 18). Further, "Learners need to know what the thinking skills are that they are learning and these need to be explicitly modeled, drawn out and re-applied in different contexts. The evidence also suggests that collaborative learning improves the effectiveness of most activities" (Wegerif, 2002, p. 3).
Not only must the strategies be made explicit, but practice is an essential element. Willingham (2007) suggested:
The first time (or several times) the concept is introduced, explain it with at least two different examples (possibly examples based on students’ experiences ...), label it so as to identify it as a strategy that can be applied in various contexts, and show how it applies to the course content at hand. In future instances, try naming the appropriate critical thinking strategy to see if students remember it and can figure out how it applies to the material under discussion. With still more practice, students may see which strategy applies without a cue from you. (p. 18)
So what are valued thinking skills that might be embedded within a curriculum? Among those are information processing skills, reasoning skills, enquiry skills, creating thinking skills, and evaluation skills. Wegerif (2002) elaborated on each of those:
Information-processing skills: These enable pupils to locate and collect relevant information, to sort, classify, sequence, compare and contrast, and to analyze part/whole relationships.
Reasoning skills: These enable pupils to give reasons for opinions and actions, to draw inferences and make deductions, to use precise language to explain what they think, and to make judgments and decisions informed by reasons or evidence.
Enquiry skills: These enable pupils to ask relevant questions, to pose and define problems, to plan what to do and how to research, to predict outcomes and anticipate consequences, and to test conclusions and improve ideas.
Creative thinking skills: These enable pupils to generate and extend ideas, to suggest hypotheses, to apply imagination, and to look for alternative innovative outcomes.
Evaluation skills: These enable pupils to evaluate information, to judge the value of what they read, hear and do, to develop criteria for judging the value of their own and others’ work or ideas, and to have confidence in their judgments. (pp. 4-5)
Donald Treffinger (2008) distinguished between creative thinking and critical thinking, stating that effective problem solvers need both, as they are actually complementary. The former is used to generate options and the latter to focus thinking. Each form of thinking has associated guidelines and tools, illustrated in the following table.
Guidelines and Tools for Creative vs. Critical Thinking
| | Creative thinking (generating options) | Critical thinking (focusing options) |
|---|---|---|
| Guidelines | Defer judgment, seek quantity, encourage all possibilities, look for new combinations that might be stronger than any of their parts. | Use affirmative judgment as opposed to being critical, be deliberate--consider the purpose of focusing, consider novelty and not only what has worked in the past, stay on course. |
| Tools | Brainstorming | Hits and Hot Spots--selecting promising options and grouping them in meaningful ways |
| | Force-Fitting--forcing a relationship between two seemingly unrelated ideas | ALoU--acronym for what to consider when refining and developing options: A - Advantages, L - Limitations, o - ways to overcome limitations, U - Unique features |
| | Attribute Listing | PCA or Paired Comparison Analysis--used to rank options or set priorities |
| | SCAMPER--acronym for how to apply a checklist of action words to look for new possibilities: S - Substitute, C - Combine, A - Adapt, M - Magnify or Minify, P - Put to other uses, E - Eliminate, R - Reverse or Rearrange | Sequence: SML--sequence short, medium, or long-term actions |
| | Morphological Matrix--identify key parameters of the task | Create Evaluation Matrix--consider all options and possibilities |
Adapted from Treffinger, D. (2008, Summer). Preparing creative and critical thinkers [online]. Educational Leadership, 65(10). Retrieved from http://www.ascd.org/publications/educational_leadership/summer08/vol65/num10/toc.aspx
Research scientists Derek Cabrera and Laura Colosi (Wheeler, 2010) identified yet another approach to teaching thinking skills, the DSRP method, which is tied to four universal patterns that structure knowledge: Distinctions, Systems, Relationships, and Perspectives.
DSRP focuses on making teachers and students more metacognitive and can be used in any standards-based curriculum. Cabrera and Colosi believe the system works because it is so simple.
Incorporate Writing and Journaling in Math
As in other curricular areas, writing and journaling in math class helps students to organize and clarify their thoughts and to reflect on their understanding of concepts. Reeves (2006) noted, "The most effective writing is nonfiction--description, analysis, and persuasion with evidence" (p. 85). Writing includes "editing, collaborative scoring, constructive teacher feedback, and rewriting" (p. 84) in all subject areas, including math.
Principles and Standards for School Mathematics (NCTM, 2000) call for students to communicate about mathematics. Writing across the grades preK-12 is encouraged and should enable all students to--
Port Angeles School District (WA) emphasizes writing in math, as illustrated by their Sample Math Questions for the Washington Assessment of Student Learning (WASL) assessments. Problems by grade level (K-8 and High School) presented in the web site are recommended for student use to communicate (in written form) understanding of math content. The series of problems are grouped by number sense, measurement, geometry, algebraic sense, probability and statistics, logic, and problem solving strategies.
Students also need to learn how to revise their writing. Strategies include using graphic organizers to plan writing exercises, writing on every other line so that there is room for revision, and then rereading a response to see if it makes sense and responds to the topic of the exercise. See for example: Graphic Organizers from Enhance Learning with Technology Web site. What are they? Why use them? How to use them? The site includes numerous links on the topic, examples, and software possibilities to assist with the endeavor.
Marilyn Burns (2004) stated that writing assignments fall into four categories: keeping journals, solving math problems, explaining concepts and ideas, and writing about learning processes. Teachers might provide initial statements, prompts, and guidelines for topics of the day for when students write to a journal. Students might write about their reasoning and problem solving process as they solve math problems. They might comment on why their solution makes sense mathematically and as a real-life solution. When explaining a concept or idea, students might also provide an example. Some writing might include commentary about the general nature of the learning activity, such as what they liked the most or least about a learning unit, or their reactions to working alone or in a group. They might show their creative side to develop a game or learning activity, or compose directions for others on how to do one of their own already-completed math activities.
To illustrate Burns' (2004) ideas, Marian Small (2010) suggested providing parallel tasks to learners as a way to differentiate math instruction. Students might choose between two problems, which differ in difficulty. However, regardless of choice, teachers might pose a set of common questions for all students to answer. Such questions focus on common elements. For example, one question might be a reflection on the estimation of the answer to the problem itself before calculating the exact answer. Another might ask students to explain why a particular operation(s) is needed to solve it, or what would happen if one number was changed, or how mental math might be used, or to explain the exact strategy actually used to solve the problem (p. 32). Students might then write answers to such questions in a journal.
Among Burns' (2004) other strategies to incorporate writing in math are to have students discuss their ideas before writing, post useful vocabulary on a class chart, and use students' writing in subsequent instruction. Posting vocabulary reminds students to use the language of math to express their ideas. Above all, students should know that writing supports their learning and helps you to assess their progress. They should share their writing in pairs or small groups so that they can get alternative viewpoints or bring to light conflicting understanding. This latter provides a springboard for further discussion.
Individuals interested in learning more about how to use writing and journaling in math classes might consult the following. You'll also find resources for products to assist with writing in math:
Improve Questioning and Dialogue
The Common Core State Standards (2010) for Mathematical Practice call for students to "Construct viable arguments and critique the reasoning of others" (Standard 3). In addressing this standard through questioning and dialogue, teachers facilitate interactive participation to promote their students' conceptual understanding and problem solving abilities. As students communicate with others and present their ideas, the discourse process can also help them to "Attend to precision" as they "try to use clear definitions in discussion with others and in their own reasoning" (Standard 6). To improve questioning and dialogue, both the teacher's role and the students' role should be considered.
Participating in a mathematical community through discourse is as much a part of learning mathematics as a conceptual understanding of the mathematics itself. As students learn to make and test conjectures, question, agree, or disagree about problems, they are learning the essence of what it means to do mathematics. If all students are to be engaged, teachers must foster classroom discourse by providing a welcoming community, establishing norms, using supporting motivational discourse, and pressing for conceptual understanding. (Stein, 2007, p. 288)
The process of building a community begins with what the teacher says and the way teachers pose questions, as this affects the richness of a discussion. According to Paul and Elder (1997), "The oldest, and still the most powerful, teaching tactic for fostering critical thinking is Socratic teaching. In Socratic teaching we focus on giving students questions, not answers." Mastering the process of Socratic questioning is highly disciplined:
The Socratic questioner acts as the logical equivalent of the inner critical voice which the mind develops when it develops critical thinking abilities. The contributions from the members of the class are like so many thoughts in the mind. All of the thoughts must be dealt with and they must be dealt with carefully and fairly. By following up all answers with further questions, and by selecting questions which advance the discussion, the Socratic questioner forces the class to think in a disciplined, intellectually responsible manner, while yet continually aiding the students by posing facilitating questions. (Paul & Elder, 1997)
Paul and Elder (1997) noted multiple dimensions for questioning and dialogue:
We can question goals and purposes. We can probe into the nature of the question, problem, or issue that is on the floor. We can inquire into whether or not we have relevant data and information. We can consider alternative interpretations of the data and information. We can analyze key concepts and ideas. We can question assumptions being made. We can ask students to trace out the implications and consequences of what they are saying. We can consider alternative points of view. (Paul & Elder, 1997)
However, to promote thinking and understanding for all learners, the effective questioner also needs to "draw as many students as possible into the discussion," and "periodically summarize what has and what has not been dealt with and/or resolved" (Paul & Elder, 1997). Unfortunately, this does not always occur in classrooms. Too often, math teachers tend to look for one right answer, which leads to one of the biggest problems in the art of questioning--teachers do not have an appropriate wait-time between posing the question and getting the answer. Students need time to process the question and reflect on it before answering. When there is insufficient time given, teachers tend to answer their own question, or will call on students who they are relatively certain will have that answer. Thus, the whole class is not involved.
Discourse is addressed in NCTM's (1991) Professional Standards for Teaching Mathematics. Teachers orchestrate discourse by "posing questions and tasks that elicit, engage, and challenge each student's thinking" (Standard 2). The art of questioning involves knowing when to listen, when to ask students to clarify and justify their ideas, when to take ideas that students present and pursue those in depth, and when and how to convert ideas into math notation. Teachers must decide when to add their own input and when to let students struggle with difficulties, and must monitor and encourage participation (Standard 2). They enhance discourse with tasks that employ computers, calculators, and other technology; concrete materials used as models; pictures, diagrams, tables, and graphs; invented and conventional terms and symbols; metaphors, analogies, and stories; written hypotheses, explanations and arguments; and oral presentations and dramatizations (Standard 4). NCTM provides a two-part collection of tips: Asking Good Questions and Promoting Discourse.
In his Questioning Toolkit, Jamie McKenzie listed 17 types of questions and elaborated on their role in addressing the essential questions related to a unit of study. Among those are organizing, elaborating, divergent, subsidiary, probing, clarification, strategic, sorting/sifting, hypothetical, planning, unanswerable, and irrelevant (McKenzie, 1997). Unfortunately, too often teachers unskilled in the art of questioning will pose questions that involve "only simple processes like recognition, rote memory, or selective recall to formulate an answer." Such cognitive-memory questions are at the lowest level of Gallagher and Ascher's Questioning Taxonomy: cognitive-memory, convergent, divergent, and evaluative questions (Vogler, 2008, Gallagher and Ascher's Questioning Taxonomy section).
As in the online learning environment, the richest discussions will come from higher order open-ended questions (i.e., divergent or evaluative questions), as opposed to centering or closed-ended questions (i.e., cognitive-memory or convergent questions), and then probing follow-up questions (Muilenburg & Berge, 2000). Open ended questions also better involve the whole class and thus enable teachers to better differentiate instruction. Marian Small (2010) suggested four strategies on how one might do this in the mathematics classroom:
Teachers should use answers to a question to help formulate the next question, enabling questions to build upon each other. Kenneth Vogler (2008) suggested how sequencing and patterns can be accomplished:
Extending and Lifting--involves asking a series of questions (extending) at the same cognitive level, then asking a question at the next higher level (lifting).
Circular Path--ask an initial question (this one perhaps was not answered) followed by a series of questions leading back to the first one.
Same Path--all questions are asked at the same level, typically at a lower level (e.g., a series of "what is ..." questions).
Narrow to Broad--lower-level, specific questions are followed by higher-level, general questions.
Broad to Narrow--lower-level, general questions are followed by higher-level, specific questions.
A Backbone of Questions with Relevant Digressions--the series of questions relate to the topic of discussion, rather than focus on a particular cognitive level.
Likewise, students have a role in discourse. The art of questioning can be introduced to them as early as Kindergarten. They, too, must listen, initiate questions and problems, and respond to others; use a variety of tools to explore examples and counterexamples; and convince themselves and others of the validity of representations, solutions, conjectures, and answers. They must rely on evidence and argument to determine validity (NCTM, 1991, Teaching Standard 3).
New teachers, and some of us veterans, might have difficulty in getting students to discuss mathematics in class. You will find helpful suggestions for discussion in How to Get Students to Talk in Class from Stanford University's Center for Teaching and Learning. Among those are to decentralize responses to you as teacher by encouraging learners to direct them specifically to others in the class, share discussion authority with student facilitators, ask open-ended questions, give students time to think and perhaps brainstorm answers to questions with a classmate, be encouraging to those who take risks to answer even if the answer was incorrect, use strategic body language, take notes on student responses to help summarize views later or keep discussion moving, and use active learning strategies.
Consider also the role that new technology tools can play in increasing dialogue about mathematics. These might take the form of wikis, podcasts, blogs, or voting options (known as clickers) that often come with interactive whiteboards. Students might use their classroom wiki to create their own textbook with group understandings of various topics, or for collaborative problem solving, projects, applications of math in everyday life, and so on. They might create podcasts in which they vocalize understandings individually or as a group to share with others. For more on the pedagogic value of podcasts and wikis, see Wiki Pedagogy by Renée Fountain. Blogs would be useful for monitoring individual contributions of learners in discussion on a variety of topics. Their commentaries are revealed in reverse chronological order (i.e., the most recent is listed first). Marzano (2009) noted that whiteboard voting technologies "allow students to electronically cast their vote regarding the correct answer to a question. Their responses are immediately displayed on a pie chart or bar graph, enabling teacher and students to discuss the different perceptions of the correct answer" (p. 87).
For more on podcasts and blogs for learning, read articles by Patricia Deubel (2007): Podcasts: Where's the learning? and Moderating and ethics for the classroom instructional blog.
Another key to successful instruction is effective feedback and reinforcement. However, strictly speaking, feedback is not advice, praise, grades or evaluation, as "none of these provide the descriptive information that students need" about their efforts to reach a goal (Wiggins, 2012, p. 11).
Feedback should be clearly understood, timely, immediately useable by students, consistent, comprehensive, supportive, and valued (Garon, 2000). "When anyone is trying to learn, feedback about the effort has three elements: recognition of the desired goal, evidence about present position, and some understanding of a way to close the gap between the two" (Sadler, in Black & Wiliam, 1998, Self Assessment by Pupils section).
Jan Chappuis (2012) provided the following five characteristics of effective feedback:
Everyone makes mistakes. That is, sometimes we do things that are uncharacteristic of work we might have done in the past and which we might be able to correct ourselves through greater attention. So, in providing corrective feedback, we should focus on true errors, rather than pointing out all mistakes. True errors "occur because of a lack of knowledge" and fall into four broad categories, according to Douglas Fisher and Nancy Frey (2012):
David Nicol and Debra Macfarlane-Dick (n.d.) provided additional principles of good feedback, which are drawn from their formative assessment model and review of research literature:
Use Varied Instructional Approaches
Putting research into practice involves teaching for understanding by using a variety of instructional approaches. James Hiebert and Douglas Grouws (2009) stated that "conceptual understanding--the construction of meaningful relationships among mathematical facts, procedures, and ideas; and skill efficiency--the rapid, smooth, and accurate execution of mathematical procedures" are "central to mathematics learning and have often competed for attention" (p. 10). While teachers might wrestle with selecting effective instructional methods for increasing learning, an important point to remember is that "particular methods are not, in general, effective or ineffective. Instructional methods are effective for something" (p. 10). The key is to "balance these two approaches, with a heavier emphasis on conceptual understanding" (p. 11).
Teachers also need to remember that varying instructional approaches is part of differentiated instruction. The Rochester Institute of Technology (2009) noted how a mix of strategies might benefit visual, auditory, and kinesthetic learners. Visual learners appreciate lessons with graphics, illustrations, and demonstrations. Auditory learners might learn best from lectures and discussions. Kinesthetic learners process new information best when it can be touched or manipulated; thus, for this group of learners, written assignments, note taking, examination of objects, and participation in activities are valued strategies to consider.
Teachers might question if their approach should be more teacher-centered or more student-directed. The National Mathematics Advisory Panel (2008) noted, "High-quality research does not support the exclusive use of either approach" (p. 45). The terms themselves are not uniquely defined with "teacher-directed instruction ranging from highly scripted direct instruction approaches to interactive lecture styles, and with student-centered instruction ranging from students having primary responsibility for their own mathematics learning to highly structured cooperative groups" (p. 45). Ball, Ferrini-Mundy, Kilpatrick, Milgram, Schmid, and Schaar (2005) expressed:
Students can learn effectively via a mixture of direct instruction, structured investigation, and open exploration. Decisions about what is better taught through direct instruction and what might be better taught by structuring explorations for students should be made on the basis of the particular mathematics, the goals for learning, and the students' present skills and knowledge. For example, mathematical conventions and definitions should not be taught by pure discovery. Correct mathematical understanding and conclusions are the responsibility of the teacher. Making good decisions about the appropriate pedagogy to use depends on teachers having solid knowledge of the subject. (Areas of Agreement section)
Teachers should exercise caution if students are to use a discovery approach to learning. Discovery learning is a form of partially guided instruction. Partially guided instruction is known by other names, including "problem-based learning, inquiry learning, experiential learning, and constructivist learning" (Clark, Kirschner, & Sweller, 2012, p. 7).
According to Alfieri, Brooks, Aldrich, and Tenenbaum (2011), a review of literature would suggest that "discovery learning occurs whenever the learner is not provided with the target information or conceptual understanding and must find it independently and with only the provided materials" (p. 3). The extent that assistance is provided would depend on the difficulty students might have in discovering target information. Findings in their 2011 meta-analysis of 580 comparisons of discovery learning (unassisted and assisted) and direct instruction suggested that generally "unassisted discovery does not benefit learners, whereas feedback, worked examples, scaffolding, and elicited explanations do" (p. 1). Thus, Alfieri et al. indicated the following implications for teaching:
Although direct teaching is better than unassisted discovery, providing learners with worked examples or timely feedback is preferable. ... Furthermore, [their meta-analysis suggested] teaching practices should employ scaffolded tasks that have support in place as learners attempt to reach some objective, and/or activities that require learners to explain their own ideas. The benefits of feedback, worked examples, scaffolding, and elicited explanation can be understood to be part of a more general need for learners to be redirected, to some extent, when they are mis-constructing. Feedback, scaffolding, and elicited explanations do so in more obvious ways through an interaction with the instructor, but worked examples help lead learners through problem sets in their entireties and perhaps help to promote accurate constructions as a result. (p. 12)
Richard Clark, Paul Kirschner, and John Sweller (2012) further put to rest the debate on the use of partially guided instruction. After a half century of such advocacy, "Evidence from controlled, experimental studies (a.k.a. "gold standard") almost uniformly supports full and explicit instructional guidance" (p. 11). Elaborating, they revealed:
Decades of research clearly demonstrate that for novices (comprising virtually all students), direct, explicit instruction is more effective and more efficient than partial guidance. So, when teaching new content and skills to novices, teachers are more effective when they provide explicit guidance accompanied by practice and feedback, not when they require students to discover many aspects of what they must learn. ... this does not mean direct, expository instruction all day every day. Small group and independent problems and projects can be effective--not as vehicles for making discovery, but as a means of practicing recently learned content and skills. ... Teachers providing explicit instructional guidance fully explain the concepts and skills that students are required to learn. Guidance can be provided through a variety of media, such as lectures, modeling, videos, computer-based presentations, and realistic demonstrations. It can also include class discussions and activities. (p. 6)
Using instructional approaches such as "problem-based learning, scientific experimentation, historical investigation, Socratic seminar, research projects, problem solving, concept attainment, simulations, debates, and producing authentic products and performances" (Tomlinson & McTighe, 2006, p. 110) will help uncover the BIG ideas related to content that lie below the surface of acquiring basic skills and facts.
When teaching for understanding, a unit or course design incorporates instruction and assessment that reflects six facets of understanding. Students are provided opportunities to explain, interpret, apply, shift perspective, empathize, and self-assess (McTighe & Seif, 2002). Framing the essential or BIG questions in a unit is an important skill for educators to acquire, as these questions offer the organizing focus for a unit. Tomlinson and McTighe (2006) suggested two to five essential questions per unit, which are written at age-appropriate levels and sequenced so that one leads to the next. Students need to understand key vocabulary associated with those questions.
The emphasis on vocabulary development is particularly important for learning mathematics with understanding, especially for students for whom English is a second language. Imagine their possible confusion upon encountering homophones like "pi/pie, plane/plain, rows/rose, sine/sign, sum/some" (Bereskin, Dalrymple, Ingalls, et al., 2005, p. 3). Key vocabulary must be explicitly taught, and reinforced by posting symbols with definitions and examples to clarify meaning. Such learners also benefit from materials presented in their native language, where possible. In TIPS for English Language Learners in Mathematics, Bereskin, Dalrymple, Ingalls, and others from the Ontario (CA) Ministry of Education and their Partnership of School Boards proposed the following types of mathematical activities that help to develop both mathematics and language skills:
In discussing essential principles of effective math instruction for all learners, including learners with disabilities and those at risk of school failure, Karen Smith and Carol Geller (2004) said common attributes that have been identified as positively affecting student learning include:
Notice that Smith and Geller (2004) also noted the importance of feedback. In support of the above attributes, Leinwand and Fleishman (2004) suggested the following to teach for meaning:
Note: For examples on how to use open-ended problem-solving that enables learners to develop their own approaches, read Volker Ulm's (2011) Teaching mathematics - Opening up individual paths to learning.
Need more ideas for instructional strategies?
Visit the Teaching Channel for high-quality, free videos on effective teaching practices, inspiring lesson ideas, and the Common Core State Standards.
Consider using whiteboard technology to improve the quality of your lessons.
Steven Ross and Deborah Lowther (2009) noted several valuable features for improving lesson quality when using interactive whiteboards:
Further, when interactive response systems (known as clickers) are used, teachers can pose questions to students, enabling them to get immediate feedback with answers "instantly aggregated and graphically displayed" (Ross & Lowther, 2009, p. 21). This is the kind of feedback enabling timely review of lessons and student-centered community learning.
Teach Reading the Math Text
Students must be taught how to read a math textbook. Most students, in my experience, have never learned how; they rely greatly on explanations from their teachers and jump right into doing their homework problems without reading the text. According to Mariana Haynes (2007), "The research is clear that when teachers across content areas help students use reading comprehension strategies (such as summarizing, generating questions, and using semantic and graphic organizers), student learning improves substantially. Studies show that explicitly teaching these strategies requires students to actively process information and connect new learning with prior concepts and experiences" (p. 4).
Reading a math text is different from reading texts in other subject areas. Diana Metsisto (2005), who discusses this issue in depth in Reading in the Mathematics Classroom, stated that math texts contain a greater number of concepts per sentence and paragraph than in texts for other subjects. Reading is complicated by the use of numeric and non-numeric symbols, specialized vocabulary, graphics which must be understood, page layouts that are different from other texts, and topic sentences that often occur at the end of paragraphs instead of at the beginning. The text is often written above the reading level of the intended learner. Some small words when used in a math problem make a big difference in students' understanding of a problem and how it is solved. Metsisto provides reading strategies for math texts.
Here are other resources to consult:
Provide Homework Assistance
The issue of assigning homework is controversial in terms of its purpose, what to assign, the amount of time needed to complete it, parental involvement, its actual affect on learning and achievement, and impact on family life and other valuable activities that occur outside of school hours. To help ensure that homework is completed and appropriate, consider the following research-based homework guidelines provided by Robert Marzano and Debra Pickering (2007, p. 78):
Assign purposeful homework. Legitimate purposes for homework include introducing new content, practicing a skill or process that students can do independently but not fluently, elaborating on information that has been addressed in class to deepen students' knowledge, and providing opportunities to explore topics of their own interest.
[E]nsure that homework is at the appropriate level of difficulty. Students should be able to complete homework assignments independently with relative high success rates, but they should still find the assignments challenging enough to be interesting.
Involve parents in appropriate ways (for example, as a sounding board to help students summarize what they learned from the homework) without requiring parents to act as teachers or to police students' homework completion.
Carefully monitor the amount of homework assigned so that it is appropriate to students' age levels and does not take too much time away from other home activities. (p. 78).
A rule of thumb for homework might be that "all daily homework assignments combined should take about as long to complete as 10 minutes multiplied by the students' grade level" and "when required reading is included as a type of homework, the 10-minute rule might be increased to 15 minutes" (Cooper, 2007, cited in Marzano & Pickering, 2007, p. 77). Other tips for getting homework done are in Helping Your Students with Homework, a 1998 booklet based on educational research from the U.S. Department of Education.
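As a quick worked illustration of the 10-minute rule above, take a hypothetical sixth grader (the grade level is ours, not an example from Cooper):

```latex
% The 10-minute rule applied to a hypothetical sixth grader
\begin{align*}
  \text{all daily assignments combined:} &\quad 6 \times 10 = 60 \text{ minutes per night};\\
  \text{when required reading is included:} &\quad 6 \times 15 = 90 \text{ minutes per night}.
\end{align*}
```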
Classroom teachers might also make learners and their parents aware of the many homework assistance sites available on the Internet, many of which are noted at CT4ME among our Math Resources: Study Skills and Homework Help.
For more on homework, including the issue of differentiated homework, read Homework: A Math Dilemma and What to Do About It (Deubel, 2007).
Use Tools and Manipulatives
Students' thinking and understanding will be enhanced by their use of a variety of tools, such as graphic organizers, thinking maps, calculators, computers, and manipulatives. However, important variables to consider that influence effectiveness of tools and manipulatives (e.g., using graphic organizers) include such things as "grade level, point of implementation, instructional context, and ease of implementation" (Hall & Strangman, 2002, Factors Influencing Effectiveness section). CT4ME has an entire section devoted to math manipulatives, which includes use of calculators. Here I delve more into graphic organizers and thinking maps.
A graphic organizer is defined as "a visual and graphic display that depicts the relationships between facts, terms, and or ideas within a learning task. Graphic organizers are also sometimes referred to as knowledge maps, concept maps, story maps, cognitive organizers, advance organizers, or concept diagrams" (Hall & Strangman, 2002, Definition section). They are valuable as "a creative alternative to rote memorization"; they "coincide with the brain's style of patterning" and promote this patterning "because material is presented in ways that stimulate students' brains to create meaningful and relevant connections to previously stored memories" (Willis, 2006, Ch. 1, Graphic Organizers section). They are often used in brainstorming and to help learners examine their conceptual understanding of new content.
Graphic organizers might be classified as sequential, relating to a single concept, or multiple concepts. In The Theory Underlying Concept Maps and How to Construct and Use Them, Joseph Novak and Alberto Cañas (2008) stated, concepts within a concept map are "usually enclosed in circles or boxes of some type, and relationships between concepts indicated by a connecting line linking two concepts. Words on the line, referred to as linking words or linking phrases, specify the relationship between the two concepts." Concept is defined as "a perceived regularity in events or objects, or records of events or objects, designated by a label. The label for most concepts is a word, although sometimes we use symbols such as + or %, and sometimes more than one word is used. Propositions are statements about some object or event in the universe, either naturally occurring or constructed. Propositions contain two or more concepts connected using linking words or phrases to form a meaningful statement. Sometimes these are called semantic units, or units of meaning" (Introduction section).
Concept maps are usually developed with in "a hierarchical fashion with the most inclusive, most general concepts at the top of the map and the more specific, less general concepts arranged hierarchically below." Cross-links between sub-domains on the concept map should be added, where possible, as these illustrate that learners understand interrelationships between sub-domains in the map. Specific examples illustrating or clarifying a concept can be added to the concept map, but these would not be placed within ovals or boxes, as they are not concepts (Novak & Cañas, 2008, Introduction section). Novak and Cañas presented examples of concept maps developed with CMap Tools from the Institute for Human and Machine Cognition.
Graphic organizers come in many forms. Other common forms include continuum scales, cycles of events, spider maps, Venn diagrams, compare/contrast matrices, and network tree diagrams. A Venn diagram (two or more overlapping circles) could be used to compare and contrast sets, such as in a study of least common multiple and greatest common factor, or classifying geometric shapes. A tree diagram is useful for determining outcomes in a study of probability of events, permutations and combinations. KWL charts are useful for investigations. Note: CT4ME includes KWL charts in our resource booklets for standardized test prep. Educators might also wish to expand the KWL chart to a KWHL chart or the ultimate KWHLAQ chart to better promote 21st century skill development. These acronyms represent the following questions:
As an example, students can generate their own graphic organizer using the following sample instructions, adapted from Willis (2006, Ch. 1):
Student-generated Graphic Organizer
Adapted from J. Willis, Research-based strategies to ignite student learning, (2006, Ch. 1, Graphic Organizers section)
As another example, Metsisto (2005) suggested the Frayer Model and Semantic Feature Analysis Grid. The Frayer Model is used for vocabulary building and is a chart with four quadrants which can hold a definition, some characteristics/facts, examples, and non-examples of the word or concept. The word or concept might be placed at the center of the chart. In Think Literacy: Mathematics Approaches for Grades 7-12, the Ontario Association for Mathematics Education (2004) further elaborates on reading, writing and oral communication strategies and provides a thorough discussion of the Frayer Model.
Word or Concept
Similar to this Frayer Model, view the short ASCD video of grade 5 math teacher, Malinda Paige, using a Words in Context graphic organizer in a geometry lesson for learning vocabulary. Paige linked her lesson to real-world events. Then download the Words in Context graphic organizer for your lessons. Inspiration and Kidspiration software can be used for other graphic organizers, which Rockingham County Public Schools (VA) has made available in multiple subject areas, including for math.
The Semantic Feature Analysis Grid is a matrix or chart to help students to organize common features and to compare and contrast concepts. Spreadsheets are useful to design these kinds of charts.
Learn more by also reading Knowledge Maps: Tools for Building Structure in Mathematics, in which Astrid Brinkmann (2005) discussed the rules for developing mind maps and concept maps and illustrated how they are used to graphically link ideas and concepts in a well-structured form.
The following are graphic organizer web sites to consider:
Graphic Organizers from Education Oasis include multiple types such as cause and effect, compare and contrast, vocabulary development and concept organizers, brainstorming, KWL, and more.
Graphic Organizers from Education Place include about 38 organizers. Learners can use these freely "to structure writing projects, to help in problem solving, decision making, studying, planning research and brainstorming."
Graphic Organizers from Enhance Learning with Technology Web site. What are they? Why use them? How to use them? The site includes numerous links on the topic, examples, and software possibilities to assist with the endeavor.
Graphic Organizers is based on the work of Edwin Ellis, Ph.D., president of Makes Sense Strategies, and features SMARTsheets. The site also includes examples of how these graphic organizers can be used for math, literature, social studies, science, social/behavior. Register for free downloads.
The Graphic Organizer from Graphic.Org shows graphic organizers, concept mapping, and mind mapping examples related to their use: describing, comparing/contrasting, classifying, causal, sequencing, and decision making.
Thinking maps are closely aligned to graphic organizers; however, in the words of David Hyerle, they are "a LANGUAGE of interdependent graphic primitives....teachers and student thrive within the dynamism of eight integrated tools based on thinking patterns. (a simple analogy may be made to complexity of 8 parts of speech and how they are relatively meaningless in isolation, and convey complexity when used together... this also leads to deep, authentic assessment" (personal communication, October 6, 2007). Thinking maps are open-ended, allow students to draw on their own experience, and help them to identify, "organize, synthesize, and communicate patterns of information by using a common visual language. They enable students to explore multiple perspectives and to develop metacognitive strategies for planning, monitoring, and reflecting" (Lipton & Hyerle, n.d., p. 6). The eight maps are discussed and illustrated with student examples at Designs for Thinking. Lipton and Hyerle also described them, which I have adapted for the following table:
|Circle||helps students generate and identify information in context related to a topic written inside the inner circle; The map might be enclosed in a square for its frame of reference.||
|Tree||can be used both inductively and deductively for classifying or grouping.||
|Bubble||can be used for describing the characteristics, qualities or attributes of something with adjectives. Any number of connecting bubbles can extend from the center.||
|Double-bubble||useful for comparing and contrasting.||
|Flow||enables students to sequence and order events, directions, cycles, and so on.||
|Multi-flow||helps to analyze causes and effects of an event||
|Brace||useful for identifying part-whole relationships of physical structures.||
|Bridge||helps students to interpret analogies and investigate conceptual metaphors|
Adapted from Lipton, L., & Hyerle, D. (n.d.). I see what you mean: Using visual maps to assess student thinking, pp. 2-3. Thinking Foundation. Retrieved from http://www.thinkingfoundation.org/research/journal_articles/journal_articles.html
Overall, Harold Wenglinsky (2004) concluded that "teaching that emphasizes higher-order thinking skills, project based learning, opportunities to solve problems that have multiple solutions, and such hands-on techniques as using manipulatives were all associated with higher performance on the mathematics" National Assessment of Educational Progress among 4th and 8th graders (p. 33). Using such practices to teach for meaning promotes high performance for students at all grade levels. CT4ME has an entire section devoted to Math Manipulatives.
Read the Magic of Math in which Ken Ellis (2007) described Fullerton IV Elementary School's (Roseburg, OR) nationally recognized approach to teaching math and watch the video documentary. Math is embedded throughout the curriculum. Their immersion approach has led to improved test scores. There is a focus on using precise mathematical vocabulary and problem solving in real world contexts. Instructional strategies include a mix of direct instruction, structured investigation, and open exploration. Fullerton is one of 20 Intel Schools of Distinction.
Watch the short video at Edutopia.org: Cooperative Arithmetic: How to Teach Math as a Social Activity. A teacher in Anchorage, Alaska demonstrates how he establishes a cooperative learning environment in an upper-elementary math classroom.
Alfieri, L., Brooks, P. J., Aldrich, N. J., & Tenenbaum, H. R. (2011). Does discovery-based instruction enhance learning? Journal of Educational Psychology, 103(1), 1-18. Retrieved from http://www.cideronline.org/podcasts/pdf/1.pdf
Ball, D. L., Ferrini-Mundy, J., Kilpatrick, J., Milgram, R. J., Schmid, W., & Schaar, R. (2005). Reaching for common ground in K-12 mathematics education. Washington, DC: Mathematics Association of America: MAA Online. Retrieved from http://www.maa.org/common-ground/cg-report2005.html
Bereskin, S., Dalrymple, S., Ingalls, M., et al. (2005). TIPS for English language learners in mathematics. Ontario, CA: Ministry of Education and Partnership in School Boards. Retrieved from http://www.edu.gov.on.ca/eng/studentsuccess/lms/files/ELLMath4All.pdf
Black, P., & Wiliam, D. (1998). Inside the black box: Raising standards through classroom assessment [Online]. Phi Delta Kappan, 80(2), 139-144, 146-148. [Note: also see the article at http://blog.discoveryeducation.com/assessment/files/2009/02/blackbox_article.pdf].
Booth, E. (2013). A recipe for artful schooling. Educational Leadership, 70(5), 22-27.
Brinkmann, A. (2005, October 25). Knowledge maps: Tools for building structure in mathematics. International Journal for Mathematics Teaching and Learning. Retrieved from http://www.cimt.plymouth.ac.uk/journal/default.htm
Burns, M. (2004). Writing in math. Educational Leadership, 62(2), 30-33.
Carpenter, T. P., Blanton, M. L., Cobb, P., Franke, M. L., Kaput, J., & McClain, K. (2004). Scaling up innovative practices in mathematics and science: Research report. Madison, WI: National Center for Improving Student Learning and Achievement in Mathematics and Science. Retrieved from http://www.wcer.wisc.edu/NCISLA/publications/reports/NCISLAReport1.pdf
Chappuis, J. (2012). "How am I doing?" Educational Leadership, 70(1), 36-40.
Clark, R., Kirschner, P., & Sweller, J. (2012, Spring). Putting students on the path to learning: The case for fully guided instruction. American Educator, 36(1), 6-11. Retrieved from http://www.aft.org/newspubs/periodicals/ae/index.cfm
Common Core State Standards. (2010). Standards for Mathematical Practice. Retrieved from http://www.corestandards.org/Math/Practice
Conley, D. T. (2011). Building on the common core. Educational Leadership, 68(6), 16-20.
Deubel, P. (2007, October 22). Homework: A math dilemma and what to do about it. T.H.E. Journal. Retrieved from http://thejournal.com/articles/2007/10/22/homework-a-math-dilemma-and-what-to-do-about-it.aspx
Deubel, P. (2007, June 7). Podcasts: Where's the learning? T.H.E. Journal. Retrieved from http://thejournal.com/articles/2007/06/07/podcasts-wheres-the-learning.aspx
Deubel, P. (2007, February 21). Moderating and ethics for the classroom instructional blog. T.H.E. Journal. Retrieved from http://thejournal.com/articles/2007/02/21/moderating-and-ethics-for-the-classroom-instructional-blog.aspx?sc_lang=en
Ellis, K. (2005, November 8). The magic of math. Edutopia Magazine [online]. Retrieved from http://www.edutopia.org/node/1405
EngageNY (2011, August 1). Common core instructional shifts. Retrieved from http://engageny.org/resource/common-core-shifts/
Fisher, D., & Frey, N. (2012, September). Making time for feedback. Educational Leadership, 70(1), 42-47.
Garon, J. (2000, Spring). The seven principles of effective feedback. The Law Teacher, 7(2). Retrieved from http://lawteaching.org/lawteacher/2000spring/sevenprinciples.php
Hall, T., & Strangman, N. (2002). Graphic organizers. Wakefield, MA: National Center on Accessing the General Curriculum. Retrieved from http://www.cast.org/publications/ncac/ncac_go.html
Haynes, M. (2007, April). From state policy to classroom practice: Improving literacy instruction for all students. National Association of State Boards of Education. Available in Resources, Project Pages: Adolescent Literacy: http://www.nasbe.org/
Hiebert, J., & Grouws, D. (2009, Fall). Which instructional methods are most effective for math? Baltimore, MD: John Hopkins University, Better: Evidenced-based Education, 10-11. Retrieved from http://www.betterevidence.org
Leinwand, S., & Fleishman, S. (2004, September). Teach mathematics right the first time. Educational Leadership, 62(1), 88-89.
Lipton, L., & Hyerle, D. (n.d.). I see what you mean: Using visual maps to assess student thinking. Thinking Foundation. Retrieved from http://www.thinkingfoundation.org/research/journal_articles/journal_articles.html
Marzano, R. (2009, October). Helping students process information. Educational Leadership, 67(2), 86-87.
Marzano, R., & Pickering, D. (2007). The case for and against homework. Educational Leadership, 64(6), 74-79.
Marzano, R. J., Pickering, D. J., & Pollock, J. E. (2001). Classroom instruction that works. Alexandria, VA: ASCD.
Maths misconceptions (2006, January). Teachers Magazine, (42/Primary). Retrieved from http://www.teachernet.gov.uk/teachers/issue42/primary/features/Mathsmisconceptions/
McKenzie, J. (1997, November/December). A questioning toolkit. From Now On, 7(3). Retrieved from http://www.fno.org/nov97/toolkit.html
McTighe, J., & Seif, E. (2002). Indicators of teaching for understanding. Understanding by Design Exchange.
Metsisto, D. (2005). Reading in the mathematics classroom. In J. M. Kenney, E. Hancewicz, L. Heuer, D. Metsisto, & C. L. Tuttle, Literacy strategies for improving mathematics instruction (chapter 2). Alexandria, VA: ASCD. Retrieved from http://www.ascd.org/publications/books/105137/chapters/Reading_in_the_Mathematics_Classroom.aspx
Morsund, D., & Ricketts, D. (2010). Math maturity. In IAE-pedia [Information Aged Education wiki]. Retrieved February 17, 2010, from http://iae-pedia.org/Math_Maturity
Muilenburg, L., & Berge, Z. (2000). A framework for designing questions for online learning. The American Journal of Distance Education. Retrieved from http://smcm.academia.edu/LinMuilenburg/Papers/440394/A_Framework_for_Designing_Questions_for_Online_Learning
National Council of Teachers of Mathematics (2000). Principles and standards for school mathematics. Reston, VA: Author. Retrieved from http://standards.nctm.org/
National Council of Teachers of Mathematics (1991). Professional Standards for Teaching Mathematics. Reston, VA: Author. Retrieved from http://standards.nctm.org/
National Mathematics Advisory Panel (2008). Foundations for success: The final report of the National Mathematics Advisory Panel . Washington, DC: U.S. Department of Education. Retrieved from http://www.ed.gov/about/bdscomm/list/mathpanel/index.html
National Research Council (2012, July). Education for life and work: Developing transferable knowledge and skills in the 21st century [Report Brief]. J. W. Pellegrino, & M. L. Hilton (Eds.); Committee on Defining Deeper Learning and 21st Century Skills; Center for Education; Division on Behavioral and Social Sciences and Education. Washington, DC: National Academy Press. Retrieved from http://www.nap.edu/catalog.php?record_id=13398
National Research Council (2001). Adding it up: Helping children learn mathematics. J. Kilpatrick, J. Swafford, & B. Findell (Eds.). Mathematics Learning Study Committee, Center for Education, Division of Behavioral and Social Sciences and Education. Washington, DC: National Academy Press. Retrieved from http://www.nap.edu/catalog.php?record_id=9822
Nichol, D., & Macfarlane-Dick, D. (n.d.). Rethinking formative assessment in HE: A theoretical model and seven principles of good feedback practice. The Higher Education Academy SENLEF Project. Retrieved from http://www.heacademy.ac.uk/806.htm
Novak, J. D., & A. J. Cañas, A. J. (2008). The theory underlying concept maps and how to construct them. Technical Report IHMC CmapTools 2006-01 Rev 01-2008. Florida Institute for Human and Machine Cognition. Retrieved from http://cmap.ihmc.us/Publications/ResearchPapers/TheoryUnderlyingConceptMaps.pdf
Ohio Department of Education (2012, March 2). Ohio Mathematics Common Core Standards and Model Curriculum [YouTube video]. Retrieved from http://www.youtube.com/watch?v=0pJ_nI1AuLA
Ontario Association for Mathematics Education (2004). Think literacy: Mathematics approaches grades 7-12. Retrieved from http://oame.on.ca/main/index1.php?lang=en&code=ThinkLit
Pashler, H., Bain, P., Bottge, B., Graesser, A., Koedinger, K., McDaniel, M., & Metcalfe, J. (2007). Organizing instruction and study to improve student learning (NCER 2007-2004). Washington, DC: National Center for Education Research, Institute of Education Sciences, U.S. Department of Education. Retrieved from http://ies.ed.gov/ncee/wwc/publications/practiceguides/
Paul, R., & Elder, L. (1997, April). Foundation for critical thinking: Socratic teaching. Retrieved from http://www.criticalthinking.org/pages/socratic-teaching/507
Pitler, H., Hubbell, E. R., Kuhn, M., & Malenoski, K. (2007). Using technology with classroom instruction that works. Alexandria, VA: ASCD.
Reeves, D. (2006). The learning leader: How to focus school improvement for better results. Alexandria, VA: ASCD.
Rochester Institute of Technology (2009). Some characteristics of learners, with teaching implications. Retrieved from http://online.rit.edu/faculty/teaching_strategies/adult_learners.cfm
Ross, S., & Lowther, D. (2009, Fall). Effectively using technology in instruction. Baltimore, MD: John Hopkins University, Better: Evidenced-based Education, 20-21. Retrieved from http://www.betterevidence.org
Small, M. (2010). Beyond one right answer. Educational Leadership, 68(1), 29-32.
Smith, K., & Geller, C. (2004). Essential principles of effective mathematics instruction: Methods to reach all students. Preventing School Failure, 48(4), 22-29.
Stein, C. (2007). Let's talk: Promoting mathematical discourse in the classroom. Mathematics Teacher, 101(4), 285-289. Retrieved from http://teachingmathforlearning.wikispaces.com/file/view/Let's+Talk+Discourse.pdf
Tomlinson, C., & McTighe, J. (2006). Integrating differentiated instruction with Understanding by Design. Alexandria, VA: ASCD.
Treffinger, D. (2008, Summer). Preparing creative and critical thinkers [online]. Educational Leadership, 65. Retrieved from http://www.ascd.org/publications/educational-leadership/summer08/vol65/num09/Preparing-Creative-and-Critical-Thinkers.aspx
Ulm, V. (2011). Teaching mathematics - Opening up individual paths to learning. In series: Towards New Teaching in Mathematics, Issue 3. Bayreuth, Germany: SINUS International. Retrieved from http://sinus.uni-bayreuth.de/2974/
Vogler, K. (2008, Summer). Asking good questions [online]. Educational Leadership, 65. Retrieved from http://www.ascd.org/publications/educational-leadership/summer08/vol65/num09/Asking-Good-Questions.aspx
Wegerif, R. (2002, September) Literature review in thinking skills, technology and learning. Futurelab Series. Bristol, UK: Futurelab. Retrieved from http://www.scribd.com/doc/12831545/Thinking-Skills-Review
Wenglinsky, H. (2004). Facts or critical thinking skills? What NAEP results say. Educational Leadership, 62(1), 32-35.
Wheeler, G. (2010, August 19). A simple solution to a complex problem. ASCD Express, 5(23). Retrieved from http://www.ascd.org/ascd-express/vol5/523-toc.aspx
Wiggins, G. (2012). 7 keys to effective feedback. Educational Leadership, 70(1), 11-16.
Willingham, D. T. (2007, Summer). Critical thinking: Why is it so hard to teach? American Educator, 8-19. Retrieved from http://www.aft.org/pdfs/americaneducator/summer2007/Crit_Thinking.pdf
Willis, J. (2006). Research-based strategies to ignite student learning: Insights from a neurologist and classroom teacher. Alexandria, VA: ASCD.
See other Math Methodology pages: | http://www.ct4me.net/math_methodology_2.htm | 13 |
24 | The Shape Of Words To Come: Lojban Morphology
Morphology is the part of grammar that deals with the form of words. Lojban's morphology is fairly simple compared to that of many languages, because Lojban words don't change form depending on how they are used. English has only a small number of such changes compared to languages like Russian, but we do have changes like ``boys'' as the plural of ``boy'', or ``walked'' as the past-tense form of ``walk''. To make plurals or past tenses in Lojban, you add separate words to the sentence that express the number of boys, or the time when the walking was going on.
However, Lojban does have what is called ``derivational morphology'': the capability of building new words from old words. In addition, the form of words tells us something about their grammatical uses, and sometimes about the means by which they entered the language. Lojban has very orderly rules for the formation of words of various types, both the words that already exist and new words yet to be created by speakers and writers.
A stream of Lojban sounds can be uniquely broken up into its component words according to specific rules. These so-called ``morphology rules'' are summarized in this chapter. (However, a detailed algorithm for breaking sounds into words has not yet been fully debugged, and so is not presented in this book.) First, here are some conventions used to talk about groups of Lojban letters, including vowels and consonants.
- V represents any single Lojban vowel except ``y''; that is, it represents ``a'', ``e'', ``i'', ``o'', or ``u''.
- VV represents either a diphthong, one of the following:
- ai ei oi au
- or a two-syllable vowel pair with an apostrophe separating the vowels, one of the following:
- a'a a'e a'i a'o a'u e'a e'e e'i e'o e'u i'a i'e i'i i'o i'u o'a o'e o'i o'o o'u u'a u'e u'i u'o u'u
- C represents a single Lojban consonant, not including the apostrophe, one of ``b'', ``c'', ``d'', ``f'', ``g'', ``j'', ``k'', ``l'', ``m'', ``n'', ``p'', ``r'', ``s'', ``t'', ``v'', ``x'', or ``z''. Syllabic ``l'', ``m'', ``n'', and ``r'' always count as consonants for the purposes of this chapter.
- CC represents two adjacent consonants of type C which constitute one of the 48 permissible initial consonant pairs:
- bl br cf ck cl cm cn cp cr ct dj dr dz fl fr gl gr jb jd jg jm jv kl kr ml mr pl pr sf sk sl sm sn sp sr st tc tr ts vl vr xl xr zb zd zg zm zv
- C/C represents two adjacent consonants which constitute one of the permissible consonant pairs (not necessarily a permissible initial consonant pair). The permissible consonant pairs are explained in Chapter 3. In brief, any consonant pair is permissible unless it contains: two identical letters, both a voiced (excluding ``r'', ``l'', ``m'', ``n'') and and an unvoiced consonant, or is one of certain specified forbidden pairs.
- C/CC represents a consonant triple. The first two consonants must constitute a permissible consonant pair; the last two consonants must constitute a permissible initial consonant pair.
They are also functionally different: cmavo are the structure words, corresponding to English words like ``and'', ``if'', ``the'' and ``to''; brivla are the content words, corresponding to English words like ``come'', ``red'', ``doctor'', and ``freely''; cmene are proper names, corresponding to English ``James'', ``Afghanistan'', and ``Pope John Paul II''.
The first group of Lojban words discussed in this chapter are the cmavo. They are the structure words that hold the Lojban language together. They often have no semantic meaning in themselves, though they may affect the semantics of brivla to which they are attached. The cmavo include the equivalent of English articles, conjunctions, prepositions, numbers, and punctuation marks. There are over a hundred subcategories of cmavo, known as ``selma'o'', each having a specifically defined grammatical usage. The various selma'o are discussed throughout Chapters 5 to 19 and summarized in Chapter 20.
Standard cmavo occur in four forms defined by their word structure. Here are some examples of the various forms:
V-form .a .e .i .o .u CV-form ba ce di fo gu VV-form .au .ei .ia .o'u .u'e CVV-form ki'a pei mi'o coi cu'u
In addition, there is the cmavo ``.y.'' (remember that ``y'' is not a V), which must have pauses before and after it.
A simple cmavo thus has the property of having only one or two vowels, or of having a single consonant followed by one or two vowels. Words consisting of three or more vowels in a row, or a single consonant followed by three or more vowels, are also of cmavo form, but are reserved for experimental use: a few examples are ``ku'a'e'', ``sau'e'', and ``bai'ai''. All CVV cmavo beginning with the letter ``x'' are also reserved for experimental use. In general, though, the form of a cmavo tells you little or nothing about its grammatical use.
``Experimental use'' means that the language designers will not assign any standard meaning or usage to these words, and words and usages coined by Lojban speakers will not appear in official dictionaries for the indefinite future. Experimental-use words provide an escape hatch for adding grammatical mechanisms (as opposed to semantic concepts) the need for which was not foreseen.
The cmavo of VV-form include not only the diphthongs and vowel pairs listed in Section 1, but also the following ten additional diphthongs:
.ia .ie .ii .io .iu .ua .ue .ui .uo .uu
In addition, cmavo can have the form ``Cy'', a consonant followed by the letter ``y''. These cmavo represent letters of the Lojban alphabet, and are discussed in detail in Chapter 17.
Compound cmavo are sequences of cmavo attached together to form a single written word. A compound cmavo is always identical in meaning and in grammatical use to the separated sequence of simple cmavo from which it is composed. These words are written in compound form merely to save visual space, and to ease the reader's burden in identifying when the component cmavo are acting together.
Compound cmavo, while not visually short like their components, can be readily identified by two characteristics:
- They have no consonant pairs or clusters, and
- They end in a vowel.
2.4) ki'e'u'eis a single cmavo reserved for experimental purposes: it has four vowels.
2.5) cy.ibu.abu cy. .ibu .abu
Again the pauses are required (see Section 9); the pause after ``cy.'' merges with the pause before ``.ibu''.
There is no particular stress required in cmavo or their compounds. Some conventions do exist that are not mandatory. For two-syllable cmavo, for example, stress is typically placed on the first vowel; an example is
2.6) .e'o ko ko kurji .E'o ko ko KURji
This convention results in a consistent rhythm to the language, since brivla are required to have penultimate stress; some find this esthetically pleasing.
If the final syllable of one word is stressed, and the first syllable of the next word is stressed, you must insert a pause or glottal stop between the two stressed syllables. Thus
2.7) le re nanmucan be optionally pronounced
2.8) le RE. NANmusince there are no rules forcing stress on either of the first two words; the stress on ``re'', though, demands that a pause separate ``re'' from the following syllable ``nan'' to ensure that the stress on ``nan'' is properly heard as a stressed syllable. The alternative pronunciation
2.9) LE re NANmuis also valid; this would apply secondary stress (used for purposes of emphasis, contrast or sentence rhythm) to ``le'', comparable in rhythmical effect to the English phrase ``THE two men''. In Example 2.8, the secondary stress on ``re'' would be similar to that in the English phrase ``the TWO men''.
Both cmavo may also be left unstressed, thus:
2.10) le re NANmu
This would probably be the most common usage.
Predicate words, called ``brivla'', are at the core of Lojban. They carry most of the semantic information in the language. They serve as the equivalent of English nouns, verbs, adjectives, and adverbs, all in a single part of speech.
Every brivla belongs to one of three major subtypes. These subtypes are defined by the form, or morphology, of the word --- all words of a particular structure can be assigned by sight or sound to a particular type (cmavo, brivla, or cmene) and subtype. Knowing the type and subtype then gives you, the reader or listener, significant clues to the meaning and the origin of the word, even if you have never heard the word before.
The same principle allows you, when speaking or writing, to invent new brivla for new concepts ``on the fly''; yet it offers people that you are trying to communicate with a good chance to figure out your meaning. In this way, Lojban has a flexible vocabulary which can be expanded indefinitely.
All brivla have the following properties:
- always end in a vowel;
- always contain a consonant pair in the first five letters, where ``y'' and apostrophe are not counted as letters for this purpose. (See Section 6.)
- always are stressed on the next-to-the-last (penultimate) syllable; this implies that they have two or more syllables.
Thus, ``bisycla'' has the consonant pair ``sc'' in the first five non-``y'' letters even though the ``sc'' actually appears in the form of ``syc''. Similarly, the word ``ro'inre'o'' contains ``nr'' in the first five letters because the apostrophes are not counted for this purpose.
The three subtypes of brivla are:
- gismu, the Lojban primitive roots from which all other brivla are built;
- lujvo, the compounds of two or more gismu; and
3) fu'ivla (literally ``copy-word''), the specialized words that are not Lojban primitives or natural compounds, and are therefore borrowed from other languages.
The gismu, or Lojban root words, are those brivla representing concepts most basic to the language. The gismu were chosen for various reasons: some represent concepts that are very familiar and basic; some represent concepts that are frequently used in other languages; some were added because they would be helpful in constructing more complex words; some because they represent fundamental Lojban concepts (like ``cmavo'' and ``gismu'' themselves).
The gismu do not represent any sort of systematic partitioning of semantic space. Some gismu may be superfluous, or appear for historical reasons: the gismu list was being collected for almost 35 years and was only weeded out once. Instead, the intention is that the gismu blanket semantic space: they make it possible to talk about the entire range of human concerns.
There are about 1350 gismu. In learning Lojban, you need only to learn most of these gismu and their combining forms (known as ``rafsi'') as well as perhaps 200 major cmavo, and you will be able to communicate effectively in the language. This may sound like a lot, but it is a small number compared to the vocabulary needed for similar communications in other languages.
All gismu have very strong form restrictions. Using the conventions defined in Section 1, all gismu are of the forms CVC/CV or CCVCV. They must meet the rules for all brivla given in Section 3; furthermore, they:
- always have five letters;
- always start with a consonant and end with a single vowel;
- always contain exactly one consonant pair, which is a permissible initial pair (CC) if it's at the beginning of the gismu, but otherwise only has to be a permissible pair (C/C);
- are always stressed on the first syllable (since that is penultimate).
With the exception of five special brivla variables, ``broda'', ``brode'', ``brodi'', ``brodo'', and ``brodu'', no two gismu differ only in the final vowel. Furthermore, the set of gismu was specifically designed to reduce the likelihood that two similar sounding gismu could be confused. For example, because ``gismu'' is in the set of gismu, ``kismu'', ``xismu'', ``gicmu'', ``gizmu'', and ``gisnu'' cannot be.
Almost all Lojban gismu are constructed from pieces of words drawn from other languages, specifically Chinese, English, Hindi, Spanish, Russian, and Arabic, the six most widely spoken natural languages. For a given concept, words in the six languages that represent that concept were written in Lojban phonetics. Then a gismu was selected to maximize the recognizability of the Lojban word for speakers of the six languages by weighting the inclusion of the sounds drawn from each language by the number of speakers of that language. See Section 14 for a full explanation of the algorithm.
Here are a few examples of gismu, with rough English equivalents (not definitions):
3.1) creka shirt 3.2) lijda religion 3.3) blanu blue 3.4) mamta mother 3.5) cukta book 3.6) patfu father 3.7) nanmu man 3.8) ninmu woman
A small number of gismu were formed differently; see Section 15 for a list.
When specifying a concept that is not found among the gismu (or, more specifically, when the relevant gismu seems too general in meaning), a Lojbanist generally attempts to express the concept as a tanru. Lojban tanru are an elaboration of the concept of ``metaphor'' used in English. In Lojban, any brivla can be used to modify another brivla. The first of the pair modifies the second. This modification is usually restrictive --- the modifying brivla reduces the broader sense of the modified brivla to form a more narrow, concrete, or specific concept. Modifying brivla may thus be seen as acting like English adverbs or adjectives. For example,
5.1) skami pilnois the tanru which expresses the concept of ``computer user''.
The simplest Lojban tanru are pairings of two concepts or ideas. Such tanru take two simpler ideas that can be represented by gismu and combine them into a single more complex idea. Two-part tanru may then be recombined in pairs with other tanru, or with individual gismu, to form more complex or more specific ideas, and so on.
The meaning of a tanru is usually at least partly ambiguous: ``skami pilno'' could refer to a computer that is a user, or to a user of computers. There are a variety of ways that the modifier component can be related to the modified component. It is also possible to use cmavo within tanru to provide variations (or to prevent ambiguities) of meaning.
Making tanru is essentially a poetic or creative act, not a science. While the syntax expressing the grouping relationships within tanru is unambiguous, tanru are still semantically ambiguous, since the rules defining the relationships between the gismu are flexible. The process of devising a new tanru is dealt with in detail in Chapter 5.
To express a simple tanru, simply say the component gismu together. Thus the binary metaphor ``big boat'' becomes the tanru
5.2) barda blotirepresenting roughly the same concept as the English word ``ship''.
The binary metaphor ``father mother'' can refer to a paternal grandmother (``a father-ly type of mother''), while ``mother father'' can refer to a maternal grandfather (``a mother-ly type of father''). In Lojban, these become the tanru
5.3) patfu mamtaand
5.4) mamta patfurespectively.
The possibility of semantic ambiguity can easily be seen in the last case. To interpret Example 5.4, the listener must determine what type of motherliness pertains to the father being referred to. In an appropriate context, ``mamta patfu'' could mean not ``grandfather'' but simply ``father with some motherly attributes'', depending on the culture. If absolute clarity is required, there are ways to expand upon and explain the exact interrelationship between the components; but such detail is usually not needed.
When a concept expressed in a tanru proves useful, or is frequently expressed, it is desirable to choose one of the possible meanings of the tanru and assign it to a new brivla. For Example 5.1, we would probably choose ``user of computers'', and form the new word
Such a brivla, built from the rafsi which represent its component words, is called a ``lujvo''. Another example, corresponding to the tanru of Example 5.2, would be:
5.6) bralo'i big-boat shipThe lujvo representing a given tanru is built from units representing the component gismu. These units are called ``rafsi'' in Lojban. Each rafsi represents only one gismu. The rafsi are attached together in the order of the words in the tanru, occasionally inserting so-called ``hyphen'' letters to ensure that the pieces stick together as a single word and cannot accidentally be broken apart into cmavo, gismu, or other word forms. As a result, each lujvo can be readily and accurately recognized, allowing a listener to pick out the word from a string of spoken Lojban, and if necessary, unambiguously decompose the word to a unique source tanru, thus providing a strong clue to its meaning.
The lujvo that can be built from the tanru ``mamta patfu'' in Example 5.4 is
5.7) mampa'uwhich refers specifically to the concept ``maternal grandfather''. The two gismu that constitute the tanru are represented in ``mampa'u'' by the rafsi ``mam-'' and ``-pa'u'', respectively; these two rafsi are then concatenated together to form ``mampa'u''.
Like gismu, lujvo have only one meaning. When a lujvo is formally entered into a dictionary of the language, a specific definition will be assigned based on one particular interrelationship between the terms. (See Chapter 12 for how this has been done.) Unlike gismu, lujvo may have more than one form. This is because there is no difference in meaning between the various rafsi for a gismu when they are used to build a lujvo. A long rafsi may be used, especially in noisy environments, in place of a short rafsi; the result is considered the same lujvo, even though the word is spelled and pronounced differently. Thus the word ``brivla'', built from the tanru ``bridi valsi'', is the same lujvo as ``brivalsi'', ``bridyvla'', and ``bridyvalsi'', each of which uses a different combination of rafsi.
When assembling rafsi together into lujvo, the rules for valid brivla must be followed: a consonant cluster must occur in the first five letters (excluding ``y'' and ``'''), and the lujvo must end in a vowel.
A ``y'' (which is ignored in determining stress or consonant clusters) is inserted in the middle of the consonant cluster to glue the word together when the resulting cluster is either not permissible or the word is likely to break up. There are specific rules describing these conditions, detailed in Section 6.
An ``r'' (in some cases, an ``n'') is inserted when a CVV-form rafsi attaches to the beginning of a lujvo in such a way that there is no consonant cluster. For example, in the lujvo
5.8) soirsai sonci sanmi soldier meal field rationsthe rafsi ``soi-'' and ``-sai'' are joined, with the additional ``r'' making up the ``rs'' consonant pair needed to make the word a brivla. Without the ``r'', the word would break up into ``soi sai'', two cmavo. The pair of cmavo have no relation to their rafsi lookalikes; they will either be ungrammatical (as in this case), or will express a different meaning from what was intended.
Learning rafsi and the rules for assembling them into lujvo is clearly seen to be necessary for fully using the potential Lojban vocabulary.
Most important, it is possible to invent new lujvo while you speak or write in order to represent a new or unfamiliar concept, one for which you do not know any existing Lojban word. As long as you follow the rules for building these compounds, there is a good chance that you will be understood without explanation.
Every gismu has from two to five rafsi, each of a different form, but each such rafsi represents only one gismu. It is valid to use any of the rafsi forms in building lujvo --- whichever the reader or listener will most easily understand, or whichever is most pleasing --- subject to the rules of lujvo making. There is a scoring algorithm which is intended to determine which of the possible and legal lujvo forms will be the standard dictionary form (see Section 12).
Each gismu always has at least two rafsi forms; one is the gismu itself (used only at the end of a lujvo), and one is the gismu without its final vowel (used only at the beginning or middle of a lujvo). These forms are represented as -CVC/CV or -CCVCV (called ``the 5-letter rafsi''), and -CVC/C- or -CCVC- (called ``the 4-letter rafsi'') respectively. The dashes in these rafsi form representations show where other rafsi may be attached to form a valid lujvo. When lujvo are formed only from 4-letter and 5-letter rafsi, known collectively as ``long rafsi'', they are called ``unreduced lujvo''.
Some examples of unreduced lujvo forms are:
6.1) mamtypatfu from ``mamta patfu'' ``mother father'' or ``maternal grandfather'' 6.2) lerfyliste from ``lerfu liste'' ``letter list'' or a ``list of letters'' (letters of the alphabet) 6.3) nancyprali from ``nanca prali'' ``year profit'' or ``annual profit'' 6.4) prunyplipe from ``pruni plipe'' ``elastic (springy) leap'' or ``spring'' (the verb)
6.5) vancysanmi from ``vanci sanmi'' ``evening meal'' or ``supper''In addition to these two forms, each gismu may have up to three additional short rafsi, three letters long. All short rafsi have one of the forms -CVC-, -CCV-, or -CVV-. The total number of rafsi forms that are assigned to a gismu depends on how useful the gismu is, or is presumed to be, in making lujvo, when compared to other gismu that could be assigned the rafsi.
For example, ``zmadu'' (``more than'') has the two short rafsi ``-zma-'' and ``-mau-'' (in addition to its unreduced rafsi ``-zmad-'' and ``-zmadu''), because a vast number of lujvo have been created based on ``zmadu'', corresponding in general to English comparative adjectives ending in ``-er'' such as ``whiter'' (Lojban ``labmau''). On the other hand, ``bakri'' (``chalk'') has no short rafsi and few lujvo.
There are at most one CVC-form, one CCV-form, and one CVV-form rafsi per gismu. In fact, only a tiny handful of gismu have both a CCV-form and a CVV-form rafsi assigned, and still fewer have all three forms of short rafsi. However, gismu with both a CVC-form and another short rafsi are fairly common, partly because more possible CVC-form rafsi exist. Yet CVC-form rafsi, even though they are fairly easy to remember, cannot be used at the end of a lujvo (because lujvo must end in vowels), so justifying the assignment of an additional short rafsi to many gismu.
The intention was to use the available ``rafsi space'' --- the set of all possible short rafsi forms --- in the most efficient way possible; the goal is to make the most-used lujvo as short as possible (thus maximizing the use of short rafsi), while keeping the rafsi very recognizable to anyone who knows the source gismu. For this reason, the letters in a rafsi have always been chosen from among the five letters of the corresponding gismu. As a result, there are a limited set of short rafsi available for assignment to each gismu. At most seven possible short rafsi are available for consideration (of which at most three can be used, as explained above).
Here are the only short rafsi forms that can possibly exist for gismu of the form CVC/CV, like ``sakli''. The digits in the second column represent the gismu letters used to form the rafsi.
CVC 123 -sak- CVC 124 -sal- CVV 12'5 -sa'i- CVV 125 -sai- CCV 345 -kli- CCV 132 -ska-(The only actual short rafsi for ``sakli'' is ``-sal-''.)
For gismu of the form CCVCV, like ``blaci'', the only short rafsi forms that can exist are:
CVC 134 -bac- CVC 234 -lac CVV 13'5 -ba'i- CVV 135 -bai- CVV 23'5 -la'i- CVV 235 -lai- CCV 123 -bla-(In fact, ``blaci'' has none of these short rafsi; they are all assigned to other gismu. Lojban speakers are not free to reassign any of the rafsi; the tables shown here are to help understand how the rafsi were chosen in the first place.)
There are a few restrictions: a CVV-form rafsi without an apostrophe cannot exist unless the vowels make up one of the four diphthongs ``ai'', ``ei'', ``oi'', or ``au''; and a CCV-form rafsi is possible only if the two consonants form a permissible initial consonant pair (see Section 1). Thus ``mamta'', which has the same form as ``salci'', can only have ``mam'', ``mat'', and ``ma'a'' as possible rafsi: in fact, only ``mam'' is assigned to it.
Some cmavo also have associated rafsi, usually CVC-form. For example, the ten common numerical digits, which are all CV form cmavo, each have a CVC-form rafsi formed by adding a consonant to the cmavo. Most cmavo that have rafsi are ones used in composing tanru (for a complete list, see Chapter 12).
The term for a lujvo made up solely of short rafsi is ``fully reduced lujvo''. Here are some examples of fully reduced lujvo:
6.6) cumfri from ``cumki lifri'' ``possible experience'' 6.7) klezba from ``klesi zbasu'' ``category make'' 6.8) kixta'a from ``krixa tavla'' ``cry-out talk'' 6.9) sniju'o from ``sinxa djuno'' ``sign know''
In addition, some of the unreduced forms in the previous example may be fully reduced to:
6.10) mampa'u from ``mamta patfu'' ``mother father'' or ``maternal grandfather'' 6.11) lerste from ``lerfu liste'' ``letter list'' or a ``list of letters''As noted above, CVC-form rafsi cannot appear as the final rafsi in a lujvo, because all lujvo must end with one or two vowels. As a brivla, a lujvo must also contain a consonant cluster within the first five letters --- this ensures that they cannot be mistaken for compound cmavo. Of course, all lujvo have at least six letters since they have two or more rafsi, each at least three letters long; hence they cannot be confused with gismu.
When attaching two rafsi together, it may be necessary to insert a hyphen letter. In Lojban, the term ``hyphen'' always refers to a letter, either the vowel ``y'' or one of the consonants ``r'' and ``n''. (The letter ``l'' can also be a hyphen, but is not used as one in lujvo.)
The ``y''-hyphen is used after a CVC-form rafsi when joining it with the following rafsi could result in an impermissible consonant pair, or when the resulting lujvo could fall apart into two or more words (either cmavo or gismu).
Thus, the tanru ``pante tavla'' (``protest talk'') cannot produce the lujvo ``patta'a'', because ``tt'' is not a permissible consonant pair; the lujvo must be ``patyta'a''. Similarly, the tanru ``mudri siclu'' (``wooden whistle'') cannot form the lujvo ``mudsiclu''; instead, ``mudysiclu'' must be used. (Remember that ``y'' is not counted in determining whether the first five letters of a brivla contain a consonant cluster: this is why.)
The ``y''-hyphen is also used to attach a 4-letter rafsi, formed by dropping the final vowel of a gismu, to the following rafsi. (This procedure was shown, but not explained, in Examples 6.1 to 6.5.) The lujvo forms ``zunlyjamfu'', ``zunlyjma'', ``zuljamfu'', and ``zuljma'' are all legitimate and equivalent forms made from the tanru ``zunle jamfu'' (``left foot''). Of these, ``zuljma'' is the preferred one since it is the shortest; it thus is likely to be the form listed in a Lojban dictionary.
The ``r''-hyphen and its close relative, the ``n''-hyphen, are used in lujvo only after CVV-form rafsi. A hyphen is always required in a two-part lujvo of the form CVV-CVV, since otherwise there would be no consonant cluster.
An ``r-''hyphen or ``n''-hyphen is also required after the CVV-form rafsi of any lujvo of the form CVV-CVC/CV or CVV-CCVCV since it would otherwise fall apart into a CVV-form cmavo and a gismu. In any lujvo with more than two parts, a CVV-form rafsi in the initial position must always be followed by a hyphen. If the hyphen were to be omitted, the supposed lujvo could be broken into smaller words without the hyphen: because the CVV-form rafsi would be interpreted as a cmavo, and the remainder of the word as a valid lujvo that is one rafsi shorter.
An ``n''-hyphen is only used in place of an ``r''-hyphen when the following rafsi begins with ``r''. For example, the tanru ``rokci renro'' (``rock throw'') cannot be expressed as ``ro'ire'o'' (which breaks up into two cmavo), nor can it be ``ro'irre'o'' (which has an impermissible double consonant); the ``n''-hyphen is required, and the correct form of the hyphenated lujvo is ``ro'inre'o''. The same lujvo could also be expressed without hyphenation as ``rokre'o''.
There is also a different way of building lujvo, or rather phrases which are grammatically and semantically equivalent to lujvo. You can make a phrase containing any desired words, joining each pair of them with the special cmavo ``zei''. Thus,
6.12) bridi zei valsiis the exact equivalent of ``brivla'' (but not necessarily the same as the underlying tanru ``bridi valsi'', which could have other meanings.) Using ``zei'' is the only way to get a cmavo lacking a rafsi, a cmene, or a fu'ivla into a lujvo:
6.13) xy. zei kantu X ray
6.14) kulnr,farsi zei lolgai Farsi floor-cover Persian rug
6.15) na'e zei .a zei na'e zei by. livgyterbilma non-A, non-B liver-disease non-A, non-B hepatitis
6.16) .cerman. zei xarnykarce Sherman war-car Sherman tankExample 6.15 is particularly noteworthy because the phrase that would be produced by removing the ``zei''s from it doesn't end with a brivla, and in fact is not even grammatical. As written, the example is a tanru with two components, but by adding a ``zei'' between ``by.'' and ``livgyterbilma'' to produce
6.17) na'e zei .a zei na'e zei by. zei livgyterbilma non-A-non-B-hepatitisthe whole phrase would become a single lujvo. The longer lujvo of Example 6.17 may be preferable, because its place structure can be built from that of ``bilma'', whereas the place structure of a lujvo without a brivla must be constructed ad hoc.
Note that rafsi may not be used in ``zei'' phrases, because they are not words. CVV rafsi look like words (specifically cmavo) but there can be no confusion between the two uses of the same letters, because cmavo appear only as separate words or in compound cmavo (which are really just a notation for writing separate but closely related words as if they were one); rafsi appear only as parts of lujvo.
The use of tanru or lujvo is not always appropriate for very concrete or specific terms (e.g. ``brie'' or ``cobra''), or for jargon words specialized to a narrow field (e.g. ``quark'', ``integral'', or ``iambic pentameter''). These words are in effect names for concepts, and the names were invented by speakers of another language. The vast majority of words referring to plants, animals, foods, and scientific terminology cannot be easily expressed as tanru. They thus must be borrowed (actually ``copied'') into Lojban from the original language.
There are four stages of borrowing in Lojban, as words become more and more modified (but shorter and easier to use). Stage 1 is the use of a foreign name quoted with the cmavo ``la'o'' (explained in full in Chapter 19):
7.1) me la'o ly. spaghetti .ly.is a predicate with the place structure ``x1 is a quantity of spaghetti''.
Stage 2 involves changing the foreign name to a Lojbanized name, as explained in Section 8:
7.2) me la spagetis.
One of these expedients is often quite sufficient when you need a word quickly in conversation. (This can make it easier to get by when you do not yet have full command of the Lojban vocabulary, provided you are talking to someone who will recognize the borrowing.)
Where a little more universality is desired, the word to be borrowed must be Lojbanized into one of several permitted forms. A rafsi is then usually attached to the beginning of the Lojbanized form, using a hyphen to ensure that the resulting word doesn't fall apart.
The rafsi categorizes or limits the meaning of the fu'ivla; otherwise a word having several different jargon meanings in other languages would require the word-inventor to choose which meaning should be assigned to the fu'ivla, since fu'ivla (like other brivla) are not permitted to have more than one definition. Such a Stage 3 borrowing is the most common kind of fu'ivla.
Finally, Stage 4 fu'ivla do not have any rafsi classifier, and are used where a fu'ivla has become so common or so important that it must be made as short as possible. (See Section 16 for a proposal concerning Stage 4 fu'ivla.)
The form of a fu'ivla reliably distinguishes it from both the gismu and the cmavo. Like cultural gismu, fu'ivla are generally based on a word from a single non-Lojban language. The word is ``borrowed'' (actually ``copied'', hence the Lojban tanru ``fukpi valsi'') from the other language and Lojbanized --- the phonemes are converted to their closest Lojban equivalent and modifications are made as necessary to make the word a legitimate Lojban fu'ivla-form word. All fu'ivla:
- must contain a consonant cluster in the first five letters of the word; if this consonant cluster is at the beginning, it must either be a permissible initial consonant pair, or a longer cluster such that each pair of adjacent consonants in the cluster is a permissible initial consonant pair: ``spraile'' is acceptable, but not ``ktraile'' or ``trkaile'';
- must end in one or more vowels;
- must not be gismu or lujvo, or any combination of cmavo, gismu, and lujvo; furthermore, a fu'ivla with a CV cmavo joined to the front of it must not have the form of a lujvo (the so-called ``slinku'i test'');
- cannot contain ``y'', although they may contain syllabic pronunciations of Lojban consonants;
- like other brivla, are stressed on the penultimate syllable.
This is a fairly liberal definition and allows quite a lot of possibilities within ``fu'ivla space''. Stage 3 fu'ivla can be made easily on the fly, as lujvo can, because the procedure for forming them always guarantees a word that cannot violate any of the rules. Stage 4 fu'ivla require running tests that are not simple to characterize or perform, and should be made only after deliberation and by someone knowledgeable about all the considerations that apply.
Here is a simple and reliable procedure for making a non-Lojban word into a valid Stage 3 fu'ivla:
- Eliminate all double consonants and silent letters.
- Convert all sounds to their closest Lojban equivalents. Lojban ``y'', however, may not be used in any fu'ivla.
- If the last letter is not a vowel, modify the ending so that the word ends in a vowel, either by removing a final consonant or by adding a suggestively chosen final vowel.
- If the first letter is not a consonant, modify the beginning so that the word begins with a consonant, either by removing an initial vowel or adding a suggestively chosen initial consonant.
- Prefix the result of the preceding steps with a 4-letter rafsi that categorizes the fu'ivla into a ``topic area''. It is only safe to use a 4-letter rafsi; short rafsi sometimes produce invalid fu'ivla. Hyphenate the rafsi to the rest of the fu'ivla with an ``r''-hyphen; if that would produce a double ``r'', use an ``n''-hyphen instead; if the rafsi ends in ``r'' and the rest of the fu'ivla begins with ``n'' (or vice versa) use an ``l''-hyphen. (This is the only use of ``l''-hyphen in Lojban.)
- Alternatively, if a CVC-form short rafsi is available it can be used instead of the long rafsi.
- Remember that the stress necessarily appears on the penultimate (next-to-the-last) syllable.
Here are a few examples:
7.3) spaghetti (from English or Italian)
spageti (Lojbanize)
cidj,r,spageti (prefix long rafsi)
dja,r,spageti (prefix short rafsi)
where ``cidj-'' is the 4-letter rafsi for ``cidja'', the Lojban gismu for ``food'', thus categorizing ``cidjrspageti'' as a kind of food. The form with the short rafsi happens to work, but such good fortune cannot be relied on: in any event, it means the same thing.
7.4) Acer (the scientific name of maple trees)
acer (Lojbanize)
xaceru (add initial consonant and final vowel)
tric,r,xaceru (prefix rafsi)
ric,r,xaceru (prefix short rafsi)
where ``tric-'' and ``ric-'' are rafsi for ``tricu'', the gismu for ``tree''. Note that by the same principles, ``maple sugar'' could get the fu'ivla ``saktrxaceru'', or could be represented by the tanru ``tricrxaceru sakta''. Technically, ``ricrxaceru'' and ``tricrxaceru'' are distinct fu'ivla, but they would surely be given the same meanings if both happened to be in use.
7.5) brie (from French)
bri (Lojbanize)
cirl,r,bri (prefix rafsi)
where ``cirl-'' represents ``cirla'' (``cheese'').
7.6) cobra
kobra (Lojbanize)
sinc,r,kobra (prefix rafsi)
where ``sinc-'' represents ``since'' (``snake'').
7.7) quark
kuark (Lojbanize)
kuarka (add final vowel)
sask,r,kuarka (prefix rafsi)
where ``sask-'' represents ``saske'' (``science''). Note the extra vowel ``a'' added to the end of the word, and the diphthong ``ua'', which never appears in gismu or lujvo, but may appear in fu'ivla.
The use of the prefix helps distinguish among the many possible meanings of the borrowed word, depending on the field. As it happens, ``spageti'' and ``kuarka'' are valid Stage 4 fu'ivla, but ``xaceru'' looks like a compound cmavo, and ``kobra'' like a gismu.
For another example, ``integral'' has a specific meaning to a mathematician. But the Lojban fu'ivla ``integrale'', which is a valid Stage 4 fu'ivla, does not convey that mathematical sense to a non-mathematical listener, even one with an English-speaking background; its source --- the English word ``integral'' --- has various other specialized meanings in other fields.
Left uncontrolled, ``integrale'' almost certainly would eventually come to mean the same collection of loosely related concepts that English associates with ``integral'', with only the context to indicate (possibly) that the mathematical term is meant.
The prefix method would render the mathematical concept as ``cmacrntegrale'', if the ``i'' of ``integrale'' is removed, or something like ``cmacrnintegrale'', if a new consonant is added to the beginning; ``cmac-'' is the rafsi for ``cmaci'' (``mathematics''). The architectural sense of ``integral'' might be conveyed with ``djinrnintegrale'' or ``tarmrnintegrale'', where ``dinju'' and ``tarmi'' mean ``building'' and ``form'' respectively.
Here are some fu'ivla representing cultures and related things, shown with more than one rafsi prefix:
7.8) bang,r,blgaria Bulgarian (in language)
7.9) kuln,r,blgaria Bulgarian (in culture)
7.10) gugd,r,blgaria Bulgaria (the country)
7.11) bang,r,kore'a Korean (the language)
7.12) kuln,r,kore'a Korean (the culture)
Note the apostrophe in Examples 7.11 and 7.12, used because ``ea'' is not a valid diphthong in Lojban. Arguably, some form of the native name ``Chosen'' should have been used instead of the internationally known ``Korea''; this is a recurring problem in all borrowings. In general, it is better to use the native name unless using it will severely impede understanding: ``Navajo'' is far more widely known than ``Dine'e''.
Lojbanized names, called ``cmene'', are very much like their counterparts in other languages. They are labels applied to things (or people) to stand for them in descriptions or in direct address. They may convey meaning in themselves, but do not necessarily do so.
Because names are often highly personal and individual, Lojban attempts to allow native language names to be used with a minimum of modification. The requirement that the Lojban speech stream be unambiguously analyzable, however, means that most names must be modified somewhat when they are Lojbanized. Here are a few examples of English names and possible Lojban equivalents:
8.1) djim. Jim
8.2) djein. Jane
8.3) .arnold. Arnold
8.4) pit. Pete
8.5) katrinas. Katrina
8.6) kat,r,in. Catherine
(Note that syllabic ``r'' is skipped in determining the stressed syllable, so Example 8.6 is stressed on the ``ka''.)
8.7) katis. Cathy
8.8) keit. Kate
Names may have almost any form, but always end in a consonant, and are followed by a pause. They are penultimately stressed, unless unusual stress is marked with capitalization. A name may have multiple parts, each ending with a consonant and pause, or the parts may be combined into a single word with no pause. For example,
8.9) djan. djonz.
and
8.10) djandjonz.
are both valid Lojbanizations of ``John Jones''.
The final arbiter of the correct form of a name is the person doing the naming, although most cultures grant people the right to determine how they want their own name to be spelled and pronounced. The English name ``Mary'' can thus be Lojbanized as ``meris.'', ``maris.'', ``meiris.'', ``merix.'', or even ``marys.''. The last alternative is not pronounced much like its English equivalent, but may be desirable to someone who values spelling over pronunciation. The final consonant need not be an ``s''; there must, however, be some Lojban consonant at the end.
Names are not permitted to have the sequences ``la'', ``lai'', or ``doi'' embedded in them, unless the sequence is immediately preceded by a consonant. These minor restrictions are due to the fact that all Lojban cmene embedded in a speech stream will be preceded by one of these words or by a pause. With one of these words embedded, the cmene might break up into valid Lojban words followed by a shorter cmene. However, break-up cannot happen after a consonant, because that would imply that the word before the ``la'', or whatever, ended in a consonant without pause, which is impossible.
For example, the invalid name ``laplas.'' would look like the Lojban words ``la plas.'', and ``ilanas.'' would be misunderstood as ``.i la nas.''. However, ``nederlants.'' cannot be misheard as ``neder lants.'', because ``neder'' with no following pause is not a possible Lojban word.
There are close alternatives to these forbidden sequences that can be used in Lojbanizing names, such as ``ly'', ``lei'', and ``dai'' or ``do'i'', that do not cause these problems.
Lojban cmene are identifiable as word forms by the following characteristics:
- They must end in one or more consonants. There are no rules about how many consonants may appear in a cluster in cmene, provided that each consonant pair (whether standing by itself, or as part of a larger cluster) is a permissible pair.
- They may contain the letter y as a normal, non-hyphenating vowel. They are the only kind of Lojban word that may contain the two diphthongs ``iy'' and ``uy''.
- They are always followed in speech by a pause after the final consonant, written as ``.''.
- They may be stressed on any syllable; if this syllable is not the penultimate one, it must be capitalized when writing. Neither names nor words that begin sentences are capitalized in Lojban, so this is the only use of capital letters.
8.11) pav. the One from the cmavo ``pa'', with rafsi ``pav'', meaning ``one''
8.12) sol. the Sun from the gismu ``solri'', meaning ``solar'', or actually ``pertaining to the Sun''
8.13) ralj. Chief (as a title) from the gismu ``ralju'', meaning ``principal''.
8.14) nol. Lord/Lady from the gismu ``nobli'', with rafsi ``nol'', meaning ``noble''.
To Lojbanize a name from the various natural languages, apply the following rules:
- Eliminate double consonants and silent letters.
- Add a final ``s'' or ``n'' (or some other consonant that sounds good) if the name ends in a vowel.
- Convert all sounds to their closest Lojban equivalents.
- If possible and acceptable, shift the stress to the penultimate (next-to-the-last) syllable. Use commas and capitalization in written Lojban when it is necessary to preserve non-standard syllabication or stress. Do not capitalize names otherwise.
- If the name contains an impermissible consonant pair, insert a vowel between the consonants: ``y'' is recommended.
- No cmene may have the syllables ``la'', ``lai'', or ``doi'' in them, unless immediately preceded by a consonant. If these combinations are present, they must be converted to something else. Possible substitutions include ``ly'', ``ly'i'', and ``dai'' or ``do'i'', respectively.
- Change double consonants other than ``cc'' to single consonants.
- Change ``cc'' before a front vowel to ``kc'', but otherwise to ``k''.
- Change ``c'' before a back vowel and final ``c'' to ``k''.
- Change ``ng'' before a consonant (other than ``h'') and final ``ng'' to ``n''.
- Change ``x'' to ``z'' initially, but otherwise to ``ks''.
- Change ``pn'' to ``n'' initially.
- Change final ``ie'' and ``ii'' to ``i''.
- Make the following idiosyncratic substitutions:
- Change ``aa'' to ``a'', ``ae'' to ``e'', ``ch'' to ``k'', ``ee'' to ``i'', ``eigh'' to ``ei'', ``ew'' to ``u'', ``igh'' to ``ai'', ``oo'' to ``u'', ``ou'' to ``u'', ``ow'' to ``au'', ``ph'' to ``f'', ``q'' to ``k'', ``sc'' to ``sk'', ``w'' to ``u'', and ``y'' to ``i''.
- However, the diphthong substitutions should not be done if the two vowels are in two different syllables.
- Change ``h'' between two vowels to ``''', but otherwise remove it completely. If preservation of the ``h'' seems essential, change it to ``x'' instead.
- Place ``''' between any remaining vowel pairs that do not form Lojban diphthongs.
Some further examples of Lojbanized names are:
English ``Mary'' meris. or meiris.
English ``Smith'' smit.
English ``Jones'' djonz.
English ``John'' djan. or jan. (American) or djon. or jon. (British)
English ``Alice'' .alis.
English ``Elise'' .eLIS.
English ``Johnson'' djansn.
English ``William'' .uiliam. or .uil,iam.
English ``Brown'' braun.
English ``Charles'' tcarlz.
French ``Charles'' carl.
French ``De Gaulle'' dyGOL.
German ``Heinrich'' xainrix.
Spanish ``Joaquin'' xuaKIN.
Russian ``Svetlana'' sfietlanys.
Russian ``Khrushchev'' xrucTCOF.
Hindi ``Krishna'' kricnas.
Polish ``Lech Walesa'' lex. va,uensas.
Spanish ``Don Quixote'' don. kicotes. or modern Spanish: don. kixotes. or Mexican dialect: don. ki'otes.
Chinese ``Mao Zedong'' maudzydyn.
Japanese ``Fujiko'' fudjikos. or fujikos.
Summarized in one place, here are the rules for inserting pauses between Lojban words:
- Any two words may have a pause between them; it is always illegal to pause in the middle of a word, because that breaks up the word into two words.
- Every word ending in a consonant must be followed by a pause. Necessarily, all such words are cmene.
- Every word beginning with a vowel must be preceded by a pause. Such words are either cmavo, fu'ivla, or cmene; all gismu and lujvo begin with consonants.
- Every cmene must be preceded by a pause, unless the immediately preceding word is one of the cmavo ``la'', ``lai'', ``la'i'', or ``doi'' (which is why those strings are forbidden in cmene). However, the situation triggering this rule rarely occurs.
- If the last syllable of a word bears the stress, and a brivla follows, the two must be separated by a pause, to prevent confusion with the primary stress of the brivla. In this case, the first word must be either a cmavo or a cmene with unusual stress (which already ends with a pause, of course).
- A cmavo of the form ``Cy'' must be followed by a pause unless another ``Cy''-form cmavo follows.
- When non-Lojban text is embedded in Lojban, it must be preceded and followed by pauses. (How to embed non-Lojban text is explained in Chapter 19.)
Given a tanru which expresses an idea to be used frequently, it can be turned into a lujvo by following the lujvo-making algorithm which is given in Section 11.
In building a lujvo, the first step is to replace each gismu with a rafsi that uniquely represents that gismu. These rafsi are then attached together by fixed rules that allow the resulting compound to be recognized as a single word and to be analyzed in only one way.
There are three other complications; only one is serious.
The first is that there is usually more than one rafsi that can be used for each gismu. The one to be used is simply whichever one sounds or looks best to the speaker or writer. There are usually many valid combinations of possible rafsi. They all are equally valid, and all of them mean exactly the same thing. (The scoring algorithm given in Section 12 is used to choose the standard form of the lujvo --- the version which would be entered into a dictionary.)
The second complication is the serious one. Remember that a tanru is ambiguous --- it has several possible meanings. A lujvo, or at least one that would be put into the dictionary, has just a single meaning. Like a gismu, a lujvo is a predicate which encompasses one area of the semantic universe, with one set of places. Hopefully the meaning chosen is the most useful of the possible semantic spaces. A possible source of linguistic drift in Lojban is that as Lojbanic society evolves, the concept that seems the most useful one may change.
You must also be aware of the possibility of some prior meaning of a new lujvo, especially if you are writing for posterity. If a lujvo is invented which involves the same tanru as one that is in the dictionary, and is assigned a different meaning (or even just a different place structure), linguistic drift results. This isn't necessarily bad. Every natural language does it. But in communication, when you use a meaning different from the dictionary definition, someone else may use the dictionary and therefore misunderstand you. You can use the cmavo ``za'e'' (explained in Chapter 19) before a newly coined lujvo to indicate that it may have a non-dictionary meaning.
The essential nature of human communication is that if the listener understands, then all is well. Let this be the ultimate guideline for choosing meanings and place structures for invented lujvo.
The third complication is also simple, but tends to scare new Lojbanists with its implications. It is based on Zipf's Law, which says that the length of words is inversely proportional to their frequency of usage. The shortest words are those which are used most; the longest ones are used least. Conversely, commonly used concepts will tend to be abbreviated. In English, we have abbreviations, acronyms, and jargon, all of which represent complex ideas that are used often by small groups of people, who shorten them to convey more information more rapidly.
Therefore, given a complicated tanru with grouping markers, abstraction markers, and other cmavo in it to make it syntactically unambiguous, the psychological basis of Zipf's Law may compel the lujvo-maker to drop some of the cmavo to make a shorter (technically incorrect) tanru, and then use that tanru to make the lujvo.
This doesn't lead to ambiguity, as it might seem to. A given lujvo still has exactly one meaning and place structure. It is just that more than one tanru is competing for the same lujvo. But more than one meaning for the tanru was already competing for the ``right'' to define the meaning of the lujvo. Someone has to use judgment in deciding which one meaning is to be chosen over the others.
If the lujvo made by a shorter form of tanru is in use, or is likely to be useful for another meaning, the decider then retains one or more of the cmavo, preferably ones that set this meaning apart from the shorter form meaning that is used or anticipated. As a rule, therefore, the shorter lujvo will be used for a more general concept, possibly even instead of a more frequent word. If both words are needed, the simpler one should be shorter. It is easier to add a cmavo to clarify the meaning of the more complex term than it is to find a good alternate tanru for the simpler term.
And of course, we have to consider the listener. On hearing an unknown word, the listener will decompose it and get a tanru that makes no sense or the wrong sense for the context. If the listener realizes that the grouping operators may have been dropped out, he or she may try alternate groupings, or try inserting an abstraction operator if that seems plausible. (The grouping of tanru is explained in Chapter 5; abstraction is explained in Chapter 11.) Plausibility is the key to learning new ideas and to evaluating unfamiliar lujvo.
The following is the current algorithm for generating Lojban lujvo given a known tanru and a complete list of gismu and their assigned rafsi. The algorithm was designed by Bob LeChevalier and Dr. James Cooke Brown for computer program implementation. It was modified in 1989 with the assistance of Nora LeChevalier, who detected a flaw in the original ``tosmabru test''.
Given a tanru that is to be made into a lujvo:
- Choose a 3-letter or 4-letter rafsi for each of the gismu and cmavo in the tanru except the last.
- Choose a 3-letter (CVV-form or CCV-form) or 5-letter rafsi for the final gismu in the tanru.
- Join the resulting string of rafsi, initially without hyphens.
- Add hyphen letters where necessary. It is illegal to add a hyphen at a place that is not required by this algorithm. Right-to-left tests are recommended, for reasons discussed below.
- If there are more than two words in the tanru, put an ``r''-hyphen (or an ``n''-hyphen) after the first rafsi if it is CVV-form. If there are exactly two words, then put an ``r''-hyphen (or an ``n''-hyphen) between the two rafsi if the first rafsi is CVV-form, unless the second rafsi is CCV-form (for example, ``saicli'' requires no hyphen). Use an ``r''-hyphen unless the letter after the hyphen is ``r'', in which case use an ``n''-hyphen. Never use an ``n''-hyphen unless it is required.
- Put a ``y''-hyphen between the consonants of any impermissible consonant pair. This will always appear between rafsi.
- Put a ``y''-hyphen after any 4-letter rafsi form.
- Test all forms with one or more initial CVC-form rafsi --- with the pattern ``CVC ... CVC + X'' --- for ``tosmabru failure''. X must either be a CVCCV long rafsi that happens to have a permissible initial pair as the consonant cluster, or is something which has caused a ``y''-hyphen to be installed between the previous CVC and itself by one of the above rules. The test is as follows:
- Examine all the C/C consonant pairs that join the CVC rafsi, and also the pair between the last CVC and the X portion, ignoring any ``y''-hyphen before the X. These consonant pairs are called ``joints''.
- If all of those joints are permissible initials, then the trial word will break up into a cmavo and a shorter brivla. If not, the word will not break up, and no further hyphens are needed.
- Install a ``y''-hyphen at the first such joint.
Note that the ``tosmabru test'' implies that the algorithm will be more efficient if rafsi junctures are tested for required hyphens from right to left, instead of from left to right; when the test is required, it cannot be completed until hyphenation to the right has been determined.
This scoring algorithm was devised by Bob and Nora LeChevalier in 1989. It is not the only possible algorithm, but it usually gives a choice that people find preferable. The algorithm may be changed in the future. The lowest-scoring variant will usually be the dictionary form of the lujvo. (In previous versions, it was the highest-scoring variant.)
- Count the total number of letters, including hyphens and apostrophes; call it ``L''.
- Count the number of apostrophes; call it ``A''.
- Count the number of ``y''-, ``r''-, and ``n''-hyphens; call it ``H''.
- For each rafsi, find the value in the following table. Sum this value over all rafsi; call it ``R'':
CVC/CV (final) (-sarji) 1
CVC/C (-sarj-) 2
CCVCV (final) (-zbasu) 3
CCVC (-zbas-) 4
CVC (-nun-) 5
CVV with an apostrophe (-ta'u-) 6
CCV (-zba-) 7
CVV with no apostrophe (-sai-) 8
- Count the number of vowels, not including ``y''; call it ``V''.
The score is then:
- (1000 * L) - (500 * A) + (100 * H) - (10 * R) - V
Here are some lujvo with their scores (not necessarily the lowest scoring forms for these lujvo, nor even necessarily sensible lujvo):
12.1) zbasai
zba + sai
(1000 * 6) - (500 * 0) + (100 * 0) - (10 * 15) - 3 = 5847
12.2) nunynau
nun + y + nau
(1000 * 7) - (500 * 0) + (100 * 1) - (10 * 13) - 3 = 6967
12.3) sairzbata'u
sai + r + zba + ta'u
(1000 * 11) - (500 * 1) + (100 * 1) - (10 * 21) - 5 = 10385
12.4) zbazbasysarji
zba + zbas + y + sarji
(1000 * 13) - (500 * 0) + (100 * 1) - (10 * 12) - 4 = 12976
This section contains examples of making and scoring lujvo. First, we will start with the tanru ``gerku zdani'' (``dog house'') and construct a lujvo meaning ``doghouse'', that is, a house where a dog lives. We will use a brute-force application of the algorithm in Section 12, using every possible rafsi.
The rafsi for ``gerku'' are:
- -ger-, -ge'u-, -gerk-, -gerku
The rafsi for ``zdani'' are:
- -zda-, -zdan-, -zdani.
Step 1 of the algorithm directs us to use ``-ger-'', ``-ge'u-'' and ``-gerk-'' as possible rafsi for ``gerku''; Step 2 directs us to use ``-zda-'' and ``-zdani'' as possible rafsi for ``zdani''. The six possible forms of the lujvo are then:
- ger-zda ger-zdani ge'u-zda ge'u-zdani gerk-zda gerk-zdani
We must then insert appropriate hyphens in each case. The first two forms need no hyphenation: ``ge'' cannot fall off the front, because the following word would begin with ``rz'', which is not a permissible initial consonant pair. So the lujvo forms are ``gerzda'' and ``gerzdani''.
The third form, ``ge'u-zda'', needs no hyphen, because even though the first rafsi is CVV, the second one is CCV, so there is a consonant cluster in the first five letters. So ``ge'uzda'' is this form of the lujvo.
The fourth form, ``ge'u-zdani'', however, requires an ``r''-hyphen; otherwise, the ``ge'u-'' part would fall off as a cmavo. So this form of the lujvo is ``ge'urzdani''.
The last two forms require ``y''-hyphens, as all 4-letter rafsi do, and so are ``gerkyzda'' and ``gerkyzdani'' respectively.
The scoring algorithm is heavily weighted in favor of short lujvo, so we might expect that ``gerzda'' would win. Its L score is 6, its A score is 0, its H score is 0, its R score is 12, and its V score is 3, for a final score of 5878. The other forms have scores of 7917, 6367, 9506, 8008, and 10047 respectively. Consequently, this lujvo would probably appear in the dictionary in the form ``gerzda''.
For the next example, we will use the tanru ``bloti klesi'' (``boat class'') presumably referring to the category (rowboat, motorboat, cruise liner) into which a boat falls. We will omit the long rafsi from the process, since lujvo containing long rafsi are almost never preferred by the scoring algorithm when there are short rafsi available.
The rafsi for ``bloti'' are ``-lot-'', ``-blo-'', and ``-lo'i-''; for ``klesi'' they are ``-kle-'' and ``-lei-''. Both these gismu are among the handful which have both CVV-form and CCV-form rafsi, so there is an unusual number of possibilities available for a two-part tanru:
lotkle blokle lo'ikle lotlei blolei lo'irlei
Only ``lo'irlei'' requires hyphenation (to avoid confusion with the cmavo sequence ``lo'i lei''). All six forms are valid versions of the lujvo, as are the six further forms using long rafsi; however, the scoring algorithm produces the following results:
lotkle 5878
blokle 5858
lo'ikle 6367
lotlei 5867
blolei 5847
lo'irlei 7456
So the form ``blolei'' is preferred, but only by a tiny margin over ``blokle''; the next two forms are only slightly worse; ``lo'ikle'' suffers because of its apostrophe, and ``lo'irlei'' because of having both apostrophe and hyphen.
Our third example will result in forming both a lujvo and a name from the tanru ``logji bangu girzu'', or ``logical-language group'' in English. (``The Logical Language Group'' is the name of the publisher of this book and the organization for the promotion of Lojban.) The available rafsi are ``-loj-'' and ``-logj-''; ``-ban-'', ``-bau-'', and ``-bang-''; and ``-gri-'' and ``-girzu'', and (for name purposes only) ``-gir-'' and ``-girz-''. The resulting 12 lujvo possibilities are:
loj-ban-gri loj-bau-gri loj-bang-gri logj-ban-gri logj-bau-gri logj-bang-gri
loj-ban-girzu loj-bau-girzu loj-bang-girzu logj-ban-girzu logj-bau-girzu logj-bang-girzu
and the 12 name possibilities are:
loj-ban-gir. loj-bau-gir. loj-bang-gir. logj-ban-gir. logj-bau-gir. logj-bang-gir. loj-ban-girz. loj-bau-girz. loj-bang-girz. logj-ban-girz. logj-bau-girz. logj-bang-girz.
After hyphenation, we have:
lojbangri lojbaugri lojbangygri logjybangri logjybaugri logjybangygri
lojbangirzu lojbaugirzu lojbangygirzu logjybangirzu logjybaugirzu logjybangygirzu
lojbangir. lojbaugir. lojbangygir. logjybangir. logjybaugir. logjybangygir.
lojbangirz. lojbaugirz. lojbangygirz. logjybangirz. logjybaugirz. logjybangygirz.
The only fully reduced lujvo forms are ``lojbangri'' and ``lojbaugri'', of which the latter has a slightly lower score: 8827 versus 8796, respectively. However, for the name of the organization, we chose to make sure the name of the language was embedded in it, and to use the clearer long-form rafsi for ``girzu'', producing ``lojbangirz.''
Finally, here is a four-part lujvo with a cmavo in it, based on the tanru ``nakni ke cinse ctuca'' or ``male (sexual teacher)''. The ``ke'' cmavo ensures the interpretation ``teacher of sexuality who is male'', rather than ``teacher of male sexuality''. Here are the possible forms of the lujvo, both before and after hyphenation:
nak-kem-cin-ctu nakykemcinctu
nak-kem-cin-ctuca nakykemcinctuca
nak-kem-cins-ctu nakykemcinsyctu
nak-kem-cins-ctuca nakykemcinsyctuca
nakn-kem-cin-ctu naknykemcinctu
nakn-kem-cin-ctuca naknykemcinctuca
nakn-kem-cins-ctu naknykemcinsyctu
nakn-kem-cins-ctuca naknykemcinsyctuca
Of these forms, ``nakykemcinctu'' is the shortest and is preferred by the scoring algorithm. On the whole, however, it might be better to just make a lujvo for ``cinse ctuca'' (which would be ``cinctu'') since the sex of the teacher is rarely important. If there was a reason to specify ``male'', then the simpler tanru ``nakni cinctu'' (``male sexual-teacher'') would be appropriate. This tanru is actually shorter than the four-part lujvo, since the ``ke'' required for grouping need not be expressed.
The gismu were created through the following process:
- At least one word was found in each of the six source languages (Chinese, English, Hindi, Spanish, Russian, Arabic) corresponding to the proposed gismu. This word was rendered into Lojban phonetics rather liberally: consonant clusters consisting of a stop and the corresponding fricative were simplified to just the fricative (``tc'' became ``c'', ``dj'' became ``j'') and non-Lojban vowels were mapped onto Lojban ones. Furthermore, morphological endings were dropped. The same mapping rules were applied to all six languages for the sake of consistency.
- All possible gismu forms were matched against the six source-language forms. The matches were scored as follows:
- If three or more letters were the same in the proposed gismu and the source-language word, and appeared in the same order, the score was equal to the number of letters that were the same. Intervening letters, if any, did not matter.
- If exactly two letters were the same in the proposed gismu and the source-language word, and either the two letters were consecutive in both words, or were separated by a single letter in both words, the score was 2. Letters in reversed order got no score.
- Otherwise, the score was 0.
- The scores were divided by the length of the source-language word in its Lojbanized form, and then multiplied by a weighting value specific to each language, reflecting the proportional number of first-language and second-language speakers of the language. (Second-language speakers were reckoned at half their actual numbers.) The weights were chosen to sum to 1.00. The sum of the weighted scores was the total score for the proposed gismu form.
- Any gismu forms that conflicted with existing gismu were removed. Obviously, being identical with an existing gismu constitutes a conflict. In addition, a proposed gismu that was identical to an existing gismu except for the final vowel was considered a conflict, since two such gismu would have identical 4-letter rafsi.
- More subtly: If the proposed gismu was identical to an existing gismu except for a single consonant, and the consonant was ``too similar'' based on the following table, then the proposed gismu was rejected.
proposed gismu    existing gismu
b    p, v
c    j, s
d    t
f    p, v
g    k, x
j    c, z
k    g, x
l    r
m    n
n    m
p    b, f
r    l
s    c, z
t    d
v    b, f
x    g, k
z    j, s
See Section 4 for an example.
- The gismu form with the highest score usually became the actual gismu. Sometimes a lower-scoring form was used to provide a better rafsi. A few gismu were changed in error as a result of transcription blunders (for example, the gismu ``gismu'' should have been ``gicmu'', but it's too late to fix it now).
The language weights used to make most of the gismu were:
Chinese 0.36
English 0.21
Hindi 0.16
Spanish 0.11
Russian 0.09
Arabic 0.07
reflecting 1985 number-of-speakers data. A few gismu were made much later using updated weights:
Chinese 0.347
Hindi 0.196
English 0.160
Spanish 0.123
Russian 0.089
Arabic 0.085
Note that the stressed vowel of the gismu was considered sufficiently distinctive that two or more gismu may differ only in this vowel; as an extreme example, ``bradi'', ``bredi'', ``bridi'', and ``brodi'' (but fortunately not ``brudi'') are all existing gismu.
The following gismu were not made by the gismu creation algorithm. They are, in effect, coined words similar to fu'ivla. They are exceptions to the otherwise mandatory gismu creation algorithm where there was sufficient justification for such exceptions. Except for the small metric prefixes and the assignable predicates beginning with ``brod-'', they all end in the letter ``o'', which is otherwise a rare letter in Lojban gismu.
The following gismu represent concepts that are sufficiently unique to Lojban that they were either coined from combining forms of other gismu, or else made up out of whole cloth. These gismu are thus conceptually similar to lujvo even though they are only five letters long; however, unlike lujvo, they have rafsi assigned to them for use in building more complex lujvo. Assigning gismu to these concepts helps to keep the resulting lujvo reasonably short.
broda 1st assignable predicate
brode 2nd assignable predicate
brodi 3rd assignable predicate
brodo 4th assignable predicate
brodu 5th assignable predicate
cmavo structure word (from ``cmalu valsi'')
lojbo Lojbanic (from ``logji bangu'')
lujvo compound word (from ``pluja valsi'')
mekso Mathematical EXpression
It is important to understand that even though ``cmavo'', ``lojbo'', and ``lujvo'' were made up from parts of other gismu, they are now full-fledged gismu used in exactly the same way as all other gismu, both in grammar and in word formation.
The following three groups of gismu represent concepts drawn from the international language of science and mathematics. They are used for concepts that are represented in most languages by a root which is recognized internationally.
Small metric prefixes (less than 1):
decti .1/deci
centi .01/centi
milti .001/milli
mikri 1E-6/micro
nanvi 1E-9/nano
picti 1E-12/pico
femti 1E-15/femto
xatsi 1E-18/atto
zepti 1E-21/zepto
gocti 1E-24/yocto
Large metric prefixes (greater than 1):
dekto 10/deka
xecto 100/hecto
kilto 1000/kilo
megdo 1E6/mega
gigdo 1E9/giga
terto 1E12/tera
petso 1E15/peta
xexso 1E18/exa
zetro 1E21/zetta
gotro 1E24/yotta
Other scientific or mathematical terms:
delno candela
kelvo kelvin
molro mole
radno radian
sinso sine
stero steradian
tanjo tangent
xampo ampere
The gismu ``sinso'' and ``tanjo'' were only made non-algorithmically because they were identical (having been borrowed from a common source) in all the dictionaries that had translations. The other terms in this group are units in the international metric system; some metric units, however, were made by the ordinary process (usually because they are different in Chinese).
Finally, there are the cultural gismu, which are also borrowed, but by modifying a word from one particular language, instead of using the multi-lingual gismu creation algorithm. Cultural gismu are used for words that have local importance to a particular culture; other cultures or languages may have no word for the concept at all, or may borrow the word from its home culture, just as Lojban does. In such a case, the gismu algorithm, which uses weighted averages, doesn't accurately represent the frequency of usage of the individual concept. Cultural gismu are not even required to be based on the six major languages.
The six Lojban source languages:
jungo Chinese (from ``Zhong1 guo2'')
glico English
xindo Hindi
spano Spanish
rusko Russian
xrabo Arabic
Seven other widely spoken languages that were on the list of candidates for gismu-making, but weren't used:
bengo Bengali
porto Portuguese
baxso Bahasa Melayu/Bahasa Indonesia
ponjo Japanese (from ``Nippon'')
dotco German (from ``Deutsch'')
fraso French (from ``Français'')
xurdo Urdu
(Urdu and Hindi began as the same language with different writing systems, but have now become somewhat different, principally in borrowed vocabulary. Urdu-speakers were counted along with Hindi-speakers when weights were assigned for gismu-making purposes.)
Countries with a large number of speakers of any of the above languages (where the meaning of ``large'' is dependent on the specific language):
- English:
merko American
brito British
skoto Scottish
sralo Australian
kadno Canadian
- Spanish:
gento Argentinian
mexno Mexican
- Russian:
softo Soviet/USSR
vukro Ukrainian
- Arabic:
filso Palestinian
jerxo Algerian
jordo Jordanian
libjo Libyan
lubno Lebanese
misro Egyptian (from ``Mizraim'')
morko Moroccan
rakso Iraqi
sadjo Saudi
sirxo Syrian
- Bahasa Melayu/Bahasa Indonesia:
bindo Indonesian
meljo Malaysian
- Portuguese:
brazo Brazilian
- Urdu:
kisto Pakistani
Continents and other large regions of the Earth:
bemro North American (from ``berti merko'')
dzipo Antarctican (from ``cadzu cipni'')
ketco South American (from ``Quechua'')
friko African
polno Polynesian/Oceanic
ropno European
xazdo Asiatic
A few smaller but historically important cultures:
latmo Latin/Roman
srito Sanskrit
xebro Hebrew/Israeli
xelso Greek (from ``Hellas'')
Major world religions:
budjo Buddhist
dadjo Taoist
muslo Islamic/Moslem
xriso Christian
A few terms that cover multiple groups of the above:
jegvo Jehovist (Judeo-Christian-Moslem)
semto Semitic
slovo Slavic
xispo Hispanic (New World Spanish)
The list of cultures represented by gismu, given in Section 15, is unavoidably controversial. Much time has been spent debating whether this or that culture ``deserves a gismu'' or ``must languish in fu'ivla space''. To help defuse this argument, a last-minute proposal was made when this book was already substantially complete. I have added it here with experimental status: it is not yet a standard part of Lojban, since all its implications have not been tested in open debate, and it affects a part of the language (lujvo-making) that has long been stable, but is known to be fragile in the face of small changes. (Many attempts were made to add general mechanisms for making lujvo that contained fu'ivla, but all failed on obvious or obscure counterexamples; finally the general ``zei'' mechanism was devised instead.)
The first part of the proposal is uncontroversial and involves no change to the language mechanisms. All valid Type 4 fu'ivla of the form CCVVCV would be reserved for cultural brivla analogous to those described in Section 15. For example,
16.1) tci'ile Chilean
is of the appropriate form, and passes all tests required of a Stage 4 fu'ivla. No two fu'ivla of this form would be allowed to coexist if they differed only in the final vowel; this rule was applied to gismu, but does not apply to other fu'ivla or to lujvo.
The second, and fully experimental, part of the proposal is to allow rafsi to be formed from these cultural fu'ivla by removing the final vowel and treating the result as a 4-letter rafsi (although it would contain five letters, not four). These rafsi could then be used on a par with all other rafsi in forming lujvo. The tanru
16.2) tci'ile ke canre tutra
Chilean type-of (sand territory)
Chilean desert
could be represented by the lujvo
16.3) tci'ilykemcantutra
which is an illegal word in standard Lojban, but a valid lujvo under this proposal. There would be no short rafsi or 5-letter rafsi assigned to any fu'ivla, so no fu'ivla could appear as the last element of a lujvo.
The cultural fu'ivla introduced under this proposal are called ``rafsi fu'ivla'', since they are distinguished from other Type 4 fu'ivla by the property of having rafsi. If this proposal is workable and introduces no problems into Lojban morphology, it might become standard for all Type 4 fu'ivla, including those made for plants, animals, foodstuffs, and other things.
Introduction to Data Expressions
Introduction to Expressions and Operations
Neither Microsoft Access nor Microsoft Visual Basic is case-sensitive. Therefore, any word we use in this section, whether it names a field or is one of the new terms we introduce, represents the same thing whether it is written in uppercase, lowercase, or a mix. Based on this, the words TRUE, True, and true, as related to Microsoft Access, represent the same word. In the same way, if the words NULL, Null, and null are used in an expression, they represent the same thing.
The data fields we have used so far were created in tables and then made available to other objects (forms and reports), so those objects can implement their own functionality without worrying about displaying empty or insignificant fields. In various scenarios, you will need to display a field that is a combination of other fields.
For example, you may need to combine a first name field and a last name field in order to create a full name field; or, to calculate an employee's weekly salary, you may need to retrieve the value of a Salary field and multiply it by the total number of hours worked in a week. Most, if not all, of these expressions use what we call operators and operands.
An expression, also called an operation, is a technique of combining two or more values or data fields, to either modify an existing value or to produce a new value. Based on this, to create an expression or to perform an operation, you need at least one value or field and one symbol. A value or field involved in an operation is called an operand. A symbol involved in an operation is called an operator.
A unary operator is one that uses only one operand. An operator is referred to as binary if it operates on two operands.
A constant is a value that does not change. The constants you will be using in your databases have already been created and are built into Microsoft Access. Visual Basic for Applications (VBA), the version of Microsoft Visual Basic that ships with Microsoft Access, also provides many constants; however, even if you are aware of them, you will not be able to use those VBA constants here, because Microsoft Access does not inherently "understand" them in regular expressions. For this reason, we will mention here only the constants you can use when building regular expressions.
The algebraic numbers you have been using all the time are constants because they never change. Examples of constant numbers are 12, 0, 1505, or 88146. Therefore, any number you can think of is a constant. Every letter of the alphabet is a constant and is always the same. Examples of constant letters are d, n, and c. Some characters on your keyboard represent symbols that are neither letters nor digits. These are constants too. Examples are &, |, @, or !. The names of people are constants too. In fact, any name you can think of is a constant.
In order to provide a value to an existing field, you can use an operator called assignment and its symbol is "=". It uses the following syntax:
Field/Object = Value/Field/Object
The operand on the left side of the = operator is referred to as the left value; it is the field or object that receives the value. The operand on the right side of the operator is referred to as the right value. It can be a constant, a value, an expression, the name of a field, or an object.
In some other cases, the assignment operator will be part of a longer expression. We will see examples as we move on.
An algebraic value is considered positive if it is greater than 0. As a mathematical convention, when a value is positive, you do not need to express it with the + operator. Just writing the number without any symbol signifies that the number is positive. Therefore, the numbers +4, +228, and +90335 can be, and are better, expressed as 4, 228, or 90335. Because the value does not display a sign, it is referred to as unsigned.
A value is referred to as negative if it is less than 0. To express a negative value, it must be appended with a sign, namely the - symbol. Examples are -12, -448, -32706. A value accompanied by - is referred to as negative. The - sign must be typed on the left side of the number it is used to negate.
Remember that if a number does not have a sign, it is considered positive; therefore, whenever a number is negative, it must carry a - sign. To change a value from positive to negative, you can simply add a - sign to its left. Likewise, if you want to negate the value of a field and assign it to another field, you can type the - operator on its left when assigning it.
Besides a numeric value, the value of a field or an object can also be expressed as being negative by typing a - sign to its left. For example, -txtLength means the value of the control named txtLength must be made negative.
The addition is used to add one value or expression to another. It is performed using the + symbol and its syntax is:
Value1 + Value2
The addition allows you to add two numbers such as 12 + 548 or 5004.25 + 7.63
After performing the addition, you get a result. You can provide such a result to another field of a form or report. This can be done using the assignment operator. The syntax used would be:
= Value1 + Value2
To use the result of this type of operation, you can write it in the Control Source property of the field that would show the result.
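For instance, assuming a form with two hypothetical numeric fields named SubTotal and Shipping (these names are only illustrative), the Control Source of a text box that shows their sum could read:
=[SubTotal]+[Shipping]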
Subtraction is performed by retrieving one value from another value. This is done using the – symbol. The syntax used is:
Value1 - Value2
The value of Value2 is subtracted from the value of Value1.
Multiplication allows adding one value to itself a certain number of times, set by the second value. The multiplication is performed with the * sign which is typed with Shift + 8. Here is an example:
Value1 * Value2
During the operation, Value1 is added to itself repeatedly, Value2 times. The result can be assigned to the Control Source of a field using the assignment operator.
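For example, to produce the weekly salary mentioned at the beginning of this lesson, and assuming hypothetical fields named HourlySalary and WeeklyHours (the names are only illustrative), the Control Source of the result field could read:
=[HourlySalary]*[WeeklyHours]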
The division is used to get the fraction of one number in terms of another number. Microsoft Access provides two types of results for the division operation. If you want the result of the operation to be a natural number, called an integer, use the backslash "\" as the operator. Here is an example:
Value1 \ Value2
This operation can be performed on valid numbers, whether or not they have decimal parts. After the operation, the result is a natural number.
The second type of division results in a decimal number. It is performed with the forward slash "/". Its syntax is:
Value1 / Value2
After the operation is performed, the result is a decimal number.
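For example, assuming hypothetical fields named Total and Quantity, the two forms of division could be written as follows; the first produces a natural number and the second a decimal number:
=[Total]\[Quantity]
=[Total]/[Quantity]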
Exponentiation is the ability to raise a number to the power of another number. This operation is performed using the ^ operator (Shift + 6). It corresponds to the mathematical notation of raising y to the power x.
In Microsoft Access, this formula is written as y^x and means the same thing. Either or both y and x can be values or expressions, but they must carry valid values that can be evaluated.
When the operation is performed, the value of y is raised to the power of x. You can display the result of such an operation in a field using the assignment operator as follows:
=y^x
The division operation gives a number with or without a decimal part, which is fine in some circumstances. Sometimes, however, you will want the value that remains after a division produces a natural result. This remainder operation is performed with the Mod operator. Its syntax is:
Value1 Mod Value2
The result of the operation can be used as you see fit or you can display it in a control using the assignment operator as follows:
= Value1 Mod Value2
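For example, assuming a hypothetical field named TotalMinutes, the following expression would display the minutes left over once the whole hours are taken out:
=[TotalMinutes] Mod 60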
In previous lessons, we learned that a property is something that characterizes or describes an object. For example, users mainly use a text box either to read the text it contains, or to change its content, by changing the existing text or by entering new text. Therefore, the text the user types in a text box is a property of the text box. To access the property of an object, type the name of the object, followed by a period, followed by the name of the property you need. The syntax used is:
ObjectName.PropertyName
The property you are trying to use must be a valid property of the object. In Microsoft Access, to use a property of an object, you must know, either based on experience or with certainty, that the property exists. Even so, unfortunately, not all properties are available in Microsoft Access.
To name our objects so far, in some cases we used a name made of one word without space. In some other cases, we used spaces or special characters in a name. This is possible because Microsoft Access allows a great level of flexibility when it comes to names used in a database. Unfortunately, when such names get involved in an expression, there would be an error or the result would be unpredictable.
To make sure Microsoft Access can recognize any name in an expression, you should include it between an opening square bracket "[" and a closing square brackets "]". Examples are [© Year], [Soc. Sec. #], or [Date of Birth]. In the same way, even if the name is in one word, to be safe, you should (always) include it in square brackets. Examples are [Country], [FirstName], or [SocialSecurityNumber]. Therefore, the =txtLength expression that we referred to can be written =[txtLength].
The objects used in Microsoft Access are grouped in categories called collections. For example, the forms belong to a collection of objects called Forms. Consequently, all forms of your database project belong to the Forms collection. The reports belong to a collection of objects called Reports and all reports of your database belong to the Reports collection. The data fields belong to a collection called Controls and all controls of a form or a report of your database belong to the Controls collection.
To call a particular object in an expression, use the exclamation point operator "!". To do this, type the name of the collection followed by the ! operator, followed by the name of the object you want to access. For example, on a form, if you have a text box called txtLength and you want to refer to it, you can type [Controls]![txtLength]. Therefore, the =txtLength expression that we referred to can be written =Controls!txtLength, and =[txtLength] can be written =Controls![txtLength] or =[Controls]![txtLength].
The name of the collection is used to perform what is referred to as qualification: the name of the collection “qualifies” the object. In other words, it helps the database engine locate the object by referring to its collection. This is useful in case two objects of different categories are being referred to.
In a database, Microsoft Access allows two objects to have the same name, as long as they do not belong to the same category. For example, you cannot have two forms called Employees in the same database. In the same way, you cannot have two reports named Contracts in the same database. On the other hand, you can have a form named Employees and a report named Employees in the same database. For this reason, when creating expressions, it is strongly suggested that you qualify the object you are referring to with its collection. Therefore, when an object named Employees is referred to in an expression, you should specify its collection, using the ! operator. An example would be Forms!Employees, which means the Employees form of the Forms collection. If the name of the form is made of more than one word, or simply for safety, it is strongly suggested that you use square brackets to delimit the name of the form. The form would be accessed with Forms![Employees].
To refer to a control placed on a form or report, you can type the Forms collection, followed by the ! operator, followed by the name of the form, followed by the ! operator and followed by the name of the control. An example would be Forms!People!LastName. Using the assignment operator that we introduced earlier, if on a form named People, you have a control named LastName and you want to assign its value to another control named FullName, in the Control Source property of the FullName field, you can enter one of the following expressions:
=LastName =[LastName] =Controls!LastName =[Controls]![LastName] =Forms!People!LastName =[Forms]![People]![LastName]
These expressions would produce the same result.
Parentheses are used in two main circumstances: in expressions (or operations) or in functions. The parentheses in an expression help to create sections. This regularly occurs when more than one operators are used in an operation. Consider the following operation: 8 + 3 * 5
The result of this operation depends on whether you want to add 8 to 3 then multiply the result by 5 or you want to multiply 3 by 5 and then add the result to 8. Parentheses allow you to specify which operation should be performed first in a multi-operator operation. In our example, if you want to add 8 to 3 first and use the result to multiply it by 5, you would write (8 + 3) * 5. This would produce 55. On the other hand, if you want to multiply 3 by 5 first then add the result to 8, you would write 8 + (3 * 5). This would produce 23.
As you can see, results are different when parentheses are used on an operation that involves various operators. This concept is based on a theory called operator precedence. This theory manages which operation would execute before which one; but parentheses allow you to control the sequence of these operations.
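The same technique applies to expressions used as a Control Source. For example, assuming hypothetical fields named UnitPrice, Quantity, and TaxRate, the following expression multiplies the price by the quantity before applying the tax rate:
=([UnitPrice]*[Quantity])*(1+[TaxRate])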
A function is a task that must be performed to produce a result on a table, a form, or a report. It is like an operation or an expression, with the difference that someone else created it and you can simply use it. For example, instead of using the addition operator "+" to add two values, you could use a function.
In practice, you cannot create a function in Microsoft Access. You can only use those that already exist. These are referred to as built-in functions.
If you had to create a function (remember that we cannot create a function in Microsoft Access; the following sections are only hypothetical but illustrative of the subject of a function), a formula you would use is:
FunctionName()
End
This syntax is very simplistic but indicates that the minimum piece of information a function needs is a name. The name allows you to refer to this function in other parts of the database. The name of the function is followed by parentheses. As stated already, a function is meant to perform a task. This task would be defined or described in the body of the function. In our simple syntax, the body of the function would start just under its name after the parentheses and would stop just above the End word. The person who creates a function also decides what the function can do. Following our simple formula, if we wanted a function that can open Solitaire, it could appear as follows:
FunctionExample()
    Open Solitaire
End
Once a function has been created, it can be used. Using a function is referred to as calling it. To call a simple function like the above FunctionExample, you would just type its name.
The person who creates a function also decides what kind of value the function can return. For example, if you create a function that performs a calculation, the function may return a number. If you create another function that combines a first name and a last name, you can make the function return a string that represents a full name.
When asked to perform its task, a function may need one or more values to work with. If a function needs a value, such a value is called a parameter. The parameter is provided in the parentheses of the function. The formula used to create such a function would be:
ReturnValue FunctionName(Parameter)
End
Once again, the body of the function would be used to define what the function does. For example, if you were writing a function that multiplies its parameter by 12.58, it would appear almost as follows:
Decimal FunctionName(Parameter)
    Parameter * 12.58
End
While a certain function may need one parameter, another function would need many of them. The number and types of parameters of a function depend on its goal. When a function uses more than one parameter, a comma separates them in the parentheses. The syntax used is:
ReturnValue FunctionName(Parameter1, Parameter2, Parameter_n)
End
If you were creating a function that adds its two parameters, it would appear as follows:
NaturalNumber AddTwoNumbers(Parameter1, Parameter2)
    Parameter1 + Parameter2
End
Once a function has been created, it can be used in other parts of the database. Once again, using a function is referred to as calling it. If a function is taking one or more parameters, it is called differently than a function that does not take any parameter. We saw already how you could call a function that does not take any parameter and assign it to a field using its Control Source. If a function is taking one parameter, when calling it, you must provide a value for the parameter, otherwise the function would not work (when you display the form or report, Microsoft Access would display an error). When you call a function that takes a parameter, the parameter is called an argument. Therefore, when calling the function, we would say that the function takes one argument. In the same way, a function with more than one parameter must be called with its number of arguments.
To call a function that takes an argument, type the name of the function followed by the opening parenthesis "(", followed by the value (or the field name) that will be the argument, followed by a closing parenthesis ")". The argument you pass can be a constant number. Here is an example:
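For illustration, using the hypothetical FunctionName described above and an arbitrary constant, such a call could be written as:
=FunctionName(1250.55)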
The value passed as argument can be the name of an existing field. The rule to respect is that, when Microsoft Access will be asked to perform the task(s) for the function, the argument must provide, or be ready to provide, a valid value. As done with the argument-less function, when calling this type of function, you can assign it to a field by using the assignment operator in its Control Source property. Here is an example:
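For illustration, using the hypothetical FunctionName described above and the txtLength control mentioned earlier, such a call could be written as:
=FunctionName([txtLength])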
If the function is taking more than one argument, to call it, type the values for the arguments, in the exact order indicated, separated from each other by a comma. As for the other functions, the calling can be assigned to a field in its Control Source. All the arguments can be constant values, all of them can be the names of fields or objects, or some arguments can be passed as constants and others as names of fields. Here is an example:
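For illustration, using the hypothetical AddTwoNumbers function described above, with one field name and one constant as its arguments, such a call could be written as:
=AddTwoNumbers([txtLength], 1250)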
We have mentioned that, when calling a function that takes an argument, you must supply a value for the argument. There is an exception. Depending on how the function was created, it may be configured to use its own value if you fail, forget, or choose not, to provide one. This is known as the default argument. Not all functions follow this rule and you would know either by checking the documentation of that function or through experience.
If a function that takes one argument has a default value for it, then you do not have to supply a value when calling that function. Such an argument is considered optional. Whenever in doubt, you should provide your own value for the argument. That way, you would not only be on the safe side but also you would know with certainty what value the function had to deal with.
If a function takes more than one argument, some argument(s) may have default values while some others do not. The arguments that have default values can be used and you do not have to supply them.
In the above sections, we saw what operators could be used to create an expression. We also had a glimpse of what a function looked like and how it could be called from the Properties window. In those sections, we saw how to manually create an expression and how to manually call a function. To assist you with writing expressions or calling a (built-in) function and reduce the likelihood of a mistake, Microsoft Access is equipped with a good functional dialog box named the Expression Builder.
The Expression Builder is used to create an expression or call a function that would be used as the Control Source of a field. Therefore, to access the Expression Builder, open the Properties window for the control that will use the expression or function, and click its ellipsis button. This would call the Expression Builder dialog box.
Like every regular dialog box, the Expression Builder starts on top with its title bar that displays its caption, its help button, and its system Close button.
Unlike a regular dialog box, the Expression Builder is resizable: you can enlarge, narrow, heighten, or shorten it, to a certain extent.
Under the title bar and to the left, the expression area, with a large white background, is used to show the current expression when you have written it. If you already know what you want, you can directly type an expression, a function, or a combination of those in the expression area.
Under the expression area, there are various buttons each displaying an operator. To use an operator in your expression, you can click its button. What happens depends on what was already in the expression or what button you clicked. For example, if you first click = and then click &, another section would be added to the expression. Also, to assist you with creating an expression, sometimes when you have clicked a button, Microsoft Access would add «Expr». This is referred to as a placeholder: it indicates to you that you must replace «Expr» with a valid value, a valid expression, or a valid function call.
Under the bar of buttons, there are three list boxes. The left list displays some categories of items. The top node is the name of the database on which you are working. Under the name of the current database are the names of collections of items of a database. The Tables node represents the Tables collection and it holds the names of the tables of the current database. In the same way, the Forms name represents the Forms collection of a database, and it holds the names of the forms of the current database. If there are no items in a collection, the node would display only a yellow button. If there is at least one item in the category, its node would display a + button.
To access an object of your database, expand its node collection by double-clicking its corresponding button or clicking its + button. After you have expanded a node, a list would appear. In some cases, such as the Forms node, another list of categories may appear:
To access an object of a collection, in the left list, you can click its node. This would fill the middle list with some items that would of course depend on what was selected in the left list. Here is an example:
Depending on the object that was clicked in the left list, the middle list can display the Windows controls that are part of, or are positioned on, the form or report. For example, if you click the name of a form in the left list, the middle list would display the names of all the controls on that form. To use one of the controls on the object, in the middle list, you can double-click the item in the middle list. When you do, the name of the control would appear in the expression area.
Some items in the middle list hold their own list of items. To show that list, you must click an item in the middle list. For example, to access the properties of a control positioned on a form, in the left list, expand the Forms node and expand All Forms. Then, in the left list, click the name of a form. This would cause the middle list to display the controls of the selected form. To access the properties of the control, click its name in the middle list. The right list would show its properties:
To use an item from the right list into an expression, you can either click the item and click the Paste button, or you can double-click the item in the right list
Based on these descriptions, to access one of the Microsoft Access built-in functions, in the left list, expand the Functions node and click Built-In Functions. The middle list would display categories of functions. If you see the function you want to use, you can use it. If the right list is too long for you but you know the type of the function you are looking for, you can click its category in the middle list and locate it in the right list.
Once you see the function you want in the right list, you can either click it and click Paste or you can double-click it. If it is a parameter-less function, its name and parentheses would be added to the expression area:
If the function is configured to take arguments, its name and a placeholder for each argument would be added to the expression area:
You must then replace each placeholder with the appropriate value or expression. To assist you with functions, in its bottom section, the Expression Builder shows the syntax of the function, including its name and the name(s) of the argument(s) (unfortunately, this help is not particularly helpful, for example, it shows neither the return type nor the type of each argument).
The top-right section of the Expression Builder displays a few buttons. If you make a mistake after adding an item to the expression area, you can click Undo. To get help while using the Expression Builder, you can click Help. To dismiss the dialog box, you can click Cancel.
After creating the expression, if you are satisfied with it, click OK.
| http://www.functionx.com/access2007/Lesson13.htm | 13
19 | The Jenkins-Traub Algorithm is a standard in the field of numerical computation of polynomial roots, fundamentally developed as a numerical algorithm specifically for the task of computing polynomial roots. In other words, (i) because it was planned from the outset for numerical purposes rather than being simply an adaptation of an analytic formula, it is extremely robust, effectively minimizing the effects of computer round-off error, while (ii) also being extremely efficient compared to more general methods not written specifically for the task of computing polynomial roots; in fact, the algorithm converges to polynomial roots at a rate better than quadratic. Furthermore, since being introduced over thirty years ago, the algorithm has had time to be rigorously tested and has successfully proven its quality; as a result, it has gained a wide distribution as evidenced by its incorporation in commercial software products and the posting on the NETLIB website of source code for programs based on the algorithm.
Given a function which is defined in terms of one or more independent variables, the roots (also called zeros) of the equation are the values of the independent variable(s) for which the function equals zero, f = 0:
Note that, in general, the z values are complex numbers, comprised of real and imaginary components (indicated by the imaginary unit i): z = x + iy.
Consider the following equation:
Finding the values of the variables which satisfy this equation may not be possible by analytical methods, so the equation would be rearranged into the following form:
which is of the form f = 0.
Once in this form (the standard form), solving the original equation becomes a matter of finding the variable values for which f equals zero, and well-developed fields of mathematics and computer science provide several root-finding techniques for solving such a problem. Note that since the theme of this article is polynomial root-finding, further examples will focus on single-variable equations, specifically, polynomials.
Consider the following quadratic equation:
This equation is a second-degree polynomial–the highest power applied to the independent variable is 2. Consequently, this equation has two roots; an n-degree polynomial equation has n roots.
Because the equation is a simple polynomial, a first approach to finding its zeros might be to put the equation into the standard form and make a few educated guesses; a person could quickly determine one root of this equation (a strictly real result). This root could then be divided out of the original polynomial
to yield the second root.
In this simple example, the two roots were found easily; the second root was found immediately after the first one was known. However, in most real-world problems the roots of a quadratic are not found so easily. Furthermore, in problems where the original polynomial is of degree greater than two, when one root is found, the other roots do not follow immediately — more sophisticated techniques are required.
Among the techniques available are analytical ones, in other words, techniques that yield explicit, algebraic results. For example, in the case of a quadratic equation, explicit expressions for the two roots are possible via the Quadratic Formula: once a quadratic equation is arranged into the standard form az^2 + bz + c = 0
(a, b and c are known constants), the two roots are found via the Quadratic Formula: z = (-b ± sqrt(b^2 - 4ac)) / (2a)
The quantity within the square root sign, b^2 - 4ac, is called the discriminant and determines whether the solution is complex or strictly real. If the discriminant is less than zero, the numerator contains an imaginary component and the roots are, therefore, complex. If the discriminant is greater than or equal to zero, the solutions are real. In fact, if the discriminant is zero, the two roots are real and equal: z = -b / (2a).
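To see the discriminant logic in running code, here is a minimal Python sketch (not taken from the article; the example coefficients are invented). Using the cmath module means a negative discriminant simply yields complex roots instead of an error:

    import cmath

    def quadratic_roots(a, b, c):
        # Roots of a*z^2 + b*z + c = 0 via the textbook quadratic formula.
        disc = b * b - 4 * a * c            # the discriminant
        sqrt_disc = cmath.sqrt(disc)        # complex square root handles disc < 0
        return (-b + sqrt_disc) / (2 * a), (-b - sqrt_disc) / (2 * a)

    print(quadratic_roots(1, -3, 2))        # discriminant > 0: two real roots, 2 and 1
    print(quadratic_roots(1, 0, 1))         # discriminant < 0: complex roots 1j and -1j

Note that this is exactly the kind of naive textbook implementation the article cautions against further on: for coefficients of very different magnitudes, round-off in the subtraction can make one computed root inaccurate.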
In addition to the Quadratic Formula for quadratic equations, analogous formulae exist for polynomials of degree three and four. However, for polynomials of degree five and higher, analytical solutions are not possible (except in special cases); only numerical solutions are possible.
For the multitude of real-world problems that are not amenable to an analytical solution, numerical root-finding is a well-developed field offering a wide variety of tools from which to choose; many computer programs written for the computation of roots are based on algorithms that make them applicable to the computation of roots of functions in addition to polynomials, and virtually all of them employ an iterative approach that terminates once the desired degree of tolerance has been achieved. For example, the bisection method is a simple and reliable method for computing roots of a function when they are known ahead of time to be real only. The algorithm starts with the assumption that a zero is somewhere on a user-supplied interval [a,b]. In other words, the function value f(a) is of the opposite sign of f(b) – in going from a to b, f either goes from being positive to being negative, or it goes from being negative to being positive. A point, m, in the middle of the interval [a,b] is then selected. The function is evaluated at point m: f(m). If the sign of f(m) is the same as the sign of f(a), the desired zero is not in the interval [a,m]. In this case, the interval is cut in half and the new interval becomes [m,b]. On the other hand, if the sign of f(m) is not the same as the sign of f(a), the desired zero is in the interval [a,m]. In this case, the interval is cut in half and the new interval becomes [a,m]. This process is repeated until either the interval converges to a desired tolerance or a value of f is found that is within an acceptable tolerance, in which case the value of z at this point is the value returned by the program as the root.
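The bisection loop just described fits in a few lines. The following Python fragment is an illustrative sketch only (the sample function, interval, and tolerance are made-up values, not anything from the article):

    def bisect(f, a, b, tol=1e-12, max_iter=200):
        fa = f(a)
        if fa * f(b) >= 0:
            raise ValueError("f(a) and f(b) must have opposite signs")
        for _ in range(max_iter):
            m = (a + b) / 2.0
            fm = f(m)
            if abs(fm) <= tol or (b - a) / 2.0 <= tol:
                return m                      # accept m as the root
            if (fm > 0) == (fa > 0):          # f(m) has the same sign as f(a)
                a, fa = m, fm                 # so the zero lies in [m, b]
            else:
                b = m                         # the zero lies in [a, m]
        return (a + b) / 2.0

    # Example: the real root of z^3 - 2z - 5 = 0, which lies between 2 and 3.
    print(bisect(lambda z: z**3 - 2*z - 5, 2.0, 3.0))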
In addition to the bisection method, more elegant–and faster converging–techniques are available: the False Position method, Brent’s method, the Newton-Raphson method, the secant method, and others. The Newton-Raphson method uses the function and its derivative to quickly converge to a solution. This method is good for situations in which the derivatives are either known or can be calculated with a low computational cost. In other situations, the cost of computing derivatives may be too high. The secant method, by comparison, does not require the use of derivatives, but the calculation of each iterate requires the use of the two previous iterates and does not converge to a solution as quickly as the Newton-Raphson method. In some situations, this method, too, may be deemed impractical. Usually, when some thought is given to a problem, a particular technique can be applied to it that is more appropriate than another. In fact, for some problems, one technique may fail to find a solution at all, whereas another technique will succeed. For other problems, several techniques may, indeed, be able to solve the problem and the numerical analyst may select the one that is simply more computationally efficient.
The field of numerical root-finding is so well-developed that the use of a good-quality numerical program is recommended — when numerical results are sought — even when an analytical solution to a problem is known. In other words, even though an analytical solution to, say, the quartic equation exists, writing a computer program that simply implements the textbook formula is not recommended; computer round-off errors often render results of such programs meaningless. The use of a robust numerical program, based upon sound theory and an excellent algorithm, and coded to thoroughly deal with computer round-off errors, is the recommended action. The Jenkins-Traub Algorithm is such an algorithm; it is a three-stage, extremely effective, globally convergent algorithm designed specifically for computing the roots of polynomials.
Stage One is the “No Shift” stage; the main purpose of this stage is to accentuate the smaller zeros. The search for a zero is started by taking an initial shift of zero (hence the name) for a fixed number, M, of iterations (M is usually assigned the value 5 on the basis of numerical experience(1)).
Stage Two is the “Fixed Shift” stage; the purpose of this stage is to separate zeros of equal or almost equal magnitude. As a starting point in this stage, a fixed shift of the following form is used: s = β·e^(iθ)
Here β is a lower bound on the magnitudes of the probable zeros in the cluster. The angle θ could be taken at random, since the cluster could be anywhere in the complex plane; however, in practice θ is usually initialized to 49°, putting s near the middle of the first quadrant of the complex plane. After a certain number of iterations, if s does not converge to a root, s is assigned a new value by increasing θ by 94°. Repeated attempts would have the search for a root start with points in all four quadrants of the complex plane until the search is returned to the first quadrant. Should the search, indeed, return to the first quadrant, successive cycles start at points 16° away from the starting point of the preceding cycle.
Stage Three is the “Variable Shift” stage, which is terminated when the computed value of the polynomial at a possible zero is less than or equal to a specified bound.
In addition to the three fundamental stages around which it was developed, the Jenkins-Traub Algorithm incorporates several other techniques for making it as effective as possible. One of those techniques is deflation of the polynomial by synthetic division each time a root is found. Consider the following monic polynomial: P(z) = z^n + a_(n-1)z^(n-1) + ... + a_1z + a_0
The coefficients a_i are known constants and, in general, are complex. Now say a root, z = r, has been found. Synthetic division would be employed to divide that root out of the original polynomial: P(z) = (z - r)(z^(n-1) + b_(n-2)z^(n-2) + ... + b_1z + b_0)
The coefficients b_i are new constants. The root-finding process is then repeated on this new–simpler–polynomial. As each root is found, the polynomial becomes successively simpler and each successive iteration of the algorithm involves, in general, fewer computations.
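Synthetic division itself is only a few lines of code. The sketch below is an illustration (not the algorithm's production implementation); it divides a polynomial, given as a coefficient list in decreasing powers, by the linear factor (z - r):

    def deflate(coeffs, r):
        # coeffs = [c_n, c_(n-1), ..., c_0] for c_n*z^n + ... + c_0.
        # Returns the quotient's coefficients after division by (z - r);
        # the remainder (the polynomial's value at r) is simply dropped.
        quotient = [coeffs[0]]
        for c in coeffs[1:-1]:
            quotient.append(c + r * quotient[-1])
        return quotient

    # (z - 1)(z - 2)(z - 3) = z^3 - 6z^2 + 11z - 6; deflating by the root z = 1
    # leaves z^2 - 5z + 6, i.e. the coefficient list [1, -5, 6].
    print(deflate([1, -6, 11, -6], 1))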
For polynomials whose coefficients are real only, when a complex root is found (say z = x + iy), an additional benefit arises: that root’s complex conjugate is also a root (z = x - iy). In other words, two roots are computed — and the polynomial can be deflated by two degrees — in a single iteration of the algorithm. Furthermore, this deflation involves real only, rather than complex, operations; the product of the two corresponding linear factors is a real quadratic: (z - (x + iy))(z - (x - iy)) = z^2 - 2xz + (x^2 + y^2)
In this case, synthetic division is employed on the polynomial to divide out this real quadratic factor, leaving a deflated polynomial whose coefficients are still real.
In fact, for computing roots of polynomials which have only real coefficients, a modified version of the Jenkins-Traub Algorithm has been written which incorporates several features that take advantage of the characteristics of real-only polynomials to yield significant decreases in program execution time; “If the complex and real algorithms are applied to the same real polynomial, the real algorithm is about four times as fast.”(4)
The Jenkins-Traub Algorithm not only deflates the polynomial as roots are computed, it computes the roots roughly in order of increasing magnitude. This approach is taken because deflation of a polynomial can be unstable unless done by factors in order of increasing magnitude. By starting the search for roots with an initial shift of zero, as is done in Stage One, roots are, indeed, computed roughly in order of increasing magnitude; the factors by which the polynomial is successively deflated are therefore also roughly in order of increasing magnitude, and the deflation of the polynomial is made quite stable.
As a quick check of the effectiveness of the Jenkins-Traub Algorithm, consider a contrived numerical example:
The roots of this polynomial are very close and should test how effectively the algorithm discerns near-multiple roots:
Expanded, the test polynomial becomes
The program, in fact, did a very good job: | http://math-blog.com/2008/03/06/polynomial-root-finding-with-the-jenkins-traub-algorithm/ | 13 |
23 | The assignment of physical processors to processes allows processors to accomplish work. The problem of determining when processors should be assigned and to which processes is called processor scheduling or CPU scheduling.
When more than one process is runnable, the operating system must decide which one to run first. The part of the operating system concerned with this decision is called the scheduler, and the algorithm it uses is called the scheduling algorithm.
In this section we try to answer the following question: what should the scheduler try to achieve?
Many objectives must be considered in the design of a scheduling discipline. In particular, a scheduler should consider fairness, efficiency, response time, turnaround time, throughput, etc. Some of these goals depend on the system one is using, for example a batch system, an interactive system, or a real-time system, but there are also some goals that are desirable in all systems.
Fairness Fairness is important under all circumstances. A scheduler makes sure that each process gets its fair share of the CPU and no process can suffer indefinite postponement. Note that giving equivalent or equal time is not fair. Think of safety control and payroll at a nuclear plant.
Policy Enforcement The scheduler has to make sure that system's policy is enforced. For example, if the local policy is safety then the safety control processes must be able to run whenever they want to, even if it means delay in payroll processes.
Efficiency The scheduler should keep the system (or in particular the CPU) busy 100 percent of the time when possible. If the CPU and all the Input/Output devices can be kept running all the time, more work gets done per second than if some components are idle.
A little thought will show that some of these goals are contradictory. It can be shown (Kleinrock) that any scheduling algorithm that favors some class of jobs hurts another class of jobs. The amount of CPU time available is finite, after all: giving more of it to one class of jobs means giving less to another.
Scheduling algorithms can be divided into two categories with respect to how they deal with clock interrupts.
A scheduling discipline is nonpreemptive if, once a process has been given the CPU, the CPU cannot be taken away from that process.
Following are some characteristics of nonpreemptive scheduling
A scheduling discipline is preemptive if, once a process has been given the CPU, the CPU can be taken away from it.
The strategy of allowing processes that are logically runnable to be temporarily suspended is called Preemptive Scheduling and it is in contrast to the "run to completion" method.
CPU Scheduling deals with the problem of deciding which of the processes in the ready queue is to be allocated the CPU.
Following are some scheduling algorithms we will study
First-Come-First-Served (FCFS) Scheduling
Other names of this algorithm are:
First-Come-First-Served is perhaps the simplest scheduling algorithm. Processes are dispatched according to their arrival time on the ready queue. Being a nonpreemptive discipline, once a process has a CPU, it runs to completion. The FCFS scheduling is fair in the formal sense or human sense of fairness but it is unfair in the sense that long jobs make short jobs wait and unimportant jobs make important jobs wait.
FCFS is more predictable than most other schemes since jobs are served strictly in their order of arrival. The FCFS scheme is not useful in scheduling interactive users because it cannot guarantee good response time. The code for FCFS scheduling is simple to write and understand. One of the major drawbacks of this scheme is that the average waiting time is often quite long.
The First-Come-First-Served algorithm is rarely used as a master scheme in modern operating systems but it is often embedded within other schemes.
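The long-average-wait behaviour is easy to demonstrate with a few lines of Python (the burst times below are invented for illustration; all processes are assumed to arrive at time 0):

    def fcfs_waiting_times(burst_times):
        # Jobs are served in arrival order; each waits for the total
        # burst time of everything ahead of it in the queue.
        waits, elapsed = [], 0
        for burst in burst_times:
            waits.append(elapsed)
            elapsed += burst
        return waits

    bursts = [24, 3, 3]                       # one long job followed by two short ones
    waits = fcfs_waiting_times(bursts)
    print(waits, sum(waits) / len(waits))     # [0, 24, 27], average 17.0

If the two short jobs happened to arrive first, the average wait would drop to 3.0; as it is, the short jobs are stuck behind the long one.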
Round Robin Scheduling
One of the oldest, simplest, fairest and most widely used algorithms is round robin (RR).
In the round robin scheduling, processes are dispatched in a FIFO manner but are given a limited amount of CPU time called a time-slice or a quantum.
If a process does not complete before its CPU-time expires, the CPU is preempted and given to the next process waiting in a queue. The preempted process is then placed at the back of the ready list.
Round Robin Scheduling is preemptive (at the end of time-slice) therefore it is effective in time-sharing environments in which the system needs to guarantee reasonable response times for interactive users.
The only interesting issue with the round robin scheme is the length of the quantum. Setting the quantum too short causes too many context switches and lowers the CPU efficiency. On the other hand, setting the quantum too long may cause poor response time and approximates FCFS.
In any event, the average waiting time under round robin scheduling is often quite long.
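A round-robin dispatcher is straightforward to simulate. The sketch below is illustrative only (the quantum and burst times are made up) and shows the FIFO queue plus preemption at the end of each time slice:

    from collections import deque

    def round_robin(burst_times, quantum):
        # Each queue entry is (process id, remaining CPU time).
        queue = deque(enumerate(burst_times))
        finish, clock = {}, 0
        while queue:
            pid, remaining = queue.popleft()
            run = min(quantum, remaining)
            clock += run
            if remaining > run:
                queue.append((pid, remaining - run))   # preempted: back of the queue
            else:
                finish[pid] = clock                    # process completed
        return finish

    print(round_robin([24, 3, 3], quantum=4))          # completion time of each process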
Shortest-Job-First (SJF) Scheduling
Other name of this algorithm is Shortest-Process-Next (SPN).
Shortest-Job-First (SJF) is a non-preemptive discipline in which waiting job (or process) with the smallest estimated run-time-to-completion is run next. In other words, when CPU is available, it is assigned to the process that has smallest next CPU burst.
The SJF scheduling is especially appropriate for batch jobs for which the run times are known in advance. Since the SJF scheduling algorithm gives the minimum average waiting time for a given set of processes, it is provably optimal with respect to that measure.
The SJF algorithm favors short jobs (or processes) at the expense of longer ones.
The obvious problem with SJF scheme is that it requires precise knowledge of how long a job or process will run, and this information is not usually available.
The best SJF algorithm can do is to rely on user estimates of run times.
In the production environment where the same jobs run regularly, it may be possible to provide reasonable estimate of run time, based on the past performance of the process. But in the development environment users rarely know how their program will execute.
Like FCFS, SJF is nonpreemptive; therefore, it is not useful in a time-sharing environment in which reasonable response time must be guaranteed.
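For jobs that are all ready at the same time, non-preemptive SJF amounts to serving them in order of burst time. The sketch below (made-up numbers again) shows how this shrinks the average wait relative to the FCFS example earlier:

    def sjf_waiting_times(burst_times):
        # Serve the ready jobs shortest-first; waits are returned in the
        # processes' original order.
        order = sorted(range(len(burst_times)), key=lambda i: burst_times[i])
        waits, elapsed = [0] * len(burst_times), 0
        for i in order:
            waits[i] = elapsed
            elapsed += burst_times[i]
        return waits

    waits = sjf_waiting_times([24, 3, 3])
    print(waits, sum(waits) / len(waits))     # [6, 0, 3], average 3.0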
Shortest-Remaining-Time (SRT) Scheduling
The SRT is the preemptive counterpart of SJF and is useful in a time-sharing environment.
In SRT scheduling, the process with the smallest estimated run-time to completion is run next, including new arrivals.
In the SJF scheme, once a job begins executing, it runs to completion.
In the SRT scheme, a running process may be preempted by a newly arriving process with a shorter estimated run-time.
The algorithm SRT has higher overhead than its counterpart SJF.
The SRT must keep track of the elapsed time of the running process and must handle occasional preemptions.
In this scheme, newly arriving short processes will run almost immediately. However, longer jobs have an even longer mean waiting time.
The Shortest-Job-First (SJF) algorithm is a special case of the general priority scheduling algorithm.
The basic idea is straightforward: each process is assigned a priority, and the runnable process with the highest priority is allowed to run.
Equal-Priority processes are scheduled in FCFS order.
An SJF algorithm is simply a priority algorithm where the priority is the inverse of the (predicted) next CPU burst. That is, the longer the CPU burst, the lower the priority and vice versa.
Priority can be defined either internally or externally.
Internally defined priorities use some measurable quantities or qualities to compute priority of a process.
Examples of Internal priorities are:
Externally defined priorities are set by criteria that are external to the operating system, such as
Priority scheduling can be either preemptive or nonpreemptive.
A major problem with priority scheduling is indefinite blocking or starvation.
A solution to the problem of indefinite blockage of the low-priority process is aging.
Aging is a technique of gradually increasing the priority of processes that wait in the system for a long period of time.
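Aging can be pictured as a periodic sweep over the ready queue that bumps the priority of anything that has waited too long. The fragment below is only a sketch of the idea; the dictionary layout, threshold, and boost amount are invented for the example:

    def age_ready_queue(ready, clock, wait_threshold=100, boost=1):
        # ready: list of dicts with 'priority' (larger = more urgent)
        # and 'enqueued_at' (the time the process entered the queue).
        for proc in ready:
            if clock - proc['enqueued_at'] >= wait_threshold:
                proc['priority'] += boost        # long waiters drift upward
                proc['enqueued_at'] = clock      # restart the waiting clock
        return ready

    ready = [{'pid': 1, 'priority': 0, 'enqueued_at': 0},
             {'pid': 2, 'priority': 5, 'enqueued_at': 180}]
    print(age_ready_queue(ready, clock=200))     # pid 1 is boosted, pid 2 is not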
Multilevel Queue Scheduling
A multilevel queue scheduling algorithm partitions the ready queue in several separate queues, for instance
Fig 5.6 - pp. 138 in Sinha
In multilevel queue scheduling, processes are permanently assigned to one queue.
The processes are permanently assigned to a queue based on some property of the process, such as
The algorithm chooses the process from the occupied queue that has the highest priority, and runs that process either preemptively or nonpreemptively.
Each queue has its own scheduling algorithm or policy.
If each queue has absolute priority over lower-priority queues, then no process in a given queue can run unless the queues for all higher-priority processes are empty.
For example, in the above figure no process in the batch queue could run unless the queues for system processes, interactive processes, and interactive editing processes were all empty.
If there is a time slice between the queues then each queue gets a certain amount of CPU times, which it can then schedule among the processes in its queue. For instance;
Since processes do not move between queue so, this policy has the advantage of low scheduling overhead, but it is inflexible.
Multilevel Feedback Queue Scheduling
Multilevel feedback queue-scheduling algorithm allows a process to move between queues.
It uses many ready queues and associate a different priority with each queue.
The algorithm chooses the process with the highest priority from the occupied queues and runs that process either preemptively or nonpreemptively.
If a process uses too much CPU time, it will be moved to a lower-priority queue. Similarly, a process that waits too long in a lower-priority queue may be moved to a higher-priority queue. This form of aging prevents starvation.
Figure 5.7 pp. 140 in Sinha
A process entering the ready queue is placed in queue 0.
If it does not finish within 8 milliseconds time, it is moved to the tail of queue 1.
If it does not complete, it is preempted and placed into queue 2.
Processes in queue 2 run on an FCFS basis, but only when queue 0 and queue 1 are empty. | http://www.personal.kent.edu/~rmuhamma/OpSystems/Myos/schedule.htm | 13
16 | In microeconomic theory, the opportunity cost of a choice is the value of the best alternative forgone, in a situation in which a choice needs to be made between several mutually exclusive alternatives given limited resources. Assuming the best choice is made, it is the "cost" incurred by not enjoying the benefit that would be had by taking the second best choice available. The New Oxford American Dictionary defines it as "the loss of potential gain from other alternatives when one alternative is chosen". Opportunity cost is a key concept in economics, and has been described as expressing "the basic relationship between scarcity and choice". The notion of opportunity cost plays a crucial part in ensuring that scarce resources are used efficiently. Thus, opportunity costs are not restricted to monetary or financial costs: the real cost of output forgone, lost time, pleasure or any other benefit that provides utility should also be considered opportunity costs.
The term was coined in 1914 by Austrian economist Friedrich von Wieser in his book "Theorie der gesellschaftlichen Wirtschaft". It was first described in 1848 by French classical economist Frédéric Bastiat in his essay "What Is Seen and What Is Not Seen".
Opportunity costs in consumption
Opportunity cost may be expressed in terms of anything which is of value. For example, an individual might decide to use a period of vacation time for travel rather than to do household repairs. The opportunity cost of the trip could be said to be the forgone home renovation.
Opportunity costs in production
Opportunity costs may be assessed in the decision-making process of production. If the workers on a farm can produce either one million pounds of wheat or two million pounds of barley, then the opportunity cost of producing one pound of wheat is the two pounds of barley forgone (assuming the production possibilities frontier is linear). Firms would make rational decisions by weighing the sacrifices involved.
Explicit costs
Explicit costs are opportunity costs that involve direct monetary payment by producers. The opportunity cost of the factors of production not already owned by a producer is the price that the producer has to pay for them. For instance, if a firm spends $100 on electrical power consumed, its opportunity cost is $100. The firm has sacrificed $100, which could have been spent on other factors of production.
Implicit costs
Implicit costs are the opportunity costs in factors of production that a producer already owns. They are equivalent to what the factors could earn for the firm in alternative uses, either operated within the firm or rented out to other firms. For example, a firm pays $300 a month all year for rent on a warehouse that only holds product for six months each year. The firm could rent the warehouse out for the unused six months, at any price (assuming a year-long lease requirement), and that would be earnings forgone that could have been spent on other factors of production.
Non-monetary opportunity costs
Opportunity costs are not always monetary units or being able to produce one good over another. The opportunity cost can also be unknown, or spawn a series of infinite sub opportunity costs. For instance, an individual could choose not to ask a girl out on a date, in an attempt to make her more interested ("playing hard to get"), but the opportunity cost could be that they get ignored - which could result in other opportunities being lost.
Note that opportunity cost is not the sum of the available alternatives when those alternatives are, in turn, mutually exclusive to each other – it is the value of the next best use. The opportunity cost of a city's decision to build the hospital on its vacant land is the loss of the land for a sporting center, or the inability to use the land for a parking lot, or the money which could have been made from selling the land. Use for any one of those purposes would preclude the possibility to implement any of the other.
See also
- Budget constraint
- Economic value added
- Opportunity cost of capital
- Parable of the broken window
- Production-possibility frontier
- There Ain't No Such Thing As A Free Lunch
- Time management
- "Opportunity Cost". Investopedia. Retrieved 2010-09-18.
- James M. Buchanan (2008). "Opportunity cost". The New Palgrave Dictionary of Economics Online (Second ed.). Retrieved 2010-09-18.
- "Opportunity Cost". Economics A-Z. The Economist. Retrieved 2010-09-18.
- Friedrich von Wieser (1927). In A. Ford Hinrichs (translator). Social Economics. New York: Adelphi. Retrieved 2011-10-07.
• Friedrich von Wieser (November 1914). Theorie der gesellschaftlichen Wirtschaft [Theory of Social Economics] (in German). Original publication. | http://en.wikipedia.org/wiki/Opportunity_cost | 13 |
31 | What is a logical fallacy?
A 'logical fallacy' is a structural, or purely formal defect in an argument. Logical fallacies fall into two varieties: invalid arguments, and circular arguments.
Primary types of logical fallacies
An argument is said to be valid if and only if the truth of the premises guarantees the truth of the conclusion. It is invalid if and only if the conclusion does not necessarily follow from the premises. It is important to realize that calling an argument 'valid' is not the same as saying its conclusion is true. Logic does not concern itself with the content of an argument - only its form. From the standpoint of logic, the following is a perfectly valid argument:
(p1) Dinosaurs are still living;
(p2) If Dinosaurs are still living, then Elvis is still living;
(c1) Therefore, Elvis is still living.
If the two premises are true, then the conclusion must be true. You and I, of course, know that these premises are not in fact true (in the normal sense of words, that is. Strictly scientifically speaking, all birds are dinosaurs, so premise 1 is true. This does, however, not make premise 2 any more true) - but their falsity is not a matter of pure logic. Simply because somebody says something false, it does not follow that that person has committed a logical fallacy.
Valid arguments fall into two categories: sound and unsound. A sound argument is a valid argument with true premises. An unsound argument is a valid argument with false premises. The above argument about dinosaurs is unsound.
Invalid arguments will always be logically fallacious. By way of example, the following are invalid arguments:
(p1) If x is a human male, then x is a human;
(p2) x is not a human male;
(c1) Therefore, x is not human.
(p1) If July is before June, then George Bush is the thirty-second president of the United States;
(p2) July is not before June;
(c1) Therefore, George Bush is not the thirty-second president of the United States.
Each of these arguments is invalid because it is of the form:
(p1) If A, then B;
(p2) Not-A;
(c1) Therefore, not-B,
which is an invalid logical structure. Simply because the 'if'-clause of an 'if, then' statement is false does not mean that the 'then'-clause is false.
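The invalidity of this form can be checked mechanically by enumerating truth values. The short Python sketch below (not part of the original page) searches for a case in which both premises are true while the conclusion is false:

    from itertools import product

    def counterexamples():
        # Form: (p1) if A then B;  (p2) not-A;  (c1) therefore not-B.
        found = []
        for A, B in product([True, False], repeat=2):
            premise1 = (not A) or B        # material conditional "if A, then B"
            premise2 = not A
            conclusion = not B
            if premise1 and premise2 and not conclusion:
                found.append((A, B))
        return found

    print(counterexamples())   # [(False, True)]: both premises true, conclusion false

The single counterexample (A false, B true) is exactly the situation of a human who is not male: the 'if'-clause is false, yet the 'then'-clause is true.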
Circular Arguments are valid, but they are still considered logical fallacies. More accurately, they are simply useless - they establish nothing that they do not presuppose. The following are simple examples of circular arguments:
(p1) San Francisco is in California;
(c1) Therefore, San Francisco is in California.
(p1) Nobody likes Cherries;
(p2) If anyone liked cherries, then it wouldn't be the case that nobody likes cherries;
(c1) Therefore, nobody likes cherries.
(p1) God exists;
(p2) God knows everything;
(p3) God is not a liar;
(p4) God wrote the Bible;
(p5) The Bible says that God exists;
(c1) Therefore, God exists.
Because these argument must assume what they are attempting to prove, they are logically superfluous. In each case, the argument provides absolutely no support for the conclusion. We might just as well have stated the conclusion without argument. Most often, circular arguments hide their circularity by keeping one of their premises implicit, as in the following:
(p1) God is not a liar;
(p2) God wrote the Bible;
(p3) The Bible says that God exists;
(c1) Therefore, God exists.
Though it does not explicitly say so, this argument obviously depends upon a suppressed premise - namely, that God exists. More accurately, no one would accept the premises unless they had already accepted the conclusion.
Certain types of argumentation may be wrong or misleading without thereby constituting a 'logical fallacy.' It may be terrible to tell someone: 'agree with me or you will die,' but it is not terrible because of its logical structure. Similarly, we may feel that certain groups are too easily convinced to believe something or too quick to adopt a political position on the basis of emotion. However, their doing so does not constitute a logical fallacy.
List of Fallacies
- Ad Hominem
- Ad Hominem Tu Quoque
- Ad Verecundiam
- Appeal to Belief
- Appeal to Common Practice
- Appeal to Consequences of a Belief
- Appeal to Emotion
- Appeal to Fear
- Appeal to Flattery
- Appeal to Novelty
- Appeal to Pity
- Appeal to Popularity
- Appeal to Ridicule
- Appeal to Spite
- Appeal to Tradition
- Begging the Question
- Biased Sample
- Burden of Proof
- Circumstantial Ad Hominem
- Confusing Cause and Effect
- False Dilemma
- False Analogy
- Gambler's Fallacy
- Genetic Fallacy
- Guilt By Association
- Hasty Generalization
- Ignoring A Common Cause
- Middle Ground
- Misleading Vividness
- Moving the Bar
- Personal Attack
- Poisoning the Well
- Post Hoc
- Questionable Cause
- Red Herring
- Relativist Fallacy
- Slippery Slope
- Special Pleading
- Straw Man
- Two Wrongs Make A Right
- Master list of logical fallacies
- Logical fallacy summary
- Manufacturing Consent - Documentary by Noam Chomsky
- Russell's Law - “It Is Impossible To Distinguish A Creationist From A Parody Of A Creationist”
| http://www.freethoughtpedia.com/wiki/Logical_fallacies | 13
23 | Paired data, correlation & regression
Paired Sample t-test
A paired sample t-test is used to determine whether there is a significant difference between the average values of the same measurement made under two different conditions. Both measurements are made on each unit in a sample, and the test is based on the paired differences between these two values. The usual null hypothesis is that the difference in the mean values is zero. For example, the yield of two strains of barley is measured in successive years in twenty different plots of agricultural land (the units) to investigate whether one crop gives a significantly greater yield than the other, on average.
The paired sample t-test is a more powerful alternative to a two sample procedure, such as the two sample t-test, but can only be used when we have matched samples.
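In practice the paired test is usually a single library call on the two columns of measurements. As an illustrative sketch (the yield figures are invented, and it assumes the SciPy library is available):

    from scipy import stats

    # Invented yields for the same five plots under two barley strains.
    strain_a = [4.1, 3.8, 4.4, 5.0, 4.6]
    strain_b = [3.9, 3.9, 4.1, 4.8, 4.2]

    t_stat, p_value = stats.ttest_rel(strain_a, strain_b)   # works on the paired differences
    print(t_stat, p_value)   # a small p-value would suggest a real difference in mean yield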
A correlation coefficient is a number between -1 and 1 which measures the degree to which two variables are linearly related. If there is perfect linear relationship with positive slope between the two variables, we have a correlation coefficient of 1; if there is positive correlation, whenever one variable has a high (low) value, so does the other. If there is a perfect linear relationship with negative slope between the two variables, we have a correlation coefficient of -1; if there is negative correlation, whenever one variable has a high (low) value, the other has a low (high) value. A correlation coefficient of 0 means that there is no linear relationship between the variables.
There are a number of different correlation coefficients that might be appropriate depending on the kinds of variables being studied.
Pearson's Product Moment Correlation Coefficient
Pearson's product moment correlation coefficient, usually denoted by r, is one example of a correlation coefficient. It is a measure of the linear association between two variables that have been measured on interval or ratio scales, such as the relationship between height in inches and weight in pounds. However, it can be misleadingly small when there is a relationship between the variables but it is a non-linear one.
There are procedures, based on r, for making inferences about the population correlation coefficient. However, these make the implicit assumption that the two variables are jointly normally distributed. When this assumption is not justified, a non-parametric measure such as the Spearman Rank Correlation Coefficient might be more appropriate.
See also correlation coefficient.
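One way to see exactly what r measures is to compute it from its definition. The sketch below uses invented height/weight data and plain Python; it divides the sum of cross-products of deviations by the square root of the product of the two sums of squared deviations:

    def pearson_r(x, y):
        n = len(x)
        mean_x, mean_y = sum(x) / n, sum(y) / n
        cross = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
        ss_x = sum((a - mean_x) ** 2 for a in x)
        ss_y = sum((b - mean_y) ** 2 for b in y)
        return cross / (ss_x * ss_y) ** 0.5

    heights = [60, 62, 65, 70, 72]          # invented example data (inches)
    weights = [115, 120, 140, 165, 175]     # invented example data (pounds)
    print(pearson_r(heights, weights))      # close to 1: strong positive linear association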
Spearman Rank Correlation Coefficient
The Spearman rank correlation coefficient is one example of a correlation coefficient. It is usually calculated on occasions when it is not convenient, economic, or even possible to give actual values to variables, but only to assign a rank order to instances of each variable. It may also be a better indicator that a relationship exists between two variables when the relationship is non-linear.
Commonly used procedures, based on the Pearson's Product Moment Correlation Coefficient, for making inferences about the population correlation coefficient make the implicit assumption that the two variables are jointly normally distributed. When this assumption is not justified, a non-parametric measure such as the Spearman Rank Correlation Coefficient might be more appropriate.
See also correlation coefficient.
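Because Spearman's coefficient is Pearson's r applied to ranks, it is easy to sketch directly. The version below assumes no tied values (real implementations average the ranks of ties) and uses invented judge rankings:

    def ranks(values):
        # Rank 1 goes to the smallest value; assumes no ties for simplicity.
        order = sorted(range(len(values)), key=lambda i: values[i])
        r = [0] * len(values)
        for rank, i in enumerate(order, start=1):
            r[i] = rank
        return r

    def spearman_rho(x, y):
        # With no ties, rho has the closed form 1 - 6*sum(d^2) / (n*(n^2 - 1)).
        rx, ry = ranks(x), ranks(y)
        d_sq = sum((a - b) ** 2 for a, b in zip(rx, ry))
        n = len(x)
        return 1 - 6 * d_sq / (n * (n * n - 1))

    judge_1 = [1, 2, 3, 4, 5]               # invented rank-order data
    judge_2 = [2, 1, 4, 3, 5]
    print(spearman_rho(judge_1, judge_2))   # 0.8: the two rankings largely agree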
The method of least squares is a criterion for fitting a specified model to observed data. For example, it is the most commonly used method of defining a straight line through a set of points on a scatterplot.
A regression equation allows us to express the relationship between two (or more) variables algebraically. It indicates the nature of the relationship between two (or more) variables. In particular, it indicates the extent to which you can predict some variables by knowing others, or the extent to which some are associated with others.
The equation will specify the average magnitude of the expected change in Y given a change in X.
The regression equation is often represented on a scatterplot by a regression line.
A regression line is a line drawn through the points on a scatterplot to summarise the relationship between the variables being studied. When it slopes down (from top left to bottom right), this indicates a negative or inverse relationship between the variables; when it slopes up (from bottom left to top right), a positive or direct relationship is indicated.
The regression line often represents the regression equation on a scatterplot.
Simple Linear Regression
Simple linear regression aims to find a linear relationship between a response variable and a possible predictor variable by the method of least squares.
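A least-squares line through a handful of points is one call to NumPy's polyfit. The sketch below uses invented data and assumes NumPy is installed; it also prints the residuals discussed later in this glossary:

    import numpy as np

    x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])     # invented predictor values
    y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])     # invented responses

    slope, intercept = np.polyfit(x, y, deg=1)  # least-squares straight line
    fitted = slope * x + intercept
    residuals = y - fitted                      # the unexplained (left over) part

    print(slope, intercept)
    print(residuals)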
Multiple linear regression aims to find a linear relationship between a response variable and several possible predictor variables.
Nonlinear regression aims to describe the relationship between a response variable and one or more explanatory variables in a non-linear fashion.
Residual (or error) represents unexplained (or residual) variation after fitting a regression model. It is the difference (or left over) between the observed value of the variable and the value suggested by the regression model.
Multiple Regression Correlation Coefficient
The multiple regression correlation coefficient, R², is a measure of the proportion of variability explained by, or due to the regression (linear relationship) in a sample of paired data. It is a number between zero and one and a value close to zero suggests a poor model.
A very high value of R² can arise even though the relationship between the two variables is non-linear. The fit of a model should never simply be judged from the R² value.
A 'best' regression model is sometimes developed in stages. A list of several potential explanatory variables is available and this list is repeatedly searched for variables which should be included in the model. The best explanatory variable is used first, then the second best, and so on. This procedure is known as stepwise regression.
Dummy Variable (in regression)
In regression analysis we sometimes need to modify the form of non-numeric variables, for example sex, or marital status, to allow their effects to be included in the regression model. This can be done through the creation of dummy variables whose role it is to identify each level of the original variables separately.
Transformation to Linearity
Transformations allow us to change all the values of a variable by using some mathematical operation, for example, we can change a number, group of numbers, or an equation by multiplying or dividing by a constant or taking the square root. A transformation to linearity is a transformation of a response variable, or independent variable, or both, which produces an approximate linear relationship between the variables. | http://www.stats.gla.ac.uk/steps/glossary/paired_data.html | 13 |
23 | 40 Acres and a Mule:
Slavery as a legal institution lasted for about 250
years up until the Emancipation
Proclamation of 1865 and for another 100 years, African Americans were
subjected to Jim Crow laws of which they were not seen as legally equal
until 1965. Initially,
reparations were to be paid by giving freed slaves 40 acres of land and a
mule, but the bill was vetoed by President Andrew Johnson in 1869 after
having passed in Congress. However,
the issue was far from being put to rest.
One hundred years later in 1969, the Black
Manifesto was published, demanding monetary compensation equaling $3
billion dollars from predominantly white places of worship (Catholic,
Protestant and Jews) depending on the predetermined amount that the
National Black Economic Development Conference calculated. This request
stemmed out of the Civil Rights movement, a fundamentally moral position
taken up by religious leaders. Its
more radical counterpart, the Black militant and power movement felt that
the Civil Rights movement did little to improve the economic situation
despite what was given in the legal sense through the Equal Rights
Amendments of 1964 and 1965. Initially, there were religious groups and churches fighting
for social programs to eradicate poverty and working against forms of
discrimination, “By fall of 1968 nearly $50 million had been pledged and
some millions expended.” However,
these actions resulted in more emergency, short-term help rather than
systemic change. And with the
election of a more conservative president, President Nixon, the tide in
favor of poverty programs and economic development of black community
changed and it was no longer a national priority.
As a result, the Manifesto,
written by SNCC leader, James Forman, brought to attention the forgotten
or tabled issues at hand. However,
the form of attack was not directed at the government on behalf of the
black churches, but rather a public intrusion on predominantly white
places of worship in which the Manifesto
was read aloud. Needless to
say, the response was immediate and the reparation issue, in this more
modern context, became heated and controversial.
Coming up with a cost for what were considered lost wages
implicated national level guilt as well as suggesting that monetary
compensation would begin to make up for historical oppression:
For centuries we have been forced to live as
colonized people inside the United States, victimized by the most vicious,
racist system in the world. We have helped to build the most industrious
country in the world…We are also not aware that the exploitation of
colored peoples around the world is aided and abetted by white Christian
churches and synagogues…(this) is only a beginning of reparations due us
as people who have been exploited and degraded, brutalized, killed and
In general, the churches that were asked to raise
money for the reparation cause rejected this proposal.
Some absolutely denied any right to the suggested money, whereas
some believed that money should not be given to the black community
directly, but through some federal or state social program.
Something was accomplished, however, as the
religious community became aware of the grievances held by the
black community directed against the church.
The most contemporary manifestation of making
reparations has come about in a law suit against the government headed by
Alexander Pires, books such as Randall Robinson’s The Debt: What Americans Owe Blacks (2000), and Richard F.
America’s Wealth of Races
(1990) and Paying the Social Debt:
What White America Owes Black America (1993). On an international
scale, both the United Nations and Nigeria have formalized a position that
the US should respond to this issue, at least with an apology and at most
to right the wrong by paying in the form of economic compensation. A
growing interest has been fostered at Boston University in the very recent
“Great Debate: Should The U.S. Pay Reparations for Slavery” (November
2001), and earlier, in the short-lived running of David Horowitz’s
student newspaper ad, “Ten Reasons Why Reparations for Blacks is a Bad
Idea for Blacks and Racist Too." The new amount that the most contemporary form of
reparations is close to $8 billion, (if each descendent of a slave
received $150,000) an estimated amount that takes into account what 40
acres and a mule would be worth and lost wages over the 250 year period.
The Manifesto and the
legal case is built on a precedence of making reparations to the Indians,
Holocaust victims, Japanese Internment camp victims, and the Tuskegee
Syphilis experiments (the only reparations given to descendents that were
not in the direct family.)
Briefly, the issues of contention are how Americans
can now be responsible given that slavery ended over 150 years ago, and
given that there is no direct connection between the people of today and
slaves of multiple generations ago. Another
issue at stake is whether monetary compensation can make up for slavery,
and whether an apology and/or developing social programs would be more
appropriate to the present black situation.
Third, making reparations can be seen as a handout, further
stigmatizing and perpetuating the victim mentality of the black community.
Fourth, there is an economic challenge that is implicit in asking
for the money in that it could fundamentally change the economic structure
giving African Americans the upper hand in this society.
And finally, one might ask, could reparations bring about unintended
consequences, such as exacerbating racial tensions and creating or
sustaining racial division in the country?
The most critical issue for this paper begs the moral
and in some cases, explicit religious response. From a theological perspective, the concrete issue of making
reparations for slavery will be analyzed using three main themes:
evil/sin, guilt, and redemption.
Manifesto in asking monetary compensation can be summed up in one
sentence: “Reparations is a scheme for the rearrangement of wealth to
offset past iniquities or correct an imbalance in society.” In this first section of the theological analysis, the
iniquity created by slavery will be analyzed in two ways, the structural
possibility for slavery and the perpetuation of its sinful effect in
today’s society using process theology.
This first section will set up the possibility for approving
reparations, although it will do so critically, and in the end with some
reservations, and will leave it open to further sections in this paper to
finalize this approval or entirely reject this possibility.
Theodicy: The Structural Possibility for Slavery
The institution of slavery unequivocally was an evil
institution manipulating Judeo-Christian ideas to justify the practice.
The Black Manifesto cites
29 grievances against religious organizations, specifically against the
dogma and practices of the church that made it possible to keep slaves in
bondage. The theological
ideas of a sovereign God and the eschatological hope were used to justify
and maintain the cruel treatment of slaves.
Process theology rejects both of these propositions, and offers in
its place an explanation for how slavery came into existence and
justification for liberation from its historical and present oppression.
and the World
According to Norman Pittenger, evil is “that which
holds back, diminishes, or distorts the creative advances of the cosmos
toward the shared increase of good.” (74) Evil is both deprivation and
privation and stands in stark opposition to potential goodness either
through discord or triviality (to choose against the possibility of
goodness.) The status of the
world exists in clash and in harmony between two principles, creative and
destructive principles. The world, thus, is perpetually being made and
perpetually being diminished. The
perishing principle is a result of existential finitude of the world in
that the universe is in constant and eternal process, the things of this
world will always be being.
Given the structure of the world there are capacities
for intrinsic good and evil, instrumental good and evil, and the power for
self-determination. In a
world where good and evil is intrinsic and instrumental, the case can be
made for the cruelest of structures: “The evils of pain, suffering,
injustice, catastrophe, etc. are possible in a world structured to evoke
novelty, integration, adventure, and all of the other components of
worthwhile experiences.” Slavery
can be classified in this way, and justifies a beginning analysis on the
veracity of claims made in behalf of reparations.
However, before this is done, one should examine how evil is
brought into being given the structural possibility for slavery.
Codetermination of Power, and Evil
The theodicy question involves not only the existence
of evil, but also an existent God who is good, and all-powerful.
The possibility for slavery is not a determined reality, but rather
brought about through a series of events, choices, and occasions in time.
Who is responsible for slavery? All actors are implicated, even
God. Although God in his
infinite way is working toward creating increased order and goodness, the
world can work against this. God
can only suggest his initial aim, and in an unlimited way he can persuade
for the world to unify itself in the most optimal, intense way for
altruistic satisfaction. God cannot ultimately be in control simply
because the structure of the world allows God passive power and he cannot
prevent evil from occurring. God
is held responsible to heighten our reception to his persuasion, and to
act in novelty and creativity in the world, but he is both limited in
power and is affected or changed by what happens in the world: “Process
theodicy projects a deity who is deeply involved in and profoundly
affected by the experience of finite creatures.”
This has to do with the principle of codetermination of power in
The powers that cause all events to be is produced
and shared between God and the finite world.
In other words, “God is responsible for evil, but not indictable
for it” because “finite actualities can fail to conform to the divine
aims for it.” Humans are
meant to enjoy and to contribute to the world, so they are given freedom
in direct relation to the level of intensity and instrumentality to bring
about the best possible satisfaction.
However, the more freedom that is given to humans the greater the
possibility that freedom will be used as increased “intense and
instrumental” means to go against God’s initial aim creating more evil
and suffering. God also shares in the pain of the world and is affected by
the demonic forms of impoverishment, injustice, and violence.
In this way, God becomes partially implicated by evil since he is
correlated to all that occurs in the world.
God, as can be concluded by this analysis, is not
omnipotent, although the case can still be made for his goodness and for
his love, since his initial aim is to suggest and make possible
increasingly the good in the world. God has and will always have a concern
for the world. He shows this
concern by acting and disclosing himself in the world and as Pittenger
states, God “can make even the wrath of man, as well as whatever other
evil there is in the world, ‘turn to his praise.’”
Obviously, in the case of slavery, the initial aim of God was
rejected in the most intense of ways.
Clearly, slavery needs to be seen in light of God’s goodness,
human action, and the process of culminating evil in the world.
The next section will deal with how slavery has affected the world
today in terms of human sin and the oppressive force of evil persisting
through the centuries.
Human Sin and
In order to make the case for reparations, one should
establish a direct connection of slavery to the contemporary situation and
thus, establish a case for direct and collective responsibility for
slavery. Stackhouse, a
Christian social ethicist claims, “One of the decisive things we ought
to have done is overcome the generalized structured that cast dimensions
of poverty and racism in the society, an inheritance from slave days now
built into the very fabric of the culture."
Before one can make that claim that as a society we owe a debt to
the black community, one should articulate clearly what was lost,
suffered, and deprived in the event of slavery and its perpetual evil
The essence and purpose of humankind is “the
reality of the decisions of creatures, at every level form the quantum of
energy up to the free choice made by man.”
To be human is to choose, to decide, to create, and to be empowered
to be fulfilled in the world. It
is the choice for self-actualization and self-fulfillment, it is the
“spontaneous, creative self-determination in every event.”
Thus, to take away these basic rights is an act of sin against a
person. In fact original sin, comes from the “situation or state of
deprivation or alienation in which men find themselves.”
Process theology also asserts that human beings do not start with a
level playing field in that original sin affects some individuals more
than others. This is
different, for example in a reformed theology where all
humans are “totally depraved.” Acts
that result in terminating the right to determine one’s future and limit
his/her freedom to be fulfilled is the kind of oppression that occurred in
slavery and as result is present still today.
In fact, clearly the reparation demand is nothing
short of claiming the right, in an economic and social way, to fulfill
their human purpose. Forman
states, that “essentially, the fight for reparation is one of
self-determination and the transfer of power.”
In this way, sin has an indelible effect through the passing of
time. Once that right to freedom to be and to choose had been stripped of
the black community, it remained so and perpetually sustained oppression
far past the point of the Emancipation
Proclamation (1865). The thwarted creative potential has and continues
to deprive the black community from accomplishing and contributing to
society, to its community, and to their personal selves.
And far worse, is the prevention of the black community to be
united and engaged with the initial aim of the infinite God. The following
sections will deal with oppression on the economic and religious level.
Sin And Economic Oppression
Those involved in the initial demands for reparations
held a view that saw slavery as a systemic issue. David Griffin offers a
valuable and description of what kind of structure slavery was; it was
“the corporate structure of alienation and oppression which has been
built up through centuries of human sin.”
The injustice incurred in slavery requires an acknowledgement of
societal responsibility for conditioning black people to feel inferior.
However, at the same time, process theology aligns itself with
liberation theology to say that the black community is “not necessarily
a total victim of (societal) values… individuals can exert an influence
back on it and thereby transform it.” Furthermore, Suchocki contends
that “cumulative acts of human beings (are) the sources of the
demonic.” In process
theology, all acts and occasions of interdependent.
This is how Forman sees it when he asserts,
Operating upon all of us are a whole set of control
factors, many of which we are not aware. These control factors however,
have been drummed in our heads for centuries, and we accept them as
realities, hence the major reason we are not all totally dedicated to
The societal factors that Forman points are systemic
in nature, and more specifically requires an economic response.
To be oppressed is to be fundamentally economically
oppressed in that slaves had and the present black community has a “lack
of adequate material prerequisites for a good life and of the opportunity
to determine their own destinies and to make significant contributions to
history.” For the
proponents of reparations for slavery, it requires a collective change in
the system, and an overturn on who remains in control over the system. Early reparation proponents want to see an economic shift of
power from white hands to black hands either through a peaceful exchange,
and if this did not work, through more violent forms of revolution and
guerilla warfare. (Forman, 115) The
attack on systemic evil has not been just toward society proper, but also
towards the church in its responsibility for perpetuating black
The Church’s Responsibility to Systemic and Historical Sin
The direct responsibility for slavery is not just on
the conscience of society, but on the church as well. As a sign of repentance, the church was asked to pay
reparations long before the government.
And as a responsive community, given the process structure, the
church can continue to perpetuate racial divides or ameliorate the
situation by restoring freedom, power and creative control in and through
society. Even in silence the church stands condemned in a way that
Forman writes so clearly, “Basically the Black Manifesto is an
historical reminder to the white religious establishment...and highlights
the contradictions between words and deeds…(which) has been to form an
unholy alliance with a worldwide system of oppression.”
There ought to be a religious assessment of its responsibility to
the systemic perpetuation of evil and then, a plausible solution to help
the plight of the present black situation.
Critiques and Conclusion of Section I
There are a few points of critique that should be
made in light of the prior discussion to establish the relationship
between accepting or rejecting reparations on a theological basis in the
following two sections. The
discussion here poses several questions through two aims of inquiry, how
the reparation cause is ill-fit to a process theology and how process
theology fails to serve reparation aims.
The first critique is on the God and theodicy issue.
Given God’s persuasive nature, why do so many remain un-persuaded?
This empirical question again tests the goodness of God as well as
his adequacy. Second, process
theology relies heavily on aesthetic qualities of possibility, that the
moral question posed here may not be as important. The world must be given
in this way to offer the possibility for God’s involvement in the world,
but does it do so in a way that makes God more concerned with his initial
aim, then what is really happening in the world?
Third, in the possibility for change, who and what is
given the authority to make things happen, and in the same sense what is
the security in buying into the process theology. Next, do the means
justify the ends, and will the means accomplish the process goal of
fulfillment and creative potentiality?
Fifth, since in process theology emphasis is given to the
individual not the institution, can an institution effectively repent
given this emphasis? Sixth, revolutionary sentiments may or may not be in
line with process theology since violence would be a form of discord.
Lastly, one should consider unintended consequences:
Could reparations lead to even greater racist sentiments, creating more
divisions in society, and incur unhelpful anger on the side of the white
community? Furthermore, in
this same line of thought, whose to say that reparations is what the
average black man and woman desires? Could it be just the agenda of black
leaders only? And if this is
true, can the goal of reparations really be brought about if the black
community is not willing to take advantage of their newly achieved
freedom? These seven points of contention speak to the inadequacy of
and disparity between a perfect fit of analysis and subject of analysis.
Despite these multiple critiques, however, quite apparently there is a connection between reparations for slavery and process theological considerations of theodicy and oppression. The question now is to ask if that connection is sufficient enough to side with reparations for slavery. The following two sections will proffer an answer using the theological concepts of guilt and redemption while taking into consideration the discussion and points of critiques developed in Section I.
Guilt is one of the greatest issues at play in the
debate over reparations for slavery and is a strong force on both sides of
the argument. Those in favor of reparations proclaim that the United
States, and essentially the descendents of slave owners, should feel
guilty for the years of kidnapping, bondage, and oppression they forced
upon the slaves. To make amends for these acts, the proponents of
reparations believe reparations of some monetary sort should be paid to
African-Americans today. Those who oppose reparations recognize the guilt
in the same way that their opponents do but believe, among other things,
that reparations is an attempt to absolve the guilt. Reparations might do
more harm than good in terms of helping African-Americans and improving
race relations, because it would likely put an end to building the bridges
burned by slavery.
The case for reparations put forth by Alexader Pires
at the recent Great Debate on the campus of Boston University is largely
built upon the obligation America has to the African-American community.
Pires, who recently won a lawsuit against the United States for $1 million
due to black farmers in the South, is collaborating with other noted
attorneys such as Johnny Cochran to file a formal lawsuit against the
government for reparations for slavery. This relies on several issues,
including precedents such as reparations dealt to victims of
Japanese-American internment camps and the Holocaust.
Also at play are issues regarding the obligation
Pires believes the government has to black Americans for building the
American economic system into what he calls the most powerful economic
structure in the history of the world. Since the slave-driven antebellum
cotton industry in the South was the most successful industry in the world
at the time, reparations proponents believe something is owed to those who
built that industry and the powerful economy that followed.
Reparations are also called for by the empirical data
that shows a strong link between slavery and the current socio-economic
status of African-Americans. By virtue of a poor post-war effort to
assimilate the former slaves into society, far too many blacks live in bad
neighborhoods, work jobs that do not pay a living wage, are undereducated,
or are incarcerated. These
statistics point to a strong link to slavery and call for reparations to
help get these people on something closer to equal footing with others in
Christopher Hitchens also argued in favor of
reparations at the Great Debate but from a realist’s perspective.
Hitchens, a noted writer and editor, argued that reparations is not an
ideal circumstance but the best recourse available today to help to
resolve the present-day problems that linger from slavery. Reparations
does not solve all the problems, according to Hitchens, but he says one
should not make “the best the enemy of the good.” By this, Hitchens is
saying that the one should not put down reparations because it is not the
best possible solution to the dilemma at hand. The best ways to solve the
problem are not attainable because we do not live in an ideal world, so
one should not expect ideal solutions. The imperfection of reparations is
not a suitable reason to discount it, or in other words, “don’t make
the best the enemy of the good.”
The arguments against reparations are plenty and one
does not have to look far to find someone who disagrees with paying them.
About one year ago, David Horowitz bought advertising space in many
college newspapers including the Daily
Free Press at Boston University for his article “Ten Reasons Why
Reparations for Slavery is a Bad Idea and Racist Too.” The ad caused
hysteria and disruption in nearly every locale that the article was seen,
including Boston University where the ad was pulled after one appearance.
Many papers banned the ad, causing an uproar regarding the rights of free
speech, while many students protested against Horowitz’s advertisement
and ideology. In his article, Horowitz describes ten ways in which
reparations is either ineffective, unnecessary, racist, or foolish. Many
of Horowitz’s arguments are important points in the debate over
reparations and are at the heart of the dilemma, while others arguments
seem venomous, heartless, and even inaccurate. To better understand the
argument against reparations, it is important to take a closer look at
Horowitz’s article but also to keep in mind that Horowitz surely does
not speak for all those opposed to reparations for slavery.
Horowitz’s first argument against reparations is
that there is not one group solely responsible for slavery in America. He
claims that Africans and Arabs should be indicted alongside white slave
owners and claims that 3,000 blacks owned slaves and questions whether
their descendents should be paid reparations. This argument is both
logical and helpful, because it brings in to question who is owed
reparations and the complications in making such a determination.
Next Horowitz argues that black Americans have
prospered economically by living in the United States and are better off
economically than they would have been in their forefathers’ native
lands. This claim is off base, because the fact that the black community
has in some ways been able to compete in society does not offset the other
statistics that suggest something different.
Thirdly, Horowitz argues that it is unfair to ask
descendants of non-slaveholders to pay reparations because their ancestors
were not the oppressors and, in some cases, gave their lives to free the
slaves. This is certainly a strong point against reparations, because one
is asking the descendants of those who freed the slaves to pay reparations
for the oppression. Furthermore, Horowitz next points out that many
Americans are descendants of immigrants who weren’t even in the United
States at the time of slavery and should not be asked to pay reparations.
In this light, reparations for slavery might be on the right track but is
asking some people to pay for a crime that their ancestors didn’t even
Horowitz’s fifth point recognizes that those in
favor of reparations are making judgments based on race rather than on
injury. Many blacks, Horowitz claims, are not descendents of slaves and
some are even descendants of slaves, so it would be irresponsible to pay
reparations to these people. Moreover, Horowitz points out that this case
would set a precedent in that never before have reparations been paid to
anyone other than the victims or their direct descendants, such as in the
cases regarding the Japanese-American internment camps and the Holocaust.
While that is an interesting part of the reparations story, it doesn’t
affect whether reparations should be paid; it simply means that this case
would set a precedent. Perhaps this case could even set a precedent for
crimes the United States committed against the Native Americans when the
country was being formed.
Next, Horowitz writes that it is unfair to give
reparations because descendants of slaves do not suffer economically from
slavery. In this portion of his argument, Horowitz argues that blacks have
had an opportunity to be successful economically since slavery and many
have achieved economic success. Those who have not, Horowitz writes, are
victims of their own failures rather than the failure of the American
system and are not due reparations. Horowitz, however, is unfairly holding
the majority up to the standard of the minority. While it is true that
many blacks have been successful in society, too many statistics point to
the fact that their descending from slavery has had an adverse effect on
the standing of blacks in society today.
Horowitz’s seventh argument states that reparations
is another attempt to turn blacks into victims rather than to hold them
responsible for their state in today’s society. Reparations, then, is a
way for the government to help people who can’t help themselves. Once
again, Horowitz is holding the black majority to the standard of the
successful black minority and overlooking too many other factors. While
Horowitz has a point that reparations might make blacks into victims, he
fails to notice that the entire point of reparations is that blacks are
victims and are due compensation not only for their work as slaves but
also for the poor way in which the American government helped them
assimilate into society.
Next, Horowitz claims that reparations have already
been paid through the Civil Rights Act of 1965 and welfare benefits.
Horowitz does not recognize, however, that the giving of civil rights to
the descendants of slaves is completely different than paying reparations.
Recognizing the blacks’ civil rights helped bring the African-American
community into the fold but did not right the wrongs of centuries of
slavery in the past. Horowitz also fails to realize that welfare benefits
do not go only to blacks but to all who qualify for them and are not
adequate restitution for the slaves’ oppression nor does it account for
the wages the slaves lost by working without pay.
Finally, Horowitz closes his argument with two
shortsighted and heartless points about the state of African-Americans in
today’s society. First, Horowitz claims that African-Americans owe a
debt of gratitude for being brought to America and for the whites who
spearheaded the abolitionist movement to free blacks from slavery.
Secondly, Horowitz writes that reparations places African-Americans
against the nation that gave them freedom and that they should be more
appreciative of being part of such a prosperous nation. In these two
points, Horowitz becomes the supreme judge as to what is good and evil and
that blacks are better off in America than in their homelands. Horowitz
does not consider that economic power might not be an appropriate measure
of whether one should be happy in his or her country. Also, Horowitz
believes that blacks should be grateful to live in the United States
rather than upset that they were raped of their free will to choose where
to live their lives.
One point Horowitz misses in this debate is the
effect reparations would likely play on race relations today. Since the
lines are drawn fairly clearly as far as who is in favor of reparations
and who is opposed -- and often in heated fashion with such a
controversial issue -- it is likely that reparations would perpetuate
racial division in American society. Whites who did not want to pay
reparations, for instance, would likely resent blacks for taking money
that they did not deserve. Blacks also might be indicted in this process
because it might bring to the surface new feelings of resentment in the
black community toward whites for slavery. Moreover, many whites would
likely feel no further need to help blacks to get a foot up in society if
reparations were paid. Reparations, then, is not a starting point for
reconciling this issue but a distinct end in which whites feel there is
not further need to help blacks.
Guilt plays a major role in the issue of paying
reparations for slavery. Advocates of reparations play on the guilt of the
descendants of slave owners and the American government by asking them to
own up to their responsibility. Opponents of reparations, such as David
Horowitz, do not feel guilty for the state of blacks in today’s society
and place the blame on their own failure to realize opportunities for
Karl Rahner addresses the issue of guilt and sin in
his systematic theology The Content
of Faith, which is particularly relevant to the issue of reparations
for slavery. Rahner believes that sin is not only a part of the past but
recognizes that the present and the future are built upon that past.
“ . . . sin is not a contingent act which I
performed in the past and whose effect is no longer with me,” writes
Rahner. “It is certainly not like breaking a window which falls into a
thousand pieces, but afterward I remained personally unaffected by it. Sin
determines the human being in a definite way: he has not only sinned, but
he himself is a sinner. He is a sinner not only by a formal, juridical
imputation of a former act, but also in an existential way, so that in
looking back on our past actions we always find ourselves to be
This understanding of sin, and guilt regarding past
sin, should make one cautious to pay reparations for slavery. If
reparations would indeed become an end to the white community’s
willingness to help the black community, it seems that reparations would
become a way of a people trying to wipe the slate clean of their past
actions. The government might then believe that it no longer has an
obligation to help blacks succeed in American society, because they have
paid them reparations; no longer does the government have to take
responsibility for its past sin, since reparations have already been paid
and wiped the slate clean. This is one of the greatest reasons that
reparations could be a very unhelpful choice for American society.
According to Rahner, true guilt is only understood
through God’s revelation and grace. Rahner would likely say then that
the guilt that Alexander Pires is trying to get the United States
government to admit to can only come through God’s grace.
“ . . . it remains true that the real knowledge of
guilt, that is, the sorrowful admission of sin, is the product of God’s
revelation and grace. Grace is already at work in us when we admit guilt
as our own reality, or at least admit the possibility of guilt in our own
lives . . . On the other hand, a purely natural knowledge of guilt -- one
that is completely independent of grace (if this is philosophically
possible) -- would be suppressed if God’s grace and the light of
revelation were not there to help us.”
Rahner offers another helpful understanding of guilt
in which the person refuses to admit to his or her guilt and instead
represses it. Repression, of course, only exacerbates the problem.
“By this basically false type of arguing that we
use in trying to excuse ourselves before God, our conscience, our life,
and the world, we manifest nor out innocence, but only the way in which
the unenlightened person, as yet untouched by the grace of God, considers
his own guilt, that is, he will not admit it. He prefers to repress it.”
If one considers this in terms of the debate over
reparations for slavery, the government is only exacerbating the
present-day lingering effects of slavery by not reacting to it. Instead of
paying reparations for slavery, the government and those opposing
reparations like David Horowitz are only making the problem worse by not
admitting their guilt and facing their responsibility.
The debate over reparations for slavery is a difficult and controversial one with many theological implications. Advocates of reparations point to a strong link between slavery and the current state of the African-American socio-economic class and a need for the government to own up to its responsibility regarding slavery. The opponents should not, however, all be classified into one group, since the camp that David Horowitz represents opposes reparations on often venomous, frivolous claims. While recognizing the good that reparations could do, one must also acknowledge the problems that such a occurrence would be sure to instigate. Increased racial tension, the resurfacing of the guilt regarding slavery, and the chance that reparations could put an end to other types of support given the African-American community, reparations for slavery is not worth the trouble it would cause. Without regard to the stress it would put on the American economy, reparations are simply worth the trouble. Social programs for the whole American public that target certain aspects of society known to be of concern to African-Americans would be a step in the right direction.
The question of reparations for slavery demands the resolution of a host of philosophical and theological issues. What is the nature of sin? What is an individual? What is the meaning of history, and what impact does it have on the present and the future? What are the limits of an individual’s responsibility in relation to their culture? What is the relationship between justice and freedom, redemption and forgiveness?
Though it is obviously impossible to resolve these issues through this discussion, some definitions must be attempted. The most basic question arises from an apparent absurdity in the proposition of reparations. Why should anyone today benefit from the suffering of their ancestors, and why should anyone be compelled to compensate for past wrongs? The fact is that if the reparations are intended as a redress for American slavery than neither American slavery nor any of its perpetrators or victims exists today. Thus the question is raised as to the nature of an individual and that person’s relationship to history. Are human beings fundamentally independent units of value, meaning, and purpose, relating only incidentally to each other; is community an abstraction from individual goals and needs; is existence an act of individual reason or will rather than a gift, and is individual life an entity which is primarily responsible only to itself? If this classically liberal definition of the individual is accepted, than the argument for reparations is moot. No individuals exist who are responsible for slavery, and there is no possible object of the reparations.
The question arises then, is this a valid definition of an individual? From the perspective of Christian orthodoxy, several critiques can be made. The bible has bequeathed to humanity a vision of human beings both created and free, both receiving the conditions of existence and in turn transforming them. From the classical liberal perspective, the role of God for the individual is limited to the creation of the conditions of existence by fiat; individuals can struggle against these conditions (hence the Protestant struggle with authority), attempt to reject them (the heroic-existentialist tradition), or passively accept them through a gesture of obedience and surrender. The kind of freedom envisioned by the bible as existing by virtue of God’s creatorship, a freedom which emerges from God’s inner being and remains rooted in it, is impossible from the classical liberal perspective.
What are the responsibilities of an individual from the orthodox perspective? The context of creation vastly widens the scope of human possibility and responsibility. God as the creator, endowing humans with the freedom of creaturely relationality, suggests the possibility of a meaning for human life beyond the leveling of universal laws of nature. This meaning is the meaning of relationship; God is the thing (or the equality of thingness) that every other thing has in common. This awareness of a grand intention binds the universe together, and reveals itself to human beings as the gift of history.
As specifically created beings, humans receive the conditions not just of universal existence but of a particular place and time. Each person exists not just in general but in particular, in a precise moment in history. This means that each individual is constantly receiving the present as an effect of the past. This insight is what has traditionally been called by Christians the communion of saints. Every person receives the past into his or her experience of the present; for Christians, this past is blessed, hallowed, and filled with grace by the completed lives of the ancestors who in turn received it from their own historical past. Each past moment has been forgiven and redeemed by the God who is revealed in history; therefore, each past moment is a bearer of grace and meaning for the present. In biblical narrative, the continuity between generations is organic. The cycle of Abraham contains within itself all of the patterns of Israelite history: ethnic conflicts, stupendous acts of faith, dialogues with divinity, struggles with election, and inter-family wars are all prefigured in the life of the one ancestor.
The letter to the Hebrews eloquently witnesses to this merging of the historical and the personal: “We might even say that Levi, who collects the tenth, paid the tenth through Abraham, because when Melchizedek met Abraham, Levi was still in the body of his ancestor” (Hebrews 7:9-10, NIV). Further, Hebrews interprets the past not only as embodied in the present, but also as a wellspring of comfort and encouragement:
And what more shall I say? I do not have time to tell about Gideon, Barak, Samson, Jepthah, David, Samuel and the prophets, who through faith conquered kingdoms, administered justice, and gained what was promised; who shut the mouths of lions, quenched the fury of the flames, and escaped the edge of the sword; whose weakness was turned to strength; and who became powerful in battle and routed foreign armies; these were all commended for their faith, yet none of them received what had been promised. God had planned something better for us so that only together with us would they be made perfect” (Hebrews 11:32-39, NIV).
The something better alluded to here is, for the author of Hebrews, the present moment; the gift of the ancestors’ redemption of the past on behalf of the present culminates in the Incarnation, when past, present, and future are united and eternally redeemed in the person of Christ.
How does this affect the question of reparations for slavery? Because humans are in our depths created, historical beings, to the extent that the conditions of our existence are determined by the past, we bear a deep responsibility for the past as it is revealed in each aspect of present existence. Completed actions which refused the grace of God, contributed to injustice, and denied the relational nature of human life continue to impact the present in profoundly destructive ways.
The institution of slavery, a monstrous action only completed at the cost of tremendous suffering, has exerted an enormous impact on the present. Randall Robinson has memorably described this suffering and its continuing effects:
Robinson’s argument for reparations rests on the notion that the evil passed on to the present by slavery is so enormous that no length of time will ever cause it to dissipate; instead, its effects will continue to be received by future generations, growing worse rather than better with time. An equally powerful good, Robinson argues, must be generated in order to counter the evil .
What might be the nature of this good action, bequeathed to the future by means of the present? Redemptive action can take two forms: symbolic and practical. To perform a symbolic act of redemption is to restore by means of reinterpretation, to demonstrate the hidden relationships between actions, to acknowledge the falsehood of past interpretations, and to ask for forgiveness, whether on behalf of one’s own actions or those completed actions to which we remain responsible.
Symbolic actions are an active transformation of present reality. Symbolic redemption can be expressed artistically, liturgically, or politically, it can be both public and private, and it can involve individuals or institutions. Robinson argues that symbolic redemption is the first step that individuals ought to take in response to slavery:
One argument against reparations is that any such reparations would necessarily be a one-time event, by which presumably the complicit present could wash its hands of its historical past and forever absolve itself of blame. However, the kind of symbolic redemption advocated by Robinson is not a payoff but a transformation, with effects necessarily flowing forward into the future. The transformation would be first personal, as individuals repent of their prejudices, commit their resources towards the cause of justice, and work actively towards there-establishment of truly relational identities, and secondly institutional, as governments, businesses, and churches all strive to repair past injustices and ongoing institutional biases. All of this could happen as part of a deliberate and public acknowledgement by institutions of their role in both the past and present effects of slavery, taking the form of a request for forgiveness and a pledge of restitution.
With this confession in place, it would not be out of place for governments to call businesses to account for profits gained at the expense of slaves, to commit financial resources towards redeeming those individuals and communities who continue to be affected by slavery, and to seek to dismantle all institutions which continue to perpetuate the effects of slavery.
The concept of a war on poverty is not new, but the understanding of racial poverty as both arising from within a historical context and potentially redeemed by that context provides an interpretive resource which is often lacking from programs of institutional reform. What is the responsibility of an individual affected by racial poverty? Do their circumstances absolve them of the responsibility and the dignity that comes with being truly free? Or are they wholly responsible for their conditions and for every negative consequence which results from them? Individuals affected by racial poverty are not limited by their historical circumstances, but they are conditioned by them; the conditions of their existence arise out of those circumstances and thus the consequences of their actions can never be understood apart from them. Like every other created being, they are both free and bound, determined by history and yet finding freedom in the midst of that determination. Thus, the response to the problem of racial poverty must account for both of these realities, engaging the individual as a free being and yet always discerning the continuing effects of the past as a present reality.
To take such a course of action is to actively participate in the sacrament of history as God’s self-revelation. As free beings continually caught up between the redemption of the past and the hope for the future, we are God’s vehicles for transformation, both placed within history and bearing history into the future. This is a task for which God has amply equipped us, filling us with grace through the redemptive love of Christ, the unifier of all things past, present, and future. From this perspective, making reparations for slavery is not a case of overcoming a special evil but rather part of an ongoing responsibility both to the past - the ancestors from whom we come - and to the future, the generations who will rely on us for grace and for the hope of glory.
National Public Radio broadcast
Churches Reactions to the Call for
Reparations for Slavery
The Case for Reparations for Slavery with
Ten Reasons Why Reparations is a Bad Idea
. . . and Racist Too
CBS News Article on Alexander Pires' Call
for Reparations for Slavery
Support for David Horowitz's Arguments
Against Reparations for Slavery
Reparations for Slavery Discussion Board
Arguments Against the Reparations for
Pires, Alexander. The Great Debate. Tsai Performance Center,
Boston. November 7, 2001.
Pires, Alexander. The Great Debate. Tsai Performance Center,
Boston. November 7, 2001.
Hitchens, Christopher. The Great Debate. Tsai Performance
Center, Boston. November 7, 2001.
Rahner, Karl. The Content of Faith. New York: Crossroad. 1999,
Rahner, Karl. The Content of Faith. New York: Crossroad. 1999, | http://people.bu.edu/wwildman/WeirdWildWeb/courses/theo1/projects/2001_coophenkphillips/index.htm | 13 |
35 | Reasoning is the use of information to arrive at a conclusion. The challenge
of reasoning is applying logical judgment to arrive at an appropriate
conclusion. For example, what is enough evidence? When is a comparison or analogy
appropriately applied and when is it a false analogy?
The outcomes of the reasoning process (the process of critically thinking about, evaluating, and logically considering information) are conclusions. Sound,
effective reasoning results in a supported, accurate, appropriate conclusion,
and fallacious reasoning results in an error.
It is up to YOU to distinguish sound and appropriate reasoning from fallacious reasoning.
Sound, Effective, Appropriate Reasoning:
Strong, effective reasoning uses one or more of the following approaches.
- EXPERIMENT OR DIRECT DEMONSTRATION: Direct demonstration is one of the most conclusive methods of proof. However, while a single demonstration may support a conclusion, one demonstration by itself does not prove a general conclusion. Sound reasoning involves the application of supporting evidence which is representative of all the cases. Failure to meet an appropriate standard of supporting evidence can result in the fallacy of hasty or over generalization.
- STATISTICS, INSTANCES, AND EXAMPLES: Using figures, data, and statistics relevant to a situation is another method of supporting a conclusion. Such statistics must, however, apply in all important respects to the statement we are supporting; otherwise they are pointless and misleading. Reading, interpreting, and evaluating statistical information can sometimes be difficult. It is also possible to use representative, though not exceptional, examples to illustrate or support a conclusion. On the other hand, unreliable evidence may lead to fallacious reasoning.
- COMPARISON OR ANALOGY: Comparing two or more things or ideas which are alike in most respects is one method of inferring support for a conclusion. If they are similar in several important ways, one might assume that they are similar in other ways. This is reasoning by comparison or analogy; however, a false analogy may arise when the compared items are similar in some ways but very different in others, and an inappropriate extension of an analogy may lead to unwarranted conclusions.
- INFERENCE: Often one assumes something to be true but finds it difficult to give clearly demonstrable proof for the conclusion. In such cases one may show the conclusion to be in all probability true by inference from definite facts and concrete details called circumstantial evidence. Inference or circumstantial evidence is perhaps the weakest method of attempting proof because it is not conclusive. Such evidence merely increases the probability of truth as we are able to increase the number of attendant circumstances. So, although it is sometimes necessary and acceptable to use inference, it is important to recognize that a conclusion based on inference may not be as strong as one based on direct evidence.
- ACCEPTING THE STATEMENT OF AUTHORITY: The opinion, conclusion, or statement of a specialist or expert can support or verify a conclusion. Such support must come from an acknowledged authority in the field of thought from which the conclusion is drawn. While the opinion of Michael Jordan should be considered as strong support for a conclusion about basketball, his ideas about medical treatment of the aging process are not. Remember, the expert's opinion is respected by us because the expert is recognized by others as an authority. Beware of the self-proclaimed expert, the person who claims authority for themselves.
- CAUSE AND EFFECT: One of the most logical methods of proving a statement true is to reason from one or more known facts or circumstances to their probable future effect.
- Building a logical argument from the specific details to the general conclusion is called the inductive method of reasoning. For example, the temperature outside is below freezing, and there is water in the bird bath; therefore that water will soon freeze to ice. The inductive method of reasoning would allow one to conclude that the water in the birdbath will freeze. This is logical given the conditions; however, it is a conclusion based on the logic, not on an observation. The complement, moving from a known or accepted conclusion (the general) out to find what specifics must also be true, is labeled the deductive method of reasoning. In this case we might see the ice in the birdbath. Thus the conclusion is an observed and accepted fact. The deductive logical method would then allow one to conclude that the outside temperature must be (or have been) below freezing. Positing a cause and effect relationship requires three logical conditions:
- The cause must occur before the effect. This is most evident, but often ignored in some social science research and in everyday arguments.
- The cause must be statistically connected to the effect. This concordance of cause and effect may be probabilistic; that is, it does not always need to occur. For example, the percentage of falls leading to broken bones being higher among older people does not mean that every old person who falls ends up with a broken bone.
- There must be no alternative or spurious causes. This is the hardest condition to meet; in actuality, it cannot be met completely. However, finding that an observed statistical connection between two variables is spurious (both sharing a relationship to a third, unanalyzed variable rather than being causally related to each other) is frequently the focus of ongoing scientific debate and of the scientific method. A sketch of such a spurious correlation follows this list.
- There are instances where one may mistake the cause.
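To make the third condition concrete, here is a minimal sketch of how a spurious connection can look in practice. It is written in Python using only the standard library; the scenario (ice-cream sales and drowning incidents both driven by temperature) and all the numbers are hypothetical illustrations, not data from any study.

import random

random.seed(0)

# Hypothetical illustration: ice-cream sales and drowning incidents are both
# driven by temperature (a common third variable), not by each other.
temperature = [random.gauss(20, 8) for _ in range(10_000)]
ice_cream = [t + random.gauss(0, 2) for t in temperature]   # depends only on temperature
drownings = [t + random.gauss(0, 2) for t in temperature]   # depends only on temperature

def correlation(xs, ys):
    # Pearson correlation coefficient, computed from scratch.
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# A strong positive correlation appears, yet neither variable causes the other.
print(correlation(ice_cream, drownings))

Controlling for the third variable (for example, comparing the two quantities only among days with nearly the same temperature) would make the apparent connection vanish, which is exactly the kind of test that debates over spurious causes turn on.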
The most common types of errors made in reasoning fall into one of the following general mistakes.
- HASTY or OVER GENERALIZATION: A conclusion drawn from an insufficient number of facts, instances, examples, or statistics results in an error of reasoning. Evidence supporting a conclusion needs to be broadly based and generally representative of the total defined population. So, in the case of much research using "college students," the conclusions were drawn from samples of male college freshmen, but the conclusions were presented as representative of adults (both male and female, college educated and not, etc.). When the evidence is sufficient and representative of the larger situation, then sound reasoning has occurred.
- UNRELIABLE EVIDENCE: Testimonial evidence of an authority is trustworthy only when the person who gives the evidence has full and accurate knowledge and is unbiased and honest. If a milk-distributor advertises, "Ours is the best milk sold in town," or a politician says, "Vote for my friend Mayor Bains; he's a capable administrator," the statement of neither is considered highly reliable; for each is biased. Although evidence is sometimes unreliable, systematic and representative instances and statistics are a solid foundation for sound reasoning.
- FALSE ANALOGY: If we say, "We traveled the 1721 miles from our home in Nebraska to Jacksonville, Florida, in forty-three hours of driving time; therefore we shall cover the 1635 miles from our home to Seattle, Washington, in forty-one hours," we are drawing a false analogy. We have not taken into consideration the fact that much of the driving to Seattle must be done over mountainous terrain, against prevailing westerly winds, and for a relatively longer period each day against the glare of the sun than when one is traveling southeastward. Likewise a prospective purchaser is reasoning from a false analogy when he says, "Fifteen hundred dollars is too much for this new car; I bought one of the same make and body style for nine hundred and fifty dollars in 1939." An analogy, then, is false if the things compared differ in even one important respect or if the comparison is based upon conditions that have changed. There are times when an appropriate analogy may be helpful in arriving at and supporting sound reasoning.
- INVALID EXTENSION OF AN ANALOGY: This happens when one assumes that some analogy holds true in all aspects. For example, "Learning mathematics is like learning to ride a bicycle. Once you've mastered it you don't lose it." Even if the conclusion about riding a bicycle were true, it may not have anything to do with how we learn mathematics. The false argument by analogy is very prevalent in political and social argument where a model that explains macro-economic systems is applied to explain micro-personal behavior.
- MISTAKING THE CAUSE: Assigning a wrong cause is a frequent error in reasoning back from a known result or effect to a probable cause. Thus, a student mistakes the cause (and incidentally indulges in rationalizing) when he says, "I'm going out to the movies every night before a term examination hereafter. I attended a late movie before my big science test, and I got a higher grade on it than on any of the tests the following day." Common superstitions often arise from mistaking the cause. "Beans planted when the east wind blows will never sprout," says a superstitious farmer. Sometimes the beans do not sprout, but the east wind did not kill them. It may, however, have brought on a prolonged period of cold, rainy weather that made the seeds rot in the ground. The scientific method is based on a constant dialectic between positing appropriate cause and effect and mistaking the cause.
- BEGGING THE QUESTION: We beg the question when we assume without proof that something is true or false. Suppose a man said, "This unfair system of estimating taxes should be abolished!" He would be begging the question, for he assumes without proof that the system is unfair. Properly, he should first state the claim that the system of taxation is unfair, then prove that it is unfair, and finally show that because of its unfairness it should be abolished. We also beg the question when we reason in a circle, thus: "Installment buying should be prohibited because it is economically unwise; we know that it is economically unwise because it should be prohibited."
- IGNORING THE QUESTION: We ignore the question when we do not meet the real issue or issues of an argument, coming to an irrelevant conclusion or arriving at a conclusion by illogical reasoning. We appeal to the prejudice, selfish desires, or other emotions of our hearers or readers; we try to overawe our opponent by quoting from authorities that are not pertinent or by bluster and impressive manner; or we invoke outworn tradition or custom. | http://www.documentingexcellence.com/stat_tool/reasoning.htm | 13
30 | Philosophy 103: Introduction to Logic
The Structure of Arguments
Abstract: The concept of an
argument is discussed together with the related concepts of premiss, conclusion,
inference, entailment, proposition, and statement.
I. We have seen that one main branch of philosophy is epistemology and one main branch of
epistemology is logic.
A. What is epistemology?
B. What is logic? Simply put, the purpose of
logic is to sort out the good arguments from the poor ones.
II. So the chief concern of logic is the structure of an argument.
A. Every argument in logic has a structure, and
every argument can be described in terms of this structure.
1. Argument: any group of propositions of
which one is claimed to follow logically from the others.
a. The everyday sense of "argument," such as my neighbor yelling at me
about my trashcans, is not what is termed "an argument" in logic.
b. By "argument," we mean a
demonstration or a proof of some statement, not emotional language. E.g.,
"That bird is a crow; therefore, it's black."
2. The central parts of an argument include ...
a. Premiss: (more usually spelled
"premise") a proposition which gives reasons, grounds, or evidence for accepting
some other proposition, called the conclusion.
b. Conclusion: a proposition which is
purported to be established on the basis of other propositions.
B. Consider the following example of an argument
paraphrased from an argument given by Fritz Perls in In and Out of the Garbage Pail.
If we set our ideals too high, then we will not meet those ideals.
If we do not meet those ideals, then we are less than we could be.
If we are less than we could be, then we feel inferior.
--------------------------------------------------------
If we set ideals too high, then we feel inferior.
1. By convention, the reasons or premisses are
set above a line that separates the premisses from the conclusion. The line is sometimes
thought of as symbolizing the word "therefore" in ordinary language.
2. As you read the passage and come to understand
it, you are undergoing a psychological process called "making the inference."
a. An inference is the reasoning process
by which a logical relation is understood.
b. The logical relation is considered valid
(good) or not valid (not good) even if we do not understand the inference right away. In
other words, it is convenient to consider the logical relation as not being dependent for
its validity on the psychological process of an inference.
c. In this manner, logic is not considered as
"the science of reasoning." It is prescriptive, as discussed in a previous lecture.
3. So, this logical relation between the
premisses and conclusion of Perls's argument holds regardless of whether we pay attention to it or not.
a. Using the bold letters, we can symbolize his
argument as follows:
H → N
N → L
L → I
--------
H → I
b. This kind of logical relation is called an entailment.
An entailment is a logical relation between or among propositions such that the
truth of one proposition is determined by the truth of another proposition or other
propositions, and this determination is a function solely of the meaning and syntax of the propositions involved.
c. Another way to remember the difference between an
inference and an entailment is to note that people infer something, and
propositions entail something.
d. The argument structure is the sum and substance of logic.
All that remains in this course is to sketch out a bit of what this means. (Note that
Perls's argument has a good structure, so if the conclusion is false, one of the premisses
has to be false.) A brute-force check of this entailment is sketched just below.
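As a small illustration that the entailment holds independently of anyone's act of inference, the following sketch checks every possible assignment of truth-values to the four propositions. It is written in Python; the variable names simply mirror the H, N, L, I symbolization above and are not part of the original notes.

from itertools import product

def implies(p, q):
    # Material conditional: "if p then q" is false only when p is true and q is false.
    return (not p) or q

# H: ideals set too high, N: ideals not met, L: less than we could be, I: feel inferior.
counterexamples = [
    (H, N, L, I)
    for H, N, L, I in product([True, False], repeat=4)
    if implies(H, N) and implies(N, L) and implies(L, I) and not implies(H, I)
]

# An empty list means the premisses entail the conclusion: no assignment of
# truth-values makes all three premisses true while the conclusion is false.
print(counterexamples)   # -> []

This is only a brute-force truth-table check of the propositional form; it says nothing about whether the premisses themselves are true, which is just the distinction between a valid structure and true premisses.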
III. We have spoken earlier of the relation between or among
propositions. What is a proposition or statement (we will use these words interchangeably)?
A. Statement: a verbal expression that can
be regarded as true or false (but not both). Hence, a statement or a proposition is a
sentence with a truth-value. We can still regard a sentence as a statement even if the
truth-value of the statement is not known.
B. Hence logic is just concerned with those
statements that have truth-values. (There is very much of life that is irrelevant to logic.)
Consider the confusion that would result if we
considered the following sentences as statements:
1. "Good morning." (What's so good
2. "You are looking good today." (Well,
I just saw my doctor and ...)
3. "What is so rare as a day in June? Then,
if ever, come perfect days..." (Well, I don't know about that.)
4. To a waiter: "I'd like a cup of
coffee." (Yeah, but I think bigger, I'd like a BMW.)
Thus, phatic communication, greetings, commands,
requests, and poetry, among other uses of language, are not meant to be taken as statements.
C. Which of the following sentences are statements?
1. There is iron ore on the other side of Pluto.
2. Tomorrow, it will rain.
3. Open the door, please.
4. Whales are reptiles.
5. "Yond' Cassius has a lean and hungry look."
6. Pegasus has wings.
7. You should vote in all important elections.
IV. More distinctions with regard to statements are worth noting.
A. Consider whether there are two statements in the box below:
A Republican is President (of the U.S.).
A Republican is President (of the U.S.).
1. Aside from the ambiguity of when the
statements are uttered, of which President is being spoken, and so on, we would say that
there is one statement and two sentences in the box. Sometimes logicians make a
distinction between a sentence token (the ink, chalk marks, or pixels) and a sentence type
(the meaning of the marks).
2. Every statement comes with an implicit time,
place, and reference.
B. The following summary of the distinction between a sentence
and a statement assumes that adequate synonymy of expression and translation between
languages is possible.
1. One statement can be expressed by two
different sentences. For example, the sentence "The cup is half-empty."
expresses the same statement as "The cup is half-full."
2. A sentence can express different statements at
different times. For example, the sentence "A Democrat is the U.S. President" as
expressed in 1962 and 2002 is two different statements.
3. A statement is independent of the language in
which it is asserted, but a sentence is not. For example, the sentences "Das ist aber
viel!" and "But that is a lot" express the same statement, ceteris paribus.
4. A sentence can express an argument composed of several statements.
For example, the sentence "The graphical method of solving a
system of equations is an approximation, since reading the point of intersection depends
on the accuracy with which the lines are drawn and on the ability to interpret the
coordinates of the point" can be interpreted as two or three different
statements depending on how we wish to analyze it. | http://philosophy.lander.edu/logic/structure.html | 13 |
15 | Agner Fog: Cultural selection © 1999
2. The history of cultural selection theory
Lamarck and Darwin
The idea of cultural selection first arose in victorian England - a culture that had more success in the process of cultural selection than any other society. But before we talk about this theory we must take a look at the theory of biological evolution, founded by Lamarck and Darwin.
The french biologist Jean-Baptiste de Lamarck was the first to talk about the evolution of species. He believed that an animal, which has acquired a beneficial trait or ability by learning, is able to transmit this acquired trait to its offspring (Lamarck 1809). The idea that acquired traits can be inherited is called lamarckism after him. Half a century later the english biologist Charles Darwin published the famous book "On the origin of Species" in which he rejected Lamarck's hypothesis and put forward the theory that the evolution of species happens by a combination of variation, selection, and reproduction.
It was a big problem for the evolutionary thinkers of that time that they did not know the laws of inheritance. Indeed the austrian monk Gregor Mendel at around the same time was carrying out a series of experiments, which led him to those laws of inheritance that today carry his name and constitute the foundation of modern genetics, but the important works of Mendel did not become generally known until the beginning of the twentieth century, and were thus unknown to nineteenth century british philosophers. They knew nothing about genes or mutations, and consequently Darwin was unable to explain where the random variations came from. As a consequence of the criticism against his theory Darwin had to revise his Origin of Species and assume that acquired traits can be inherited, and that this was the basis of the variation that was necessary for natural selection to be possible (Darwin 1869, 1871). In 1875 the german biologist August Weismann published a series of experiments that disproved the theory that acquired traits can be inherited. His book, which was translated into english in 1880-82, caused lamarckism to lose many of its adherents.
Although Darwin had evaded the question of the descent of man in his first book, it was fairly obvious that the principle of natural selection could apply to human evolution. At that time no distinction was drawn between race and culture, and hence the evolution from the savage condition to modern civilized society came to be described in Darwinian terms. The earliest example of such a description is an essay by the British economist Walter Bagehot in The Fortnightly in 1867. Bagehot imagined that the earliest humans were without any kind of organization, and he described how social organization might have originated:
"But when once polities were begun, there is no difficulty in explaining why they lasted. Whatever may be said against the principle of 'natural selection' in other departments, there is no doubt of its predominance in early human history. The strongest killed out the weakest, as they could. And I need not pause to prove that any form of polity is more efficient than none; that an aggregate of families owning even a slippery allegiance to a single head, would be sure to have the better of a set of families acknowledging no obedience to anyone, but scattering loose about the world and fighting where they stood. [...] What is there requisite is a single government - call it Church or State, as you like - regulating the whole of human life. [...] The object of such organizations is to create what may be called a cake of custom."
When we look at this citation with contemporary eyes, it seems like a clear example of cultural selection: The best organized groups vanquished the poorly organized groups. But in Bagehot's frame of reference the concept of cultural selection hardly had any meaning. As a consequence of lamarckism no distinction was drawn between social and organic inheritance. Nineteenth century thinkers believed that customs, habits, and beliefs would precipitate in the nervous tissue within a few generations and become part of our innate dispositions. As no distinction was drawn between race and culture, social evolution was regarded as racial evolution. Initially Bagehot regarded his model for human evolution as analogous with, but not identical to, Darwin's theory - not because of the difference between social and organic inheritance, but because of the difference between humans and animals. Bagehot did not appreciate that humans and animals have a common descent. He even discussed whether the different human races have each their own Adam and Eve (Bagehot 1869). He did, of course, revise his opinions in 1871 when Darwin published The Descent of Man.
Despite these complications, I do consider Bagehot important for the theory of cultural selection because he focuses on customs, habits, beliefs, political systems and other features which today are regarded as essential parts of culture, rather than physical traits which today we mainly attribute to organic inheritance. It is important for his theory that customs etc. can be transmitted not only from parent to child, but also from one family to another. When one people defeats another people in war and conquers their land, then the victors' art of war will also be transferred to or imitated by the defeated people, so that an ever stronger art of war will spread. Interestingly, unlike later philosophers, Bagehot does not regard this natural evolution as necessarily beneficial: it favors strength in war, but not necessarily other skills (Bagehot 1868).
The anthropologist Edward B. Tylor has had a significant influence on evolutionary thought and on the very concept of culture. The idea that modern civilized society has arisen by a gradual evolution from more primitive societies is primarily attributed to Tylor. The predominant view at that time was that savages and barbarian peoples had come into being by a degeneration of civilized societies. Tylor's books contain a comprehensive description of customs, techniques and beliefs in different cultures, and how these have changed. He discusses how similarities between cultures can be due to either diffusion or parallel independent evolution. Darwin's theory about natural selection is not explicitly mentioned, but he is no doubt inspired by Darwin, as is obvious from the following citation:
"History within its proper field, and ethnography over a wider range, combine to show that the institutions which can best hold their own in the world gradually supersede the less fit ones, and that this incessant conflict determines the general resultant course of culture." (Tylor 1871, vol. 1:68-69).
Tylor was close to describing the principle of cultural selection as early as 1865, i.e. before the abovementioned publications by Bagehot:
"On the other hand, though arts which flourish in times of great refinement or luxury, and complex processes which require a combination of skill or labour hard to get together and liable to be easily disarranged, may often degenerate, yet the more homely and useful the art, and the less difficult the conditions for its exercise, the less likely it is to disappear from the world, unless when superseded by some better device." (Tylor 1865:373).
While Darwin was dealing with the survival of the fittest, Tylor was more concerned with the survival of the unfit. The existence of outdated institutions and customs, which no longer had any usefulness, was Tylor's best proof that modern society had evolved from a more primitive condition. Tylor's attitude towards darwinism seems to have been rather ambivalent, since his only reference to Darwin is the following enigmatic statement in the preface to the second edition of his principal work Primitive Culture:
"It may have struck some readers as an omission, that in a work on civilization insisting so strenuously on a theory of development or evolution, mention should scarcely have been made or Mr. Darwin and Mr. Herbert Spencer, whose influence on the whole course of modern thought on such subjects should not be left without formal recognition. This absence of particular reference is accounted for by the present work, arranged on its own lines, coming scarcely into contact of detail with the previous works of these eminent philosophers." (Tylor 1873).
This ambiguity has led to disagreement among historians of ideas about Tylor's relationship to darwinism. Greta Jones (1980:20), for example, writes that Tylor dissociated himself from darwinism, whereas Opler (1965) goes to great lengths to demonstrate darwinian tendencies in Tylor's Primitive Culture, and even categorizes Tylor as cultural darwinist. This categorization is a considerable exaggeration since Tylor did not have any coherent theory of causation (Harris 1969, p. 212). A central issue has been whether nineteenth century evolutionary thinkers were racist or not, i.e. whether they attributed the supremacy of civilized peoples to organic inheritance or culture. This controversy is meaningless, however, because no clear distinction was drawn at that time between organic and social inheritance. Tylor used the word race synonymously with culture or tribe, as did most of his contemporaries.
As early as 1852, before the publication of Darwin's Origin of Species, the prominent english philosopher Herbert Spencer described the principle that the most fit individuals survive while the less fit die in the struggle for existence. This principle initially had only an inferior importance in Spencer's evolutionary philosophy, which was based on the idea that all kinds of evolutions follow the same fundamental principles. The Universe, the Earth, the species, the individuals, and society all evolve by the same pattern and in the same direction, according to Spencer, namely towards ever more differentiation and equilibrium. It was all part of one and the same process:
"... there are not several kinds of Evolution having certain traits in common, but one Evolution going on everywhere after the same manner." (Spencer, H. 1862).
In 1857, only two years before Darwin's book about the origin of species, Spencer described the cause of this evolution as "that ultimate mystery which must ever transcend human intelligence". (Spencer, H. 1857).
The evolution of societies is going through four stages, according to Spencer: Out of the unorganized savage condition came the first barbarian societies of nomads and herders. These have later been united into towns and nation states, called militant societies. The last stage in the evolution is called the industrial society, which will continue to evolve towards equilibrium, zero growth, peace and harmony.
Social evolution is primarily determined by external factors, such as climate, fertility of the soil, vegetation, fauna, and the basic characteristics of the humans themselves. Secondary factors include modifications imposed by the humans on their environment, themselves, and their society, as well as interaction with other societies. The main driving force in this evolution is population growth. The continued increase in population necessitates ever more effective food production methods, and hence an increasing degree of organization, division of labor, and technological progress.
War plays a significant role in the transition from the barbarian to the militant society. Any war or threat of war necessitates the formation of alliances and establishment of a strong central government. The militant society is therefore characterized by a strong monopoly of power to which the population must submit. The end result of a war is often the fusion of two societies into one bigger society, whereby the two cultures get mixed and the best aspects from each culture are preserved. This creation of bigger and bigger states makes possible the last step in Spencer's evolutionary scheme: industrialization. The rigid and totalitarian central government is still an impediment to industrialization because it obstructs private economic initiatives and scientific progress. The militant society will therefore in times of peace move towards more individual freedom and democracy, and hence become what Spencer calls the industrial society (Spencer, H. 1873, 1876).
Charles Darwin's book about the origin of species exerted an important influence on Spencer's philosophy, although he never totally rejected lamarckism. The principle of the survival of the fittest is only applicable to the evolution of the species and societies, not to the evolution of the Earth or the Universe, and neither to the ontogenetic development of the individual. The principle of natural selection could therefore not acquire the same central position in Spencer's evolutionary thought that it had in Darwin's.
Spencer applied the principle of the survival of the fittest to the formation of the first primitive societies in the same way as Bagehot did:
"... this formation of larger societies by the union of smaller ones in war, and this destruction or absorption of the smaller un-united societies by the united larger ones, is an inevitable process through which the varieties of men most adapted for social life, supplant the less adapted varieties." (Spencer, H. 1893)
Just like Bagehot and Tylor, Spencer hardly distinguished between social and organic inheritance. It is therefore difficult to decide whether the above citation refers to genetic or cultural selection. Spencer does, however, apply the principle of natural selection to phenomena which from a contemporary point of view can only be regarded as social heritage. Spencer describes the origin of religion in this way:
"If we consider that habitually the chief or ruler, propitiation of whose ghost originates a local cult, acquired his position through successes of one or other kind, we must infer that obedience to the commands emanating from him, and maintenance of the usages he initiated, is, on the average of cases, conducive to social prosperity so long as conditions remain the same; and that therefore this intense conservatism of ecclesiastical institutions is not without a justification. Even irrespective of the relative fitness of the inherited cult to the inherited social circumstances, there is an advantage in, if not indeed a necessity for, acceptance of traditional beliefs, and consequent conformity to the resulting customs and rules." (Spencer, H. 1896).
The principle of the survival of the fittest can obviously lead to a philosophy of the right of the superior forces, i.e. a laissez-faire policy. To Spencer this principle applied primarily to the individual. He was against any kind of social policy for the benefit of the poor and weak individuals. Spencer was a leading advocate of "competitive individualism" in economic and social matters (Jones, G. 1980). He does not see egoism and altruism as opposites, but as two sides of the same coin. Whoever wants the best for himself also wants the best for society because he is part of society, and egoism thereby becomes an important driving force in the evolution of society (Spencer, H. 1876).
Spencer did not, however, support a laissez-faire policy when it came to international wars (Schallberger 1980). He was very critical of Britain's increasing militarization and imperialism which he saw as an evolutionary retrogression. He also warned that in modern society it is mostly the strongest men who go to war and die, whereas the weakest stay behind and reproduce. Persistent optimist that he was, Spencer still believed that wars were a transitory stage in human evolutionary history:
"But as there arise higher societies, implying individual characters fitted for closer co-operation, the destructive activities exercised by such higher societies have injurious re-active effects on the moral natures of their members - injurious effects which outweigh the benefits resulting from extirpation of inferior races. After this stage has been reached, the purifying process, continuing still an important one, remains to be carried on by industrial war - by a competition of societies during which the best, physically, emotionally, and intellectually, spread most, and leave the least capable to disappear gradually, from failing to leave a sufficiently-numerous posterity." (Spencer, H. 1873).
Spencer's theories have first and foremost been criticized for the paradox that the free rein of the superior forces should lead to harmony. His opponents said that he denied the disadvantages of capitalist society in order to maintain his a priori belief that evolution is the same as progress. It is said that Spencer in his older days became more disillusioned and began to realize this problem (Schallberger 1980).
The french historian of literature Ferdinand Brunetière was inspired by Darwin's evolutionary theory, and thought that literature and other arts evolved according to a set of rules which were analogous to, but not identical to, the rules that govern biological evolution:
"Et, dès à présent, si l'apparition de certaines espèces, en un point donné de l'espace et du temps, a pour effet de causer la disparation de certaines autres espèces; ou encore, s'il est vrai que la lutte pour la vie ne soit jamais plus âpre qu'entre espèces voisines, les exemples ne s'offrent-ils pas en foule pour nous rappeler qu'il n'en est pas autrement dans l'histoire de la littérature et de l'art?" (Brunetière 1890).
Although the concept of cultural inheritance is not explicitly mentioned by Brunetière, he does undeniably distinguish between race and culture. He says that the evolution of literature and art depends on race as well as on environment, social and historical conditions, and individual factors. Furthermore, he does distinguish between evolution and progress.
The first to give a precise formulation of cultural selection theory was Leslie Stephen. In his book The science of ethics (1882) he draws a clear distinction between social and organic evolution, and explains the difference between these two processes by examples such as the following:
"Improved artillery, like improved teeth, will enable the group to which it belongs to extirpate or subdue its competitors. But in another respect there is an obvious difference. For the improved teeth belong only to the individuals in whom they appear and to the descendants to whom they are transmitted by inheritance; but the improved artillery may be adopted by a group of individuals who form a continuous society with the original inventor. The invention by one is thus in certain respects an invention by all, though the laws according to which it spreads will of course be highly complex."
The distinction between cultural and organic evolution is important to Stephen because the organic evolution is so slow that it has no relevance in social science. Stephen also discusses what the unit of selection is. In primitive tribal wars it may be an entire tribe that is extinguished and replaced by another tribe with a more effective art of war; but in modern wars between civilized states it is rather one political system winning over another, while the greater part of the defeated people survive. Ideas, too, can be selected in a process which does not depend on the birth and death of people. Stephen is thus aware that different phenomena spread by different mechanisms, as we can see from the following citation:
"Beliefs which give greater power to their holders have so far a greater chance of spreading as pernicious beliefs would disappear by facilitating the disappearance of their holders. This, however, expresses what we may call a governing or regulative condition, and does not give the immediate law of diffusion. A theory spreads from one brain to another in so far as one man is able to convince another, which is a direct process, whatever its ultimate nature, and has its own laws underlying the general condition which determines the ultimate survival of different systems of opinion." (Stephen 1882).
Leslie Stephen's brilliant theories of cultural evolution have largely been ignored and seem to have had no influence on later philosophers. Benjamin Kidd's work "Social Evolution" from 1894, for instance, does not mention cultural selection.
Benjamin Kidd was inspired by both Marx and Spencer (mostly Spencer) but criticized both. It may seem as if he tried to strike the golden mean. He granted to the marxists that the members of the ruling class were not superior. He believed that the ruling families were degenerating so that new rulers had to be recruited from below. He was therefore against privileges. He denied the innate intellectual superiority of the white race, which he ascribed to social heritage, by which he meant accumulated knowledge. On the other hand he agreed with the racists that the english race was superior when it came to "social efficiency", by which he meant the ability to organize and to suppress egoistic instincts to the benefit of the community and the future. Kidd attributed this altruism to the religious instinct. Curious as it may seem, he explained the evolution of religion by natural selection of the strongest race on the basis of organic inheritance. Although Kidd refers to Leslie Stephen in other contexts, he never mentions selection based on social heritage. As a consequence of Weismann's rejection of lamarckism, Kidd saw an eternal competition as necessary for the continued evolution of the race. He therefore rejected socialism, which he believed would lead to degeneration.
2.2 Social darwinism
The difficulty in distinguishing between social and organic inheritance continued until well after world war I. The mass psychologist William McDougall, for example, described the selection of populations on the basis of religion, military strength, or economical competence, without talking about social inheritance. These characters were in McDougall's understanding based on inborn dispositions in the different races (McDougall 1908, 1912).
This focus on natural selection and the survival of the fittest as the driving force in the evolution of society paved the way for a multitude of philosophies that glorified war and competition. The aryan race was regarded as superior to all other races, and the proofs were seen everywhere: australians, maoris, red indians, and negroes - everybody succumbed in the competition with the white man.
The term social darwinism was introduced in 1885 by Spencer's opponents and has since then been applied to any social philosophy based on darwinism (Bannister 1979). The definition of this term has been lax and varying, depending on what one wanted to include under this invective.
It was Spencer, not Darwin, who coined the expression "the survival of the fittest". Implicit in this formulation lies the assumption that fittest = best, i.e. the one who survives in the competition is the best. Only many years later was it realized that this expression is a tautology, because fitness is indeed defined as the ability to survive - hence: the survival of the survivor (Peters 1976).
An implicit determinism was also buried in Darwin's expression "natural selection". What was natural was also beneficial and desirable. Humans and human society were, in the worldview of the social darwinists, part of nature, and the concept of naturalness had then, as it has today, an almost magical appeal. Regarding man as part of nature must, in its logical consequence, mean that everything human is natural - nothing is unnatural. The concept of naturalness is therefore meaningless, but nobody seems to have realized that this was no objective category, but an arbitrary value-laden concept. By calling the evolution natural, you preclude yourself from choosing. Everything is left to the free rein of the superior forces. Nobody dared to break the order of nature, or to question the desirability of the natural selection. Evolution and progress were synonyms.
Social darwinism was used to justify all kinds of liberalism, imperialism, racism, nazism, fascism, eugenics, etc. I shall refrain from listing the numerous ideologies that social darwinism has fostered - many books have already been written on that subject - but merely remark that social darwinism was not rejected until the second world war had demonstrated the horrors to which this line of thought may lead.
The american sociologist Albert G. Keller criticized the previous social darwinists for basing their evolutionary theory on organic inheritance (1916). Referring to Weismann, he rejected the idea that acquired characteristics such as traditions and morals could be inherited.
Keller was inspired by Darwin's general formula for biological evolution: that the conjoined effect of variation, selection and reproduction leads to adaptation. By simple analogy he defined social variation, social selection, and social reproduction. Keller regarded this idea as his own. He did of course refer to several british social thinkers, including Spencer and Bagehot, but he interpreted their theories as based on organic inheritance. He had no knowledge of Leslie Stephen.
Keller's book is a systematic examination of the three factors: variation, selection, and reproduction, and hence the first thorough representation of cultural selection theory. Many years would pass before another equally exhaustive discussion of cultural selection was published. Keller described many different selection mechanisms. He used the term automatic selection to designate the outcome of conflicts. This could happen with or without bloodshed. The opposite of automatic selection was labeled rational selection, i.e. the result of rational decisions based on knowledge. Keller drew a clear distinction between biological and cultural selection and between biological and cultural fitness. He maintained that the two processes were in conflict with each other and would lead in different directions (Keller 1916). The social reproduction was carried by tradition, education, belief, and worship of ancestors. Religion was described as a very strong preserving and guiding force:
"Discipline was precisely what men needed in the childhood of the race and have continued to require ever since. Men must learn to control themselves. Though the regulative organization exercised considerable discipline, its agents were merely human; the chief had to sleep occasionally, could not be everywhere at once, and might be deceived and evaded. Not so the ghosts and spirits. The all-seeing daimonic eye was sleepless; no time or place was immune from its surveillance. Detection was sure. Further, the penalty inflicted was awesome. Granted that the chief might beat or maim or fine or kill, there were yet limits to what he could do. The spirits, on the other hand, could inflict strange agonies and frightful malformations and transformations. Their powers extended even beyond the grave and their resources for harm outran the liveliest imaginings [...] there is no doubt that its disciplinary value has superseeded all other compulsions to which mankind has ever been subject." (Sumner & Keller 1927).
Keller's criticism of social darwinism (1916) was purely scientific, not political, and he was an adherent of eugenics, which until the second world war was widely regarded as a progressive idea.
Spencer imagined society as an organism, where the different institutions are comparable with those organs in an organism that have similar functions. The government, for example, was regarded as analogous with a brain, and roads were paralleled with veins. This metaphor has been popular among later social scientists and led to a line of thought called functionalism. This theoretical school is concerned with analyzing what function different institutions have in society. Functionalism is therefore primarily a static theory, which seldom concerns itself with studying change. Even though evolutionism was strongly criticized in this period, there was no fundamental contradiction between evolutionism and functionalism, and some outstanding functionalists have expressed regret that evolutionism was unpopular:
"Evolutionism is at present rather unfashionable. Nevertheless, its main assumptions are not only valid, but also they are indispensible to the field-worker as well as to the student of theory." (Malinowski 1944).
Functionalists defended their lack of interest in evolutionary theory by claiming that a structural and functional analysis of society must precede any evolutionary analysis (Bock 1963). One of the most famous anthropologists, Alfred R. Radcliffe-Brown, had the same view on evolutionism as his equally famous colleague Bronislaw Malinowski (Radcliffe-Brown 1952). He drew a distinction between different kinds of changes in a society: firstly, the fundamental changes in society as an adaptation to altered outer conditions; secondly, the adaptation of different social institutions to each other; and thirdly, the adaptation of individuals to these institutions. Radcliffe-Brown described these changes only in general terms as "adjustment" and "adaptation".
Malinowski, on the other hand, goes into more detail about evolutionary theory. A cultural phenomenon can, according to Malinowski, be introduced into a society either by innovation or by diffusion from another society. The maintenance of the phenomenon then depends on its influence on the fitness of the culture, or its "survival value". Malinowski attributes great importance to diffusion in this context. Since cultural phenomena, as opposed to genes, can be transmitted from one individual to another or from one society to another, wars should not be necessary for the process of cultural evolution, according to Malinowski. A degenerating society can either be incorporated under a more effective society or adopt the institutions of the higher culture. This selection process will result in greater effectiveness and improved living conditions (Malinowski 1944).
A synthesis between evolutionism and functionalism should certainly be possible, since the selection theory gives a possible connection between the function of a cultural institution and its origin. A functional institution will win over a less effective institution in the process of cultural selection (Dore 1961). Considering the domination of functionalist thought, it is no surprise that evolutionism got a renaissance from about 1950.
The name "neo-evolutionism" implies that this is something new, which is somewhat misleading. Some neo-evolutionists rejected this term and called their science "plain old evolutionism" - and so it was! (Sahlins & Service 1960, p. 4). The tradition from Spencer and Tylor was continued without much novel thinking. The neo-evolutionists focused on describing the evolution of societies through a number of stages, finding similarities between parallel evolutionary processes, and finding a common formula for the direction of evolution. One important difference from nineteenth century evolutionism was that the laws of biological inheritance now were known to everyone. No one could carry on with confusing genetic and social inheritance, and a clear distinction was drawn between racial and social evolution. Theories were no longer racist, and the old social darwinism was rejected.
Whereas genetic inheritance can only go from parent to child, the cultural heritage can be transmitted in all directions, even between unrelated peoples. The neo-evolutionists therefore found diffusion important. They realized that a culture can die without the people carrying that culture being extinguished. In other words, the cultural evolution does not, unlike the genetic evolution, depend on the birth and death of individuals (Childe 1951).
An important consequence of diffusion is convergence. In prehistoric primitive societies social evolution was divergent. Each tribe adapted specifically to its environment. But in modern society communication is so effective that diffusion plays a major role. All cultures move in the same direction because advantageous innovations spread from one society to another, hence convergence (Harding 1960, Mead 1964).
The neo-evolutionists considered it important to find a universal law describing the direction of evolution:
"To be an evolutionist, one must define a trend in evolution..." (Parsons 1966, p. 109)3.
And there were many suggestions as to what this trend was. Childe (1951) maintained that the cultural evolution proceeded in the same direction as the biological evolution, and in fact had replaced the latter. As an example, he mentioned that we put on a fur coat when it is cold instead of developing a fur, as the animals do. Spencer had already characterized the direction of evolution by ever increasing complexity and integration, and this idea still had many adherents among the neo-evolutionists (Campbell 1965, Eder 1976).
To Leslie White (1949) integration meant a strong political control and ever greater political units. This integration was not a goal in itself but a means towards the true goal of evolution: the greatest possible and most effective utilization of energy. White argued in thermodynamic terminology for the view that the exploitation of energy was the universal measure of cultural evolution. He expressed this with the formula:
Energy × Technology → Culture
Talcott Parsons (1966), among others, characterized the direction of evolution as an ever growing accumulation of knowledge and an improvement of the adaptability of the humans (Sahlins 1960; Kaplan, D. 1960; Parsons 1966). Yehudi Cohen (1974) has listed several criteria which he summarizes as man's attempts to free himself from the limitations of his habitat. Zoologist Alfred Emerson defined the cultural evolution as increasing homeostasis (self-regulation). He was criticized for an all-embracing, imprecise, and value-laden use of this concept (Emerson 1956). The most all-encompassing definition of the direction of evolution is found in the writings of Margaret Mead (1964:161):
"Directionality, at any given period, is provided by the competitive status of cultural inventions of different types and the competitive status of the societies carrying them; the outcome of each such competition, as it involves irreversible change (for example, in the destruction of natural resources or an invention that makes obsolete an older invention), defines the directional path."
Such a tautology is so meaningless that one must wonder how the neo-evolutionists could maintain the claim that evolution follows a certain definable direction.
Characteristically, most neo-evolutionists spent more energy studying the course and direction of evolution than its fundamental mechanisms. Most were content with repeating the three elements in Darwin's general formula: variation, selection, and reproduction, without going into detail. In particular, there was surprisingly little attention to the process of selection. Hardly anyone cared to define the criteria that determined which features were promoted by cultural selection and which were weeded out. They were satisfied with the general criterion: survival value. Still the tautology is haunting! Without the selection criterion they also lacked any argument for why evolution should go in the claimed direction.
There was also a certain confusion over what the unit of selection was. Was it customs, which were selected, or was it the people bearing them? Or was it entire societies that were the objects of the selection process? Some thinkers failed to define any unit of selection at all. Many used the word invention (Childe 1936, 1951). Emerson (1956, 1965) had the idea that symbols in the cultural evolution were equivalent to genes in the biological evolution. Parsons (1966) mentioned several possible units of selection, and Mead presented the most complete list of possible units of selection:
"a single trait, a trait cluster, a functional complex, a total structure; a stage of complexity in energy use; a type of social organization" (Mead 1964).
A few scientists have given a reasonably detailed description of possible selection processes (Murdock 1956, Kaplan, D. 1960, Parsons 1966). The most comprehensive list of selection mechanisms is found in an often cited article by the social psychologist Donald Campbell (1965):
"Selective survival of complete social organizations, selective diffusion or borrowing between social groups, selective propagation of temporal variations, selective imitation of inter-individual variations, selective promotion to leadership and educational roles, rational selection."
Several philosophers found that human scientific knowledge evolves by the selection of hypotheses (Kuhn 1962, Popper 1972, Toulmin 1972, Hull 1988).
The german sociologist Klaus Eder has developed a model where the selection of cognitive structures, rather than mere knowledge, controls cultural evolution. Man's moral structuring of interactive behavior, systems of religious interpretations, and symbolic structuring of the social world, are important elements in the worldview, on which the social structure is based. According to Eder, mutations in this cognitive structure and selective rewarding of those moral innovations that improve society's problem solving capability and hence its ability to maintain itself, is what controls social evolution. Adaptation to the ecological conditions, and other internal conditions, are the most important factors in Eder's theory, whereas he attributes little significance to external factors, such as contact with other societies (Eder 1976).
The main criticism against nineteenth century evolutionism was that it did not distinguish between evolution and progress, and the theories were often called teleological. Another word, which was often used when criticizing evolutionism, was unilinearity. This referred to the idea that all societies were going through the same linear series of evolutionary stages. In other words: a universal determinism and a conception of parallel evolutionary courses. Twentieth century neo-evolutionists were busy countering this criticism by claiming that their theories were multilinear. They emphasized local differences between different societies due to different environments and life conditions. The claim about multilinearity was, however, somewhat misleading since they still imagined a linear scale for measuring evolutionary level (see Steward 1955 for a discussion of these concepts).
In 1960 a new dichotomy was introduced in evolutionary theory: specific versus general evolution. Specific evolution denotes the specific adaptation of a species or a society to the local life conditions or to a particular niche. General evolution, on the other hand, meant an improved general ability to adapt. A species or a society with a high adaptability may outcompete a specifically adapted species or society, especially in a changing environment. In other cases, a specifically adapted species or society may survive in a certain niche (Sahlins & Service 1960). This dichotomy seemed to solve the confusion: general evolution was unilinear, while specific evolution was multilinear (White 1960).
Neo-evolutionism was mainly used for explaining the differences between industrialized countries and developing countries, and between past and present. The talk was mainly about fundamental principles, and rarely went into detail with the evolutionary history of specific cultures or specific historic occurrences. The explanatory power of the theories was usually limited to the obvious: that certain innovations spread because they are advantageous, whereas the unfavorable innovations are forgotten.
Contemporary social scientists are often eager to distance themselves from social evolutionism. Nevertheless, evolutionary thought is still prevalent in many areas of the social sciences, and evolutionist theories are still being published (e.g. Graber, R.B. 1995).
Another research tradition, which for many years has been seen as an alternative to evolutionism, is diffusionism. This research tradition focuses on diffusion, rather than innovation, as an explanation for social change. Strictly speaking, the diffusionist representation involves the same three elements that evolutionism is based on: innovation, selection, and reproduction - but viewed from another standpoint. The difference between the two paradigms is that diffusionism focuses on the spatial dimension of reproduction, i.e. the geographical spread of a phenomenon, whereas evolutionism focuses on the time dimension of reproduction, i.e. the continued existence and maintenance of a phenomenon. Diffusionists regard innovation as a rare and unique occurrence, whereas evolutionists acknowledge the possibility that the same innovation can occur several times at different places independently. The concept of selection is rarely discussed by that name by the diffusionists, although they often work with concepts such as barriers to diffusion or differences in receptivity to new ideas (Ormrod 1992). Many diffusionists regard themselves as in opposition to evolutionism, without realizing that the difference between the two models is quantitative, rather than qualitative.
The first great scientist within diffusionism was the french sociologist Gabriel Tarde. He did not deny the theory of natural selection, but thought that this theory was a gross generalization which had been ascribed more importance than its explanatory power could justify, and that random occurrences play a more important role than the evolutionists would admit (Tarde 1890, 1902). Although Tarde accepted the importance of progress, he was no determinist. Progress was not inevitable. The keyword in Tarde's theory was imitation. Innovations spread from one people to another by imitation. He distinguished between two kinds of innovations: accumulative and alternative. By alternative inventions he meant ideas or customs which could not spread without displacing some other idea or custom. With this concept selection was sneaked into Tarde's theory under the name of opposition. Opposition between alternative innovations could take the form of war, competition, or discussion (Tarde 1890, 1898).
Another early proponent of diffusionism was the american anthropologist Franz Boas. It was Boas who started the discussion about whether similarities between distant cultures were due to diffusion or independent innovation. He criticized the evolutionists for attributing too much importance to parallel evolution, i.e. the assumption that the same phenomenon has arisen independently at different places. Boas is usually considered one of the greatest opponents of evolutionism, but it is worth mentioning that he did not reject the theoretical foundation of evolutionism. Boas was opposed to great generalizations, and he emphasized that similarities between two cultures could be explained either by diffusion or parallel evolution and that it was impossible to distinguish between these two possibilities without closer investigation (Harris 1969:259,291). In his discussions he gave examples of both diffusion and parallel invention. As is evident from the following citation, he did indeed recognize that the two processes are both controlled by the same selection process:
"When the human mind evolves an idea, or when it borrows the same idea, we may assume that it has been evolved or accepted because it conforms with the organization of the human mind; else it would not be evolved or accepted. The wider the distribution of an idea, original or borrowed, the closer must be its conformity with the laws governing the activities of the human mind. Historical analysis will furnish the data referring to the growth of ideas among different people; and comparisons of the processes of their growth will give us knowledge of the laws which govern the evolution and selection of ideas." (Boas 1898, cit. after Stocking 1974).
Later diffusionists have actually described the attributes of an invention that have significance for whether it will spread or not. Everett Rogers lists the following attributes of an invention as important: advantage relative to alternatives, compatibility with existing structures, complexity, trialability, and observability. Rogers repeatedly emphasizes, however, that it is the perceived, rather than the objective attributes of the invention that matters (Rogers, E.M. 1983). By this emphasis he places the locus of control in the potential adopter of a new invention rather than in the inanimate invention itself. And herein lies the hidden agenda of the conflict between diffusionists and evolutionists: The diffusionists want to maintain an anthropocentric worldview, where the world is governed by conscious decisions of persons with a free will, whereas the non-anthropocentric model of evolutionism attributes an important amount of control to haphazard and often unanticipated effects and automatic mechanisms.
The most obvious difference between diffusionism and evolutionism is that diffusionism first and foremost is an idiographic tradition. It focuses on specific studies of delimited phenomena, trying to map the geographical distribution of a certain custom or technology, and finding out where it has first arisen and how it has spread. Diffusionists reject the great generalizations, and believe more in chance occurrences than in universal laws. Evolutionism, on the contrary, is a nomothetic science, which seldom has been applied to the study of specific details (Harris 1969).
The difference between the two research traditions can also be illustrated as a difference between a physical-chemical metaphor and a biological metaphor. Diffusion is a process whereby different molecules get mixed because of their random movements. By using the random motion of molecules as a metaphor for customs spreading in society, the diffusionists have stressed the importance of randomness. This metaphor naturally draws the attention of the scientists toward the spatial dimension, the velocity with which customs spread geographically, and the barriers impeding this expansion. The metaphor encompasses only the movement aspect, but neither innovation, selection, nor reproduction. The latter three aspects belong to the biological metaphor on which social evolutionism is built. Evolutionism focuses on the time dimension, and it is important to notice that the time dimension is irreversible. Due to this irreversibility, the attention of the evolutionists becomes focused on the direction of the evolution. Evolutionism has thus become a deterministic philosophy of progress.
The most extreme form of diffusionism is built on the concept of a few culture centers, where innovations miraculously arise, and then spread in concentric circles from that center. This line of thought came primarily from religious circles as a reaction against the atheistic evolutionism, and as an attempt to bring science in harmony with the christian story of creation (Harris 1969).
Early diffusionism can hardly be said to be a theoretical school, since it first and foremost was a reaction against the excessive theorizing of the evolutionists. Diffusionism has even been called a non-principle (Harris 1969).
Many diffusion studies have been made independently within many different areas of research all throughout the twentieth century. These are mainly idiographic studies, too numerous to mention here (See Katz et al. 1963; Rogers, E.M. 1983). Most diffusionists study only inventions that are assumed to be advantageous so that they can ignore selection criteria (Rogers, E.M. 1983).
Occasionally, diffusion studies have been combined with darwinian thinking, namely in linguistics (Greenberg 1959). It may seem illogical to apply selection theory to linguistics, since it must be difficult for linguists to explain why one synonym or one pronunciation should spread at the expense of another, when, in principle, they are equally applicable. Gerard et al. (1956) propose that the selection criteria are that the word must be easy to pronounce and easy to understand.
Geographer Richard Ormrod has argued for incorporating the concepts of adaptation and selection in diffusion studies. A diffusing innovation is selected by potential adopters who decide whether to adopt the innovation or not. Ormrod understands that the fitness of an innovation depends on local conditions. What is fit in one place may not be fit at some other location. Consequently, innovations are often modified in order to adapt them to local conditions (Ormrod 1992).
Newer diffusion theories have departed somewhat from the purely idiographic tradition and developed a detailed mathematical formalism enabling a description of the velocity with which innovations spread in society (Hamblin et al. 1973, Valente 1993). Incidentally, sociobiologists have produced very similar mathematical models for cultural diffusion (Aoki et al. 1996), but the two schools are still developing in parallel without reference to each other.
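The mathematical formalism referred to here is not reproduced in this chapter, but its general flavor can be conveyed by a small sketch. The following Python fragment implements the Bass diffusion model, one common formalization of the velocity with which an innovation spreads through a population of potential adopters; it is given purely as an illustration, the choice of model is mine, and the parameter values are invented rather than taken from any of the studies cited above.

# A minimal sketch of a diffusion-velocity model (the Bass model).
# Illustration only: the parameters below are invented, not taken from
# Hamblin et al., Valente, or Aoki et al.

def bass_diffusion(m, p, q, steps):
    """Return the cumulative number of adopters after each time step.

    m:     total number of potential adopters
    p:     coefficient of innovation (adoption independent of prior adopters)
    q:     coefficient of imitation (adoption through contact with adopters)
    """
    adopters = 0.0
    history = []
    for _ in range(steps):
        # New adopters arise partly spontaneously (p) and partly by imitating
        # the fraction of the population that has already adopted (q).
        new = (p + q * adopters / m) * (m - adopters)
        adopters += new
        history.append(adopters)
    return history

if __name__ == "__main__":
    for t, n in enumerate(bass_diffusion(m=1000, p=0.03, q=0.4, steps=25), 1):
        print(f"period {t:2d}: {n:7.1f} cumulative adopters")

Models of this family produce the characteristic S-shaped adoption curve that diffusion researchers use to describe how quickly an innovation spreads.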
In the early 1970's a new paradigm emerged within biology, dealing with the explanation of social behavior of animals and humans by referring to evolutionary, genetic, and ecological factors. The principal work within this new paradigm was E.O. Wilson's famous and controversial book: Sociobiology (1975), which named and defined this discipline. Wilson's book provoked a fierce criticism from the sociologists (see e.g. Sahlins 1976). The conflict between the biological and the humanistic view of human nature seems impossible to resolve, and the heated debate is still going on today, twenty years later.
Apparently, it has been quite natural for the early ethologists and sociobiologists to reflect on the relationship between genetic and cultural inheritance. Several thinkers have independently introduced this discussion to the sociobiological and evolutionary paradigm, in most cases without knowledge of the previous literature on this subject.
The possibility of selection based on cultural inheritance is briefly mentioned by one of the founders of ethology, Konrad Lorenz (1963), and likewise in Wilson's book Sociobiology (1975). In a later book (1978) Wilson mentions the important difference between genetic and cultural evolution, that the latter is lamarckian, and therefore much faster.
In 1970, archaeologist Frederick Dunn defined cultural innovation, transmission, and adaptation with explicit reference to the analogy with darwinian evolutionary theory, but avoided any talk about cultural selection - apparently in order to avoid being connected with social darwinism and evolutionism, from which he found it necessary to dissociate himself:
"Although several analogies have been drawn between biological evolutionary concepts and cultural evolution, the reader will appreciate that they are of a different order than those analogies that once gave "cultural evolution" an unsavory reputation [...] In particular, I avoid any suggestion of inevitable and necessary tendencies toward increasing complexity and "improvement" of cultural traits and assemblages with the passage of time." (Dunn 1970).
In 1968 anthropologist and ethologist F.T. Cloak published a rudimentary sketch of a cultural evolutionary theory closely related to the genetic theory, imagining that culture was transmitted in the form of small independent information units, subject to selection. In a later article (1975) he explained the distinction between the cultural instructions and the material culture that these instructions give rise to, analogously with the distinction between genotype and phenotype in biology. He also pointed out the possibility for conflict between cultural instructions and their bearers, as he compared the phenomenon with a parasite or virus.
In 1972 psychologist Raymond Cattell published a book attempting to construct an ethic on a scientific, evolutionary basis. He emphasized cultural group selection as a mechanism by which man evolves cooperation, altruism, and moral behavior. He held the opinion that this mechanism ought to be promoted, and imagined giant sociocultural experiments with this purpose. By this argument he copied eugenic philosophy to cultural evolution.
At a symposium in 1971 about human evolution, biologist C.J. Bajema proposed a simple model for the interaction between genetic and cultural inheritance. He imagined this process as a synergistic interaction, where the cultural part of the process was defined accordingly:
"Cultural adaptation to the environment takes place via the differential transmission of ideas which influence how human beings perceive and interact with the environment which affect survival and reproductive patterns in and between human populations." (Bajema 1972).
A somewhat more detailed description of cultural selection mechanisms was presented by anthropologist Eugene Ruyle at another meeting in 1971. Ruyle emphasized the psychological selection in the individual's "struggle for satisfaction". His description of selection mechanisms seems to be very much inspired by Donald Campbell's article from 1965 (see page 28), although he denies the possibility for cultural group selection (Ruyle 1973).
Among the first biologists taking up the idea of cultural selection was also Luigi Cavalli-Sforza, who at a conference in 1970 published a theory of cultural selection based on the fact that some ideas are more readily accepted than others (Cavalli-Sforza 1971). It is apparent from this publication that Cavalli-Sforza is totally ignorant of the previous literature on this subject despite some knowledge of anthropology. His only reference to cultural selection is his colleague Kenneth Mather, who mentions group selection based on social inheritance in a book on human genetics. Mather (1964) does not mention where he got this idea from. Since neither Cavalli-Sforza nor Mather at this time reveals any knowledge of cultural evolution theory in the social sciences, we must assume that they developed most of this theory themselves. Curiously enough, the abovementioned article by Cavalli-Sforza contains a discussion of the difficulty in deciding whether an idea that occurs in multiple different places has spread by diffusion or has been invented independently more than once.
Together with his colleague Marcus Feldman, Cavalli-Sforza later published several influential articles on cultural selection. Their literature search has been rather casual. In 1973 they referred to an application of selection theory in linguistics (Gerard et al. 1956) and to a short mention of the theory in a discussion of eugenics (Motulsky 1968). Not until 1981 did they refer to more important publications such as White (1959) and Campbell (1965).
The publications of Cavalli-Sforza and Feldman were strongly influenced by their background in genetics, which is an exact science. Their advancement of selection theory consisted mainly of setting up mathematical models (Cavalli-Sforza & Feldman 1981). The concise description of the models by mathematical formulae has certain advantages, but apparently also serious drawbacks. Many social phenomena are more complex and irregular than mathematical formulae can express, and the representations reveal that the examples given were chosen to fit the mathematical models, rather than vice versa. The majority of their models thus describe vertical transmission, i.e. from parents to children, rather than other kinds of transmission. There was also a certain focus on models in which the selection depends on from whom an idea comes, rather than the quality of the idea itself. Such models may admittedly have some relevance in the description of social stratification and social mobility.
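Their models themselves are not reproduced here, but the flavor of a vertical-transmission model can be conveyed by a toy simulation. The sketch below assumes a deliberately simplified scheme in which each child copies a cultural variant from its two parents, with a transmission bias when the parents disagree; the population size, bias, and starting frequency are invented for illustration, and the code does not follow any particular model from Cavalli-Sforza and Feldman's book.

import random

# A toy sketch of vertical (parent-to-child) cultural transmission with a
# transmission bias. Illustration only; the numbers are invented.

def vertical_transmission(pop_size, generations, start_freq, bias):
    """Track the frequency of cultural variant A across generations.

    pop_size:    number of children per generation
    start_freq:  initial frequency of variant A among parents
    bias:        probability that a child adopts A when its parents carry
                 different variants (0.5 would mean unbiased copying)
    """
    freq = start_freq
    trajectory = [freq]
    for _ in range(generations):
        adopters = 0
        for _ in range(pop_size):
            mother_has_a = random.random() < freq
            father_has_a = random.random() < freq
            if mother_has_a == father_has_a:
                # Agreeing parents simply pass on their shared variant.
                child_has_a = mother_has_a
            else:
                # Mixed parents: biased transmission decides the outcome.
                child_has_a = random.random() < bias
            if child_has_a:
                adopters += 1
        freq = adopters / pop_size
        trajectory.append(freq)
    return trajectory

if __name__ == "__main__":
    for gen, f in enumerate(vertical_transmission(1000, 20, 0.1, 0.6)):
        print(f"generation {gen:2d}: frequency of variant A = {f:.3f}")

Even this crude scheme illustrates the kind of question such models address: how fast a favored variant spreads when transmission is restricted to the parent-child channel.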
2.7 Interaction between genetic and cultural selection
In 1976 William Durham asserted that the genetic and the cultural evolution are mutually interacting, and hence in principle cannot be analyzed separately as independent processes. The interaction between these two processes was aptly named genetic/cultural coevolution. Unlike several other thinkers, Durham did not at this time see any conflict between these two kinds of evolution. In his understanding the two selection processes were both directed towards the same goal: the maximum possible reproduction of the individual and its nearest relatives. This criterion is what biologists call inclusive fitness. Despite criticism from both anthropologists and biologists (Ruyle, et al. 1977), Durham stuck to his position for a couple of years (Durham 1979), but has later accepted that genetic and cultural fitness are in principle different, although he maintained that the two kinds of selection in most cases reinforce each other and only rarely are in opposition to each other (Durham 1982, 1991). The most important selection mechanism in Durham's theory is conscious choices based on criteria which in themselves may be subject to cultural selection. He emphasized the distinction between cultural information units, called memes, and the behaviors they control. Genes and memes form two parallel information channels and their reciprocal interaction is symmetrical in Durham's model. Unfortunately, he did not distinguish clearly between selective transmission of memes, and selective use of these (Durham 1991, this problem is discussed on page 72).
While Durham regarded genetic and cultural selection as synergistic, two other scientists, Robert Boyd and Peter Richerson (1976, 1978), asserted that genetic and cultural fitness are two fundamentally different concepts, and if they point in the same direction it is only a coincidence. Boyd and Richerson have developed a theoretical model for the conflict between these two selection processes and the consequences of such a conflict (1978).
In a later article (1982) Boyd and Richerson claimed that humans have a genetic predisposition for cultural conformism and ethnocentrism, and that this trait promotes cultural group selection. This mechanism can then lead to cooperation, altruism, and loyalty to a group. These are characters that usually have been difficult for sociobiologists to explain because Darwin's principle of natural selection presumably would lead to egoism. Several other researchers have since proposed similar theories explaining altruism by cultural selection mechanisms (Feldman, Cavalli-Sforza & Peck 1985; Simon, H. 1990; Campbell 1991; Allison 1992).
In 1985, Boyd and Richerson at last provided a more thorough and well-founded collection of models for cultural selection. Their book also describes how those genes that make cultural transmission and selection possible may have originated, as well as an analysis of the conditions that determine whether cultural selection will increase or decrease genetic fitness (Boyd & Richerson 1985, see also Richerson & Boyd 1989).
While Boyd and Richerson maintain that cultural evolution is able to override genetic evolution, sociobiologist Edward Wilson and physicist Charles Lumsden had the opposite view on the gene/culture coevolution. They believed that the genetic evolution controls the cultural evolution. Their basic argument was that the cultural selection is controlled by people's genetically determined preferences, the so-called epigenetic rules. They imagined that the genes control the culture like a dog on a leash (Lumsden & Wilson 1981). Let me illustrate this so-called leash principle by the following example: Assume that a certain food item can be prepared in two different ways, A and B. A is the more common because it tastes better, but B is the healthier. In this situation genetic evolution will change people's taste so that they prefer B, and consequently cultural selection will quickly make B the most widespread recipe.
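The dynamic in this example can be made concrete with a very small simulation. The sketch below illustrates only the argument as just stated, not Lumsden and Wilson's actual mathematics; the update rates are invented, the sole point being that the culturally transmitted recipe tracks the slowly shifting genetic preference.

# A crude sketch of the 'leash principle' example above. Recipe B is the
# healthier variant, so the genetically determined preference for B creeps
# upward each generation, and cultural selection then rapidly pulls the use
# of recipe B toward that preference. All rates are invented.

def leash_simulation(generations, genetic_rate=0.02, cultural_rate=0.5):
    pref_b = 0.1    # fraction of the population genetically preferring B
    recipe_b = 0.1  # fraction of the population currently using recipe B
    for gen in range(generations):
        # Slow genetic change: preference for the healthier recipe increases.
        pref_b += genetic_rate * (1.0 - pref_b)
        # Fast cultural selection: recipe use tracks the current preference.
        recipe_b += cultural_rate * (pref_b - recipe_b)
        print(f"generation {gen:2d}: preference for B = {pref_b:.2f}, "
              f"use of recipe B = {recipe_b:.2f}")

if __name__ == "__main__":
    leash_simulation(20)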
Lumsden and Wilson's book expressed an extreme biological reductionism, since they imagined that genes are able to control almost everything by adjusting human preferences. In this model, culture becomes almost superfluous. Their book has been highly disputed. One important point of criticism was that their theory lacked empirical support. Although Lumsden and Wilson have documented that humans do have certain inborn preferences, they have never demonstrated any differences between humans in different cultures with respect to such preferences (Cloninger & Yokoyama 1981; Lewin 1981; Smith & Warren 1982; Lumsden, Wilson, et al. 1982; Almeida et al. 1984; Rogers, A.R. 1988). A problem with the leash principle is how to explain cultural traits that reduce genetic fitness. This objection has been met by the construction of a model of cultural transmission analogous to sexual selection - a genetic selection mechanism famous for its potential to reduce fitness (see chapt. 4.2) (Takahasi 1998).
In later publications, Lumsden and Wilson no longer insisted that cultural differences have a genetic explanation, but they did not retract this claim either. They still maintained that even small changes in the genetic composition of a population can lead to considerable changes in the culture (Lumsden & Wilson 1985; Lumsden 1988, 1989).
At a workshop in 1986 entitled "Evolved Constraints on Cultural Evolution"4 there was general agreement that a human is not born as a tabula rasa, but does indeed have genetically determined predispositions to learn certain behavior patterns more easily than others. But there was no support for the claim that genetic evolution can be so fast that it is able to govern cultural evolution. On the contrary, models were published showing that cultural evolution may in some cases produce behaviors that are genetically maladaptive, and that the leash principle can in fact be turned upside down, so that it is culture that controls the genes (Richerson & Boyd 1989, Barkow 1989).
An important contribution to the debate came from the psychologists John Tooby and Leda Cosmides, who proposed a new kind of human ethology which they call evolutionary psychology5. According to this theory, the human psyche is composed of a considerable number of specialized mechanisms, each of which has evolved for a specific adaptive function and does not necessarily work as a universal learning mechanism or fitness-maximizing mechanism. These psychological mechanisms are so complex, and genetic evolution so slow, that we must assume that the human psyche is adapted to the life-style of our ancestors in the pleistocene period:
"The hominid penetration into the "cognitive niche" involved the evolution of some psychological mechanisms that turned out to be relatively general solutions to problems posed by "local" conditions [...] The evolution of the psychological mechanisms that underlie culture turned out to be so powerful that they created a historical process, cultural change, which (beginning at least as early as the Neolithic) changed conditions far faster than organic evolution could track, given its inherent limitations on rates of successive substitution. Thus, there is no a priori reason to suppose that any specific modern cultural or behavioral practice is "adaptive" [...] or that modern cultural dynamics will necessarily return cultures to adaptive trajectories if perturbed away. Adaptive tracking must, of course, have characterized the psychological mechanisms governing culture during the Pleistocene, or such mechanisms could never have evolved; however, once human cultures were propelled beyond those Pleistocene conditions to which they were adapted at high enough rates, the formerly necessary connection between adaptive tracking and cultural dynamics was broken." (Tooby & Cosmides 1989).
The theory that genetically determined preferences control the direction of cultural evolution has been put forward many times, often without Lumsden and Wilson's exaggeration of the power of the genes. Psychologist Colin Martindale calls this principle hedonic selection:
"It is certainly possible that some of the genes freed by the capacity for culture may serve to "fine-tune" human hedonic responses so as to increase the probability that what brings pleasure will direct behavior in a way likely to increase [genetic] fitness. [...] it is generally assumed that hedonic selection will proceed in a certain direction until it is checked by the production of traits that render their possessors unfit [...]" (Martindale 1986).
While some scientists stress the importance of psychological mechanisms (e.g. Mundinger 1980), others regard the survival of the individual or group as the ultimate criterion for cultural selection:
"In the short run, various criteria - including efficiency of energy capture, and the satisfaction of perceived needs and wants - may determine the selection and retention of certain behavior. In the longer term, however, only if that behavior contributes to the persistence of the group or population in terms of reproductive continuity will it be truly retained." (Kirch 1980).
This model does not leave much room for psychological selection of cultural phenomena. According to Kirch (1980), such selection can go no further than the higher-level selection, with the individual or the group as the unit of selection, allows.
In recent years, the theory of gene/culture coevolution has been refined by a group of Canadian biologists led by C.S. Findlay. Findlay has continued the strictly mathematical tradition of Cavalli-Sforza and constructed a series of mathematical models of cultural evolution and gene/culture coevolution. The mathematical analysis reveals that even relatively simple cultural systems can give rise to a great variety of complex phenomena which are not possible in genetic systems of similar composition. These peculiar phenomena include multiple equilibrium states, oscillating systems, and stable polymorphism (Findlay, Lumsden & Hansell 1989a,b; Findlay 1990, 1992). Real-world examples of such complex mechanisms were not given, but a few studies applying gene/culture coevolution theory to actual observations have been published (Laland, Kumm & Feldman 1995).
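To give a concrete, if highly simplified, illustration of how multiple equilibria can arise in a cultural system, the following toy model (my own sketch for illustration, not one of Findlay's actual models; the bias strength D and the starting frequencies are arbitrary choices) iterates a frequency-dependent transmission rule with a conformist bias: whichever variant is in the majority is adopted disproportionately often, so the population ends up fixed on one variant or the other depending on where it starts, while an even split is an unstable equilibrium.

```python
# Toy illustration only (not Findlay's models): conformist cultural transmission.
# p is the frequency of one cultural variant; D is the strength of the
# conformist bias.  The recursion has stable equilibria at p = 0 and p = 1
# and an unstable equilibrium at p = 0.5 -- a minimal example of the
# multiple equilibrium states mentioned above.

def conformist_step(p, D=0.3):
    """One generation of transmission with conformist bias D (0 < D <= 1)."""
    return p + D * p * (1.0 - p) * (2.0 * p - 1.0)

def run(p0, generations=50, D=0.3):
    p = p0
    for _ in range(generations):
        p = conformist_step(p, D)
    return p

if __name__ == "__main__":
    for p0 in (0.45, 0.50, 0.55):
        print(f"start {p0:.2f} -> after 50 generations {run(p0):.3f}")
    # starting below 0.5 the variant drifts towards loss; above 0.5 towards fixation
```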
Richard Dawkins' famous and controversial book The selfish gene (1976) described genes as selfish beings striving only to make as many copies of themselves as possible. The body of an animal can thus be viewed as nothing more than the genes' tool for making more genes. Many people feel that Dawkins is turning things upside down, but his way of seeing things has nevertheless turned out to be very fruitful. In a short chapter of the same book he applied a similar point of view to culturally transmitted traits. Dawkins introduced the new name meme (rhymes with beam) for cultural replicators. A meme is a culturally transmitted unit of information analogous to the gene (Dawkins 1976, 1993).
The idea that a meme can be viewed as a selfish replicator that manipulates people to make copies of itself has inspired many scholars in recent years. An obvious example is a religious cult which spends most of its energy on recruiting new members. The sect sustains a set of beliefs that makes its members do exactly that: work hard to recruit new members.
A meme is not a form of life. Strictly speaking, a meme cannot reproduce itself; it can only influence people to replicate it. This is analogous to a virus: a virus does not contain the apparatus necessary for its own reproduction. Instead it parasitizes its host and uses the reproductive apparatus of the host cell to make new viruses. The same applies to a computer virus: it takes control of the infected computer for a while and uses it to make copies of itself (Dawkins 1993). Viruses and computer viruses are the favorite metaphors of meme theory, and the vocabulary is borrowed from virology: host, infection, immune reaction, etc.
The idea of selfish memes has developed into a new theoretical tradition which is usually called meme theory or memetics. While meme theorists agree that most memes are beneficial to their hosts, they often concentrate on adverse or parasitic memes because this is an area where meme theory has greater explanatory power than alternative paradigms. Unlike the more mathematically oriented sociobiologists, the meme theorists have no problem finding convincing real-life examples that support their theories. In fact, in the beginning this tradition relied more on cases and examples than on theoretical principles.
Several meme theorists have studied the evolution of religions or cults. A religion or sect is a set of memes which are transmitted together and reinforce each other. Certain memes in such a meme complex are hooks which make the entire set of beliefs propagate by providing an incentive for the believer to proselytize. Other memes in the complex make the host resistant to infection by rival beliefs. The belief that blind faith is a virtue has exactly this function. Other very powerful parts of the meme complex may be promises of rewards or punishments in the after-life (Paradise or Hell-fire) which make the host obey the commands of all the memes in the complex (Lynch 1996, Brodie 1996).
Examples of the unintended effects of cultural selection abound in memetic theory texts. One example is charity organizations spending most of their money on promotion:
"It is their effectiveness in attracting funding and volunteers that determines whether they can stay in existence and perform their functions [...] Given limited resources in the world and new organizations being introduced all the time, the surviving organizations must become better and better at surviving. Any use of their money or energy for anything other than surviving - even using it for the charitable purpose for which they were created! - provides an opening for a competing group to beat them out for resources." (Brodie 1996:158)
Another example of parasitic memes is chain letters which contain a promise of reward for sending out copies of the letter or punishment for breaking the chain (Goodenough & Dawkins 1994).
One reason why arbitrary memes can spread is people's gullibility. Ball (1984) argues that gullibility can actually be a (genetic) fitness advantage: believing the same as others do has the advantage of improved cooperation and of belonging to a group. People's tendency to follow any new fad is what Ball (1984) calls the bandwagon effect.
The stability of a meme complex depends on its ability to make its host resistant to rival beliefs. Beliefs in supernatural and invisible phenomena are difficult to refute, and hence quite stable. Secular belief-complexes will be stable only if they have a similar defense against disproof. Such a defense can be the belief that a grand conspiracy has covered up all evidence by infiltrating the most powerful social institutions (Dennett 1995).
While most meme theorists paint a fairly pessimistic picture of memes as parasitic epidemics, Douglas Rushkoff has presented a quite optimistic view of the memes that infest public media. He has studied how memes containing controversial or counter-cultural messages can penetrate mainstream media packaged as Trojan horses. This gives grass-roots activists and other people without money or political positions the power to influence public opinion and provoke social change (Rushkoff 1994). Rushkoff does not seem to worry that the public agenda is thus determined by who has the luck to launch the most effective media viruses rather than by who has the most important messages to tell.
The paradigm of meme theory is only gradually crystallizing into a rigorous science. Most of the publications are in the popular science genre with no exact definitions or strict formalism. Dennett does not even consider it a science because it lacks reliable formalizations, quantifiable results, and testable hypotheses, but he appreciates the insight it gives (1995). There is no common agreement about the definition of a meme. While most meme theorists consider the meme analogous to the biological genotype, with the phenotype paralleled by social behavior or social structure, William Benzon has it exactly the other way around (Benzon 1996, Speel & Benzon 1997).
The analogy with biology is often taken very far (e.g. Dennett 1990, 1995), which makes the theory vulnerable to criticism. Critics have argued that humans are intelligent and goal-seeking beings who are more influenced by logical, true, informative, problem-solving, economic, and well-organized ideas than by illogical, false, useless or harmful beliefs (Percival 1994).
Memetics will probably continue to be a soft science. Heyes and Plotkin have used cognitive psychology and brain neurology to argue that information is transformed when stored in human memory and may be altered under the influence of later events. This leads them to argue that memes cannot be distinct, faithful copies of particulate information-bits, but are blending and ever-changing clusters of information (Heyes & Plotkin 1989). The products of cultural or conceptual evolution cannot be systematized into distinct classes, and it is impossible to make a strict evolutionary taxonomy of cultures (Hull 1982, Benzon 1996).
Richard Brodie, a computer engineer, has divided memes into three fundamental classes: distinction memes that define names and categories, strategy memes that define strategies of behavior and theories about cause and effect, and association memes that make the presence of one thing trigger a thought or feeling about something else (Brodie 1996).
Brodie has paid particular attention to the selection criteria that make some memes spread more than others. Based on evolutionary psychology6, his theory says that memes have higher fitness when they appeal to fundamental instincts:
"Memes involving danger, food, and sex spread faster than other memes because we are wired to pay more attention to them - we have buttons around those subjects." (Brodie 1996:88)
In other words, the memes that push the right buttons in our psyche are the most likely to spread. The most fundamental buttons have already been mentioned: danger, food, and sex. Other buttons identified by Brodie include: belonging to a group, distinguishing yourself, obeying authority, power, cheap insurance, opportunity, investment with low risk and high reward, protecting children.
For example, the danger button is the reason why horror movies are popular. The cheap insurance button is what makes people knock on wood even when they claim not to be superstitious. And the low risk - high reward button is what makes people invest in lotteries even when the chance of winning is abysmally small (Brodie 1996).
Meme theorists have a peculiar penchant for self-referential theories. Scientific theories are memes, and the theory of memes itself is often called the meme meme or metameme. When meme theorists are discussing scientific memes, they usually pick examples from those sciences with which they are most familiar. This extraordinary scientific self-awareness has led many meme theorists to present their theories in the most popularized way with the deliberate, and often proclaimed, aim of spreading the meme meme most effectively (e.g. Lynch 1996, Brodie 1996).
2.9 Sociology and anthropology
The selection theory is quite unpopular among modern sociologists and anthropologists (Berghe 1990), and only a few express a positive view (e.g. Blute 1987). Opponents of the theory claim that there is no cultural analogy to genes and that the selection theory attributes too much importance to competition, whereas cooperation and conscious planning are ignored (Hallpike 1985, Adams 1991). The critics attribute to the adherents of the theory a more literal analogy with darwinism than they have ever stated, in order to make the theory look absurd. Biologists Pulliam and Dunford have characterized the gap between biology and the social sciences in this way:
"It seems to us that decades of development in intellectual isolation from each other have allowed biological and social scientists to diverge in interests, ideas and especially language to the point where the two types of scientists now find it painfully difficult to communicate." (Pulliam & Dunford 1980)
This is no exaggeration. Many social scientists have rejected sociobiology, and for good reasons. The following is an excerpt from a radio-transmitted debate in connection with Lumsden and Wilson's book: Genes, Mind and Culture (1981):
John Maddox: "Should it be possible, or should it not be possible, on the basis of your theory, to be able to predict which people go to the back door and which to the front door when they go to visit John Turner in Leeds?"
Edward O. Wilson: "If there can be demonstrated substantial genetic variation in some of the epigenetic rules that produce strong bias, yes. But that is difficult to pin down at this very early, very primitive level of our understanding of human behavioral genetics." (Maddox et al., 1984).
When Wilson, who is regarded as the founder and foremost representative of sociobiology, can come up with so absurd a biological reductionism, it is no wonder that most sociologists and anthropologists take no interest in sociobiology but instead develop their own theories. Many social scientists depict society as an autonomous system in order to avoid biological and psychological reductionism (Yengoyan 1991).
There are, nevertheless, significant similarities between biological and sociological theories of culture. The French sociologist Pierre Bourdieu has studied the reproduction of social structures in the educational system (Bourdieu & Passeron 1970), and the British cultural sociologist Raymond Williams has elaborated further on this theory and demonstrated that cultural reproduction is subject to a conscious selection:
"For tradition ('our cultural heritage') is self-evidently a process of deliberate continuity, yet any tradition can be shown, by analysis, to be a selection and reselection of those significant received and recovered elements of the past which represent not a necessary but a desired continuity. In this it resembles education, which is a comparable selection of desired knowledge and modes of learning and authority." (Williams, R. 1981:187)
Williams has brilliantly explained how different cultural forms are connected with different degrees of autonomy and freedom, and hence unequal possibilities for selection. Williams analyzes cultural innovation, reproduction, and selection, but oddly enough, he never combines these three concepts into a coherent evolutionary theory, and he omits any reference to evolutionary scientists (Williams, R. 1981). This omission is probably due to a resistance to overstated generalizations and, quite likely, a fear of being associated with social darwinism.
The philosopher Rom Harré has theorized about social change from a mainly sociological paradigm. He discussed whether innovations are random or not, and hence whether social evolution can be characterized as darwinian or lamarckian. Harré has made a distinction between cultural information and the social practice it produces, but he has not gone into detail about the selection process and its mechanisms (Harré 1979, 1981).
Sociologist Michael Schmid has proposed a reconstruction of the theory of collective action based on selectionist thought, but with few references to biology. He argues that collective actions regulated by social rules have consequences which tend to stabilize or destabilize these rules. This is an evolutionary mechanism which Schmid calls internal selection, because all factors are contained within the social system. The selective impact of external resources on the stability of social regulations is considered external selection (Schmid 1981, 1987; Kopp & Schmid 1981). His theory has had some influence on social systems theory which in turn has influenced sociocybernetics (Luhmann 1984, Zouwen 1997).
2.10 Attempts to make a synthesis of sociobiology and anthropology
It seems obvious to try to fit sociobiological theory into anthropology, and there have of course been several attempts along this line. Unfortunately, those attempting to do so have seldom been able to escape the limitations of their old paradigms, and the results have rarely been very convincing.
In 1980, the biologists Ronald Pulliam and Christopher Dunford published a book in the popular science genre with this purpose. Despite their intention to make the book interdisciplinary, they display a rather limited knowledge of the humanistic sciences.
David Rindos, who is a botanist as well as an anthropologist, has written several articles about cultural selection (1985, 1986). His articles contain some errors and misconceptions which, for the sake of brevity, I will not mention here, but instead refer to Robert Carneiro's criticism (1985).
In an article by anthropologist Mark Flinn and zoologist Richard Alexander (1982), the theory of coevolution is dismissed by rejecting the culture/biology dichotomy and the difference between cultural and genetic fitness. Their argumentation has been rebutted by Durham (1991) and others.
Ethologist Robert Hinde has likewise attempted to bridge the gap between biology and sociology, but his discussions largely remain within the ethological paradigm. Cultural selection theory is cursorily mentioned, but cultural fitness is not discussed (Hinde 1987).
Sociologist Jack Douglas has combined a special branch of social science, namely the sociology of deviance, with the theory of cultural selection. By combining sociology, sociobiology, and psychology, Douglas has created a model for social change, where social rules are seen as analogous to genes, and deviations from the rules play the same role in social evolution as mutations do in genetic evolution. Douglas' theory addresses the question of how social deviations arise, and how people overcome the shame that deviation from the rules entails (Douglas, J. 1977).
Archaeologist Patrick Kirch has presented a fairly detailed theory of cultural selection, and unlike most other researchers in selection theory, he has supported his theory thoroughly with specific examples. As mentioned on page 40, Kirch does not ascribe much importance to conscious or psychological selection, but regards the survival of the individual or the group as the ultimate selection criterion. Cultural phenomena which have no obvious importance for survival, such as art or play, are regarded as random and selectively neutral (Kirch 1980).
Like Patrick Kirch, anthropologist Michael Rosenberg emphasizes that cultural innovations are not necessarily random, but are often the result of purposeful reactions to a stressful situation such as overpopulation. In particular he contends that agriculture initially arose as a reaction to overpopulation:
"... an allocation model proposes that in certain types of habitats, hunter-gatherers will resolve the symptoms of population pressure-induced stress through the voluntary or involuntary allocation of standing wild resources. It further proposes that, in a still more limited number of cases (given the institution of territorial systems), the consequences of growing population pressure-induced stress will be perceived as being most readily mitigated by food production, rather than by warfare or some other behavior intended to address these proximate consequences. Finally, it also proposes that it is under precisely such circumstances that sedentism, food storage, and other behaviors thought integral to the process develop to be selected for." (Rosenberg, M. 1990).
The proficiency of the above-mentioned scientists notwithstanding, I maintain that their attempts at forming a synthesis of the different sciences have so far been insufficient. Not until recently has a fairly sound combination of sociology and sociobiology been presented. In 1992, the two sociologists Tom Burns and Thomas Dietz published a theory of cultural evolution based on the theory of the relationship between individual agency and social structure. Culture is defined as a set of rules which is established, transmitted, and used selectively. Burns and Dietz explain how an existing social structure sets limits to what kinds of thoughts and actions are possible. An implicit selection lies in the requirement that actions and ideas must be compatible with the social structure, and that different sub-structures must be mutually compatible. According to Burns and Dietz, cultural selection proceeds in two steps: a greater or lesser part of the available resources is allocated to different actors or groups according to certain rules; these resources can subsequently be utilized to maintain and reinforce the group or institution concerned and its rules. Of course Burns and Dietz also mention the obvious selection that takes place through the exercise of power, as well as the limitations constituted by the material environment and the ecology (Burns & Dietz 1992). Although these two sociologists have been better than most other scientists at integrating different paradigms, their theory has been criticized for being reductionist and for not paying enough attention to certain important parts of social life (Strauss 1993).
Political scientist Ann Florini has recently applied selection theory to the development of international norms. According to her model, three conditions must be met for an international norm to spread: firstly, the norm has to gain prominence, usually by being promoted by a norm entrepreneur; secondly, it must be compatible with preexisting norms; and thirdly, it must fit the environmental conditions. She argues that new norms are mainly adopted through emulation of influential actors, rather than through a rational evaluation of all available alternatives (Florini 1996).
2.11 Social psychology
Studies of cultural selection from the point of view of social psychology and cognitive psychology have been too few to form a separate research tradition. This is clearly a neglected area of research.
The distortion of memes through imperfect communication between humans has been explained by Heyes & Plotkin (1989) and Sperber (1990, 1996). This is seen as an important difference between genetic and cultural evolution: cultural information is generally transformed or modified each time it is copied, and perfect copying is the exception rather than the rule. This is very unlike genetic evolution, where the copying of genes is as a rule perfect, and mutation is the exception. In Sperber's model, cultural representations are generally transformed each time they are copied, and this transformation is mostly in the direction of the representation that is most psychologically attractive, most compatible with the rest of the culture, or easiest to remember. Such an 'optimal' representation is called an attractor, and the repeated process of distortion through copying is seen as a trajectory with random fluctuations tending towards the nearest attractor (Sperber 1996).
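Sperber's attractor idea lends itself to a simple numerical illustration. The sketch below is my own toy rendering of the verbal model, not Sperber's formalism; the attractor positions, the strength of the pull, and the noise level are arbitrary choices. A representation is coded as a number between 0 and 1, and each act of transmission copies it imperfectly: random noise plus a pull towards the nearest of a few culturally 'attractive' versions. After repeated transmission the representations cluster around the attractors even though no single copy is ever faithful.

```python
import random

# Toy sketch of Sperber's "attractor" idea (my own illustration, not his model).
# A cultural representation is coded as a number in [0, 1].  Each transmission
# distorts it: random noise plus a pull towards the nearest attractor.

ATTRACTORS = [0.2, 0.8]   # hypothetical psychologically attractive versions

def transmit(x, pull=0.3, noise=0.05):
    nearest = min(ATTRACTORS, key=lambda a: abs(a - x))
    x = x + pull * (nearest - x) + random.gauss(0.0, noise)
    return min(1.0, max(0.0, x))   # keep the value in range

if __name__ == "__main__":
    random.seed(1)
    versions = [random.random() for _ in range(10)]   # ten independent starting versions
    for _ in range(30):                               # thirty rounds of re-transmission
        versions = [transmit(x) for x in versions]
    print([round(x, 2) for x in versions])            # values cluster near 0.2 and 0.8
```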
While other scientists present a simple model of memes being either present or not present in a human brain, Dan Sperber emphasizes that there are different ways of holding a belief. He makes a distinction between intuitive beliefs, which are the product of spontaneous and unconscious perceptual and inferential processes, and reflective beliefs, which are believed by virtue of second-order beliefs about them. A claim that is not understood but nevertheless believed because it comes from some authority is an example of a reflective belief. The commitment to a belief can vary widely, from loosely held opinions to fundamental creeds, from mere hunches to carefully thought out convictions (Sperber 1990).
Psychological and cognitive factors may have an important influence on the selection of cultural information. The following factors are mentioned by Sperber: the ease with which a particular representation can be memorized; the existence of background knowledge in relation to which the representation is relevant; and a motivation to communicate the information (Sperber 1990).
2.12 Economic competition
A well-known analogy to darwinian evolution is economic competition between enterprises. This analogy has been explored most notably by the two economists Richard Nelson and Sidney Winter, who have developed a useful model of economic change. Their theory, which they call evolutionary, is contrasted with traditional economic theory, called orthodox, by its better ability to cope with technological change. Nelson and Winter argue that technological innovation and progress play an important role in modern economic growth but are inadequately dealt with in orthodox economic theory. Different firms have different research strategies and different amounts of resources to invest in research and development, and hence unequal chances of making technological innovations that improve their competitiveness. Nelson and Winter regard knowledge as cumulative, and the process of innovation is therefore described as irreversible.
The so-called orthodox economic theory is criticized for its heavy reliance on the assumption that firms behave in the way that optimizes their profit. Finding the optimal strategy requires perfect knowledge and computing skills. It is argued that knowledge is never perfect and research is costly, and therefore the theoretical optimum may never be found. In contrast to orthodox economic theory, Nelson and Winter argue that economic equilibrium may exist in a market where nothing is optimal, and that many firms may stick to their old routines unless external factors provoke them to search for new strategies:
"A historical process of evolutionary change cannot be expected to "test" all possible behavioral implications of a given set of routines, much less test them all repeatedly [...] There is no reason to expect, therefore, that the surviving patterns of behavior of a historical selection process are well adapted for novel conditions not repeatedly encountered in that process [...] In a context of progressive change, therefore, one should not expect to observe ideal adaptation to current conditions by the products of evolutionary processes." (Nelson & Winter 1982:154)
Nelson and Winter (1982) have developed their evolutionary theory of economics to a high level of mathematical refinement in order to explain important aspects of economic growth as fueled by technological advance better than orthodox economic theory can.
A more general theory of the evolution of business and other organizations has been published by sociologist Howard Aldrich (1979), based on the general formula of variation, selection, and retention. Unlike Nelson and Winter who emphasize goal-directed problem solving as an important source of variation, Aldrich underplays planned innovations and attaches more importance to random variations. Mechanisms of selection include selective survival of whole organizations, selective diffusion of successful innovations between organizations, and selective retention of successful activities within an organization.
The effect of the environment is an important element in Aldrich's theory. He classifies environments according to several dimensions, such as capacity, homogeneity, stability, predictability, concentration versus dispersion of resources, etc. Different combinations of these parameters can provide different niches to which an organization may adapt (Aldrich 1979).
In a long-term perspective, economic growth may not be steady but rather characterized by periods of relative structural stability and inertia, separated by rapid transitions from one structural regime to another. This is explained by Geoffrey Hodgson (1996) as analogous to the punctuated equilibria model of biological evolution (see chapt. 3.9). A similar theory has been applied to the development of organizations in economic competition. A firm's ability to adapt to changes in the market situation may be impeded by memetic constraints within the organization just like the adaptability of a biological species may be impeded by genetic constraints (see chapt. 3.9). Overcoming such constraints produces a leap in the development of the firm resembling the process of punctuated equilibria in biological evolution (Price 1995).
2.13 Universal selection theory
Selection theory has been found useful for explaining many different phenomena in the world. Several philosophers have therefore been interested in studying similarities between different classes of phenomena which all depend on the same neo-darwinian formula: blind variation and selective retention (Cziko 1995).
Biological and cultural evolution are obvious examples, but ontogenetic growth and individual learning have also been shown to involve such processes. A particularly convincing example is immunology: an organism's development of antibodies involves a process which is remarkably similar to biological evolution (Cziko 1995). Examples from the inorganic world are more subtle: in the growth of a crystal, each new molecule wanders randomly about until by chance it hits a fitting place in the crystal lattice. A molecule in a fit position is more likely to be retained than a molecule in an unfit position. This explains how the highly ordered structure of a crystal or a snowflake is generated.
You may notice that the neo-darwinian formula for biological evolution has been modified here: the word blind has been substituted for random, and reproduction has been changed to retention. These modifications have been made for a reason. In cultural evolution, for example, the variation is seldom completely random. Cultural innovations are often goal-directed although still tentative. The philosophers meet the criticism that variation may be non-random by saying that a new innovation is not guaranteed to be successful, and hence can be said to be blind to the outcome of the experimental variation (Campbell 1974). This modification has not stopped the criticism, since innovations may be goal-directed and intelligent to such a degree that the outcome can be predicted with a reasonably high degree of reliability (Hull 1982).
The use of the word retention, rather than reproduction, implies that the selected character is preserved, but not necessarily multiplied. In the crystal-growth example, each new molecule has to go through the same process of blind-variation-and-selective-retention rather than copying the knowledge from its predecessors. This mechanism is far less effective than biological evolution, where each new generation inherits the accumulated effect of all prior selections. The new generations do not have to wait for the successful mutations to be repeated. This is a fundamental difference, which many philosophers fail to recognize.
Campbell has introduced a new branch of universal selection theory called evolutionary epistemology. He argues that any adaptation of an organism to its environment represents a form of knowledge of the environment. For example, the shapes of fish and whales represent a functional knowledge of hydrodynamics. The process of blind-variation-and-selective-retention produces such knowledge in a process resembling logical induction. Campbell claims that any increase in fitness of a system to its environment can only be achieved by this process. His theory entails three doctrines of which this is the first one.
Campbell's argument is symmetric: not only does he say that adaptation is knowledge, he also says that knowledge is adaptation. This means that all human knowledge ultimately stems from processes of blind-variation-and-selective-retention. Hence the term evolutionary epistemology.
There are many processes which bypass the fundamental selection processes. These include selection at higher levels, feedback, vicarious selection, etc. Intelligent problem solving is an obvious example of such a vicarious selection mechanism: it is much more effective and less costly than the primitive processes based on random mutation and selective survival.
But all such mechanisms, which bypass the lower-level selection processes, are themselves representations of knowledge, ultimately achieved by blind-variation-and-selective-retention. This is Campbell's second doctrine.
The third doctrine is that all such bypass mechanisms also contain a process of blind-variation-and-selective-retention at some level of their own operation. Even non-tentative ways of acquiring knowledge, such as visual observation or receiving verbal instruction from somebody who knows, are thus processes involving blind-variation-and-selective-retention according to Campbell's third doctrine (Campbell 1974, 1990).
Allow me to discuss this controversial claim in some detail. The most deterministic and error-free knowledge-gaining process we can think of is using a computer to get the result of a mathematical equation. Where does a modern computer get its error-free quality from? From digitalization. A fundamental digital circuit has only two possible stable states, designated 0 and 1. Any slight noise or deviation from one of these states will immediately be corrected with a return to the nearest stable state. This automatic error-correction is indeed a process of selective retention.
Going down to an even more fundamental level, we find that the computer circuits are made of transistors, and that the electronic processes in a transistor involve blind-variation-and-selective-retention of electrons in a semiconductor crystal.
This argument is seemingly a defense of Campbell's third doctrine. But only seemingly so! My project here has not been to defend this doctrine but to reduce it to absurdity. Campbell tells us that the translation of DNA into proteins involves blind-variation-and-selective-retention. What he does not tell us is that this applies to all chemical reactions. In fact, everything that molecules, atoms, and sub-atomic particles do can be interpreted as blind-variation-and-selective-retention. And since everything in the Universe is made of such particles, everything can be said to rely on blind-variation-and-selective-retention.
The problem with the claim that advanced methods of acquiring knowledge involve blind-variation-and-selective-retention is that it is extremely reductionistic. The third doctrine involves the common reductionist fallacy of ignoring that a complex system can have qualities which the constituent elements do not have. At the most fundamental level, everything involves blind-variation-and-selective-retention, but this may be irrelevant for an analysis of the higher-level functioning.
I recognize that Campbell's first and second doctrines provide a promising solution to the fundamental philosophical problem of where knowledge comes from and what knowledge is, but I find the third doctrine so reductionistic that it is irrelevant.
Undeniably, however, the general darwinian formula represents an excellent mechanism for acquiring new knowledge. This mechanism is utilized in computerized methods for solving difficult optimization problems with many parameters. The principle, which is called evolutionary computation, involves computer simulation of a population of possible solutions to a given problem. New solutions are generated by mutation and sexual recombination of previous solutions, and each new generation of solutions is subjected to selection based on their fitness (Bäck et al. 1997).
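As a minimal sketch of what such a computation looks like in practice (a generic toy example of my own, not the specific algorithms described by Bäck et al.; the population size, string length, and mutation rate are arbitrary choices), the following program evolves bit strings towards a trivial optimum by repeated selection, recombination, and mutation:

```python
import random

# Minimal sketch of evolutionary computation (a generic toy, not the specific
# algorithms of Bäck et al. 1997).  Candidate solutions are bit strings; the
# fitness function simply counts ones, and each generation applies selection,
# recombination, and mutation.

def fitness(bits):
    return sum(bits)

def mutate(bits, rate=0.01):
    return [b ^ 1 if random.random() < rate else b for b in bits]

def recombine(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def evolve(pop_size=30, length=40, generations=60):
    pop = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]          # keep the fitter half
        children = [mutate(recombine(random.choice(parents), random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=fitness)

if __name__ == "__main__":
    random.seed(0)
    print(fitness(evolve()), "out of 40")   # typically at or near the optimum of 40
```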
This chapter has not been an account of the history of evolutionary ideas, but a study of how the principle of selection has been used to explain cultural change. Although the principle of selection is often found in evolutionary thinking, it has sometimes played only a minor role, since traditional evolutionism has been more concerned with the direction of evolution than with its mechanism (Rambo 1991). This is one of the reasons why evolutionism has often been criticized for being teleological. The extensive criticism of evolutionism has only been briefly reported here.
Nineteenth-century evolutionists lacked a clear distinction between organic and social inheritance because they did not know Mendel's laws of inheritance. 'Race' and 'culture' were synonymous to them. The principle of the survival of the fittest meant that evolution depended on the strongest individuals winning over the weaker ones. Since this process was regarded as natural and no distinction was made between evolution and progress, the logical consequence of this philosophy was a laissez-faire policy where might made right. In an extreme ethnocentrism, the so-called social darwinists believed that their own race and culture were superior to everybody else's and that it was therefore their right and duty to conquer the entire world.
There was a strong opposition between social darwinism and socialism, because the former philosophy assumed that weakness is inborn and must naturally lead to an unkind fate, whereas the socialists believed that poverty and weakness are caused by social factors and ought to be remedied.
In Herbert Spencer's philosophy, all kinds of evolution were analogous: the Universe, the Earth, the species, the individuals, and society - all were evolving through one and the same process. This theory has since been rejected, and it is unfortunate that such diverse kinds of change are still designated by the same word: 'evolution'. Spencer compared society with an organism, and the different institutions in society were paralleled with organs. While this metaphor, which has been quite popular in social science, may be appropriate in connection with a static model of society such as functionalism, it may lead to serious fallacies when social change is being studied. The organism analogy implies that a theory of social change is modeled on individual development rather than on the evolution of species. In the embryonic development of a body, everything is predetermined and the cause of change is inherent in the body which is changing. When transferred to the evolution of society, this line of thought leads to a deterministic, unilinear, and teleological philosophy7. The idea of an analogy between different kinds of evolution has recently been revived in universal selection theory.
The words social darwinism, determinism, unilinearity, and teleology were invectives used mainly by the opponents of evolutionism. These concepts were so vaguely defined that the critics could include any theory under these headings, while the proponents of evolutionism could just as easily demonstrate that their theories were indeed not deterministic, teleological, etc.
The debate was - and still is - dominated by conflicts between incompatible worldviews and views of human nature. The controversies over nature versus nurture, biology versus culture, determinism versus free will, etc. have made it impossible to reach an agreement, and the conflict between different paradigms has so far lasted more than a century. Both sides have exaggerated their positions into extreme reductionism, which has made them vulnerable to criticism. Adherents of the philosophy of free will wanted an idiographic description, whereas the biologically oriented scientists demanded a nomothetic representation.
Most social evolutionists were more interested in describing the direction or goal of evolution than its causes. Many failed to specify the unit of selection, the mechanism of selection, or the mode of reproduction, and only a few distinguished between genetic and cultural fitness. Their theories therefore had little explanatory power, and in particular lacked any explanation of why evolution should go in the claimed direction.
The polarization of opinions did not decrease when sociobiologists took the lead in the 1970s. With an excessive use of mathematical formulae, the theoreticians distanced themselves more and more from the real-world phenomena they were supposed to describe, and many simplifications and dubious assumptions became necessary in order to make the models mathematically tractable. The mathematical models include so many parameters that it has become impossible to determine the numerical values of them all, and it is therefore only possible to draw qualitative and conditional conclusions despite the intense focus on quantitative models. Of course the mathematical language has also widened the communication gap between sociobiologists and anthropologists.
Cultural selection theory has never yet been a separate discipline, but has been investigated by scientists from several different branches of science, such as philosophy, economics, sociology, anthropology, social psychology, linguistics, sociobiology, etc. The consequence of severe communication gaps between the different sciences and of neglectful literature searches has been that the same ideas have been forgotten and reinvented several times without much progress. This is the reason why primitive and antiquated theories still pop up. Many scientists fail to acknowledge the fundamental differences between genetic and cultural selection (e.g. Ruse 1974; Hill 1978; Harpending 1980; van Parijs 1981; Mealey 1985; Russell & Russell 1982-1992, 1990), and some of these theories are even more insufficient than Leslie Stephen's neglected theory from 1882.
The latest development is the school of memetics which is a much less exact discipline than sociobiology. The lack of rigor and sophistication in memetics has often been deplored, but the softness of this paradigm may help bridge the gap between the biological and humanistic sciences in the future.
In connection with the theory of cultural selection, it has often been stated that knowledge is accumulated. It is an incredible paradox that this very theory itself has deviated so much from this principle when viewed as a case in the history of ideas. The theories of social change have followed a dramatic zigzag course, where every new theoretical fad has rejected the previous one totally rather than modifying and improving it; and where the same ideas and principles have been forgotten and reinvented again and again through more than a century.
2. Brunetière's book L'Évolution des Genres dans l'Histoire de la Littérature (1890) was planned as a work in four volumes, of which volume two was to describe the general principles of the evolution of literature. Although the first volume was reprinted in several editions, the planned subsequent volumes were never published.
3. Italics are in the original. This also applies to the succeeding citations.
4. The contributions in this workshop have been published in Ethology and Sociobiology, vol. 10, no. 1-3, 1989, edited by Jerome H. Barkow.
5. An introduction to evolutionary psychology can be found in Barkow et al. 1992.
6. See above.
7. Spencer's somewhat inconsistent attitude to this question has often been debated. See Haines (1988).
An argument uses premises to reach a conclusion, but we can’t just accept that every valid argument proves the conclusion to be true. If an argument has a valid form, we need to know that the premises are true before we can know the conclusion is true. We rarely know for certain that the premises of an argument are true. Instead, we do our best at justifying the premises. One way to do this is to provide evidence—reasons we should believe something to be likely true or accurate. Many people equate “evidence” with “observation,” but there could be other reasons to accept beliefs as well. I will discuss three types of evidence:
- Observation
- Introspective experience
- Noninferential justification
Observation or “empirical evidence” is evidence based on experience. What we perceive with the senses is observation. For example, I can see a cat on a mat, feel my hands, taste my food, and hear a barking dog. Observation is the main source of scientific evidence, but it’s also a source of much of our common knowledge. Consider the following argument:
- Socrates is a man.
- All men are mortal.
- Therefore, Socrates is mortal.
This argument is valid, but is it sound? Do we know that the conclusion is true because the premises are true? I think so. How do we know Socrates is a man? There were eyewitnesses who described him and his behavior. This information was recorded in texts. How do we know all men are mortal? Every man we have ever observed has died. There are no men we know of who have lived longer than 150 years. Both premises are justified using observation.
Not all observation is direct. Sometimes scientists use tools, such as microscopes, to enhance our ability to observe the world around us. In the argument above we rely on the observations of others. Additionally, we often rely on the experiences of others. I haven’t known enough men to know that they are all mortal on my own, but we figure that if there were any immortal men, then they would have been discovered by now; and such a discovery would have been shared with the rest of us.
Observation is “theory-laden” in the sense that all observation must be interpreted and such interpretations are based on assumptions. For example, my observation that I have two hands is based on the assumption that I’m not sleeping, that an external world exists other than myself, and that the experiences I have are best understood with the assumption that my two hands are causing them. Without interpretation I would have experiences of colors, movement, feelings, and so on; but I couldn’t know that I have two hands without assuming that such colors, movement, and feelings are based on an external reality containing solid objects and so on.
Introspection—an examination of what it’s like to have experiences—involves observations without a concern for objects outside of ourselves. I have introspective experiences of my thoughts and feelings, and these experiences aren’t merely based on sight, sound, touch, taste, or smell. These experiences aren’t of anything outside of myself or part of an external world. We tend to assume these experiences are “within the mind” and are “psychological.”
Introspection gives us access to qualia—the “what it’s like” of our experiences. My experience of pain is a clear example of a qualia. One thing that’s important about pain to me is what it’s like to experience it.
Some of my introspection seems quite unlike perception. Sometimes I have thoughts that aren’t put into words and don’t seem like anything I can perceive with the five senses. However, many introspective experiences are related to perception. When I see a green frog, I think I’m using my eyes to experience something outside of myself that’s part of an external world. However, the green color of the frog looks a certain way to me that might not be part of an external world. Each color has a qualia, a way it looks to us that’s not merely wavelengths of light reflecting off of objects. The qualia of each color is what differentiates its look from that of every other color, and many people have a “favorite color” based on how it looks to them.
Consider the following argument that makes use of introspective experience:
- Pain is bad.
- We usually shouldn’t make bad things happen.
- Kicking and punching people causes pain.
- Therefore, we usually shouldn’t kick and punch people.
We know that “pain is bad” because we experience it that way. The second premise that “we usually shouldn’t make bad things happen” is more difficult to justify, but it’s a common assumption among people. The third premise that “kicking and punching causes people pain” is quickly discovered by people through observation after they are kicked and punched by others.
We often say that a belief is “intuitive” (e.g. solid objects exist) or counterintuitive (e.g. solid objects don’t exist), and what’s intuitive is often taken to be justified and what’s counterintuitive is taken to be unjustified. We often call intuitive beliefs “common sense” but not all intuitive beliefs are “common knowledge.” Intuitive justification can require maturity and understanding that most people fail to attain. Prejudice, having a hunch, or “women’s intuition” is not intuition of the philosophical variety, although we might often confuse them with the philosophical variety.
What exactly it means to say something is “intuitive” isn’t entirely clear, and there are at least three different forms of intuition: (1) the justification for a belief that’s hard to articulate in words, (2) assumptions we have found successful, and (3) noninferential justification. I will discuss each of these.
1. Justification that’s difficult to articulate
It’s often impossible to fully articulate why our beliefs are justified. We think we know some of our beliefs are true with a high degree of confidence, even if we can’t fully articulate how we know the belief is true, and even if we can’t fully justify our belief to others using argumentation. For example, “1+1=2” is intuitive, but it’s hard for many of us to prove it’s true and fully explain how we know it’s true. Intuitive beliefs could be based on any form of evidence: Observation, introspection, successful assumptions, noninferential justification, etc. What we know from these sources of justification are not necessarily easy to fully understand or communicate to others.
Although intuitive beliefs are difficult to prove to be true through argumentation, many philosophers try to justify them using arguments. This might actually just prove to other people that they share our intuitions. Consider the following intuitive argument:
- Imagine that what always happens in the past isn’t likely to happen in the future because the laws of nature will change. In that case we have no reason to think (a) the sun will rise tomorrow or (b) eating lots of fatty foods tomorrow will be unhealthy.
- However, we know that the sun will probably rise tomorrow and that eating lots of fatty foods tomorrow will probably be unhealthy.
- Therefore, what always happened in the past will likely happen in the future because the laws of nature will probably stay the same.
We often generalize about what happened in the past to predict the future, but it’s difficult to prove that the future will ever be like the past—even though we often think we know it will be with a high degree of confidence. The first premise emphasizes the high confidence we have that the sun will rise tomorrow and that eating lots of fatty foods tomorrow will be unhealthy, in order to show how counterintuitive it is to believe that what always happened in the past probably won’t happen in the future: such a belief requires us to reject certain beliefs we think we know are true.
2. Successful assumptions
Some intuitive beliefs could be successful assumptions, similar to how scientists use provisional “working hypotheses” that seem to explain our observations until they are proven false. (e.g. Scientists at one point assumed that the Sun revolves around the Earth.) In that case it’s hard to explain how justified our intuitive belief is because it’s hard to explain to people all the ways the belief has proven to be successful. It could be that our assumption that the laws of nature will still be the same in the future is a successful assumption of this kind, and it seems highly successful. Rejecting this assumption would make living our lives impossible. We could never assume that food will be nutritious or that money could still buy goods tomorrow, yet we continually find these assumptions to be successful.
If a belief is a successful assumption, then we can explain how the belief is justified based on successful risky predictions, the lack of viable alternatives, and the possibility of attaining counter-evidence. Our assumption that the future laws of nature will be the same has enabled us to make every successful risky prediction we’ve ever made, the rejection of such a belief seems absurd rather than a serious alternative, and we could imagine counterevidence against such a belief (e.g. if the law of gravity stopped functioning tomorrow).
On the other hand, the assumption that I can’t find my keys because a ghost moves them lacks support from risky predictions, and it fails to be as viable as the alternatives (e.g. maybe I just forgot where I put them).
3. Noninferential justification
Noninferential justification is evidence that we can understand without an argument. One possible source of noninferential justification is “self-evidence.” Some self-evident beliefs could be true by definition, such as “all bachelors are unmarried” and others could be justified based on the concepts involved. Perhaps anyone who understands what the concept of pain is will then understand that “pain is bad.” Many philosophers agree that what’s true by definition can be known noninferentially, but it’s much more controversial to think that conceptual knowledge can be justified using noninferential evidence beyond our definitions.
Noninferential justification is notoriously difficult to communicate to other people, but many mathematical concepts like “infinity” do seem to be plausibly understood in noninferential ways.
Arguments without evidence are not informative. Whenever we provide arguments, we need to consider how we know something is probably true or justified. If this is difficult, then it is likely that our conclusion is either unjustified or that we have intuitive evidence for it. | http://ethicalrealism.wordpress.com/2011/05/31/three-forms-of-evidence/ | 13 |
20 | DNA replication is a biological process that occurs in all living organisms and copies their DNA. DNA replication during mitosis is the basis for biological inheritance. The process of DNA replication starts when one double-stranded DNA molecule produces two identical copies of the molecule. Each strand of the original double-stranded DNA molecule serves as template for the production of the complementary strand, a process referred to as semiconservative replication. Cellular proofreading and error-checking mechanisms ensure near perfect fidelity for DNA replication.12
In a cell, DNA replication begins at specific locations, or origin of replication, in the genome.3 Unwinding of DNA at the origin, and synthesis of new strands, forms a replication fork. A number of proteins are associated with the fork and assist in the initiation and continuation of DNA synthesis. Most prominently, DNA polymerase synthesizes the new DNA by adding matching nucleotides to the template strand.
DNA replication can also be performed in vitro (artificially, outside a cell). DNA polymerases isolated from cells and artificial DNA primers can be used to initiate DNA synthesis at known sequences in a template DNA molecule. The polymerase chain reaction (PCR), a common laboratory technique, cyclically applies such artificial synthesis to amplify a specific target DNA fragment from a pool of DNA.
DNA usually exists as a double-stranded structure, with both strands coiled together to form the characteristic double-helix. Each single strand of DNA is a chain of four types of nucleotides. Nucleotides in DNA contain a deoxyribose sugar, a phosphate, and a nucleobase. The four types of nucleotide correspond to the four nucleobases adenine, cytosine, guanine, and thymine, commonly notated as A, C, G and T. These nucleotides form phosphodiester bonds, creating the phosphate-deoxyribose backbone of the DNA double helix with the nucleobases pointing inward. Nucleotides (bases) are matched between strands through hydrogen bonds to form base pairs. Adenine pairs with thymine (two hydrogen bonds), and cytosine pairs with guanine (three hydrogen bonds) because a purine must pair with a pyrimidine.
DNA strands have a directionality, and the different ends of a single strand are called the "3' (three-prime) end" and the "5' (five-prime) end"; by convention, a sequence is named in the 5' to 3' direction. The strands of the helix are antiparallel: where one strand runs 5' to 3', the opposite strand runs 3' to 5'. These terms refer to the carbon atom in deoxyribose to which the next phosphate in the chain attaches. Directionality has consequences in DNA synthesis, because DNA polymerase can synthesize DNA in only one direction by adding nucleotides to the 3' end of a DNA strand.
The pairing of bases in DNA through hydrogen bonding means that the information contained within each strand is redundant. The nucleotides on a single strand can be used to reconstruct nucleotides on a newly synthesized partner strand.4
DNA polymerases are a family of enzymes that carry out all forms of DNA replication.6 However, a DNA polymerase can only extend an existing DNA strand paired with a template strand; it cannot begin the synthesis of a new strand. To begin synthesis, a short fragment of DNA or RNA, called a primer, must be created and paired with the template DNA strand.
DNA polymerase then synthesizes a new strand of DNA by extending the 3' end of an existing nucleotide chain, adding new nucleotides matched to the template strand one at a time via the creation of phosphodiester bonds. The energy for this process of DNA polymerization comes from two of the three total phosphates attached to each unincorporated base. (Free bases with their attached phosphate groups are called nucleoside triphosphates.) When a nucleotide is being added to a growing DNA strand, two of the phosphates are removed and the energy produced creates a phosphodiester bond that attaches the remaining phosphate to the growing chain. The energetics of this process also help explain the directionality of synthesis—if DNA were synthesized in the 3' to 5' direction, the energy for the process would come from the 5' end of the growing strand rather than from free nucleotides.
In general, DNA polymerases are extremely accurate, making less than one mistake for every 10^7 nucleotides added.7 Even so, some DNA polymerases also have proofreading ability; they can remove nucleotides from the end of a strand in order to correct mismatched bases. If the 5' nucleotide needs to be removed during proofreading, the triphosphate end is lost. Hence, the energy source that usually provides energy to add a new nucleotide is also lost.
The rate of DNA replication in a living cell was first measured as the rate of phage T4 DNA elongation in phage-infected E. coli.8 During the period of exponential DNA increase at 30°C, the rate was 749 nucleotides per second. The mutation rate per base pair per replication during phage T4 DNA synthesis is 1.7 × 10^−8.9 Thus DNA replication is both impressively fast and accurate.
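As a rough sanity check on these figures, the short sketch below combines the quoted elongation rate and mutation rate. The genome length used (about 169,000 base pairs for phage T4) is an assumption added here for illustration; it is not stated in the text above.

genome_bp = 169_000          # ASSUMED phage T4 genome length, for illustration only
rate_nt_per_s = 749          # elongation rate quoted above (nucleotides per second)
mutation_rate = 1.7e-8       # mutations per base pair per replication, quoted above

seconds_per_fork = genome_bp / rate_nt_per_s          # time for one fork to copy the genome
expected_mutations = genome_bp * mutation_rate        # expected new mutations per genome copy
print(f"single-fork copy time: ~{seconds_per_fork / 60:.1f} minutes")
print(f"expected mutations per genome copy: ~{expected_mutations:.4f}")

On these numbers a single fork would copy the genome in a few minutes and introduce, on average, only a few mutations per thousand genome copies, which is the sense in which replication is both impressively fast and accurate.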
DNA Replication, like all biological polymerization processes, proceeds in three enzymatically catalyzed and coordinated steps: initiation, elongation and termination.
For a cell to divide, it must first replicate its DNA.10 This process is initiated at particular points in the DNA, known as "origins", which are targeted by proteins that initiate DNA synthesis.3 Origins contain DNA sequences recognized by replication initiator proteins (e.g., DnaA in E. coli and the Origin Recognition Complex in yeast).11 Sequences used by initiator proteins tend to be "AT-rich" (rich in adenine and thymine bases), because A-T base pairs have two hydrogen bonds (rather than the three formed in a C-G pair). AT-rich sequences are easier to unzip since less energy is required to break relatively fewer hydrogen bonds.12 Once the origin has been located, these initiators recruit other proteins and form the pre-replication complex, which unzips, or separates, the DNA strands at the origin.
All known DNA replication systems require a free 3' hydroxyl group before synthesis can be initiated (Important note: DNA is read in 3' to 5' direction whereas a new strand is synthesised in the 5' to 3' direction—this is often confused). Four distinct mechanisms for synthesis have been described.
- All cellular life forms and many DNA viruses, phages and plasmids use a primase to synthesize a short RNA primer with a free 3′ OH group which is subsequently elongated by a DNA polymerase.
- The retroelements (including retroviruses) employ a transfer RNA that primes DNA replication by providing a free 3′ OH that is used for elongation by the reverse transcriptase.
- In the adenoviruses and the φ29 family of bacteriophages, the 3' OH group is provided by the side chain of an amino acid of the genome attached protein (the terminal protein) to which nucleotides are added by the DNA polymerase to form a new strand.
- In the single stranded DNA viruses — a group that includes the circoviruses, the geminiviruses, the parvoviruses and others — and also the many phages and plasmids that use the rolling circle replication (RCR) mechanism, the RCR endonuclease creates a nick in the genome strand (single stranded viruses) or one of the DNA strands (plasmids). The 5′ end of the nicked strand is transferred to a tyrosine residue on the nuclease and the free 3′ OH group is then used by the DNA polymerase for new strand synthesis.
The first is the best known of these mechanisms and is used by the cellular organisms. In this mechanism, once the two strands are separated, primase adds RNA primers to the template strands. The leading strand receives one RNA primer while the lagging strand receives several. The leading strand is extended from the primer in one motion by DNA polymerase, while the lagging strand is extended discontinuously from each primer, forming Okazaki fragments. RNase removes the primer RNA fragments, and another DNA polymerase enters to fill the gaps. When this is complete, a single nick on the leading strand and several nicks on the lagging strand can be found. Ligase works to fill these nicks in, thus completing the newly replicated DNA molecule.
The primase used in this process differs significantly between bacteria and archaea/eukaryotes. Bacteria use a primase belonging to the DnaG protein superfamily which contains a catalytic domain of the TOPRIM fold type. The TOPRIM fold contains an α/β core with four conserved strands in a Rossmann-like topology. This structure is also found in the catalytic domains of topoisomerase Ia, topoisomerase II, the OLD-family nucleases and DNA repair proteins related to the RecR protein.
The primase used by archaea and eukaryotes in contrast contains a highly derived version of the RNA recognition motif (RRM). This primase is structurally similar to many viral RNA dependent RNA polymerases, reverse transcriptases, cyclic nucleotide generating cyclases and DNA polymerases of the A/B/Y families that are involved in DNA replication and repair. All these proteins share a catalytic mechanism of di-metal-ion-mediated nucleotide transfer, whereby two acidic residues located at the end of the first strand and between the second and third strands of the RRM-like unit respectively, chelate two divalent cations.
As DNA synthesis continues, the original DNA strands continue to unwind on each side of the bubble, forming a replication fork with two prongs. In bacteria, which have a single origin of replication on their circular chromosome, this process eventually creates a "theta structure" (resembling the Greek letter theta: θ). In contrast, eukaryotes have longer linear chromosomes and initiate replication at multiple origins within these.13
|Enzyme||Function in DNA replication|
|DNA Helicase||Also known as helix destabilizing enzyme. Unwinds the DNA double helix at the Replication Fork.|
|DNA Polymerase||Builds a new duplex DNA strand by adding nucleotides in the 5' to 3' direction. Also performs proof-reading and error correction.|
|DNA clamp||A protein which prevents DNA polymerase III from dissociating from the DNA parent strand.|
|Single-Strand Binding (SSB) Proteins||Bind to ssDNA and prevent the DNA double helix from re-annealing after DNA helicase unwinds it thus maintaining the strand separation.|
|Topoisomerase||Relaxes the DNA from its super-coiled nature.|
|DNA Gyrase||Relieves strain of unwinding by DNA helicase; this is a specific type of topoisomerase|
|DNA Ligase||Re-anneals the semi-conservative strands and joins Okazaki Fragments of the lagging strand.|
|Primase||Provides a starting point of RNA (or DNA) for DNA polymerase to begin synthesis of the new DNA strand.|
|Telomerase||Lengthens telomeric DNA by adding repetitive nucleotide sequences to the ends of eukaryotic chromosomes.|
The replication fork is a structure that forms within the nucleus during DNA replication. It is created by helicases, which break the hydrogen bonds holding the two DNA strands together. The resulting structure has two branching "prongs", each one made up of a single strand of DNA. These two strands serve as the template for the leading and lagging strands, which will be created as DNA polymerase matches complementary nucleotides to the templates; the templates may be properly referred to as the leading strand template and the lagging strand template.
The leading strand is the template strand of the DNA double helix that is oriented so that the replication fork moves along it in the 3' to 5' direction. This allows the newly synthesized strand complementary to the original strand to be synthesized 5' to 3' in the same direction as the movement of the replication fork.
On the leading strand, a polymerase "reads" the DNA and adds nucleotides to it continuously. This polymerase is DNA polymerase III (DNA Pol III) in prokaryotes and presumably Pol ε in yeasts. In human cells the leading and lagging strands are synthesized by Pol α and Pol δ within the nucleus and Pol γ in the mitochondria. Pol ε can substitute for Pol δ in special circumstances.16
The lagging strand is the strand of the template DNA double helix that is oriented so that the replication fork moves along it in a 5' to 3' manner. Because of its orientation, opposite to the working orientation of DNA polymerase III, which moves on a template in a 3' to 5' manner, replication of the lagging strand is more complicated than that of the leading strand.
On the lagging strand, primase "reads" the DNA and adds RNA to it in short, separated segments. In eukaryotes, primase is intrinsic to Pol α.17 DNA polymerase III or Pol δ lengthens the primed segments, forming Okazaki fragments. Primer removal in eukaryotes is also performed by Pol δ.18 In prokaryotes, DNA polymerase I "reads" the fragments, removes the RNA using its flap endonuclease domain (RNA primers are removed by 5'-3' exonuclease activity of polymerase I [weaver, 2005]), and replaces the RNA nucleotides with DNA nucleotides (this is necessary because RNA and DNA use slightly different kinds of nucleotides). DNA ligase joins the fragments together.
As helicase unwinds DNA at the replication fork, the DNA ahead is forced to rotate. This process results in a build-up of twists in the DNA ahead.19 This build-up would form a resistance that would eventually halt the progress of the replication fork. DNA Gyrase is an enzyme that temporarily breaks the strands of DNA, relieving the tension caused by unwinding the two strands of the DNA helix; DNA Gyrase achieves this by adding negative supercoils to the DNA helix.20
Bare single-stranded DNA tends to fold back on itself and form secondary structures; these structures can interfere with the movement of DNA polymerase. To prevent this, single-strand binding proteins bind to the DNA until a second strand is synthesized, preventing secondary structure formation.21
Clamp proteins form a sliding clamp around DNA, helping the DNA polymerase maintain contact with its template, thereby assisting with processivity. The inner face of the clamp enables DNA to be threaded through it. Once the polymerase reaches the end of the template or detects double-stranded DNA, the sliding clamp undergoes a conformational change that releases the DNA polymerase. Clamp-loading proteins are used to initially load the clamp, recognizing the junction between template and RNA primers.2:274-5
Within eukaryotes, DNA replication is controlled within the context of the cell cycle. As the cell grows and divides, it progresses through stages in the cell cycle; DNA replication occurs during the S phase (synthesis phase). The progress of the eukaryotic cell through the cycle is controlled by cell cycle checkpoints. Progression through checkpoints is controlled through complex interactions between various proteins, including cyclins and cyclin-dependent kinases.22
The G1/S checkpoint (or restriction checkpoint) regulates whether eukaryotic cells enter the process of DNA replication and subsequent division. Cells that do not proceed through this checkpoint remain in the G0 stage and do not replicate their DNA.
Replication of chloroplast and mitochondrial genomes occurs independent of the cell cycle, through the process of D-loop replication.
Most bacteria do not go through a well-defined cell cycle but instead continuously copy their DNA; during rapid growth, this can result in the concurrent occurrences of multiple rounds of replication.23 In E. coli, the best-characterized bacteria, DNA replication is regulated through several mechanisms, including: the hemimethylation and sequestering of the origin sequence, the ratio of ATP to ADP, and the levels of protein DnaA. All these control the process of initiator proteins binding to the origin sequences.
Because E. coli methylates GATC DNA sequences, DNA synthesis results in hemimethylated sequences. This hemimethylated DNA is recognized by the protein SeqA, which binds and sequesters the origin sequence; in addition, DnaA (required for initiation of replication) binds less well to hemimethylated DNA. As a result, newly replicated origins are prevented from immediately initiating another round of DNA replication.24
ATP builds up when the cell is in a rich medium, triggering DNA replication once the cell has reached a specific size. ATP competes with ADP to bind to DnaA, and the DnaA-ATP complex is able to initiate replication. A certain number of DnaA proteins are also required for DNA replication — each time the origin is copied, the number of binding sites for DnaA doubles, requiring the synthesis of more DnaA to enable another initiation of replication.
Eukaryotes initiate DNA replication at multiple points in the chromosome, so replication forks meet and terminate at many points in the chromosome; these are not known to be regulated in any particular way. Because eukaryotes have linear chromosomes, DNA replication is unable to reach the very end of the chromosomes, but ends at the telomere region of repetitive DNA close to the end. This shortens the telomere of the daughter DNA strand. This is a normal process in somatic cells. As a result, cells can only divide a certain number of times before the DNA loss prevents further division. (This is known as the Hayflick limit.) Within the germ cell line, which passes DNA to the next generation, telomerase extends the repetitive sequences of the telomere region to prevent degradation. Telomerase can become mistakenly active in somatic cells, sometimes leading to cancer formation.
Additionally, to aid termination, the progress of the DNA replication fork must stop or be blocked. Organisms do this in essentially two ways: first, by having a termination site sequence in the DNA, and second, by having a protein that binds to this sequence to physically stop the replication fork. This protein is named the DNA replication terminus site-binding protein, or Ter protein.
Because bacteria have circular chromosomes, termination of replication occurs when the two replication forks meet each other on the opposite end of the parental chromosome. E. coli regulates this process through the use of termination sequences that, when bound by the Tus protein, enable only one direction of replication fork to pass through. As a result, the replication forks are constrained to always meet within the termination region of the chromosome.25
Researchers commonly replicate DNA in vitro using the polymerase chain reaction (PCR). PCR uses a pair of primers to span a target region in template DNA, and then polymerizes partner strands in each direction from these primers using a thermostable DNA polymerase. Repeating this process through multiple cycles produces amplification of the targeted DNA region. At the start of each cycle, the mixture of template and primers is heated, separating the newly synthesized molecule and template. Then, as the mixture cools, both of these become templates for annealing of new primers, and the polymerase extends from these. As a result, the number of copies of the target region doubles each round, increasing exponentially.26
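The exponential growth described above is easy to quantify. The sketch below is a minimal illustration (not a laboratory protocol) of ideal PCR arithmetic; it assumes perfect doubling every cycle, which real reactions only approximate.

def pcr_copies(initial_copies, cycles):
    # Ideal amplification: the target region doubles once per cycle.
    return initial_copies * 2 ** cycles

for n in (10, 20, 30):
    print(n, "cycles:", pcr_copies(1, n), "copies from one template molecule")
# 30 ideal cycles turn a single template into roughly a billion copies.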
- Imperfect DNA replication results in mutations. Berg JM, Tymoczko JL, Stryer L, Clarke ND (2002). "Chapter 27: DNA Replication, Recombination, and Repair". Biochemistry. W.H. Freeman and Company. ISBN 0-7167-3051-0.
- Alberts B, Johnson A, Lewis J, Raff M, Roberts K, Walter P (2002). "Chapter 5: DNA Replication, Repair, and Recombination". Molecular Biology of the Cell. Garland Science. ISBN 0-8153-3218-1.
- Berg JM, Tymoczko JL, Stryer L, Clarke ND (2002). "Chapter 27, Section 4: DNA Replication of Both Strands Proceeds Rapidly from Specific Start Sites". Biochemistry. W.H. Freeman and Company. ISBN 0-7167-3051-0.
- Alberts, B., et.al., Molecular Biology of the Cell, Garland Science, 4th ed., 2002, pp. 238-240 ISBN 0-8153-3218-1
- Allison, Lizabeth A. Fundamental Molecular Biology. Blackwell Publishing. 2007. p.112 ISBN 978-1-4051-0379-4
- Berg JM, Tymoczko JL, Stryer L, Clarke ND (2002). Biochemistry. W.H. Freeman and Company. ISBN 0-7167-3051-0. Chapter 27, Section 2: DNA Polymerases Require a Template and a Primer
- McCulloch SD, Kunkel TA (January 2008). "The fidelity of DNA synthesis by eukaryotic replicative and translesion synthesis polymerases". Cell Research 18 (1): 148–61. doi:10.1038/cr.2008.4. PMID 18166979.
- McCarthy D, Minner C, Bernstein H, Bernstein C (1976). "DNA elongation rates and growing point distributions of wild-type phage T4 and a DNA-delay amber mutant". J Mol Biol 106 (4): 963–81. PMID 789903.
- Drake JW (1970) The Molecular Basis of Mutation. Holden-Day, San Francisco ISBN 0816224501 ISBN 978-0816224500
- Alberts B, Johnson A, Lewis J, Raff M, Roberts K, Walter P (2002). Molecular Biology of the Cell. Garland Science. ISBN 0-8153-3218-1. Chapter 5: DNA Replication Mechanisms
- Weigel C, Schmidt A, Rückert B, Lurz R, Messer W (November 1997). "DnaA protein binding to individual DnaA boxes in the Escherichia coli replication origin, oriC". The EMBO Journal 16 (21): 6574–83. doi:10.1093/emboj/16.21.6574. PMC 1170261. PMID 9351837.
- Lodish H, Berk A, Zipursky LS, Matsudaira P, Baltimore D, Darnell J (2000). Molecular Cell Biology. W. H. Freeman and Company. ISBN 0-7167-3136-3.12.1. General Features of Chromosomal Replication: Three Common Features of Replication Origins
- Huberman JA, Riggs AD (1968). "On the mechanism of DNA replication in mammalian chromosomes". J Mol Biol 32 (2): 327–341. PMID 5689363.
- Griffiths A.J.F., Wessler S.R., Lewontin R.C., Carroll S.B. (2008). Introduction to Genetic Analysis. W. H. Freeman and Company. ISBN 0-7167-6887-9.[ Chapter 7: DNA: Structure and Replication. pg 283-290 ]
- Pursell, Z.F. et al. (2007). "Yeast DNA Polymerase ε Participates in Leading-Strand DNA Replication". Science 317 (5834): 127–130. doi:10.1126/science.1144067. PMC 2233713. PMID 17615360.
- Hansen, Barbara (2011). Biochemistry and Medical Genetics: Lecture Notes. Kaplan Medical. p. 21.
- Elizabeth R. Barry; Stephen D. Bell (12/2006). "DNA Replication in the Archaea". Microbiology and Molecular Biology Reviews 70 (4): 876–887. doi:10.1128/MMBR.00029-06. PMC 1698513. PMID 17158702.
- Rossi, Marie Louise (2009). Distinguishing the pathways of primer removal during Eukaryotic Okazaki fragment maturation. Thesis (PhD), School of Medicine and Dentistry, University of Rochester. Dr. Robert A. Bambara, Faculty Advisor. http://hdl.handle.net/1802/6537.
- Alberts B, Johnson A, Lewis J, Raff M, Roberts K, Walter P (2002). Molecular Biology of the Cell. Garland Science. ISBN 0-8153-3218-1. DNA Replication Mechanisms: DNA Topoisomerases Prevent DNA Tangling During Replication
- DNA gyrase: structure and function. Crit Rev Biochem Mol Biol. 1991.
- Alberts B, Johnson A, Lewis J, Raff M, Roberts K, Walter P (2002). Molecular Biology of the Cell. Garland Science. ISBN 0-8153-3218-1. DNA Replication Mechanisms: Special Proteins Help to Open Up the DNA Double Helix in Front of the Replication Fork
- Alberts B, Johnson A, Lewis J, Raff M, Roberts K, Walter P (2002). Molecular Biology of the Cell. Garland Science. ISBN 0-8153-3218-1. Intracellular Control of Cell-Cycle Events: S-Phase Cyclin-Cdk Complexes (S-Cdks) Initiate DNA Replication Once Per Cycle
- Tobiason DM, Seifert HS (2006). "The Obligate Human Pathogen, Neisseria gonorrhoeae, Is Polyploid". PLoS Biology 4 (6): e185. doi:10.1371/journal.pbio.0040185. PMC 1470461. PMID 16719561.
- Slater S, Wold S, Lu M, Boye E, Skarstad K, Kleckner N (September 1995). "E. coli SeqA protein binds oriC in two different methyl-modulated reactions appropriate to its roles in DNA replication initiation and origin sequestration". Cell 82 (6): 927–36. doi:10.1016/0092-8674(95)90272-4. PMID 7553853.
- TA Brown (2002). Genomes. BIOS Scientific Publishers. ISBN 1-85996-228-126.96.36.199. Termination of replication
- Saiki, RK; Gelfand DH, Stoffel S, Scharf SJ, Higuchi R, Horn GT, Mullis KB, Erlich HA (1988). "Primer-directed enzymatic amplification of DNA with a thermostable DNA polymerase". Science 239 (4839): 487–91. doi:10.1126/science.2448875. PMID 2448875. | http://www.bioscience.ws/encyclopedia/index.php?title=DNA_replication | 13 |
22 | The algebra of sets
The algebra of sets develops and describes the basic properties and laws of sets, the set-theoretic operations of union, intersection, and complementation and the relations of set equality and set inclusion. It also provides systematic procedures for evaluating expressions, and performing calculations, involving these operations and relations.
The algebra of sets is the development of the fundamental properties of set operations and set relations. These properties provide insight into the fundamental nature of sets. They also have practical considerations.
Just like expressions and calculations in ordinary arithmetic, expressions and calculations involving sets can be quite complex. It is helpful to have systematic procedures available for manipulating and evaluating such expressions and performing such computations.
In the case of arithmetic, it is elementary algebra that develops the fundamental properties of arithmetic operations and relations.
For example, the operations of addition and multiplication obey familiar laws such as associativity, commutativity and distributivity, while, the "less than or equal" relation satisfies such laws as reflexivity, antisymmetry and transitivity. These laws provide tools which facilitate computation, as well as describe the fundamental nature of numbers, their operations and relations.
The algebra of sets is the set-theoretic analogue of the algebra of numbers. It is the algebra of the set-theoretic operations of union, intersection and complementation, and the relations of equality and inclusion. These are the topics covered in this article. For a basic introduction to sets, see Set; for a fuller account, see Naive set theory.
The fundamental laws of set algebra
The binary operations of set union and intersection satisfy many identities. Several of these identities or "laws" have well-established names. Three pairs of laws are stated, without proof, in the following proposition.
PROPOSITION 1: For any sets A, B, and C, the following identities hold:
- commutative laws:
- A ∪B = B ∪A
- A ∩B = B ∩A
- associative laws:
- (A ∪B) ∪C = A ∪(B ∪C)
- (A ∩B) ∩C = A ∩(B ∩C)
- distributive laws:
- A ∪(B ∩C) = (A ∪B) ∩(A ∪C)
- A ∩(B ∪C) = (A ∩B) ∪(A ∩C)
Notice that the analogy between unions and intersections of sets, and addition and multiplication of numbers, is quite striking. Like addition and multiplication, the operations of union and intersection are commutative and associative, and intersection distributes over unions. However, unlike addition and multiplication, union also distributes over intersection.
PROPOSITION 2: For any subset A of universal set U, the following identities hold:
- identity laws:
- A ∪Ø = A
- A ∩U = A
- complement laws:
- A ∪AC = U
- A ∩AC = Ø
The identity laws (together with the commutative laws) say that, just like 0 and 1 for addition and multiplication, Ø and U are the identity elements for union and intersection, respectively.
Unlike addition and multiplication, union and intersection do not have inverse elements. However the complement laws give the fundamental properties of the somewhat inverse-like unary operation of set complementation.
The preceding five pairs of laws: the commutative, associative, distributive, identity and complement laws, can be said to encompass all of set algebra, in the sense that every valid proposition in the algebra of sets can be derived from them.
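Although the laws above are stated without proof, they are easy to spot-check computationally. The sketch below uses Python frozensets; the particular sets A, B, C and the universe U are arbitrary choices made for this example, not anything taken from the article.

U = frozenset(range(10))                                  # an arbitrary universal set
A, B, C = frozenset({1, 2, 3}), frozenset({2, 3, 4}), frozenset({3, 5})
complement = lambda X: U - X                              # complement relative to U
empty = frozenset()

assert A | B == B | A and A & B == B & A                            # commutative laws
assert (A | B) | C == A | (B | C) and (A & B) & C == A & (B & C)    # associative laws
assert A | (B & C) == (A | B) & (A | C)                             # union distributes over intersection
assert A & (B | C) == (A & B) | (A & C)                             # intersection distributes over union
assert A | empty == A and A & U == A                                # identity laws
assert A | complement(A) == U and A & complement(A) == empty        # complement laws
print("all five pairs of fundamental laws hold for this example")

Of course, a check on one example is not a proof; it only illustrates what the laws say.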
The principle of duality
The above propositions display the following interesting pattern. Each of the identities stated above is one of a pair of identities such that each can be transformed into the other by interchanging ∪ and ∩, and also Ø and U.
These are examples of an extremely important and powerful property of set algebra, namely, the principle of duality for sets, which asserts that for any true statement about sets, the dual statement obtained by interchanging unions and intersections, interchanging U and Ø and reversing inclusions is also true. A statement is said to be self-dual if it is equal to its own dual.
Some additional laws for unions and intersections
The following proposition states six more important laws of set algebra, involving unions and intersections.
PROPOSITION 3: For any subsets A and B of a universal set U, the following identities hold:
- idempotent laws:
- A ∪A = A
- A ∩A = A
- domination laws:
- A ∪U = U
- A ∩Ø = Ø
- absorption laws:
- A ∪(A ∩B) = A
- A ∩(A ∪B) = A
As noted above, each of the laws stated in Proposition 3 can be derived from the five fundamental pairs of laws stated in Proposition 1 and Proposition 2. As an illustration, a proof is given below for the idempotent law for union.
|A ∪A||= (A ∪A) ∩U||by the identity law for intersection|
|= (A ∪A) ∩(A ∪AC)||by the complement law for union|
|= A ∪(A ∩AC)||by the distributive law of union over intersection|
|= A ∪Ø||by the complement law for intersection|
|= A||by the identity law for union|
The following proof illustrates that the dual of the above proof is the proof of the dual of the idempotent law for union, namely the idempotent law for intersection.
|A ∩A||= (A ∩A) ∪Ø||by the identity law for union|
|= (A∩A) ∪(A ∩AC)||by the complement law for intersection|
|= A ∩(A ∪AC)||by the distributive law of intersection over union|
|= A ∩U||by the complement law for union|
|= A||by the identity law for intersection|
Some additional laws for complements
The following proposition states five more important laws of set algebra, involving complements.
PROPOSITION 4: Let A and B be subsets of a universe U, then:
- De Morgan's laws:
- (A ∪B)C = AC ∩BC
- (A ∩B)C = AC ∪BC
- double complement or Involution law:
- ACC = A
- complement laws for the universal set and the empty set:
- ØC = U
- UC = Ø
Notice that the double complement law is self-dual.
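In the same spirit as the earlier sketch, De Morgan's laws and the complement laws can be spot-checked on a concrete example; the sets chosen below are again arbitrary.

U = frozenset(range(10))
A, B = frozenset({0, 1, 2, 3}), frozenset({2, 3, 4, 5})
complement = lambda X: U - X

assert complement(A | B) == complement(A) & complement(B)   # De Morgan: complement of a union
assert complement(A & B) == complement(A) | complement(B)   # De Morgan: complement of an intersection
assert complement(complement(A)) == A                       # double complement law
assert complement(frozenset()) == U and complement(U) == frozenset()   # complements of Ø and U
print("De Morgan and complement laws hold for this example")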
The next proposition, which is also self-dual, says that the complement of a set is the only set that satisfies the complement laws. In other words, complementation is characterized by the complement laws.
PROPOSITION 5: Let A and B be subsets of a universe U, then:
- uniqueness of complements:
- If A ∪B = U, and A ∩B = Ø then B = AC.
The algebra of inclusion
The following proposition says that inclusion is a partial order.
PROPOSITION 6: If A, B and C are sets then the following hold:
- A ⊆ A
- A ⊆ B and B ⊆ A if and only if A = B
- If A ⊆ B and B ⊆ C then A ⊆ C
The following proposition says that for any set S the power set of S ordered by inclusion is a bounded lattice, and hence together with the distributive and complement laws above, show that it is a Boolean algebra.
PROPOSITION 7: If A, B and C are subsets of a set S then the following hold:
- existence of joins:
- A ⊆ A ∪B
- If A ⊆ C and B ⊆ C then A ∪B ⊆ C
- existence of meets:
- A ∩B ⊆ A
- If C ⊆ A and C ⊆ B then C ⊆ A ∩B
The following proposition says that the statement "A ⊆ B" is equivalent to various other statements involving unions, intersections and complements.
PROPOSITION 8: For any two sets A and B, the following are equivalent:
- A ⊆ B
- A ∩B = A
- A ∪B = B
- A − B = Ø
- BC ⊆ AC
The above proposition shows that the relation of set inclusion can be characterized by either of the operations of set union or set intersection, which means that the notion of set inclusion is axiomatically superfluous.
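The equivalences of Proposition 8 can also be tested mechanically. The sketch below checks that, for every pair of subsets of a small universe, the five statements are either all true or all false together; the three-element universe is an arbitrary choice that keeps the search small.

from itertools import combinations

U = frozenset({1, 2, 3})
subsets = [frozenset(c) for r in range(len(U) + 1) for c in combinations(U, r)]

def statements(A, B):
    return (A <= B,                      # A is a subset of B
            A & B == A,
            A | B == B,
            A - B == frozenset(),
            (U - B) <= (U - A))          # the complement of B is a subset of the complement of A

assert all(len(set(statements(A, B))) == 1 for A in subsets for B in subsets)
print("the five statements agree on every pair of subsets of U")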
The algebra of relative complements
The following proposition lists several identities concerning relative complements or set-theoretic difference.
PROPOSITION 9: For any universe U and subsets A, B, and C of U, the following identities hold:
- C − (A ∩B) = (C − A) ∪(C − B)
- C − (A ∪B) = (C − A) ∩(C − B)
- C − (B − A) = (A ∩C) ∪(C − B)
- (B − A) ∩C = (B ∩C) − A = B ∩(C − A)
- (B − A) ∪C = (B ∪C) − (A − C)
- A − A = Ø
- Ø − A = Ø
- A − Ø = A
- B − A = AC ∩B
- (B − A)C = A ∪BC
- U − A = AC
- A − U = Ø
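A few of these identities are checked below over every triple of subsets of a small, arbitrarily chosen universe; as before, this is an illustration rather than a proof.

from itertools import combinations

U = frozenset({1, 2, 3})
subsets = [frozenset(c) for r in range(len(U) + 1) for c in combinations(U, r)]

for A in subsets:
    for B in subsets:
        for C in subsets:
            assert C - (A & B) == (C - A) | (C - B)
            assert C - (A | B) == (C - A) & (C - B)
            assert (B - A) & C == (B & C) - A == B & (C - A)
            assert B - A == (U - A) & B            # B − A equals the complement of A intersected with B
print("the checked identities of Proposition 9 hold for all triples")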
The contents of this article are licensed from www.wikipedia.org under the GNU Free Documentation License. | http://www.all-science-fair-projects.com/science_fair_projects_encyclopedia/The_algebra_of_sets | 13 |
26 | What is Syllogism?
- Syllogism is logic. As discussed in class, all math is logic driven. Thinking logically involves taking widely accepted concepts and applying them to life.
For example, if I let go of my pencil and it falls on the ground, it is only logical to assume it fell due to gravity. I can safely assume this because gravity is known as the force on all objects which pulls them to the center of the earth.
- In addition to logic, syllogism involves deductive reasoning. If there is no clear reason as to why something happened, by using deductive reasoning you can hone in on what the cause might be.
- Aristotle had a major impact on syllogism with his work on logic, known as formal logic. Though his work may seem trivial, it really introduced a new kind of formal system of thought. He placed emphasis on answering logical questions and using proof.
An example would be: If all humans are mortal and all Greeks are humans, then all Greeks are mortals.
Syllogisms are interesting because they can include false statements or conclusions that appear to be true. Truth and validity in syllogisms are very different things. A syllogism can be true, but not valid in a logical sense, or vice-versa.
Types of Syllogism
There are three main types of syllogism-
1. Conditional: If A is true, then B is also true.
2. Categorical: If A is in C then B is also in C.
3. Disjunctive: If A is true, then B is false.
Conditional syllogisms have three parts: a major premise, a minor premise, and a conclusion. The first two statements are presumed true. For example, a major premise could be something like 'men like beer'. This is the "A" part of the statement. The second part is the minor premise. It could be something like 'beer tastes good'. This is the B part. The third statement is the conclusion, which combines the first two statements. 'If you're a man, you enjoy drinking beer'. Neither of the first two statements mentioned that, but through the combination of the first two statements, it is an easy conclusion to reach. Here is another example: imagine you are at a barber shop.
Your hair looks bad. I am qualified to cut and style hair. I can make your hair look good.
This is a silly example, but it does the job. When the barber says your hair looks bad, that is ok, because he is about to make your hair look good again. However, something that must be taken into consideration is that conditional syllogisms are rarely completed with all three sentences. Most often only the first two parts are needed and sometimes even the first part will do the trick. The listener should infer the meaning of the conditional syllogism on their own, as in commercials, for example. Name-brand medicines aren't necessarily better than their generic counterparts, but we tend to think so because they lead people to believe they can do the job better.
Categorical syllogisms tend to be more cut and dried, although somewhat similar to conditional syllogisms. They too include a major premise, a minor premise, and a conclusion. An example of a categorical syllogism would be 'All men are mortal'. This is assumed to be true. The second statement would say something like 'Jim is a man.' This is also assumed true. The conclusion would state 'Jim is mortal'. Through the combination of the first two parts, another easy conclusion is reached. All men are mortal, and since Jim is a man, he must be mortal. It should be noted that categorical syllogisms must follow these six rules to be valid:
1. The syllogism must contain exactly three terms, each used in the same sense throughout.
2. The middle term must be distributed in at least one premise.
3. Any term distributed in the conclusion must also be distributed in the premise in which it appears.
4. No conclusion follows from two negative premises.
5. If either premise is negative, the conclusion must be negative.
6. Two universal premises cannot yield a particular conclusion.
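One way to see why the categorical pattern works is to model the categories as sets, as in the short sketch below. The sets here are made-up example data, and the point is only that whenever both premises hold, the conclusion cannot fail.

# 'All M are P' is modeled as M being a subset of P; 'x is M' as membership.
mortals = {"Jim", "Ann", "Socrates", "Fido"}     # made-up example data
men = {"Jim", "Socrates"}

premise_1 = men <= mortals      # All men are mortal
premise_2 = "Jim" in men        # Jim is a man
if premise_1 and premise_2:
    assert "Jim" in mortals     # Jim is mortal: guaranteed once the premises are granted
    print("the premises hold, so the conclusion holds")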
Last but not least we have the disjunctive syllogism. It only has two parts, a major premise and a minor premise. They cannot both be true at once. That is to say, if A is true, then B is false. If B is true, A is false. An example would be a politician saying "Vote for me or vote for higher taxes". The alternative is bleak, but it is stated in such a way that it would give you no option but to vote for that politician. | http://math2033.uark.edu/wiki/index.php/Syllogisms | 13 |
18 | In implementing the Algebra process and content performance indicators, it is expected that students will identify and justify mathematical relationships. The intent of both the process and content performance indicators is to provide a variety of ways for students to acquire and demonstrate mathematical reasoning ability when solving problems. Local curriculum and local/state assessments must support and allow students to use any mathematically correct method when solving a problem.
Throughout this document the performance indicators use the words investigate, explore, discover, conjecture, reasoning, argument, justify, explain, proof, and apply. Each of these terms is an important component in developing a student's mathematical reasoning ability. It is therefore important that a clear and common definition of these terms be understood. The order of these terms reflects different stages of the reasoning process.
Investigate/Explore - Students will be given situations in which they will be asked to look for patterns or relationships between elements within the setting.
Discover - Students will make note of possible patterns and generalizations that result from investigation/exploration.
Conjecture - Students will make an overall statement, thought to be true, about the new discovery.
Reasoning - Students will engage in a process that leads to knowing something to be true or false.
Argument - Students will communicate, in verbal or written form, the reasoning process that leads to a conclusion. A valid argument is the end result of the conjecture/reasoning process.
Justify/Explain - Students will provide an argument for a mathematical conjecture. It may be an intuitive argument or a set of examples that support the conjecture. The argument may include, but is not limited to, a written paragraph, measurement using appropriate tools, the use of dynamic software, or a written proof.
Proof - Students will present a valid argument, expressed in written form, justified by axioms, definitions, and theorems.
Apply - Students will use a theorem or concept to solve an algebraic or numerical problem.
A.PS.1 Use a variety of problem solving strategies to understand new mathematical content
A.PS.2 Recognize and understand equivalent representations of a problem situation or a mathematical concept
A.PS.3 Observe and explain patterns to formulate generalizations and conjectures
A.PS.4 Use multiple representations to represent and explain problem situations (e.g., verbally, numerically, algebraically, graphically)
A.PS.5 Choose an effective approach to solve a problem from a variety of strategies (numeric, graphic, algebraic)
A.PS.6 Use a variety of strategies to extend solution methods to other problems
A.PS.7 Work in collaboration with others to propose, critique, evaluate, and value alternative approaches to problem solving
A.PS.8 Determine information required to solve a problem, choose methods for obtaining the information, and define parameters for acceptable solutions
A.PS.9 Interpret solutions within the given constraints of a problem
A.PS.10 Evaluate the relative efficiency of different representations and solution methods of a problem
A.RP.1 Recognize that mathematical ideas can be supported by a variety of strategies
A.RP.2 Use mathematical strategies to reach a conclusion and provide supportive arguments for a conjecture
A.RP.3 Recognize when an approximation is more appropriate than an exact answer
A.RP.4 Develop, verify, and explain an argument, using appropriate mathematical ideas and language
A.RP.5 Construct logical arguments that verify claims or counterexamples that refute them
A.RP.6 Present correct mathematical arguments in a variety of forms
A.RP.7 Evaluate written arguments for validity
A.RP.8 Support an argument by using a systematic approach to test more than one case
A.RP.9 Devise ways to verify results or use counterexamples to refute incorrect statements
A.RP.10 Extend specific results to more general cases
A.RP.11 Use a Venn diagram to support a logical argument
A.RP.12 Apply inductive reasoning in making and supporting mathematical conjectures
A.CM.1 Communicate verbally and in writing a correct, complete, coherent, and clear design (outline) and explanation for the steps used in solving a problem
A.CM.2 Use mathematical representations to communicate with appropriate accuracy, including numerical tables, formulas, functions, equations, charts, graphs, Venn diagrams, and other diagrams
A.CM.3 Present organized mathematical ideas with the use of appropriate standard notations, including the use of symbols and other representations when sharing an idea in verbal and written form
A.CM.4 Explain relationships among different representations of a problem
A.CM.5 Communicate logical arguments clearly, showing why a result makes sense and why the reasoning is valid
A.CM.6 Support or reject arguments or questions raised by others about the correctness of mathematical work
A.CM.7 Read and listen for logical understanding of mathematical thinking shared by other students
A.CM.8 Reflect on strategies of others in relation to one's own strategy
A.CM.9 Formulate mathematical questions that elicit, extend, or challenge strategies, solutions, and/or conjectures of others
A.CM.10 Use correct mathematical language in developing mathematical questions that elicit, extend, or challenge other students' conjectures
A.CM.11 Represent word problems using standard mathematical notation
A.CM.12 Understand and use appropriate language, representations, and terminology when describing objects, relationships, mathematical solutions, and rationale
A.CM.13 Draw conclusions about mathematical ideas through decoding, comprehension, and interpretation of mathematical visuals, symbols, and technical writing
A.CN.1 Understand and make connections among multiple representations of the same mathematical idea
A.CN.2 Understand the corresponding procedures for similar problems or mathematical concepts
A.CN.3 Model situations mathematically, using representations to draw conclusions and formulate new situations
A.CN.4 Understand how concepts, procedures, and mathematical results in one area of mathematics can be used to solve problems in other areas of mathematics
A.CN.5 Understand how quantitative models connect to various physical models and representations
A.CN.6 Recognize and apply mathematics to situations in the outside world
A.CN.7 Recognize and apply mathematical ideas to problem situations that develop outside of mathematics
A.CN.8 Develop an appreciation for the historical development of mathematics
A.R.1 Use physical objects, diagrams, charts, tables, graphs, symbols, equations, or objects created using technology as representations of mathematical concepts
A.R.2 Recognize, compare, and use an array of representational forms
A.R.3 Use representation as a tool for exploring and understanding mathematical ideas
A.R.4 Select appropriate representations to solve problem situations
A.R.5 Investigate relationships between different representations and their impact on a given problem
A.R.6 Use mathematics to show and understand physical phenomena (e.g., find the height of a building if a ladder of a given length forms a given angle of elevation with the ground)
A.R.7 Use mathematics to show and understand social phenomena (e.g., determine profit from student and adult ticket sales)
A.R.8 Use mathematics to show and understand mathematical phenomena (e.g., compare the graphs of the functions represented by the equations and )
Students will understand numbers, multiple ways of representing numbers, relationships among numbers, and number systems.
A.N.1 Identify and apply the properties of real numbers (closure, commutative, associative, distributive, identity, inverse) Note: Students do not need to identify groups and fields, but students should be engaged in the ideas.
Students will understand meanings of operations and procedures, and how they relate to one another.
A.N.2 Simplify radical terms (no variable in the radicand)
A.N.3 Perform the four arithmetic operations using like and unlike radical terms and express the result in simplest form
A.N.4 Understand and use scientific notation to compute products and quotients of numbers greater than 100%
A.N.5 Solve algebraic problems arising from situations that involve fractions, decimals, percents (decrease/increase and discount), and proportionality/direct variation
A.N.6 Evaluate expressions involving factorial(s), absolute value(s), and exponential expression(s)
A.N.7 Determine the number of possible events, using counting techniques or the Fundamental Principle of Counting
A.N.8 Determine the number of possible arrangements (permutations) of a list of items
Variables and Expressions
A.A.1 Translate a quantitative verbal phrase into an algebraic expression
A.A.2 Write verbal expressions that match given mathematical expressions
|A.A.3||Distinguish the difference between an algebraic expression and an algebraic equation|
|A.A.4||Translate verbal sentences into mathematical equations or inequalities|
|A.A.5||Write algebraic equations or inequalities that represent a situation|
|A.A.6||Analyze and solve verbal problems whose solution requires solving a linear equation in one variable or linear inequality in one variable|
|A.A.7||Analyze and solve verbal problems whose solution requires solving systems of linear equations in two variables|
|A.A.8||Analyze and solve verbal problems that involve quadratic equations|
|A.A.9||Analyze and solve verbal problems that involve exponential growth and decay|
|A.A.10||Solve systems of two linear equations in two variables algebraically (See A.G.7)|
|A.A.11||Solve a system of one linear and one quadratic equation in two variables, where only factoring is required Note: The quadratic equation should represent a parabola and the solution(s) should be integers.|
Variables and Expressions
A.A.12 Multiply and divide monomial expressions with a common base, using the properties of exponents Note: Use integral exponents only.
A.A.13 Add, subtract, and multiply monomials and polynomials
A.A.14 Divide a polynomial by a monomial or binomial, where the quotient has no remainder
A.A.15 Find values of a variable for which an algebraic fraction is undefined.
A.A.16 Simplify fractions with polynomials in the numerator and denominator by factoring both and renaming them to lowest terms
A.A.17 Add or subtract fractional expressions with monomial or like binomial denominators
A.A.18 Multiply and divide algebraic fractions and express the product or quotient in simplest form
A.A.19 Identify and factor the difference of two perfect squares
A.A.20 Factor algebraic expressions completely, including trinomials with a lead coefficient of one (after factoring a GCF)
Equations and Inequalities
A.A.21 Determine whether a given value is a solution to a given linear equation in one variable or linear inequality in one variable
A.A.22 Solve all types of linear equations in one variable
A.A.23 Solve literal equations for a given variable
A.A.24 Solve linear inequalities in one variable
A.A.25 Solve equations involving fractional expressions Note: Expressions which result in linear equations in one variable.
A.A.26 Solve algebraic proportions in one variable which result in linear or quadratic equations
A.A.27 Understand and apply the multiplication property of zero to solve quadratic equations with integral coefficients and integral roots
A.A.28 Understand the difference and connection between roots of a quadratic equation and factors of a quadratic expression
Patterns, Relations, and Functions
A.A.29 Use set-builder notation and/or interval notation to illustrate the elements of a set, given the elements in roster form
A.A.30 Find the complement of a subset of a given set, within a given universe
A.A.31 Find the intersection of sets (no more than three sets) and/or union of sets (no more than three sets)
A.A.32 Explain slope as a rate of change between dependent and independent variables
A.A.33 Determine the slope of a line, given the coordinates of two points on the line
A.A.34 Write the equation of a line, given its slope and the coordinates of a point on the line
A.A.35 Write the equation of a line, given the coordinates of two points on the line
A.A.36 Write the equation of a line parallel to the x- or y-axis
A.A.37 Determine the slope of a line, given its equation in any form
A.A.38 Determine if two lines are parallel, given their equations in any form
A.A.39 Determine whether a given point is on a line, given the equation of the line
A.A.40 Determine whether a given point is in the solution set of a system of linear inequalities
A.A.41 Determine the vertex and axis of symmetry of a parabola, given its equation (See A.G.10)
A.A.42 Find the sine, cosine, and tangent ratios of an angle of a right triangle, given the lengths of the sides
|A.A.43||Determine the measure of an angle of a right triangle, given the length of any two sides of the triangle|
|A.A.44||Find the measure of a side of a right triangle, given an acute angle and the length of another side|
|A.A.45||Determine the measure of a third side of a right triangle using the Pythagorean theorem, given the lengths of any two sides|
A.G.1 Find the area and/or perimeter of figures composed of polygons and circles or sectors of a circle Note: Figures may include triangles, rectangles, squares, parallelograms, rhombuses, trapezoids, circles, semi-circles, quarter-circles, and regular polygons (perimeter only).
A.G.2 Use formulas to calculate volume and surface area of rectangular solids and cylinders
Students will apply coordinate geometry to analyze problem solving situations.
A.G.3 Determine when a relation is a function, by examining ordered pairs and inspecting graphs of relations
A.G.4 Identify and graph linear, quadratic (parabolic), absolute value, and exponential functions
A.G.5 Investigate and generalize how changing the coefficients of a function affects its graph
A.G.6 Graph linear inequalities
A.G.7 Graph and solve systems of linear equations and inequalities with rational coefficients in two variables (See A.A.10)
A.G.8 Find the roots of a parabolic function graphically Note: Only quadratic equations with integral solutions.
A.G.9 Solve systems of linear and quadratic equations graphically Note: Only use systems of linear and quadratic equations that lead to solutions whose coordinates are integers.
A.G.10 Determine the vertex and axis of symmetry of a parabola, given its graph (See A.A.41) Note: The vertex will have an ordered pair of integers and the axis of symmetry will have an integral value.
Units of Measurement
A.M.1 Calculate rates using appropriate units (e.g., rate of a space ship versus the rate of a snail)
A.M.2 Solve problems involving conversions within measurement systems, given the relationship between the units
Students will understand that all measurement contains error and be able to determine its significance.
Error and Magnitude
|A.M.3||Calculate the relative error in measuring square and cubic units, when there is an error in the linear measure|
|A.M.2||Solve problems involving conversions within measurement systems, given the relationship between the units|
Students will collect, organize, display, and analyze data.
Organization and Display of Data
|A.S.1||Categorize data as qualitative or quantitative|
|A.S.2||Determine whether the data to be analyzed is univariate or bivariate|
|A.S.3||Determine when collected data or display of data may be biased|
|A.S.4||Compare and contrast the appropriateness of different measures of central tendency for a given data set|
|A.S.5||Construct a histogram, cumulative frequency histogram, and a box-and-whisker plot, given a set of data|
|A.S.6||Understand how the five statistical summary (minimum, maximum, and the three quartiles) is used to construct a box-and-whisker plot|
|A.S.7||Create a scatter plot of bivariate data|
|A.S.8||Construct manually a reasonable line of best fit for a scatter plot and determine the equation of that line|
Analysis of Data
|A.S.9||Analyze and interpret a frequency distribution table or histogram, a cumulative frequency distribution table or histogram, or a box-and-whisker plot|
|A.S.10||Evaluate published reports and graphs that are based on data by considering: experimental design, appropriateness of the data analysis, and the soundness of the conclusions|
|A.S.11||Find the percentile rank of an item in a data set and identify the point values for first, second, and third quartiles|
|A.S.12||Identify the relationship between the independent and dependent variables from a scatter plot (positive, negative, or none)|
|A.S.13||Understand the difference between correlation and causation|
|A.S.14||Identify variables that might have a correlation but not a causal relationship|
Students will make predictions that are based upon data analysis.
Predictions from Data
|A.S.15||Identify and describe sources of bias and its effect, drawing conclusions from data|
|A.S.16||Recognize how linear transformations of one-variable data affect the data's mean, median, mode, and range|
|A.S.17||Use a reasonable line of best fit to make a prediction involving interpolation or extrapolation|
Students will understand and apply concepts of probability.
|A.S.18||Know the definition of conditional probability and use it to solve for probabilities in finite sample spaces|
|A.S.19||Determine the number of elements in a sample space and the number of favorable events|
|A.S.20||Calculate the probability of an event and its complement|
|A.S.21||Determine empirical probabilities based on specific sample data|
Determine, based on calculated probability of a set of events, if:
Calculate the probability
| http://www.p12.nysed.gov/ciai/mst/math/standards/algebra.html | 13 |
15 | The test form of an argument is what results from replacing different words, or sentences, that make up the argument with letters; the letters are called variables.
Some examples of valid argument forms are modus ponens, modus tollens, and disjunctive syllogism. One invalid argument form is affirming the consequent.
Just as variables can stand for various numbers in mathematics, variables can stand for various words, or sentences, in logic. Argument forms are very important in the study of logic. The parts of argument forms--sentence forms (see below)--are equally important. In a logic course one would learn how to determine what the forms of various sentences and arguments are.
The basic notion of argument form can be introduced with an example. Here is an example of an argument:
A: All humans are mortal. Socrates is human. Therefore, Socrates is mortal.
We can rewrite argument A by putting each sentence on its own line; call the result B:
- All humans are mortal.
- Socrates is human.
- Therefore, Socrates is mortal.
To demonstrate the important notion of the form of an argument, substitute letters for similar items throughout B; call the result C:
- All S is P.
- a is S.
- Therefore, a is P.
All we have done in C is to put 'S' for 'human' and 'humans', 'P' for 'mortal', and a for 'Socrates'; what results, C, is the form of the original argument in A. So argument form C is the form of argument A. Moreover, each individual sentence of C is the sentence form of its respective sentence in A.
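The substitution step can even be mimicked mechanically. In the sketch below, form C is written as sentence templates over the variables S, P and a (the template wording is my own rendering of C), and filling in concrete terms recovers argument A up to English number agreement, which the form deliberately ignores.

# Form C as sentence templates over the variables S, P and a.
form_C = ["All {S} is {P}.", "{a} is {S}.", "Therefore, {a} is {P}."]

instance = [sentence.format(S="human", P="mortal", a="Socrates") for sentence in form_C]
print(*instance, sep="\n")
# Prints:
#   All human is mortal.        (logically the same as "All humans are mortal";
#   Socrates is human.           only the singular/plural wording differs)
#   Therefore, Socrates is mortal.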
There is a good reason why attention to argument and sentence forms is important. The reason is this: form is what makes an argument valid or cogent. | http://july.fixedreference.org/en/20040724/wikipedia/Argument_form | 13 |
22 | An argument is a set of propositions designed to demonstrate that a particular conclusion, called the thesis, is true. An argument is not simply a statement of opinion, but an attempt to give reasons for holding certain opinions. An historical argument gives reasons for holding a certain opinion about an event in the past.
There are many disciplines in which the answers to questions can be presented in an straightforward, unambiguous manner. History is not one of these. Unlike physics or chemistry, where there is usually only one generally accepted answer to any question, in history there are usually many ways that one can understand, or interpret, what has happened in the past. It is therefore necessary to choose from among these possibilities and decide which one is correct. This choice should be based on a solid understanding of the issues and the evidence. We should be able to give reasons for our choice, our opinion, on that subject. This choice should be based on evidence.
As we have discussed earlier, there are two principal sources of evidence we can use for developing our opinions about what happened in the past: primary and secondary sources. Secondary sources are useful because they present the conclusions of those who have more knowledge and expertise on the subject than we are likely to have. On the other hand, if we want to find out what really happened for ourselves, we need to look at the primary sources, just as those who wrote the secondary sources did. This exercise, therefore, will help you develop an argument based on primary sources.
III. Parts of an Argument
A. Thesis: that statement which you are trying to prove. In an argumentative essay, this conclusion would appear as your thesis statement. In a philosophy class, this would be called the "conclusion."
B. Argument: the reasons you give for your conclusion. An argument is considered persuasive if the reasons given are good reasons for the conclusion; an argument is considered unpersuasive if the reasons are not good reasons for the conclusion. In an argumentative essay, these reasons will generally appear as the topic sentences of individual paragraphs. In a philosophy class, these reasons would be referred to as "premises."
C. Evidence: the concrete "facts" upon which you base your argument. Evidence can be descriptions of events, philosophical concepts, economic statistics, laws, battles, paintings, poems or any other information you have about the past. Some of this information you will find in secondary sources, such as your textbooks, but for this course, most of the evidence should come from primary sources.
IV. Evaluating an Argument
- 1. Is the argument persuasive? That is, does the argument in fact give reasons to believe the thesis?
- 2. Are the reasons plausible?
- 3. Is there sufficient evidence to support the argument? While writers often cite an example as a way to illustrate a particular point, a single example is often not sufficient to support a generalization.
- 4. Are the examples representative? That is, do the examples chosen truly reflect the historical situation or were they chosen to exclude evidence which would tend to disprove or complicate the thesis?
- 5. Does the argument present enough background information so that the reader can assess the significance of the evidence presented?
- 6. Does the argument take into account counterexamples?
- 7. Does the argument refute possible objections?
- 8. Does the argument cite sources? | http://thenagain.info/Classes/Basics/HistArg.html | 13 |
18 | Teaching Critical Thinking Through Debate
As the time draws near for U.S. citizens to exercise their voting rights, teachers have an opportunity to engage students in classroom discussions surrounding the presidential elections.
Students must be taught to frame their knowledge with deeper concepts than what immediately surrounds them. For example, French and Spanish territories in what is now known as Texas were governed by political ideologies of those countries. Asking if or how those early days influenced our current political environment broadens the scope of understanding and applicability.
How to Teach Critical Thinking
Teachers encourage critical thinking development through instructional processes like scaffolding and modeling. Students who see their teacher asking questions that require in-depth exploration on a regular basis will begin to ask deeper questions about their own perceptions.
The development of critical thinking skills is segmented into several steps:
- Knowledge acquisition: Receiving information and placing that data into retrievable chunks for future application
- Comprehension: Understanding the knowledge gained thoroughly
- Application: Finding ways to apply that knowledge to real life in a meaningful way
- Evaluation: Analyzing applications for accuracy
- Incorporation: Using acquired knowledge in myriad ways and for other purposes than originally identified
- Review: Evaluating the process through more challenging questions and applications
By leading students through this process, teachers trigger analytical thought and prompt students to look beyond their own knowledge base to expand their comprehension of concepts such as political ideology.
Thinking Critically About Presidential Debates
Using debate strategies as a conceptual starting point, educators can help their students become superior critical thinkers by gradually adding more challenging questions. Utilizing the presidential debates as an example, teachers can assign topics like taxes or government spending — two highly debated issues in the current election cycle. Preparing a classroom for one-on-one debates to improve critical thinking skills involves understanding the topic as well as other factors that affect audience perception.
Debate Strategy #1: Saying what you mean in a clear, concise manner
Educators might ask students to consider the phrase, “I will not raise taxes if I am elected.” Building on this statement, students should be able to identify several areas for further exploration and thoughtful consideration such as the questions listed below.
- What is the definition of taxes?
- Does the speaker have the authority to follow through on these statements?
- What makes this speaker a credible authority?
- Are there situations that could force the speaker to reverse his or her position?
Critical thinkers will find more complex questions as they carefully consider the statement.
Debate Strategy #2: Matching body language to spoken words
Controlling body language during a debate is almost as important as the words and inflection. In recent debates between the incumbent and the challenger, news commentators have spent hours analyzing body language. Hand gestures, facial expressions, posture and encroaching on personal space can be positive and negative attributes. Teachers can ask students to observe candidates’ body language during a debate and consider these questions:
- Does this person’s body language mirror what they are saying?
- What kind of body language do the candidates have toward each other?
Debate coaches advise getting to know the competition by watching films, reading published commentaries or interviews and examining past actions or political voting records.
Debate Strategy #3: Debating in the classroom
Allowing students to host mock-presidential debates is an excellent way to demonstrate the need to ask challenging questions. Every debate will reveal at least one weakness. Discovering these weaknesses provides openings for further understanding and more advanced critical thinking skills.
Teachers that incorporate presidential debate analysis and mock debates as part of their lesson plans will find ample opportunity to strengthen critical thinking skills. | http://lessonplanspage.com/teaching-critical-thinking-through-debate/ | 13 |
30 | 3.2 Truth Tables
A truth table lists all possible combinations of truth values. A single statement p has two possible truth values: truth (T) and falsity (F). Given two statements p and q, there are four possible truth value combinations, ranging from TT to FF. So there are four rows in the truth table. In general, given n statements, there are 2^n cases (or rows) in the truth table.
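As a quick illustration of the 2^n count, the following minimal Python sketch generates every combination of truth values for n statements (the helper name truth_rows is ours, chosen only for this example):

from itertools import product

# Each row of a truth table is one combination of truth values.
def truth_rows(n):
    return list(product([True, False], repeat=n))

print(len(truth_rows(1)))  # 2 rows for a single statement p
print(len(truth_rows(2)))  # 4 rows for two statements p and q
print(len(truth_rows(3)))  # 8 rows for three statements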
3.2.1 Basic Truth Tables of the Five Connectives
Formally, the following five basic truth tables define the five connectives.
The Truth Table of Negation
The possible truth values of a negation are opposite to the possible truth values of the statement it negates. If p is true, then ∼p is false. If p is false, then ∼p is true.
The Truth Table of Conjunction
A conjunction p ∙ q is true only when both of its conjuncts are true. It is false in all other three cases.
p | q | p ∙ q
T | T |   T
T | F |   F
F | T |   F
F | F |   F
The Truth Table of Disjunction
A disjunction p ∨ q is false only when both of its disjuncts are false. In the other three cases, the disjunction is true.
p | q | p ∨ q
T | T |   T
T | F |   T
F | T |   T
F | F |   F
The Truth Table of Conditional
A conditional is false only when its antecedent is true but its consequent is false. This is so because p ⊃ q says that p is a sufficient condition of q. Now if p is true but q is false, then p cannot be a sufficient condition for q. Consequently, the conditional p ⊃ q would be false.
p | q | p ⊃ q
T | T |   T
T | F |   F
F | T |   T
F | F |   T
The Truth Table of Biconditional
A biconditional p ≡ q is true only when both p and q share the same truth value. If p and q have opposite truth values, then the biconditional is false.
p | q | p ≡ q
T | T |   T
T | F |   F
F | T |   F
F | F |   T
3.2.2 Determining the Truth Value of a Compound Statement
The truth value of a compound statement is determined by the truth values of the simple statements it contains and the basic truth tables of the five connectives. In the following example, (C ∙ D) ⊃ E, the statements C and D are given as true, but E is given as false. To determine the truth value of the conditional, we first write down the given truth value under each letter. Afterwards, using the truth table of conjunction, we can determine the truth value of the antecedent C ∙ D. Because both C and D are true, C ∙ D is true. We then write down "T" under the dot "∙" to indicate that C ∙ D is true. Finally, since the antecedent C ∙ D is true, but the consequent E is false, the conditional is false. The final truth value is written under the horseshoe "⊃".
In the next example, G ∨ (∼H ⊃ K), the compound statement is a disjunction.
The statement H is given as true, but G and K false. To figure out the truth value of the disjunction, we need to first determine the truth value of the second disjunct ∼H ⊃ K. Since H is true, ∼H is false. We write down “F” under the tilde “∼”. Next, since the antecedent ∼H is false and the consequent K is false, the conditional ∼H ⊃ K is true. So we write down “T” under the horseshoe “⊃”. In the last step, we figure out that the disjunction is true because the first disjunct G is false but the second disjunct ∼H ⊃ K is true.
In the third example, we try to determine the truth value of a compound statement containing a biconditional, ∼(M ≡ A) ∙ (D ⊃ B), from the given truth values that A and D are true, but M and B are false.
Since M is false, but A is true, from the third row in the truth table of biconditional, we know that M ≡ A is false, and write down “F” under the triple bar “≡”. We then decide that D ⊃ B is false because D is true but B is false. Next we write down “T” under the tilde “∼” to indicate that ∼(M ≡ A) is true. Finally, we can see that the whole conjunction is false because the second conjunct is false.
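As a rough cross-check of the three examples just worked out, here is a minimal Python sketch; the formulas are written as reconstructed above, and the helper names implies and iff are ours, not the textbook's:

def implies(p, q):
    return (not p) or q   # p ⊃ q is false only when p is true and q is false

def iff(p, q):
    return p == q         # p ≡ q is true only when p and q share a truth value

C, D, E = True, True, False
print(implies(C and D, E))                 # (C ∙ D) ⊃ E evaluates to False

G, H, K = False, True, False
print(G or implies(not H, K))              # G ∨ (∼H ⊃ K) evaluates to True

A, D, M, B = True, True, False, False
print((not iff(M, A)) and implies(D, B))   # ∼(M ≡ A) ∙ (D ⊃ B) evaluates to False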
3.2.3 Three Properties of Statement
In Propositional Logic, a statement is tautologous, self-contradictory or contingent. Which property it has is determined by its possible truth values.
A statement is tautologous if it is logically true, that is, if it is logically impossible for the statement to be false. If we look at the truth table of a tautology, we would see that all its possible truth values are Ts. One of the simplest tautologies is a disjunction such as D ∨ ∼D.
To see all the possible truth values of D ∨ ∼D, we construct its truth table by first listing all the possible truth values of the statement D under the letter “D”. Next, we derive the truth values in the green column under the tilde “∼” from the column under the second “D”. The tilde column highlighted in green lists all the possible truth values of ∼D. Finally, from the column under the first “D” and the tilde column, we can come up with all the possible truth values of D ∨ ∼D. They are highlighted in red color, and since the red column under the wedge “∨” lists all the possible truth values of D ∨ ∼D, we put a border around it to indicate that it is the final (or main) column in the truth table. Notice that the two possible truth values in this column are Ts. Since the truth table lists all the possible truth values, it shows that it is logically impossible for D ∨ ∼D to be false. Accordingly, it is a tautology.
The next tautology K ⊃ (N ⊃ K) has two different letters: "K" and "N". So its truth table has four (2^2 = 4) rows.
To construct the table, we put down the letter “T” twice and then the letter “F” twice under the first letter from the left, the letter “K”. As a result, we have “TTFF” under the first “K” from the left. We then repeat the column for the second “K”. Under “N”, the second letter from the left, we write down one “T” and then one “F” in turn until the column is completed. This results in having “TFTF” under “N”. Next, we come up with the possible truth values for N ⊃ K since it is inside the parentheses. The column is highlighted in green. Afterwards, we derive the truth values for the first horseshoe (here highlighted in red) based on the truth values in the first K column and the second horseshoe column (i.e., the green column). Finally we put a border around the first horseshoe column to show it is the final column. We see that all the truth values in that column are all Ts. So K ⊃ (N ⊃ K) is a tautology.
A statement is self-contradictory if it is logically false, that is, if it is logically impossible for the statement to be true. After completing the truth table of the conjunction D ∙ ∼D, we see that all the truth values in the main column under the dot are Fs. The truth table illustrates clearly that it is logically impossible for D ∙ ∼D to be true.
The conjunction G ∙ ∼(H ⊃ G) has two distinct letters “G” and “H”, so its truth table has four rows.
We write down “TTFF” under the first “G” from the left, and then repeat the values under the second “G”. Under “H”, we put down “TFTF”. We then derive the truth values under the horseshoe from the H column and the second G column. Next, all the possible truth values for ∼(H ⊃ G) are listed under the tilde (highlighted in blue). Notice they are opposite to the truth values in the horseshoe column. Afterwards, we use the first G column and the tilde column to come up with the truth values listed under the dot. Since they are all Fs, G ∙ ∼(H ⊃ G) is a self-contradiction.
A statement is contingent if it is neither tautologous nor self-contradictory. In other words, it is logically possible for the statement to be true and it is also logically possible for it to be false. The conditional D ⊃ ∼D is contingent because its final column contains both a T and a F. Since each row in the truth table represents one logical possibility, this shows that it is logically possible for D ⊃ ∼D to be true, as well as for it to be false.
In constructing the truth table for B ≡ (∼E ⊃ B), we need to first come up with the truth values for ∼E because it is the antecedent of the conditional ∼E ⊃ B inside the parentheses. We then use the truth values under the tilde and the second “B” to derive the truth values for the horseshoe column. Next we come up with the column under the triple bar from the first B column and the horseshoe column. The final column has three Ts, representing the logical possibility of being true, and one F, representing the logical possibility of being false. Since its main column contains both T and F, B ≡ (∼E ⊃ B) is contingent.
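A brute-force check of these three properties is straightforward to sketch in Python; the function and helper names below are ours, chosen only for this illustration:

from itertools import product

def implies(p, q):
    return (not p) or q

# Classify a formula by inspecting every row of its truth table.
def classify(formula, n):
    values = [formula(*row) for row in product([True, False], repeat=n)]
    if all(values):
        return "tautologous"
    if not any(values):
        return "self-contradictory"
    return "contingent"

print(classify(lambda d: d or (not d), 1))                  # D ∨ ∼D: tautologous
print(classify(lambda k, n: implies(k, implies(n, k)), 2))  # K ⊃ (N ⊃ K): tautologous
print(classify(lambda d: d and (not d), 1))                 # D ∙ ∼D: self-contradictory
print(classify(lambda g, h: g and not implies(h, g), 2))    # G ∙ ∼(H ⊃ G): self-contradictory
print(classify(lambda d: implies(d, not d), 1))             # D ⊃ ∼D: contingent
print(classify(lambda b, e: b == implies(not e, b), 2))     # B ≡ (∼E ⊃ B): contingent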
3.2.4 Relations between Two Statements
By comparing all the possible truth values of two statements, we can determine which of the following logical relations exists between them: logical equivalence, contradiction, consistency and inconsistency.
Two statements are logically equivalent if they necessarily have the same truth value. This means that their possible truth values listed in the two final columns are the same in each row.
To see whether the pair of statements K ⊃ H and ∼H ⊃ ∼K are logically equivalent to each other, we construct a truth table for each statement.
Notice in both truth tables, the statement K has the truth value distribution TTFF, and H has the truth value distribution TFTF. This is crucial because we need to make sure that we are dealing with the same truth value distributions in each row. For instance, in the third row, K is false but H is true in both truth tables. After we complete both truth tables, we see that the two main (or final) columns are identical to each other. This shows that the two statements are logically equivalent.
It is important to be able to tell whether two English sentences are logically equivalent. To see whether these two statements
- The stock market will fall if interest rates are raised.
- The stock market won’t fall only if interest rates are not raised.
are logically equivalent, we first symbolize them as R ⊃ F and ∼F ⊃ ∼R. We then construct their truth tables.
Since the final two columns are identical, indeed they are logically equivalent.
Two statements are contradictory to each other if they necessarily have the opposite truth values. This means that their truth values in the final columns are opposite in every row of the truth tables. After completing the truth tables for D ⊃ B and D ∙ ∼B, we can see clearly from the two final columns that they are contradictory to each other.
Two statements are consistent if it is logically possible for both of them to be true. This means that there is at least one row in which the truth values in both the final columns are true.
To find out whether ∼A ∙ ∼R and ∼(R ∙ A) are consistent with each other, we construct their truth tables below.
Notice again that we have to write down “TTFF” under both “A”, and then “TFTF” under both “R”. After both truth tables are completed, we can see that in the fourth row of the final column each statement has T as its truth value. Since each row stands for a logical possibility, this means that it is logically possible for both of them to be true. So they are consistent with each other.
If we cannot find at least one row in which the truth values in both the final columns are true, then the two statements are inconsistent. That is, it is not logically possible for both of them to be true. In other words, at least one of them must be false. Therefore, if it comes to our attention that two statements are inconsistent, then we must reject at least one of them as false. Failure to do so would mean being illogical.
In the final columns of the truth tables of M ∙ S and ∼(M ≡ S), we do not find a row in which both statements are true. This shows that they are inconsistent with each other, and at least one of them must be false.
There can be more than one logical relation between two statements. If two statements are contradictory to each other, then they would have opposite truth values in every row of the main columns. As a result, there cannot be a row in which both statements are true. This means that they must also be inconsistent to each other. However, if two statements are inconsistent, it does not follow that they must be contradictory to each other. The above pair, M ∙ S and ∼(M ≡ S) are inconsistent, but not contradictory to each other. The last row of the two final columns shows that it is logically possible for both statements to be false.
For logically equivalent statements to be consistent with one another, they have to meet the condition that none of them is a self-contradictory statement. The final column of a self-contradictory statement contains no T. So it is not logically possible for a pair of self-contradictory statements to be consistent with each other.
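These relations, too, can be checked mechanically. The following minimal Python sketch compares two formulas over all rows of their shared truth table (the helper names are ours):

from itertools import product

def implies(p, q):
    return (not p) or q

def relations(f, g, n):
    rows = [(f(*row), g(*row)) for row in product([True, False], repeat=n)]
    return {
        "logically equivalent": all(a == b for a, b in rows),
        "contradictory": all(a != b for a, b in rows),
        "consistent": any(a and b for a, b in rows),
    }

# K ⊃ H and ∼H ⊃ ∼K: logically equivalent (and consistent)
print(relations(lambda k, h: implies(k, h), lambda k, h: implies(not h, not k), 2))
# D ⊃ B and D ∙ ∼B: contradictory, hence also inconsistent
print(relations(lambda d, b: implies(d, b), lambda d, b: d and not b, 2))
# M ∙ S and ∼(M ≡ S): inconsistent but not contradictory
print(relations(lambda m, s: m and s, lambda m, s: not (m == s), 2))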
3.2.5 Truth Tables for Arguments
A deductive argument is valid if its conclusion necessarily follows from its premises. That is, if the premises are true, then the conclusion must be true. This means that if it is logically possible for the premises to be true but the conclusion false, then the argument is invalid. Since a truth table lists all logical possibilities, we can use it to determine whether a deductive argument is valid. The whole process has three steps:
- Symbolize the deductive argument;
- Construct the truth table for the argument;
- Determine the validity—look to see if there is at least one row in the truth table in which the premises are true but the conclusion false. If such a row is found, this would mean that it is logically possible for the premises to be true but the conclusion false. Accordingly, the argument is invalid. If such a row is not found, this would mean that it is not logically possible to have true premises with a false conclusion. Therefore, the argument is valid.
To decide whether the argument
|If young people don’t have good economic opportunities, there would be more gang violence. Since there is more gang violence, young people don’t have good economic opportunities.||3.2a|
is valid, we first symbolize each statement to come up with the argument form. Next, we line up the three statements horizontally, separating the two premises with a single vertical line, and the premises and the conclusion with a double vertical line.
Afterwards, we write down “TTFF” under “O” and “TFTF” under “V”. We then derive the truth values for ∼O. Next, we complete the column under the horseshoe and put a border around it. We then complete the column for the second premise V and the conclusion ∼O. The three columns with borders list all the possible truth values for the three statements. To determine the validity of (3.2a), we go over the three main columns row by row to see whether there is a row in which the premises are true, but the conclusion false. We find such a case in the first row. This means that it is logically possible for the premises to be true, but the conclusion false. So (3.2a) is invalid.
To determine whether Argument (3.2b) is valid, we check the three final columns row by row to see if there is a row in which the premises are true but the conclusion false. We do not find such a row. So (3.2b) is valid.
|Psychics can foretell the future only if the future has been determined. But the future has not been determined. It follows that psychics cannot foretell the future.||3.2b|
In the next example, there are three different letters in the argument form of (3.2c). So its truth table has eight (2^3 = 8) rows. To exhaust all possible truth value combinations, we write down "TTTTFFFF" under the first letter from the left, "E". For the second letter from the left, "F", we put down "TTFFTTFF", and for the third, "P", "TFTFTFTF". For the first premise, we first come up with the truth value for F ∙ P and write down the truth values under the dot. We then derive the truth values under the horseshoe using the first E column and the dot column. Next, we fill out the columns for ∼E and ∼P. After the truth table is completed, we go over the three main columns to see if there is at least one row with true premises but a false conclusion. We find such a case in the fifth and the seventh rows. So the argument is invalid.
|Public education will improve only if funding for education is increased and parents are more involved in the education process. Since public education is not improving, we can conclude that there is not enough parental involvement in the education process.||3.2c|
Notice the next argument (3.2d) has three premises. After symbolization, its argument form contains three different letters. So its truth table has eight rows. After completing the truth table, we check each row of the four final columns, looking for rows with true premises but a false conclusion. We do not find any. So (3.2d) is valid.
|If more money is spent on building prisons, then less money would go to education. But kids would not be well-educated if less money goes to education. We have spent more money building prisons. As a result, kids would not be well-educated.||3.2d|
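Before turning to the exercises, note that this row-by-row test for validity is easy to automate. Here is a minimal Python sketch, using argument (3.2a) with the letters O and V as above, and argument (3.2b) with P for "psychics can foretell the future" and D for "the future has been determined" (that letter choice is ours):

from itertools import product

def implies(p, q):
    return (not p) or q

# An argument form is invalid if some row makes every premise true
# and the conclusion false; otherwise it is valid.
def is_valid(premises, conclusion, n):
    for row in product([True, False], repeat=n):
        if all(p(*row) for p in premises) and not conclusion(*row):
            return False   # a counterexample row was found
    return True

# (3.2a)  ∼O ⊃ V, V  ∴ ∼O
print(is_valid([lambda o, v: implies(not o, v), lambda o, v: v],
               lambda o, v: not o, 2))    # False: invalid
# (3.2b)  P ⊃ D, ∼D  ∴ ∼P
print(is_valid([lambda p, d: implies(p, d), lambda p, d: not d],
               lambda p, d: not p, 2))    # True: valid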
- Determine the truth-values of the following symbolized statements. Let A, B, and C be true; G, H, and K, false; M and N, unknown truth-value. You need to show how you determine the truth-value step by step.
- A ∙ ∼G
- ∼A ∨ G
- ∼(A ⊃ G)
- ∼G ≡ (B ⊃ K)
- (A ⊃ ∼C) ∨ C
- B ∙ (∼H ⊃ A)
- M ⊃ ∼G
- (M ∨ ∼A) ∨ H
- (N ∙ ∼N) ≡ ∼K
- ∼(M ∨ ∼M) ⊃ N
- Use the truth table to decide whether each of the following symbolized statements is tautologous, self-contradictory or contingent.
- ∼G ⊃ G
- D ⊃ (B ∨ ∼B)
- K ∙ (∼M ∨ ∼K)
- (S ⊃ H) ≡ (∼H ∙ S)
- (R ⊃ E) ∨ ∼H
- (N ∙ ∼(D ∨ ∼E)) ≡ D
- Use truth tables to determine whether each pair of statements are logically equivalent, contradictory, consistent or inconsistent. If necessary, symbolize the statements. Identify all the relations between the statements.
- G ∙ ∼D
D ∨ ∼G
- ∼(M ⊃ B)
∼B ∙ ∼M
- ∼A ≡ C
(A ∙ ∼C) ∨ (C ∙ ∼A)
- J ⊃ ∼(L ∨ N)
(∼L ⊃ N) ∙ J
- ∼((E ⊃ K) ∙ ∼O)
O ∨ (E ∙ ∼K)
- ∼(H ∨ ∼(R ∙ S))
(∼S ∙ R) ⊃ ∼H
- If Steve does not support you, then he is not your friend. (S, F)
If Steve supports you, then he is your friend. (S, F)
- If someone loves you, then she or he is nice to you. (L, N)
If someone is nice to you, then she or he loves you. (N, L)
- Without campaign finance reform people would not have equal access to political power. (C, E)
With campaign finance reform people would have equal access to political power. (C, E)
- The economy will slow down unless consumer confidence stays high and inflation is under control. (S, H, U)
The economy will slow down if consumer confidence does not stay high and inflation is not under control. (S, H, U)
- Symbolize the arguments and then use the truth table to decide whether they are valid.
- Not both the music program and the art curriculum will be cut. So if the music program is not cut, then the art curriculum will. (M, A)
- Retailers cannot have a good holiday season unless the consumer confidence in the economy is high. Currently the consumer confidence in the economy is high. We can predict that retailers can have a good holiday season. (R, C)
- If nations do not cut back the use of fossil fuel, then worldwide pollution will get worse. Nations are cutting back the use of fossil fuel. Therefore, worldwide pollution will not get worse. (C, W)
- People would not demand cultural assimilation if they embrace diversity. However, people do demand cultural assimilation. Consequently, they do not embrace diversity. (A, D)
- We can have world peace only if people are compassionate and not prejudiced. Since people are prejudiced; therefore, we cannot have world peace. (W, C, P)
- Germ-line genetic engineering should be banned if we do not want to have designer babies. If we want to have designer babies, there would be greater social inequality. So either germ-line genetic engineering should be banned or there would be greater social inequality. (G, D, I)
- The economy will suffer if we fall behind other nations in science, technology and innovation. If we do not improve mathematics education, we will fall behind other nations in science, technology and innovation. Therefore, the economy will suffer unless we improve mathematics education. (S, F, I)
- If we continue the acceleration of production and consumption, nations will fight over natural resources unless alternative technologies are developed. Since we are developing alternative technologies, nations won’t fight over natural resources. (C, F, D)
This work is licensed under a Creative Commons Attribution-Noncommercial-No Derivative Works 3.0 United States License. | http://www.butte.edu/~wmwu/iLogic/3.2/iLogic_3_2.html | 13 |
17 | The Proof Builder uses a logical system that closely resembles the calculus used by E. J. Lemmon in his book Beginning Logic (London: Chapman & Hall) and by Colin Allen in his book Logic Primer (Cambridge: MIT Press 1992). Some familiarity with either system or with natural deduction calculi will be required when using the Proof Builder. If you feel not sufficiently familiar with any of these systems, you might consider having a look at the other functions of the Gateway to Logic.
Propositional expressions consist of propositional variables, connectives and brackets.
A propositional variable (or, shorter, just "variable") is an uppercase letter, followed by zero or more digits or letters. Thus, the strings "P", "Q", "P1", "Q42", as well as "Proposition" or "IamAniceProposition" are (different) propositional variables. Note that most tasks accept propositions which start with a lowercase letter, too. Since the proof checker does not, you should probably always use capitalized propositional variables.
Semantically, as their name indicates, propositional variables signify a proposition (you can treat them as an abbreviation of an English proposition). For example, you can decide to use the propositional variable "P" as an abbreviation of the English proposition "It is raining", or the variable "Q42" as an abbreviation of "All pigs are pink".
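The stated rule for variable names is easy to capture with a regular expression; the following small Python sketch is only an illustration of the rule, not part of the Gateway itself:

import re

# An uppercase letter followed by zero or more letters or digits.
VARIABLE = re.compile(r"[A-Z][A-Za-z0-9]*")

for s in ["P", "Q42", "IamAniceProposition", "p1", "4You"]:
    print(s, bool(VARIABLE.fullmatch(s)))
# P True, Q42 True, IamAniceProposition True, p1 False, 4You False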
The strings "~", "&", "v", "->", "<->", and only these strings, are connectives. They are used for composing new propositions out of (usually two) existing ones.
The string "~" is the first and only exception to the rule that connectives compose two propositions into a new one: It needs only one existing proposition. In general, if foo is a proposition, by placing "~" in front of it you form a new proposition.
As indicated by the preceding chapter, "P" is a proposition. Thus, we can form a new proposition out of "P" by placing a "~" in front of it: "~P". Of course, we can repeat that step: Now knowing that "~P" is a proposition, we can create another proposition by just writing a "~" in front of it: "~~P".
As its name implies, the string "~" is used to deny the truth of a proposition. Hence, if we use "P" as an abbreviation for "It is raining", "~P" means (more or less) "It is not raining".
The string "&" is used to combine two existing propositions into another one by writing the "&" between them, but we have to enclose the expression formed so far in brackets.
So, if we use "~~P" and "Q42" (we already know that both are propositions), we easily form the new proposition "(~~P & Q42)".
The "&" is used to express confidence that both the sentence to its left as well as the sentence to its right are true.
The string "v" (the lowercase letter "v"), analogous to "&", combines two propositions. If foo and bar are propositions, the string "(foo v bar)" is a proposition, too.
Please note that if there is a propositional variable right before or after "v", the gateway requires you to separate the "v" from the variables by at least one space character. Although this might seem strange at the first glance, especially since in the case of the other connectives there is no such requirement, there is a simple reason. It is easiest demonstrated by a simple example: In the case of the proposition "PvQ v R", the gateway could not decide whether "PvQ" is one variable, namely "PvQ", or if "PvQ" is the disjunction of the two variables "P" and "Q".
The meaning of "v" is quite the same as the meaning of the English phrase "...or..., or both". Thus, "P" abbreviating "It is raining" and "Q" abbreviating "The sun is shining", "P v Q" signifies "It is raining, or the sun is shining, or both".
As in the case of "&" and "v", "->" combines two existing propositions into a new one. As usual, if foo and bar are two propositions, the expression "(foo -> bar)" is a proposition, too.
The connective "->" usually is translated by the phrase "if... then...". Although this is not wrong, it is sometimes misleading.
If foo and bar are two propositions, the expression "(foo <-> bar)" is a proposition, too.
The translation of "<->" is "...if and only if...", which often is abbreviated as "iff". "(foo <-> bar)" signifies that either both foo and bar are true or that both are false.
Although a strict formal language usually requires brackets, the gateway allows you to leave them out. In this case, it evaluates negations, then conjunctions, followed by disjunctions, conditionals and finally biconditionals. Two or more occurrences of the same connective are evaluated from left to right. For example, by these rules the unbracketed expression ~P & Q v R -> S should be read as ((~P & Q) v R) -> S. Hint: If you don't understand this paragraph, use brackets.
If some of the keys of your keyboard are broken or you can't find a certain key you would need to enter a connective, you need not despair. There are alternatives.
Remark: In fact it is desirable to use "|" instead of "v", since that makes the lexicographic analysis of your input deterministic. If you don't understand this remark, you should either use "|" instead of "v" or always place a space character before and after the "v".
Like propositional variables, predicates start with an uppercase letter, followed by zero or more letters or digits. The arguments of the predicate are enclosed by brackets and separated by commas. Examples are "Philosopher(Sokrates)" or "Loves(Foo,Bar)".
Individual constants look exactly like propositional variables, i.e. they consist of one uppercase letter, followed by zero or more letters or digits. Examples of individual constants are "Sokrates", "Platon" or "Frege". Individual constants each denote exactly one individual - hence the name.
Individual variables consist of one lowercase letter, followed by zero or more letters or digits. Individual variables signify nothing.
The strings "exist" and "all" represent existential and universal quantification. The bound variable and the quantified expression, enclosed in brackets and separated by a comma, follow the quantifier.
"All pigs are pink" could be translated as "all( x, Pig(x)->Pink(x) )". "Some pigs are not pink" would be "exist( x, Pig(x) & ~Pink(x) )".
The syntax of proofs, too, resembles Lemmon's notation. The following example shows a proof which the gateway accepts without modifications.
1      (1) P->Q      A
2      (2) ~Q        A
3      (3) P         A
1,3    (4) Q         1,3MPP
1,2,3  (5) Q & ~Q    2,4&I
1,2    (6) ~P        3,5RAA
1      (7) ~Q -> ~P  2,6CP
Explaining the deduction rules in detail would lead us a bit too far afield. All I can provide here is an overview: | http://logik.phl.univie.ac.at/~chris/gateway/proof-syntax.html | 13
146 | The word “argument” can be used to designate a dispute or a fight, or it can be used more technically. The focus of this article is on understanding an argument as a collection of truth-bearers (that is, the things that bear truth and falsity, or are true and false) some of which are offered as reasons for one of them, the conclusion. This article takes propositions rather than sentences or statements or utterances to be the primary truth bearers. The reasons offered within the argument are called “premises”, and the proposition that the premises are offered for is called the “conclusion”. This sense of “argument” diverges not only from the above sense of a dispute or fight but also from the formal logician’s sense according to which an argument is merely a list of statements, one of which is designated as the conclusion and the rest of which are designated as premises regardless of whether the premises are offered as reasons for believing the conclusion. Arguments, as understood in this article, are the subject of study in critical thinking and informal logic courses in which students usually learn, among other things, how to identify, reconstruct, and evaluate arguments given outside the classroom.
Arguments, in this sense, are typically distinguished from both implications and inferences. In asserting that a proposition P implies proposition Q, one does not thereby offer P as a reason for Q. The proposition frogs are mammals implies that frogs are not reptiles, but it is problematic to offer the former as a reason for believing the latter. If an arguer offers an argument in order to persuade an audience that the conclusion is true, then it is plausible to think that the arguer is inviting the audience to make an inference from the argument’s premises to its conclusion. However, an inference is a form of reasoning, and as such it is distinct from an argument in the sense of a collection of propositions (some of which are offered as reasons for the conclusion). One might plausibly think that a person S infers Q from P just in case S comes to believe Q because S believes that P is true and because S believes that the truth of P justifies belief that Q. But this movement of mind from P to Q is something different from the argument composed of just P and Q.
The characterization of argument in the first paragraph requires development since there are forms of reasoning such as explanations which are not typically regarded as arguments even though (explanatory) reasons are offered for a proposition. Two principal approaches to fine-tuning this first-step characterization of arguments are what may be called the structural and pragmatic approaches. The pragmatic approach is motivated by the view that the nature of an argument cannot be completely captured in terms of its structure. In what follows, each approach is described, and criticism is briefly entertained. Along the way, distinctive features of arguments are highlighted that seemingly must be accounted for by any plausible characterization. The classification of arguments as deductive, inductive, and conductive is discussed in section 3.
Not any group of propositions qualifies as an argument. The starting point for structural approaches is the thesis that the premises of an argument are reasons offered in support of its conclusion (for example, Govier 2010, p.1, Bassham, G., W. Irwin, H. Nardone, J. Wallace 2005, p.30, Copi and Cohen 2005, p.7; for discussion, see Johnson 2000, p.146ff ). Accordingly, a collection of propositions lacks the structure of an argument unless there is a reasoner who puts forward some as reasons in support of one of them. Letting P1, P2, P3, …, and C range over propositions and R over reasoners, a structural characterization of argument takes the following form.
A collection of propositions, P1, …, Pn, C, is an argument if and only if there is a reasoner R who puts forward the Pi as reasons in support of C.
The structure of an argument is not a function of the syntactic and semantic features of the propositions that compose it. Rather, it is imposed on these propositions by the intentions of a reasoner to use some as support for one of them. Typically in presenting an argument, a reasoner will use expressions to flag the intended structural components of her argument. Typical premise indicators include: “because”, “since”, “for”, and “as”; typical conclusion indicators include “therefore”, “thus”, “hence”, and “so”. Note well: these expressions do not always function in these ways, and so their mere use does not necessitate the presence of an argument.
Different accounts of the nature of the intended support offered by the premises for the conclusion in an argument generate different structural characterizations of arguments (for discussion see Hitchcock 2007). Plausibly, if a reasoner R puts forward premises in support of a conclusion C, then (i)-(iii) obtain. (i) The premises represent R’s reasons for believing that the conclusion is true and R thinks that her belief in the truth of the premises is justified. (ii) R believes that the premises make C more probable than not. (iii) (a) R believes that the premises are independent of C ( that is, R thinks that her reasons for the premises do not include belief that C is true), and (b) R believes that the premises are relevant to establishing that C is true. If we judge that a reasoner R presents an argument as defined above, then by the lights of (i)-(iii) we believe that R believes that the premises justify belief in the truth of the conclusion. In what immediately follows, examples are given to explicate (i)-(iii).
A: John is an only child.
B: John is not an only child; he said that Mary is his sister.
If B presents an argument, then the following obtain. (i) B believes that the premise ( that is, Mary is John’s sister) is true, B thinks this belief is justified, and the premise is B’s reason for maintaining the conclusion. (ii) B believes that John said that Mary is his sister makes it more likely than not that John is not an only child, and (iii) B thinks that that John said that Mary is his sister is both independent of the proposition that Mary is John’s sister and relevant to confirming it.
A: The Democrats and Republicans don’t seem willing to compromise.
B: If the Democrats and Republicans are not willing to compromise, then the U.S. will go over the fiscal cliff.
B’s assertion of a conditional does not require that B believe either the antecedent or consequent. Therefore, it is unlikely that B puts forward the Democrats and Republicans are not willing to compromise as a reason in support of the U.S. will go over the fiscal cliff, because it is unlikely that B believes either proposition. Hence, it is unlikely that B’s response to A has the structure of an argument, because (i) is not satisfied.
A: Doctor B, what is the reason for my uncle’s muscular weakness?
B: The results of the test are in. Even though few syphilis patients get paresis, we suspect that the reason for your uncle’s paresis is the syphilis he suffered from 10 years ago.
Dr. B offers reasons that explain why A’s uncle has paresis. It is unreasonable to think that B believes that the uncle’s being a syphilis victim makes it more likely than not that he has paresis, since B admits that having syphilis does not make it more likely than not that someone has (or will have) paresis. So, B’s response does not contain an argument, because (ii) is not satisfied.
A: I don’t think that Bill will be at the party tonight.
B: Bill will be at the party, because Bill will be at the party.
Suppose that B believes that Bill will be at the party. Trivially, the truth of this proposition makes it more likely than not that he will be at the party. Nevertheless, B is not presenting an argument. B’s response does not have the structure of an argument, because (iiia) is not satisfied. Clearly, B does not offer a reason for Bill will be at the party that is independent of this. Perhaps, B’s response is intended to communicate her confidence that Bill will be at the party. By (iiia), a reasoner R puts forward Sasha Obama has a sibling in support of Sasha is not an only child only if R’s reasons for believing do not include R’s belief that is true. If R puts forward in support of and, say, erroneously believes that the former is independent of the latter, then R’s argument would be defective by virtue of being circular. Regarding (iiib), that Obama is U.S. President entails that the earth is the third planet from the sun or it isn’t, but it is plausible to suppose that the former does not support the latter because it is irrelevant to showing that the earth is the third planet from the sun or it isn’t is true.
Premises offered in support of a conclusion are either convergent or divergent. This difference marks a structural distinction between arguments.
Tom is happy only if he is playing guitar.
Tom is not playing guitar.
∴ Tom is not happy.
Suppose that a reasoner R offers these two premises as reasons in support of the conclusion. The argument is presented in what is called standard form; the premises are listed first and a solid line separates them from the conclusion, which is prefaced by "∴". This symbol means "therefore". The premises are convergent because they do not support the conclusion independently of one another, that is, they support the conclusion jointly. It is unreasonable to think that R offers them individually, as opposed to collectively, as reasons for the conclusion. The following representation of the argument depicts the convergence of the premises.
Combining the two premises with the plus sign and underscoring them indicates that they are convergent. The arrow indicates that they are offered in support of the conclusion. To see a display of divergent premises, consider the following.
Tom said that he didn’t go to Samantha’s party.
No one at Samantha’s party saw Tom there.
∴ Tom did not attend Samantha’s party.
These premises are divergent, because each is a reason that supports the conclusion independently of the other. The diagram below represents this.
An extended argument is an argument with at least one premise that a reasoner attempts to support explicitly. Extended arguments are more structurally complex than ones that are not extended. Consider the following.
The keys are either in the kitchen or the bedroom. The keys are not in the kitchen. I did not find the keys in the kitchen. So, the keys must be in the bedroom. Let’s look there!
The argument in standard form may be portrayed as follows:
I just searched the kitchen and I did not find the keys.
∴ The keys are not in the kitchen.
The keys are either in the kitchen or the bedroom.
∴ The keys are in the bedroom.
An enthymeme is an argument which is presented with at least one component that is suppressed.
A: I don’t know what to believe regarding the morality of abortion.
B: You should believe that abortion is immoral. You’re a Catholic.
That B puts forward A is a Catholic in support of A should believe that abortion is immoral suggests that B implicitly puts forward all Catholics should believe that abortion is immoral in support of that same conclusion. This implicit proposition may plausibly be regarded as a suppressed premise of B’s argument. Note that the explicit premise and the suppressed premise are convergent. A premise that is suppressed is never a reason for a conclusion independent of another explicitly offered for that conclusion.
There are two main criticisms of structural characterizations of arguments. One criticism is that they are too weak because they turn non-arguments such as explanations into arguments.
A: Why did this metal expand?
B: It was heated and all metals expand when heated.
B offers explanatory reasons for the explanandum (what is explained): this metal expanded. It is plausible to see B offering these explanatory reasons in support of the explanandum. The reasons B offers jointly support the truth of the explanandum, and thereby show that the expansion of the metal was to be expected. It is in this way that B’s reasons enable A to understand why the metal expanded.
The second criticism is that structural characterizations are too strong. They rule out as arguments what intuitively seem to be arguments.
A: Kelly maintains that no explanation is an argument. I don’t know what to believe.
B: Neither do I. One reason for her view may be that the primary function of arguments, unlike explanations, is persuasion. But I am not sure that this is the primary function of arguments. We should investigate this further.
B offers a reason, the primary function of arguments, unlike explanations, is persuasion, for the thesis no explanation is an argument. Since B asserts neither the reason nor the thesis, B does not put forward the former in support of the latter. Hence, by the above account, B’s reasoning does not qualify as an argument. A contrary view is that arguments can be used in ways other than showing that their conclusions are true. For example, arguments can be constructed for purposes of inquiry and as such can be used to investigate a hypothesis by seeing what reasons might be given to support a given proposition (see Meiland 1989 and Johnson and Blair 2006, p.10). Such arguments are sometimes referred to as exploratory arguments. On this approach, it is plausible to think that B constructs an exploratory argument [exercise for the reader: identify B’s suppressed premise].
Briefly, in defense of the structuralist account of arguments, one response to the first criticism is to bite the bullet and follow those who think that at least some explanations qualify as arguments (see Thomas 1986 who argues that all explanations are arguments). Given that there are exploratory arguments, the second criticism motivates either liberalizing the concept of support that premises may provide for a conclusion (so that, for example, B may be understood as offering her reason in support of the thesis) or dropping the notion of support altogether in the structural characterization of arguments (for example, a collection of propositions is an argument if and only if a reasoner offers some as reasons for one of them. See Sinnott-Armstrong and Fogelin 2010, p.3).
The pragmatic approach is motivated by the view that the nature of an argument cannot be completely captured in terms of its structure. In contrast to structural definitions of arguments, pragmatic definitions appeal to the function of arguments. Different accounts of the purposes arguments serve generate different pragmatic definitions of arguments. The following pragmatic definition appeals to the use of arguments as tools of rational persuasion (for definitions of argument that make such an appeal, see Johnson 2000, p. 168; Walton 1996, p. 18ff; Hitchcock 2007, p.105ff)
A collection of propositions is an argument if and only if there is a reasoner R who puts forward some of them (the premises) as reasons in support of one of them (the conclusion) in order to rationally persuade an audience of the truth of the conclusion.
One advantage of this definition over the previously given structural one is that it offers an explanation why arguments have the structure they do. In order to rationally persuade an audience of the truth of a proposition, one must offer reasons in support of that proposition. The appeal to rational persuasion is necessary to distinguish arguments from other forms of persuasion such as threats. One question that arises is: What obligations does a reasoner incur by virtue of offering supporting reasons for a conclusion in order to rationally persuade an audience of the conclusion? One might think that such a reasoner should be open to criticisms and obligated to respond to them persuasively (See Johnson 2000 p.144 et al, for development of this idea). By appealing to the aims that arguments serve, pragmatic definitions highlight the acts of presenting an argument in addition to the arguments themselves. The field of argumentation, an interdisciplinary field that includes rhetoric, informal logic, psychology, and cognitive science, highlights acts of presenting arguments and their contexts as topics for investigation that inform our understanding of arguments (see Houtlosser 2001 for discussion of the different perspectives of argument offered by different fields).
For example, the acts of explaining and arguing—in sense highlighted here—have different aims. Whereas the act of explaining is designed to increase the audience’s comprehension, the act of arguing is aimed at enhancing the acceptability of a standpoint. This difference in aim makes sense of the fact that in presenting an argument the reasoner believes that her standpoint is not yet acceptable to her audience, but in presenting an explanation the reasoner knows or believes that the explanandum is already accepted by her audience (See van Eemeren and Grootendorst 1992, p.29, and Snoeck Henkemans 2001, p.232). These observations about the acts of explaining and arguing motivate the above pragmatic definition of an argument and suggest that arguments and explanations are distinct things. It is generally accepted that the same line of reasoning can function as an explanation in one dialogical context and as an argument in another (see Groarke and Tindale 2004, p. 23ff for an example and discussion). Eemeren van, Grootendorst, and Snoeck Henkemans 2002 delivers a substantive account of how the evaluation of various types of arguments turns on considerations pertaining to the dialogical contexts within which they are presented and discussed.
Note that, since the pragmatic definition appeals to the structure of propositions in characterizing arguments, it inherits the criticisms of structural definitions. In addition, the question arises whether it captures the variety of purposes arguments may serve. It has been urged that arguments can aim at engendering any one of a full range of attitudes towards their conclusions (for example, Pinto 1991). For example, a reasoner can offer premises for a conclusion C in order to get her audience to withhold assent from C, suspect that C is true, believe that it is merely possible that C is true, or to be afraid that C is true.
The thought here is that these are alternatives to convincing an audience of the truth of C. A proponent of a pragmatic definition of argument may grant that there are uses of arguments not accounted for by her definition, and propose that the definition is stipulative. But then a case needs to be made why theorizing about arguments from a pragmatic approach should be anchored to such a definition when it does not reflect all legitimate uses of arguments. Another line of criticism of the pragmatic approach is its rejecting that arguments themselves have a function (Goodwin 2007) and arguing that the function of persuasion should be assigned to the dialogical contexts in which arguments take place (Doury 2011).
Arguments are commonly classified as deductive or inductive (for example, Copi, I. and C. Cohen 2005, Sinnott-Armstrong and Fogelin 2010). A deductive argument is an argument that an arguer puts forward as valid. For a valid argument, it is not possible for the premises to be true with the conclusion false. That is, necessarily if the premises are true, then the conclusion is true. Thus we may say that the truth of the premises in a valid argument guarantees that the conclusion is also true. The following is an example of a valid argument: Tom is happy only if the Tigers win, the Tigers lost; therefore, Tom is definitely not happy.
A step-by-step derivation of the conclusion of a valid argument from its premises is called a proof. In the context of a proof, the given premises of an argument may be viewed as initial premises. The propositions produced at the steps leading to the conclusion are called derived premises. Each step in the derivation is justified by a principle of inference. Whether the derived premises are components of a valid argument is a difficult question that is beyond the scope of this article.
An inductive argument is an argument that an arguer puts forward as inductively strong. In an inductive argument, the premises are intended only to be so strong that, if they were true, then it would be unlikely, although possible, that the conclusion is false. If the truth of the premises makes it unlikely (but not impossible) that the conclusion is false, then we may say that the argument is inductively strong. The following is an example of an inductively strong argument: 97% of the Republicans in town Z voted for McX, Jones is a Republican in town Z; therefore, Jones voted for McX.
In an argument like this, an arguer often will conclude “Jones probably voted for McX” instead of “Jones voted for McX,” because they are signaling with the word “probably” that they intend to present an argument that is inductively strong but not valid.
In order to evaluate an argument it is important to determine whether or not it is deductive or inductive. It is inappropriate to criticize an inductively strong argument for being invalid. Based on the above characterizations, whether an argument is deductive or inductive turns on whether the arguer intends the argument to be valid or merely inductively strong, respectively. Sometimes the presence of certain expressions such as ‘definitely’ and ‘probably’ in the above two arguments indicates the relevant intentions of the arguer. Charity dictates that an invalid argument which is inductively strong be evaluated as an inductive argument unless there is clear evidence to the contrary.
Conductive arguments have been put forward as a third category of arguments (for example, Govier 2010). A conductive argument is an argument whose premises are divergent; the premises count separately in support of the conclusion. If one or more premises were removed from the argument, the degree of support offered by the remaining premises would stay the same. The previously given example of an argument with divergent premises is a conductive argument. The following is another example of a conductive argument. It most likely won’t rain tomorrow. The sky is red tonight. Also, the weather channel reported a 30% chance of rain for tomorrow.
The primary rationale for distinguishing conductive arguments from deductive and inductive ones is as follows. First, the premises of conductive arguments are always divergent, but the premises of deductive and inductive arguments are never divergent. Second, the evaluation of arguments with divergent premises requires not only that each premise be evaluated individually as support for the conclusion, but also the degree to which the premises support the conclusion collectively must be determined. This second consideration mitigates against treating conductive arguments merely as a collection of subarguments, each of which is deductive or inductive. The basic idea is that the support that the divergent premises taken together provide the conclusion must be considered in the evaluation of a conductive argument. With respect to the above conductive argument, the sky is red tonight and the weather channel reported a 30% chance of rain for tomorrow are offered together as (divergent) reasons for It most likely won’t rain tomorrow. Perhaps, collectively, but not individually, these reasons would persuade an addressee that it most likely won’t rain tomorrow.
A group of propositions constitutes an argument only if some are offered as reasons for one of them. Two approaches to identifying the definitive characteristics of arguments are the structural and pragmatic approaches. On both approaches, whether an act of offering reasons for a proposition P yields an argument depends on what the reasoner believes regarding both the truth of the reasons and the relationship between the reasons and P. A typical use of an argument is to rationally persuade its audience of the truth of the conclusion. To be effective in realizing this aim, the reasoner must think that there is real potential in the relevant context for her audience to be rationally persuaded of the conclusion by means of the offered premises. What, exactly, this presupposes about the audience depends on what the argument is and the context in which it is given. An argument may be classified as deductive, inductive, or conductive. Its classification into one of these categories is a prerequisite for its proper evaluation.
Copyright © The Internet Encyclopedia of Philosophy. All rights reserved. | http://www.iep.utm.edu/argument/print/ | 13 |
19 | An expression, also called an operation, is a technique of combining two or more values or data fields, to either modify an existing value or to produce a new value. Based on this, to create an expression or to perform an operation, you need at least one value or field and one symbol. A value or field involved in an operation is called an operand. A symbol involved in an operation is called an operator.
A unary operator is one that uses only one operand. An operator is referred to as binary if it operates on two operands.
A constant is a value that does not change. The constants you will be using in your databases have already been created and are built into Microsoft Access. Visual Basic for Applications (VBA), the version of Microsoft Visual Basic that ships with Microsoft Access, also provides many constants; however, even if you are aware of them, you will not be able to use those constants, because Microsoft Access does not inherently "understand" them. For this reason, we will mention here only the constants you can use when building ordinary expressions.
The algebraic numbers you have been using all the time are constants because they never change. Examples of constant numbers are 12, 0, 1505, or 88146. Therefore, any number you can think of is a constant. Every letter of the alphabet is a constant and is always the same. Examples of constant letters are d, n, c. Some characters on your keyboard represent symbols that are neither letters nor digits. These are constants too. Examples are &, |, @, or !. The names of people are constants too. In fact, any name you can think of is a constant.
In order to provide a value to an existing field, you can use an operator called assignment and its symbol is "=". It uses the following syntax:
Field/Object = Value/Field/Object
The operand on the left side of the = operator is referred to as the left value. The operand on the right side of the operator is referred to as the right value. It can be a constant, a value, an expression, the name of a field, or an object.
In some other cases, the assignment operator will be part of a longer expression. We will see examples as we move on.
An algebraic value is considered positive if it is greater than 0. As a mathematical convention, when a value is positive, you do not need to express it with the + operator. Just writing the number without any symbol signifies that the number is positive. Therefore, the numbers +4, +228, and +90335 can be, and are better, expressed as 4, 228, or 90335. Because the value does not display a sign, it is referred to as unsigned.
A value is referred to as negative if it is less than 0. To express a negative value, it must be appended with a sign, namely the - symbol. Examples are -12, -448, -32706. A value accompanied by - is referred to as negative. The - sign must be typed on the left side of the number it is used to negate.
Remember that if a number does not have a sign, it is considered positive. Therefore, whenever a number is negative, it must have a - sign. If you want to change a value from positive to negative, you can just add a - sign to its left. Similarly, if you want to negate the value of a field and assign it to another field, you can type the - operator on its left when assigning it.
Besides a numeric value, the value of a field or an object can also be expressed as being negative by typing a - sign to its left. For example, -txtLength means the value of the control named txtLength must be made negative.
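For instance, using the txtLength control mentioned above, the following expression could be entered in the Control Source property of another text box to display its value negated (this is only an illustrative sketch):
= -[txtLength]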
The addition is used to add one value or expression to another. It is performed using the + symbol and its syntax is:
Value1 + Value2
The addition allows you to add two numbers such as 12 + 548 or 5004.25 + 7.63
After performing the addition, you get a result. You can provide such a result to another field of a form or report. This can be done using the assignment operator. The syntax used would be:
= Value1 + Value2
To use the result of this type of operation, you can write it in the Control Source property of the field that would show the result.
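As an illustration, suppose a form has fields named SubTotal and ShippingFee (hypothetical names used only for this example). The Control Source of a text box that should display their sum could then be set to:
= [SubTotal] + [ShippingFee]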
Subtraction is performed by retrieving one value from another value. This is done using the - symbol. The syntax used is:
Value1 - Value2
The value of Value2 is subtracted from the value of Value1.
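As a sketch, assuming hypothetical fields named SubTotal and Discount, a text box could display their difference with the following Control Source:
= [SubTotal] - [Discount]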
Multiplication allows adding one value to itself a certain number of times, set by the second value. The multiplication is performed with the * sign which is typed with Shift + 8. Here is an example:
Value1 * Value2
During the operation, Value1 is repeatedly added to itself, Value2 times. The result can be assigned to the Control Source of a field.
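For example, assuming hypothetical fields named UnitPrice and Quantity, a text box could display their product with the following Control Source:
= [UnitPrice] * [Quantity]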
The division is used to get the fraction of one number in terms of another number. Microsoft Access provides two types of results for the division operation. If you want the result of the operation to be a natural number, called an integer, use the backslash "\" as the operator. Here is an example:
Value1 \ Value2
This operation can be performed on numbers with or without decimal parts. After the operation, the result is a natural number.
The second type of division results in a decimal number. It is performed with the forward slash "/". Its syntax is:
Value1 / Value2
After the operation is performed, the result is a decimal number.
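As an illustration, assuming hypothetical fields named TotalAmount and NumberOfItems, the following Control Source expressions would display the decimal quotient and the integer quotient, respectively:
= [TotalAmount] / [NumberOfItems]
= [TotalAmount] \ [NumberOfItems]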
Exponentiation is the ability to raise a number to the power of another number. This operation is performed using the ^ operator (Shift + 6). Mathematically, the operation is written as y with the exponent x as a superscript; in Microsoft Access, this formula is written as y^x and means the same thing. Either or both y and x can be values or expressions, but they must carry valid values that can be evaluated.
When the operation is performed, the value of y is raised to the power of x. You can display the result of such an operation in a field using the assignment operator as follows:
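For instance, assuming a control named txtSide that holds the length of a square's side (a hypothetical name used only for illustration), the square's area could be displayed with:
= [txtSide] ^ 2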
The division operation gives a number with or without a decimal part, which is fine in some circumstances. Sometimes, though, you will want the value that remains after a division produces a natural (whole) result. The remainder operation is performed with the keyword Mod. Its syntax is:
Value1 Mod Value2
The result of the operation can be used as you see fit or you can display it in a control using the assignment operator as follows:
= Value1 Mod Value2
In previous lessons, we learned that a property is something that characterizes or describes an object. For example, users mainly use a text box either to read the text it contains or to change its content, by changing the existing text or by entering new text. Therefore, the text the user types in a text box is a property of the text box. To access a property of an object, type the name of the object, followed by a period, followed by the name of the property you need. The syntax used is:
ObjectName.PropertyName
The property you are trying to use must be a valid property of the object. In Microsoft Access, to use a property of an object, you must know, either based on experience or with certainty, that the property exists. Even so, unfortunately, not all properties are available in Microsoft Access.
To name our objects so far, in some cases we used a name made of one word without space. In some other cases, we used spaces or special characters in a name. This is possible because Microsoft Access allows a great level of flexibility when it comes to names used in a database. Unfortunately, when such names get involved in an expression, there would be an error or the result would be unpredictable.
To make sure Microsoft Access can recognize any name in an expression, you should include it between an opening square bracket "[" and a closing square bracket "]". Examples are [© Year], [Soc. Sec. #], or [Date of Birth]. In the same way, even if the name is in one word, to be safe, you should (always) include it in square brackets. Examples are [Country], [FirstName], or [SocialSecurityNumber]. Therefore, the =txtLength expression that we referred to can be written =[txtLength].
The objects used in Microsoft Access are grouped in categories called collections. For example, the forms belong to a collection of objects called Forms. Consequently, all forms of your database project belong to the Forms collection. The reports belong to a collection of objects called Reports and all reports of your database belong to the Reports collection. The data fields belong to a collection called Controls and all controls of a form or a report of your database belong to the Controls collection.
To call a particular object in an expression, use the exclamation point operator "!". To do this, type the name of the collection followed by the ! operator, followed by the name of the object you want to access. For example, on a form, if you have a text box called txtLength and you want to refer to it, you can type [Controls]![txtLength]. Therefore, the =txtLength expression that we referred to can be written =Controls!txtLength, and =[txtLength] can be written =Controls![txtLength] or =[Controls]![txtLength].
The name of the collection is used to perform what is referred to as qualification: the name of the collection "qualifies" the object. In other words, it helps the database engine locate the object by referring to its collection. This is useful in case two objects of different categories are being referred to.
In a database, Microsoft Access allows two objects to have the same name, as long as they do not belong to the same category. For example, you cannot have two forms called Employees in the same database. In the same way, you cannot have two reports named Contracts in the same database. On the other hand, you can have a form named Employees and a report named Employees in the same database. For this reason, when creating expressions, you are strongly advised to qualify the object you are referring to, using its collection. Therefore, when an object named Employees is referred to in an expression, you should specify its collection, using the ! operator. An example would be Forms!Employees, which means the Employees form of the Forms collection. If the name of the form is made of more than one word, or simply to be safe, you should use square brackets to delimit the name of the form. The form would be accessed with Forms![Employees].
To refer to a control placed on a form or report, you can type the Forms collection, followed by the ! operator, followed by the name of the form, followed by the ! operator and followed by the name of the control. An example would be Forms!People!LastName. Using the assignment operator that we introduced earlier, if on a form named People, you have a control named LastName and you want to assign its value to another control named FullName, in the Control Source property of the FullName field, you can enter one of the following expressions:
=LastName
=[LastName]
=Controls!LastName
=[Controls]![LastName]
=Forms!People!LastName
=[Forms]![People]![LastName]
These expressions would produce the same result.
Parentheses are used in two main circumstances: in expressions (or operations) or in functions. The parentheses in an expression help to create sections. This regularly occurs when more than one operator is used in an operation. Consider the following operation:
8 + 3 * 5
The result of this operation depends on whether you want to add 8 to 3 then multiply the result by 5 or you want to multiply 3 by 5 and then add the result to 8. Parentheses allow you to specify which operation should be performed first in a multi-operator operation. In our example, if you want to add 8 to 3 first and use the result to multiply it by 5, you would write (8 + 3) * 5. This would produce 55. On the other hand, if you want to multiply 3 by 5 first then add the result to 8, you would write 8 + (3 * 5). This would produce 23.
As you can see, the results are different when parentheses are used in an operation that involves various operators. This behavior is governed by a rule called operator precedence, which determines which operation is performed before which; parentheses allow you to control the sequence of these operations.
A function is a task that is performed to produce a result on a table, a form, or a report. It is like an operation or an expression, with the main difference that someone else created it and you can simply use it. For example, instead of using the addition operator "+" to add two values, you could use a function.
In practice, you cannot create a function in Microsoft Access itself. You can only use those that have already been created and that exist in the product. These are referred to as built-in functions.
If you had to create a function (remember that we cannot create a function in Microsoft Access; the following sections are only hypothetical but illustrative of the subject of a function), a formula you would use is:
FunctionName()
End
This syntax is very simplistic but indicates that the minimum piece of information a function needs is a name. The name allows you to refer to this function in other parts of the database. The name of the function is followed by parentheses. As stated already, a function is meant to perform a task. This task would be defined or described in the body of the function. In our simple syntax, the body of the function would start just under its name after the parentheses and would stop just above the End word. The person who creates a function also decides what the function can do. Following our simple formula, if we wanted a function that can open Solitaire, it could appear as follows:
FunctionExample()
    Open Solitaire
End
Once a function has been created, it can be used. Using a function is referred to as calling it. To call a simple function like the above FunctionExample, you would just type its name.
The person who creates a function also decides what kind of value the function can return. For example, if you create a function that performs a calculation, the function may return a number. If you create another function that combines a first name and a last name, you can make the function return a string that represents a full name.
When asked to perform its task, a function may need one or more values to work with. If a function needs a value, such a value is called a parameter. The parameter is provided in the parentheses of the function. The formula used to create such a function would be:
ReturnValue FunctionName(Parameter)
End
Once again, the body of the function would be used to define what the function does. For example, if you were writing a function that multiplies its parameter by 12.58, it would appear almost as follows:
Decimal FunctionName(Parameter)
    Parameter * 12.58
End
While a certain function may need one parameter, another function would need many of them. The number and types of parameters of a function depend on its goal. When a function uses more than one parameter, a comma separates them in the parentheses. The syntax used is:
ReturnValue FunctionName(Parameter1, Parameter2, Parameter_n)
End
If you were creating a function that adds its two parameters, it would appear as follows:
NaturalNumber AddTwoNumbers(Parameter1, Parameter2)
    Parameter1 + Parameter2
End
Once a function has been created, it can be used in other parts of the database. Once again, using a function is referred to as calling it. If a function is taking one or more parameters, it is called differently than a function that does not take any parameter. We saw already how you could call a function that does not take any parameter and assign it to a field using its Control Source. If a function is taking one parameter, when calling it, you must provide a value for the parameter, otherwise the function would not work (when you display the form or report, Microsoft Access would display an error). When you call a function that takes a parameter, the parameter is called an argument. Therefore, when calling the function, we would say that the function takes one argument. In the same way, a function with more than one parameter must be called with its number of arguments.
To call a function that takes an argument, type the name of the function followed by the opening parenthesis "(", followed by the value (or the field name) that will be the argument, followed by a closing parenthesis ")". The argument you pass can be a constant number. Here is an example:
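Using the hypothetical FunctionName function sketched earlier, which multiplies its argument by 12.58, a call with a constant argument could look like this:
FunctionName(1250.50)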
The value passed as argument can be the name of an existing field. The rule to respect is that, when Microsoft Access is asked to perform the task(s) of the function, the argument must provide, or be ready to provide, a valid value. As done with the argument-less function, when calling this type of function, you can assign it to a field by using the assignment operator in its Control Source property. Here is an example:
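For example, assuming the hypothetical FunctionName function and the txtLength control used earlier in this lesson (both only illustrations), the Control Source of a text box could be set to:
= FunctionName([txtLength])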
If the function is taking more than one argument, to call it, type the values for the arguments, in the exact order indicated, separated from each other by a comma. As for the other functions, the call can be assigned to a field in its Control Source. All the arguments can be constant values, all of them can be the names of fields or objects, or some arguments can be passed as constants and others as names of fields. Here is an example:
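As a sketch, using the hypothetical AddTwoNumbers function defined earlier, one argument could be a field (here an assumed field named Quantity) and the other a constant:
= AddTwoNumbers([Quantity], 5)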
We have mentioned that, when calling a function that takes an argument, you must supply a value for the argument. There is an exception. Depending on how the function was created, it may be configured to use its own value if you fail, forget, or choose not to provide one. This is known as the default argument. Not all functions follow this rule and you would know either by checking the documentation of that function or through experience.
If a function that takes one argument has a default value for it, then you do not have to supply a value when calling that function. Such an argument is considered optional. Whenever in doubt, you should provide your own value for the argument. That way, you would not only be on the safe side but also you would know with certainty what value the function had to deal with.
If a function takes more than one argument, some argument(s) may have default values while some others do not. The arguments that have default values can be used and you do not have to supply them.
To assist you with writing expressions or calling a (built-in) function and reduce the likelihood of a mistake, Microsoft Access is equipped with a convenient dialog box named the Expression Builder.
The Expression Builder is used to create an expression or call a function that would be used as the Control Source of a field.
To access the Expression Builder, open the Property Sheet for the control that will use the expression or function, and click its ellipsis button. This would call the Expression Builder dialog box.
Like every regular dialog box, the Expression Builder starts on top with its title bar that displays its caption and its system Close button. Unlike a regular dialog box, the Expression Builder is resizable: you can enlarge, narrow, heighten, or shorten it, to a certain extent.
Under the title bar, there is a label followed by a link: Calculated Control. If you click that link, a Help window would come up:
Under the link, there is an example of an expression.
The main upper area of the Expression Builder shows a rectangular text box with a white background. It is used to show the current expression when you have written it. If you already know what you want, you can directly type an expression, a function, or a combination of those.
The right section of the Expression Builder displays a few buttons. After creating an expression, to submit it, you click OK. To abandon whatever you have done, you can click Cancel or press Esc. To get help while using the Expression Builder, you can click Help. To show a reduced height of the Expression Builder, click the << Less button. The button would change to More >>:
To show the whole dialog box, click More >>.
Under the text box, there are three boxes. The left list displays some categories of items. Some items in the left list appear with a + button. To access an object, expand its node collection by double-clicking its corresponding button or clicking its + button. After you have expanded a node, a list would appear. In some cases, such as the Forms node, another list of categories may appear.
To access an object of a collection, in the left list, you can click its node. This would fill the middle list with some items that would of course depend on what was selected in the left list. Here is example:
The top node is the name of the form or report on which you are working. Under that name is the Functions node. To access a function, first expand the Functions node. To use one of the Microsoft Access built-in functions, in the left list, click Built-In Functions. The middle list would display categories of functions. If you see the function you want to use, you can use it. If the right list is too long and you know the type of the function you are looking for, you can click its category in the middle list and locate it in the right list.
Once you see the function you want in the right list, you can double-click it. If it is a parameter-less function, its name and parentheses would be added to the expression area:
If the function is configured to take arguments, its name and a placeholder for each argument would be added to the expression area:
You must then replace each placeholder with the appropriate value or expression. To assist you with functions, in its bottom section, the Expression Builder shows the syntax of the function, including its name and the name(s) of the argument(s). To get more information about a function, click its link in the bottom section of the Expression Builder. A help window would display. Here is an example:
Besides the built-in functions, if you had created a function in the current database, in the left list, click the name of the database, its function(s) would display in the middle list.
Depending on the object that was clicked in the left list, the middle list can display the Windows controls that are part of, or are positioned on, the form or report. For example, if you click the name of a form in the left list, the middle list would display the names of all the controls on that form. To use one of the controls on the object, you can double-click the item in the middle list. When you do, the name of the control would appear in the expression area.
Some items in the middle list hold their own list of items. To show that list, you must click an item in the middle list. For example, to access the properties of a control positioned on a form, in the left list, expand the Forms node and expand All Forms:
Then, in the left list, click the name of a form. This would cause the middle list to display the controls of the selected form. To access the properties of the control, click its name in the middle list. The right list would show its properties:
As mentioned already, after creating the expression, if you are satisfied with it, click OK. | http://www.functionx.com/access/Lesson16.htm | 13 |
15 | Version 1.89.J01 - 3 April 2012
Units is a program for computations on values expressed in terms of different measurement units. It is an advanced calculator that takes care of the units. You can try it here:
Suppose you want to compute the mass, in pounds, of water that fills to the depth of 7 inches a rectangular area 5 yards by 4 feet 3 inches. You recall from somewhere that 1 liter of water has the mass of 1 kilogram. To obtain the answer, you multiply the water's volume by its specific mass. Enter this after You have above:
5 yards * (4 feet + 3 in) * 7 in * 1 kg/liter
then enter pounds after You want and hit the Enter key or press the Compute button. The following will appear in the result area:
5 yards * (4 feet + 3 in) * 7 in * 1 kg/liter = 2321.5398 pounds
You did not have to bother about conversions between yards, feet, inches, liters, kilograms, and pounds. The program did it all for you behind the scenes.
Units supports complicated expressions and a number of mathematical functions, as well as units defined by linear, nonlinear, and piecewise-linear functions. See under Expressions for detailed specifications.
Units has an extensive data base that, besides units from different domains, cultures, and periods, contains many constants of nature, such as:
pi      ratio of circumference to diameter
c       speed of light
e       charge on an electron
h       Planck's constant
force   acceleration of gravity
As an example of using these constants, suppose you want to find the wavelength, in meters, of a 144 MHz radio wave. It is obtained by dividing the speed of light by the frequency. The speed of light is 186282.39 miles/sec. But, you do not need to know this exact number. Just press Clear and enter this after You have:
c / 144 MHz
Enter m after You want and hit the Enter key. You will get this result:
c /144MHz = 2.0818921 m
Sometimes you may want to express the result as a sum of different units, for example, to find what is 2 m in feet and inches. To try this, press Clear and enter 2 m after You have. Then enter after You want:
ft;in
and hit Enter. You will get this result:
2 m = 6 ft + 6.7401575 in
Other examples of computations:
Feet and inches to metric:  6 ft + 7 in = 200.66 cm
Time in mixed units:        2 years = 17531 hours + 37 min + 31.949357 s
Angle in mixed units:       radian = 57 deg + 17 ' + 44.806247 "
Fahrenheit to Celsius:      tempF(97) = tempC(36.111111)
Electron flow:              5 mA = 3.1207548e16 e/sec
Energy of a photon:         h * c / 5896 angstrom = 2.1028526 eV
Mass to energy:             1 g * c^2 = 21.480764 kilotons tnt
Baking:                     2 cups flour_sifted = 226.79619 g
Weight as force:            5 force pounds = 22.241108 newton
You can explore the units data base with the help of the four buttons under You have field. By entering any string in You have field and pressing the Search button, you obtain a list of all unit names that contain that string as a substring. For example, if you enter year at You have and press Search, you get a list of about 25 different kinds of year, including marsyear and julianyear.
Pressing Definition displays this in the result area:
year = tropicalyear = 365.242198781 day = 31556926 s,
which tells you that year is defined as equal to tropicalyear, which is equal to 365.242198781 days or 31556926 seconds.
If you now enter tropicalyear at You have and press the Source button, you open a browser on the unit data base at the place containing the definition of tropicalyear. You find there a long comment explaining that unit. You may then freely browse the data base to find other units and facts about them.
Pressing Conformable units will give you a list of all units for measuring the same property as tropicalyear, namely the length of a time interval. The list contains over 80 units.
Instead of the applet shown above, you can use Units as a stand-alone application. As it is written in Java, you can use it under any operating system that supports Java Runtime Environment (JRE) release 1.5.0 or later. To install Units on your computer, download the Java archive (JAR) file that contains the executable Java classes. Save the JAR file in any directory, under any name of your choice, with extension .jar. If your system has an association of .jar files with javaw command (which is usually set up when you install JRE), just double-click on the JAR file icon. If this does not work, you can type
java -jar jarfile
at the command prompt, where jarfile is the name you gave to the JAR file. Each way should open the graphic interface of Units, similar to one at the beginning of this page.
With Units installed on your computer, you can use it interactively from command line, or invoke it from scripts. It imitates then almost exactly the behavior of GNU Units from which it has evolved. See under Command interface for details.
You can also modify the file that contains unit definitions, or add your own definitions in separate file(s). (The applet can only use its own built-in file.) See under Adding own units for an explanation of how to do it.
The complete package containing the JAR and the Java source can be downloaded as a gzipped tar file from the SourceForge project page.
You use expressions to specify computations on physical and other quantities. A quantity is expressed as the product of a numerical value and a unit of measurement. Each quantity has a dimension that is either one of the basic dimensions such as length or mass, or a combination of those. For example, 7 mph is the product of number 7 and unit mile/hour; it has the dimension of length divided by time. For a deeper discussion, see articles on physical quantity and dimensional analysis.
For each basic dimension, Units has one primitive unit: meter for length, gram for mass, second for time, etc.. The data base defines each non-primitive unit in such a way that it can be converted to a combination of primitive units. For example, mile is defined as equal to 1609.344 m and hour to 3600 s. Behind the scenes, Units replaces the units you specify by these values, so 7 mph becomes:
7 mph = 7 * mile/hour = 7 * (1609.344*m)/(3600*s) = 3.12928 m/s
This is the quantity 7 mph reduced to primitive units. The result of a computation can, in particular, be reduced to a number, which can be regarded as a dimensionless quantity:
17 m / 120 in = 5.5774278
In your expressions, you can use any units named in the units data base. You find there all standard abbreviations, such as ft for foot, m for meter, or A for ampere.
For readability, you may use plural form of unit names, thus writing, for example, seconds instead for second. If the string you specified does not appear in the data base, Units will try to ignore the suffix s or es. It will also try to remove the suffix ies and replace it by y. The data base contains also some irregular plurals such as feet.
The data base defines all standard metric prefixes as numbers. Concatenating a prefix in front of a unit name means multiplication by that number. Thus, the data base does not contain definitions of units such as milligram or millimeter. Instead, it defines milli- and m- as prefixes that you can apply to gram, g, meter, or m, obtaining milligram, mm, etc..
Only one prefix is permitted per unit, so micromicrofarad will fail. However, micro is a number, so micro microfarad will work and mean .000001 microfarad.
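As a further illustration (the output format may differ slightly from what is shown here), a prefixed unit is simply the prefix number times the unit:
3 megameters = 3000 km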
Numbers are written using standard notation, with or without decimal point. They may be written with an exponent, for example 3.43e-8 to mean 3.43 times 10 to the power of -8.
By writing a quantity as 1.2 meter or 1.2m, you really mean 1.2 multiplied by meter. This is multiplication denoted by juxtaposition. You can use juxtaposition, with or without space, to denote multiplication also in other contexts, whenever you find it convenient.
In addition to that, you indicate multiplication in the usual way by an asterisk (*). Division is indicated by a slash (/) or per. Division of numbers can also be indicated by the vertical dash (|). Examples:
10cm 15cm 1m = 15 liters
7 * furlongs per fortnight = 0.0011641667 m/s
1|2 meter = 0.5 m
The multiplication operator * has the same precedence as / and per; these operators are evaluated from left to right.
Multiplication using juxtaposition has higher precedence than * and division. Thus, m/s s/day does not mean (m/s)*(s/day) but m/(s*s)/day = m/(s*s*day), which has dimension of length per time cubed. Similarly, 1/2 meter means 1/(2 meter) = .5/meter, which is probably not what you would intend.
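If you do intend the other grouping, you can make it explicit with parentheses, for example:
(m/s) * (s/day)
(1/2) meter = 0.5 m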
The division operator | has precedence over both kinds of multiplication, so you can write 'half a meter' as 1|2 meter. This operator can only be applied to numbers.
Sums are written with the plus (+) and minus (-). Examples:
2 hours + 23 minutes - 32 seconds = 8548 seconds
12 ft + 3 in = 373.38 cm
2 btu + 450 ft lbf = 2720.2298 J
The quantities which are added together must have identical dimensions. For example, 12 printerspoint + 4 heredium results in this message:
Sum of non-conformable values:
    0.0042175176 m
    20186.726 m^2.
Plus and minus can be used as unary operators. Minus as a unary operator negates the numerical value of its operand.
Exponents are specified using the operator ^ or **. The exponent must be a number. As usual, x^(1/n) means the n-th root of x, and x^(-n) means 1/(x^n):
cm^3 = 0.00026417205 gallon
100 ft**3 = 2831.6847 liters
acre^(1/2) = 208.71074 feet
(400 W/m^2 / stefanboltzmann)^0.25 = 289.80881 K
2^-0.5 = 0.70710678
An exponent n or 1/n where n is not an integer can only be applied to a number. You can take the n-th root of non-numeric quantity only if that quantity is an n-th power:
foot^pi = Non-numeric base, 0.3048 m, for exponent 3.1415927.
hectare**(1/3) = 10000 m^2 is not a cube.
An exponent like 2^3^2 is evaluated right to left.
The operators ^ and ** have precedence over multiplication and division, so 100 ft**3 is 100 cubic feet, not (100 ft)**3. On the other hand, they have a lower priority than prefixing and |, so centimeter^3 means cubic centimeter, but centi meter^3 is 1/100 of a cubic meter. The square root of two thirds can be written as 2|3^1|2.
Abbreviation. You may concatenate a one-digit exponent, 2 through 9, directly after a unit name. In this way you abbreviate foot^3 to foot3 and sec^2 to sec2.
But beware: $ 2 means two dollars, but $2 means one dollar squared.
Units provides a number of functions that you can use in your computation. You invoke a function in the usual way, by writing its name followed by the argument in parentheses. Some of them are built into the program, and some are defined in the units data base.
The built-in functions include sin, cos, tan, their inverses asin, acos, atan, and:
ln        natural logarithm
log       base-10 logarithm
log2      base-2 logarithm
exp       exponential
sqrt      square root, sqrt(x) = x^(1/2)
cuberoot  cube root, cuberoot(x) = x^(1/3)
The argument of sin, cos, and tan must be a number or an angle. They return a number.
The argument of asin, acos, atan, ln, log, log2, and exp must be a number. The first three return an angle and the remaining return a number.
The argument of sqrt and cuberoot must be a number, or a quantity that is a square or a cube.
The functions defined in the units data base include, among others:
circlearea  area of circle with given radius
pH          converts pH value to moles per liter
tempF       converts temperature Fahrenheit to temperature Kelvin
wiregauge   converts wire gauge to wire thickness
Most of them are used to handle nonlinear scales, as explained under Nonlinear meaures.
By preceding a function's name with a tilde (~) you obtain an inverse of that function:
circlearea(5cm) = 78.539816 cm^2
~circlearea(78.539816 cm^2) = 5 cm
pH(8) = 1.0E-8 mol/liter
~pH(1.0E-8 mol/liter) = 8
tempF(97) = 309.26111 K
~tempF(309.26111 K) = 96.999998
wiregauge(11) = 2.3048468 mm
~wiregauge(2.3048468 mm) = 11
The following table summarizes all operators in the order of precedence.
prefix, concatenated exponent
number division |                     (left to right)
unary + -
exponent ^ **                         (right to left)
multiplication by juxtaposition       (left to right)
multiplication and division * / per   (left to right)
sum + -                               (left to right)
A plus and minus is treated as unary only if it comes first in the expression or follows any of the operators ^, **, *, /, per, +, or -. Thus, 5 -2 is interpreted as '5 minus 2', and not as '5 times -2'.
Parentheses can be applied in the usual way to indicate the order of evaluation.
The syntax of expressions is defined as follows. Phrases and symbols in quotes represent themselves, | means 'or', ? means optional occurrence, and * zero or more occurrences.
expr     = term (('+' | '-') term)* | ('/' | 'per') product
term     = product (('*' | '/' | 'per') product)*
product  = factor factor*
factor   = unary (('^' | '**') unary)*
unary    = ('+' | '-')? primary
primary  = unitname | numexpr | bfunc '(' expr ')' | '~'? dfunc '(' expr ')' | '(' expr ')'
numexpr  = number ('|' number)*
number   = mantissa exponent?
mantissa = '.' digits | digits ('.' digits?)?
exponent = ('e' | 'E') sign? digits
unitname = unit name with optional prefix, suffix, and / or one-digit exponent
bfunc    = built-in function name: sqrt, cuberoot, sin, cos, etc.
dfunc    = defined function name
Names of syntactic elements shown above in italics may appear in error messages that you receive if you happen to enter an incorrect expression. For example:
You have: 1|m
After '1|': expected number.

You have: cm^per $
After 'cm^': expected unary.

You have: 3 m+*lbf
After '3 m+': expected term.
Spaces are in principle ignored, but they are often required in multiplication by juxtaposition. For example, writing newtonmeter will result in the message Unit 'newtonmeter' is unknown; you need a space in the product newton meter.
To avoid ambiguity, a space is also required before a number that follows another number. Thus, an error will be indicated after 1.2 in 1.2.3.
Multiplication by juxtaposition may also result in another ambiguity. As e is a small unit of charge, an expression like 3e+2C can be regarded as meaning (3e+2)*C or (3*e)+(2*C). This ambiguity is resolved by always including as much as possible in a number.
In the Overview, it was shown how you specify the result by entering a unit name at You want. In fact, you can enter there any expression specifying a quantity with the same dimension as the expression at You have:
You have: 10 gallons
You want: 20 cm * circlearea(5cm)
    10 gallons = 24.09868 * 20 cm * circlearea(5cm)
This tells you that you can almost fit 10 gallons of liquid into 24 cans of diameter 10 cm and 20 cm tall. However:
You have: 10 gallons
You want: circlearea(5cm)
Conformability error
    10 gallons = 0.037854118 m^3
    circlearea(5cm) = 0.0078539816 m^2
Some units, like radian and steradian, are treated as dimensionless and equal to 1 if it is necessary for conversion. For example, power is equal to torque times angular velocity. The dimension of the expression at You have below is kg m^2 radian/s^3, and the dimension of watt is kg m^2/s^3. The computation is made possible by treating radian as dimensionless:
You have: (14 ft lbf) (12 radians/sec)
You want: watts
    (14 ft lbf) (12 radians/sec) = 227.77742 watts
Note that dimensionless units are not treated as dimensionless in other contexts. They cannot be used as exponents so for example, meter^radian is not allowed.
You can also enter at You want an expression with dimension that is an inverse of that at You have:
You have: 8 liters per 100 km
You want: miles per gallon
reciprocal conversion
    1 / 8 liters per 100 km = 29.401823 miles per gallon
Here, You have has the dimension of volume divided by length, while the dimension of You want is length divided by volume. This is indicated by the message reciprocal conversion, and by showing the result as equal to the inverse of You have.
You may enter at You want the name of a function, without argument. This will apply the function's inverse to the quantity from You have:
You have: 30 cm^2
You want: circlearea
    30 cm^2 = circlearea(0.030901936 m)

You have: 300 K
You want: tempF
    300 K = tempF(80.33)
Of course, You have must specify the correct dimension:
You have: 30 cm
You want: circlearea
    Argument 0.3 m of function ~circlearea is not conformable to 1 m^2.
If you leave You want field empty, you obtain the quantity from You have reduced to primitive units:
You have: 7 mph
You want:
    3.12928 m / s
You have: 2 m
You want: ft;in;1|8 in
    2 m = 6 ft + 6 in + 5.9212598 * 1|8 in
Note that you are not limited to unit names, but can use expressions like 1|8 in above. The first unit is subtracted from the given value as many times as possible, then the second from the rest, and so on; finally the rest is converted exactly to the last unit in the list.
Ending the unit list with ';' separates the integer and fractional parts of the last coefficient:
You have: 2 m
You want: ft;in;1|8 in;
    2 m = 6 ft + 6 in + 5|8 in + 0.9212598 * 1|8 in
Ending the unit list with ';;' results in rounding the last coefficient to an integer:
You have: 2 m
You want: ft;in;1|8 in;;
    2 m = 6 ft + 6 in + 6|8 in (rounded up to nearest 1|8 in)
Each unit on the list must be conformable with the first one on the list, and with the one you entered at You have:
You have: meter
You want: ft;kg
Invalid unit list. Conformability error:
    ft = 0.3048 m
    kg = 1 kg

You have: meter
You want: lb;oz
Conformability error
    meter = m
    lb = 0.45359237 kg
Of course you should list the units in a decreasing order; otherwise, the result may not be very useful:
You have: 3 kg
You want: oz;lb
    3 kg = 105 oz + 0.051367866 lb
A unit list such as
cup;1|2 cup;1|3 cup;1|4 cup;tbsp;tsp;1|2 tsp;1|4 tsp
can be tedious to enter. Units provides shorthand names for some common combinations:
hms     hours, minutes, seconds
dms     angle: degrees, minutes, seconds
time    years, days, hours, minutes and seconds
usvol   US cooking volume: cups and smaller
Using these shorthands, or unit list aliases, you can do the following conversions:
You have: anomalisticyear
You want: time
    1 year + 25 min + 3.4653216 sec

You have: 1|6 cup
You want: usvol
    2 tbsp + 2 tsp
You cannot combine a unit list alias with other units: it must appear alone at You want.
Some measures cannot be expressed as the product of a number and a measurement unit. Such measures are called nonlinear.
An example of nonlinear measure is the pH value used to express the concentration of certain substance in a solution. It is a negative logarithmic measure: a tenfold increase of concentration decreases the pH value by one. You convert between pH values and concentration using the function pH mentioned under Functions:
You have: pH(6)
You want: micromol/gallon
    pH(6) = 3.7854118 micromol/gallon
For conversion in the opposite direction, you use the inverse of pH, as described under Specifying result:
You have: 0.17 micromol/cm^3
You want: pH
    0.17 micromol/cm^3 = pH(3.7695511)
Other example of nonlinear measures are different "gauges". They express the thickness of a wire, plate, or screw, by a number that is not obviously related to the familiar units of length. (Use the Search button on gauge to find them all.) Again, they are handled by functions that convert the gauge to units of length:
You have: wiregauge(11)
You want: inches
    wiregauge(11) = 0.090742002 inches

You have: 1mm
You want: wiregauge
    1mm = wiregauge(18.201919)
The most common example of nonlinear measure is the temperature indicated by a thermometer, or absolute temperature: you cannot really say that it becomes two times warmer when the thermometer goes from 20°F to 40°F. Absolute temperature is expressed relative to an origin; such measure is called affine. To handle absolute temperatures, Units provides functions such as tempC and tempF that convert them to degrees Kelvin. (Other temp functions can be found using the Search button.) The following shows how you use these functions to convert absolute temperatures:
You have: tempC(36)
You want: tempF
    tempC(36) = tempF(96.8)
meaning that 36°C on a thermometer is the same as 96.8°F.
You can think of pH(6), wiregauge(11), tempC(36), or tempF(96.8) not as functions but as readings on the scale pH, tempC, or tempF, used to measure some physical quantity. You can read the examples above as: 'what is 0.17 micromol/cm^3 on the pH scale?', or 'what is 1 mm on the wiregauge scale?', or 'what is the tempF reading corresponding to 36 on tempC scale?'
Note that absolute temperature is not the same as temperature difference, in spite of their units having the same names. The latter is a linear quantity. Degrees Celsius and degrees Fahrenheit for measuring temperature difference are defined as linear units degC and degF. They are converted to each other in the usual way:
You have: 36 degC
You want: degF
    36 degC = 64.8 degF
Some units have different values in different locations. The localization feature accomodates this by allowing the units database to specify region-dependent definitions.
In the database, the US units that differ from their British counterparts have names starting with us: uston, usgallon, etc. The corresponding British units are brton, brgallon, etc. When using Units, you can specify en_US or en_GB as 'locale'. Each of them activates a portion of the database that defines short aliases for these names. Thus, specifying en_US as locale activates these aliases:
ton = uston
gallon = usgallon
etc.
while en_GB activates these:
ton = brton
gallon = brgallon
etc.
The US Survey foot, yard, and mile can be obtained by using the US prefix. These units differ slightly from the international length units. They were in general use until 1959, and are still used for geographic surveys. The acre is officially defined in terms of the US Survey foot. If you want an acre defined according to the international foot, use intacre. The difference between these units is about 4 parts per million. The British also used a slightly different length measure before 1959. These can be obtained with the prefix UK.
The units data base is defined by a units data file. A default units file, called units.dat, is packaged in the JAR file together with the program. (You can extract it from there using the jar tool of Java.)
If you want to add your own units, you can write your own units file. See how to do it under Writing units file. If you place that file in your home directory under the name units.dat, it will be read after the default units file.
You may also supply one or more unit files of your own and access them using the property list or command option. In each case, you specify the order in which Units will read them. If a unit with the same name is defined more than once, Units will use the last definition that it encounters.
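For example, a small personal units file might contain just a couple of definitions of your own (the unit names below are arbitrary illustrations):
workday    8 hours
workweek   5 workday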
Note that adding your own unit files is possible only if you run Units from a downloaded JAR file. An applet can only use the default units.dat file.
If you want, you may run Units from command line. It imitates then almost exactly the behavior of GNU Units. (The differences are listed under What is different.)
To use the command line interface, you need to download the Java archive (JAR) file that contains the executable classes and the data file. You can save the JAR in any directory of your choice, and give it any name compatible with your file system. The following assumes that you saved the JAR file under the name jarfile. It also assumes that you have a Java Runtime Environment (JRE) version 1.5.0 or later that is invoked by typing java at your shell prompt.
java -jar jarfile -i or java -jar jarfile options
at your shell prompt. The program will print something like this:
2192 units, 71 prefixes, 32 nonlinear units
You have:
At You have prompt, type the expression you want to evaluate. Next, Units will print You want. There you tell how you want your result, in the same way as in the graphical interface. See under Expressions and Specifying result. As an example, suppose you just want to convert ten meters to feet. Your dialog will look like this:
You have: 10 meters
You want: feet
    * 32.8084
    / 0.03048
The answer is displayed in two ways. The first line, which is marked with a * to indicate multiplication, says that the quantity at You have is 32.8084 times the quantity at You want. The second line, marked with a / to indicate division, gives the inverse of that number. In this case, it tells you that 1 foot is equal to about 0.03 dekameters (dekameter = 10 meters). It also tells you that 1/32.8 is about .03.
Units prints the inverse because sometimes it is a more convenient number. For example, if you try to convert grains to pounds, you will see the following:
You have: grains
You want: pounds
    * 0.00014285714
    / 7000
From the second line of the output you can immediately see that a grain is equal to a seven thousandth of a pound. This is not so obvious from the first line of the output.
If you find the output format confusing, try using the -v ('verbose') option, which gives:
You have: 10 meters
You want: feet
    10 meters = 32.8084 feet
    10 meters = (1 / 0.03048) feet
You can suppress printing of the inverse using the -1 ('one line') option. Using both -v and -1 produces the same output as the graphical interface:
You have: 5 yards * (4 feet + 3 in) * 7 in * 1 kg/liter
You want: pounds
    5 yards * (4 feet + 3 in) * 7 in * 1 kg/liter = 2321.5398 pounds
If you request a conversion between units which measure reciprocal dimensions, Units will display the conversion results with an extra note indicating that reciprocal conversion has been done:
You have: 6 ohms
You want: siemens
reciprocal conversion
    * 0.16666667
    / 6
Again, you may use the -v option to get more comprehensible output:
You have: 6 ohms
You want: siemens
reciprocal conversion
    1 / 6 ohms = 0.16666667 siemens
    1 / 6 ohms = (1 / 6) siemens
When you specify compact output with -c, you obtain only the conversion factors, without indentation:
You have: meter
You want: yard
1.0936133
0.9144
When you specify compact output and perform conversion to mixed units, you obtain only the conversion factors separated by semicolons. Note that unlike the case of regular output, zeros are included in this output list:
You have: meter
You want: yard;ft;in
1;0;3.3700787
If you only want to find the reduced form or definition of a unit, simply press return at You want prompt. For example:
You have: 7 mph
You want:
    3.12928 m/s

You have: jansky
You want:
    Definition: jansky = fluxunit = 1e-26 W/m^2 Hz = 1e-26 kg / s^2
The definition is shown if you entered a unit name at You have prompt. The example indicates that jansky is defined as equal to fluxunit which in turn is defined to be a certain combination of watts, meters, and hertz. The fully reduced form appears on the far right.
If you type ? at You want prompt, the program will display a list of named units which are conformable with the unit that you entered at You have prompt. Note that conformable unit combinations will not appear on this list.
Typing help at either prompt displays a short help message. You can also type help followed by a unit name. This opens a window on the units file at the point where that unit is defined. You can read the definition and comments that may give more details or historical information about the unit.
Typing search followed by some text at either prompt displays a list of all units whose names contain that text as a substring, along with their definitions. This may help in the case where you aren't sure of the right unit name.
To end the session, you type quit at either prompt, or press the Enter (Return) key at You have prompt.
You can use Units to perform computations non-interactively from the command line. To do this, type
java -jar jarfile [options] you-have [you-want]
at your shell prompt. (You will usually need quotes to protect the expressions from interpretation by the shell.) For example, if you type
java -jar jarfile "2 liters" "quarts"the program will print
* 2.1133764 / 0.47317647and then exit.
If you omit you-want, Units will print out definition of the specified unit.
The following options allow you to use alternative units file(s), check your units file, or change the output format:
The Java imitation is not an exact port of the original GNU units. The following is a (most likely incomplete) list of differences.
You can supply some parameters to Units by setting up a Property list. It is a file named units.opt, placed in the same directory as the JAR file. It may look like this:
GUIFONT = Lucida
ENCODING = Cp850
LOCALE = en_GB
UNITSFILE = ; c:\\Java\\gnu\\units\\my.dat
The options -e, -f, -g, and -l specified on the command line override settings from the Property list.
You embed a Units applet in a Web page by means of this tag:
<APPLET CODE="units.applet.class" ARCHIVE="http://units-in-java.sourceforge.net/Java-units.1.89.J01.jar" WIDTH=500 HEIGHT=400>
  <PARAM NAME="LOCALE" VALUE="locale">
  <PARAM NAME="GUIFONT" VALUE="fontname">
</APPLET>
Notice that because an applet cannot access any files on your system, you can use only the default units file packaged in the JAR file.
You may view the source of this page for an example of Web page with an embedded Units applet.
The units data base is defined by a units data file. A default units file, called units.dat, is packaged in the JAR file together with the program. This section tells you how to write your own units file that you can use together with, or instead of, the default file, as described under Adding own units.
The file has to use the UTF-8 character encoding. Since the ASCII characters appear the same in all encodings, you do not need to worry about UTF-8 as long as your definitions use only these characters.
Each definition occupies one line, possibly continued by the backslash character (\) that appears as the last character.
Comments start with a # character, which can appear anywhere in a line. Following #, the comment extends to the end of the line.
Empty lines are ignored.
A unit is specified on a single line by giving its name followed by at least one blank, followed by the definition.
A unit name must not contain any of the characters + - * / | ^ ( ) ; #. It cannot begin with a digit, underscore, tilde, decimal point, or comma. It cannot end with an underscore, decimal point, or comma.
If a name ends in a digit other than zero or one, the digit must be preceded by a string beginning with an underscore, and afterwards consisting only of digits, decimal points, or commas. For example, NO_2, foo_2,1 or foo_3.14 would be valid names but foo2 or foo_a2 would be invalid.
The definition is either an expression, defining the unit in terms of other units, or ! indicating a primitive unit, or !dimensionless indicating a dimensionless primitive unit.
Be careful to define new units in terms of old ones so that a reduction leads to the primitive units. You can check this using the -C option. See under Checking your definitions.
Here is an example of a short units file that defines some basic units:
m       !               # The meter is a primitive unit
sec     !               # The second is a primitive unit
rad     !dimensionless  # A dimensionless primitive unit
micro-  1e-6            # Define a prefix
minute  60 sec          # A minute is 60 seconds
hour    60 min          # An hour is 60 minutes
inch    0.0254 m        # Inch defined in terms of meters
ft      12 inches       # The foot defined in terms of inches
mile    5280 ft         # And the mile
A unit which ends with a - character is a prefix. If a prefix definition contains any / characters, be sure they are protected by parentheses. If you define half- 1/2 then halfmeter would be equivalent to 1 / 2 meter.
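A sketch of the safer form (the prefix name half- is only an example) protects the division with parentheses, so that halfmeter reduces to 0.5 meter:
half-   (1/2)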
Here is an example of function definition:
tempF(x) [1;K] (x+(-32)) degF + stdtemp ; (tempF+(-stdtemp))/degF + 32
The definition begins with the function name followed immediately (with no spaces) by the name of the parameter in parentheses. Both names must follow the same rules as unit names.
Next, in brackets, is a specification of the units required as arguments by the function and its inverse. In the example above, the tempF function requires an input argument conformable with 1. The inverse function requires an input argument conformable with K. Note that this is also the dimension of function's result.
Next come the expressions to compute the function and its inverse, separated by a semicolon. In the example above, the tempF function is computed as
tempF(x) = (x+(-32)) degF + stdtemp
The inverse has the name of the function as its parameter. In our example, the inverse is
~tempF(tempF) = (tempF+(-stdtemp))/degF + 32
This inverse definition takes an absolute temperature as its argument and converts it to the Fahrenheit temperature. The inverse can be omitted by leaving out the ; character, but then conversions to the unit will be impossible.
If you wish to make synonyms for nonlinear units, you still need to define both the forward and inverse functions. So to create a synonym for tempF you could write
fahrenheit(x) [1;K] tempF(x); ~tempF(fahrenheit)
The example below is a function to compute the area of a circle. Note that this definition requires a length as input and produces an area as output, as indicated by the specification in brackets.
circlearea(r) [m;m^2] pi r^2 ; sqrt(circlearea/pi)
An empty or omitted argument specification means that Units will not check the dimension of the argument you supply. Anything compatible with the specified computation will work. For example:
square(x) x^2 ; sqrt(square)
square(5) = 25
square(2m) = 4 m^2
Some functions cannot be computed using an expression. In that case, you can define the function by a piecewise linear approximation: you provide a table that lists values of the function for selected values of the argument, and the values for other arguments are computed by linear interpolation.
An example of a piecewise linear function is:
zincgauge[in] 1 0.002, 10 0.02, 15 0.04, 19 0.06, 23 0.1
In this example, zincgauge is the name of the function. The unit in square brackets applies to the result. The argument is always a number. No spaces can appear before the ] character, so a definition like foo[kg meters] is illegal; instead write foo[kg*meters].
The definition is a list of pairs optionally separated by commas. Each pair defines the value of the function at one point. The first item in each pair is the function argument; the second item is the value of the function at that argument (in the units specified in brackets). In this example, you define zincgauge at five points. We have thus zincgauge(1) = 0.002 in.
Definitions like this may be more readable if written using continuation characters as
zincgauge[in] \
    1 0.002 \
    10 0.02 \
    15 0.04 \
    19 0.06 \
    23 0.1
If you define a piecewise linear function that is not strictly monotone, the inverse will not be well defined. In such a case, Units will return the smallest inverse.
Unit list aliases are treated differently from unit definitions, because they are a data entry shorthand rather than a true definition for a new unit. A unit list alias definition begins with !unitlist and includes the alias and the definition; for example, the aliases included in the standard units data file are:
!unitlist hms hr;min;sec
!unitlist time year;day;hr;min;sec
!unitlist dms deg;arcmin;arcsec
!unitlist ftin ft;in;1|8 in
!unitlist usvol cup;3|4 cup;2|3 cup;1|2 cup;1|3 cup;1|4 cup;\
          tbsp;tsp;1|2 tsp;1|4 tsp;1|8 tsp
Unit list aliases are only for unit lists, so the definition must include a ';'. Unit list aliases can never be combined with units or other unit list aliases, so the definition of time shown above could not have been shortened to year;day;hms. As usual, be sure to run Units with option -C to ensure that the units listed in unit list aliases are conformable.
A locale region in the units file begins with !locale followed by the name of the locale. The locale region is terminated by !endlocale. The following example shows how to define a couple of units in a locale.
!locale en_GB
ton brton
gallon brgallon
!endlocale
A file can be included by giving the command !include followed by the full path to the file.
It is recommended that you check a new or modified units file by invoking Units from the command line with the option -C. Of course, the file must be made available to Units as described under Adding own units.
The option will check that the definitions are correct, and that all units reduce to primitive ones. If you created a loop in the units definitions, Units will hang when invoked with the -C option. You will need to use the combined -Cv option which prints out each unit as it checks them. The program will still hang, but the last unit printed will be the unit which caused the infinite loop.
If the inverse of a function is omitted, the -C option will display a warning. It is up to you to calculate and enter the correct inverse function to obtain proper conversions. The -C option tests the inverse at one point and prints an error if it is not valid there, but this is not a guarantee that your inverse is correct.
The -C option will print a warning if a non-monotone piecewise linear function is encountered.
Units works internally with double-byte Unicode characters.
The unit data files use the UTF-8 encoding. This enables you to use Unicode characters in unit names. However, you cannot always access them.
The graphical interface of Units can display all characters available in its font. Those not available are shown as empty rectangles. The default font is Monospaced. It is a so-called logical font, or a font family, with different versions depending on the locale. It usually contains all the national characters and much more, but far from all of Unicode. You may specify another font by using the property GUIFONT, or an applet parameter, or the command option -g.
You can enter into the Units window all characters available on your keyboard, but there is no facility to enter any other Unicode characters.
The treatment of Unicode characters at the command interface depends on the operating system and the Java installation. The operating system may use a character encoding different from the default set up for the Java Virtual Machine (JVM). As a result, names such as ångström typed in the command window are not recognized as unit names. If you encounter this problem, and know the encoding used by the system, you can identify the encoding to Units with the help of the property ENCODING or command option -e. (In Windows XP, you can find the encoding using the command chcp. In one case investigated by the author, the encoding was Cp437, while the JVM default was Cp1252.)
The units.dat file supplied with Units contains commands !utf8 and !endutf8. This is so because it is taken unchanged from GNU units. The commands enclose the portions of file that use non-ASCII characters so they can be skipped in environments that do not support UTF-8. Because Java always supports the UTF-8 encoding for input files, the commands are ignored in Units.
The program documented here is a Java development of GNU Units 1.89e, a program written in C by Adrian Mariano (email@example.com). The file units.dat containing the units data base was created by Adrian Mariano, and is maintained by him. The package contains the latest version obtained from GNU Units repository.
GNU Units copyright © 1996, 1997, 1999, 2000, 2001, 2002, 2003, 2004, 2005, 2006, 2007, 2010, 2011 by Free Software Foundation, Inc. Java version copyright © 2003, 2004, 2005, 2006, 2007, 2008, 2009, 2010, 2011, 2012 by Roman Redziejowski.
The program is free software: you can redistribute it and/or modify it under the terms of the GNU Library General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version.
The program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.
This Web page copyright © 2012 by Roman Redziejowski. The author gives unlimited permission to copy, translate and/or distribute this document, with or without modifications, as long as this notice is preserved, and information is provided about any changes.
Substantial parts of this text have been taken, directly or modified, from the manual Unit Conversion, edition 1.89g, written by Adrian Mariano, copyright © 1996, 1997, 1999, 2000, 2001, 2002, 2003, 2004, 2005, 2006, 2007, 2011 by Free Software Foundation, Inc., under a written permission contained in that document. | http://units-in-java.sourceforge.net/ | 13
15 | Let P and Q be two convex polygons whose intersection is
a convex polygon. The algorithm for finding this convex intersection
polygon can be described by these three steps:
Construct the convex hull of the union of P and Q;
For each pocket lid of the convex hull, find the intersection of P
and Q that lies in the pocket;
Merge together the polygonal chains between the intersection points found.
What's a pocket lid?
A pocket lid is a line segment belonging to the convex hull of the
union of P and Q, but which belongs to neither P nor Q.
Why does it connect a vertex of P with a vertex of Q?
A pocket lid connects a vertex of P with
a vertex of Q; if it were to connect two vertices of P, then
P would not be convex, since the lid lies on the convex hull and
is not a segment of P.
Computing the Convex Hull: the Rotating Calipers
To compute the convex hull of the two convex polygons,
the algorithm uses the rotating calipers. It works as follows:
Find the leftmost vertex of each polygon.
At each of those two vertices, place a vertical line passing through it.
Associate that line with the polygon to which the vertex belongs. The line
does not intersect its associated polygon, since the polygon is convex.
See the figure below:
Rotate these two lines (called calipers) by the smallest angle between
a caliper and the segment following the vertex it passes through (in clockwise
order). The rotation is done about the vertex through which the line passes
on the associated polygon. If the line passes through more than one vertex
of the associated polygon, the farthest (in clockwise order) is taken. The
result is shown below:
Whenever the order of the two calipers changes, a pocket has been
found. To detect this, a direction is associated to one of the lines (for
example the green one, associated to P). Then all points of the
red line (associated to Q) are either to the left or to the right
of the green line. When a rotation makes them change from one side to the
other of the green line, then the order of the two lines has changed.
Here's what our example looks like just before and after the algorithm
has found the first pocket; as you can see, if the line associated with
P initially had its associated direction pointing up, then the line
associated with Q was to the right of it at the beginning, and is
now to the left of it:
The algorithm terminates once it has gone around both polygons.
Finding the intersection of P and Q in the pocket
Once the pockets have been found, the intersection
of the polygons at the bottom of the pocket needs to be determined. The
pockets themselves form a very special type of polygon: a sail polygon:
that is, a polygon composed of two concave chains sharing a common vertex
at one extremity, and connected by a segment (the mast) at the other
end. By a procedure similar to a special-purpose triangulation for sail
polygons, the segments of P and Q which intersect can be
identified in O(k+l), where k and l are the
number of vertices of P and Q which are inside the pocket.
The idea is to start the triangulation from the mast, and as points from
P and Q are considered, a check is made to see that the chain
from Q is still on the same side as the chain from P.
Here is pseudo-code for this algorithm. It is assumed
that the indices of the vertices of P and Q are in increasing
order from the lid to the bottom of the pocket (i.e.: P and Q
are not enumerated in the same order).
i <- 1; j <- 1;
repeat
    finished <- true;
    while ( leftTurn( p(i), p(i+1), q(j+1) ) ) do
        j <- j + 1;
        finished <- false;
    while ( rightTurn( q(j), q(j+1), p(i+1) ) ) do
        i <- i + 1;
        finished <- false;
until finished
At the end of this procedure, the indices i and j
indicate the vertices of P and Q, respectively, which are
at the start of the two intersecting segments (in other words, the two
intersecting segments are p(i),p(i+1) and q(j),q(j+1)). The intersection of these two
segments is part of the intersection polygon, and can be found with your
favorite line intersection algorithm.
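For concreteness, here is a minimal Python sketch (not taken from the applet) of the two orientation predicates used in the pseudo-code and of one way to compute the crossing point of two segments. The function names mirror the pseudo-code but are otherwise illustrative; points are assumed to be (x, y) tuples in general position, and the left/right convention assumes the y-axis points up, so the tests flip in screen coordinates.

def cross(o, a, b):
    # z-component of the cross product (a - o) x (b - o)
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def left_turn(p, q, r):
    # strictly counter-clockwise turn (y-axis pointing up)
    return cross(p, q, r) > 0

def right_turn(p, q, r):
    # strictly clockwise turn (y-axis pointing up)
    return cross(p, q, r) < 0

def segment_intersection(p1, p2, q1, q2):
    # Intersection point of the lines through p1p2 and q1q2; when the two
    # segments are known to cross, this point lies on both of them.
    d1x, d1y = p2[0] - p1[0], p2[1] - p1[1]
    d2x, d2y = q2[0] - q1[0], q2[1] - q1[1]
    denom = d1x * d2y - d1y * d2x          # zero only for parallel segments
    t = ((q1[0] - p1[0]) * d2y - (q1[1] - p1[1]) * d2x) / denom
    return (p1[0] + t * d1x, p1[1] + t * d1y)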
What remains to be done is to build the resulting
polygon. One way of doing this is to start at one of the vertices given
by the above algorithm, compute the intersection, add that
point, and then continue adding points by following either P
or Q deeper below the pocket until it comes out of another pocket
(i.e. until the vertex to consider for addition happens to have been the
output of the algorithm for another pocket). Then from that pocket the
chain of the other polygon can be followed under the pocket. This would
be done until the pocket the chain comes out of is the pocket that
the merging started with.
Checking for intersection
All of this assumes that the polygons do intersect.
However, there are three ways in which no polygonal intersection could result:
The intersection is either a point or a line. No provisions are made for
this in the algorithm, and in this case the output will be a polygon consisting
of either two vertices at the same location (in the case of a point), or
four vertices on two distinct locations (in the case of a line);
The polygons simply do not intersect each other and are separable;
The polygons are one inside another. One could argue that the intersection
of two such polygons is the contained polygon, but that is the computer
graphics way of seeing things. In mathematics, there is no intersection
in such a case. In any event, the algorithm has to detect this case independently
of whether or not it outputs it as an intersection.
Case 2 is detected if, during the triangulation step, the algorithm
makes a complete loop around one of the polygons. Case 3 is even
easier to detect; in such a case no pockets will be found by the convex hull step.
As implemented in the applet, the algorithm will only
find intersections which form non-degenerate polygons. In other words,
it will not handle properly intersections which consist of a single line
or point. However, the algorithm does detect the cases where no intersection
exists at all: in the case where one polygon is contained in another, the
rotating calipers will not find any pockets; in the case where the two
polygons are completely outside one another, the triangulation algorithm
will detect that it has looped around one of the polygons.
The implementation also assumes that the points are in general
position. If the points are not in general
position, the intersection might be a degenerate polygon, or the
intersection polygon might contain two consecutive segments which lie on
the same line.
In the article, one polygon is enumerated in clockwise
order, the other in counter-clockwise order. The implementation has both
polygons in clockwise order; it simply involves a bit more housekeeping
and a bit of extra care in the merging step.
Forcing input of a convex polygon
The interface for building the polygons has the nice
feature (or annoying feature, depending on your character) of forcing
the user to enter a convex polygon in clockwise order. This is done with
a simple online algorithm that checks the following conditions while inserting
a new vertex i+1 between vertex i and vertex i+2:
vertices i-1, i, i+1 form a right turn;
vertices i, i+1, i+2 form a right turn;
vertices i+1, i+2, i+3 form a right turn.
In the code, this is actually implemented as a point being to the left
(over) or to the right (under) of a line, rather than left turns and right
turns, but the idea is exactly the same.
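As an illustration only (not the applet's actual code), a minimal Python sketch of such an online check could look as follows, assuming clockwise input, points stored as (x, y) tuples, and a cross-product test for right turns (with the same y-axis caveat as in the earlier sketch).

def right_turn(p, q, r):
    # strictly clockwise turn at q, with the y-axis pointing up
    return (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0]) < 0

def stays_convex(poly, v, i):
    # poly: clockwise list of at least three (x, y) vertices; v is the new
    # vertex tentatively inserted between poly[i] and poly[(i + 1) % len(poly)].
    n = len(poly)
    a, b = poly[(i - 1) % n], poly[i]
    c, d = poly[(i + 1) % n], poly[(i + 2) % n]
    # the three turns named above: (i-1, i, new), (i, new, i+2), (new, i+2, i+3)
    return right_turn(a, b, v) and right_turn(b, v, c) and right_turn(v, c, d)

A polygon-building interface would call stays_convex for each candidate click and reject points that fail the test.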
| http://www.iro.umontreal.ca/~plante/compGeom/algorithm.html | 13
24 | How to Teach using Data Simulations
Research that examines the use of simulations on student outcomes suggests that even 'well-designed' simulations are unlikely to be effective if the interaction with the student isn't carefully structured (Lane & Peres, 2006). Consequently, how simulations are used is of great importance.
Simulations can involve physical materials (drawing items from a bag, tossing coins, sampling candies) or they can involve generating data on the computer (drawing samples from a population or generating data based on a probability model). Even when using computer simulations, Rossman and Chance (2006) suggest always beginning with a concrete simulation (e.g., having students take random samples of words from the Gettysburg address before taking simulated samples using their Sampling Words applet, or having students take physical samples of Reese's Pieces candies before using a web applet to simulate samples of candies).
Effective Ways to Use Simulations
Regardless of whether the simulation is based on concrete materials, a computer program, or a web applet, there are some suggested ways to use simulations to enhance students' learning. These include:
- Give students a problem to discuss, ask them to make a prediction about the answer, then simulate data to test their predictions (e.g., predicting the average family size if a country adopts a One Son policy).
- Ask students to predict what will happen under certain conditions, then test it out (e.g., what will happen to the shape of a sampling distribution if the sample size is increased).
- Ask students to come up with rules for certain phenomena (e.g., what factors affects the width of a confidence interval and why).
- Ask students to create a model and use it to simulate data to test whether a particular outcome is due to chance or do to some other factor (e.g., simulate data for outcomes of fair coin tosses and use it to test whether a coin when balanced on its side is just as likely to land heads up or heads down).
- Ask students to run a simulation to discover an important idea (e.g., take random samples of words and create a distribution of mean word lengths, to compare to a distribution of mean word lengths generated by judgmental samples taken by students, to learn that only random samples will be representative of the population).
Cautions about Using Simulations with Students
Here are some practical considerations to keep in mind when designing or using activities involving simulations.
- The best designed simulation will be ineffective if students are not engaged or get lost in the details and direction. Assigning students to groups with designated roles when using an activity involving simulation can help students divide up the work, where one student reads directions and another enters or analyzes data.
- It is important to structure good discussions about the use and results of simulations to allow students to draw appropriate conclusions. Designing questions that promote reflection or controversy can lead to good discussions. Also, having students make predictions which are tested can lead students to discuss their reasoning as they argue for different predicted results.
- Select technology that facilitates student interaction and is accessible for students. It is crucial that the focus remains on the statistical concept and not on the technology. Consequently, technology should be chosen in light of the students' backgrounds, course goals, and teacher knowledge.
- Select technology tools that allow for quick, immediate, and visual feedback. Examples of technology that have been found especially useful here are Fathom Software, Java applets, and Sampling SIM software.
- Integrate the simulations throughout the course. This allows students to see simulation as a regular tool for analysis and not just something for an in-class activity.
Using a Visual Model to Illustrate the Simulation Process
Keeping track of populations (or random variables), samples, and sample statistics can be confusing to students when running certain simulations. It can be useful to use a graphical diagram to illustrate what is happening when simulating data, helping students to distinguish between population, samples, and distributions of sample statistics. The Simulation Process Model (Lane-Getaz, 2006b) can be used for this purpose.
The Simulation Process Model provides a framework for students to develop a deeper understanding of the simulation process through visualization. The first tier of the model represents the population and its associated parameter. The second tier of the model represents a given number of samples drawn from the population and their associated statistics. The third tier of the model represents the distribution of sample statistics (i.e., the sampling distribution).
The Sampling Reese's Pieces activity provides a good example of how the Simulation Process Model might be used. In this activity students use an applet to simulate samples of candies, while a graph of the distribution of orange candies is dynamically generated. The population of candies (shown in a candy machine) would be the first tier of the model. Multiple random samples of 25 candies and the proportion of orange candies in each sample make up the second tier of the model. Finally, the distribution of the sample proportions of orange candies makes up the third (and bottom) tier of the model. Sharing this model with students after they complete the simulation activity can help them better understand the simulation and distinguish between the different levels of data.
Examples of Simulation Activities
Generating Data by Specifying a Probability Model
In the One Son Policy simulation students are first presented with a research question about the consequences of the one son policy, where families continue to have children until they have one boy, then they stop. Students are then asked to make conjectures about the average family size and ratio of boys to girls under this policy. Then students simulate this policy, with coins and a computer applet. Students then compare their conjectures to their observed results. Through this simulation students gain a deeper understanding of the processes associated with probability models.
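The following short Python sketch is an illustration of what such a computer simulation might do behind the scenes (it is not the applet used in the activity); it assumes each birth is an independent 50/50 boy/girl event and that "family size" counts children only.

import random

def one_son_policy(n_families=10000, seed=1):
    # Each family keeps having children until the first boy, then stops.
    random.seed(seed)
    total_children, boys, girls = 0, 0, 0
    for _ in range(n_families):
        while True:
            total_children += 1
            if random.random() < 0.5:  # boy: the family stops
                boys += 1
                break
            girls += 1                 # girl: the family continues
    return total_children / n_families, boys / girls

avg_children, boy_girl_ratio = one_son_policy()
# avg_children tends toward 2 and boy_girl_ratio toward 1 as n_families grows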
Hypothesis Testing and Inference
In the Coke vs. Pepsi Taste Test Challenge students first design and conduct an experiment where students participate in a blind taste test. Students collect and analyze data on whether their peers can detect the difference in colas, using simulation to generate data to compare their results to.
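A rough Python sketch of the simulation step is shown below; the guessing probability under the null hypothesis and the observed count are assumptions for illustration only, since the class's actual design and results will differ.

import random

def null_distribution(n_students=30, p_guess=0.5, n_trials=1000, seed=1):
    # Number of correct identifications in a class where everyone is
    # simply guessing with probability p_guess on each taste test.
    random.seed(seed)
    return [sum(random.random() < p_guess for _ in range(n_students))
            for _ in range(n_trials)]

observed = 22  # hypothetical class result, for illustration only
sims = null_distribution()
p_value = sum(count >= observed for count in sims) / len(sims)
# A small p_value suggests the class result is unlikely under pure guessing.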
Sampling from a Population
In the Reese's Pieces Activity students first make a prediction about the proportion of orange Reese's Pieces in the population of Reese's Pieces candy, then randomly sample 25 candies and record the proportion of orange candies, then simulate data using an applet.
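A bare-bones Python sketch of the applet's role follows; the true proportion of orange candies used here (0.45) is only an assumed value for illustration.

import random

def sample_proportions(true_p=0.45, sample_size=25, n_samples=500, seed=1):
    # Proportion of orange candies in each of many simulated random samples.
    random.seed(seed)
    props = []
    for _ in range(n_samples):
        orange = sum(random.random() < true_p for _ in range(sample_size))
        props.append(orange / sample_size)
    return props

props = sample_proportions()
# A histogram of props approximates the sampling distribution of the sample
# proportion; its spread shrinks as sample_size is increased.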
Assessment of Learning after a Simulation Activity
There are different ways student learning can be assessed after using a simulation activity. These include:
- Assessing students' understanding of what the simulation is illustrating. For example, do students now understand the meaning of a 95% confidence interval.
- Assessing whether students can apply their learning to a different problem or context, such as critiquing a research finding that includes a confidence interval, where students interpret the correct use or misuse of the term margin of error.
- Assessing if students understand the simulation process. For example, students can be given the Simulation Process Model and be asked to map the different levels of data from a simulation activity to the three tiers of the model. | http://www.nagt.org/sp/library/datasim/how.html | 13 |
17 | Today’s discussion lesson will focus primarily on building understanding for the following three academic language objectives within what makes an effective argument: thesis statement, audience, and evidence.
· In the end, students will practice framing an argument of their own by writing down an outline that includes: their thesis statement, who their intended audience is, and their evidence.
To incorporate Web 2.0 technology into this lesson, I have designed a Prezi presentation (if you haven't used Prezi for your own presentations, I thoroughly recommend it!).
To be taken to an interactive presentation for this topic click here.
Suggestions for holding the discussion:
With the Prezi loaded on the screen, focus the students’ attention by asking them to raise their hand if they have ever gotten into an argument. Keep their hands raised if they’ve ever gotten into an argument with their care-givers or parents? With their peers? How did it feel? What do you remember? Ask the students to think about these experiences and what was frustrating about them.
What do you think an argument is? (ask students to provide different examples)
The students will mainly discuss three topics related to arguments in today’s class:
1. What are some examples you’ve experienced of when an argument is unfocused or unclear?
Provide several examples of what an unfocused or unclear argument is. Then, to discuss this topic, have students participate in a think-pair-share format about their own experiences with unclear arguments. After students have discussed for five minutes, introduce the concept of a thesis statement and ask elaborative interrogation questions like “Why is a thesis statement important?” and “How would a thesis statement have changed the experiences you mentioned?”
2. Why is it important for an argument to have the right audience in mind?
Remain in a large group discussion setting and pose several examples of argument scenarios to students with the wrong audience in mind (e.g., preparing a formal academic paper full of big words for a kindergarten class, talking about the importance of owning a snake in front of people who have fears of reptiles, etc.). Ask students to come up and share their own. Then discuss the concept of audience and what should change in our arguments if we are to talk to the correct audience.
3. What does it feel like when an argument uses unfair or illogical evidence to make its point?
Provide several examples of what unfair or illogical evidence in an argument is. To discuss this question students will break into small groups and talk about their own experiences. Regroup after a couple of minutes and discuss what constitutes effective evidence, what makes evidence reliable and objective. Students will be asked to provide examples of reliable and unreliable evidence sources.
To end the discussion, ask the students to begin wrapping up what was talked about by reviewing the main points. Ask different volunteers to recap what they think is important to remember about thesis statements, evidence, and audience. Provide certain arguments that the students are familiar with from prior lessons and have them break down the argument into what the thesis statement was, what the audience was, and what was used for evidence. Ask students to make other connections between what was talked about in class today and other topics and units that they have learned this year.
Rationale for lesson: This lesson originated in the middle of a unit on local government, where students prepared to participate in a classroom role-playing activity--a city council meeting hearing proposals for and against a proposed community center.
A presentation on the three elements of an effective argument was helpful to help frame how to create effective arguments for each of the multiple perspectives represented by the council and citizens in their scenario.
While this lesson does not need to be coupled with a problem-based learning scenario, beyond classroom discussion used to analyze and talk about what makes an effective argument, it is essential that students have the chance to practice writing out their own arguments--complete with a thesis statement, described audience, and evidence.
In the end, this lesson builds essential background knowledge for students about the elements of effective arguments through classroom discussion and small-group work.
Links to Prior Knowledge and Experiences: Students will hopefully come into this lesson with background knowledge in writing a basic essay in their language arts class and understanding the concept of a thesis statement. Students will also need to connect with prior knowledge about arguments in their own lives and what the strengths and weaknesses of these arguments have been.
Vocabulary: Beyond the academic language focuses of thesis statement, evidence, and audience, the following concepts/terms will also be discussed in this lesson: making a claim, counterargument, critical reading, reference.
The following website from the Writing Center at University of North Carolina has been helpful in preparing this lesson and is recommended for students seeking out additional direction: http://writingcenter.unc.edu/resources/handouts-demos/writing-the-paper/argument.
| http://www.sophia.org/elements-of-an-effective-argument-tutorial?subject=political-science | 13
47 | English Language Learners
Students must be prepared for future success in this increasingly complex and global society. As the rigor of academic expectations increases, there is one ever-present skill that students will always need…critical thinking. Today’s classrooms must focus on learning how to find the solution, rather than focusing on the answer; thus, critical thinking should be infused throughout the curriculum. The result is for all students to be taught to think critically.
The Revised Bloom’s – DOK Wheel provides educators quick and easy access to critical thinking tools for engaging student in rigorous and complex thinking to address the cognitive demands of standards and the cognitive demands of assessments. Students need to learn how to process information rather than merely memorizing. Teachers must know how to assess different levels of knowledge such as factual, conceptual, procedural, and metacognitive. Bloom’s Taxonomy uses verbs to classify levels of thinking or cognitive processing from low levels to high levels of thought. Depth of Knowledge (DOK) levels are determined by the context in which the verb is used and the depth of thinking required. Both models are beneficial as Bloom’s identifies the different levels of thinking and DOK reflects the depth of thinking required within the levels. Teachers can increase student learning which results in increased achievement if they regularly employ the models Revised Bloom’s and Depth of Knowledge to generate higher levels of student thinking.
Students must be guided to become producers of knowledge. An essential instructional task of the teacher is to design activities or to create an environment that allows students opportunities to engage in higher-order thinking (Queensland Department of Education, 2002). With the Revised Bloom’s – DOK Wheel, teachers can incorporate all levels of the taxonomy and address deeper levels of thought in order to develop and increase rigorous, complex thinking within students. Questions, tasks, learning activities, and assessment items in every subject area can be developed that enhance teaching and learning. This resource helps teachers individualize learning according to the interests, abilities, and specific learning needs present in the differentiated classroom, from special needs students to students in gifted education. Students can become active participants in independent and/or collaborative settings while acquiring and applying critical thinking.
Critical thinking is an important issue in education today. Attention is focused on quality thinking as an important element of life success (Huitt, 1998; Thomas and Smoot, 1994). In the 1950s, Bloom found that 95% of the test questions developed to assess learning required students to only think at the lowest level of learning, the recall of information. Similar findings indicated an overemphasis on lower-level questions and activities with little emphasis on the development of students’ thinking skills (Risner, Skeel, and Nicholson, 1992). “Perhaps most importantly in today’s information age, thinking skills are viewed as crucial for educated persons to cope with a rapidly changing world. Many educators believe that specific knowledge will not be as important to tomorrow’s workers and citizens as the ability to learn and make sense of new information” (Gough, 1991). “Now, a considerable amount of attention is given to students’ abilities to think critically about what they do” (Hobgood, Thibault, and Walberg, 2005). It is imperative for students to communicate their thinking coherently and clearly to peers, teachers, and others.
Critical thinking is crucial in all instruction as indicated by state or national standards. Critical thinking tasks allow students to explain their thought processes and offer teachers opportunities to identify the precise point at which students demonstrate misunderstanding of mathematical skills, strategies, or conceptual understanding. The literature notes that when students use their critical thinking abilities integrated with content instruction, depth of knowledge can result. Teachers are encouraged to refrain from limiting instruction to lectures or tasks to rote memorization that exercise only lower levels of thought as opposed to incorporating those which build conceptual understanding (Bransford, Brown, and Cocking, 2000).
The ability to engage in careful, reflective thought is viewed in education as paramount. Teaching students to become skilled thinkers is a goal of education. Students must be able to acquire and process information since the world is changing so quickly. Some studies purport that students exhibit insufficient levels of skill in critical or creative thinking. In his review of research on critical thinking, Norris (1985) surmised that students’ critical thinking abilities are not widespread. From this study, Norris reported that most students do not score well on tests that measure ability to recognize assumptions, evaluate controversy, and scrutinize inferences.
Thus, student performances on measures of higher-order thinking ability continue to reveal a critical need for students to develop the skills and attitudes of quality thinking. Furthermore, another reason that supports the need for thinking skills instruction is the fact that educators appear to be in general agreement that it is possible to increase students' creative and critical thinking capacities through instruction and practice. Presseisen (1986) asserts that the basic premise is students can learn to think better if schools teach them how to think. Adu-Febiri (2002) agrees that thinking can be learned. Students can be assisted in organizing the content of their thinking to facilitate complex reasoning. Revised Bloom’s - DOK Wheel assists teachers in actually facilitating students to think rather than providing students only with content knowledge.
The literature indicates Bloom’s Taxonomy is a widely accepted organizational structure to assist students in organizing the content of their thinking to facilitate complex reasoning. According to Sousa (2006), Bloom’s Taxonomy is compatible with the manner in which the brain processes information to promote comprehension. Bloom, Englehart, Furst, Hill, and Krathwohl (1956) developed this classification system for levels of intellectual behavior in learning. Bloom’s Taxonomy contains three domains: the cognitive, psychomotor, and affective. Within the cognitive domain, Bloom identified six levels: knowledge, comprehension, application, analysis, synthesis, and evaluation. This domain and all levels are still useful today in developing critical thinking skills in students.
Anderson and Krathwohl (2001) revised Bloom’s Taxonomy in order to present a useful framework to educators as they work to align curriculum, instruction, and assessment. Basically, the six level names were changed to verbs to portray thinking as an active process. Knowledge changed to Remember, Comprehension to Understanding, Application to Apply, Analysis to Analyze, Synthesis to Create, and Evaluation to Evaluate. Create (originally Synthesis) was moved to the sixth level, since the revision reflected this level as containing critical and creative thought, showing it to be a higher level than Evaluate. The types of thinking were called the Cognitive Dimension and are ordered in terms of increasing complexity. In the newly revised taxonomy, a second dimension came into being, the Knowledge Dimension. Four types of knowledge were identified: Factual, Conceptual, Procedural, and Metacognitive. According to Anderson, et al., “All subject matters are composed of specific content, but how this content is structured by teachers in terms of their objectives and instructional activities results in different types of knowledge being emphasized in the unit.” Students must make meaning of knowledge at a deeper level or integrate or organize it in a useful way. When teachers can better organize the knowledge and content of subject matter or ‘what students think about’ (facts, concepts, procedures, or metacognition), they can help students make more meaningful transfers or applications. Research indicates that when students use higher cognitive processing with factual information, then higher retention results (Hunkins, 1995; Sprenger, 2005). In conclusion, the Knowledge Dimension does not demonstrate whether students are involved in higher levels of thought with selected questions; the levels of cognitive engagement (Cognitive Dimension) are what drive thinking. Emphasis is placed on the fact that the Revised Taxonomy should be adapted and used in the way educators find the taxonomy to best serve their needs. Teaching critical thinking skills is one of the greatest challenges facing teachers in the classroom today. The Original and Revised Taxonomies are useful today in developing and categorizing critical thinking skills of students. Bloom’s Taxonomy continues to be a widely accepted model for the development of higher level thinking.
The model Depth of Knowledge (DOK) was developed by Norman Webb in 1997 for the purpose of analyzing alignment between standards and assessments (Webb, 1997). Webb’s DOK is now being used to measure the different levels of cognitive complexity in academic standards and curriculum (Webb 2002; 2006). Dr. Webb advocates the necessity of instructional tasks and assessment items matching the standards. Webb stresses that educators become aware of the level of demonstration required by students when tasks and assessment items are developed, thus the rationale for his four levels of DOK. Each level expresses a different level of cognitive expectation or depth of knowledge. Level 1 requires students to recall information. Level 2 asks students to think beyond reproduction of responses. Students use more than one cognitive process or follow more than one step at this level. Students at Level 3 demonstrate higher and more complex levels of thought than the previous levels. Responses may have multiple answers, yet often students must choose one and justify the reasoning behind the selection. Level 4 requires students to make several connections with ideas. Typically, performance assessments and open-ended responses are created for this level of thought.
In Bloom’s Taxonomy, the focus is on the activities of the student such as apply, analyze, or create. DOK places the emphasis on the complexity of the cognitive processes (applying, analyzing, creating,) that each of the activities requires of the students. Complexity relates to the cognitive steps students engage as they arrive at solutions or answers. Thus, Bloom’s is a structure that identifies the type of thinking students demonstrate; whereas, DOK is a structure that determines what students know and to what depth they exhibit that knowledge. Both are measures of higher-order thinking.
Revised Bloom’s – DOK Wheel, showcases the Revised Bloom’s Taxonomy and Webb’s Depth of Knowledge in a wheel format. This educator’s tool features six levels of Revised Bloom’s Taxonomy on one side and the four levels of Webb’s Depth of Knowledge on the other. One side of the wheel has six windows showcasing Student Expectations and Engagement (Questioning) Prompts for Revised Bloom’s Taxonomy. The face describes each of the six levels, identifies Cognitive Processes (Verbs) for each level, and defines the Knowledge Dimension of Bloom’s Taxonomy. The opposite side of the wheel has four windows showcasing Student Expectations and Engagement Prompts for the four levels of Webb’s Depth of Knowledge (DOK). The face describes each of the four levels of DOK and identifies Cognitive Processes for each level.
Rigorous critical thought is an important issue in education today; thus, the reason attention is focused on quality thinking as an important element of life success (Huitt, 1998; Thomas and Smoot, 1994). However, Wagner (2008) noted that despite all of the literature that advocates rigor and complexity in thinking, researchers found that elementary students spend more than 90 percent of their time sitting and listening to teachers. Questions are posed, yet many teachers continue to only allow one student to respond at a time. Walsh and Sattes (2011) advocate that classroom expectations should exist where all students are expected to compose responses to all questions posed. Teachers must model for students how questions should be asked of students and teachers, with students providing evidence or examples that support given responses. Anderson and Krathwohl (2001) advise teachers to also ask questions about the remember and understand levels of Revised Bloom’s Taxonomy. Likewise, questions should often be asked at DOK 2 and 3 levels.
Research indicates that thinking skills instruction makes a positive difference in the achievement levels of students. Studies that reflect achievement over time show that learning gains can be accelerated. These results indicate that the teaching of thinking skills can enhance the academic achievement of participating students (Bass and Perkins, 1984; Bransford, 1986; Freseman, 1990; Kagan, 1988; Matthews, 1989; Nickerson, 1984). Critical thinking is a complex activity and we should not expect one method of instruction to prove sufficient for developing each of its component parts. Carr (1990) acknowledges that while it is possible to teach critical thinking and its components as separate skills, they are developed and used best when learned in connection with content knowledge. To develop competency in critical thinking, students must use these skills across the disciplines or the skills could simply decline and disappear. Teachers should expect students to use these skills in every class and evaluate their skills accordingly. Hummel and Huitt (1994) stated, "What you measure is what you get."
Students are not likely to develop these complex skills or to improve their critical thinking if educators fail to establish definite expectations and measure those expectations with some type of assessment. Assessments (e.g., tests, demonstrations, exercises, panel discussions) that target higher-level thinking skills could lead teachers to teach content at those levels, and students, according to Redfield and Rousseau (1981), to perform at those levels. Students may know an enormous amount of facts, concepts, and principles, but they also must be able to effectively process knowledge in a variety of increasingly complex ways. The questioning or engagement prompts in this valuable teacher resource can be used to plan daily instruction as students explore content and gather knowledge; they can be used as periodic checkpoints for understanding; they can be used as a practice review or in group discussions; or they could be used as ongoing assessment tools as teachers gather formative and summative data.
Teachers play a key role in promoting critical thinking between and among students. Questioning stems in the content areas act as communication tools. Four forms of communication are affected in critical thinking: speaking, listening, reading, and writing. A wide range of questioning prompts are provided to encourage students to think critically, which contributes to their intellectual growth. This educational resource relates to any content that is presented to students and saves teachers activity preparation time. A teacher must examine what he/she fully intends to achieve from the lesson and then select the appropriate critical thinking engagement prompt(s) to complement the instructional purpose or the cognitive level of thinking. The questioning stem itself influences the level of thinking or determines the depth of thinking that occurs.
Solving problems in the real world and making worthwhile decisions is valued in our rapidly changing environment today. Paul (1985) points out that “thinking is not driven by answers but by questions.” The driving forces in the thinking process are the questions. When a student needs to think through an idea or issue or to rethink anything, questions must be asked to stimulate thought. When answers are given, sometimes thinking stops completely. When an answer generates another question then thought continues.
Questions lead to understanding. Many students typically have no questions. They might sit in silence with their minds inactive as well. Sometimes the questions students have tend to be shallow and nebulous which might demonstrate that they are not thinking through the content they are expected to be learning. If we, as educators, want students to think, we must stimulate and cultivate thinking with questions (Paul, 1990). By engaging students in a variety of questioning that relates to the idea or content being studied, students develop and apply critical thinking skills. Consequently, by using the analysis, synthesis, and evaluation levels of Bloom’s Taxonomy and DOK Levels 2, 3, and 4, students are challenged to work at tasks that are more demanding and thought-provoking. These kinds of tasks can lead to situations and tasks where students make real-life connections.
Teachers need to plan for the type of cognitive processing they wish to foster and then design learning environments and experiences accordingly. Studies suggest that the classroom environment can be arranged to be conducive to high-level thinking. The findings include the following: an environment free from threats, multi-level materials, acceptance of diversity, flexible grouping, the teacher as a co-learner, and a nurturing atmosphere. A climate which promotes psychological safety and one in which students respect each other and their ideas appears to be the most beneficial (Klenz, 1987; Marzano, Brandt, Hughes, Jones, Presseisen, Rankin, and Suhor, 1988). Sometimes it is necessary to lecture. Other times, the teacher balances methods of instruction by providing opportunities for the students to take some ownership of their learning. Lovelace (2005) concluded that matching a student’s learning style with the instruction can improve academic achievement and student attitudes toward learning. In addition, there are stems identified that allow students to demonstrate learning and thinking using visual, auditory, or tactile/kinesthetic modes. The range of activities or tasks run the gamut from creative opportunities (writing a poem, composing a song, designing an advertisement, constructing a model) to participating in a panel discussion, presenting a speech, conducting a survey, holding an interview, using a graphic organizer, or simply compiling a list. The Revised Bloom’s – DOK Wheel is a vital tool in establishing a thinking-centered environment.
“Multiple forms of student engagement exist when high-level thinking is fostered. Examples of engagement include: collaborative group activities, problem-solving experiences, open-ended questions that encourage divergent thinking, activities that promote the multiple intelligences and recognize learning styles, and activities in which both genders participate freely. Brain researchers suggest teachers use a variety of higher-order questions in a supportive environment to strengthen the brain” (Cardellichio and Field, 1997). “Meaningful learning requires teachers to change their role from sage to guide, from giver to collaborator, from instructor to instigator” (Ó Murchú, 2003). “Since students learn from thinking about what they are doing, the teacher’s role becomes one who stimulates and supports activities that engage learners in critical thinking” (Bhattacharya, 2002). The Revised Bloom’s - DOK Wheel provides support to the teacher in promoting different levels and depths of student engagement. The role of the teacher is just as important as it has always been, perhaps more so. Teachers scaffold learning so that students can assume a more participatory role in their own learning. This means that lessons are in fact more carefully constructed to guide students through the exploration of content using both the Revised Bloom’s and DOK frameworks for critical thinking. Attention to Bloom’s Taxonomy and Webb’s DOK does not mean that every class period must be optimally designed to place students in inquiry-based roles. Teaching requires that we constantly assess where students are and how best to address their needs.
The No Child Left Behind Act of 2001 as well as academic standards emphasize the need for evidence-based instructional materials. Mentoring Minds Product Development Team sought to develop an educational resource that teachers could employ as they develop K-12 students who value knowledge and learning, and as they prepare students for life beyond the classroom. This preparedness consists of a culture of thoughtful learning if the goal is for students to advance thinking (Perkins, 1992). It appears important for students to learn the language of thinking to better communicate their thoughts. The Revised Bloom’s – DOK Wheel contains engagement prompts and cognitive processes which students may use in their daily academic conversations. Students should be encouraged to process their thoughts through questioning prompts, tasks, or the processes or verbs identified on the wheel. These components can jumpstart conversations with peers, discussions in small groups, or those with teachers. This wheel can become a building block for facilitating student interactions with texts, digital media, other students, and in other meaningful ways. Teachers may utilize the wheel to directly motivate and teach students to be purposeful and thoughtful in their thinking. Thus, the thinking culture can be strengthened in classrooms, creating independent lifelong learners.
The conclusions reached by researchers substantiate the fact that students achieve more when they manipulate topics at the higher levels of rigorous thought. These skills have little value without the ability to know how, when, and where to apply them. The utilization of the Revised Bloom’s - DOK Wheel provides direction to teachers as they apply the levels of Bloom’s Taxonomy and strengthen the abilities of students to think in depth using Webb’s Depth of Knowledge (DOK) framework. Incorporating the Revised Bloom’s – DOK Wheel as a planning tool for high-quality classroom instruction and assessment, the teacher can structure learning experiences to promote complexity of thought as well as teaching students how to learn as opposed to simply what to learn. Mentoring Minds seeks to support educators in their endeavors to help students acquire life-long skills of becoming independent thinkers and problem solvers.
Bibliography for Revised Bloom’s – DOK Wheel
Adu-Febiri, F. (2002). Thinking skills in education: ideal and real academic cultures. CDTL Brief, 5, Singapore: National University of Singapore.
Anderson, L., et al. (2001). A taxonomy for learning, teaching, and assessing – A revision of Bloom’s Taxonomy of educational objectives. New York: Addison Wesley Longman, Inc.
Bass, G., Jr., & Perkins, H. (1984). Teaching critical thinking skills with CAI. Electronic Learning, 14, 32, 34, 96.
Bhattacharya, M. (2002). Creating a meaningful learning environment using ICT. CDTL Brief, 5, Singapore: National University of Singapore. Retrieved March 2007 from http://www.cdtl.nus.edu.sg/brief/v5n3/sec3.htm
Bloom, B., Englehart, M., Furst, E., Hill, W., & Krathwohl, D. (1956). Taxonomy of educational objectives: The classification of educational goals. Handbook I: Cognitive Domain. New York: Longmans Green.
Bransford, J.D., Burns, M., Delclos, V., & Vye, N. (1986). Teaching thinking: evaluating evaluations and broadening the data base. Educational Leadership, 44, 68-70.
Carr, K. (1990). How can we teach critical thinking? ERIC Digest. ERIC NO. : ED326304.
Cardellichio, T. & Field, W. (1997). Seven strategies to enhance neural branching. Educational Leadership, 54, (6).
Education Queensland. (2002). What is higher-order thinking? A guide to Productive Pedagogies: Classroom reflection manual. Queensland: Department of Education.
Farkas, R.D. (2003). "Effects of traditional versus learning-styles instructional methods on middle school students. Journal of Educational Research, 97, 43-81.
Freseman, R. (1990). Improving Higher Order Thinking of Middle School Geography Students By Teaching Skills Directly. Fort Lauderdale, FL: Nova University.
Gough, D. (1991). Thinking about Thinking. Alexandria, VA: National Association of Elementary School Principals.
Hobgood, B. , Thibault, M. , & Walbert, D. (2005). Kinetic connections: Bloom’s taxonomy in action. University of North Carolina at Chapel Hill: Learn NC.
Huitt, W. (1998). Critical thinking: An overview. Educational Psychology Interactive. Valdosta, GA: Valdosta State University. Retrieved May 5, 2007 from http://chiron.valdosta.edu/whuitt/col/cogsys/critthnk.html. [Revision of paper presented at the Critical Thinking Conference sponsored by Gordon College, Barnesville, GA, March 1993.]
Hummel, J., & Huitt, W. (1994). What you measure is what you get. Georgia ASCD Newsletter: The Reporter, 10-11.
Hunkins, F. (1995). Teaching thinking through effective questioning (2nd ed.). Norwood, MA: Christopher-Gordon.
Kagan, D. (1988). Evaluating a language arts program designed to teach higher level thinking skills. Reading Improvement (25), 29-33.
Klenz, S. (1987). Creative and Critical Thinking, Saskatchewan Education Understanding the Common Essential Learnings, Regina, SK: Saskatchewan Education.
Lovelace, M. (2005). Meta-analysis of experimental research based on the Dunn and Dunn model. Journal of Educational Research, 98: 176-183.
Marzano, R., Brandt, R., Hughes, C., Jones, B., Presseisen, B., Rankin, S., & Suhor, C. (1988). Dimensions of Thinking: A Framework for Curriculum and Instruction. Alexandria, VA: Association for Supervision and Curriculum Development.
Matthews, D. (1989). The effect of a thinking-skills program on the cognitive abilities of middle school students. Clearing House, 62, 202-204.
Nickerson, R. (1984). Research on the Training of Higher Cognitive Learning and Thinking Skills. Final Report # 5560. Cambridge, MA: Bolt, Beranek and Newman, Inc.
Norris, S.P. (1985). Synthesis of research on critical thinking. Educational Leadership, 42, 40-45.
Ó Murchú, D. (2003). Mentoring, Technology and the 21st Century’s New Perspectives, Challenges and Possibilities for Educators. Second Global Conference, Virtual Learning & Higher Education, Oxford, UK.
Paul, R.W. (1985). Bloom’s taxonomy and critical thinking instruction. Educational Leadership, 42, 36-39.
Paul, R. (1990). Critical Thinking: What Every Person Needs to Survive in a Rapidly Changing World. Rohnert Park, CA: Center for Critical Thinking and Moral Critique.
Presseisen, B.Z. (1986). Critical Thinking and Thinking Skills: State of the Art Definitions and Practice in Public Schools. Paper presented at the Annual Meeting of the American Educational Research Association, San Francisco, CA.
Redfield, D. L. , & Rousseau, E. W. (1981). A meta-analysis of experimental research on teacher questioning behavior. Review of Educational Research, 51, 181-193.
Risner, G., Skeel, D., & Nicholson, J. (1992). A closer look at textbooks. The Science Teacher, 61(7), 42–45.
Sousa, D. (2006). How the brain learns. Thousand Oaks, CA: Corwin Press.
Sprenger, M. (2005). How to teach students to remember. Alexandria, VA: Association for Supervision and Curriculum Development.
Tama, C. (1989). Critical thinking has a place in every classroom. Journal of Reading, 33, 64-65.
Thomas, G., & Smoot, G. (1994, February/March). Critical thinking: A vital work skill. Trust for Educational Leadership, 23, 34-38.
Walsh, J. & Sattes, B. (2011). Thinking through quality questioning. Thousand Oaks, CA: Corwin, A SAGE Company.
Webb, N. L. (1997). Criteria for alignment of expectations and assessments in mathematics and science education. Council of Chief State School Officers and National Institute for Science Education Research Monograph No. 6. Madison: University of Wisconsin, Wisconsin Center for Education Research.
Webb, N. (2006). Depth-of-Knowledge (DOK) levels for reading. Retrieved Spring 2010 from http://www.education.ne.gov/assessment/pdfs/Reading_DOK.pdf
Webb, N. (2002). Depth-of-Knowledge levels for four content areas. Wisconsin Center for Educational Research.
| http://www.mentoringminds.com/research/english-language-learners-guide | 13
16 | In biology, transcription is the cellular process of synthesizing RNA based on a DNA template. DNA transcription generates the information-carrying messenger RNAs (mRNAs) used for protein synthesis as well as the other RNA molecules (transfer RNA, ribosomal RNA, etc.) that have catalytic and structural roles in the cell.
In transcription, molecules of RNA are synthesized based on the information stored in DNA, although utilizing only a portion of the DNA molecule to produce the much smaller RNAs. Both nucleic acid sequences, DNA and RNA, use complementary language, and the information is simply transcribed, or copied, from one molecule to the other. One significant difference between the RNA and DNA sequences is the substitution of the base uracil (U) in RNA in place of the closely related base thymine (T) of DNA. Both of these bases pair with adenine (A).
The process of transcription, which is critical for all life and serves as the first stage in building proteins, is very complex and yet remarkably precise. The harmony underlying nature is reflected in the intricate coordination involved in producing RNA molecules from particular segments of the DNA molecule.
Overview of basic process
Transcription, or RNA synthesis, is the process of transcribing DNA nucleotide sequence information into RNA sequence information. The RNA retains the information of the specific region of the DNA sequence from which it was copied.
DNA transcription is similar to DNA replication in that one of the two strands of DNA acts as a template for the new molecule. However, in DNA replication, the new strand formed remains annealed to the DNA strand from which it was copied, whereas in DNA transcription the single-stranded RNA product does not remain attached to the DNA strand, but rather is released as the DNA strand reforms. In addition, RNA molecules are short and are only copied from a portion of the DNA (Alberts et al. 1989).
Transcription has some proofreading mechanisms, but they are fewer and less effective than the controls for copying DNA; therefore, transcription has a lower copying fidelity than DNA replication (Berg et al. 2006).
Synthesis of RNA molecules is done by RNA polymerase enzymes. Eukaryotes have different RNA polymerase molecules to synthesize different types of RNA but most of our knowledge of RNA polymerase comes from the single enzyme that mediates all RNA synthesis in bacteria (Alberts et al. 1989). Both bacterial and eukaryotic RNA polymerases are large, complicated molecules with a total mass of over 500,000 daltons (Alberts et al. 1989).
The stretch of DNA that is transcribed into an RNA molecule is called a transcription unit. A DNA transcription unit that is translated into protein contains sequences that direct and regulate protein synthesis in addition to coding the sequence that is translated into protein. RNA molecules, like DNA molecules, have directionality, which is indicated by reference to either the 5’ end or the 3’ (three prime) end (Zengel 2003). The regulatory sequence that is before (upstream (-), towards the 5' DNA end) the coding sequence is called 5' untranslated region (5'UTR), and sequence found following (downstream (+), towards the 3' DNA end) the coding sequence is called 3' untranslated region (3'UTR).
As in DNA replication, RNA is synthesized in the 5' → 3' direction (from the point of view of the growing RNA transcript). Only one of the two DNA strands is transcribed. This strand is called the “template strand,” because it provides the template for ordering the sequence of nucleotides in an RNA transcript. The other strand is called the coding strand, because its sequence is the same as the newly created RNA transcript (except for uracil being substituted for thymine). The DNA template strand is read 3' → 5' by RNA polymerase and the new RNA strand is synthesized in the 5'→ 3' direction.
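This base-pairing rule is mechanical enough to sketch in a few lines of Python. The short example below is purely illustrative (the function and variable names are invented for this sketch, not part of any standard library): it takes a template strand written 3' → 5', the direction in which the polymerase reads it, and returns the complementary RNA written 5' → 3'.
PAIRING = {"A": "U", "T": "A", "G": "C", "C": "G"}
def transcribe(template_3_to_5):
    """Return the mRNA (5'->3') complementary to a DNA template strand given 3'->5'."""
    return "".join(PAIRING[base] for base in template_3_to_5.upper())
template = "TACGGCATT"   # template strand, written 3'->5'
coding = "ATGCCGTAA"     # coding strand, written 5'->3' (same as the mRNA, with T in place of U)
mrna = transcribe(template)
print(mrna)                              # AUGCCGUAA
assert mrna == coding.replace("T", "U")  # the transcript mirrors the coding strand
As the final check suggests, the transcript comes out identical to the coding strand except for the uracil-for-thymine substitution, which is exactly the relationship described above.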
The RNA polymerase enzyme begins synthesis at a specific start signal on the DNA (called a promoter) and ends its synthesis at a termination signal, whereupon the complete RNA chain and the polymerase are released (Alberts et al. 1989). Essentially, a polymerase binds to the 3' end of a gene (promoter) on the DNA template strand and travels toward the 5' end. The promoter determines which of the two strands of DNA are transcribed for the particular region of DNA being transcribed (Alberts et al. 1989). During transcription, the RNA polymerase, after binding to promoter, opens up a region of DNA to expose the nucleotides and moves stepwise along the DNA, unwinding the DNA to expose areas for transcription, and ends when it encounters the termination signal (Alberts et al. 1989).
One function of DNA transcription is to produce messenger RNAs for the production of proteins via the process of translation. DNA sequence is enzymatically copied by RNA polymerase to produce a complementary nucleotide RNA strand, called messenger RNA (mRNA), because it carries a genetic message from the DNA to the protein-synthesizing machinery of the cell in the ribosomes. In the case of protein-encoding DNA, transcription is the first step that usually leads to the expression of the genes, by the production of the mRNA intermediate, which is a faithful transcript of the gene's protein-building instruction.
In mRNA, as in DNA, genetic information is encoded in the sequence of four nucleotides arranged into codons of three bases each. Each codon encodes for a specific amino acid, except the stop codons that terminate protein synthesis. With four different nucleotides, there are 64 different codons possible. All but three of these combinations (UAA, UGA, and UAG—the stop codons) code for a particular amino acid. However, there are only twenty amino acids, so some amino acids are specified by more than one codon (Zengel 2003).
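The counting in the previous paragraph, four bases taken three at a time, can be verified directly. The Python sketch below is illustrative only, with invented helper names; it enumerates the 64 possible codons, sets aside the three stop codons, and splits an mRNA reading frame into triplets.
from itertools import product
BASES = "ACGU"
STOP_CODONS = {"UAA", "UAG", "UGA"}
# Four bases taken three at a time gives 4**3 = 64 possible codons.
all_codons = ["".join(triplet) for triplet in product(BASES, repeat=3)]
assert len(all_codons) == 64
assert len(set(all_codons) - STOP_CODONS) == 61  # codons that specify amino acids
def codons(mrna):
    """Split an mRNA sequence into consecutive, non-overlapping triplets."""
    return [mrna[i:i + 3] for i in range(0, len(mrna) - len(mrna) % 3, 3)]
print(codons("AUGCCGUAA"))  # ['AUG', 'CCG', 'UAA'] -- the final triplet is a stop codon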
Unlike DNA replication, mRNA transcription can involve multiple RNA polymerases on a single DNA template and multiple rounds of transcription (amplification of particular mRNA), so many mRNA molecules can be produced from a single copy of a gene.
DNA transcription also produces transfer RNAs (tRNAs), which also are important in protein synthesis. Transfer RNAs transport amino acids to the ribosomes and then act to transfer the correct amino acid to the correct part of the growing polypeptide. Transfer RNAs are small noncoding RNA chains (74-93 nucleotides). They have a site for amino acid attachment, and a site called an anticodon. The anticodon is an RNA triplet complementary to the mRNA triplet that codes for their cargo amino acid. Each tRNA transports only one particular amino acid.
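Codon-anticodon pairing follows the same complementarity rule, applied antiparallel. The sketch below is again only an illustration (the function name is an invented example): it reverses a codon and complements each base, so the anticodon is also reported 5' → 3'.
RNA_PAIR = {"A": "U", "U": "A", "G": "C", "C": "G"}
def anticodon(codon_5_to_3):
    """Return the tRNA anticodon (5'->3') that pairs with an mRNA codon (5'->3')."""
    # Pairing is antiparallel, so reverse the codon before complementing each base.
    return "".join(RNA_PAIR[base] for base in reversed(codon_5_to_3.upper()))
print(anticodon("AUG"))  # CAU -- pairs with the AUG start codon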
Transcription is divided into five stages: pre-initiation, initiation, promoter clearance, elongation, and termination.
Prokaryotic vs. eukaryotic transcription
There are a number of significant differences between prokaryotic transcription and eukaryotic transcription.
A major distinction is that prokaryotic transcription occurs in the cytoplasm alongside translation. Eukaryotic transcription is localized to the nucleus, where it is separated from the cytoplasm by the nuclear membrane. The transcript is then transported into the cytoplasm where translation occurs.
Another important difference is that eukaryotic DNA is wound around histones to form nucleosomes and packaged as chromatin. Chromatin has a strong influence on the accessibility of the DNA to transcription factors and the transcriptional machinery including RNA polymerase.
In prokaryotes, mRNA is not modified. Eukaryotic mRNA is modified through RNA splicing, 5' end capping, and the addition of a polyA tail.
In prokaryotes, all RNA synthesis is mediated by a single RNA polymerase molecule, while in eukaryotes there are three different RNA polymerases: one makes all of the mRNAs for protein synthesis, and the others make RNAs with structural and catalytic roles (tRNAs, rRNAs, and so on).
Unlike DNA replication, transcription does not need a primer to start. RNA polymerase simply binds to the DNA and, along with other co-factors, unwinds the DNA to create an initial access to the single-stranded DNA template (the initiation bubble). However, RNA polymerase does require a promoter, so that it has a defined sequence at which to begin transcription.
Proximal (core) promoters: TATA promoters are found around -10 and -35 bp upstream of the transcription start site. Not all genes have TATA box promoters; TATA-less promoters exist as well. The TATA promoter consensus sequence is TATA(A/T)A(A/T). Some strong promoters also contain UP sequences, which allow certain RNA polymerases to bind at greater frequencies.
The following steps are involved in TATA promoter complex formation: the general transcription factors bind in the order TFIID, TFIIA, TFIIB, TFIIF (with RNA polymerase), and TFIIE/TFIIH. The resulting assembly is called the closed pre-initiation complex. Once the structure is opened by TFIIH, initiation starts.
In bacteria, transcription begins with the binding of RNA polymerase to the promoter in DNA. The RNA polymerase is a core enzyme consisting of five subunits: 2 α subunits, 1 β subunit, 1 β' subunit, and 1 ω subunit. At the start of initiation, the core enzyme is associated with a sigma factor (σ70) that aids in finding the appropriate -35 and -10 base-pair promoter sequences.
Transcription initiation is far more complex in eukaryotes, the main difference being that eukaryotic polymerases do not directly recognize their core promoter sequences. In eukaryotes, a collection of proteins called transcription factors mediate the binding of RNA polymerase and the initiation of transcription. Only after certain transcription factors are attached to the promoter does the RNA polymerase bind to it. The completed assembly of transcription factors and RNA polymerase bound to the promoter is called the transcription initiation complex. Transcription in archaea is similar to transcription in eukaryotes (Ouhammouch et al. 2003).
After the first bond is synthesized, the RNA polymerase must clear the promoter. During this time there is a tendency to release the RNA transcript and produce truncated transcripts. This is called abortive initiation and is common for both eukaryotes and prokaryotes. Once the transcript reaches approximately 23 nucleotides, it no longer slips and elongation can occur. This is an ATP-dependent process.
Promoter clearance also coincides with phosphorylation of serine 5 on the carboxy-terminal domain of RNA polymerase II, which is carried out by TFIIH.
One strand of DNA, the template strand (or noncoding strand), is used as a template for RNA synthesis. As transcription proceeds, RNA polymerase traverses the template strand and uses base pairing complementarity with the DNA template to create an RNA copy. Although RNA polymerase traverses the template strand from 3' → 5', the coding (non-template) strand is usually used as the reference point, so transcription is said to go from 5' → 3'. This produces an RNA molecule from 5' → 3', an exact copy of the coding strand (except that thymines are replaced with uracils, and the nucleotides are composed of a ribose (5-carbon) sugar where DNA has deoxyribose (one less oxygen atom) in its sugar-phosphate backbone).
In producing mRNA, multiple RNA polymerases can be involved on a single DNA template and result in many mRNA molecules from a single gene via multiple rounds of transcription.
This step also involves a proofreading mechanism that can replace incorrectly incorporated bases.
Prokaryotic elongation starts with the "abortive initiation cycle." During this cycle RNA polymerase will synthesize mRNA fragments 2-12 nucleotides long. This continues to occur until the σ factor rearranges, which results in the transcription elongation complex (which gives a 35 bp moving footprint). The σ factor is released before 80 nucleotides of mRNA are synthesized.
In eukaryotic transcription, the polymerase can experience pauses. These pauses may be intrinsic to the RNA polymerase or due to chromatin structure. Often the polymerase pauses to allow appropriate RNA editing factors to bind.
Bacteria use two different strategies for transcription termination. In Rho-independent transcription termination, RNA transcription stops when the newly synthesized RNA molecule forms a G-C rich hairpin loop, followed by a run of U's, which makes it detach from the DNA template. In the "Rho-dependent" type of termination, a protein factor called "Rho" destabilizes the interaction between the template and the mRNA, thus releasing the newly synthesized mRNA from the elongation complex.
Transcription termination in eukaryotes is less well understood. It involves cleavage of the new transcript, followed by template-independent addition of As at its new 3' end, in a process called polyadenylation.
Active transcription units are clustered in the nucleus, in discrete sites called “transcription factories.” Such sites could be visualized after allowing engaged polymerases to extend their transcripts in tagged precursors (Br-UTP or Br-U), and immuno-labeling the tagged nascent RNA. Transcription factories can also be localized using fluorescence in situ hybridization, or marked by antibodies directed against polymerases. There are ~10,000 factories in the nucleoplasm of a HeLa cell, among which are ~8,000 polymerase II factories and ~2,000 polymerase III factories. Each polymerase II factory contains ~8 polymerases. As most active transcription units are associated with only one polymerase, each factory will be associated with ~8 different transcription units. These units might be associated through promoters and/or enhancers, with loops forming a "cloud" around the factory.
A molecule that allows the genetic material to be realized as a protein was first hypothesized by Jacob and Monod. RNA synthesis by RNA polymerase was established in vitro by several laboratories by 1965; however, the RNA synthesized by these enzymes had properties that suggested the existence of an additional factor needed to terminate transcription correctly.
In 1972, Walter Fiers became the first person to actually prove the existence of the terminating enzyme.
Roger D. Kornberg won the 2006 Nobel Prize in Chemistry "for his studies of the molecular basis of eukaryotic transcription" (NF 2006).
Some viruses (such as HIV) have the ability to transcribe RNA into DNA. HIV has an RNA genome that is duplicated into DNA. The resulting DNA can be merged with the DNA genome of the host cell.
The main enzyme responsible for synthesis of DNA from an RNA template is called reverse transcriptase. In the case of HIV, reverse transcriptase is responsible for synthesizing a complementary DNA strand (cDNA) to the viral RNA genome. An associated enzyme, ribonuclease H, digests the RNA strand, and reverse transcriptase synthesizes a complementary strand of DNA to form a double helix DNA structure. This cDNA is integrated into the host cell's genome via another enzyme (integrase) causing the host cell to generate viral proteins, which reassemble into new viral particles. Subsequently, the host cell undergoes programmed cell death (apoptosis).
Some eukaryotic cells contain an enzyme with reverse transcription activity called telomerase. Telomerase is a reverse transcriptase that lengthens the ends of linear chromosomes. Telomerase carries an RNA template from which it synthesizes a repeating DNA sequence, or "junk" DNA. This repeated sequence of "junk" DNA is important because every time a linear chromosome is duplicated, it is shortened in length. With "junk" DNA at the ends of chromosomes, the shortening eliminates some repeated, or junk sequence, rather than the protein-encoding DNA sequence that is further away from the chromosome ends. Telomerase is often activated in cancer cells to enable cancer cells to duplicate their genomes without losing important protein-coding DNA sequence. Activation of telomerase could be part of the process that allows cancer cells to become technically immortal.
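The buffering role of these terminal repeats can be made concrete with a toy calculation. The numbers below are arbitrary, chosen only to illustrate the bookkeeping, and the function is an invented example rather than a model of real telomere biology.
LOSS_PER_DIVISION = 2  # repeat units lost from each chromosome end per division (arbitrary)
def remaining_repeats(repeats, divisions, telomerase_active=False):
    """Return how many terminal repeat units remain after a number of cell divisions."""
    for _ in range(divisions):
        repeats = max(repeats - LOSS_PER_DIVISION, 0)  # replication trims the chromosome end
        if telomerase_active:
            repeats += LOSS_PER_DIVISION               # telomerase re-extends the lost repeats
    return repeats
print(remaining_repeats(100, 30))                           # 40: the buffer shrinks, coding DNA untouched
print(remaining_repeats(100, 30, telomerase_active=True))   # 100: length maintained, as in many cancer cells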
- Alberts, B., D. Bray, J. Lewis, M. Raff, K. Roberts, and J. D. Watson. 1989. Molecular Biology of the Cell, 2nd edition. New York: Garland Publishing. ISBN 0824036956.
- Berg, J., J. L. Tymoczko, and L. Stryer. 2006. Biochemistry, 6th edition. San Francisco: W. H. Freeman. ISBN 0716787245.
- Brooker, R. J. 2005. Genetics: Analysis and Principles, 2nd edition. New York: McGraw-Hill.
- Ouhammouch, M., R. E. Dewhurst, W. Hausner, M. Thomm, and E. P. Geiduschek. 2003. Activation of archaeal transcription by recruitment of the TATA-binding protein. Proceedings of the National Academy of Sciences of the United States of America 100(9): 5097–5102. PMID 12692306. Retrieved February 20, 2009.
- Nobel Foundation (NF). 2006. The Nobel Prize in Chemistry 2006: Roger D. Kornberg. Nobel Foundation. Retrieved February 20, 2009.
- Zengel, J. 2003. Translation. In R. Robinson, Genetics. New York: Macmillan Reference USA. OCLC 55983868.
All links retrieved February 20, 2009.
- Interactive Java simulation of transcription initiation. From Center for Models of Life at the Niels Bohr Institute.
- Interactive Java simulation of transcription interference—a game of promoter dominance in bacterial virus. From Center for Models of Life at the Niels Bohr Institute.
| http://www.newworldencyclopedia.org/entry/Transcription_(genetics) | 13
25 | Origins of the American Civil War
Historians debating the origins of the American Civil War focus on the reasons seven states declared their secession from the U.S. and joined to form the Confederate States of America (the "Confederacy"). The main explanation is slavery, especially Southern anger at the attempts by Northern antislavery political forces to block the expansion of slavery into the western territories. Southern slave owners held that such a restriction on slavery would violate the principle of states' rights.
Abraham Lincoln won the 1860 presidential election without being on the ballot in ten of the Southern states. His victory triggered declarations of secession by seven slave states of the Deep South, and their formation of the Confederate States of America, even before Lincoln took office. Nationalists (in the North and elsewhere) refused to recognize the secessions, nor did any foreign government, and the U.S. government in Washington refused to abandon its forts that were in territory claimed by the Confederacy. War began in April 1861 when Confederates attacked Fort Sumter, a major U.S. fortress in South Carolina, the state that had been the first to declare its independence.
As a panel of historians emphasized in 2011, "while slavery and its various and multifaceted discontents were the primary cause of disunion, it was disunion itself that sparked the war." States' rights and the tariff issue became entangled in the slavery issue, and were intensified by it. Other important factors were party politics, abolitionism, Southern nationalism, Northern nationalism, expansionism, sectionalism, economics and modernization in the Antebellum period.
The United States had become a nation of two distinct regions. The free states in New England, the Northeast, and the Midwest had a rapidly growing economy based on family farms, industry, mining, commerce and transportation, with a large and rapidly growing urban population. Their growth was fed by a high birth rate and large numbers of European immigrants, especially Irish, British and German. The South was dominated by a settled plantation system based on slavery. There was some rapid growth taking place in the Southwest, (e.g., Texas), based on high birth rates and high migration from the Southeast, but it had a much lower immigration rate from Europe. The South also had fewer large cities, and little manufacturing except in border areas. Slave owners controlled politics and economics, though about 70% of Southern whites owned no slaves and usually were engaged in subsistence agriculture.
Overall, the Northern population was growing much more quickly than the Southern population, which made it increasingly difficult for the South to continue to influence the national government. By the time of the 1860 election, the heavily agricultural southern states as a group had fewer Electoral College votes than the rapidly industrializing northern states. Lincoln was able to win the 1860 Presidential election without even being on the ballot in ten Southern states. Southerners felt a loss of federal concern for Southern pro-slavery political demands, and continued domination of the Federal government by "Slaveocracy" was on the wane. This political calculus provided a very real basis for Southerners' worry about the relative political decline of their region due to the North growing much faster in terms of population and industrial output.
In the interest of maintaining unity, politicians had mostly moderated opposition to slavery, resulting in numerous compromises such as the Missouri Compromise of 1820. After the Mexican-American War, the issue of slavery in the new territories led to the Compromise of 1850. While the compromise averted an immediate political crisis, it did not permanently resolve the issue of the Slave power (the power of slaveholders to control the national government on the slavery issue). Part of the 1850 compromise was the Fugitive Slave Law of 1850, requiring that Northerners assist Southerners in reclaiming fugitive slaves, which many Northerners found to be extremely offensive.
Amid the emergence of increasingly virulent and hostile sectional ideologies in national politics, the collapse of the old Second Party System in the 1850s hampered efforts of the politicians to reach yet one more compromise. The compromise that was reached (the 1854 Kansas-Nebraska Act) outraged too many northerners, and led to the formation of the Republican Party, the first major party with no appeal in the South. The industrializing North and agrarian Midwest became committed to the economic ethos of free-labor industrial capitalism.
Arguments that slavery was undesirable for the nation had long existed, and early in U.S. history were made even by some prominent Southerners. After 1840, abolitionists denounced slavery as not only a social evil but a moral wrong. Many Northerners, especially leaders of the new Republican Party, considered slavery a great national evil and believed that a small number of Southern owners of large plantations controlled the national government with the goal of spreading that evil. Southern defenders of slavery, for their part, increasingly came to contend that blacks actually benefited from slavery, an assertion that alienated Northerners even further.
Early Republic
At the time of the American Revolution, the institution of slavery was firmly established in the American colonies. It was most important in the six southern states from Maryland to Georgia, but the total of a half million slaves were spread out through all of the colonies. In the South 40% of the population was made up of slaves, and as Americans moved into Kentucky and the rest of the southwest fully one-sixth of the settlers were slaves. By the end of the war, the New England states provided most of the American ships that were used in the foreign slave trade while most of their customers were in Georgia and the Carolinas.
During this time many Americans found it difficult to reconcile slavery with their interpretation of Christianity and the lofty sentiments that flowed from the Declaration of Independence. A small antislavery movement, led by the Quakers, had some impact in the 1780s and by the late 1780s all of the states except for Georgia had placed some restrictions on their participation in slave trafficking. Still, no serious national political movement against slavery developed, largely due to the overriding concern over achieving national unity. When the Constitutional Convention met, slavery was the one issue "that left the least possibility of compromise, the one that would most pit morality against pragmatism." In the end, while many would take comfort in the fact that the word slavery never occurs in the Constitution, critics note that the three-fifths clause provided slaveholders with extra representatives in Congress, the requirement of the federal government to suppress domestic violence would dedicate national resources to defending against slave revolts, a twenty-year delay in banning the import of slaves allowed the South to fortify its labor needs, and the amendment process made the national abolition of slavery very unlikely in the foreseeable future.
With the outlawing of the African slave trade on January 1, 1808, many Americans felt that the slavery issue was resolved. Any national discussion that might have continued over slavery was drowned out by the years of trade embargoes, maritime competition with Great Britain and France, and, finally, the War of 1812. The one exception to this quiet regarding slavery was the New Englanders' association of their frustration with the war with their resentment of the three-fifths clause that seemed to allow the South to dominate national politics.
In the aftermath of the American Revolution, the northern states (north of the Mason-Dixon Line separating Pennsylvania and Maryland) abolished slavery by 1804. In the 1787 Northwest Ordinance, Congress (still under the Articles of Confederation) barred slavery from the Mid-Western territory north of the Ohio River, but when the U.S. Congress organized the southern territories acquired through the Louisiana Purchase, the ban on slavery was omitted.
Missouri Compromise
In 1819 Congressman James Tallmadge, Jr. of New York initiated an uproar in the South when he proposed two amendments to a bill admitting Missouri to the Union as a free state. The first barred slaves from being moved to Missouri, and the second would free all Missouri slaves born after admission to the Union at age 25. With the admission of Alabama as a slave state in 1819, the U.S. was equally divided with 11 slave states and 11 free states. The admission of the new state of Missouri as a slave state would give the slave states a majority in the Senate; the Tallmadge Amendment would give the free states a majority.
The Tallmadge amendments passed the House of Representatives but failed in the Senate when five Northern Senators voted with all the Southern senators. The question was now the admission of Missouri as a slave state, and many leaders shared Thomas Jefferson's fear of a crisis over slavery—a fear that Jefferson described as "a fire bell in the night". The crisis was solved by the Compromise of 1820, which admitted Maine to the Union as a free state at the same time that Missouri was admitted as a slave state. The Compromise also banned slavery in the Louisiana Purchase territory north and west of the state of Missouri along the line of 36–30. The Missouri Compromise quieted the issue until its limitations on slavery were repealed by the Kansas Nebraska Act of 1854.
In the South, the Missouri crisis reawakened old fears that a strong federal government could be a fatal threat to slavery. The Jeffersonian coalition that united southern planters and northern farmers, mechanics and artisans in opposition to the threat presented by the Federalist Party had started to dissolve after the War of 1812. It was not until the Missouri crisis that Americans became aware of the political possibilities of a sectional attack on slavery, and it was not until the mass politics of the Jackson Administration that this type of organization around this issue became practical.
Nullification Crisis
The American System, advocated by Henry Clay in Congress and supported by many nationalist supporters of the War of 1812 such as John C. Calhoun, was a program for rapid economic modernization featuring protective tariffs, internal improvements at Federal expense, and a national bank. The purpose was to develop American industry and international commerce. Since iron, coal, and water power were mainly in the North, this tax plan was doomed to cause rancor in the South where economies were agriculture-based. Southerners claimed it demonstrated favoritism toward the North.
The nation suffered an economic downturn throughout the 1820s, and South Carolina was particularly affected. The highly protective Tariff of 1828 (also called the "Tariff of Abominations"), designed to protect American industry by taxing imported manufactured goods, was enacted into law during the last year of the presidency of John Quincy Adams. Opposed in the South and parts of New England, the expectation of the tariff’s opponents was that with the election of Andrew Jackson the tariff would be significantly reduced.
By 1828 South Carolina state politics increasingly organized around the tariff issue. When the Jackson administration failed to take any actions to address their concerns, the most radical faction in the state began to advocate that the state declare the tariff null and void within South Carolina. In Washington, an open split on the issue occurred between Jackson and his vice-president John C. Calhoun, the most effective proponent of the constitutional theory of state nullification through his 1828 "South Carolina Exposition and Protest".
Congress enacted a new tariff in 1832, but it offered the state little relief, resulting in the most dangerous sectional crisis since the Union was formed. Some militant South Carolinians even hinted at withdrawing from the Union in response. The newly elected South Carolina legislature then quickly called for the election of delegates to a state convention. Once assembled, the convention voted to declare null and void the tariffs of 1828 and 1832 within the state. President Andrew Jackson responded firmly, declaring nullification an act of treason. He then took steps to strengthen federal forts in the state.
Violence seemed a real possibility early in 1833 as Jacksonians in Congress introduced a "Force Bill" authorizing the President to use the Federal army and navy in order to enforce acts of Congress. No other state had come forward to support South Carolina, and the state itself was divided on willingness to continue the showdown with the Federal government. The crisis ended when Clay and Calhoun worked to devise a compromise tariff. Both sides later claimed victory. Calhoun and his supporters in South Carolina claimed a victory for nullification, insisting that it had forced the revision of the tariff. Jackson's followers, however, saw the episode as a demonstration that no single state could assert its rights by independent action.
Calhoun, in turn, devoted his efforts to building up a sense of Southern solidarity so that when another standoff should come, the whole section might be prepared to act as a bloc in resisting the federal government. As early as 1830, in the midst of the crisis, Calhoun identified the right to own slaves as the chief southern minority right being threatened:
I consider the tariff act as the occasion, rather than the real cause of the present unhappy state of things. The truth can no longer be disguised, that the peculiar domestick [sic] institution of the Southern States and the consequent direction which that and her soil have given to her industry, has placed them in regard to taxation and appropriations in opposite relation to the majority of the Union, against the danger of which, if there be no protective power in the reserved rights of the states they must in the end be forced to rebel, or, submit to have their paramount interests sacrificed, their domestic institutions subordinated by Colonization and other schemes, and themselves and children reduced to wretchedness.
The issue appeared again after 1842's Black Tariff. A period of relative free trade after 1846's Walker Tariff reduction followed until 1860, when the protectionist Morrill Tariff was introduced by the Republicans, fueling Southern anti-tariff sentiments once again.
Gag Rule debates
From 1831 to 1836 William Lloyd Garrison and the American Anti-Slavery Society (AA-SS) initiated a campaign to petition Congress in favor of ending slavery in the District of Columbia and all federal territories. Hundreds of thousands of petitions were sent with the number reaching a peak in 1835.
The House passed the Pinckney Resolutions on May 26, 1836. The first of these resolutions stated that Congress had no constitutional authority to interfere with slavery in the states and the second that it "ought not" do so in the District of Columbia. The third resolution, known from the beginning as the "gag rule", provided that:
All petitions, memorials, resolutions, propositions, or papers, relating in any way, or to any extent whatsoever, to the subject of slavery or the abolition of slavery, shall, without being either printed or referred, be laid on the table and that no further action whatever shall be had thereon.
The first two resolutions passed by votes of 182 to 9 and 132 to 45. The gag rule, supported by Northern and Southern Democrats as well as some Southern Whigs, was passed with a vote of 117 to 68.
Former President John Quincy Adams, who was elected to the House of Representatives in 1830, became an early and central figure in the opposition to the gag rules. He argued that they were a direct violation of the First Amendment right "to petition the Government for a redress of grievances". A majority of Northern Whigs joined the opposition. Rather than suppress anti-slavery petitions, however, the gag rules only served to offend Americans from Northern states, and dramatically increase the number of petitions.
Since the original gag was a resolution, not a standing House Rule, it had to be renewed every session and the Adams' faction often gained the floor before the gag could be imposed. However in January 1840, the House of Representatives passed the Twenty-first Rule, which prohibited even the reception of anti-slavery petitions and was a standing House rule. Now the pro-petition forces focused on trying to revoke a standing rule. The Rule raised serious doubts about its constitutionality and had less support than the original Pinckney gag, passing only by 114 to 108. Throughout the gag period, Adams' "superior talent in using and abusing parliamentary rules" and skill in baiting his enemies into making mistakes, enabled him to evade the rule and debate the slavery issues. The gag rule was finally rescinded on December 3, 1844, by a strongly sectional vote of 108 to 80, all the Northern and four Southern Whigs voting for repeal, along with 55 of the 71 Northern Democrats.
Antebellum South and the Union
There had been a continuing contest between the states and the national government over the power of the latter—and over the loyalty of the citizenry—almost since the founding of the republic. The Kentucky and Virginia Resolutions of 1798, for example, had defied the Alien and Sedition Acts, and at the Hartford Convention, New England voiced its opposition to President James Madison and the War of 1812, and discussed secession from the Union.
Southern culture
Although only a minority of free Southerners owned slaves (and, within that group, a similarly small minority owned the vast majority of slaves), Southerners of all classes nevertheless defended the institution of slavery, threatened by the rise of free labor abolitionist movements in the Northern states, as the cornerstone of their social order.
Based on a system of plantation slavery, the social structure of the South was far more stratified and patriarchal than that of the North. In 1850 there were around 350,000 slaveholders in a total free Southern population of about six million. Among slaveholders, the concentration of slave ownership was unevenly distributed. Perhaps around 7 percent of slaveholders owned roughly three-quarters of the slave population. The largest slaveholders, generally owners of large plantations, represented the top stratum of Southern society. They benefited from economies of scale and needed large numbers of slaves on big plantations to produce profitable labor-intensive crops like cotton. This plantation-owning elite, known as "slave magnates", was comparable to the millionaires of the following century.
In the 1850s as large plantation owners outcompeted smaller farmers, more slaves were owned by fewer planters. Yet, while the proportion of the white population consisting of slaveholders was on the decline on the eve of the Civil War—perhaps falling below around a quarter of free southerners in 1860—poor whites and small farmers generally accepted the political leadership of the planter elite.
Several factors helped explain why slavery was not under serious threat of internal collapse from any moves for democratic change initiated from the South. First, given the opening of new territories in the West for white settlement, many non-slaveowners also perceived a possibility that they, too, might own slaves at some point in their life.
Second, small free farmers in the South often embraced hysterical racism, making them unlikely agents for internal democratic reforms in the South. The principle of white supremacy, accepted by almost all white southerners of all classes, made slavery seem legitimate, natural, and essential for a civilized society. White racism in the South was sustained by official systems of repression such as the "slave codes" and elaborate codes of speech, behavior, and social practices illustrating the subordination of blacks to whites. For example, the "slave patrols" were among the institutions bringing together southern whites of all classes in support of the prevailing economic and racial order. Serving as slave "patrollers" and "overseers" offered white southerners positions of power and honor. These positions gave even poor white southerners the authority to stop, search, whip, maim, and even kill any slave traveling outside his or her plantation. Slave "patrollers" and "overseers" also won prestige in their communities. Policing and punishing blacks who transgressed the regimentation of slave society was a valued community service in the South, where the fear of free blacks threatening law and order figured heavily in the public discourse of the period.
Third, many small farmers with a few slaves and yeomen were linked to elite planters through the market economy. In many areas, small farmers depended on local planter elites for vital goods and services including (but not limited to) access to cotton gins, access to markets, access to feed and livestock, and even for loans (since the banking system was not well developed in the antebellum South). Southern tradesmen often depended on the richest planters for steady work. Such dependency effectively deterred many white non-slaveholders from engaging in any political activity that was not in the interest of the large slaveholders. Furthermore, whites of varying social class, including poor whites and "plain folk" who worked outside or in the periphery of the market economy (and therefore lacked any real economic interest in the defense of slavery) might nonetheless be linked to elite planters through extensive kinship networks. Since inheritance in the South was often unequitable (and generally favored eldest sons), it was not uncommon for a poor white person to be perhaps the first cousin of the richest plantation owner of his county and to share the same militant support of slavery as his richer relatives. Finally, there was no secret ballot at the time anywhere in the United States – this innovation did not become widespread in the U.S. until the 1880s. For a typical white Southerner, this meant that so much as casting a ballot against the wishes of the establishment meant running the risk of social ostracization.
Thus, by the 1850s, Southern slaveholders and non-slaveholders alike felt increasingly encircled psychologically and politically in the national political arena because of the rise of free soilism and abolitionism in the Northern states. Increasingly dependent on the North for manufactured goods, for commercial services, and for loans, and increasingly cut off from the flourishing agricultural regions of the Northwest, they faced the prospects of a growing free labor and abolitionist movement in the North.
Militant defense of slavery
With the outcry over developments in Kansas strong in the North, defenders of slavery— increasingly committed to a way of life that abolitionists and their sympathizers considered obsolete or immoral— articulated a militant pro-slavery ideology that would lay the groundwork for secession upon the election of a Republican president. Southerners waged a vitriolic response to political change in the North. Slaveholding interests sought to uphold their constitutional rights in the territories and to maintain sufficient political strength to repulse "hostile" and "ruinous" legislation. Behind this shift was the growth of the cotton industry, which left slavery more important than ever to the Southern economy.
Reactions to the popularity of Uncle Tom's Cabin (1852) by Harriet Beecher Stowe (whom Abraham Lincoln reputedly called "the little woman that started this great war") and the growth of the abolitionist movement (pronounced after the founding of The Liberator in 1831 by William Lloyd Garrison) inspired an elaborate intellectual defense of slavery. Increasingly vocal (and sometimes violent) abolitionist movements, culminating in John Brown's raid on Harpers Ferry in 1859 were viewed as a serious threat, and—in the minds of many Southerners—abolitionists were attempting to foment violent slave revolts as seen in Haiti in the 1790s and as attempted by Nat Turner some three decades prior (1831).
After J. D. B. DeBow established De Bow's Review in 1846, it grew to become the leading Southern magazine, warning the planter class about the dangers of depending on the North economically. De Bow's Review also emerged as the leading voice for secession. The magazine emphasized the South's economic inequality, relating it to the concentration of manufacturing, shipping, banking and international trade in the North. Searching for Biblical passages endorsing slavery and forming economic, sociological, historical and scientific arguments, slavery went from being a "necessary evil" to a "positive good". Dr. J.H. Van Evrie's book Negroes and Negro slavery: The First an Inferior Race: The Latter Its Normal Condition– setting out the arguments the title would suggest– was an attempt to apply scientific support to the Southern arguments in favor of race based slavery.
Latent sectional divisions suddenly activated derogatory sectional imagery which emerged into sectional ideologies. As industrial capitalism gained momentum in the North, Southern writers emphasized whatever aristocratic traits they valued (but often did not practice) in their own society: courtesy, grace, chivalry, the slow pace of life, orderly life and leisure. This supported their argument that slavery provided a more humane society than industrial labor.
In his Cannibals All!, George Fitzhugh argued that the antagonism between labor and capital in a free society would result in "robber barons" and "pauper slavery", while in a slave society such antagonisms were avoided. He advocated enslaving Northern factory workers, for their own benefit. Abraham Lincoln, on the other hand, denounced such Southern insinuations that Northern wage earners were fatally fixed in that condition for life. To Free Soilers, the stereotype of the South was one of a diametrically opposite, static society in which the slave system maintained an entrenched anti-democratic aristocracy.
Southern fears of modernization
According to the historian James M. McPherson, exceptionalism applied not to the South but to the North after the North phased out slavery and launched an industrial revolution that led to urbanization, which in turn led to increased education, which in its own turn gave ever-increasing strength to various reform movements but especially abolitionism. The fact that seven immigrants out of eight settled in the North (and the fact that most immigrants viewed slavery with disfavor), compounded by the fact that twice as many whites left the South for the North as vice versa, contributed to the South's defensive-aggressive political behavior. The Charleston Mercury read that on the issue of slavery the North and South "are not only two Peoples, but they are rival, hostile Peoples." As De Bow's Review said, "We are resisting revolution.... We are not engaged in a Quixotic fight for the rights of man.... We are conservative."
Southern fears of modernity
Allan Nevins argued that the Civil War was an "irrepressible" conflict, adopting a phrase first used by U.S. Senator and Abraham Lincoln's Secretary of State William H. Seward. Nevins synthesized contending accounts emphasizing moral, cultural, social, ideological, political, and economic issues. In doing so, he brought the historical discussion back to an emphasis on social and cultural factors. Nevins pointed out that the North and the South were rapidly becoming two different peoples, a point made also by historian Avery Craven. At the root of these cultural differences was the problem of slavery, but fundamental assumptions, tastes, and cultural aims of the regions were diverging in other ways as well. More specifically, the North was rapidly modernizing in a manner threatening to the South. Historian McPherson explains:
When secessionists protested in 1861 that they were acting to preserve traditional rights and values, they were correct. They fought to preserve their constitutional liberties against the perceived Northern threat to overthrow them. The South's concept of republicanism had not changed in three-quarters of a century; the North's had.... The ascension to power of the Republican Party, with its ideology of competitive, egalitarian free-labor capitalism, was a signal to the South that the Northern majority had turned irrevocably towards this frightening, revolutionary future.
Harry L. Watson has synthesized research on antebellum southern social, economic, and political history. Self-sufficient yeomen, in Watson's view, "collaborated in their own transformation" by allowing promoters of a market economy to gain political influence. Resultant "doubts and frustrations" provided fertile soil for the argument that southern rights and liberties were menaced by Black Republicanism.
J. Mills Thornton III explained the viewpoint of the average white Alabamian. Thornton contends that Alabama was engulfed in a severe crisis long before 1860. Deeply held principles of freedom, equality, and autonomy, as expressed in republican values, appeared threatened, especially during the 1850s, by the relentless expansion of market relations and commercial agriculture. Alabamians were thus, he judged, prepared to believe the worst once Lincoln was elected.
Sectional tensions and the emergence of mass politics
The politicians of the 1850s were acting in a society in which the traditional restraints that had suppressed sectional conflict in the 1820s and 1830s, the most important of which was the stability of the two-party system, were being eroded as this rapid extension of mass democracy went forward in the North and South. It was an era when the mass political party galvanized voter participation to an unprecedented degree, and a time in which politics formed an essential component of American mass culture. Historians agree that political involvement was a larger concern to the average American in the 1850s than today. Politics was, in one of its functions, a form of mass entertainment, a spectacle with rallies, parades, and colorful personalities. Leading politicians, moreover, often served as a focus for popular interests, aspirations, and values.
Historian Allan Nevins, for instance, writes of political rallies in 1856 with turnouts of anywhere from twenty to fifty thousand men and women. Voter turnouts even ran as high as 84% by 1860. An abundance of new parties emerged 1854–56, including the Republicans, People's party men, Anti-Nebraskans, Fusionists, Know-Nothings, Know-Somethings (anti-slavery nativists), Maine Lawites, Temperance men, Rum Democrats, Silver Gray Whigs, Hindus, Hard Shell Democrats, Soft Shells, Half Shells and Adopted Citizens. By 1858, they were mostly gone, and politics divided four ways. Republicans controlled most Northern states with a strong Democratic minority. The Democrats were split North and South and fielded two tickets in 1860. Southern non-Democrats tried different coalitions; most supported the Constitutional Union party in 1860.
Many Southern states held constitutional conventions in 1851 to consider the questions of nullification and secession. With the exception of South Carolina, whose convention election did not even offer the option of "no secession" but rather "no secession without the collaboration of other states", the Southern conventions were dominated by Unionists who voted down articles of secession.
Historians today generally agree that economic conflicts were not a major cause of the war. While an economic basis to the sectional crisis was popular among the “Progressive school” of historians from the 1910s to the 1940s, few professional historians now subscribe to this explanation. According to economic historian Lee A. Craig, "In fact, numerous studies by economic historians over the past several decades reveal that economic conflict was not an inherent condition of North-South relations during the antebellum era and did not cause the Civil War."
When numerous groups tried at the last minute in 1860–61 to find a compromise to avert war, they did not turn to economic policies. The three major attempts at compromise, the Crittenden Compromise, the Corwin Amendment and the Washington Peace Conference, addressed only the slavery-related issues of fugitive slave laws, personal liberty laws, slavery in the territories and interference with slavery within the existing slave states.
Economic value of slavery to the South
Historian James L. Huston emphasizes the role of slavery as an economic institution. In October 1860 William Lowndes Yancey, a leading advocate of secession, placed the value of Southern-held slaves at $2.8 billion. Huston writes:
Understanding the relations between wealth, slavery, and property rights in the South provides a powerful means of understanding southern political behavior leading to disunion. First, the size dimensions of slavery are important to comprehend, for slavery was a colossal institution. Second, the property rights argument was the ultimate defense of slavery, and white southerners and the proslavery radicals knew it. Third, the weak point in the protection of slavery by property rights was the federal government.... Fourth, the intense need to preserve the sanctity of property rights in Africans led southern political leaders to demand the nationalization of slavery– the condition under which slaveholders would always be protected in their property holdings.
The cotton gin greatly increased the efficiency with which cotton could be harvested, contributing to the consolidation of "King Cotton" as the backbone of the economy of the Deep South, and to the entrenchment of the system of slave labor on which the cotton plantation economy depended.
The tendency of monoculture cotton plantings to lead to soil exhaustion created a need for cotton planters to move their operations to new lands, and therefore to the westward expansion of slavery from the Eastern seaboard into new areas (e.g., Alabama, Mississippi, and beyond to East Texas).
Regional economic differences
The South, Midwest, and Northeast had quite different economic structures. They traded with each other and each became more prosperous by staying in the Union, a point many businessmen made in 1860–61. However Charles A. Beard in the 1920s made a highly influential argument to the effect that these differences caused the war (rather than slavery or constitutional debates). He saw the industrial Northeast forming a coalition with the agrarian Midwest against the Plantation South. Critics challenged his image of a unified Northeast and said that the region was in fact highly diverse with many different competing economic interests. In 1860–61, most business interests in the Northeast opposed war.
After 1950, only a few mainstream historians accepted the Beard interpretation, though it was accepted by libertarian economists. As historian Kenneth Stampp, who abandoned Beardianism after 1950, sums up the scholarly consensus: "Most historians...now see no compelling reason why the divergent economies of the North and South should have led to disunion and civil war; rather, they find stronger practical reasons why the sections, whose economies neatly complemented one another, should have found it advantageous to remain united."
Free labor vs. pro-slavery arguments
Historian Eric Foner argued that a free-labor ideology dominated thinking in the North, which emphasized economic opportunity. By contrast, Southerners described free labor as "greasy mechanics, filthy operators, small-fisted farmers, and moonstruck theorists". They strongly opposed the homestead laws that were proposed to give free farms in the west, fearing the small farmers would oppose plantation slavery. Indeed, opposition to homestead laws was far more common in secessionist rhetoric than opposition to tariffs. Southerners such as Calhoun argued that slavery was "a positive good", and that slaves were more civilized and morally and intellectually improved because of slavery.
Religious conflict over the slavery question
Led by Mark Noll, a body of scholarship has highlighted the fact that the American debate over slavery became a shooting war in part because the two sides reached diametrically opposite conclusions based on reading the same authoritative source of guidance on moral questions: the King James Version of the Bible.
After the American Revolution and the disestablishment of government-sponsored churches, the U.S. experienced the Second Great Awakening, a massive Protestant revival. Without centralized church authorities, American Protestantism was heavily reliant on the Bible, which was read in the standard 19th-century Reformed hermeneutic of "common sense", literal interpretation as if the Bible were speaking directly about the modern American situation instead of events that occurred in a much different context, millennia ago. By the mid-19th century this form of religion and Bible interpretation had become a dominant strand in American religious, moral and political discourse, almost serving as a de facto state religion.
The problem that this caused for resolving the slavery question was that the Bible, interpreted under these assumptions, seemed to clearly suggest that slavery was Biblically justified:
- The pro-slavery South could point to slaveholding by the godly patriarch Abraham (Gen 12:5; 14:14; 24:35–36; 26:13–14), a practice that was later incorporated into Israelite national law (Lev 25:44–46). It was never denounced by Jesus, who made slavery a model of discipleship (Mk 10:44). The Apostle Paul supported slavery, counseling obedience to earthly masters (Eph 6:5–9; Col 3:22–25) as a duty in agreement with "the sound words of our Lord Jesus Christ and the teaching which accords with godliness" (1 Tim 6:3). Because slaves were to remain in their present state unless they could win their freedom (1 Cor 7:20–24), he sent the fugitive slave Onesimus back to his owner Philemon (Phlm 10–20). The abolitionist north had a difficult time matching the pro-slavery south passage for passage. [...] Professor Eugene Genovese, who has studied these biblical debates over slavery in minute detail, concludes that the pro-slavery faction clearly emerged victorious over the abolitionists except for one specious argument based on the so-called Curse of Ham (Gen 9:18–27). For our purposes, it is important to realize that the South won this crucial contest with the North by using the prevailing hermeneutic, or method of interpretation, on which both sides agreed. So decisive was its triumph that the South mounted a vigorous counterattack on the abolitionists as infidels who had abandoned the plain words of Scripture for the secular ideology of the Enlightenment.
Protestant churches in the U.S., unable to agree on what God's Word said about slavery, ended up with schisms between Northern and Southern branches: the Methodists in 1844, the Baptists in 1845, and the Presbyterians in 1857. These splits presaged the subsequent split in the nation: "The churches played a major role in the dividing of the nation, and it is probably true that it was the splits in the churches which made a final split of the nation inevitable." The conflict over how to interpret the Bible was central:
- The theological crisis occasioned by reasoning like [conservative Presbyterian theologian James H.] Thornwell's was acute. Many Northern Bible-readers and not a few in the South felt that slavery was evil. They somehow knew the Bible supported them in that feeling. Yet when it came to using the Bible as it had been used with such success to evangelize and civilize the United States, the sacred page was snatched out of their hands. Trust in the Bible and reliance upon a Reformed, literal hermeneutic had created a crisis that only bullets, not arguments, could resolve.
- The question of the Bible and slavery in the era of the Civil War was never a simple question. The issue involved the American expression of a Reformed literal hermeneutic, the failure of hermeneutical alternatives to gain cultural authority, and the exercise of deeply entrenched intuitive racism, as well as the presence of Scripture as an authoritative religious book and slavery as an inherited social-economic relationship. The North– forced to fight on unfriendly terrain that it had helped to create– lost the exegetical war. The South certainly lost the shooting war. But constructive orthodox theology was the major loser when American believers allowed bullets instead of hermeneutical self-consciousness to determine what the Bible said about slavery. For the history of theology in America, the great tragedy of the Civil War is that the most persuasive theologians were the Rev. Drs. William Tecumseh Sherman and Ulysses S. Grant.
There were many causes of the Civil War, but the religious conflict, almost unimaginable in modern America, cut very deep at the time. Noll and others highlight the significance of the religion issue for the famous phrase in Lincoln's second inaugural: "Both read the same Bible and pray to the same God, and each invokes His aid against the other."
The Territorial Crisis and the United States Constitution
Between 1803 and 1854, the United States achieved a vast expansion of territory through purchase, negotiation and conquest. Of the states carved out of these territories by 1845, all had entered the union as slave states: Louisiana, Missouri, Arkansas, Florida and Texas, as well as the southern portions of Alabama and Mississippi. And with the conquest of northern Mexico, including California, in 1848, slaveholding interests looked forward to the institution flourishing in these lands as well. Southerners also anticipated garnering slaves and slave states in Cuba and Central America. Northern free soil interests vigorously sought to curtail any further expansion of slave soil. It was over these territorial disputes that the proslavery and antislavery forces collided.
The existence of slavery in the southern states was far less politically polarizing than the explosive question of the territorial expansion of the institution in the west. Moreover, Americans were informed by two well-established readings of the Constitution regarding human bondage: that the slave states had complete autonomy over the institution within their boundaries, and that the domestic slave trade – trade among the states – was immune to federal interference. The only feasible strategy available to attack slavery was to restrict its expansion into the new territories. Slaveholding interests fully grasped the danger that this strategy posed to them. Both the South and the North believed: “The power to decide the question of slavery for the territories was the power to determine the future of slavery itself.”
By 1860, four doctrines had emerged to answer the question of federal control in the territories, and they all claimed to be sanctioned by the Constitution, implicitly or explicitly. Two of the “conservative” doctrines emphasized the written text and historical precedents of the founding document, while the other two doctrines developed arguments that transcended the Constitution.
One of the “conservative” theories, represented by the Constitutional Union Party, argued that the historical designation of free and slave apportionments in territories should become a Constitutional mandate. The Crittenden Compromise of 1860 was an expression of this view.
The second doctrine of Congressional preeminence, championed by Abraham Lincoln and the Republican Party, insisted that the Constitution did not bind legislators to a policy of balance – that slavery could be excluded altogether in a territory at the discretion of Congress – with one caveat: the due process clause of the Fifth Amendment must apply. In other words, Congress could restrict human bondage, but never establish it. The Wilmot Proviso announced this position in 1846.
Of the two doctrines that rejected federal authority, one was articulated by northern Democrat of Illinois Senator Stephen A. Douglas, and the other by southern Democrats Senator Jefferson Davis of Mississippi and Senator John C. Breckinridge of Kentucky.
Douglas devised the doctrine of territorial or “popular” sovereignty, which declared that the settlers in a territory had the same rights as states in the Union to establish or disestablish slavery – a purely local matter. Congress, having created the territory, was barred, according to Douglas, from exercising any authority in domestic matters. To do so would violate historic traditions of self-government, implicit in the US Constitution. The Kansas-Nebraska Act of 1854 legislated this doctrine.
The fourth in this quartet is the theory of state sovereignty (“states’ rights”), also known as the “Calhoun doctrine” after the South Carolinian political theorist and statesman John C. Calhoun. Rejecting the arguments for federal authority or self-government, state sovereignty would empower states to promote the expansion of slavery as part of the Federal Union under the US Constitution – and not merely as an argument for secession. The basic premise was that all authority regarding matters of slavery in the territories resided in each state. The role of the federal government was merely to enable the implementation of state laws when residents of the states entered the territories. Calhoun asserted that the federal government in the territories was only the agent of the several sovereign states, and hence incapable of forbidding the bringing into any territory of anything that was legal property in any state. State sovereignty, in other words, gave the laws of the slaveholding states extra-jurisdictional effect.
“States’ rights” was an ideology formulated and applied as a means of advancing slave state interests through federal authority. As historian Thomas L. Krannawitter points out, “[T]he Southern demand for federal slave protection represented a demand for an unprecedented expansion of federal power.”
By 1860, these four doctrines comprised the major ideologies presented to the American public on the matters of slavery, the territories and the US Constitution.
Antislavery movements in the North gained momentum in the 1830s and 1840s, a period of rapid transformation of Northern society that inspired a social and political reformism. Many of the reformers of the period, including abolitionists, attempted in one way or another to transform the lifestyle and work habits of labor, helping workers respond to the new demands of an industrializing, capitalistic society.
Antislavery, like many other reform movements of the period, was influenced by the legacy of the Second Great Awakening, a period of religious revival in the new country stressing the reform of individuals which was still relatively fresh in the American memory. Thus, while the reform spirit of the period was expressed by a variety of movements with often-conflicting political goals, most reform movements shared a common feature in their emphasis on the Great Awakening principle of transforming the human personality through discipline, order, and restraint.
"Abolitionist" had several meanings at the time. The followers of William Lloyd Garrison, including Wendell Phillips and Frederick Douglass, demanded the "immediate abolition of slavery", hence the name. A more pragmatic group of abolitionists, like Theodore Weld and Arthur Tappan, wanted immediate action, but that action might well be a program of gradual emancipation, with a long intermediate stage. "Antislavery men", like John Quincy Adams, did what they could to limit slavery and end it where possible, but were not part of any abolitionist group. For example, in 1841 Adams represented the Amistad African slaves in the Supreme Court of the United States and argued that they should be set free. In the last years before the war, "antislavery" could mean the Northern majority, like Abraham Lincoln, who opposed expansion of slavery or its influence, as by the Kansas-Nebraska Act, or the Fugitive Slave Act. Many Southerners called all these abolitionists, without distinguishing them from the Garrisonians. James M. McPherson explains the abolitionists' deep beliefs: "All people were equal in God's sight; the souls of black folks were as valuable as those of whites; for one of God's children to enslave another was a violation of the Higher Law, even if it was sanctioned by the Constitution."
Stressing the Yankee Protestant ideals of self-improvement, industry, and thrift, most abolitionists– most notably William Lloyd Garrison– condemned slavery as a lack of control over one's own destiny and the fruits of one's labor.
The experience of the fifty years… shows us the slaves trebling in numbers—slaveholders monopolizing the offices and dictating the policy of the Government—prostituting the strength and influence of the Nation to the support of slavery here and elsewhere—trampling on the rights of the free States, and making the courts of the country their tools. To continue this disastrous alliance longer is madness.… Why prolong the experiment?
Abolitionists also attacked slavery as a threat to the freedom of white Americans. Defining freedom as more than a simple lack of restraint, antebellum reformers held that the truly free man was one who imposed restraints upon himself. Thus, for the anti-slavery reformers of the 1830s and 1840s, the promise of free labor and upward social mobility (opportunities for advancement, rights to own property, and to control one's own labor), was central to the ideal of reforming individuals.
Controversy over the so-called Ostend Manifesto (which proposed the U.S. annexation of Cuba as a slave state) and the Fugitive Slave Act kept sectional tensions alive before the issue of slavery in the West could occupy the country's politics in the mid-to-late 1850s.
Antislavery sentiment among some groups in the North intensified after the Compromise of 1850, when Southerners began appearing in Northern states to pursue fugitives or often to claim as slaves free African Americans who had resided there for years. Meanwhile, some abolitionists openly sought to prevent enforcement of the law. Violation of the Fugitive Slave Act was often open and organized. In Boston– a city from which it was boasted that no fugitive had ever been returned– Theodore Parker and other members of the city's elite helped form mobs to prevent enforcement of the law as early as April 1851. A pattern of public resistance emerged in city after city, notably in Syracuse in 1851 (culminating in the Jerry Rescue incident late that year), and Boston again in 1854. But the issue did not lead to a crisis until revived by the same issue underlying the Missouri Compromise of 1820: slavery in the territories.
Arguments for and against slavery
William Lloyd Garrison, a prominent abolitionist, was motivated by a belief in the growth of democracy. Because the Constitution had a three-fifths clause, a fugitive slave clause and a 20-year extension of the Atlantic slave trade, Garrison once publicly burned a copy of the U.S. Constitution and called it "a covenant with death and an agreement with hell". In 1854, he said:
- I am a believer in that portion of the Declaration of American Independence in which it is set forth, as among self-evident truths, "that all men are created equal; that they are endowed by their Creator with certain inalienable rights; that among these are life, liberty, and the pursuit of happiness." Hence, I am an abolitionist. Hence, I cannot but regard oppression in every form—and most of all, that which turns a man into a thing—with indignation and abhorrence.
In sharp contrast, Confederate Vice President Alexander Stephens argued in his 1861 "Cornerstone Speech":
- (Thomas Jefferson's) ideas, however, were fundamentally wrong. They rested upon the assumption of the equality of races. This was an error.... Our new government is founded upon exactly the opposite idea; its foundations are laid, its corner-stone rests, upon the great truth that the negro is not equal to the white man; that slavery—subordination to the superior race—is his natural and normal condition.
"Free soil" movement
The assumptions, tastes, and cultural aims of the reformers of the 1830s and 1840s anticipated the political and ideological ferment of the 1850s. A surge of working class Irish and German Catholic immigration provoked reactions among many Northern Whigs, as well as Democrats. Growing fears of labor competition for white workers and farmers because of the growing number of free blacks prompted several northern states to adopt discriminatory "Black Codes".
In the Northwest, although farm tenancy was increasing, the number of free farmers was still double that of farm laborers and tenants. Moreover, although the expansion of the factory system was undermining the economic independence of the small craftsman and artisan, industry in the region, which remained largely one of small towns, was concentrated in small-scale enterprises. Arguably, social mobility was on the verge of contracting in the urban centers of the North, but long-cherished ideas of opportunity, "honest industry" and "toil" were at least close enough in time to lend plausibility to the free labor ideology.
In the rural and small-town North, the picture of Northern society (framed by the ethos of "free labor") corresponded to a large degree with reality. Propelled by advancements in transportation and communication, especially steam navigation, railroads, and telegraphs, the two decades before the Civil War saw rapid expansion of the Northwest's population and economy. Combined with the rise of Northeastern and export markets for their products, the social standing of farmers in the region substantially improved. The small towns and villages that emerged as the Republican Party's heartland showed every sign of vigorous expansion. Their vision for an ideal society was of small-scale capitalism, with white American laborers entitled to the chance of upward mobility (opportunities for advancement, rights to own property, and to control their own labor). Many free-soilers demanded that the slave labor system and free black settlers (and, in places such as California, Chinese immigrants) be excluded from the Great Plains to guarantee the predominance there of the free white laborer.
Opposition to the 1847 Wilmot Proviso helped to consolidate the "free-soil" forces. The next year, Radical New York Democrats known as Barnburners, members of the Liberty Party, and anti-slavery Whigs held a convention at Buffalo, New York, in August, forming the Free-Soil Party. The party supported former President Martin Van Buren and Charles Francis Adams, Sr., for President and Vice President, respectively. The party opposed the expansion of slavery into territories where it had not yet existed, such as Oregon and the ceded Mexican territory.
Relating Northern and Southern positions on slavery to basic differences in labor systems, but insisting on the role of culture and ideology in coloring these differences, Eric Foner's book Free Soil, Free Labor, Free Men (1970) went beyond the economic determinism of Charles A. Beard (a leading historian of the 1930s). Foner emphasized the importance of free labor ideology to Northern opponents of slavery, pointing out that the moral concerns of the abolitionists were not necessarily the dominant sentiments in the North. Many Northerners (including Lincoln) opposed slavery also because they feared that black labor might spread to the North and threaten the position of free white laborers. In this sense, Republicans and the abolitionists were able to appeal to powerful emotions in the North through a broader commitment to "free labor" principles. The "Slave Power" idea had a far greater appeal to Northern self-interest than arguments based on the plight of black slaves in the South. If the free labor ideology of the 1830s and 1840s depended on the transformation of Northern society, its entry into politics depended on the rise of mass democracy, in turn propelled by far-reaching social change. Its chance would come by the mid-1850s with the collapse of the traditional two-party system, which had long suppressed sectional conflict.
Slavery question in territories acquired from Mexico
Soon after the Mexican War started, and long before negotiation of the new US-Mexico border, the question of slavery in the territories to be acquired polarized the Northern and Southern United States in the most bitter sectional conflict up to that time. The resulting deadlock lasted four years, during which the Second Party System broke up, Mormon pioneers settled Utah, the California Gold Rush settled California, and New Mexico, under a federal military government, turned back Texas's attempt to assert control over territory it claimed as far west as the Rio Grande. Eventually the Compromise of 1850 preserved the Union, but only for another decade. Proposals included:
- The Wilmot Proviso banning slavery in any new territory to be acquired from Mexico, not including Texas which had been annexed the previous year. Passed by the United States House of Representatives in August 1846 and February 1847 but not the Senate. Later an effort to attach the proviso to the Treaty of Guadalupe Hidalgo also failed.
- Failed amendments to the Wilmot Proviso by William W. Wick and then Stephen Douglas extending the Missouri Compromise line (36°30' parallel north) west to the Pacific, allowing slavery in most of present day New Mexico and Arizona, Las Vegas, Nevada, and Southern California, as well as any other territories that might be acquired from Mexico. The line was again proposed by the Nashville Convention of June 1850.
- Popular sovereignty, developed by Lewis Cass and Douglas as the eventual Democratic Party position, letting each territory decide whether to allow slavery.
- William L. Yancey's "Alabama Platform", endorsed by the Alabama and Georgia legislatures and by Democratic state conventions in Florida and Virginia, called for no restrictions on slavery in the territories either by the federal government or by territorial governments before statehood, opposition to any candidates supporting either the Wilmot Proviso or popular sovereignty, and federal legislation overruling Mexican anti-slavery laws.
- General Zachary Taylor, who became the Whig candidate in 1848 and then President from March 1849 to July 1850, proposed after becoming President that the entire area become two free states, called California and New Mexico but much larger than the eventual ones. None of the area would be left as an unorganized or organized territory, avoiding the question of slavery in the territories.
- The Mormons' proposal for a State of Deseret incorporating most of the area of the Mexican Cession but excluding the largest non-Mormon populations in Northern California and central New Mexico was considered unlikely to succeed in Congress, but nevertheless in 1849 President Zachary Taylor sent his agent John Wilson westward with a proposal to combine California and Deseret as a single state, reducing the number of new free states and slowing the erosion of Southern parity in the Senate.
- The Compromise of 1850, proposed by Henry Clay in January 1850, guided to passage by Douglas over Northern Whig and Southern Democrat opposition, and enacted September 1850, admitted California as a free state including Southern California and organized Utah Territory and New Mexico Territory with slavery to be decided by popular sovereignty. Texas dropped its claim to the disputed northwestern areas in return for debt relief, and the areas were divided between the two new territories and unorganized territory. El Paso where Texas had successfully established county government was left in Texas. No southern territory dominated by Southerners (like the later short-lived Confederate Territory of Arizona) was created. Also, the slave trade was abolished in Washington, D.C. (but not slavery itself), and the Fugitive Slave Act was strengthened.
States' rights
States' rights was an issue in the 19th century for those who felt that the authority of the individual states superseded that of the federal government, and that the federal government was violating the role intended for it by the Founding Fathers of the United States. Kenneth M. Stampp notes that each section used states' rights arguments when convenient, and shifted positions when convenient. For example, the Fugitive Slave Act of 1850 was justified by its supporters as a state's right to have its property laws respected by other states, and was resisted by northern legislatures in the form of state personal liberty laws that placed state laws above the federal mandate.
States’ rights and slavery
Arthur M. Schlesinger, Jr. noted that states' rights “never had any real vitality independent of underlying conditions of vast social, economic, or political significance.” He further elaborated:
From the close of the nullification episode of 1832–1833 to the outbreak of the Civil War, the agitation of state rights was intimately connected with the new issue of growing importance, the slavery question, and the principal form assumed by the doctrine was the right of secession. The pro-slavery forces sought refuge in the state rights position as a shield against federal interference with pro-slavery projects.... As a natural consequence, anti-slavery legislatures in the North were led to lay great stress on the national character of the Union and the broad powers of the general government in dealing with slavery. Nevertheless, it is significant to note that when it served anti-slavery purposes better to lapse into state rights dialectic, northern legislatures did not hesitate to be inconsistent.
Echoing Schlesinger, Forrest McDonald wrote that “the dynamics of the tension between federal and state authority changed abruptly during the late 1840s” as a result of the acquisition of territory in the Mexican War. McDonald states:
And then, as a by-product or offshoot of a war of conquest, slavery– a subject that leading politicians had, with the exception of the gag rule controversy and Calhoun’s occasional outbursts, scrupulously kept out of partisan debate– erupted as the dominant issue in that arena. So disruptive was the issue that it subjected the federal Union to the greatest strain the young republic had yet known.
States' rights and minority rights
States' rights theories gained strength from the awareness that the Northern population was growing much faster than the population of the South, so it was only a matter of time before the North controlled the federal government. Acting as a "conscious minority", Southerners hoped that a strict constructionist interpretation of the Constitution would limit federal power over the states, and that a defense of states' rights against federal encroachments or even nullification or secession would save the South. Before 1860, most presidents were either Southern or pro-South. The North's growing population would mean the election of pro-North presidents, and the addition of free-soil states would end Southern parity with the North in the Senate. As the historian Allan Nevins described Calhoun's theory of states' rights, "Governments, observed Calhoun, were formed to protect minorities, for majorities could take care of themselves".
Until the 1860 election, the South’s interests nationally were entrusted to the Democratic Party. In 1860, the Democratic Party split into Northern and Southern factions as the result of a "bitter debate in the Senate between Jefferson Davis and Stephen Douglas". The debate was over resolutions proposed by Davis “opposing popular sovereignty and supporting a federal slave code and states’ rights” which carried over to the national convention in Charleston.
Davis defined equality in terms of the equal rights of states, and opposed the declaration that all men are created equal. Jefferson Davis stated that a "disparaging discrimination" and a fight for "liberty" against "the tyranny of an unbridled majority" gave the Confederate states a right to secede. In 1860, Congressman Laurence M. Keitt of South Carolina said, "The anti-slavery party contend that slavery is wrong in itself, and the Government is a consolidated national democracy. We of the South contend that slavery is right, and that this is a confederate Republic of sovereign States."
Stampp mentioned Confederate Vice President Alexander Stephens' A Constitutional View of the Late War Between the States as an example of a Southern leader who said that slavery was the "cornerstone of the Confederacy" when the war began and then said that the war was not about slavery but states' rights after Southern defeat. Stampp said that Stephens became one of the most ardent defenders of the Lost Cause.
To the old Union they had said that the Federal power had no authority to interfere with slavery issues in a state. To their new nation they would declare that the state had no power to interfere with a federal protection of slavery. Of all the many testimonials to the fact that slavery, and not states rights, really lay at the heart of their movement, this was the most eloquent of all.
The Compromise of 1850
The victory of the United States over Mexico resulted in the addition of large new territories conquered from Mexico. Controversy over whether these territories would be slave or free raised the risk of a war between slave and free states, and Northern support for the Wilmot Proviso, which would have banned slavery in the conquered territories, increased sectional tensions. The controversy was temporarily resolved by the Compromise of 1850, which allowed the territories of Utah and New Mexico to decide for or against slavery, but also allowed the admission of California as a free state, reduced the size of the slave state of Texas by adjusting the boundary, and ended the slave trade (but not slavery itself) in the District of Columbia. In return, the South got a stronger fugitive slave law than the version mentioned in the Constitution. The Fugitive Slave Law would reignite controversy over slavery.
Fugitive Slave Law issues
The Fugitive Slave Law of 1850 required that Northerners assist Southerners in reclaiming fugitive slaves, which many Northerners found to be extremely offensive. Anthony Burns was among the fugitive slaves captured and returned in chains to slavery as a result of the law. Harriet Beecher Stowe's best-selling novel Uncle Tom's Cabin greatly increased opposition to the Fugitive Slave Law.
Kansas-Nebraska Act (1854)
Most people thought the Compromise had ended the territorial issue, but Stephen A. Douglas reopened it in 1854, in the name of democracy. Douglas proposed the Kansas-Nebraska Bill with the intention of opening up vast new high-quality farm lands to settlement. As a Chicagoan, he was especially interested in the railroad connections from Chicago into Kansas and Nebraska, but that was not a controversial point. More importantly, Douglas firmly believed in democracy at the grass roots—that actual settlers have the right to decide on slavery, not politicians from other states. His bill provided that popular sovereignty, through the territorial legislatures, should decide "all questions pertaining to slavery", thus effectively repealing the Missouri Compromise. The bill eventually created a firestorm of protest in the Northern states, where it was seen as an effort to repeal the Missouri Compromise. However, the popular reaction in the first month after the bill's introduction failed to foreshadow the gravity of the situation; as Northern papers initially ignored the story, Republican leaders lamented the lack of a popular response.
Eventually, the popular reaction did come, but the leaders had to spark it. Chase's "Appeal of the Independent Democrats" did much to arouse popular opinion. In New York, William H. Seward finally took it upon himself to organize a rally against the Nebraska bill, since none had arisen spontaneously. Newspapers such as the National Era, the New York Tribune, and local free-soil journals condemned the bill. The Lincoln-Douglas debates of 1858 drew national attention to the issue of slavery expansion.
Founding of the Republican Party (1854)
Convinced that Northern society was superior to that of the South, and increasingly persuaded of the South's ambitions to extend slave power beyond its existing borders, Northerners were embracing a viewpoint that made conflict likely; however, conflict required the ascendancy of a political group to express the views of the North, such as the Republican Party. The Republican Party– campaigning on the popular, emotional issue of "free soil" in the frontier– captured the White House after just six years of existence.
The Republican Party grew out of the controversy over the Kansas-Nebraska legislation. Once the Northern reaction against the Kansas-Nebraska Act took place, its leaders acted to advance another political reorganization. Henry Wilson declared the Whig Party dead and vowed to oppose any efforts to resurrect it. Horace Greeley's Tribune called for the formation of a new Northern party, and Benjamin Wade, Chase, Charles Sumner, and others spoke out for the union of all opponents of the Nebraska Act. Gamaliel Bailey, editor of the antislavery National Era, was involved in calling a caucus of anti-slavery Whig and Democratic Party Congressmen in May.
Meeting in a Ripon, Wisconsin, Congregational Church on February 28, 1854, some thirty opponents of the Nebraska Act called for the organization of a new political party and suggested that "Republican" would be the most appropriate name (to link their cause to the defunct Republican Party of Thomas Jefferson). These founders also took a leading role in the creation of the Republican Party in many northern states during the summer of 1854. While conservatives and many moderates were content merely to call for the restoration of the Missouri Compromise or a prohibition of slavery extension, radicals advocated repeal of the Fugitive Slave Laws and rapid abolition in existing states. The term "radical" has also been applied to those who objected to the Compromise of 1850, which extended slavery in the territories.
But without the benefit of hindsight, the 1854 elections would seem to indicate the possible triumph of the Know-Nothing movement rather than anti-slavery, with the Catholic/immigrant question replacing slavery as the issue capable of mobilizing mass appeal. Know-Nothings, for instance, captured the mayoralty of Philadelphia with a majority of over 8,000 votes in 1854. Even after opening up immense discord with his Kansas-Nebraska Act, Senator Douglas began speaking of the Know-Nothings, rather than the Republicans, as the principal danger to the Democratic Party.
When Republicans spoke of themselves as a party of "free labor", they appealed to a rapidly growing, primarily middle class base of support, not permanent wage earners or the unemployed (the working class). When they extolled the virtues of free labor, they were merely reflecting the experiences of millions of men who had "made it" and millions of others who had a realistic hope of doing so. Like the Tories in England, the Republicans in the United States would emerge as the nationalists, homogenizers, imperialists, and cosmopolitans.
Those who had not yet "made it" included Irish immigrants, who made up a large growing proportion of Northern factory workers. Republicans often saw the Catholic working class as lacking the qualities of self-discipline, temperance, and sobriety essential for their vision of ordered liberty. Republicans insisted that there was a high correlation between education, religion, and hard work—the values of the "Protestant work ethic"—and Republican votes. "Where free schools are regarded as a nuisance, where religion is least honored and lazy unthrift is the rule", read an editorial of the pro-Republican Chicago Democratic Press after James Buchanan's defeat of John C. Fremont in the 1856 presidential election, "there Buchanan has received his strongest support".
Ethno-religious, socio-economic, and cultural fault lines ran throughout American society, but were becoming increasingly sectional, pitting Yankee Protestants with a stake in the emerging industrial capitalism and American nationalism against those tied to Southern slaveholding interests. For example, acclaimed historian Don E. Fehrenbacher, in his Prelude to Greatness: Lincoln in the 1850s, noted how Illinois was a microcosm of the national political scene, pointing out voting patterns that bore striking correlations to regional patterns of settlement. Those areas settled from the South were staunchly Democratic, while those settled by New Englanders were staunchly Republican. In addition, a belt of border counties was known for its political moderation, and traditionally held the balance of power. Intertwined with religious, ethnic, regional, and class identities, the issues of free labor and free soil were thus easy to play on.
Events during the next two years in "Bleeding Kansas" sustained the popular fervor originally aroused among some elements in the North by the Kansas-Nebraska Act. Free-State settlers from the North were encouraged by press and pulpit and the powerful organs of abolitionist propaganda. Often they received financial help from such organizations as the Massachusetts Emigrant Aid Company. Those from the South often received financial contributions from the communities they left. Southerners sought to uphold their constitutional rights in the territories and to maintain sufficient political strength to repulse "hostile and ruinous legislation".
While the Great Plains were largely unfit for the cultivation of cotton, informed Southerners demanded that the West be open to slavery, often—perhaps most often—with minerals in mind. Brazil, for instance, was an example of the successful use of slave labor in mining. In the middle of the 18th century, diamond mining supplemented gold mining in Minas Gerais and accounted for a massive transfer of masters and slaves from Brazil's northeastern sugar region. Southern leaders knew a good deal about this experience. It was even promoted in the pro-slavery DeBow's Review as far back as 1848.
Fragmentation of the American party system
"Bleeding Kansas" and the elections of 1856
In Kansas around 1855, the slavery issue reached a condition of intolerable tension and violence. But this was in an area where an overwhelming proportion of settlers were merely land-hungry Westerners indifferent to the public issues. The majority of the inhabitants were not concerned with sectional tensions or the issue of slavery. Instead, the tension in Kansas began as a contention between rival claimants. During the first wave of settlement, no one held titles to the land, and settlers rushed to occupy newly open land fit for cultivation. While the tension and violence did emerge as a pattern pitting Yankee and Missourian settlers against each other, there is little evidence of any ideological divides on the questions of slavery. Instead, the Missouri claimants, thinking of Kansas as their own domain, regarded the Yankee squatters as invaders, while the Yankees accused the Missourians of grabbing the best land without honestly settling on it.
However, the 1855–56 violence in "Bleeding Kansas" did reach an ideological climax after John Brown– regarded by followers as the instrument of God's will to destroy slavery– entered the melee. His murder of five pro-slavery settlers (the so-called "Pottawatomie Massacre", on the night of May 24, 1856) resulted in some irregular, guerrilla-style strife. Aside from John Brown's fervor, the strife in Kansas often involved only armed bands more interested in land claims or loot.
Of greater importance than the civil strife in Kansas, however, was the reaction against it nationwide and in Congress. In both North and South, the belief was widespread that the aggressive designs of the other section were epitomized by (and responsible for) what was happening in Kansas. Consequently, "Bleeding Kansas" emerged as a symbol of sectional controversy.
Indignant over the developments in Kansas, the Republicans—the first entirely sectional major party in U.S. history—entered their first presidential campaign with confidence. Their nominee, John C. Frémont, was a generally safe candidate for the new party. Although his nomination upset some of their Nativist Know-Nothing supporters (his mother was a Catholic), the nomination of the famed explorer of the Far West with no political record was an attempt to woo ex-Democrats. The other two Republican contenders, William H. Seward and Salmon P. Chase, were seen as too radical.
Nevertheless, the campaign of 1856 was waged almost exclusively on the slavery issue—pitted as a struggle between democracy and aristocracy—focusing on the question of Kansas. The Republicans condemned the Kansas-Nebraska Act and the expansion of slavery, but they advanced a program of internal improvements combining the idealism of anti-slavery with the economic aspirations of the North. The new party rapidly developed a powerful partisan culture, and energetic activists drove voters to the polls in unprecedented numbers. People reacted with fervor. Young Republicans organized the "Wide Awake" clubs and chanted "Free Soil, Free Labor, Free Men, Frémont!" With Southern fire-eaters and even some moderates uttering threats of secession if Frémont won, the Democratic candidate, Buchanan, benefited from apprehensions about the future of the Union.
Dred Scott decision (1857) and the Lecompton Constitution
The Lecompton Constitution and Dred Scott v. Sandford were both part of the Bleeding Kansas controversy over slavery that resulted from the Kansas-Nebraska Act, Stephen Douglas's attempt to replace the Missouri Compromise ban on slavery in the Kansas and Nebraska territories with popular sovereignty, under which the people of a territory could vote either for or against slavery. The Lecompton Constitution, which would have allowed slavery in Kansas, was the result of massive vote fraud by the pro-slavery Border Ruffians. Douglas helped defeat the Lecompton Constitution because it was supported by only the minority of pro-slavery people in Kansas, and Douglas believed in majority rule. Douglas hoped that both South and North would support popular sovereignty, but the opposite was true: neither side trusted Douglas.
The Supreme Court decision of 1857 in Dred Scott v. Sandford added to the controversy. Chief Justice Roger B. Taney's decision said that slaves were "so far inferior that they had no rights which the white man was bound to respect", and that slavery could spread into the territories even if the majority of people in the territories were anti-slavery. Lincoln warned that "the next Dred Scott decision" could threaten Northern states with slavery.
Buchanan, Republicans and anti-administration Democrats
President James Buchanan decided to end the troubles in Kansas by urging Congress to admit Kansas as a slave state under the Lecompton Constitution. Kansas voters, however, soundly rejected this constitution (albeit amid widespread fraud on both sides) by more than 10,000 votes. As Buchanan directed his presidential authority to this goal, he further angered the Republicans and alienated members of his own party. Prompting their break with the administration, the Douglasites saw this scheme as an attempt to pervert the principle of popular sovereignty on which the Kansas-Nebraska Act was based. Nationwide, conservatives were incensed, feeling as though the principles of states' rights had been violated. Even in the South, ex-Whigs and border-state Know-Nothings— most notably John Bell and John J. Crittenden (key figures in the sectional controversies to come)— urged the Republicans to oppose the administration's moves and take up the demand that the territories be given the power to accept or reject slavery.
As the schism in the Democratic party deepened, moderate Republicans argued that an alliance with anti-administration Democrats, especially Stephen Douglas, would be a key advantage in the 1860 elections. Some Republican observers saw the controversy over the Lecompton Constitution as an opportunity to peel off Democratic support in the border states, where Frémont picked up little support. After all, the border states had often gone for Whigs with a Northern base of support in the past without prompting threats of Southern withdrawal from the Union.
Among the proponents of this strategy was The New York Times, which called on the Republicans to downplay opposition to popular sovereignty in favor of a compromise policy calling for "no more slave states" in order to quell sectional tensions. The Times maintained that for the Republicans to be competitive in the 1860 elections, they would need to broaden their base of support to include all voters who for one reason or another were upset with the Buchanan Administration.
Indeed, pressure was strong for an alliance that would unite the growing opposition to the Democratic Administration. But such an alliance was no novel idea; it would essentially entail transforming the Republicans into the national, conservative, Union party of the country. In effect, this would be a successor to the Whig party.
Republican leaders, however, staunchly opposed any attempts to modify the party position on slavery, appalled by what they considered a surrender of their principles when, for example, all the ninety-two Republican members of Congress voted for the Crittenden-Montgomery bill in 1858. Although this compromise measure blocked Kansas' entry into the union as a slave state, the fact that it called for popular sovereignty, rather than outright opposition to the expansion of slavery, was troubling to the party leaders.
In the end, the Crittenden-Montgomery bill did not forge a grand anti-administration coalition of Republicans, ex-Whig Southerners in the border states, and Northern Democrats. Instead, the Democratic Party merely split along sectional lines. Anti-Lecompton Democrats complained that a new, pro-slavery test had been imposed upon the party. The Douglasites, however, refused to yield to administration pressure. Like the anti-Nebraska Democrats, who were now members of the Republican Party, the Douglasites insisted that they— not the administration— commanded the support of most northern Democrats.
Extremist sentiment in the South advanced dramatically as the Southern planter class perceived its hold on the executive, legislative, and judicial apparatus of the central government waning. It also grew increasingly difficult for Southern Democrats to manipulate power in many of the Northern states through their allies in the Democratic Party.
Historians have emphasized that the sense of honor was a central concern of upper class white Southerners. The idea of being treated like a second class citizen was anathema and could not be tolerated by an honorable southerner. The anti-slavery position held that slavery was a negative or evil phenomenon that damaged the rights of white men and the prospects of republicanism. To the white South this rhetoric made Southerners second-class citizens because it trampled their Constitutional rights to take their property anywhere.
Assault on Sumner (1856)
On May 19, 1856, Massachusetts Senator Charles Sumner gave a long speech in the Senate entitled "The Crime Against Kansas", which condemned the Slave Power as the evil force behind the nation's troubles. Sumner said the Southerners had committed a "crime against Kansas", singling out Senator Andrew P. Butler of South Carolina:
- "Not in any common lust for power did this uncommon tragedy have its origin. It is the rape of a virgin Territory, compelling it to the hateful embrace of slavery; and it may be clearly traced to a depraved desire for a new Slave State, hideous offspring of such a crime, in the hope of adding to the power of slavery in the National Government."
Sumner cast the South Carolinian as having "chosen a mistress [the harlot slavery]... who, though ugly to others, is always lovely to him, though polluted in the sight of the world is chaste in his sight." According to Hoffer (2010), "It is also important to note the sexual imagery that recurred throughout the oration, which was neither accidental nor without precedent. Abolitionists routinely accused slaveholders of maintaining slavery so that they could engage in forcible sexual relations with their slaves." Three days later, Sumner, working at his desk on the Senate floor, was beaten almost to death by Congressman Preston S. Brooks, Butler's nephew. Sumner took years to recover; he became a martyr to the antislavery cause and said the episode proved the barbarism of slave society. Brooks was lauded as a hero upholding Southern honor. The episode further polarized North and South, strengthened the new Republican Party, and added a new element of violence on the floor of Congress.
Emergence of Lincoln
Republican Party structure
Despite their significant loss in the election of 1856, Republican leaders realized that even though they appealed only to Northern voters, they needed to win only two more states, such as Pennsylvania and Illinois, to win the presidency in 1860.
As the Democrats were grappling with their own troubles, leaders in the Republican party fought to keep elected members focused on the issue of slavery in the West, which allowed them to mobilize popular support. Chase wrote Sumner that if the conservatives succeeded, it might be necessary to recreate the Free Soil Party. He was also particularly disturbed by the tendency of many Republicans to eschew moral attacks on slavery for political and economic arguments.
The controversy over slavery in the West had still not, by itself, created a fixation on the issue of slavery. Although the old restraints on the sectional tensions were being eroded with the rapid extension of mass politics and mass democracy in the North, the perpetuation of conflict over the issue of slavery in the West still required the efforts of radical Democrats in the South and radical Republicans in the North. They had to ensure that the sectional conflict would remain at the center of the political debate.
William Seward contemplated this potential in the 1840s, when the Democrats were the nation's majority party, usually controlling Congress, the presidency, and many state offices. The country's institutional structure and party system allowed slaveholders to prevail in more of the nation's territories and to garner a great deal of influence over national policy. With growing popular discontent with the unwillingness of many Democratic leaders to take a stand against slavery, and growing consciousness of the party's increasingly pro-Southern stance, Seward became convinced that the only way for the Whig Party to counteract the Democrats' strong monopoly of the rhetoric of democracy and equality was for the Whigs to embrace anti-slavery as a party platform. To increasing numbers of Northerners, the Southern labor system seemed contrary to the ideals of American democracy.
Republicans believed in the existence of "the Slave Power Conspiracy", which had seized control of the federal government and was attempting to pervert the Constitution for its own purposes. The "Slave Power" idea gave the Republicans the anti-aristocratic appeal with which men like Seward had long wished to be associated politically. By fusing older anti-slavery arguments with the idea that slavery posed a threat to Northern free labor and democratic values, it enabled the Republicans to tap into the egalitarian outlook which lay at the heart of Northern society.
In this sense, during the 1860 presidential campaign, Republican orators even cast "Honest Abe" as an embodiment of these principles, repeatedly referring to him as "the child of labor" and "son of the frontier", who had proved how "honest industry and toil" were rewarded in the North. Although Lincoln had been a Whig, the "Wide Awakes" (members of the Republican clubs) used replicas of rails that he had split to remind voters of his humble origins.
In almost every northern state, organizers attempted to have a Republican Party or an anti-Nebraska fusion movement on ballots in 1854. In areas where the radical Republicans controlled the new organization, the comprehensive radical program became the party policy. Just as they helped organize the Republican Party in the summer of 1854, the radicals played an important role in the national organization of the party in 1856. Republican conventions in New York, Massachusetts, and Illinois adopted radical platforms. These radical platforms in such states as Wisconsin, Michigan, Maine, and Vermont usually called for the divorce of the government from slavery, the repeal of the Fugitive Slave Laws, and no more slave states, as did platforms in Pennsylvania, Minnesota, and Massachusetts when radical influence was high.
Conservatives at the Republican 1860 nominating convention in Chicago were able to block the nomination of William Seward, who had an earlier reputation as a radical (but by 1860 had been criticized by Horace Greeley as being too moderate). Other candidates had earlier joined or formed parties opposing the Whigs and had thereby made enemies of many delegates. Lincoln was selected on the third ballot. However, conservatives were unable to bring about the resurrection of "Whiggery". The convention's resolutions regarding slavery were roughly the same as they had been in 1856, but the language appeared less radical. In the following months, even Republican conservatives like Thomas Ewing and Edward Baker embraced the platform language that "the normal condition of territories was freedom". All in all, the organizers had done an effective job of shaping the official policy of the Republican Party.
Southern slaveholding interests now faced the prospect of a Republican President and the entry of new free states that would alter the nation's balance of power between the sections. To many Southerners, the resounding defeat of the Lecompton Constitution foreshadowed the entry of more free states into the Union. Dating back to the Missouri Compromise, the Southern region desperately sought to maintain an equal balance of slave states and free states so as to be competitive in the Senate. Since the last slave state was admitted in 1845, five more free states had entered. The tradition of maintaining a balance between North and South was abandoned in favor of the addition of more free soil states.
Sectional battles over federal policy in the late 1850s
Lincoln-Douglas Debates
The Lincoln-Douglas Debates were a series of seven debates in 1858 between Stephen Douglas, United States Senator from Illinois, and Abraham Lincoln, the Republican who sought to replace Douglas in the Senate. The debates were mainly about slavery. Douglas defended his Kansas-Nebraska Act, which replaced the Missouri Compromise ban on slavery in the Louisiana Purchase territory north and west of Missouri with popular sovereignty, allowing residents of territories such as Kansas to vote either for or against slavery. Douglas put Lincoln on the defensive by accusing him of being a Black Republican abolitionist, but Lincoln responded by asking Douglas to reconcile popular sovereignty with the Dred Scott decision. Douglas's Freeport Doctrine held that residents of a territory could keep slavery out by refusing to pass a slave code and other laws needed to protect slavery. This doctrine, and the fact that Douglas had helped defeat the pro-slavery Lecompton Constitution, made him unpopular in the South, which led to the 1860 split of the Democratic Party into Northern and Southern wings. The Democrats retained control of the Illinois legislature, and Douglas thus retained his seat in the U.S. Senate (at that time United States Senators were elected by the state legislatures, not by popular vote); however, Lincoln's national profile was greatly raised, paving the way for his election as president of the United States two years later.
In The Rise of American Civilization (1927), Charles and Mary Beard argue that slavery was not so much a social or cultural institution as an economic one (a labor system). The Beards cited inherent conflicts between Northeastern finance, manufacturing, and commerce and Southern plantations, which competed to control the federal government so as to protect their own interests. According to the economic determinists of the era, both groups used arguments over slavery and states' rights as a cover.
Recent historians have rejected the Beardian thesis, but the Beards' economic determinism has influenced subsequent historians in important ways. Modernization theorists, such as Raimondo Luraghi, have argued that as the Industrial Revolution was expanding on a worldwide scale, the days of wrath were coming for a series of agrarian, pre-capitalistic, "backward" societies throughout the world, from the Italian and American South to India. But most American historians point out the South was highly developed and on average about as prosperous as the North.
Panic of 1857 and sectional realignments
A few historians believe that the serious financial panic of 1857 and the economic difficulties leading up to it strengthened the Republican Party and heightened sectional tensions. Before the panic, strong economic growth was being achieved under relatively low tariffs. Hence much of the nation concentrated on growth and prosperity.
The iron and textile industries were facing acute, worsening trouble each year after 1850. By 1854, stocks of iron were accumulating in each world market. Iron prices fell, forcing many American iron mills to shut down.
Republicans urged western farmers and northern manufacturers to blame the depression on the domination of the low-tariff economic policies of southern-controlled Democratic administrations. However the depression revived suspicion of Northeastern banking interests in both the South and the West. Eastern demand for western farm products shifted the West closer to the North. As the "transportation revolution" (canals and railroads) went forward, an increasingly large share and absolute amount of wheat, corn, and other staples of western producers– once difficult to haul across the Appalachians– went to markets in the Northeast. The depression underscored the value of western markets for eastern goods and of homesteaders who would furnish both markets and respectable profits.
Aside from the land issue, economic difficulties strengthened the Republican case for higher tariffs for industries in response to the depression. This issue was important in Pennsylvania and perhaps New Jersey.
Southern response
Meanwhile, many Southerners grumbled over "radical" notions of giving land away to farmers that would "abolitionize" the area. While the ideology of Southern sectionalism was well-developed before the Panic of 1857 by figures like J.D.B. DeBow, the panic helped convince even more cotton barons that they had grown too reliant on Eastern financial interests.
Thomas Prentice Kettell, former editor of the Democratic Review, was another commentator popular in the South who enjoyed a great degree of prominence between 1857 and 1860. In his book Southern Wealth and Northern Profits, Kettell gathered an array of statistics to show that the South produced vast wealth, while the North, with its dependence on raw materials, siphoned off the wealth of the South. Arguing that sectional inequality resulted from the concentration of manufacturing in the North, and from the North's supremacy in communications, transportation, finance, and international trade, his ideas paralleled old physiocratic doctrines that all profits of manufacturing and trade come out of the land. Political sociologists, such as Barrington Moore, have noted that these forms of romantic nostalgia tend to crop up whenever industrialization takes hold.
Such Southern hostility to the free farmers gave the North an opportunity for an alliance with Western farmers. After the political realignments of 1857–58—manifested by the emerging strength of the Republican Party and their networks of local support nationwide—almost every issue was entangled with the controversy over the expansion of slavery in the West. While questions of tariffs, banking policy, public land, and subsidies to railroads did not always unite all elements in the North and the Northwest against the interests of slaveholders in the South under the pre-1854 party system, they were translated in terms of sectional conflict—with the expansion of slavery in the West involved.
As the depression strengthened the Republican Party, slave holding interests were becoming convinced that the North had aggressive and hostile designs on the Southern way of life. The South was thus increasingly fertile ground for secessionism.
The Republicans' Whig-style personality-driven "hurrah" campaign helped stir hysteria in the slave states upon the emergence of Lincoln and intensify divisive tendencies, while Southern "fire eaters" gave credence to notions of the slave power conspiracy among Republican constituencies in the North and West. New Southern demands to re-open the African slave trade further fueled sectional tensions.
From the early 1840s until the outbreak of the Civil War, the cost of slaves had been rising steadily. Meanwhile, the price of cotton was experiencing market fluctuations typical of raw commodities. After the Panic of 1857, the price of cotton fell while the price of slaves continued its steep rise. At the 1858 Southern commercial convention, William L. Yancey of Alabama called for the reopening of the African slave trade. Only the delegates from the states of the Upper South, who profited from the domestic trade, opposed the reopening of the slave trade since they saw it as a potential form of competition. The convention in 1858 wound up voting to recommend the repeal of all laws against slave imports, despite some reservations.
John Brown and Harpers Ferry (1859)
On October 16, 1859, radical abolitionist John Brown led an attempt to start an armed slave revolt by seizing the U.S. Army arsenal at Harper's Ferry, Virginia (now West Virginia). Brown and twenty followers, both whites (including two of Brown's sons) and blacks (three free blacks, one freedman, and one fugitive slave), planned to seize the armory and use weapons stored there to arm black slaves in order to spark a general uprising by the slave population.
Although the raiders were initially successful in cutting the telegraph line and capturing the armory, they allowed a passing train to continue on to Washington, D.C., where the authorities were alerted to the attack. By October 17 the raiders were surrounded in the armory by the militia and other locals. Robert E. Lee (then a Colonel in the U.S. Army) led a company of U.S. Marines in storming the armory on October 18. Ten of the raiders were killed, including both of Brown's sons; Brown himself along with a half dozen of his followers were captured; four of the raiders escaped immediate capture. Six locals were killed and nine injured; the Marines suffered one dead and one injured. The local slave population failed to join in Brown's attack.
Brown was subsequently hanged for treason (against the Commonwealth of Virginia), as were six of his followers. The raid became a cause célèbre in both the North and the South, with Brown vilified by Southerners as a bloodthirsty fanatic, but celebrated by many Northern abolitionists as a martyr to the cause of freedom.
Elections of 1860
Initially, William H. Seward of New York, Salmon P. Chase of Ohio, and Simon Cameron of Pennsylvania, were the leading contenders for the Republican presidential nomination. But Abraham Lincoln, a former one-term House member who gained fame amid the Lincoln-Douglas Debates of 1858, had fewer political opponents within the party and outmaneuvered the other contenders. On May 16, 1860, he received the Republican nomination at their convention in Chicago, Illinois.
The schism in the Democratic Party over the Lecompton Constitution and Douglas' Freeport Doctrine caused Southern "fire-eaters" to oppose front-runner Stephen A. Douglas' bid for the Democratic presidential nomination. Douglas defeated the proslavery Lecompton Constitution for Kansas because the majority of Kansans were antislavery, and Douglas' popular sovereignty doctrine would allow the majority to vote slavery up or down as they chose. Douglas' Freeport Doctrine alleged that the antislavery majority of Kansans could thwart the Dred Scott decision, which allowed slavery, by withholding the legislation for a slave code and other laws needed to protect slavery. As a result, Southern extremists demanded a slave code for the territories, and used this issue to divide the northern and southern wings of the Democratic Party. Southerners left the party and in June nominated John C. Breckinridge, while Northern Democrats supported Douglas. As a result, the Southern planter class lost a considerable measure of sway in national politics. Because of the Democrats' division, the Republican nominee faced a divided opposition. Adding to Lincoln's advantage, ex-Whigs from the border states had earlier formed the Constitutional Union Party, nominating John Bell for President. Thus, party nominees waged regional campaigns. Douglas and Lincoln competed for Northern votes, while Bell, Douglas and Breckinridge competed for Southern votes.
"Vote yourself a farm– vote yourself a tariff" could have been a slogan for the Republicans in 1860. In sum, business was to support the farmers' demands for land (popular also in industrial working-class circles) in return for support for a higher tariff. To an extent, the elections of 1860 bolstered the political power of new social forces unleashed by the Industrial Revolution. In February 1861, after the seven states had departed the Union (four more would depart in April–May 1861; in late April, Maryland was unable to secede because it was put under martial law), Congress had a strong northern majority and passed the Morrill Tariff Act (signed by Buchanan), which increased duties and provided the government with funds needed for the war.
Split in the Democratic Party
The Alabama extremist William Lowndes Yancey's demand for a federal slave code for the territories split the Democratic Party between North and South, which made the election of Lincoln possible. Yancey tried to make his demand for a slave code moderate enough to get Southern support and yet extreme enough to enrage Northerners and split the party. He demanded that the party support a slave code for the territories if later necessary, so that the demand would be conditional enough to win Southern support. His tactic worked, and lower South delegates left the Democratic Convention at Institute Hall in Charleston, South Carolina and walked over to Military Hall. The South Carolina extremist Robert Barnwell Rhett hoped that the lower South would completely break with the Northern Democrats and attend a separate convention at Richmond, Virginia, but lower South delegates gave the national Democrats one last chance at unification by going to the convention at Baltimore, Maryland before the split became permanent. The end result was that John C. Breckinridge became the candidate of the Southern Democrats, and Stephen Douglas became the candidate of the Northern Democrats.
Yancey's previous attempt at demanding a slave code for the territories was his Alabama Platform of 1848, which came in response to the Northern Wilmot Proviso attempt at banning slavery in territories conquered from Mexico. Both the Alabama Platform and the Wilmot Proviso failed, but Yancey learned to be less overtly radical in order to get more support. Southerners thought they were merely demanding equality, in that they wanted Southern property in slaves to get the same (or more) protection as Northern forms of property.
Southern secession
With the emergence of the Republicans as the nation's first major sectional party by the mid-1850s, politics became the stage on which sectional tensions were played out. Although much of the West– the focal point of sectional tensions– was unfit for cotton cultivation, Southern secessionists read the political fallout as a sign that their power in national politics was rapidly weakening. Before, the slave system had been buttressed to an extent by the Democratic Party, which was increasingly seen as representing a more pro-Southern position that unfairly permitted Southerners to prevail in the nation's territories and to dominate national policy before the Civil War. But Democrats suffered a significant reverse in the electoral realignment of the mid-1850s. 1860 was a critical election that marked a stark change in existing patterns of party loyalties among groups of voters; Abraham Lincoln's election was a watershed in the balance of power of competing national and parochial interests and affiliations.
Once the election returns were certain, a special South Carolina convention declared "that the Union now subsisting between South Carolina and other states under the name of the 'United States of America' is hereby dissolved", heralding the secession of six more cotton states by February, and the formation of an independent nation, the Confederate States of America. Lipset (1960) examined the secessionist vote in each Southern state in 1860–61. In each state he divided the counties into those with a high, medium, or low proportion of slaves. He found that in the 181 high-slavery counties, the vote was 72% for secession. In the 205 low-slavery counties, the vote was only 37% for secession. (In the 153 middle counties, the vote for secession was 60%.) Both the outgoing Buchanan administration and the incoming Lincoln administration refused to recognize the legality of secession or the legitimacy of the Confederacy. After Lincoln called for troops, four border states (which lacked cotton) seceded.
Disputes over the route of a proposed transcontinental railroad affected the timing of the Kansas Nebraska Act. The timing of the completion of a railroad from Georgia to South Carolina also was important, in that it allowed influential Georgians to declare their support for secession in South Carolina at a crucial moment. South Carolina secessionists feared that if they seceded first, they would be as isolated as they were during the Nullification Crisis. Support from Georgians was quickly followed by support for secession in the same South Carolina state legislature that previously preferred a cooperationist approach, as opposed to separate state secession.
The Totten system of forts (including forts Sumter and Pickens) designed for coastal defense encouraged Anderson to move federal troops from Fort Moultrie to the more easily defended Fort Sumter in Charleston harbor, South Carolina. Likewise, Slemmer moved U.S. troops from Fort Barrancas to the more easily defended Fort Pickens in Florida. These troop movements were defensive from the Northern point of view, and acts of aggression from the Southern point of view. Also, an attempt to resupply Fort Sumter via the ship Star of the West was seen as an attack on a Southern owned fort by secessionists, and as an attempt to defend U.S. property from the Northern point of view.
The tariff issue is greatly exaggerated by Lost Cause historians. The tariff had been written and approved by the South, so it was mostly Northerners (especially in Pennsylvania) who complained about the low rates; some Southerners feared that eventually the North would have enough control that it could raise the tariff at will.
As for states' rights, while a states' right of revolution mentioned in the Declaration of Independence was based on the inalienable equal rights of man, secessionists believed in a modified version of states' rights that was safe for slavery.
These issues were especially important in the lower South, where 47 percent of the population were slaves. The upper South, where 32 percent of the population were slaves, considered the Fort Sumter crisis—especially Lincoln's call for troops to march south to recapture it—a cause for secession. The northernmost border slave states, where 13 percent of the population were slaves, did not secede.
Fort Sumter
When South Carolina seceded in December 1860, Major Robert Anderson, a pro-slavery former slave-owner from Kentucky, remained loyal to the Union. He was the commanding officer of United States Army forces in Charleston, South Carolina—the last remaining important Union post in the Deep South. Acting without orders, he moved his small garrison from Fort Moultrie, which was indefensible, to the more modern and more defensible Fort Sumter in the middle of Charleston Harbor. South Carolina leaders cried betrayal, while the North celebrated with enormous excitement at this show of defiance against secessionism. In February 1861 the Confederate States of America was formed and took charge. Jefferson Davis, the Confederate President, ordered that the fort be captured. The artillery attack was commanded by Brig. Gen. P. G. T. Beauregard, who had been Anderson's student at West Point. The attack began on April 12, 1861, and continued until Anderson, badly outnumbered and outgunned, surrendered the fort on April 14. The battle began the American Civil War, as an overwhelming demand for war swept both the North and the South, with only Kentucky attempting to remain neutral.
The opening of the Civil War, as well as the modern meaning of the American flag, according to Adam Goodheart (2011), was forged in December 1860, when Anderson, acting without orders, moved the American garrison from Fort Moultrie to Fort Sumter, in Charleston Harbor, in defiance of the overwhelming power of the new Confederate States of America. Goodheart argues this was the opening move of the Civil War, and the flag was used throughout the North to symbolize American nationalism and rejection of secessionism.
- Before that day, the flag had served mostly as a military ensign or a convenient marking of American territory, flown from forts, embassies, and ships, and displayed on special occasions like the Fourth of July. But in the weeks after Major Anderson's surprising stand, it became something different. Suddenly the Stars and Stripes flew – as it does today, and especially as it did after September 11 – from houses, from storefronts, from churches; above the village greens and college quads. For the first time American flags were mass-produced rather than individually stitched and even so, manufacturers could not keep up with demand. As the long winter of 1861 turned into spring, that old flag meant something new. The abstraction of the Union cause was transfigured into a physical thing: strips of cloth that millions of people would fight for, and many thousands die for.
Onset of the Civil War and the question of compromise
Abraham Lincoln's rejection of the Crittenden Compromise, the failure to secure the ratification of the Corwin amendment in 1861, and the inability of the Washington Peace Conference of 1861 to provide an effective alternative to Crittenden and Corwin came together to prevent a compromise that is still debated by Civil War historians. Even as the war was going on, William Seward and James Buchanan were outlining a debate over the question of inevitability that would continue among historians.
Two competing explanations of the sectional tensions inflaming the nation emerged even before the war. Buchanan believed the sectional hostility to be the accidental, unnecessary work of self-interested or fanatical agitators. He also singled out the "fanaticism" of the Republican Party. Seward, on the other hand, believed there to be an irrepressible conflict between opposing and enduring forces.
The irrepressible conflict argument was the first to dominate historical discussion. In the first decades after the fighting, histories of the Civil War generally reflected the views of Northerners who had participated in the conflict. The war appeared to be a stark moral conflict in which the South was to blame, a conflict that arose as a result of the designs of slave power. Henry Wilson's History of The Rise and Fall of the Slave Power in America (1872–1877) is the foremost representative of this moral interpretation, which argued that Northerners had fought to preserve the union against the aggressive designs of "slave power". Later, in his seven-volume History of the United States from the Compromise of 1850 to the Civil War, (1893–1900), James Ford Rhodes identified slavery as the central—and virtually only—cause of the Civil War. The North and South had reached positions on the issue of slavery that were both irreconcilable and unalterable. The conflict had become inevitable.
But the idea that the war was avoidable did not gain ground among historians until the 1920s, when the "revisionists" began to offer new accounts of the prologue to the conflict. Revisionist historians, such as James G. Randall and Avery Craven, saw in the social and economic systems of the South no differences so fundamental as to require a war. Randall blamed the ineptitude of a "blundering generation" of leaders. He also saw slavery as essentially a benign institution, crumbling in the presence of 19th century tendencies. Craven, the other leading revisionist, placed more emphasis on the issue of slavery than Randall but argued roughly the same points. In The Coming of the Civil War (1942), Craven argued that slave laborers were not much worse off than Northern workers, that the institution was already on the road to ultimate extinction, and that the war could have been averted by skillful and responsible leaders in the tradition of Congressional statesmen Henry Clay and Daniel Webster. Two of the most important figures in U.S. politics in the first half of the 19th century, Clay and Webster, arguably in contrast to the 1850s generation of leaders, shared a predisposition to compromises marked by a passionate patriotic devotion to the Union.
But it is possible that the politicians of the 1850s were not inept. More recent studies have kept elements of the revisionist interpretation alive, emphasizing the role of political agitation (the efforts of Democratic politicians of the South and Republican politicians in the North to keep the sectional conflict at the center of the political debate). David Herbert Donald argued in 1960 that the politicians of the 1850s were not unusually inept but that they were operating in a society in which traditional restraints were being eroded in the face of the rapid extension of democracy. The stability of the two-party system had kept the Union together, but that system collapsed in the 1850s, thus reinforcing, rather than suppressing, sectional conflict.
Reinforcing this interpretation, political sociologists have pointed out that the stable functioning of a political democracy requires a setting in which parties represent broad coalitions of varying interests, and that peaceful resolution of social conflicts takes place most easily when the major parties share fundamental values. Before the 1850s, the second American two party system (competition between the Democrats and the Whigs) conformed to this pattern, largely because sectional ideologies and issues were kept out of politics to maintain cross-regional networks of political alliances. However, in the 1840s and 1850s, ideology made its way into the heart of the political system despite the best efforts of the conservative Whig Party and the Democratic Party to keep it out.
Contemporaneous explanations
“The new [Confederate] Constitution has put at rest forever all the agitating questions relating to our peculiar institutions—African slavery as it exists among us—the proper status of the negro in our form of civilization. This was the immediate cause of the late rupture and present revolution. . . . (Jefferson's) ideas, however, were fundamentally wrong. They rested upon the assumption of the equality of races. This was an error.... Our new government is founded upon exactly the opposite idea; its foundations are laid, its cornerstone rests, upon the great truth that the negro is not equal to the white man; that slavery– subordination to the superior race– is his natural and normal condition.” (Alexander H. Stephens, Cornerstone Speech, March 21, 1861)
In July 1863, as decisive campaigns were fought at Gettysburg and Vicksburg, Republican senator Charles Sumner re-dedicated his speech The Barbarism of Slavery and said that desire to preserve slavery was the sole cause of the war:
“[T]here are two apparent rudiments to this war. One is Slavery and the other is State Rights. But the latter is only a cover for the former. If Slavery were out of the way there would be no trouble from State Rights.
The war, then, is for Slavery, and nothing else. It is an insane attempt to vindicate by arms the lordship which had been already asserted in debate. With mad-cap audacity it seeks to install this Barbarism as the truest Civilization. Slavery is declared to be the "corner-stone" of the new edifice.”
Lincoln's war goals were reactions to the war, as opposed to causes. Abraham Lincoln explained the nationalist goal as the preservation of the Union on August 22, 1862, one month before his preliminary Emancipation Proclamation:
“I would save the Union. I would save it the shortest way under the Constitution. The sooner the national authority can be restored; the nearer the Union will be "the Union as it was." ... My paramount object in this struggle is to save the Union, and is not either to save or to destroy slavery. If I could save the Union without freeing any slave I would do it, and if I could save it by freeing all the slaves I would do it; and if I could save it by freeing some and leaving others alone I would also do that.... I have here stated my purpose according to my view of official duty; and I intend no modification of my oft-expressed personal wish that all men everywhere could be free.”
On March 4, 1865, Lincoln said in his Second Inaugural Address that slavery was the cause of the War:
“One-eighth of the whole population were colored slaves, not distributed generally over the Union, but localized in the southern part of it. These slaves constituted a peculiar and powerful interest. All knew that this interest was somehow the cause of the war. To strengthen, perpetuate, and extend this interest was the object for which the insurgents would rend the Union even by war, while the Government claimed no right to do more than to restrict the territorial enlargement of it.”
See also
- American Civil War
- Compensated Emancipation
- Conclusion of the American Civil War
- Issues of the American Civil War
- Slavery in the United States
- Timeline of events leading to the American Civil War
- Elizabeth R. Varon, Bruce Levine, Marc Egnal, and Michael Holt at a plenary session of the organization of American Historians, March 17, 2011, reported by David A. Walsh "Highlights from the 2011 Annual Meeting of the Organization of American Historians in Houston, Texas" HNN online
- David Potter, The Impending Crisis, pages 42–50
- The Mason-Dixon Line and the Ohio River were key boundaries.
- Fehrenbacher pp.15–17. Fehrenbacher wrote, "As a racial caste system, slavery was the most distinctive element in the southern social order. The slave production of staple crops dominated southern agriculture and eminently suited the development of a national market economy."
- Fehrenbacher pp. 16–18
- Goldstone p. 13
- McDougall p. 318
- Forbes p. 4
- Mason pp. 3–4
- Freehling p.144
- Freehling p. 149. In the House the votes for the Tallmadge amendments in the North were 86–10 and 80-14 in favor, while in the South the vote to oppose was 66–1 and 64-2.
- Missouri Compromise
- Forbes pp. 6–7
- Mason p. 8
- Leah S. Glaser, "United States Expansion, 1800–1860"
- Richard J. Ellis, Review of The Shaping of American Liberalism: The Debates over Ratification, Nullification, and Slavery. by David F. Ericson, William and Mary Quarterly, Vol. 51, No. 4 (1994), pp. 826–829
- John Tyler, Life Before the Presidency
- Jane H. Pease, William H. Pease, "The Economics and Politics of Charleston's Nullification Crisis", Journal of Southern History, Vol. 47, No. 3 (1981), pp. 335–362
- Remini, Andrew Jackson, v2 pp. 136–137. Niven pg. 135–137. Freehling, Prelude to Civil War pg 143
- Craven pg.65. Niven pg. 135–137. Freehling, Prelude to Civil War pg 143
- Ellis, Richard E. The Union at Risk: Jacksonian Democracy, States' Rights, and the Nullification Crisis (1987), page 193; Freehling, William W. Prelude to Civil War: The Nullification Crisis in South Carolina 1816–1836. (1965), page 257
- Ellis p. 193. Ellis further notes that “Calhoun and the nullifiers were not the first southerners to link slavery with states’ rights. At various points in their careers, John Taylor, John Randolph, and Nathaniel Macon had warned that giving too much power to the federal government, especially on such an open-ended issue as internal improvement, could ultimately provide it with the power to emancipate slaves against their owners’ wishes.”
- Jon Meacham (2009), American Lion: Andrew Jackson in the White House, p. 247; Correspondence of Andrew Jackson, Vol. V, p. 72.
- Varon (2008) p. 109. Wilentz (2005) p. 451
- Miller (1995) pp. 144–146
- Miller (1995) pp. 209–210
- Wilentz (2005) pp. 470–472
- Miller, 112
- Miller, pp. 476, 479–481
- Huston p. 41. Huston writes, "...on at least three matters southerners were united. First, slaves were property. Second, the sanctity of southerners' property rights in slaves was beyond the questioning of anyone inside or outside of the South. Third, slavery was the only means of adjusting social relations properly between Europeans and Africans."
- Brinkley, Alan (1986). American History: A Survey. New York: McGraw-Hill. p. 328.
- Moore, Barrington (1966). Social Origins of Dictatorship and Democracy. New York: Beacon Press. p. 117.
- North, Douglas C. (1961). The Economic Growth of the United States 1790–1860. Englewood Cliffs. p. 130.
- Elizabeth Fox-Genovese and Eugene D. Genovese, Slavery in White and Black: Class and Race in the Southern Slaveholders' New World Order (2008)
- James M. McPherson, "Antebellum Southern Exceptionalism: A New Look at an Old Question", Civil War History 29 (September 1983)
- "Conflict and Collaboration: Yeomen, Slaveholders, and Politics in the Antebellum South", Social History 10 (October 1985): 273–98. quote at p. 297.
- Thornton, Politics and Power in a Slave Society: Alabama, 1800–1860 (Louisiana State University Press, 1978)
- McPherson (2007) pp.4–7. James M. McPherson wrote in referring to the Progressive historians, the Vanderbilt agrarians, and revisionists writing in the 1940s, “While one or more of these interpretations remain popular among the Sons of Confederate Veterans and other Southern heritage groups, few historians now subscribe to them.”
- Craig in Woodworth, ed. The American Civil War: A Handbook of Literature and Research (1996), p.505.
- Donald 2001 pp 134–38
- Huston pp. 24–25. Huston lists other estimates of the value of slaves; James D. B. De Bow puts it at $2 billion in 1850, while in 1858 Governor James Pettus of Mississippi estimated the value at $2.6 billion in 1858.
- Huston p. 25
- Soil Exhaustion as a Factor in the Agricultural History of Virginia and Maryland, 1606–1860
- Encyclopedia of American Foreign Policy – A-D
- Woodworth, ed. The American Civil War: A Handbook of Literature and Research (1996), 145 151 505 512 554 557 684; Richard Hofstadter, The Progressive Historians: Turner, Beard, Parrington (1969); for one dissenter see Marc Egnal. "The Beards Were Right: Parties in the North, 1840–1860". Civil War History 47, no. 1. (2001): 30–56.
- Kenneth M. Stampp, The Imperiled Union: Essays on the Background of the Civil War (1981) p 198
- Also from Kenneth M. Stampp, The Imperiled Union p 198
Most historians... now see no compelling reason why the divergent economies of the North and South should have led to disunion and civil war; rather, they find stronger practical reasons why the sections, whose economies neatly complemented one another, should have found it advantageous to remain united. Beard oversimplified the controversies relating to federal economic policy, for neither section unanimously supported or opposed measures such as the protective tariff, appropriations for internal improvements, or the creation of a national banking system.... During the 1850s, Federal economic policy gave no substantial cause for southern disaffection, for policy was largely determined by pro-Southern Congresses and administrations. Finally, the characteristic posture of the conservative northeastern business community was far from anti-Southern. Most merchants, bankers, and manufacturers were outspoken in their hostility to antislavery agitation and eager for sectional compromise in order to maintain their profitable business connections with the South. The conclusion seems inescapable that if economic differences, real though they were, had been all that troubled relations between North and South, there would be no substantial basis for the idea of an irrepressible conflict.
- James M. McPherson, Antebellum Southern Exceptionalism: A New Look at an Old Question Civil War History – Volume 50, Number 4, December 2004, page 421
- Richard Hofstadter, "The Tariff Issue on the Eve of the Civil War", The American Historical Review Vol. 44, No. 1 (1938), pp. 50–55 full text in JSTOR
- John Calhoun, Slavery a Positive Good, February 6, 1837
- Noll, Mark A. (2002). America's God: From Jonathan Edwards to Abraham Lincoln. Oxford University Press. p. 640.
- Noll, Mark A. (2006). The Civil War as a Theological Crisis. UNC Press. p. 216.
- Noll, Mark A. (2002). The US Civil War as a Theological War: Confederate Christian Nationalism and the League of the South. Oxford University Press. p. 640.
- Hull, William E. (February 2003). "Learning the Lessons of Slavery". Christian Ethics Today 9 (43). Retrieved 2007-12-19.
- Methodist Episcopal Church, South
- Presbyterian Church in the United States
- Gaustad, Edwin S. (1982). A Documentary History of Religion in America to the Civil War. Wm. B. Eerdmans Publishing Co. pp. 491–502.
- Johnson, Paul (1976). History of Christianity. Simon & Schuster. p. 438.
- Noll, Mark A. (2002). America's God: From Jonathan Edwards to Abraham Lincoln. Oxford University Press. pp. 399–400.
- Miller, Randall M.; Stout, Harry S.; Wilson, Charles Reagan, eds. (1998). "The Bible and Slavery". Religion and the American Civil War. Oxford University Press. p. 62.
- Bestor, 1964, pp. 10–11
- McPherson, 2007, p. 14.
- Stampp, pp. 190–193.
- Bestor, 1964, p. 11.
- Krannawitter, 2008, pp. 49–50.
- McPherson, 2007, pp. 13–14.
- Bestor, 1964, pp. 17–18.
- Guelzo, pp. 21–22.
- Bestor, 1964, p. 15.
- Miller, 2008, p. 153.
- McPherson, 2007, p. 3.
- Bestor, 1964, p. 19.
- McPherson, 2007, p. 16.
- Bestor, 1964, pp. 19–20.
- Bestor, 1964, p. 21
- Bestor, 1964, p. 20
- Bestor, 1964, p. 20.
- Russell, 1966, p. 468-469
- Bestor, 1964, p. 23
- Russell, 1966, p. 470
- Bestor, 1964, p. 24
- Bestor, 1964, pp. 23-24
- Holt, 2004, pp. 34–35.
- McPherson, 2007, p. 7.
- Krannawitter, 2008, p. 232.
- Bestor, 1964, pp. 24–25.
- "The Amistad Case". National Portrait Gallery. Retrieved 2007-10-16.
- McPherson, Battle Cry p. 8; James Brewer Stewart, Holy Warriors: The Abolitionists and American Slavery (1976); Pressly, 270ff
- Wendell Phillips, "No Union With Slaveholders", January 15, 1845, in Louis Ruchames, ed. The Abolitionists (1963), p.196.
- Mason I Lowance, Against Slavery: An Abolitionist Reader, (2000), page 26
- "Abolitionist William Lloyd Garrison Admits of No Compromise with the Evil of Slavery". Retrieved 2007-10-16.
- Alexander Stephens' Cornerstone Speech, Savannah, Georgia, March 21, 1861
- Stampp, The Causes of the Civil War, page 59
- Schlesinger quotes from an essay “The State Rights Fetish” excerpted in Stampp p. 70
- Schlesinger in Stampp pp. 68–69
- McDonald p. 143
- Kenneth M. Stampp, The Causes of the Civil War, p. 14
- Nevins, Ordeal of the Union: Fruits of Manifest Destiny 1847–1852, p. 155
- Donald, Baker, and Holt, p.117.
- When arguing for the equality of states, Jefferson Davis said, "Who has been in advance of him in the fiery charge on the rights of the States, and in assuming to the Federal Government the power to crush and to coerce them? Even to-day he has repeated his doctrines. He tells us this is a Government which we will learn is not merely a Government of the States, but a Government of each individual of the people of the United States". – Jefferson Davis' reply in the Senate to William H. Seward, Senate Chamber, U.S. Capitol, February 29, 1860, From The Papers of Jefferson Davis, Volume 6, pp. 277–84.
- When arguing against equality of individuals, Davis said, "We recognize the fact of the inferiority stamped upon that race of men by the Creator, and from the cradle to the grave, our Government, as a civil institution, marks that inferiority". – Jefferson Davis' reply in the Senate to William H. Seward, Senate Chamber, U.S. Capitol, February 29, 1860, – From The Papers of Jefferson Davis, Volume 6, pp. 277–84. Transcribed from the Congressional Globe, 36th Congress, 1st Session, pp. 916–18.
- Jefferson Davis' Second Inaugural Address, Virginia Capitol, Richmond, February 22, 1862, Transcribed from Dunbar Rowland, ed., Jefferson Davis, Constitutionalist, Volume 5, pp. 198–203. Summarized in The Papers of Jefferson Davis, Volume 8, p. 55.
- Lawrence Keitt, Congressman from South Carolina, in a speech to the House on January 25, 1860: Congressional Globe.
- Stampp, The Causes of the Civil War, pages 63–65
- William C. Davis, Look Away, pages 97–98
- David Potter, The Impending Crisis, page 275
- First Lincoln Douglas Debate at Ottawa, Illinois August 21, 1858
- Bertram Wyatt-Brown, Southern Honor: Ethics and Behavior in the Old South (1982) pp 22–23, 363
- Christopher J. Olsen (2002). Political Culture and Secession in Mississippi: Masculinity, Honor, and the Antiparty Tradition, 1830–1860. Oxford University Press. p. 237. footnote 33
- Lacy Ford, ed. (2011). A Companion to the Civil War and Reconstruction. Wiley. p. 28.
- Michael William Pfau, "Time, Tropes, and Textuality: Reading Republicanism in Charles Sumner's 'Crime Against Kansas'", Rhetoric & Public Affairs vol 6 #3 (2003) 385–413, quote on p. 393 online in Project MUSE
- In modern terms Sumner accused Butler of being a "pimp who attempted to introduce the whore, slavery, into Kansas" says Judith N. McArthur; Orville Vernon Burton (1996). "A Gentleman and an Officer": A Military and Social History of James B. Griffin's Civil War. Oxford U.P. p. 40.
- Williamjames Hoffer, The Caning of Charles Sumner: Honor, Idealism, and the Origins of the Civil War (2010) p. 62
- William E. Gienapp, "The Crime Against Sumner: The Caning of Charles Sumner and the Rise of the Republican Party," Civil War History (1979) 25#3 pp. 218-245 doi:10.1353/cwh.1979.0005
- Donald, David; Randall, J.G. (1961). The Civil War and Reconstruction. Boston: D.C. Heath and Company. p. 79.
- Nevins, Allan (1947). Ordeal of the Union, vol. 3. New York: Charles Scribner's Sons. p. 218.
- Moore, Barrington, p.122.
- William W. Freehling, The Road to Disunion: Secessionists Triumphant 1854–1861, pages 271–341
- Roy Nichols, The Disruption of American Democracy: A History of the Political Crisis That Led Up To The Civil War (1949)
- Seymour Martin Lipset, Political Man: The Social Bases of Politics (Doubleday, 1960) p. 349.
- Maury Klein, Days of Defiance: Sumter, Secession, and the Coming of the Civil War (1999)
- David M. Potter, The Impending Crisis, pages 14–150
- William W. Freehling, The Road to Disunion, Secessionists Triumphant: 1854–1861, pages 345–516
- Richard Hofstadter, "The Tariff Issue on the Eve of the Civil War", American Historical Review Vol. 44, No. 1 (October 1938), pp. 50–55 in JSTOR
- Daniel Crofts, Reluctant Confederates: Upper South Unionists in the Secession Crisis (1989)
- Adam Goodheart, 1861: The Civil War Awakening (2011) ch 2–5
- Adam Goodheart, "Prologue", in 1861: The Civil War Awakening (2011)
- Letter to Horace Greeley, August 22, 1862
- Craven, Avery. The Coming of the Civil War (1942) ISBN 0-226-11894-0
- Donald, David Herbert, Baker, Jean Harvey, and Holt, Michael F. The Civil War and Reconstruction. (2001)
- Ellis, Richard E. The Union at Risk: Jacksonian Democracy, States' Rights and the Nullification Crisis. (1987)
- Fehrenbacher, Don E. The Slaveholding Republic: An Account of the United States Government's Relations to Slavery. (2001) ISBN 1-195-14177-6
- Forbes, Robert Pierce. The Missouri Compromise and Its Aftermath: Slavery and the Meaning of America. (2007) ISBN 978-0-8078-3105-2
- Freehling, William W. Prelude to Civil War: The Nullification Crisis in South Carolina 1816–1836. (1965) ISBN 0-19-507681-8
- Freehling, William W. The Road to Disunion: Secessionists at Bay 1776–1854. (1990) ISBN 0-19-505814-3
- Freehling, William W. and Craig M. Simpson, eds. Secession Debated: Georgia's Showdown in 1860 (1992), speeches
- Hesseltine; William B. ed. The Tragic Conflict: The Civil War and Reconstruction (1962), primary documents
- Huston, James L. Calculating the Value of the Union: Slavery, Property Rights, and the Economic Origins of the Civil War. (2003) ISBN 0-8078-2804-1
- Mason, Matthew. Slavery and Politics in the Early American Republic. (2006) ISBN 13:978-0-8078-3049-9
- McDonald, Forrest. States' Rights and the Union: Imperium in Imperio, 1776–1876. (2000)
- McPherson, James M. This Mighty Scourge: Perspectives on the Civil War. (2007)
- Miller, William Lee. Arguing About Slavery: John Quincy Adams and the Great Battle in the United States Congress. (1995) ISBN 0-394-56922-9
- Niven, John. John C. Calhoun and the Price of Union (1988) ISBN 0-8071-1451-0
- Perman, Michael, ed. Major Problems in Civil War & Reconstruction (2nd ed. 1998) primary and secondary sources.
- Remini, Robert V. Andrew Jackson and the Course of American Freedom, 1822–1832,v2 (1981) ISBN 0-06-014844-6
- Stampp, Kenneth, ed. The Causes of the Civil War (3rd ed 1992), primary and secondary sources.
- Varon, Elizabeth R. Disunion: The Coming of the American Civil War, 1789–1859. (2008) ISBN 978-0-8078-3232-5
- Wakelyn; Jon L. ed. Southern Pamphlets on Secession, November 1860 – April 1861 (1996)
- Wilentz, Sean. The Rise of American Democracy: Jefferson to Lincoln. (2005) ISBN 0-393-05820-4
Further reading
- Ayers, Edward L. What Caused the Civil War? Reflections on the South and Southern History (2005). 222 pp.
- Beale, Howard K., "What Historians Have Said About the Causes of the Civil War", Social Science Research Bulletin 54, 1946.
- Boritt, Gabor S. ed. Why the Civil War Came (1996)
- Childers, Christopher. "Interpreting Popular Sovereignty: A Historiographical Essay", Civil War History Volume 57, Number 1, March 2011 pp. 48–70 in Project MUSE
- Crofts Daniel. Reluctant Confederates: Upper South Unionists in the Secession Crisis (1989), pp 353–82 and 457-80
- Etcheson, Nicole. "The Origins of the Civil War", History Compass 2005 #3 (North America)
- Foner, Eric. "The Causes of the American Civil War: Recent Interpretations and New Directions". In Beyond the Civil War Synthesis: Political Essays of the Civil War Era, edited by Robert P. Swierenga, 1975.
- Kornblith, Gary J., "Rethinking the Coming of the Civil War: A Counterfactual Exercise". Journal of American History 90.1 (2003): 80 pars. detailed historiography; online version
- Pressly, Thomas. Americans Interpret Their Civil War (1966), sorts historians into schools of interpretation
- SenGupta, Gunja. “Bleeding Kansas: A Review Essay”. Kansas History 24 (Winter 2001/2002): 318–341.
- Tulloch, Hugh. The Debate On the American Civil War Era (Issues in Historiography) (2000)
- Woodworth, Steven E. ed. The American Civil War: A Handbook of Literature and Research (1996), 750 pages of historiography; see part IV on Causation.
"Needless war" school
- Craven, Avery, The Repressible Conflict, 1830–61 (1939)
- The Coming of the Civil War (1942)
- "The Coming of the War Between the States", Journal of Southern History 2 (August 1936): 30–63; in JSTOR
- Donald, David. "An Excess of Democracy: The Civil War and the Social Process" in David Donald, Lincoln Reconsidered: Essays on the Civil War Era, 2d ed. (New York: Alfred A. Knopf, 1966), 209–35.
- Holt, Michael F. The Political Crisis of the 1850s. (1978) emphasis on political parties and voters
- Randall, James G. "The Blundering Generation", Mississippi Valley Historical Review 27 (June 1940): 3–28 in JSTOR
- James G. Randall. The Civil War and Reconstruction. (1937), survey and statement of "needless war" interpretation
- Pressly, Thomas J. "The Repressible Conflict", chapter 7 of Americans Interpret Their Civil War (Princeton: Princeton University Press, 1954).
- Ramsdell, Charles W. "The Natural Limits of Slavery Expansion", Mississippi Valley Historical Review, 16 (September 1929), 151–71, in JSTOR; says slavery had almost reached its outer limits of growth by 1860, so war was unnecessary to stop further growth. online version without footnotes
Economic causation and modernization
- Beard, Charles, and Mary Beard. The Rise of American Civilization. Two volumes. (1927), says slavery was minor factor
- Luraghi, Raimondo, "The Civil War and the Modernization of American Society: Social Structure and Industrial Revolution in the Old South Before and During the War", Civil War History XVIII (September 1972). in JSTOR
- McPherson, James M. Ordeal by Fire: the Civil War and Reconstruction. (1982), uses modernization interpretation.
- Moore, Barrington. Social Origins of Dictatorship and Democracy. (1966). modernization interpretation
- Thornton, Mark; Ekelund, Robert B. Tariffs, Blockades, and Inflation: The Economics of the Civil War. (2004), stresses fear of future protective tariffs
Nationalism and culture
- Crofts Daniel. Reluctant Confederates: Upper South Unionists in the Secession Crisis (1989)
- Current, Richard. Lincoln and the First Shot (1963)
- Nevins, Allan, author of most detailed history
- Ordeal of the Union 2 vols. (1947) covers 1850–57.
- The Emergence of Lincoln, 2 vols. (1950) covers 1857–61; does not take strong position on causation
- Olsen, Christopher J. Political Culture and Secession in Mississippi: Masculinity, Honor, and the Antiparty Tradition, 1830–1860 (2000), cultural interpretation
- Potter, David The Impending Crisis 1848–1861. (1976), Pulitzer Prize-winning history emphasizing rise of Southern nationalism
- Potter, David M. Lincoln and His Party in the Secession Crisis (1942).
- Miller, Randall M., Harry S. Stout, and Charles Reagan Wilson, eds. Religion and the American Civil War (1998), essays
Slavery as cause
- Ashworth, John
- Slavery, Capitalism, and Politics in the Antebellum Republic. (1995)
- "Free labor, wage labor, and the slave power: republicanism and the Republican party in the 1850s", in Melvyn Stokes and Stephen Conway (eds), The Market Revolution in America: Social, Political and Religious Expressions, 1800–1880, pp. 128–46. (1996)
- Donald, David et al. The Civil War and Reconstruction (latest edition 2001); 700-page survey
- Fellman, Michael et al. This Terrible War: The Civil War and its Aftermath (2003), 400-page survey
- Foner, Eric
- Free Soil, Free Labor, Free Men: the Ideology of the Republican Party before the Civil War. (1970, 1995) stress on ideology
- Politics and Ideology in the Age of the Civil War. New York: Oxford University Press. (1981)
- Freehling, William W. The Road to Disunion: Secessionists at Bay, 1776–1854 1991., emphasis on slavery
- Gienapp William E. The Origins of the Republican Party, 1852–1856 (1987)
- Manning, Chandra. What This Cruel War Was Over: Soldiers, Slavery, and the Civil War. New York: Vintage Books (2007).
- McPherson, James M. Battle Cry of Freedom: The Civil War Era. (1988), major overview, neoabolitionist emphasis on slavery
- Morrison, Michael. Slavery and the American West: The Eclipse of Manifest Destiny and the Coming of the Civil War (1997)
- Ralph E. Morrow. "The Proslavery Argument Revisited", The Mississippi Valley Historical Review, Vol. 48, No. 1. (June 1961), pp. 79–94. in JSTOR
- Rhodes, James Ford History of the United States from the Compromise of 1850 to the McKinley-Bryan Campaign of 1896 Volume: 1. (1920), highly detailed narrative 1850–56. vol 2 1856–60; emphasis on slavery
- Schlesinger, Arthur Jr. "The Causes of the Civil War" (1949) reprinted in his The Politics of Hope (1963); reintroduced new emphasis on slavery
- Stampp, Kenneth M. America in 1857: A Nation on the Brink (1990)
- Stampp, Kenneth M. And the War Came: The North and the Secession Crisis, 1860–1861 (1950).
- Civil War and Reconstruction: Jensen's Guide to WWW Resources
- Report of the Brown University Steering Committee on Slavery and Justice
- State by state popular vote for president in 1860 election
- Tulane course – article on 1860 election
- Onuf, Peter. "Making Two Nations: The Origins of the Civil War" 2003 speech
- The Gilder Lehrman Institute of American History
- CivilWar.com Many source materials, including states' secession declarations.
- Causes of the Civil War Collection of primary documents
- Declarations of Causes of Seceding States
- Alexander H. Stephens' Cornerstone Address
- An entry from Alexander Stephens' diary, dated 1866, reflecting on the origins of the Civil War.
- The Arguments of the Constitutional Unionists in 1850–51
- Shmoop US History: Causes of the Civil War – study guide, dates, trivia, multimedia, teachers' guide
- Booknotes interview with Stephen B. Oates on The Approaching Fury: Voices of the Storm, 1820–1861, April 27, 1997. | http://en.wikipedia.org/wiki/Origins_of_the_American_Civil_War | 13 |
18 | Rationality is something we all value, but which most of us find difficult to clearly define. We tend to think we're rational and others are not. We tend to think that we know what's rational and what isn't. But if we can't precisely define rationality, how can we be sure?
On this page, we'll briefly outline the subject of rational thinking, and provide links to more in-depth material.
The first thing to be aware of is that there is an ideal form of rationality. Humans don't reason in this ideal fashion. Instead, humans are driven to conclusions by intuitions that are rapid, effortless, and mostly unconscious. Our intuitions can approximate rational thinking, but they frequently deviate from the ideal. Our intuitions come with cognitive biases. A cognitive bias is a flaw in our intuition that causes us to reach the wrong conclusion sometimes. Cognitive bias knocks our thinking off of the ideal course, and toward faulty conclusions. Fortunately, it is possible to understand our cognitive biases, and apply a critical thinking process to get us back on course.
The principles of rationality tell us how to rationally update our beliefs in light of logic and evidence, and how to practically incorporate these beliefs into our decision-making. This is the imprecise definition of rationality that most thoughtful people can agree on. It tells us what the principles do, but not what the principles are.
Rational thinking is following the ideal set of rules for inferring conclusions from known facts and new evidence.
No matter what you want to do, no matter what values you want to satisfy, you'll need an accurate picture of yourself and the world around you. This is what rationality provides.
How To Know: The Principles of Rational Inference
The principles that tell us how to update our beliefs are called the principles of rational inference, and these principles are precise technical rules.
The first principle of rational inference is called deduction. Deduction tells us how to infer new specifics from general rules. For example, if all cats are mammals, and Fluffy is a cat, then we can infer that Fluffy is a mammal.
The second principle of rational inference is called induction. Induction is a principle that allows us to infer general rules from specific past experiences. Induction gives us probability estimates or relative probability estimates. For example, the fact that heavy objects always fall to the ground when released is known by induction. If a heavy object floated into the sky when released, there would be no contradiction, but we do rationally believe that such an occurrence would be unlikely. The technical rules of induction tell us how we should update our confidence in a theory based upon our experience.
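One common way to formalize this kind of confidence-updating is Bayes' rule. The short sketch below is an illustration added to this page rather than something specified in the original text, and the hypothesis, prior, and likelihood numbers are all invented assumptions.

```python
# A minimal Bayesian update: revise confidence in a hypothesis after new evidence.
# The prior and the likelihood values are illustrative assumptions, not real data.

def update(prior, likelihood_if_true, likelihood_if_false):
    """Return P(hypothesis | evidence) using Bayes' rule."""
    numerator = prior * likelihood_if_true
    denominator = numerator + (1 - prior) * likelihood_if_false
    return numerator / denominator

# Hypothesis: "heavy objects fall when released."
# Start fairly confident, then observe one more heavy object falling as predicted.
posterior = update(prior=0.9, likelihood_if_true=0.99, likelihood_if_false=0.5)
print(round(posterior, 3))  # 0.947: confidence rises, though never to absolute certainty
```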
These principles are exclusive. There are no other ways to rationally infer things, except by methods that can be justified by these principles.
Deduction and induction are the only ways to make rational inferences.
Any deliberate reasoning process which violates these rules is irrational. But before we write off all human thinking as irrational, it's important to understand how humans think, and how they approximate ideal reasoning.
Inference: Ideal versus Human
An ideal thinker would only believe things that were rationally justified by the two principles of rational inference. We humans are not ideal rational thinkers, and, for us, this isn't always a bad thing.
In recent decades, researchers in psychology and behavioral economics have discovered that the human mind thinks in two distinct modes. The first mode of thinking is fast, intuitive, automatic and effortless. Automatic thinking is also subconscious. The second mode of thinking is slow, logical and requires deliberate effort.
Automatic thinking is invaluable for human living. Without our automatic modes of thinking, we would be unable to dance, drive cars, or interact with people in the social fashion to which we are accustomed. Most of the things we do every day are automatic. However, automatic thinking is prone to certain kinds of errors. In order to get quick answers, the human mind takes shortcuts. It sacrifices accuracy to get speed. There are hundreds of examples of ways that automatic thinking leads us to incorrect and irrational answers. Some of these examples are amusing, and others, like stereotyping or self-justification can be quite disturbing. These automatic deviations from ideal rationality are known as cognitive biases.
The logical mode of our thinking usually works to correct our automatic judgments, and make them more rational. The logical side of our thinking is what gives rise to critical thinking.
Critical thinking is a deliberate attempt to correct our cognitive bias, and return to more ideal rational thinking.
Unfortunately, we cannot always muster the effort to engage our logical thinking mode. This happens when we are fatigued or simply uninterested. On other occasions, we engage our logical faculties, but they don't protect us because we don't know the right way to expose the errors in automatic thinking.
Values: How to Decide
Ideal inference is essentially value-free. When evaluating the facts, values are irrelevant. Valuing a possible answer does not make it more likely to be correct. For example, most of us would value having a billion dollars in our bank accounts, but our desire for money doesn't make it more likely that our bank account inexplicably contains a billion dollars.
The principles of rational inference allow an ideal reasoner to suspend value judgments and get a more accurate picture of the world. They also allow the ideal reasoner to more accurately forecast the outcomes of each potential choice facing the reasoner. But how does an ideal reasoner pick among his or her choices?
The answer will come as a surprise to many people:
The ideal reasoner chooses the action whose outcome best satisfies his or her values.
Unlike the situation with inferences, there are no value-free decisions. Without our values, we would lack any basis for preferring one choice/outcome over another.
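As a rough illustration of a value-laden choice built on value-free forecasts, the sketch below picks the action with the highest expected satisfaction of a stated set of values. The actions, outcome probabilities, and utility numbers are all invented for the example and are not taken from this article.

```python
# Value-free forecasts (outcome probabilities) combined with value-laden preferences
# (utilities). Every number below is an illustrative assumption.

forecasts = {
    "take the new job": {"thrive": 0.6, "regret it": 0.4},
    "keep current job": {"status quo": 1.0},
}
utilities = {"thrive": 10, "regret it": -5, "status quo": 2}  # the chooser's values

def expected_utility(outcomes):
    return sum(prob * utilities[outcome] for outcome, prob in outcomes.items())

best_action = max(forecasts, key=lambda action: expected_utility(forecasts[action]))
print(best_action)  # "take the new job": expected utility 4.0 versus 2.0
```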
Neurologists have discovered that patients with brain injuries that disable their value and emotion centers can make excellent inferences, but are unable to make simple decisions about what to do.
A rational thinker makes value-free inferences, but value-laden choices.
Whatever values you endorse, rational inference and decision-making can help you satisfy your values.
Specialized knowledge about cognitive bias and critical thinking has been available for decades, but despite the publication of numerous popular books on the topic of bias and critical thinking, this knowledge has not yet taken root in popular consciousness. If so few of us can define rational thinking, then we must admit that we live in a predominantly pre-rational culture.
How can we do better?
The first step is to create a cultural awareness of ideal rational thinking. Ideal rationality creates a frame for understanding cognitive bias and critical thinking.
Cognitive bias is more challenging to comprehend when it takes the form of a sundry list of psychological phenomena. The concept of ideal versus non-ideal rational thinking creates a narrative for understanding the myriad of biases. Cognitive biases are the ways that we humans predictably deviate from ideal rational thinking.
The same framing applies to critical thinking. Critical thinking looks like a dry list of thinking practices, but how can we describe critical thinking in an intuitive way? Here, again, the frame of ideal rational thinking creates an intuitive narrative for thinking about critical thinking. Critical thinking can be grasped as a set of techniques that we apply to our automatic inferences to bring them closer to ideal rational inferences.
Framing the issues in terms of ideal rational thought also gives us a sense of progress and a direction for cultural development. The notion of ideal rational inference gives us a potential way to measure the rationality of our inferences and decisions. Measurement, in turn, offers us a way to chart the progress of humanity towards a more rational society – a society that better satisfies our values of justice and morality.
Finally, the frame gives us a basis for altering our educational institutions. The principles of rational thought and an understanding of our shared cognitive biases are things that every citizen should be exposed to in school.
Copyright 2011 Rational Future Institute NFP | http://www.rationalfuture.org/rationality.html | 13 |
17 | Correlation coefficients measure the strength of
association between two variables. The most common correlation
coefficient, called the
Pearson product-moment correlation coefficient,
measures the strength of the
linear association between variables.
The sign and the absolute value of a Pearson correlation coefficient
describe the direction and the magnitude of the relationship
between two variables.
The value of a correlation coefficient ranges between -1 and +1.
- The greater the absolute value of a correlation coefficient, the stronger the linear relationship.
- The strongest linear relationship is indicated by a correlation coefficient of -1 or +1.
- The weakest linear relationship is indicated by a correlation coefficient equal to 0.
- A positive correlation means that if one variable gets bigger, the other variable tends to get bigger.
- A negative correlation means that if one variable gets bigger, the other variable tends to get smaller.
Keep in mind that the Pearson correlation coefficient only measures
linear relationships. Therefore, a correlation of 0 does not
mean zero relationship between two variables; rather, it means
zero linear relationship. (It is possible for two
variables to have zero linear relationship and a strong
curvilinear relationship at the same time.)
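The parenthetical point is easy to verify. The snippet below is an illustration added here (it is not part of the original entry) and assumes Python 3.10 or later for statistics.correlation.

```python
# y = x^2 over a symmetric range has a strong curvilinear relationship with x,
# yet its Pearson correlation with x is exactly 0.
from statistics import correlation  # available in Python 3.10+

xs = [-2, -1, 0, 1, 2]
ys = [x * x for x in xs]      # 4, 1, 0, 1, 4
print(correlation(xs, ys))    # 0.0
```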
A formula for computing a Pearson correlation coefficient is given below.
The correlation r between two variables is:
r = Σ(xy) / sqrt[ (Σx²) * (Σy²) ]
where Σ is the summation symbol, x = xi − x̄, xi is the x value for observation i, x̄ is the mean x value, y = yi − ȳ, yi is the y value for observation i, and ȳ is the mean y value.
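As an illustration (not part of the original entry), the deviation-score formula above translates directly into a short function; the sample data below are made up.

```python
# Pearson r computed from deviation scores, following the formula above.

def pearson_r(x_values, y_values):
    n = len(x_values)
    mean_x = sum(x_values) / n
    mean_y = sum(y_values) / n
    x_dev = [x - mean_x for x in x_values]   # x = xi - mean of x
    y_dev = [y - mean_y for y in y_values]   # y = yi - mean of y
    numerator = sum(a * b for a, b in zip(x_dev, y_dev))
    denominator = (sum(a * a for a in x_dev) * sum(b * b for b in y_dev)) ** 0.5
    return numerator / denominator

print(pearson_r([1, 2, 3, 4], [2, 4, 5, 9]))  # about 0.96: a strong positive linear relationship
```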
Fortunately, you will rarely have to compute a correlation
coefficient by hand. Many software packages (e.g., Excel) and most graphing calculators
have a correlation function that will do the job for you. | http://stattrek.com/statistics/dictionary.aspx?definition=Correlation | 13 |
34 | Elementary propositions, the simplest kind of proposition, consist of names (4.22) and depict a possible state of affairs (4.21). Just as the existence or non-existence of any possible state of affairs has no bearing on the existence or non-existence of any other possible state of affairs, so does the truth or falsity of any elementary proposition have no bearing on the truth or falsity of any other elementary proposition. And just as the totality of all existent states of affairs is the world, so the totality of all true elementary propositions is a complete description of the world (4.26).
Any given elementary proposition is either true or false. Combining the two elementary propositions, p and q, produces four separate truth- possibilities: (1) both p and q are true, (2) p is true and q is false, (3) p is false and q is true, and (4) both p and q are false. We can express the truth-conditions of a proposition that joins p and q—say, "if p then q—in terms of these four truth- possibilities in a table, thus:
p | q |
T | T | T
T | F | T
F | T | F
F | F | T
This table is a propositional sign for "if p then q." The results of this table can be expressed linearly, thus: "(TTFT)(p,q)" (4.442). From this notation it becomes clear that there are no "logical objects," such as a sign expressing the "if then" conditional (4.441).
A proposition that is true no matter what (e.g. "(TTTT)(p,q)") is called a "tautology" and a proposition that is false no matter what (e.g. "(FFFF)(p,q)") is called a "contradiction" (4.46). Tautologies and contradictions lack sense in that they do not represent any possible situations, but they are not nonsense, either. A tautology is true and a contradiction is false no matter how things stand in the world, whereas nonsense is neither true nor false.
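Since the linear signature is just the column of outcomes read off the four truth-possibilities in the order given above, it can be generated mechanically. The Python sketch below is purely illustrative (ROWS, signature, and classify are invented helper names); it prints the signature of a two-place truth-function and flags tautologies and contradictions.

# Truth-possibilities for (p, q) in the order used above:
# (1) both true, (2) p true / q false, (3) p false / q true, (4) both false.
ROWS = [(True, True), (True, False), (False, True), (False, False)]

def signature(connective):
    # Read off the column of outcomes and write it in the linear notation.
    return "(" + "".join("T" if connective(p, q) else "F" for p, q in ROWS) + ")(p,q)"

def classify(connective):
    column = [connective(p, q) for p, q in ROWS]
    if all(column):
        return "tautology"      # true under every truth-possibility
    if not any(column):
        return "contradiction"  # false under every truth-possibility
    return "contingent"

print(signature(lambda p, q: (not q) or p))  # (TTFT)(p,q), the column tabulated above
print(signature(lambda p, q: p and q))       # (TFFF)(p,q)
print(classify(lambda p, q: p or not p))     # tautology
print(classify(lambda p, q: p and not p))    # contradiction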
Propositions are built up as truth-functions of elementary propositions (5). The "truth-grounds" of a proposition are the truth-possibilities under which the proposition comes out true (5.101). A proposition that shares all the truth- grounds of one or several other propositions is said to follow from those propositions (5.11). If one proposition follows from another, we can say the sense of the former is contained in the sense of the latter (5.122). For instance, the truth-grounds for "p" are contained in the truth-grounds for "p.q" ("p" is true in all those cases where "p.q" is true), so we can say that "p" follows from "p.q" and that the sense of "p" is contained in the sense of "p.q."
We can infer whether one proposition follows from another from the structure of the propositions themselves: there is no need for "laws of inference" to tell us how we can and cannot proceed in logical deduction (5.132). We must also recognize, however, that we can only infer propositions from one another if they are logically connected: we cannot infer one state of affairs from a totally distinct state of affairs. Thus, Wittgenstein concludes, there is no logical justification for inferring future events from those of the present (5.1361).
We say that "p" says less than "p.q" because it follows from "p.q." Consequently, a tautology says nothing at all, since it follows from all propositions and no further propositions follow from it.
The logic of inference is the basis for probability. Let us take as an example the two propositions "(TFFF)(p,q)" ("p and q") and "(TTTF)(p,q)" ("p or q"). We can say that the latter proposition gives a probability of 1/3 to the former, because—excluding all external considerations—if "p or q" is true, then there is a one in three chance that "p and q" will be true as well. Wittgenstein emphasizes that this is only a theoretical procedure; in reality there are no degrees of probability: propositions are either true or false (5.153).
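Both "following from" and this notion of probability come down to comparing truth-grounds, so they can be sketched with the same few lines of Python. This is only an illustration under the row ordering used above, not anything from the text itself; truth_grounds, follows_from, and probability_given are invented names.

ROWS = [(True, True), (True, False), (False, True), (False, False)]

def truth_grounds(connective):
    # The truth-possibilities under which the proposition comes out true.
    return {i for i, (p, q) in enumerate(ROWS) if connective(p, q)}

def follows_from(conclusion, premise):
    # A proposition follows from another when it shares all of that proposition's truth-grounds.
    return truth_grounds(premise) <= truth_grounds(conclusion)

def probability_given(r, s):
    # The share of r's truth-grounds that are also truth-grounds of s.
    tr = truth_grounds(r)
    return len(tr & truth_grounds(s)) / len(tr)

p_and_q = lambda p, q: p and q  # (TFFF)(p,q)
p_or_q = lambda p, q: p or q    # (TTTF)(p,q)
just_p = lambda p, q: p

print(follows_from(just_p, p_and_q))       # True: "p" follows from "p.q"
print(probability_given(p_or_q, p_and_q))  # 0.333...: "p or q" gives "p and q" a probability of 1/3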
Truth tables are tables we can draw up to schematize a proposition and determine its truth-conditions. Wittgenstein does this at 4.31 and 4.442. Wittgenstein did not invent truth tables, but their use in modern logic is usually traced to his introduction of them in the Tractatus. Wittgenstein was also the first philosopher to recognize that they could be wielded as a significant philosophical tool.
The assumption that underlies Wittgenstein's work here is that the sense of a proposition is given if its truth conditions are given. If we know under what circumstances a proposition is true and under what circumstances it is false, then we know all there is to know about that proposition. On reflection, this assumption is perfectly reasonable. If I know what would have to be the case for "Your dog is eating my hat" to be true, and if I know what would have to be the case for it to be false, then I can be said to know what that proposition means. An exhaustive list of the truth-possibilities of a proposition, coupled with an indication of which truth-possibilities make the proposition come out true and which false, will tell us all we need to know about that proposition.
This is exactly what truth tables do. Any proposition, according to Wittgenstein, consists of one or more elementary proposition, each of which can be true or false independently of any other. If we put all the elementary propositions that constitute a given proposition into a truth table that lists all the possible combinations of true or false that can hold between them, we will have an exhaustive list of the truth-conditions of the given proposition. Thus, a truth table can show us the sense of the proposition. The proposition "p.q" ("p and q") can equally well be expressed as a truth table, or as "(TFFF)(p,q)."
The great advantage of this notation is that it expresses the sense of a proposition without any of the connectives we normally find in logical notation, such as "and," "or," and "if then." Clearly, none of these connectives are essential to the sense of the proposition, thus giving credence to Wittgenstein's "fundamental idea" (4.0312) that "the 'logical constants' are not representatives." In a truth table, the connections between elementary propositions "show" themselves, and so need not be said.
Wittgenstein also explains that this method can "show" the workings of logical inference, thus rendering unnecessary the "laws of inference" that both Frege and Russell had built into their axiomatic systems. One proposition follows from a second proposition if the first is true whenever the second is true. If we express "p or q" as "(TTTF)(p,q)" and "p and q" as "(TFFF)(p,q)" we can see that the former follows from the latter by comparing their truth-grounds: where there is a "T" in the latter proposition, there is a corresponding "T" in the former proposition. We don't need a law of inference to tell us this: it shows itself plainly in the truth-grounds of the two propositions.
The limiting cases of propositions are tautologies and contradictions. Wittgenstein uses the German word sinnlos ("senseless") to describe the peculiar status of tautologies and contradictions, in contrast to unsinnig, or "nonsensical." They are not nonsense because they consist of elementary propositions and are held together in a logical way. However, these elementary propositions are held together in such a way that they do not represent any possible state of affairs. Tautologies, as necessarily true and not representative of any particular fact, are particularly interesting to Wittgenstein. As we shall see, he will claim at 6.1 that the propositions of logic are tautologies.
http://www.sparknotes.com/philosophy/tractatus/section7.rhtml
Georg Wilhelm Friedrich Hegel
Georg Wilhelm Friedrich Hegel (August 27, 1770 – November 14, 1831) was a German philosopher, the main representative of nineteenth century German Idealism, and one of the major thinkers in the history of Western philosophy.
Building on the foundation laid by Johann Gottlieb Fichte and Friedrich Wilhelm Schelling, Hegel developed a speculative system practically unrivaled in the scope of its ambition. Hegel's highly systematic philosophy has been characterized as a form of panlogism: a system which portrays rational thought as the ultimate reality as well as the instrument for explaining all of reality. In that system, the Absolute, considered by Schelling to be beyond the grasp of reason, is described in its development as Spirit through a dialectical process, an idea that would later be borrowed by Karl Marx.
Hegel described his method as speculative, in the sense that it unveiled the hidden dimensions of reality through an analysis of the thought process of the dialectic. Being and non-being, for instance, are usually considered opposites that destroy each other. For Hegel, their mutual negation leads to the third element of a triad, in which both earlier elements are sublated, absent as such, yet included in a higher form. This formula was applied by Hegel to all aspects of thought and nature, leading to a comprehensive system where the Absolute’s development is explained through its own internal mechanism.
The fascination exerted by Hegel's system rests on its ability to explain existing contradictions and how they are transcended without resorting to an external explanation. His apparent ability to produce a "theory of everything" was based on the simple laws of thought considered in an unexpected new light. His philosophy has often been considered through simplified caricatures, rather than for what it really is. The often-heard criticism that, in his logical deductions, he used sophistry covered up by obscure language cannot be ignored. The very mechanism of his dialectical movement has often been questioned, and the results of his speculation can appear far removed from reality. Hegel's intention was to show how contradiction is solved on increasingly higher levels of development. He in fact introduced conflict into the idea of the Absolute. As with the other German Idealists, the nature of the Absolute, which he often called God, is largely unclear. In particular, Hegel's teachings blur the dividing line between the notion of a transcendent God and the immanent absolute of pantheism.
Hegel's system, in spite of its fascinating character, split within his own school into Right and Left Hegelianism. It faced two different particular reactions against it: Soren Kierkegaard's God-centered existentialism and Ludwig Feuerbach's atheistic anthropology.
Life and Work
Hegel was born in Stuttgart, Württemberg, in present-day southwest Germany, on August 27, 1770. As a child he was a voracious reader. In part, Hegel's literate childhood can be attributed to his uncharacteristically progressive mother who actively nurtured her children's intellectual development. The Hegels were a well-established middle class family in Stuttgart—his father was a civil servant in the administrative government of Württemberg. Hegel was a sickly child and almost died of illness before he was six.
Hegel attended the seminary at Tübingen with the poet Friedrich Hölderlin and the objective idealist Friedrich Schelling. In their shared dislike for what was regarded as the restrictive environment of the Tübingen seminary, the three became close friends and mutually influenced each other's ideas. The three watched the unfolding of the French Revolution and immersed themselves in the emerging criticism of the idealist philosophy of Immanuel Kant. Hegel also became fascinated by the works of Baruch Spinoza and Jean-Jacques Rousseau.
After graduating in 1793, Hegel worked as a tutor in Bern, Switzerland, and later in Frankfurt. During that period, he completed his first work, which was in theology and was only published in 1907 as Hegel’s early theological writings.
In 1801, Hegel became a professor at the University of Jena, the cultural center of that time, and he soon began to collaborate with Schelling in editing the Critical Journal of Philosophy. In 1807, his first main work, Phenomenology of Spirit was published. Due to the political turmoil of the time, Hegel was soon forced to leave for Nürnberg, where he served as the principal of a high school. During that period, he quietly continued his philosophical work and published the Science of Logic. After a brief interlude teaching at the University of Erlangen, Hegel held the chair of philosophy at the University of Berlin until his death from cholera in 1831. There, he acquired a position of quasi absolute authority in the field, a position that was not to last. The very element in his philosophy that fascinated his listeners was soon perceived as unorthodox and by the time of his death the establishment was ready for change.
Unlike his younger friend Schelling, Hegel was rather uncharismatic and unremarkable in his early development. It would take some time for his systematic thought to take shape. Once that was the case, however, Hegel’s philosophy easily outshone all of its rivals, at least for a limited period of time, and it would remain as a key landmark in the history of philosophy.
Apart from minor publications, Hegel published only four books in his life: the Phenomenology of Spirit (or Phenomenology of Mind), his account of the evolution of consciousness from sense-perception to absolute knowledge, published in 1807; the Science of Logic, the logical and metaphysical core of his philosophy, in three volumes, published in 1812, 1813, and 1816; Encyclopedia of the Philosophical Sciences, a summary of his entire philosophical system, which was originally published in 1816 and revised in 1827 and 1830; and the (Elements of the) Philosophy of Right, his political philosophy, published in 1822. He also published some articles early in his career and during his Berlin period. A number of other works on the philosophy of history, religion, aesthetics, and the history of philosophy were compiled from the lecture notes of his students and published posthumously.
Modern philosophy, culture, and society seemed to Hegel fraught with contradictions and tensions, such as those between the subject and object of knowledge, mind and nature, self and other, freedom and authority, knowledge and faith, the Enlightenment and Romanticism. Hegel's main philosophical project was to take these contradictions and tensions and interpret them as part of a comprehensive, evolving, rational unity that, in different contexts, he called "the absolute idea" or "absolute knowledge." According to Hegel, the main characteristic of this unity was that it evolved through and manifested itself in contradiction and negation. Contradiction and negation have a dynamic quality that at every point in each domain of reality—consciousness, history, philosophy, art, nature, society—lead to further development until a rational unity is reached that preserves the contradictions as phases and sub-parts of a larger, evolutionary whole. This whole is mental because it is mind that can comprehend all of these phases and sub-parts as steps in its own process of comprehension. It is rational because the same, underlying, logical, developmental order underlies every domain of reality and is the order of rational thought. It is not a thing or being that lies outside of other existing things or minds. Rather, it comes to completion only in the philosophical comprehension of individual existing human minds that, through their own understanding, bring this developmental process to an understanding of itself.
Early theological writings
Hegel’s early writings are significant in two ways: they already show his concern with the theme of alienation and they also show his theological orientation, an orientation which subsequently took on a philosophical form but as such remained to the end. In his earliest work, Hegel notes that, unlike ancient Greek and Roman religions, Christianity had become far removed from the everyday frame of mind, something like a lifeless additional explanation imposed from the outside on the modern mind. It also alienated the human psyche from its pursuit of beauty, freedom, and happiness. A little later, he came to see religion mainly in terms of ethics (as Kant did), before concluding that the narrowly ethical stage was transcended by Jesus’ vision of love, thus restoring the alienated self of humankind.
The succession of Hegel’s writings constitutes a consistent whole that can really be called a system, unlike the works of his predecessors Fichte and Schelling, whose ideas changed considerably over time. Hegel’s thought is post-Kantian in that it has its starting point in the thinking I but, like Fichte, Hegel rejects Kant’s notion of the unknowable thing-in-itself. For him, the development of a thought system like his own is precisely the embodiment of the thing-in-itself, which he calls Absolute Spirit. In his early work on the Difference between the Philosophical Systems of Fichte and Schelling he further sided with Schelling in rejecting Fichte’s exclusive emphasis on the Ego, agreeing with Schelling’s view that the Absolute had to include both the subject and the object. However, he strongly disagreed with Schelling’s views on the obscure nature of that Absolute and its inaccessibility to rational thought.
Overview of Hegel’s system
Hegel’s system consists of three main parts: The Logic (ontology), which deals with the nature of the Absolute prior to the “creation” of the world; the Philosophy of Nature, which deals with the Absolute’s estrangement in the material world; and the Philosophy of the Spirit, which covers the return of the Absolute into itself through the human spirit.
For Hegel, the Absolute, reality itself, is not something transcendent that cannot be known (as for Kant), nor is it something beyond conceptual formulation (as for Schelling). In Hegel’s own words, the real is rational and what is rational is real. In Hegel’s dialectic, the Absolute unfolds conceptually and historically according to purely logical laws. Logic forms its very substance.
Hegel calls his method speculative. For Kant, speculation meant the attempt of reason to go beyond the realm of the senses into what is unknowable—an inevitable and understandable tendency, but one that could only lead to failure. For Hegel, the term is entirely positive, meaning the capacity of the mind to discover the hidden contradictions in thought as well as their resolution. History has been unkind towards what has generally been perceived as the excessive claims of Hegelian speculation and in current usage speculation is much closer to the meaning Kant gave it than to that of Hegel.
Phenomenology of Spirit
In his best known and first important work, the Phenomenology of Spirit, Hegel leads the reader through a sort of propaedeutic or prolegomena—an introduction to what he considers the genuine philosophical approach, culminating in absolute knowledge. Hegel rejects Schelling’s mystical approach that leaves the Absolute in darkness “where all cows are black.” The Phenomenology of Spirit can also be considered as a history of consciousness, from the lowest to the highest stage. First, there is the stage of ordinary sense-certainty leading to the scientific approach; this is the level of consciousness. Second, there is the level of self-consciousness. At this stage, intersubjectivity (the recognition of one self by another) is seen as essential, which leads Hegel to historical considerations on social relations. Hegel makes his well-known statement about the “unhappy consciousness” (das unglückliche Bewusstsein), that of the human mind divided between the consciousness of its imperfect self and the projection of perfection into a transcendent Being (seen as typical of medieval Catholicism).
The third and final stage is characterized by reason (Vernunft) as opposed to mere understanding (Verstand). This level is characterized by the realization of universal self-consciousness, which itself goes through many stages and sub-stages.
Science of Logic
Hegel’s Science of Logic can be seen as the timeless description of the functioning of the mind of God. It follows the same triadic patterns as the Phenomenology and predictably this pattern will also be found in all other writings, because for Hegel it is the structure of all being. Thus, what Hegel means by logic is very different from the conventional meaning of the term. It does not express the formal laws of thinking, such as the principles of identity and contradiction in a static manner, but intends to elucidate the unfolding of reality as thought.
Hegel starts with “being,” which is naturally associated in people’s mind with the notion of fullness and completion, because content is automatically assumed under that name. Being in-itself, however, is totally empty, as it has no specification—it is just being. It thus easily turns into its opposite, “non-being” (for-itself or otherness), because both are identically empty. The contradiction between the two is thus only apparent and it can be transcended by reason (Vernunft), which realizes that both can be brought to a higher level encompassing them without contradiction. That higher level is becoming (in-and-for-itself) and it is reached through the process of sublation (Aufhebung), one of Hegel’s most ingenious discoveries. Aufhebung has the triple connotation of cancellation, keeping aside for later, and bringing to a higher level. Thus, through the dialectical movement, every negation is in turn negated and what seemed lost reappears on a higher level of manifestation, leading all the way up to the Absolute Idea.
The work of speculative thought is thus to reveal the contradiction inherent in an apparently simple concept such as being and then to show how this contradiction can be sublated. By showing this to be the spontaneous process of manifestation of reality, Hegel actually rendered unnecessary any appeal to a higher force (a transcendent God) to explain creation. And by showing how in this process contradiction is overcome, he rendered unnecessary any separate explanation of evil. Hegel’s philosophy stands or falls with that claim.
It is, in fact, far from clear how and why, for instance, being and non-being turn into becoming, other than that this movement is posited by Hegel, and the initial emptiness of being is a very debatable statement based on a purely intellectual vision of being. Even if one accepts being and non-being as Hegel sees them, the “fuel” or “engine” that makes them transcend each other in becoming amounts to a pure leap of faith, since non-being does not offer anything to being that is not already contained in it. In this sense, Hegel’s system could be called a form of panlogical mysticism or rationalized Romanticism, where Schelling’s mysterious Absolute is replaced by the equally mysterious laws of Absolute Thought.
Philosophy of Nature
If the Logic deals with Spirit as it is in-itself, the Philosophy of Nature deals with the self-alienation of Spirit in the natural world before it returns into itself, which is the topic of the Philosophy of Spirit. The Philosophy of Nature is not meant to be a history of nature (Hegel dismisses the idea of evolution), but rather a presentation of the structure of nature according to the triadic pattern. This part of Hegel's system is particularly controversial, as Hegel often tries to fit the reality of nature into his preconceived vision. Hegel also sometimes refers to nature as the realm of contingency; he speaks of the impotence of nature, and he even states that nature is a fall away from the idea, which raises many questions about his overall perspective.
Philosophy of History
Hegel's works have a reputation for their difficulty, and for the breadth of the topics they attempt to cover. Hegel introduced a system for understanding the history of philosophy and the world itself, often described as a “progression in which each successive movement emerges as a solution to the contradictions inherent in the preceding movement.” For example, the French Revolution for Hegel constitutes the introduction of real freedom into Western societies for the first time in recorded history. But precisely because of its absolute novelty, it is also absolutely radical: on the one hand the upsurge of violence required to carry out the revolution cannot cease to be itself, while on the other, it has already consumed its opponent. The revolution therefore has nowhere to turn but onto its own result: the hard-won freedom is consumed by a brutal Reign of Terror. History, however, progresses by learning from its mistakes: only after and precisely because of this experience can one posit the existence of a constitutional state of free citizens, embodying both the benevolent organizing power of rational government and the revolutionary ideals of freedom and equality.
Philosophy of Right
The Philosophy of Right is one of the most important parts of Hegel’s system. In the overall scheme, it represents the stage of the objective Spirit in Hegel’s Philosophy of Spirit, i.e., the second last stage of the whole edifice. It is the stage where the Spirit returns into itself at the level of institutions. The Philosophy of Right is dealt with in the Encyclopaedia of the Philosophical Sciences, but even more extensively in Hegel’s textbook on the Philosophy of Right, based on his public lectures.
Since, for Hegel, it is the totality as the full manifestation of the Absolute that matters, it is normal that his ethics would be less limited to the individual’s consciousness than Kant’s categorical imperative. For Hegel, ethics and right culminate in the state as the concrete manifestation of the Spirit through human interaction. But first, on the level of law, Hegel deals with the notion of crime and punishment. Punishment is seen as the negation of the crime and Hegel even states that the criminal implicitly calls for his punishment as the logical outcome of his crime. This law is then internalized in conscience on the level of morality. Third, it is fully manifested at the successive levels of family, society, and state.
Hegel's statement that Prussia represents the ultimate fulfillment of world history and the perfect self-manifestation of the Absolute Spirit has often been ridiculed, and it indeed appears as a rather pathetic claim in hindsight. Also, Hegel's emphasis on the state has a connotation of oppressiveness. However, at least on the level of his vision, it is perfectly natural that Hegel would see the embodiment of the Absolute in the whole, i.e., the state, as the culminating point, rather than any individual achievement. Also, though there was an overlap between his views and the immediate interests of the Prussian State of his time, Hegel was not really a conservative supporter of that state and his philosophy soon fell out of favor.
In addition, Hegel did not really consider the Prussian State as the ultimate end of history, especially since the level of the state itself does not represent the culmination of his system. For Hegel, philosophy is the owl of Minerva, i.e., it reflects on the state of things it finds when it appears and it cannot prophesy the future. For instance, for him, Plato's Republic represents a reflection of the Greek political situation of that time, rather than a utopian vision.
More problematic, though consistent with the whole system, is Hegel’s understanding of war as a necessity, as the process by which one state negates another to drive history forward. There, Hegel differs entirely from Kant, who was hoping for a world federation of States and perpetual peace. Hegel did see certain individuals as the carriers of the “world spirit” and he considered the German people to be the first to achieve the full awareness of the freedom of the human spirit.
Philosophy of Spirit
The Philosophy of Spirit properly closes Hegel’s system. In it, the “world spirit” is not seen as realized in a world state, but rather in the Absolute Spirit fully becoming himself in Absolute Thought, through art, religion, and philosophy. Based on the state as a precondition for their development, these three spheres represent three different formulations of the same content, that of the Absolute Spirit. Hegel introduces an elaborate overview of historical development in these areas. However, the difference between temporal sequence and timeless structure is not always obvious.
Philosophy of Religion
Religion and philosophy, in particular, have the same object: to know God. If philosophy replaces analogy and historical sequences with logical structures and abstraction, it thus remains essentially religious in Hegel’s eyes. Hegel salutes the early attempt by Anselm of Canterbury to express religious faith in rational language. In his 1824 lectures, Hegel is credited with defining the field of philosophy of religion, though the philosophical study of religion as a modern discipline has become something quite different from what was really Hegel’s speculative philosophical theology.
Hegel revisits the themes of Christian theology along the lines of his own vision. The proof of God’s existence, in his view, is provided by the system itself, which is the full manifestation of the Absolute and requires no further external evidence. As for religious consciousness, Hegel again sees it as developing in three stages: the simple consciousness of God as the infinite Being; the awareness of one’s self as sinner as opposed to God; and the sense of salvation and newly found communion through religious practice.
Finally, there are three stages of historical development of religion: natural religion, where religious consciousness is undifferentiated; Jewish, Greek, and Roman religion, which is seen as the religion of individuality; and absolute religion, Christianity, where God is seen as both transcendent and immanent through the God-man, Christ, and the Trinity.
God and the Absolute
Hegel’s identification of God as the Absolute is a key aspect of his philosophy. It is also one of the most ambiguous ones. In his philosophy of religion, for instance, Hegel specifically intends to explain the Christian themes in terms of his philosophical terminology and just simply in terms of his system. For the very reasons that have become apparent throughout this article, many have felt that Hegel’s Christian language in fact covers a line of thought far removed from, even opposed to, that of Christianity. Examples are the fact that God is seen as much as the end product of history as he is seen as its beginning, the fact that there is no clear difference between Creator and creation, and the fact that evil and sin are seen more as an inevitable transition towards ultimate completion than as an accident contrary to God’s original goal.
Hegel and the culmination of German Idealism
The Hegelian system represents the culmination of the philosophical movement known as German Idealism, a movement essentially represented by Fichte, Schelling, and Hegel, but that also has ramifications beyond the strictly philosophical realm.
German Idealism directly developed out of Kant’s critical philosophy. Kant had sought to put an end to what he called dogmatism by showing that the great metaphysical systems of the past were based on unwarranted assumptions (belief in God and the afterlife) and reached beyond the grasp of human reason. Kant’s conclusion that human consciousness was unable to reach metaphysical certainties on a theoretical level and was thus limited to the moral certainties of practical reason was immediately challenged by his successors, beginning with Fichte. What stayed, however, was Kant’s starting point in transcendental consciousness, i.e., the conclusion that all certain knowledge must be based on a function of our mind preceding experience.
Unlike Kant, the German Idealists believed that through its own activity the human mind was indeed capable of reaching ultimate knowledge and it is on that foundation that they developed their systems. Though Hegel’s system is at least equal to any earlier metaphysical system in size, scope, and ambition, it thus has a very different starting point. Depending on one’s viewpoint, one can consider that his speculative system completes Kant’s system or that it rather repudiates its conclusions and reverts to the days of traditional metaphysics.
Those who accept Hegel’s dialectics will consider his system as an innovative approach to the problem of agnosticism that had represented the limit of Kant’s investigations. By showing the actual unfolding of the Absolute, Hegel removed any need to posit a transcendent “thing in itself” and thus eliminated the last remnants of dogmatism in Kant’s philosophy. On the other hand, many will see Hegel’s system as the apex of philosophical hubris, i.e., a mistaken attempt to achieve through mere speculation what revelation and tradition had been unable to complete. For them, by proceeding as he did, Hegel ignored Kant’s justified caveat and undid what he had accomplished.
In either case, Hegel’s system undeniably represents the most complete of the three philosophies that make up German Idealism. If Fichte’s system can be referred to as subjective idealism due to his focus on the Ego, and Schelling’s system as objective idealism because he posits an Absolute as independent from the Ego, Hegel’s system embodies the views of Absolute idealism, i.e., the belief that the underlying reality of the cosmos is an absolute Spirit that transcends any individual spirit.
Speculation and the Dialectic
One important question concerning Hegel is the extent to which his philosophy is conflict-oriented. In popularized accounts, Hegel's dialectic often appears broken up for convenience into three moments called "thesis" (in the French historical example, the revolution), "antithesis" (the terror which followed), and "synthesis" (the constitutional state of free citizens). In fact, Hegel used this classification only once, when discussing Kant; it was developed earlier by Fichte in his loosely analogous account of the relation between the individual subject and the world. Heinrich Moritz Chalybäus, a Hegelian apologist, introduced this terminology to describe Hegel’s system in 1837.
More importantly, Marx and Engels applied these expressions to their dialectical materialism, thus using their potential towards a conflict-oriented explanation of history. There is no question that Hegel's philosophy was, so to speak, hijacked by Marx, who admittedly used it in a sense that was diametrically opposed to that of Hegel. It is nevertheless significant that Hegel's method had dialectical materialism as its historically most significant result. This is clearly due to its core constituent, the dialectical movement, meant to explain progress and fulfillment as the overcoming of an inherently conflictual nature of reality.
Still, for Hegel, reason is ultimately "speculative," not "dialectical." Instead of thesis-antithesis-synthesis, Hegel used different terms to speak about triads, including immediate-mediate-concrete as well as abstract-negative-concrete, but Hegel's works do speak frequently about a synthetic logic.
Hegel's philosophy is not intended to be easy reading because it is technical writing. Hegel presumed his readers would be well-versed in Western philosophy, up to and including Descartes, Spinoza, Hume, Kant, Fichte, and Schelling. Without this background, Hegel is practically impossible to read.
Ironically, Hegel has managed to be both one of the most influential thinkers in modern philosophy while simultaneously being one of the most inaccessible. Because of this, Hegel's ultimate legacy will be debated for a very long time. He has been such a formative influence on such a wide range of thinkers that one can give him credit or assign him blame for almost any position.
Arthur Schopenhauer, for a very short time a fellow colleague of Hegel's at the University of Berlin, is famous for his scathing criticism of Hegel. He had this to say about his philosophy:
The height of audacity in serving up pure nonsense, in stringing together senseless and extravagant mazes of words, such as had been only previously known in madhouses, was finally reached in Hegel, and became the instrument of the most barefaced, general mystification that has ever taken place, with a result which will appear fabulous to posterity, as a monument to German stupidity.
Many other newer philosophers who prefer to follow the tradition of British Philosophy have made similar statements. But even in Britain, Hegel exercised a major influence on the philosophical school called "British Idealism," which included Francis Herbert Bradley and philosopher Bernard Bosanquet, in England, and Josiah Royce at Harvard.
Right Hegelians and Left Hegelians
Historians have spoken of Hegel's influence as represented by two opposing camps. The Right Hegelians, the direct disciples of Hegel at the Friedrich-Wilhelms-Universität (now known as the Humboldt University of Berlin), advocated evangelical orthodoxy and the political conservatism of the post-Napoleon Restoration period.
The Left Hegelians, also known as the Young Hegelians, interpreted Hegel in a revolutionary sense, leading to an advocation of atheism in religion and liberal democracy in politics. Thinkers and writers traditionally associated with the Young Hegelians include Bruno Bauer, Arnold Ruge, David Friedrich Strauss, Ludwig Feuerbach, Max Stirner, and most famously, the younger Karl Marx and Friedrich Engels—all of whom knew and were familiar with the writings of each other. A group of the Young Hegelians known as Die Freien ("The Free") gathered frequently for debate in Hippel's Weinstube (a winebar) in Friedrichsstrasse, Berlin in the 1830s and 1840s. In this environment, some of the most influential thinking of the last 160 years was nurtured—the radical critique and fierce debates of the Young Hegelians inspired and shaped influential ideas of atheism, humanism, communism, anarchism, and egoism.
Except for Marx and Marxists, almost none of the so-called "Left Hegelians" actually described themselves as followers of Hegel, and several of them openly repudiated or insulted the legacy of Hegel's philosophy. Even Marx stated that to make Hegel's philosophy useful for his purposes, he had to "turn Hegel upside down." Nevertheless, this historical category is often deemed useful in modern academic philosophy. The critiques of Hegel offered from the "Left Hegelians" led the line of Hegel's thinking into radically new directions—and form an important part of the literature on and about Hegel.
In the latter half of the twentieth century, Hegel's philosophy underwent a major renaissance. This was due partly to the rediscovery and reevaluation of him as a possible philosophical progenitor of Marxism by philosophically oriented Marxists, partly through a resurgence of the historical perspective that Hegel brought to everything, and partly through increasing recognition of the importance of his dialectical method. The book that did the most to reintroduce Hegel into the Marxist canon was perhaps Georg Lukacs's History and Class Consciousness. This sparked a renewed interest in Hegel, reflected in the work of Herbert Marcuse, Theodor Adorno, Ernst Bloch, Raya Dunayevskaya, Alexandre Kojève, and Gotthard Günther, among others. The Hegel renaissance also highlighted the significance of Hegel's early works, i.e., those published prior to the Phenomenology of Spirit. More recently two prominent American philosophers, John McDowell and Robert Brandom (sometimes, half-seriously referred to as the Pittsburgh Hegelians), have exhibited a marked Hegelian influence.
Beginning in the 1960s, Anglo-American Hegel scholarship has attempted to challenge the traditional interpretation of Hegel as offering a metaphysical system. This view, often referred to as the "non-metaphysical option," has had a decided influence on most major English language studies of Hegel in the past 40 years. U.S. neoconservative Francis Fukuyama's controversial book The End of History and the Last Man was heavily influenced by a famous Hegel interpreter from the Marxist school, Alexandre Kojève. Among modern scientists, the physicist David Bohm, the mathematician William Lawvere, the logician Kurt Gödel, and the biologist Ernst Mayr have been deeply interested in or influenced by Hegel's philosophical work. The contemporary theologian Hans Küng has advanced contemporary scholarship in Hegel studies.
The very latest scholarship in Hegel studies reveals many sides of Hegel that were not typically seen in the West before 1990. For example, the essence of Hegel's philosophy is the idea of freedom. With the idea of freedom, Hegel attempts to explain world history, fine art, political science, the free thinking that is science, the attainments of spirituality, and the resolution to problems of metaphysics.
One appropriate way to assess Hegel's work would be to understand it in the historical context of his days. During his formative 10 years (1788-1799) as a young theologian, he was faced with the diversity of conflicting schools of religion: institutional Christianity, Pietism, Enlightenment religion, Romanticism, and Kantianism. This diversity, in fact, started with the collapse of the Medieval synthesis into the Renaissance and the Protestant Reformation 300 years before Hegel and still continued to exist with even more variety in his days. Thinkers such as Kant and Schleiermacher attempted to come up with a synthesis. So did Hegel. His formative years as a theologian ended with a new understanding of Jesus' vision of love beyond the tension between Kantianism (Judaism) and Romanticism (Hellenism), as can be seen in his The Spirit of Christianity and its Fate written in 1798-1799. Here, we can trace Hegel's concern to dialectically reconcile the opposites of experience into a higher unity. Needless to say, this was far more developed later as a new form of logic in his philosophical writings, where he reached what Paul Tillich calls his "universal synthesis," going beyond all kinds of opposites. It is probably useful to appreciate Hegel's attempt to come up with unity beyond fragmentation and alienation, given the historical diversity of schools at that time, although whether his attempt was successful or not is another matter.
Given the fact that his absolute idealism, with God and the world, or spirit and matter, respectively as subject and object to be united by rational necessity, was split into Right and Left Hegelianism, his universal synthesis proved to be far from successful. Ludwig Feuerbach among other Left Hegelians deliberately turned Hegel's absolute idealism upside down, reversing Hegel's subject-object order, and to this Marx added the conflict-orientation of the Hegelian dialectic and came up with dialectical materialism and historical materialism. By contrast, Right Hegelianism faded away; after less than a generation, Hegel's philosophy was suppressed and even banned by the Prussian right-wing, thus having no influence on the nationalist movement in Germany. But, on the right side, there emerged another school of religion, which had a lasting influence beyond the nineteenth century. It was the existentialism of Danish philosopher Soren Kierkegaard, a contemporary of Feuerbach, and as a reaction against Hegel's system, it was tied with individual faith and asserted that truth is subjectivity. According to Tillich, therefore, Hegel's universal synthesis "broke down" into Feuerbach's atheistic anthropology and Kierkegaard's God-centered existentialism.
Many consider Hegel's thought to represent the summit of early nineteenth-century Germany's movement of philosophical idealism. But all those who received a profound influence from it in the nineteenth century opposed it. Even modern analytic and positivistic philosophers have considered Hegel a principal target because of what they consider the obscurantism of his philosophy. Perhaps this basic rejection of Hegelianism will continue until a satisfactory path for a synthesis is found, realizing Hegel's dream. Is the contemporary renaissance of Hegelian studies interested in pursuing it?
Famous Hegel Quotations
- "Logic is to be understood as the System of Pure Reason, as the realm of Pure Thought. This realm is Truth as it is without veil, and in its own Absolute nature. It can therefore be said that this Content is the exposition of God as God is in God's eternal essence before the creation of Nature and a finite mind."—The Science of Logic
- "The science of logic which constitutes Metaphysics proper or purely speculative philosophy, has hitherto still been much neglected."—The Science of Logic
- "It is remarkable when a nation loses its Metaphysics, when the Spirit which contemplates its own Pure Essence is no longer a present reality in the life of a nation."—The Science of Logic
- "What is rational is actual and what is actual is rational." (Was vernünftig ist, das ist Wirklich; und was wirklich ist, das ist vernünftig.)—The Philosophy of Right
- On first seeing Napoleon: "I saw the World Spirit (Weltgeist) seated on a horse."—Lectures on the Philosophy of World History
- "We may affirm absolutely that nothing great in this world has been accomplished without passion."—Lectures on the Philosophy of World History
- "To make abstractions hold in reality is to destroy reality." (Abstraktionen in der Wirklichkeit geltend machen, heißt Wirklichkeit zerstören.)
- "As far as the individual is concerned, each individual is in any case a child of his time; thus, philosophy, too, is its own time comprehended in thoughts." (Was das Individuum betrifft, so ist ohnehin jedes ein Sohn seiner Zeit; so ist auch Philosophie ihre Zeit in Gedanken erfaßt.)—The Philosophy of Right
- "The owl of Minerva spreads its wings only with the falling of dusk."— 1821 The Philosophy of Right
- "The true is the whole." (Das Wahre ist das Ganze.)—The Phenomenology of Spirit, section 20.
- Phenomenology of Spirit (Phänomenologie des Geistes, sometimes translated as Phenomenology of Mind) 1807
- Science of Logic (Wissenschaft der Logik) 1812–1816 (last edition of the first part 1831)
- Encyclopedia of the Philosophical Sciences (Enzyklopaedie der philosophischen Wissenschaften) 1817–1830
- Divided into three Major Sections:
- The Logic
- Philosophy of Nature
- Philosophy of Mind
- Divided into three Major Sections:
- Elements of the Philosophy of Right (Grundlinien der Philosophie des Rechts) 1821
- Lectures on Aesthetics
- Lectures on the Philosophy of World History
- Lectures on the History of Philosophy
- Lectures on Philosophy of Religion
- Adorno, Theodor W. Hegel: Three Studies, translated by Shierry M. Nicholsen. Cambridge, MA: MIT Press, 1994. ISBN 0262510804
- Beiser, Frederick C. The Cambridge Companion to Hegel. New York: Cambridge University Press, 1993. ISBN 0521387116
- Collingwood, R.G. The Idea of History. Oxford: Oxford University Press, 1946. ISBN 0192853066
- Dickey, Laurence. Hegel: Religion, Economics, and the Politics of Spirit, 1770–1807. New York: Cambridge University Press, 1987. ISBN 0521330351
- Forster, Michael. Hegel and Skepticism. Harvard University Press, 1989. ISBN 0674387074
- Forster, Michael. Hegel's Idea of a Phenomenology of Spirit. University of Chicago Press, 1998. ISBN 0226257428
- Harris, H.S. Hegel: Phenomenology and System. Indianapolis: Hackett, 1995.
- Hartnack, Justus. An Introduction to Hegel's Logic. Indianapolis: Hackett, 1998. ISBN 0872204243
- Kadvany, John. Imre Lakatos and the Guises of Reason. Durham and London: Duke University Press, 2001. ISBN 0822326590
- Kojève, Alexandre. Introduction to the Reading of Hegel: Lectures on the Phenomenology of Spirit. Cornell University Press, 1980. ISBN 0801492033
- Lukacs, Georg. History and Class Consciousness. (original 1923) MIT Press, 1972, ISBN 0262620200 (English)
- Marcuse, Herbert. Reason and Revolution: Hegel and the Rise of Social Theory. London, 1941.
- Pinkard, Terry P. Hegel: A Biography. Cambridge University Press, 2000. ISBN 0521496799
- Taylor, Charles. Hegel. Cambridge University Press, 1975. ISBN 0521291992
- Wallace, Robert M. Hegel's Philosophy of Reality, Freedom, and God. Cambridge University Press, 2005. ISBN 0521844843
- Westphal, Kenneth R. Hegel's Epistemology: A Philosophical Introduction to the Phenomenology of Spirit. Indianapolis: Hackett, 2003. ISBN 0872206459
All links retrieved September 14, 2008.
- Hegel by HyperText, reference archive on Marxists.org.
- Hegel.net - resources available under the GNU FDL
- Links on Hegel's life
- Commented link list
- Explanation of Hegel, mostly in German
- The Hegel Society of America
- Hegel in Stanford Encyclopedia of Philosophy
- Hegel's Science of Philosophy
- Hegel page in 'The History Guide'
Hegel Texts Online
- Works by Georg Wilhelm Friedrich Hegel. Project Gutenberg
- Philosophy of History Introduction, Translated by J. Sibree
General Philosophy Sources
- Philosophy Sources on Internet EpistemeLinks
- Stanford Encyclopedia of Philosophy
- Paideia Project Online
- The Internet Encyclopedia of Philosophy
- Project Gutenberg
New World Encyclopedia writers and editors rewrote and completed the Wikipedia article in accordance with New World Encyclopedia standards. This article abides by terms of the Creative Commons CC-by-sa 3.0 License (CC-by-sa), which may be used and disseminated with proper attribution. Credit is due under the terms of this license that can reference both the New World Encyclopedia contributors and the selfless volunteer contributors of the Wikimedia Foundation.
Note: Some restrictions may apply to use of individual images which are separately licensed. | http://www.newworldencyclopedia.org/entry/Georg_Wilhelm_Friedrich_Hegel
Hypothesis Testing Goal: Make statement(s) regarding unknown population parameter values based on sample data Elements of a hypothesis test: Null hypothesis ...
Formulating a HYPOTHESIS X and Y FORM DIRECTION FINDING RELATIONS HYPOTHESIS link TWO or more VARIABLES you believe to be related HYPOTHESIS (ES) generalization ...
Hypothesis Testing Hypothesis Testing (Ht): Introduction After discussing procedures for data preparation and preliminary analysis, the next step for many studies is ...
Chapter 8: Introduction to Hypothesis Testing Hypothesis Testing The general goal of a hypothesis test is to rule out chance (sampling error) as a plausible ...
Hypothesis Testing and Comparing Two Proportions Hypothesis Testing: Deciding whether your data shows a “real” effect, or could have happened by chance
Hypothesis Testing An Inference Procedure We will study procedures for both the unknown population mean on a quantitative variable and the unknown population ...
Chapter 11 Introduction to Hypothesis Testing Nonstatistical Hypothesis Testing… A criminal trial is an example of hypothesis testing without the statistics.
Writing A Proper Hypothesis Using the "If / Then" Method Parts of the Statement Independent Variable: The condition to be studied. It is controlled by the experimenter.
Hypothesis Testing Chapter 7 Hypothesis Testing 7-1 Overview 7-2 Basics of Hypothesis Testing 7-3 Testing a Claim About a Proportion 7-5 Testing a Claim About a Mean ...
Statistical Inference I: Hypothesis testing; sample size Statistics Primer Statistical Inference Hypothesis testing P-values Type I error Type II error Statistical ...
Hypothesis & Research Questions Understanding Differences between qualitative and quantitative approaches We have identified three major approaches to research ...
Hypothesis Testing 7-1 Basics of Hypothesis Testing 7-2 Testing a Claim about a Mean: Large Samples 7-3 Testing a Claim about a Mean: Small Samples
Hypothesis Testing Comparing One Sample to its Population Hypothesis Testing w/ One Sample If the population mean (μ) and standard deviation (σ) are known: Testing ...
(8th Edition) Chapter 9 Fundamentals of Hypothesis Testing: One-Sample Tests Chapter Topics Hypothesis testing methodology Z test for the mean (σ known) P-value ...
Hypothesis Testing LIR 832 Lecture #3 January 30, 2007 Topics of the Day A. Our Fundamental Problem Again: Learning About Populations from Samples B. Basic Hypothesis ...
HYPOTHESIS TESTING Introduction In making inference from data analysed, there is the need to subject the results to some rigour. Drawing meanings from data in this ...
Basic Elements of Testing Hypothesis Dr. M. H. Rahbar Professor of Biostatistics Department of Epidemiology Director, Data Coordinating Center College of Human Medicine
By: Becca Doll, Stevie Lemons, Eloise Nelson In other words… a hypothesis test can tell if the sample is sufficient to retain or fail to reject the null.
Chapter 7: Hypothesis Testing A hypothesis is a conjecture about a population. Typically, these hypotheses will be stated in terms of a parameter.
DIRECTIONAL HYPOTHESIS The 1-tailed test: Instead of dividing alpha by 2, you are looking for unlikely outcomes on only 1 side of the distribution | http://happytreeflash.com/hypothesis-ppt.html
Logic in Argumentative Writing
This handout is designed to help writers develop and use logical arguments in writing. Through an introduction to some of the basic terms and operations of logic, the handout helps writers analyze the arguments of others and generate their own arguments. However, it is important to remember that logic is only one aspect of a successful argument. Non-logical arguments (statements that cannot be logically proven or disproved, such as appeals to emotions or values) have an important place in argumentative writing. Illogical arguments, on the other hand, are false and must be avoided.
Logic is a formal system of analysis that helps writers invent, demonstrate, and prove arguments. It works by testing propositions against one another to determine their accuracy. People often think they are using logic when they avoid emotion or make arguments based on their common sense, such as "Everyone should look out for their own self interests" or "People have the right to be free." However, unemotional or common sense statements are not always equivalent to logical statements. To be logical, a proposition must be tested within a logical sequence.
The most famous logical sequence, called the syllogism, was developed by the Greek philosopher Aristotle. His most famous syllogism is:
Premise 1: All men are mortal.
Premise 2: Socrates is a man.
Conclusion: Therefore, Socrates is mortal.
In this sequence, premise 2 is tested against premise 1 to reach the logical conclusion. Within this system, if both premises are considered valid, there is no other logical conclusion than determining that Socrates is a mortal.
This guide provides some vocabulary and strategies for determining logical conclusions.
Before using logic to reach conclusions, it is helpful to know some important vocabulary related to logic.
Premise: Proposition used as evidence in an argument.
Conclusion: Logical result of the relationship between the premises. Conclusions serve as the thesis of the argument.
Argument: The assertion of a conclusion based on logical premises.
Syllogism: The simplest sequence of logical premises and conclusions, devised by Aristotle.
Enthymeme: A shortened syllogism which omits the first premise, allowing the audience to fill it in. For example, "Socrates is mortal because he is a human" is an enthymeme which leaves out the premise "All humans are mortal."
Induction: A process through which the premises provide some basis for the conclusion.
Deduction: A process through which the premises provide conclusive proof for the conclusion.
Reaching Logical Conclusions
Reaching logical conclusions depends on the proper analysis of premises. The goal of a syllogism is to arrange premises so that only one true conclusion is possible.
Consider the following premises:
Premise 1: Non-renewable resources do not exist in infinite supply.
Premise 2: Coal is a non-renewable resource.
From these two premises, only one logical conclusion is available:
Conclusion: Coal does not exist in infinite supply.
Often logic requires several premises to reach a conclusion.
Premise 1: All monkeys are primates.
Premise 2: All primates are mammals.
Premise 3: All mammals are vertebrate animals.
Conclusion: Monkeys are vertebrate animals.
Logic allows specific conclusions to be drawn from general premises. Consider the following premises:
Premise 1: All squares are rectangles.
Premise 2: Figure 1 is a square.
Conclusion: Figure 1 is also a rectangle.
Notice that logic requires decisive statements in order to work. Therefore, this syllogism is false:
Premise 1: Some quadrilaterals are squares.
Premise 2: Figure 1 is a quadrilateral.
Conclusion: Figure 1 is a square.
This syllogism is false because not enough information is provided to allow a verifiable conclusion. Figure 1 could just as likely be a rectangle, which is also a quadrilateral.
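The same set picture shows exactly where the syllogism breaks down. In this purely illustrative Python sketch (the figures and sets are invented), Figure 1 satisfies premise 2, yet the conclusion still fails:

# Hypothetical sketch: membership in a larger set does not imply membership in a smaller one.
squares = {"Figure 2"}
rectangles = squares | {"Figure 1"}        # Figure 1 is a rectangle, not a square
quadrilaterals = rectangles | {"Figure 3"} # all squares and rectangles are quadrilaterals

print("Figure 1" in quadrilaterals)        # True: premise 2 holds
print("Figure 1" in squares)               # False: the conclusion does not follow

Because "some quadrilaterals are squares" only tells us that the smaller set sits inside the larger one, knowing that Figure 1 belongs to the larger set says nothing about whether it belongs to the smaller one.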
Logic can also mislead when it is based on premises that an audience does not accept. For instance:
Premise 1: People with red hair are not good at checkers.
Premise 2: Bill has red hair.
Conclusion: Bill is not good at checkers.
Within the syllogism, the conclusion is logically valid. However, it is only true if an audience accepts Premise 1, which is very unlikely. This is an example of how logical statements can appear accurate while being completely false.
Logical conclusions also depend on which factors are recognized and ignored by the premises. Therefore, different premises could lead to very different conclusions about the same subject. For instance, these two syllogisms about the platypus reveal the limits of logic for handling ambiguous cases:
Premise 1: All birds lay eggs.
Premise 2: Platypuses lay eggs.
Conclusion: Platypuses are birds.
Premise 1: All mammals have fur.
Premise 2: Platypuses have fur.
Conclusion: Platypuses are mammals.
Though logic is a very powerful argumentative tool and is far preferable to a disorganized argument, logic does have limitations. It must also be effectively developed from a syllogism into a written piece.
Fallacies are common errors in reasoning that will undermine the logic of your argument. Fallacies can be either illegitimate arguments or irrelevant points, and are often identified because they lack evidence that supports their claim. Avoid these common fallacies in your own arguments and watch for them in the arguments of others.
Slippery Slope: This is a conclusion based on the premise that if A happens, then eventually through a series of small steps, through B, C,..., X, Y, Z will happen, too, basically equating A and Z. So, if we don't want Z to occur, A must not be allowed to occur either. Example:
If we ban Hummers because they are bad for the environment eventually the government will ban all cars, so we should not ban Hummers.
In this example, the author is equating banning Hummers with banning all cars, which is not the same thing.
Hasty Generalization: This is a conclusion based on insufficient or biased evidence. In other words, you are rushing to a conclusion before you have all the relevant facts. Example:
Even though it's only the first day, I can tell this is going to be a boring course.
In this example, the author is basing his evaluation of the entire course on only the first day, which is notoriously boring and full of housekeeping tasks for most courses. To make a fair and reasonable evaluation the author must attend not one but several classes, and possibly even examine the textbook, talk to the professor, or talk to others who have previously finished the course in order to have sufficient evidence to base a conclusion on.
Post hoc ergo propter hoc: This is a conclusion that assumes that if 'A' occurred after 'B' then 'B' must have caused 'A.' Example:
I drank bottled water and now I am sick, so the water must have made me sick.
In this example, the author assumes that if one event chronologically follows another the first event must have caused the second. But the illness could have been caused by the burrito the night before, a flu bug that had been working on the body for days, or a chemical spill across campus. There is no reason, without more evidence, to assume the water caused the person to be sick.
Genetic Fallacy: A conclusion is based on an argument that the origins of a person, idea, institute, or theory determine its character, nature, or worth. Example:
The Volkswagen Beetle is an evil car because it was originally designed by Hitler's army.
In this example the author is equating the character of a car with the character of the people who built the car. However, the two are not inherently related.
Begging the Claim: The conclusion that the writer should prove is validated within the claim. Example:
Filthy and polluting coal should be banned.
Arguing that coal pollutes the earth and thus should be banned would be logical. But the very conclusion that should be proved, that coal causes enough pollution to warrant banning its use, is already assumed in the claim by referring to it as "filthy and polluting."
Circular Argument: This restates the argument rather than actually proving it. Example:
George Bush is a good communicator because he speaks effectively.
In this example, the conclusion that Bush is a "good communicator" and the evidence used to prove it "he speaks effectively" are basically the same idea. Specific evidence such as using everyday language, breaking down complex problems, or illustrating his points with humorous stories would be needed to prove either half of the sentence.
Either/or: This is a conclusion that oversimplifies the argument by reducing it to only two sides or choices. Example:
We can either stop using cars or destroy the earth.
In this example, the two choices are presented as the only options, yet the author ignores a range of choices in between such as developing cleaner technology, car sharing systems for necessities and emergencies, or better community planning to discourage daily driving.
Ad hominem: This is an attack on the character of a person rather than her/his opinions or arguments. Example:
Green Peace's strategies aren't effective because they are all dirty, lazy hippies.
In this example, the author doesn't even name particular strategies Green Peace has suggested, much less evaluate those strategies on their merits. Instead, the author attacks the characters of the individuals in the group.
Ad populum: This is an emotional appeal that speaks to positive (such as patriotism, religion, democracy) or negative (such as terrorism or fascism) concepts rather than the real issue at hand. Example:
If you were a true American you would support the rights of people to choose whatever vehicle they want.
In this example, the author equates being a "true American," a concept that people want to be associated with, particularly in a time of war, with allowing people to buy any vehicle they want even though there is no inherent connection between the two.
Red Herring: This is a diversionary tactic that avoids the key issues, often by avoiding opposing arguments rather than addressing them. Example:
The level of mercury in seafood may be unsafe, but what will fishers do to support their families?
In this example, the author switches the discussion away from the safety of the food and talks instead about an economic issue, the livelihood of those catching fish. While one issue may affect the other it does not mean we should ignore possible safety issues because of possible economic consequences to a few individuals.
Straw Man: This move oversimplifies an opponent's viewpoint and then attacks that hollow argument.
People who don't support the proposed state minimum wage increase hate the poor.
In this example, the author attributes the worst possible motive to an opponent's position. In reality, however, the opposition probably has more complex and sympathetic arguments to support their point. By not addressing those arguments, the author is not treating the opposition with respect or refuting their position.
Moral Equivalence: This fallacy compares minor misdeeds with major atrocities.
That parking attendant who gave me a ticket is as bad as Hitler.
In this example, the author is comparing the relatively harmless actions of a person doing their job with the horrific actions of Hitler. This comparison is unfair and inaccurate.
Using Logic in Writing
Understanding how to create logical syllogisms does not automatically mean that writers understand how to use logic to build an argument. Crafting a logical sequence into a written argument can be a very difficult task. Don't assume that an audience will easily follow the logic that seems clear to you. When converting logical syllogisms into written arguments, remember to:
- lay out each premise clearly
- provide evidence for each premise
- draw a clear connection to the conclusion
Say a writer was crafting an editorial to argue against using taxpayer dollars for the construction of a new stadium in the town of Mill Creek. The author's logic may look like this:
Premise 1: Projects funded by taxpayer dollars should benefit a majority of the public.
Premise 2: The proposed stadium construction benefits very few members of the public.
Conclusion: Therefore, the stadium construction should not be funded by taxpayer dollars.
This is a logical conclusion, but without elaboration it may not persuade the writer's opposition, or even people on the fence. Therefore, the writer will want to expand her argument like this:
Historically, Mill Creek has only funded public projects that benefit the population as a whole. Recent initiatives to build a light rail system and a new courthouse were approved because of their importance to the city. Last election, Mayor West reaffirmed this commitment in his inauguration speech by promising "I am determined to return public funds to the public." This is a sound commitment and a worthy pledge.
However, the new initiative to construct a stadium for the local baseball team, the Bears, does not follow this commitment. While baseball is an enjoyable pastime, it does not receive enough public support to justify spending $210 million in public funds for an improved stadium. Attendance in the past five years has been declining, and last year only an average of 400 people attended each home game, meaning that less than 1% of the population attends the stadium. The Bears have a dismal record at 0-43 which generates little public interest in the team.
The population of Mill Creek is plagued by many problems that affect the majority of the public, including its decrepit high school and decaying water filtration system. Based on declining attendance and interest, a new Bears stadium is not one of those needs, so the project should not be publicly funded. Funding this project would violate the mayor's commitment to use public money for the public.
Notice that the piece uses each paragraph to focus on one premise of the syllogism (this is not a hard and fast rule, especially since complex arguments require far more than three premises and paragraphs to develop). Concrete evidence for both premises is provided. The conclusion is specifically stated as following from those premises.
Consider this example, where a writer wants to argue that the state minimum wage should be increased. The writer does not follow the guidelines above when making his argument.
It is obvious to anyone thinking logically that minimum wage should be increased. The current minimum wage is an insult and is unfair to the people who receive it. The fact that the last proposed minimum wage increase was denied is proof that the government of this state is crooked and corrupt. The only way for them to prove otherwise is to raise minimum wage immediately.
The paragraph does not build a logical argument for several reasons. First, it assumes that anyone thinking logically will already agree with the author, which is clearly untrue. If that were the case, the minimum wage increase would have already occurred. Secondly, the argument does not follow a logical structure. There is no development of premises which lead to a conclusion. Thirdly, the author provides no evidence for the claims made.
In order to develop a logical argument, the author first needs to determine the logic behind his own argument. It is likely that the writer did not consider this before writing, which demonstrates that arguments which could be logical are not automatically logical. They must be made logical by careful arrangement.
The writer could choose several different logical approaches to defend this point, such as a syllogism like this:
Premise 1: Minimum wage should match the cost of living in society.
Premise 2: The current minimum wage does not match the cost of living in society.
Conclusion: Therefore, minimum wage should be increased.
Once the syllogism has been determined, the author needs to elaborate each step in writing that provides evidence for the premises:
The purpose of minimum wage is to ensure that workers can provide basic amenities to themselves and their families. A report in the Journal of Economic Studies indicated that workers cannot live above the poverty line when minimum wage is not proportionate with the cost of living. It is beneficial to society and individuals for a minimum wage to match living costs.
Unfortunately, our state's minimum wage no longer reflects an increasing cost of living. When the minimum wage was last set at $5.85, the yearly salary of $12,168 guaranteed by this wage was already below the poverty line. Years later, after inflation has consistently raised the cost of living, workers earning minimum wage must struggle to support a family, often taking 2 or 3 jobs just to make ends meet. 35% of our state's poor population is made up of people with full time minimum wage jobs.
In order to remedy this problem and support the workers of this state, minimum wage must be increased. A modest increase could help alleviate the burden placed on the many residents who work too hard for too little just to make ends meet.
This piece explicitly states each logical premise in order, allowing them to build to their conclusion. Evidence is provided for each premise, and the conclusion is closely related to the premises and evidence. Notice, however, that even though this argument is logical, it is not irrefutable. An opponent with a different perspective and logical premises could challenge this argument. See the next section for more information on this issue.
Does Logic Always Work?
Logic is a very effective tool for persuading an audience about the accuracy of an argument. However, people are not always persuaded by logic. Sometimes audiences are not persuaded because they have used values or emotions instead of logic to reach conclusions. But just as often, audiences have reached a different logical conclusion by using different premises. Therefore, arguments must often spend as much time convincing audiences of the legitimacy of the premises as the legitimacy of the conclusions.
For instance, assume a writer was using the following logic to convince an audience to adopt a smaller government:
Premise 1: The government that governs least, governs best.
Premise 2: The government I am proposing does very little governing.
Conclusion: Therefore, the government I am proposing is best.
Some members of the audience may be persuaded by this logic. However, other members of the audience may follow this logic instead:
Premise 1: The government that governs best, governs most.
Premise 2: The government proposed by the speaker does very little governing.
Conclusion: Therefore, the government proposed by the speaker is bad.
Because they adhere to a different logical sequence, these members of the audience will not be persuaded to change their minds logically until they are persuaded to different values through other means besides logic. See the OWL resource here for more examples of how to integrate argument and rhetorical strategies into your writing.
A functional impropriety is the use of a word as the wrong part of speech. The wrong meaning for a word can also be an impropriety.
To help you practice avoiding improprieties, complete the exercise below.
Mark improprieties in the following phrases and correct them. If you find none, write C for "correct" next to the phrase.
Example: (occupation) hazards — occupational
- reforming institution policies
- percent aging students by grades
- dead trees as inhabitants for birds
- an initiate story about a young girl
- a recurrence theme in literature
- a wood chisel
- a wood baseball bat
- a frivolity conversation on the weather
- a utopia hideaway of alpine villas
- a utilize room complete with workbench
- the unstable chemical compounds
- the unschooled labor force
- the vandals who rapined Rome
- an erupting volcano crevassing the hills
- criticism writing which is often abstract
- abstracted beyond understanding
- classified as an absorbent
- a handwriting letter
- banjoed their way to the top ten
- a meander stream
- hoboing across the country
- holidayed the time away
- the redirective coming from the officer
- grain-fed slaughter cattle
- ivy tendoned to the walls | http://owl.english.purdue.edu/owl/owlprint/659/ | 13 |
55 | In computer science and operations research, the bees algorithm is a population-based search algorithm first developed in 2005. It mimics the food foraging behaviour of swarms of honey bees. In its basic version, the algorithm performs a kind of neighbourhood search combined with random search and can be used for both combinatorial optimization and functional optimisation.
The foraging process in nature
A colony of honey bees can extend itself over long distances (up to 14 km) and in multiple directions simultaneously to exploit a large number of food sources. A colony prospers by deploying its foragers to good fields. In principle, flower patches with plentiful amounts of nectar or pollen that can be collected with less effort should be visited by more bees, whereas patches with less nectar or pollen should receive fewer bees.
The foraging process begins in a colony by scout bees being sent to search for promising flower patches. Scout bees move randomly from one patch to another. During the harvesting season, a colony continues its exploration, keeping a percentage of the population as scout bees.
When they return to the hive, those scout bees that found a patch which is rated above a certain quality threshold (measured as a combination of some constituents, such as sugar content) deposit their nectar or pollen and go to the “dance floor” to perform a dance known as the waggle dance.
This dance is essential for colony communication, and contains three pieces of information regarding a flower patch: the direction in which it will be found, its distance from the hive and its quality rating (or fitness). This information helps the colony to send its bees to flower patches precisely, without using guides or maps. Each individual’s knowledge of the outside environment is gleaned solely from the waggle dance. This dance enables the colony to evaluate the relative merit of different patches according to both the quality of the food they provide and the amount of energy needed to harvest it. After waggle dancing inside the hive, the dancer (i.e. the scout bee) goes back to the flower patch with follower bees that were waiting inside the hive. More follower bees are sent to more promising patches. This allows the colony to gather food quickly and efficiently.
While harvesting from a patch, the bees monitor its food level. This is necessary to decide upon the next waggle dance when they return to the hive. If the patch is still good enough as a food source, then it will be advertised in the waggle dance and more bees will be recruited to that source.
The Bees Algorithm
The Bees Algorithm is an optimisation algorithm inspired by the natural foraging behaviour of honey bees to find the optimal solution. The algorithm requires a number of parameters to be set, namely: the number of scout bees (n), the number of sites selected out of the n visited sites (m), the number of best sites out of the m selected sites (e), the number of bees recruited for the best e sites (nep), the number of bees recruited for the other (m-e) selected sites (nsp), the initial size of patches (ngh), which includes a site and its neighbourhood, and the stopping criterion.
The pseudo code for the bees algorithm in its simplest form:
1. Initialise population with random solutions.
2. Evaluate fitness of the population.
3. While (stopping criterion not met) // Forming new population.
4. Select sites for neighbourhood search.
5. Recruit bees for selected sites (more bees for best e sites) and evaluate fitnesses.
6. Select the fittest bee from each patch.
7. Assign remaining bees to search randomly and evaluate their fitnesses.
8. End While.
In the first step, the bees algorithm starts with the scout bees (n) being placed randomly in the search space. In step 2, the fitnesses of the sites visited by the scout bees are evaluated. In step 4, the bees that have the highest fitnesses are chosen as “selected bees” and the sites they visited are chosen for neighbourhood search. Then, in steps 5 and 6, the algorithm conducts searches in the neighbourhood of the selected sites, assigning more bees to search near the best e sites. The bees can be chosen directly according to the fitnesses associated with the sites they are visiting. Alternatively, the fitness values are used to determine the probability of the bees being selected. Searches in the neighbourhood of the best e sites, which represent the more promising solutions, are made more detailed by recruiting more bees to follow them than the other selected bees. Together with scouting, this differential recruitment is a key operation of the Bees Algorithm. However, in step 6, for each patch only the bee with the highest fitness will be selected to form the next bee population. In nature there is no such restriction; this restriction is introduced here to reduce the number of points to be explored. In step 7, the remaining bees in the population are assigned randomly around the search space to scout for new potential solutions. These steps are repeated until a stopping criterion is met. At the end of each iteration, the colony will have two parts to its new population: those that were the fittest representatives from a patch and those that have been sent out randomly.
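As a concrete illustration, here is one possible reading of those steps in Python for a simple one-dimensional maximisation problem. The parameter values, the objective function and the fixed patch size are assumptions made for this example; they are not prescribed by the algorithm, and practical implementations usually add refinements such as patch shrinking and site abandonment.

import random

# Illustrative values for (n, m, e, nep, nsp, ngh); they are not canonical.
N, M, E = 30, 10, 3            # scout bees, selected sites, elite sites
NEP, NSP = 7, 3                # recruits for elite sites and for the other selected sites
NGH = 0.5                      # patch (neighbourhood) radius
BOUNDS = (-5.0, 5.0)
ITERATIONS = 100

def fitness(x):
    return -x * x              # example objective: maximum at x = 0

def random_solution():
    return random.uniform(*BOUNDS)

def neighbour(x):
    lo, hi = BOUNDS
    return min(hi, max(lo, x + random.uniform(-NGH, NGH)))

def bees_algorithm():
    population = [random_solution() for _ in range(N)]                    # step 1
    for _ in range(ITERATIONS):                                           # step 3: stopping criterion
        population.sort(key=fitness, reverse=True)                        # step 2: evaluate fitnesses
        selected = population[:M]                                         # step 4: select sites
        next_population = []
        for rank, site in enumerate(selected):
            recruits = NEP if rank < E else NSP                           # step 5: more bees for best e sites
            patch = [neighbour(site) for _ in range(recruits)] + [site]
            next_population.append(max(patch, key=fitness))               # step 6: fittest bee per patch
        next_population += [random_solution() for _ in range(N - M)]      # step 7: random scouts
        population = next_population
    return max(population, key=fitness)

best = bees_algorithm()
print(best, fitness(best))

Keeping only the fittest bee from each patch (step 6) is also what keeps the population size fixed at n from one iteration to the next: m patch winners plus n-m fresh scouts.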
The Bees Algorithm has found many applications in engineering, such as:
- Training neural networks for pattern recognition.
- Forming manufacturing cells.
- Scheduling jobs for a production machine.
- Solving continuous problems and engineering optimization.
- Finding multiple feasible solutions to preliminary design problems.
- Data clustering
- Optimising the design of mechanical components.
- Multi-Objective Optimisation.
- Tuning a fuzzy logic controller for a robot gymnast.
- Computer Vision and Image Analysis.
In Job Shop Scheduling
The honey bees' effective foraging strategy can be applied to job shop scheduling problems.
A feasible solution in a job shop scheduling problem is a complete schedule of the operations specified in the problem. Each solution can be thought of as a path from the hive to a food source.
The makespan of the solution is analogous to the profitability of the food source in terms of distance and sweetness of the nectar. Hence, the shorter the makespan, the higher the profitability of the solution path.
We can thus maintain a colony of bees, where each bee will traverse a potential solution path. Once a feasible solution is found, each bee will return to the hive to perform a waggle dance. The waggle dance will be represented by a list of "elite solutions", from which other bees can choose to follow another bee's path. Bees with a better makespan will have a higher probability of adding their paths to the list of "elite solutions", promoting convergence to an optimal solution.
Using the above scheme, the natural honey bee's self organizing foraging strategy can be applied to the job shop scheduling problem.
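As a rough sketch of the "elite solutions" idea described above (the schedule labels and makespans below are invented, and real implementations encode schedules and recruitment probabilities in more elaborate ways), a follower bee could choose which advertised path to retrace with a probability that favours shorter makespans:

import random

# Hypothetical elite list: (schedule, makespan). A shorter makespan is a more profitable path.
elite_solutions = [("schedule_A", 120), ("schedule_B", 95), ("schedule_C", 140)]

def choose_path(elite):
    labels = [label for label, _ in elite]
    weights = [1.0 / makespan for _, makespan in elite]   # shorter makespan -> higher weight
    return random.choices(labels, weights=weights, k=1)[0]

print(choose_path(elite_solutions))   # "schedule_B" is chosen most often

Weighting the choice rather than always following the single best path preserves some exploration, which mirrors how not every follower bee attends the same dance.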
See also
- Artificial bee colony algorithm
- Ant colony optimization algorithms
- Evolutionary computation
- Intelligent Water Drops
- Invasive weed optimization algorithm
- Lévy flight foraging hypothesis
- Manufacturing Engineering Centre
- Particle swarm optimization
- Swarm intelligence
| http://en.wikipedia.org/wiki/Bees_algorithm | 13 |
19 | Deductive Fallacy Study Guide
Logical errors are, I think, of greater practical importance than many people believe; they enable their perpetrators to hold the comfortable opinion on every subject in turn.
Bertrand Russell, British philosopher, mathematician, and historian (1872–1970)
An argument with poor reasoning to support its conclusion is called a fallacy. In this lesson, you'll discover the relationship between deductive reasoning and how logic does or doesn't work. And you'll investigate four of the most common logical fallacies people use that make deductive reasoning fall apart.
Deductive Reasoning Study Guide explains what makes a valid deductive argument—two premises that are true and a conclusion that logically follows from them, without assuming anything not in those premises. A factual error, like a false premise or a conclusion that is not supported by the premises, makes an argument invalid. Moreover, an error in logic can make an argument invalid. This logical error is known as a fallacy.
There are a number of logical fallacies that occur in deductive arguments. Sometimes it's hard to recognize such fallacies, but it's important to learn to spot them so you're not misled or persuaded by someone's faulty logic. There are four major logical fallacies:
- Slippery Slope
- False Dilemma
- Circular Reasoning
As you read in the last lesson, conditionals are premises that use "if … then" to lead to a conclusion. Example: If you oversleep, then you'll miss the bus. A slippery slope is a conditional that contains a logical fallacy. It doesn't explain how the first event leads to the second. Example: If you don't pay your electric bill, then you'll never be able to get a loan for a car.
Slippery slope makes an argument seem more severe; it assumes that one wrong could slide toward something else that is wrong. It leaves out what is "between" the two events, without saying why. In the previous example, there are many possible steps between event A, not paying an electric bill, and event B, not being able to get a car loan. It's true that not paying a bill on time would show up on your credit report, but just one late payment doesn't inescapably lead to having such a bad credit report that you can't get a loan for a car.
Other examples follow. Keep in mind the possible steps between event A and event B in each, and the likelihood, or unlikelihood, that B will ever be a result of A.
- Don't let him help you with that. The next thing you know, he will be running your life.
- You can never give anyone a break. If you do, they will walk all over you.
- This week, you want to stay out past your curfew. If I let you stay out, next week you'll be gone all night!
Always check an argument for a chain of consequences. When someone says "If this … then that," make sure the chains are reasonable.
A false dilemma is an argument that presents a limited number of options (usually two), while in reality there are more. In other words, it gives a choice between one or another ("either-or") even though there are other choices that could be made. The false dilemma is commonly seen in black or white terms; it sets up one thing as all good and the other as all bad. Here's an example:
Stop wasting my time in this store! Either decide you can afford the stereo, or go without music in your room!
This argument contains a logical fallacy because it fails to recognize that there are many other options besides just buying one particular (expensive) stereo and going without music. You could, for instance, buy a less expensive stereo or even a radio. Or, you could borrow a stereo and have music in your room without making a purchase. There are many options beside the two presented as "either-or" in the argument.
Other common false dilemmas include:
- Love it or leave it.
- Either you're with us, or you're against us.
- Get better grades or you will never go to college.
False dilemmas are also common in politics. Many politicians would like you to believe that they, and their party, have all the right answers, and their opponents are not only wrong, but are ruining the country. They set up a choice between all good and all bad. For instance: "Price supports on agricultural production are part of the socialist agenda. My opponent in this race consistently votes for price supports on dairy and tobacco products. It is time to stop electing socialists to Congress. Should you vote for my opponent, who wants to lead our country on the path toward socialism, or should you vote for me and restore democracy?"
| http://www.education.com/study-help/article/errors-deductive-reasoning/ | 13 |
17 | Facts, information and articles about Secession, one of the causes of the Civil War
Secession summary: the secession of Southern states led to the establishment of the Confederacy and ultimately the Civil War. It was the most serious secession movement in the United States, and it ended with the defeat of the Confederate armies in the Civil War, 1861-65.
Causes Of Secession
Before the Civil War, the country was dividing between North and South. Issues included states' rights but centered mostly on the issue of slavery, which was prominent in the South but increasingly banned by Northern states.
With the election in 1860 of Abraham Lincoln, who ran on a platform opposing the expansion of slavery, the Southern states felt it was only a matter of time before the institution was outlawed completely. South Carolina became the first state to officially secede from the United States on December 20, 1860. Within two months, Georgia, Florida, Alabama, Mississippi, Texas and Louisiana seceded as well. Later Virginia, Arkansas, North Carolina, and Tennessee joined them, and the people of these states elected Jefferson Davis as president of the newly formed Confederacy.
Secession Leads To War
The Civil War officially began with the Battle Of Fort Sumter. Fort Sumter was a Union fort in the harbor of Charleston, South Carolina. After the U.S. Army troops inside the fort refused to vacate it, Confederate forces opened fire on the fort with cannons. It was surrendered without casualty, but led to the bloodiest war in the nation’s history.
A Short History of Secession
From Articles of Confederation to "A More Perfect Union." Arguably, the act of secession lies deep within the American psyche. When the 13 colonies rebelled against Great Britain in the War for American Independence, it was an act of secession, one that is celebrated by Americans to this day.
During that war, each of the rebelling colonies regarded itself as a sovereign nation that was cooperating with a dozen other sovereigns in a relationship of convenience to achieve shared goals, the most immediate being independence from Britain. On Nov. 15, 1777, the Continental Congress passed the Articles of Confederation—"Certain Articles of Confederation and Perpetual Union"—to create "The United States of America." That document asserted that "Each State retains its sovereignty, freedom and independence" while entering into "a firm league of friendship with each other" for their common defense and to secure their liberties, as well as to provide for "their mutual and general welfare."
Under the Articles of Confederation, the central government was weak, without even an executive to lead it. Its only political body was the Congress, which could not collect taxes or tariffs (it could ask states for "donations" for the common good). It did have the power to oversee foreign relations but could not create an army or navy to enforce foreign treaties. Even this relatively weak governing document was not ratified by all the states until 1781. It is an old truism that "All politics are local," and never was that more true than during the early days of the United States. Having just seceded from what they saw as a despotic, powerful central government that was too distant from its citizens, Americans were skeptical about giving much power to any government other than that of their own states, where they could exercise more direct control. However, seeds of nationalism were also sown in the war: the war required a united effort, and many men who likely would have lived out their lives without venturing from their own state traveled to other states as part of the Continental Army.
The weaknesses of the Articles of Confederation were obvious almost from the beginning. Foreign nations, ruled to varying degrees by monarchies, were inherently contemptuous of the American experiment of entrusting rule to the ordinary people. A government without an army or navy and little real power was, to them, simply a laughing stock and a plum ripe for picking whenever the opportunity arose.
Domestically, the lack of any uniform codes meant each state established its own form of government, a chaotic system marked at times by mob rule that burned courthouses and terrorized state and local officials. State laws were passed and almost immediately repealed; sometimes ex post facto laws made new codes retroactive. Collecting debts could be virtually impossible.
George Washington, writing to John Jay in 1786, said, "We have, probably, had too good an opinion of human nature in forming our confederation." He underlined his words for emphasis. Jay himself felt the country had to become "one nation in every respect." Alexander Hamilton felt "the prospect of a number of petty states, with appearance only of union," was something "diminutive and contemptible."
In May 1787, a Constitutional Convention met in Philadelphia to address the shortcomings of the Articles of Confederation. Some Americans felt it was an aristocratic plot, but every state felt a need to do something to improve the situation, and smaller states felt a stronger central government could protect them against domination by the larger states. What emerged was a new constitution, written "in order to form a more perfect Union." It established the three branches of the federal government—executive, legislative, and judicial—and provided for two houses within the legislature. That Constitution, though amended 27 times, has governed the United States of America ever since. It failed to clearly address two critical issues, however.
It made no mention of the future of slavery. (The Northwest Ordinance, not the Constitution, prohibited slavery in the Northwest Territories, that area north of the Ohio River and along the upper Mississippi River.) It also did not include any provision for a procedure by which a state could withdraw from the Union, or by which the Union could be wholly dissolved. To have included such provisions would have been, as some have pointed out, to have written a suicide clause into the Constitution. But the issues of slavery and secession would take on towering importance in the decades to come, with no clear-cut guidance from the Founding Fathers for resolving them.
First Calls for Secession
Following ratification by 11 of the 13 states, the government began operation under the new U.S. Constitution in March 1789. In less than 15 years, states of New England had already threatened to secede from the Union. The first time was a threat to leave if the Assumption Bill, which provided for the federal government to assume the debts of the various states, were not passed. The next threat was over the expense of the Louisiana Purchase. Then, in 1812, President James Madison, the man who had done more than any other individual to shape the Constitution, led the United States into a new war with Great Britain. The New England states objected, for war would cut into their trade with Britain and Europe. Resentment grew so strong that a convention was called at Hartford, Connecticut, in 1814, to discuss secession for the New England states. The Hartford Convention was the most serious secession threat up to that time, but its delegates took no action.
Southerners had also discussed secession in the nation’s early years, concerned over talk of abolishing slavery. But when push came to shove in 1832, it was not over slavery but tariffs. National tariffs were passed that protected Northern manufacturers but increased prices for manufactured goods purchased in the predominantly agricultural South, where the Tariff of 1828 was dubbed the "Tariff of Abominations." The legislature of South Carolina declared the tariff acts of 1828 and 1832 were "unauthorized by the constitution of the United States" and voted them null, void and non-binding on the state.
President Andrew Jackson responded with a Proclamation of Force, declaring, "I consider, then, the power to annul a law of the United States, assumed by one state, incompatible with the existence of the Union, contradicted expressly by the letter of the Constitution, inconsistent with every principle on which it was founded, and destructive of the great object for which it was formed." (Emphasis is Jackson’s). Congress authorized Jackson to use military force if necessary to enforce the law (every Southern senator walked out in protest before the vote was taken). That proved unnecessary, as a compromise tariff was approved, and South Carolina rescinded its Nullification Ordinance.
The Nullification Crisis, as the episode is known, was the most serious threat of disunion the young country had yet confronted. It demonstrated both continuing beliefs in the primacy of states rights over those of the federal government (on the part of South Carolina and other Southern states) and a belief that the chief executive had a right and responsibility to suppress any attempts to give individual states the right to override federal law.
The Abolition Movement, and Southern Secession
Between the 1830s and 1860, a widening chasm developed between North and South over the issue of slavery, which had been abolished in all states north of the Mason-Dixon line. The Abolition Movement grew in power and prominence. The slave holding South increasingly felt its interests were threatened, particularly since slavery had been prohibited in much of the new territory that had been added west of the Mississippi River. The Missouri Compromise, the Dred Scott Decision case, the issue of Popular Sovereignty (allowing residents of a territory to vote on whether it would be slave or free), and John Brown‘s Raid On Harpers Ferry all played a role in the intensifying debate. Whereas once Southerners had talked of an emancipation process that would gradually end slavery, they increasingly took a hard line in favor of perpetuating it forever.
In 1850, the Nashville Convention met from June 3 to June 12 "to devise and adopt some mode of resistance to northern aggression." While the delegates approved 28 resolutions affirming the South’s constitutional rights within the new western territories and similar issues, they essentially adopted a wait-and-see attitude before taking any drastic action. Compromise measures at the federal level diminished interest in a second Nashville Convention, but a much smaller one was held in November. It approved measures that affirmed the right of secession but rejected any unified secession among Southern states. During the brief presidency of Zachary Taylor, 1849-50, he was approached by pro-secession ambassadors. Taylor flew into a rage and declared he would raise an army, put himself at its head and force any state that attempted secession back into the Union.
The potato famine that struck Ireland and Germany in the 1840s–1850s sent waves of hungry immigrants to America’s shores. More of them settled in the North than in the South, where the existence of slavery depressed wages. These newcomers had sought refuge in the United States, not in New York or Virginia or Louisiana. To most of them, the U.S. was a single entity, not a collection of sovereign nations, and arguments in favor of secession failed to move them, for the most part.
The Election Of Abraham Lincoln And Nullification
The U.S. elections of 1860 saw the new Republican Party, a sectional party with very little support in the South, win many seats in Congress. Its candidate, Abraham Lincoln, won the presidency. Republicans opposed the expansion of slavery into the territories, and many party members were abolitionists who wanted to see the "peculiar institution" ended everywhere in the United States. South Carolina again decided it was time to nullify its agreement with the other states. On Dec. 20, 1860, the Palmetto State approved an Ordinance of Secession, followed by a declaration of the causes leading to its decision and another document that concluded with an invitation to form "a Confederacy of Slaveholding States."
The South Begins To Secede
South Carolina didn’t intend to go it alone, as it had in the Nullification Crisis. It sent ambassadors to other Southern states. Soon, six more states of the Deep South—Georgia, Florida, Alabama, Mississippi, Texas and Louisiana—renounced their compact with the United States. After Confederate artillery fired on Fort Sumter in Charleston Harbor, South Carolina, on April 12, 1861, Abraham Lincoln called for 75,000 volunteers to put down the rebellion. This led four more states— Virginia, Arkansas, North Carolina, and Tennessee—to secede; they refused to take up arms against their Southern brothers and maintained Lincoln had exceeded his constitutional powers by not waiting for approval of Congress (as Jackson had done in the Nullification Crisis) before declaring war on the South. The legislature of Tennessee, the last state to leave the Union, waived any opinion as to "the abstract doctrine of secession," but asserted "the right, as a free and independent people, to alter, reform or abolish our form of government, in such manner as we think proper."
In addition to those states that seceded, other areas of the country threatened to. The southern portions of Northern states bordering the Ohio River held pro-Southern, pro-slavery sentiments, and there was talk within those regions of seceding and casting their lot with the South.
A portion of Virginia did secede from the Old Dominion and formed the Union-loyal state of West Virginia. Its creation and admittance to the Union raised many constitutional questions—Lincoln’s cabinet split 50–50 on the legality and expediency of admitting the new state. But Lincoln wrote, “It is said that the admission of West-Virginia is secession, and tolerated only because it is our secession. Well, if we call it by that name, there is still difference enough between secession against the constitution, and secession in favor of the constitution.”
The Civil War: The End Of The Secession Movement
Four bloody years of war ended what has been the most significant attempt by states to secede from the Union. While the South was forced to abandon its dreams of a new Southern Confederacy, many of its people have never accepted the idea that secession was a violation of the U.S. Constitution, basing their arguments primarily on the Tenth Amendment to that constitution: "The powers not delegated to the United States by the Constitution, nor prohibited by it to the States, are reserved to the States respectively, or to the people."
Ongoing Calls For Secession & The Eternal Question: "Can A State Legally Secede?"
The ongoing debate continues over the question that has been asked since the forming of the United States itself: "Can a state secede from the Union of the United States?" Whether it is legal for a state to secede from the United States is a question that was fiercely debated before the Civil War (see the article below), and even now, that debate continues. From time to time, new calls have arisen for one state or another to secede, in reaction to political and/or social changes, and organizations such as the League of the South openly support secession and the formation of a new Southern republic.
Articles Featuring Secession From History Net Magazines
Was Secession Legal
Southerners insisted they could legally bolt from the Union.
Northerners swore they could not.
War would settle the matter for good.
Over the centuries, various excuses have been employed for starting wars. Wars have been fought over land or honor. Wars have been fought over soccer (in the case of the conflict between Honduras and El Salvador in 1969) or even the shooting of a pig (in the case of the fighting between the United States and Britain in the San Juan Islands in 1859).
But the Civil War was largely fought over equally compelling interpretations of the U.S. Constitution. Which side was the Constitution on? That’s difficult to say.
The interpretative debate—and ultimately the war—turned on the intent of the framers of the Constitution and the meaning of a single word: sovereignty—which does not actually appear anywhere in the text of the Constitution.
Southern leaders like John C. Calhoun and Jefferson Davis argued that the Constitution was essentially a contract between sovereign states—with the contracting parties retaining the inherent authority to withdraw from the agreement. Northern leaders like Abraham Lincoln insisted the Constitution was neither a contract nor an agreement between sovereign states. It was an agreement with the people, and once a state enters the Union, it cannot leave the Union.
It is a touchstone of American constitutional law that this is a nation based on federalism—the union of states, which retain all rights not expressly given to the federal government. After the Declaration of Independence, when most people still identified themselves not as Americans but as Virginians, New Yorkers or Rhode Islanders, this union of “Free and Independent States” was defined as a “confederation.” Some framers of the Constitution, like Maryland’s Luther Martin, argued the new states were “separate sovereignties.” Others, like Pennsylvania’s James Wilson, took the opposite view that the states “were independent, not Individually but Unitedly.”
Supporting the individual sovereignty claims is the fierce independence that was asserted by states under the Articles of Confederation and Perpetual Union, which actually established the name “The United States of America.” The charter, however, was careful to maintain the inherent sovereignty of its composite state elements, mandating that “each state retains its sovereignty, freedom, and independence, and every power, jurisdiction, and right, which is not by this Confederation expressly delegated.” It affirmed the sovereignty of the respective states by declaring, “The said states hereby severally enter into a firm league of friendship with each other for their common defence [sic].” There would seem little question that the states agreed to the Confederation on the express recognition of their sovereignty and relative independence.
Supporting the later view of Lincoln, the perpetuality of the Union was referenced during the Confederation period. For example, the Northwest Ordinance of 1787 stated that “the said territory, and the States which may be formed therein, shall forever remain a part of this confederacy of the United States of America.”
The Confederation produced endless conflicts as various states issued their own money, resisted national obligations and favored their own citizens in disputes. James Madison criticized the Articles of Confederation as reinforcing the view of the Union as “a league of sovereign powers, not as a political Constitution by virtue of which they are become one sovereign power.” Madison warned that such a view could lead to the “dissolving of the United States altogether.” If the matter had ended there with the Articles of Confederation, Lincoln would have had a much weaker case for the court of law in taking up arms to preserve the Union. His legal case was saved by an 18th-century bait-and-switch.
A convention was called in 1787 to amend the Articles of Confederation, but several delegates eventually concluded that a new political structure—a federation—was needed. As they debated what would become the Constitution, the status of the states was a primary concern. George Washington, who presided over the convention, noted, “It is obviously impracticable in the federal government of these states, to secure all rights of independent sovereignty to each, and yet provide for the interest and safety of all.” Of course, Washington was more concerned with a working federal government—and national army—than resolving the question of a state’s inherent right to withdraw from such a union. The new government forged in Philadelphia would have clear lines of authority for the federal system. The premise of the Constitution, however, was that states would still hold all rights not expressly given to the federal government.
The final version of the Constitution never actually refers to the states as “sovereign,” which for many at the time was the ultimate legal game-changer. In the U.S. Supreme Court’s landmark 1819 decision in McCulloch v. Maryland, Chief Justice John Marshall espoused the view later embraced by Lincoln: “The government of the Union…is emphatically and truly, a government of the people.” Those with differing views resolved to leave the matter unresolved—and thereby planted the seed that would grow into a full civil war. But did Lincoln win by force of arms or force of argument?
On January 21, 1861, Jefferson Davis of Mississippi went to the well of the U.S. Senate one last time to announce that he had “satisfactory evidence that the State of Mississippi, by a solemn ordinance of her people in convention assembled, has declared her separation from the United States.” Before resigning his Senate seat, Davis laid out the basis for Mississippi’s legal claim, coming down squarely on the fact that in the Declaration of Independence “the communities were declaring their independence”—not “the people.” He added, “I have for many years advocated, as an essential attribute of state sovereignty, the right of a state to secede from the Union.”
Davis’ position reaffirmed that of John C. Calhoun, the powerful South Carolina senator who had long viewed the states as independent sovereign entities. In an 1833 speech upholding the right of his home state to nullify federal tariffs it believed were unfair, Calhoun insisted, “I go on the ground that [the] constitution was made by the States; that it is a federal union of the States, in which the several States still retain their sovereignty.” Calhoun allowed that a state could be barred from secession by a vote of two-thirds of the states under Article V, which lays out the procedure for amending the Constitution.
Lincoln’s inauguration on March 4, 1861, was one of the least auspicious beginnings for any president in history. His election was used as a rallying cry for secession, and he became the head of a country that was falling apart even as he raised his hand to take the oath of office. His first inaugural address left no doubt about his legal position: “No State, upon its own mere motion, can lawfully get out of the Union, that resolves and ordinances to that effect are legally void, and that acts of violence, within any State or States, against the authority of the United States, are insurrectionary or revolutionary, according to circumstances.”
While Lincoln expressly called for a peaceful resolution, this was the final straw for many in the South who saw the speech as a veiled threat. Clearly when Lincoln took the oath to “preserve, protect, and defend” the Constitution, he considered himself bound to preserve the Union as the physical creation of the Declaration of Independence and a central subject of the Constitution. This was made plain in his next major legal argument—an address where Lincoln rejected the notion of sovereignty for states as an “ingenious sophism” that would lead “to the complete destruction of the Union.” In a Fourth of July message to a special session of Congress in 1861, Lincoln declared, “Our States have neither more, nor less power, than that reserved to them, in the Union, by the Constitution—no one of them ever having been a State out of the Union. The original ones passed into the Union even before they cast off their British colonial dependence; and the new ones each came into the Union directly from a condition of dependence, excepting Texas. And even Texas, in its temporary independence, was never designated a State.”
It is a brilliant framing of the issue, which Lincoln proceeds to characterize as nothing less than an attack on the very notion of democracy:
Our popular government has often been called an experiment. Two points in it, our people have already settled—the successful establishing, and the successful administering of it. One still remains—its successful maintenance against a formidable [internal] attempt to overthrow it. It is now for them to demonstrate to the world, that those who can fairly carry an election, can also suppress a rebellion—that ballots are the rightful, and peaceful, successors of bullets; and that when ballots have fairly, and constitutionally, decided, there can be no successful appeal, back to bullets; that there can be no successful appeal, except to ballots themselves, at succeeding elections. Such will be a great lesson of peace; teaching men that what they cannot take by an election, neither can they take it by a war—teaching all, the folly of being the beginners of a war.
Lincoln implicitly rejected the view of his predecessor, James Buchanan. Buchanan agreed that secession was not allowed under the Constitution, but he also believed the national government could not use force to keep a state in the Union. Notably, however, it was Buchanan who sent troops to protect Fort Sumter six days after South Carolina seceded. The subsequent seizure of Fort Sumter by rebels would push Lincoln on April 15, 1861, to call for 75,000 volunteers to restore the Southern states to the Union—a decisive move to war.
Lincoln showed his gift as a litigator in the July 4th address, though it should be noted that his scruples did not stop him from clearly violating the Constitution when he suspended habeas corpus in 1861 and 1862. His argument also rejects the suggestion of people like Calhoun that, if states can change the Constitution under Article V by democratic vote, they can agree to a state leaving the Union. Lincoln’s view is absolute and treats secession as nothing more than rebellion. Ironically, as Lincoln himself acknowledged, that places the states in the same position as the Constitution’s framers (and presumably himself as King George).
But he did note one telling difference: “Our adversaries have adopted some Declarations of Independence; in which, unlike the good old one, penned by Jefferson, they omit the words ‘all men are created equal.’”
Lincoln’s argument was more convincing, but only up to a point. The South did in fact secede because it was unwilling to accept decisions by a majority in Congress. Moreover, the critical passage of the Constitution may be more important than the status of the states when independence was declared. Davis and Calhoun’s argument was more compelling under the Articles of Confederation, where there was no express waiver of withdrawal. The reference to the “perpetuity” of the Union in the Articles and such documents as the Northwest Ordinance does not necessarily mean each state is bound in perpetuity, but that the nation itself is so created.
After the Constitution was ratified, a new government was formed by the consent of the states that clearly established a single national government. While, as Lincoln noted, the states possessed powers not expressly given to the federal government, the federal government had sole power over the defense of its territory and maintenance of the Union. Citizens under the Constitution were guaranteed free travel and interstate commerce. It would therefore be contradictory to suggest that citizens could find themselves separated from the country as a whole by a seceding state.
Moreover, while neither the Declaration of Independence nor the Constitution says states can not secede, they also do not guarantee states such a right nor refer to the states as sovereign entities. While Calhoun’s argument that Article V allows for changing the Constitution is attractive on some levels, Article V is designed to amend the Constitution, not the Union. A clearly better argument could be made for a duly enacted amendment to the Constitution that would allow secession. In such a case, Lincoln would clearly have been warring against the democratic process he claimed to defend.
Neither side, in my view, had an overwhelming argument. Lincoln’s position was the one most likely to be upheld by an objective court of law. Faced with ambiguous founding and constitutional documents, the spirit of the language clearly supported the view that the original states formed a union and did not retain the sovereign authority to secede from that union.
Of course, a rebellion is ultimately a contest of arms rather than arguments, and to the victor goes the argument. This legal dispute would be resolved not by lawyers but by more practical men such as William Tecumseh Sherman and Thomas “Stonewall” Jackson.
Ultimately, the War Between the States resolved the Constitution’s meaning for any states that entered the Union after 1865, with no delusions about the contractual understanding of the parties. Thus, 15 states from Alaska to Colorado to Washington entered in the full understanding that this was the view of the Union. Moreover, the enactment of the 14th Amendment strengthened the view that the Constitution is a compact between “the people” and the federal government. The amendment affirms the power of the states to make their own laws, but those laws cannot “abridge the privileges or immunities of citizens of the United States.”
There remains a separate guarantee that runs from the federal government directly to each American citizen. Indeed, it was after the Civil War that the notion of being “American” became widely accepted. People now identified themselves as Americans and Virginians. While the South had a plausible legal claim in the 19th century, there is no plausible argument in the 21st century. That argument was answered by Lincoln on July 4, 1861, and more decisively at Appomattox Court House on April 9, 1865.
Jonathan Turley is one of the nation’s leading constitutional scholars and legal commentators. He teaches at George Washington University.
Article originally published in the November 2010 issue of America’s Civil War.
Second: Secession – Revisionism or Reality
Secession fever revisited
We can take an honest look at history, or just revise it to make it more palatable
Try this version of history: 150 years ago this spring, North Carolina and Tennessee became the final two Southern states to secede illegally from the sacred American Union in order to keep 4 million blacks in perpetual bondage. With Jefferson Davis newly ensconced in his Richmond capital just a hundred miles south of Abraham Lincoln’s legally elected government in Washington, recruiting volunteers to fight for his “nation,” there could be little doubt that the rebellion would soon turn bloody. The Union was understandably prepared to fight for its own existence.
Or should the scenario read this way? A century and a half ago, North Carolina and Tennessee joined other brave Southern states in asserting their right to govern themselves, limit the evils of unchecked federal power, protect the integrity of the cotton market from burdensome tariffs, and fulfill the promise of liberty that the nation’s founders had guaranteed in the Declaration of Independence. With Abraham Lincoln’s hostile minority government now raising militia to invade sovereign states, there could be little doubt that peaceful secession would soon turn into bloody war. The Confederacy was understandably prepared to fight for its own freedom.
Which version is true? And which is myth? Although the Civil War sesquicentennial is only a few months old, questions like this, which most serious readers believed had been asked and answered 50—if not 150—years ago, are resurfacing with surprising frequency. So-called Southern heritage Web sites are ablaze with alternative explanations for secession that make such scant mention of chattel slavery that the modern observer might think shackled plantation laborers were dues-paying members of the AFL-CIO. Some of the more egregious comments currently proliferating on the new Civil War blogs of both the New York Times (“Disunion”) and Washington Post (“A House Divided”) suggest that many contributors continue to believe slavery had little to do with secession: Lincoln had no right to serve as president, they argue; his policies threatened state sovereignty; Republicans wanted to impose crippling tariffs that would have destroyed the cotton industry; it was all about honor. Edward Ball, author of Slaves in the Family, has dubbed such skewed memory as “the whitewash explanation” for secession. He is right.
As Ball and scholars like William Freehling, author of Prelude to Civil War and The Road to Disunion, have pointed out, all today’s readers need to do in order to understand what truly motivated secession is to study the proceedings of the state conventions where separation from the Union was openly discussed and enthusiastically authorized. Many of these dusty records have been digitized and made available online—discrediting this fairy tale once and for all.
Consider these excerpts. South Carolina voted for secession first in December 1860, bluntly citing the rationale that Northern states had “denounced as sinful the institution of slavery.”
Georgia delegates similarly warned against the “progress of anti-slavery.” As delegate Thomas R.R. Cobb proudly insisted in an 1860 address to the Legislature, “Our slaves are the most happy and contented of workers.”
Mississippians boasted, “Our position is thoroughly identified with the institution of slavery—the greatest material interest of the world…. There is no choice left us but submission to the mandates of abolition, or a dissolution of the Union.” And an Alabama newspaper opined that Lincoln’s election plainly showed the North planned “to free the negroes and force amalgamation between them and the children of the poor men of the South.”
Certainly the effort to “whitewash” secession is not new. Jefferson Davis himself was maddeningly vague when he provocatively asked fellow Mississippians, “Will you be slaves or will you be independent?…Will you consent to be robbed of your property [or] strike bravely for liberty, property, honor and life?” Non-slaveholders—the majority of Southerners—were bombarded with similarly inflammatory rhetoric designed to paint Northerners as integrationist aggressors scheming to make blacks the equal of whites and impose race-mixing on a helpless population. The whitewash worked in 1861—but does that mean that it should be taken seriously today?
From 1960-65, the Civil War Centennial Commission wrestled with similar issues, and ultimately bowed too deeply to segregationists who worried that an emphasis on slavery—much less freedom—would embolden the civil rights movement then beginning to gain national traction. Keeping the focus on battlefield re-enactments, regional pride and uncritical celebration took the spotlight off the real cause of the war, and its potential inspiration to modern freedom marchers and their sympathizers. Some members of the national centennial commission actually argued against staging a 100th anniversary commemoration of emancipation at the Lincoln Memorial. Doing so, they contended, would encourage “agitators.”
In a way, it is more difficult to understand why so much space is again being devoted to this debate. Fifty years have passed since the centennial. The nation has been vastly transformed by legislation and attitude. We supposedly live in a “post-racial era.” And just two years ago, Americans (including voters in the former Confederate states of Virginia and North Carolina), chose the first African-American president of the United States.
Or is this, perhaps, the real underlying problem—the salt that still irritates the scab covering this country’s unhealed racial divide?
Just as some Southern conservatives decried a 1961 emphasis on slavery because it might embolden civil rights, 2011 revisionists may have a hidden agenda of their own: Beat back federal authority, reinvigorate the states’ rights movement and perhaps turn back the re-election of a black president who has been labeled as everything from a Communist to a foreigner (not unlike the insults hurled at the freedom riders half a century ago).
Fifty years from now, Americans will either celebrate the honesty that animated the Civil War sesquicentennial, or subject it to the same criticisms that have been leveled against the centennial celebrations of the 1960s. The choice is ours. As Lincoln once said, “The struggle of today is not altogether for today—it is for a vast future also.”
Harold Holzer is chairman of the Abraham Lincoln Bicentennial Foundation.
Suave, gentlemanly Lt. Col. Arthur Fremantle of Her Majesty's Coldstream Guards picked an unusual vacation spot: the Civil War-torn United States.
By Robert R. Hodges, Jr.
After graduating from Sandhurst, Great Britain's West Point, Arthur James Lyon Fremantle entered the … | http://www.historynet.com/secession | 13 |
15 | The Essential Role of Statistics
Modern psychology couldn't get by without statistics. Some of these simply describe research data and stop there. An example is correlation, which yields a single number that indicates the extent to which two variables are “related.” Another example is the set of often-complex statistical computations that help researchers decide whether the results of their experiments are likely to be “real.”
A variable is literally anything in the environment or about a person that can be modified and have an influence on his or her particular thoughts, reactions, or behaviors. The amount of light or noise in a room is a variable — it differs from one room to the next. Height and weight are variables, as are intelligence, personality characteristics, and a host of observable behaviors, because these differ from one person to the next.
In correlation, the resulting number can range from 0 to +1.00 or 0 to –1.00. Where it falls indicates the strength of the correlation. The sign of the correlation indicates its direction. A correlation of 0 indicates no relationship at all; a correlation of either +1.00 or –1.00 indicates a perfect relationship.
For example, to assess the correlation between height and weight, a researcher would measure the height and weight of each of a group of individuals and then plug the numbers into a mathematical formula. This correlation will usually turn out to be noticeable, perhaps about +.63. The “.63” tells us that it is a relatively strong correlation, and the “+” tells us that height and weight tend to vary in the same direction — taller people tend to weigh more, shorter people less. But the correlation is far from perfect and there are many exceptions.
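To make that "mathematical formula" a little more concrete, here is a minimal C sketch of the Pearson correlation coefficient, the statistic usually meant by correlations like the one above. The height and weight values are invented for illustration, and the function assumes the two arrays have the same length; this is not the book's own procedure, just one way the calculation can be carried out.

#include <math.h>
#include <stdio.h>

/* Pearson correlation coefficient r for two equal-length arrays. */
double correlation(const double *x, const double *y, int n)
{
    double sx = 0, sy = 0, sxx = 0, syy = 0, sxy = 0;
    for (int i = 0; i < n; i++) {
        sx  += x[i];
        sy  += y[i];
        sxx += x[i] * x[i];
        syy += y[i] * y[i];
        sxy += x[i] * y[i];
    }
    /* r = (n*Sxy - Sx*Sy) / sqrt((n*Sxx - Sx^2) * (n*Syy - Sy^2)) */
    return (n * sxy - sx * sy) /
           (sqrt(n * sxx - sx * sx) * sqrt(n * syy - sy * sy));
}

int main(void)
{
    /* Made-up height (inches) and weight (pounds) values for five people. */
    double height[] = {61, 64, 67, 70, 73};
    double weight[] = {125, 130, 158, 172, 190};
    printf("r = %.2f\n", correlation(height, weight, 5));
    return 0;
}

Because the sample data were chosen so that taller people weigh more, this prints a strong positive value; real data would show more exceptions and a weaker coefficient.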
What the correlations you encounter in this book mean varies somewhat according to the application, but here's a rule of thumb: A correlation between 0 and about +.20 (or 0 and –.20) is weak, one between +.20 and +.60 (or –.20 and –.60) is moderate, and one between +.60 and +1.00 (or –.60 and –1.00) is strong.
As another example, a researcher might assess the extent to which people's blood alcohol content (BAC) is related to their ability to drive. The participants might be asked to drink and then attempt to operate a driving simulator. Their BACs would then be compared with their scores on the simulator, and the researcher might find a correlation of –.68. This is again a relatively strong correlation, but the “–” tells us that BAC and driving ability vary in an opposite direction — the higher the BAC, the lower the driving ability.
The purpose of descriptive statistics, such as correlation, the “mean” or average, and some others that will be considered in context later in the book, is to describe or summarize aspects of behavior in order to understand them better. Inferential statistics start with descriptive ones and go further in allowing researchers to draw meaningful conclusions — especially in experiments. These procedures are beyond the scope of this book, but the basic logic is helpful in understanding how psychologists know what they know.
Again recalling Bandura's experiment of observational learning of aggression, consider just the model-punished and model-rewarded groups. It was stated that the former children imitated few behaviors and the latter significantly more. What this really means is that, based on statistical analysis, the difference between the two groups was large enough and consistent enough to be unlikely to have occurred simply by “chance.” That is, it would have been a long shot to obtain the observed difference if what happened to the model wasn't a factor. Thus, Bandura and colleagues discounted the possibility of chance alone and concluded that what the children saw happen to the model was the cause of the difference in their behavior.
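As a rough illustration of that logic, the following C sketch runs a toy permutation test: it shuffles the group labels many times and asks how often a difference as large as the observed one turns up by chance. The imitation scores, group sizes, and number of trials are all invented for the example and are not Bandura's data.

#include <stdio.h>
#include <stdlib.h>

/* Imitation scores for two hypothetical groups of six children (made-up numbers):
   the first six are "model punished," the last six are "model rewarded." */
static double scores[] = {1, 0, 2, 1, 0, 1,   4, 3, 5, 4, 6, 3};
#define N_TOTAL 12
#define N_GROUP 6

static double group_difference(const double *s)
{
    double a = 0, b = 0;
    for (int i = 0; i < N_GROUP; i++) { a += s[i]; b += s[i + N_GROUP]; }
    return (b - a) / N_GROUP;            /* difference between the group means */
}

int main(void)
{
    double observed = group_difference(scores);
    int extreme = 0, trials = 100000;

    for (int t = 0; t < trials; t++) {
        double shuffled[N_TOTAL];
        for (int i = 0; i < N_TOTAL; i++) shuffled[i] = scores[i];
        for (int i = N_TOTAL - 1; i > 0; i--) {   /* Fisher-Yates shuffle of the labels */
            int j = rand() % (i + 1);
            double tmp = shuffled[i]; shuffled[i] = shuffled[j]; shuffled[j] = tmp;
        }
        if (group_difference(shuffled) >= observed)
            extreme++;
    }
    /* If this proportion is tiny, "chance alone" is an unlikely explanation. */
    printf("chance probability ~ %.4f\n", (double)extreme / trials);
    return 0;
}

If the printed proportion is tiny, chance alone becomes an implausible explanation for the observed difference, which is the essential idea behind inferential statistics.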
This logic may seem puzzling to you, and it isn't important that you grasp it to understand the many experiments that are noted throughout this book. Indeed, it isn't mentioned again. The point of mentioning it at all is to underscore that people are far less predictable than chemical reactions and the like, and therefore have to be studied somewhat differently — usually without formulas.
Psychologists study what people tend to do in a given situation, recognizing that not all people will behave as predicted — just as the children in the model-rewarded group did not all imitate all the behaviors. In a nutshell, the question is simply whether a tendency is strong enough — as assessed by statistics — to warrant a conclusion about cause and effect. | http://www.netplaces.com/psychology/how-psychologists-know-what-they-know/the-essential-role-of-statistics.htm | 13 |
17 | Argument by Definition
A common method of argumentation is to argue that some particular thing belongs to a particular class of things because it fits the definition for that class. For example, someone might argue that a human embryo is a person because it meets the definition of “person.” The goal of this method is to show that the thing in question adequately meets the definition. Definitions are often set within theories. This method is most often used as part of an extended argument. For example, someone might use this method to argue that a human embryo is a person and then use this to argue against stem cell research involving embryos.
This method can be used to argue that something, X, belongs in a class of things based on the fact that X meets the conditions set by the definition. Alternatively, it can be argued that X does not belong in that class of things because X does not meet the conditions set by the definition. The method involves the following basic steps:
Step 1: Present the definition
Step 2: Describe the relevant qualities of X.
Step 3: Show how X meets (or fails to meet) the definition
Step 4: Conclude that X belongs within that class (or does not belong within that class).
To use a basic example, imagine that someone wants to argue against stem cell research involving human embryos. She could begin by presenting a definition of “person” and then showing how human embryos meet that definition. This would not resolve the moral issue, but she could go on to argue that using persons in such research would be wrong and then conclude that using human embryos would be wrong.
As an example in aesthetics, a person might define a work of horror as a work which has as its goal to produce an emotion that goes beyond fear, namely that of horror, which would be defined in some detail. The person could go on to show how the movie Alien meets this definition and then conclude that Alien is a work of horror.
Since dictionaries conveniently provide a plethora of definitions, it is tempting to use them as the basis for an argument from definition. However, such arguments tend to be rather weak when it comes to matters of substantive dispute. For example, referring to the dictionary cannot resolve the debate over what it is to be a person. This is because dictionaries just provide the definition that the editors regard as correct, acceptable, or in general use. Dictionaries also generally do not back up their definitions with arguments; the definitions are simply provided, not defended.
Obviously dictionaries are very useful in terms of learning the meanings of words. But they are not means by which substantial conceptual disputes can be settled.
When making an argument from definition it is obviously very important to begin with a good definition. In some cases providing such a definition will involve settling a conceptual dispute. Resolving such a dispute involves, in part, showing that your definition of the concept is superior to the competition and that it is at least an adequate definition.
An acceptable definition must be clear, plausible, and internally consistent. It must also either be in correspondence with our intuitions or be supported by arguments that show our intuitions are mistaken. Of course, people differ in their intuitions about meanings so this can be somewhat problematic. When in doubt about whether a definition is intuitively plausible or not, it is preferable to argue in support of the definition. A definition that fails to meet these conditions is defective.
An acceptable definition must avoid being circular, too narrow, too broad or vague. Definitions that fail to avoid these problems are defective.
A circular definition merely restates the term being defined and thus provides no progress in the understanding of the term. For example, defining “goodness” as “the quality of being good” would be circular. As another example, defining “a work of art” as “a product of the fine arts” would also be circular. While these are rather blatant examples of circularity, it can also be more subtle.
A definition that is too narrow is one that excludes things that should be included; it leaves out too much. For example, defining “person” as “a human being” would be too narrow since there might well be non-humans that are persons. As another example, defining “art” as “paintings and sculptures” would be too narrow since there are other things that certainly seem to be art, such as music and movies, which are excluded by this definition. As a final example, defining “stealing” as “taking physical property away from another person” is also too narrow. After all, there seem to be types of theft (such as stealing ideas) that do not involve taking physical property. There are also types of theft that do not involve stealing from a person; one could steal from a non-person. Naturally enough, there can be extensive debate over whether a definition is too narrow or not. For example, a definition of “person” that excludes human fetuses might be regarded as too narrow by someone who is opposed to abortion while a pro-choice person might find such a definition acceptable. Such disputes would need to be resolved by argumentation.
A definition that is too broad is one that includes things that should not be included; it allows for the term to cover too much. For example, defining “stealing” as “taking something you do not legally own” would be too broad. A person fishing in international waters does not legally own the fish but catching them would not be stealing. As another example, defining “art” as “anything that creates or influences the emotions” would be too broad. Hitting someone in the face with a brick would influence his emotions but would not be a work of art. As with definitions that are too narrow there can be significant debate over whether a definition is too broad or not. For example, a definition of “person” that includes apes and whales might be taken by some as too broad. In such cases the conflict would need to be resolved by arguments.
While it might seem odd, a definition can be too broad and too narrow at the same time. For example, defining “gun” as “a projectile weapon” would leave out non-projectile guns (such as laser guns) while allowing non-gun projectile weapons (such as crossbows).
Definitions can also be too vague. A vague definition is one that is not precise enough for the task at hand. Not surprisingly, vague definitions will also tend to be too broad since their vagueness will generally allow in too many things that do not really belong. For example, defining “person” as “a being with some kind of mental activity” would be vague and also too broad.
There are a variety of ways to respond to this method. One way is to directly attack the definition used in the argument. This is done by showing how the definition used fails to meet one or more of the standards of a good definition. Obviously, since the argument rests on the definition, if the definition is defective, so too is the argument.
For example, suppose someone argues that a play is a tragedy based on their definition of tragedy in terms of being a work of art that creates strong emotions. This definition can be attacked on the grounds that it is too broad. After all, a comedy or love story could also create strong emotions but they would not be regarded as works of tragedy.
A second option is to attack X (the thing that is claimed to fit or not fit the definition). This is done by arguing that X does not actually meet the definition. If this can be done, the argument would fail because X would not belong in the claimed category.
As an example, a person might argue that a particular song is a country song, but the response could be an argument showing that the song lacks the alleged qualities. As a second example, someone might claim that dolphins are people, but it could be replied that they lack the qualities needed to be persons.
An argument by definition can also be countered by presenting an alternative definition. This is actually using another argument of the same type against the original. If the new definition is superior, then the old definition should be rejected and hence the argument would presumably fail. The quality of the definitions is compared using the standards above and the initial definition is attacked on the grounds that it is inferior to the counter definition. For example, a person might present a definition of horror that is countered by a better definition. As a second example, a person might present a definition of stealing that is countered by presenting a more adequate definition. | http://aphilosopher.wordpress.com/2008/01/04/argument-by-definition/ | 13 |
77 | An argument is an attempt to demonstrate the truth of an assertion called a conclusion, based on the truth of a set of assertions called premises. If the argument is successful, the conclusion is said to be proved. This article classifies arguments as either deductive or inductive. An argument always assumes a certain kind of dialogue, with one person presenting the argument, attempting to persuade an interlocutor. An argument could be part of a written text, a speech, or a conversation.
In an argument, some statements are put forward as giving evidence for another statement. For example, the following is an argument:
- She likes citrus fruit, so she probably likes kumquats. After all, kumquats are citrus fruits.
Here the conclusion is “she probably likes kumquats.” The statements offered in support are “she likes citrus fruit” and “kumquats are citrus fruits.” These premises are asserted, without any additional argument or support. These premises may or may not be true. A statement is argued for if it is given other statements as support; it is asserted if it has no such support.
Sometimes the premises actually provide no support for the conclusion. Consider this argument:
- The quarter has come up heads six times, so the next flip will probably come up tails.
The conclusion of this argument is “the next flip will probably come up tails.” The statement provided as evidence for this gives no support at all. The previous flips have no bearing on the next flip. Yet this is an argument because the premises were offered as evidence for the conclusion.
Some collections of statements may look like arguments without being arguments. For example, if one’s purpose is to explain or clarify a statement, one is not giving an argument:
- The movie was good. It had a good script, good acting, and good cinematography.
If my purpose in saying this is to explain why I liked the movie, I am not arguing. The second sentence is not given as evidence for or in support of the first sentence, but is meant to explain why I liked the movie. These same sentences may be used in an argument for the conclusion; if I’m trying to convince you that the movie was good, I might offer the quality of the writing, acting, and filming as evidence of the movie’s quality.
A deductive argument uses the laws of logic to attempt to prove its conclusion. A deductive argument may be valid or invalid. If it is valid, it is logically impossible for the premises to be true and the conclusion false. In a valid argument, the premises are said to imply the conclusion. In some ways this is a very strong requirement (much stronger than the ordinary use of the word imply would suggest). It is irrational to accept the premises of a deductive argument and not accept the conclusion. One is not merely invited to accept the conclusion as plausible if one accepts the premises, rather, one is compelled to accept it as true.
At the same time, it is in some ways a very weak requirement. Consider the following argument:
- All dogs are blue.
- Nothing is blue except fish.
- Therefore, all dogs are fish.
This argument is valid since the conclusion follows logically from the premises. If the premises were true, the conclusion would be true as well. But the premises are not true, so the argument is not entirely successful. If an argument is valid and has true premises, it is called sound.
A valid argument may be unsound even if it has a true conclusion. The following argument expresses this point:
- All babies are illogical.
- Nobody is despised who can manage a crocodile.
- Illogical persons are despised.
- Therefore, no baby can manage a crocodile.
The conclusion is probably true, but at least some of the premises are certainly false. The first and third premises together prove that babies are despised, and this is surely false. If all babies are illogical (which is probably true), then at least some illogical persons are not despised. So the third premise is false (and perhaps the second premise too), but the conclusion is true.
Thus, a valid argument can have a true conclusion but untrue premises. The reverse, however, is impossible: a valid argument with true premises cannot have a false conclusion. Faced with a valid argument, if you don’t believe the conclusion you must reject one of the premises. For example:
- Mammals do not lay eggs.
- The platypus lays eggs.
- Therefore, the platypus is not a mammal.
Here the conclusion is false: the platypus is a mammal. Here the false premise is the first. Some mammals (specifically, the platypus and the echidna) do lay eggs.
In a sense, logic is the study of validity. A system of logic, such as syllogism, will give rules to allow one to deduce a conclusion from premises. If a system of logic is adequate, its rules are exactly the ones needed to prove every valid argument it can express without proving any invalid arguments.
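As a toy illustration of what it means for a system's rules to sanction only valid forms, the following C sketch checks the propositional form modus ponens (from P and “P implies Q,” infer Q) by enumerating every truth assignment. The program and its encoding of “implies” as !p || q are illustrative assumptions, not part of the original entry.

#include <stdio.h>

/* Enumerate all truth assignments for P and Q and check that whenever both
   premises of modus ponens ("P implies Q" and "P") are true, the conclusion
   Q is true as well. */
int main(void)
{
    int valid = 1;
    for (int p = 0; p <= 1; p++)
        for (int q = 0; q <= 1; q++) {
            int implies = !p || q;        /* truth table of "P implies Q" */
            if (implies && p && !q)       /* premises true, conclusion false? */
                valid = 0;
        }
    printf("modus ponens is %s\n", valid ? "valid" : "invalid");
    return 0;
}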
Strictly speaking, inductive arguments support general conclusions with premises that give special cases. For example:
- Every major city that has adopted similar measures has ultimately repealed them after losing millions of dollars. If any city adopts a measure like this, it will likely face similar failure. We are not immune.
There are many other kinds of inductive arguments as well. For example, an argument by analogy, in which the conclusion is argued for by presenting an example of something held to be similar, is not strictly an inductive argument, but for many purposes can be treated as one. In the preceding example, the general argument could be converted into an argument by analogy simply by changing the word ‘any’ to ‘our’, so the conclusion becomes this: “if our city adopts a measure like this, it will likely face similar failure.” Abductive argument, or reasoning to the best explanation, is another kind of non-deductive argument that is in some ways similar to induction. Abductive arguments set out specific examples and then a general fact or principle that explains these examples.
Notice that the conclusion is not guaranteed by the premises. Hence, this argument is technically invalid. But if the comparisons are apt (if the measure being proposed by this city is relevantly similar, if the city is relevantly similar to the other cities, and so on), the argument is quite compelling. Thus, validity is the wrong measure for inductive arguments. Instead, an inductive argument is said to be compelling or cogent. An argument that is compelling or cogent is able to rationally persuade the interlocutor of the conclusion.
This standard of rational persuasion is not as well-defined as it in the case of deductive arguments. In many cases it is clear that an argument has gone wrong. The persuasive power of many arguments is emotional or in some other way not rational. Such an argument is fallacious, and there are many common fallacies, which, once seen, lose their ability to deceive. It is not so easy to explain the standards of cogency, to explain how an argument goes right.
The conclusion of a valid deductive argument is true if its premises are, so if one believes the premises of an argument, one must rationally believe the conclusion. Often arguments are between parties with different initial assumptions. In these cases, one party will present an argument whose premises he or she does not present as true, but as acceptable to the other party. The other party will counter with an argument from premises he or she thinks the other person believes to be true.
For example, a theodicy might have different premises if its intended audience consisted of believing Christians than if its intended audience consisted of agnostics, atheists, or Buddhists. An argument’s strength often depends on selecting the right premises for the intended audience.
New World Encyclopedia writers and editors rewrote and completed the Wikipedia article in accordance with New World Encyclopedia standards. This article abides by terms of the Creative Commons CC-by-sa 3.0 License (CC-by-sa), which may be used and disseminated with proper attribution. Credit is due under the terms of this license that can reference both the New World Encyclopedia contributors and the selfless volunteer contributors of the Wikimedia Foundation. To cite this article click here for a list of acceptable citing formats.The history of earlier contributions by wikipedians is accessible to researchers here:
Note: Some restrictions may apply to use of individual images which are separately licensed. | http://www.newworldencyclopedia.org/entry/Argument | 13 |
50 | In computing, optimization is the process of modifying a system to make some aspect of it work more efficiently or use fewer resources. For instance, a computer program may be optimized so that it executes more rapidly, or is capable of operating with less memory storage or other resources, or draw less power. The system may be a single computer program, a collection of computers or even an entire network such as the Internet. See also algorithmic efficiency for further discussion on factors relating to improving the efficiency of an algorithm.
Although the word "optimization" shares the same root as "optimal," it is rare for the process of optimization to produce a truly optimal system. The optimized system will typically only be optimal in one application or for one audience. One might reduce the amount of time that a program takes to perform some task at the price of making it consume more memory. In an application where memory space is at a premium, one might deliberately choose a slower algorithm in order to use less memory. Often there is no “one size fits all” design which works well in all cases, so engineers make trade-offs to optimize the attributes of greatest interest. Additionally, the effort required to make a piece of software completely optimal—incapable of any further improvement— is almost always more than is reasonable for the benefits that would be accrued; so the process of optimization may be halted before a completely optimal solution has been reached. Fortunately, it is often the case that the greatest improvements come early in the process.
Optimization can occur at a number of 'levels':
At the highest level, the design may be optimized to make best use of the available resources. The implementation of this design will benefit from a good choice of efficient algorithms and the implementation of these algorithms will benefit from writing good quality code. The architectural design of a system overwhelmingly affects its performance. The choice of algorithm affects efficiency more than any other item of the design. In some cases, however, optimization relies on using fancier algorithms, making use of special cases and special tricks and performing complex trade-offs; thus, a fully optimized program can sometimes, if insufficiently commented, be more difficult for less experienced programmers to comprehend and hence may contain more faults than unoptimized versions.
Avoiding poor-quality coding can also improve performance by avoiding obvious slowdowns. After that, however, some optimizations are possible which actually decrease maintainability; some, but not all, of them can nowadays be performed by optimizing compilers. For instance, using more indirection is often needed to simplify or improve software, but that indirection has a cost.
At the lowest level, writing code using an assembly language designed for a particular hardware platform will normally produce the most efficient code since the programmer can take advantage of the full repertoire of machine instructions. The operating systems of most machines have been traditionally written in assembler code for this reason.
With more modern optimizing compilers and the greater complexity of recent CPUs, it is more difficult to write code that is optimized better than the compiler itself generates, and few projects need resort to this 'ultimate' optimization step.
However, a large amount of code written today is still compiled with the intent to run on the greatest percentage of machines possible. As a consequence, programmers and compilers don't always take advantage of the more efficient instructions provided by newer CPUs or quirks of older models. Additionally, assembly code tuned for a particular processor without using such instructions might still be suboptimal on a different processor, expecting a different tuning of the code.
Just in time compilers and Assembler programmers may be able to perform run time optimization exceeding the capability of static compilers by dynamically adjusting parameters according to the actual input or other factors.
Code optimization can also be broadly categorized into platform-dependent and platform-independent techniques. While the latter are effective on most or all platforms, platform-dependent techniques use specific properties of one platform, or rely on parameters that depend on the single platform or even on the single processor; writing or producing different versions of the same code for different processors might thus be needed.
For instance, in the case of compile-level optimization, platform independent techniques are generic techniques such as loop unrolling, reduction in function calls, memory efficient routines, reduction in conditions, etc., that impact most CPU architectures in a similar way. Generally, these serve to reduce the total Instruction path length required to complete the program and/or reduce total memory usage during the process. On the other side, platform dependent techniques involve instruction scheduling, instruction level parallelism, data level parallelism, cache optimization techniques, i.e. parameters that differ among various platforms; the optimal instruction scheduling might be different even on different processors of the same architecture.
Computational tasks can be performed in several different ways with varying efficiency. For example, consider the following C code snippet whose intention is to obtain the sum of all integers from 1 to N:
int i, sum = 0;
for (i = 1; i <= N; i++)
    sum += i;
printf ("sum: %d\n", sum);
This code can (assuming no arithmetic overflow) be rewritten using a mathematical formula like:
int sum = (N * (N+1)) >> 1;  // >>1 is bit right shift by 1, which is
                             // equivalent to divide by 2 when N is
                             // non-negative
printf ("sum: %d\n", sum);
The optimization, sometimes performed automatically by an optimizing compiler, is to select a method (algorithm) that is more computationally efficient while retaining the same functionality. See Algorithmic efficiency for a discussion of some of these techniques. However, a significant improvement in performance can often be achieved by solving only the actual problem and removing extraneous functionality.
Optimization is not always an obvious or intuitive process. In the example above, the ‘optimized’ version might actually be slower than the original version if N were sufficiently small and the particular hardware happens to be much faster at performing addition and looping operations than multiplication and division.
Optimization will generally focus on improving just one or two aspects of performance: execution time, memory usage, disk space, bandwidth, power consumption or some other resource. This will usually require a trade-off - where one factor is optimized at the expense of others. For example, increasing the size of cache improves runtime performance, but also increases the memory consumption. Other common trade-offs include code clarity and conciseness.
There are instances where the programmer performing the optimization must decide to make the software more optimal for some operations but at the cost of making other operations less efficient. These trade-offs may sometimes be of a non-technical nature - such as when a competitor has published a benchmark result that must be beaten in order to improve commercial success but comes perhaps with the burden of making normal usage of the software less efficient. Such changes are sometimes jokingly referred to as pessimizations.
Optimization may include finding a bottleneck, a critical part of the code that is the primary consumer of the needed resource - sometimes known as a hot spot. As a rule of thumb, improving 20% of the code is responsible for 80% of the results.
In computer science, the Pareto principle can be applied to resource optimization by observing that 80% of the resources are typically used by 20% of the operations. In software engineering, it is often a better approximation that 90% of the execution time of a computer program is spent executing 10% of the code (known as the 90/10 law in this context).
More complex algorithms and data structures perform well with many items, while simple algorithms are more suitable for small amounts of data—the setup, initialization time, and constant factors of the more complex algorithm can outweigh the benefit.
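A minimal C sketch of this trade-off is a hybrid sort that uses a simple quadratic algorithm below some size and the library's general-purpose qsort above it. The cutoff of 32 is an assumption; the best value depends on the machine and the data.

#include <stdlib.h>

#define SMALL_ARRAY_CUTOFF 32  /* assumed threshold; the best value is machine-dependent */

static int cmp_int(const void *a, const void *b)
{
    int x = *(const int *)a, y = *(const int *)b;
    return (x > y) - (x < y);
}

/* Insertion sort: quadratic, but with tiny constant factors for short arrays. */
static void insertion_sort(int *a, size_t n)
{
    for (size_t i = 1; i < n; i++) {
        int key = a[i];
        size_t j = i;
        while (j > 0 && a[j - 1] > key) {
            a[j] = a[j - 1];
            j--;
        }
        a[j] = key;
    }
}

/* Pick the simple algorithm for small inputs, the general one otherwise. */
void hybrid_sort(int *a, size_t n)
{
    if (n <= SMALL_ARRAY_CUTOFF)
        insertion_sort(a, n);
    else
        qsort(a, n, sizeof a[0], cmp_int);
}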
In some cases, adding more memory can help to make a program run faster. For example, a filtering program will commonly read each line and filter and output that line immediately. This only uses enough memory for one line, but performance is typically poor. Performance can be greatly improved by reading the entire file then writing the filtered result, though this uses much more memory. Caching the result is similarly effective, though also requiring larger memory use.
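The two approaches might be sketched in C as follows. The "ERROR" match condition, the fixed line-length limit, and the minimal error handling are placeholders invented for the example; the first version holds only one line in memory at a time, while the second trades memory for fewer, larger I/O operations.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Line-at-a-time filtering: constant memory, processes one line per iteration. */
void filter_by_line(FILE *in, FILE *out)
{
    char line[4096];                     /* assumed maximum line length */
    while (fgets(line, sizeof line, in))
        if (strstr(line, "ERROR"))       /* hypothetical filter condition */
            fputs(line, out);
}

/* Whole-file filtering: reads everything into a buffer, then writes the result. */
void filter_whole_file(FILE *in, FILE *out)
{
    fseek(in, 0, SEEK_END);
    long size = ftell(in);
    rewind(in);
    if (size < 0) return;                /* real code would report the error */

    char *buf = malloc((size_t)size + 1);
    if (!buf) return;
    fread(buf, 1, (size_t)size, in);
    buf[size] = '\0';

    for (char *line = strtok(buf, "\n"); line; line = strtok(NULL, "\n"))
        if (strstr(line, "ERROR")) {
            fputs(line, out);
            fputc('\n', out);
        }
    free(buf);
}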
Optimization can reduce readability and add code that is used only to improve the performance. This may complicate programs or systems, making them harder to maintain and debug. As a result, optimization or performance tuning is often performed at the end of the development stage.
Donald Knuth made the following statement on optimization: "We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. Yet we should not pass up our opportunities in that critical 3%."
"Premature optimization" is a phrase used to describe a situation where a programmer lets performance considerations affect the design of a piece of code. This can result in a design that is not as clean as it could have been or code that is incorrect, because the code is complicated by the optimization and the programmer is distracted by optimizing.
An alternative approach is to design first, code from the design and then profile/benchmark the resulting code to see which parts should be optimized. A simple and elegant design is often easier to optimize at this stage, and profiling may reveal unexpected performance problems that would not have been addressed by premature optimization.
In practice, it is often necessary to keep performance goals in mind when first designing software, but the programmer balances the goals of design and optimization.
Optimization during code development using macros takes on different forms in different languages.
In some procedural languages, such as C and C++, macros are implemented using token substitution. Nowadays, inline functions can be used as a type safe alternative in many cases. In both cases, the inlined function body can then undergo further compile-time optimizations by the compiler, including constant folding, which may move some computations to compile time.
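A small C example of the contrast, offered only as a sketch: the macro is expanded by plain token substitution (so an argument with side effects would be evaluated twice), while the inline function is type-checked and a call with a constant argument can still be folded at compile time.

#include <stdio.h>

/* Token-substitution macro: the argument text is pasted in, so an expression
   with side effects (e.g. SQUARE_MACRO(i++)) would be evaluated twice. */
#define SQUARE_MACRO(x) ((x) * (x))

/* Type-safe alternative: the compiler can inline this and, when the argument
   is a compile-time constant, fold the whole call into a constant. */
static inline int square_inline(int x)
{
    return x * x;
}

int main(void)
{
    /* Both calls are likely reduced to the constant 49 at compile time. */
    printf("%d %d\n", SQUARE_MACRO(7), square_inline(7));
    return 0;
}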
In many functional programming languages macros are implemented using parse-time substitution of parse trees/abstract syntax trees, which it is claimed makes them safer to use. Since in many cases interpretation is used, that is one way to ensure that such computations are only performed at parse-time, and sometimes the only way.
In both cases, work is moved to compile-time. The difference between C macros on one side, and Lisp-like macros and C++ Template metaprogramming on the other side, is that the latter tools allow performing arbitrary computations at compile-time/parse-time, while expansion of C macros does not perform any computation, and relies on the optimizer ability to perform it. Additionally, C macros do not directly support recursion nor iteration, so are not Turing complete.
As with any optimization, however, it is often difficult to predict where such tools will have the most impact before a project is complete.
See main article: Compiler optimization.
Optimization can be automated by compilers or performed by programmers. Gains are usually limited for local optimization, and larger for global optimizations. Usually, the most powerful optimization is to find a superior algorithm.
Optimizing a whole system is usually undertaken by programmers because it is too complex for automated optimizers. In this situation, programmers or system administrators explicitly change code so that the overall system performs better. Although it can produce better efficiency, it is far more expensive than automated optimizations.
Use a profiler (or performance analyzer) to find the sections of the program that are taking the most resources — the bottleneck. Programmers sometimes believe they have a clear idea of where the bottleneck is, but intuition is frequently wrong. Optimizing an unimportant piece of code will typically do little to help the overall performance.
When the bottleneck is localized, optimization usually starts with a rethinking of the algorithm used in the program: more often than not, a particular algorithm can be specifically tailored to a particular problem, yielding better performance than a generic algorithm. For example, the task of sorting a huge list of items is usually done with a quicksort routine, which is one of the most efficient generic algorithms. But if some characteristic of the items is exploitable (for example, they are already arranged in some particular order), a different method can be used, or even a custom-made sort routine.
After one is reasonably sure that the best algorithm is selected, code optimization can start: loops can be unrolled (for lower loop overhead, although this can often lead to lower speed if it overloads the CPU cache), data types as small as possible can be used, integer arithmetic can be used instead of floating-point, and so on.
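For example, loop unrolling might look like the following C sketch, which trades slightly larger code for fewer loop-control branches; the unrolling factor of four is arbitrary, and many optimizing compilers perform this transformation automatically.

#include <stddef.h>

/* Straightforward loop: one compare-and-branch per element. */
long sum_array(const int *a, size_t n)
{
    long sum = 0;
    for (size_t i = 0; i < n; i++)
        sum += a[i];
    return sum;
}

/* Manually unrolled by four: fewer branches per element, at the cost of
   slightly larger code. */
long sum_array_unrolled(const int *a, size_t n)
{
    long sum = 0;
    size_t i = 0;
    for (; i + 4 <= n; i += 4)
        sum += a[i] + a[i + 1] + a[i + 2] + a[i + 3];
    for (; i < n; i++)          /* handle any leftover elements */
        sum += a[i];
    return sum;
}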
Performance bottlenecks can be due to language limitations rather than algorithms or data structures used in the program. Sometimes, a critical part of the program can be re-written in a different programming language that gives more direct access to the underlying machine. For example, it is common for very high-level languages like Python to have modules written in C for greater speed. Programs already written in C can have modules written in assembly. Programs written in D can use the inline assembler.
Rewriting pays off because of a general rule known as the 90/10 law, which states that 90% of the time is spent in 10% of the code, and only 10% of the time in the remaining 90% of the code. So putting intellectual effort into optimizing just a small part of the program can have a huge effect on the overall speed if the correct part(s) can be located.
Manual optimization often has the side-effect of undermining readability. Thus code optimizations should be carefully documented and their effect on future development evaluated.
The program that does the automated optimization is called an optimizer. Most optimizers are embedded in compilers and operate during compilation. Optimizers can often tailor the generated code to specific processors.
Today, automated optimizations are almost exclusively limited to compiler optimization.
Sometimes, the time taken to undertake optimization in itself may be an issue.
Optimizing existing code usually does not add new features, and worse, it might add new bugs in previously working code (as any change might). Because manually optimized code can be less readable than unoptimized code, optimization might also impact its maintainability. Optimization comes at a price, and it is important to be sure that the investment is worthwhile.
An automatic optimizer (or optimizing compiler, a program that performs code optimization) may itself have to be optimized, either to further improve the efficiency of its target programs or to speed up its own operation. A compilation performed with optimization 'turned on' usually takes longer, although this is usually only a problem when programs are quite large; the cost is typically more than compensated for by the run-time savings accumulated over many executions of the optimized code. | http://everything.explained.at/Optimization_(computer_science)/ | 13
15 | The central mechanism of Darwinism is natural selection of the fittest, requiring differences in organisms from which nature can select. As a result of natural selection, inferior organisms are more likely to become extinct, and the superior groups are more likely to thrive and leave a greater number of offspring.1
The biological racism of late 19th century Darwinism is now both well documented and widely publicized. Especially influential in the development of biological racism was the theory of eugenics developed by Charles Darwin’s cousin, Sir Francis Galton.2,3
Less widely known is that many evolutionists, including Darwin, taught that women were biologically and intellectually inferior to men. The intelligence gap that Darwinists believed existed between males and females was not minor, but of a level that caused some evolutionists to classify the sexes as two distinct psychological species, males as homo frontalis and females as homo parietalis.4 Darwin himself concluded that the differences between male and female humans were so enormous that he was amazed that ‘such different beings belong to the same species’ and he was surprised that ‘even greater differences still had not been evolved.’5
Sexual selection was at the core of evolution, and female inferiority was its major proof and its chief witness. Darwin concluded that males were like animal breeders, shaping women to their liking by sexual selection.6 In contrast, war pruned weaker men, allowing only the strong to come home and reproduce. Men were also the hunting specialists, an activity that pruned weaker men. Women by contrast, ‘specialized in the “gathering” part of the primitive economy.’7
Male superiority was so critical for evolution that George stated:
‘The male rivalry component of sexual selection was “the key,” Darwin believed, to the evolution of man: of all the causes which have led to the differences … between the races of man … sexual selection has been the most efficient.’ 8
Natural selection struggles existed between groups, but were ‘even more intense among members of the same species, which have similar needs and rely upon the same territory to provide them with food and mates.’9 For years, evolution theorists commonly taught that the intense struggle for mates within the same species was a major factor in producing male superiority.
Darwin’s ideas, as elucidated in his writings, had a major impact on society and science. Richards concluded that Darwin’s views about women followed from evolutionary theory, ‘thereby nourishing several generations of scientific sexism.’10 Morgan added that Darwin inspired scientists to use biology, ethnology and primatology to support the theories of women’s ‘manifestly inferior and irreversibly subordinate’ status.11
The reasons justifying the belief in the biological inferiority of women are complex, but Darwinism was a major factor, especially Darwin’s natural and sexual selection ideas. The extent of the doctrine’s effect can be gauged by the fact that the inferiority-of-women conclusion has heavily influenced theorists from Sigmund Freud to Havelock Ellis, who have had a major role in shaping our generation.12 As eloquently argued by Durant, both racism and sexism were central to evolution:
‘Darwin introduced his discussion of psychology in the Descent by reasserting his commitment to the principle of continuity … [and] … Darwin rested his case upon a judicious blend of zoomorphic and anthropomorphic arguments. Savages, who were said to possess smaller brains and more prehensile limbs than the higher races, and whose lives were said to be dominated more by instinct and less by reason … were placed in an intermediate position between nature and man; and Darwin extended this placement by analogy to include not only children and congenital idiots but also women, some of whose powers of intuition, of rapid perception, and perhaps of imitation were “characteristic of the lower races, and therefore of a past and lower state of civilization”’ (Descent 1871:326–327).13
Darwin’s theory may have reflected his personal attitudes toward women and non-Caucasian races. When Darwin was concerned that his brother Erasmus might marry a young lady named Martineau, he wrote that if Erasmus married her he would not be:
‘… much better than her “nigger.”—Imagine poor Erasmus a nigger to so philosophical and energetic a lady … . Martineau had just returned from … America, and was full of married women’s property rights … . Perfect equality of rights is part of her doctrine … . We must pray for our poor “nigger” … Martineau didn’t become a Darwin.’14
Among the more telling indications of Darwin’s attitudes toward women were the statements he penned as a young man, which listed what he saw as the advantages of marriage, including children and a
‘… constant companion, (friend in old age) who will feel interested in one, object to be beloved and played with—better than a dog anyhow—Home, and someone to take care of house—Charms of music and female chit-chat. These things good for one’s health (emphasis mine).’ 15
Conflicts that Darwin perceived marriage would cause him included: ‘how should I manage all my business if I were obligated to go every day walking with my wife—Eheu!’ He added that as a married man he would be a ‘poor slave … worse than a negro’ but then reminisced that ‘One cannot live this solitary life, with groggy old age, friendless and cold and childless staring one in one’s face … .’ Darwin concluded his evaluation on the philosophical note: ‘There is many a happy slave’ and shortly thereafter, in 1839, he married his cousin, Emma Wedgewood.16
To Brent, Darwin’s comments revealed a low opinion of women: ‘It would be hard to conceive of a more self-indulgent, almost contemptuous, view of the subservience of women to men.’17 Richards’ analysis of Darwin’s thoughts was as follows:
‘From the onset he [Darwin] embarked on the married state with clearly defined opinions on women’s intellectual inferiority and her subservient status. A wife did not aspire to be her husband’s intellectual companion, but rather to amuse his leisure hours … . … and look after his person and his house, freeing and refreshing him for more important things. These views are encapsulated in the notes the then young and ambitious naturalist jotted not long before he found his “nice soft wife on a sofa” … (although throughout their life together it was Charles who monopolized the sofa, not Emma).’18
The major intellectual justification Darwin offered for his conclusions about female inferiority was found in The Descent of Man. In this work, Darwin argued that the ‘adult female’ in most species resembled the young of both sexes, and also that ‘males are more evolutionarily advanced than females.’19 Since female evolution progressed slower then male evolution, a woman was ‘in essence, a stunted man.’20 This view of women rapidly spread to Darwin’s scientific and academic contemporaries.
Darwin’s contemporary anthropologist, Allan McGrigor, concluded that women are less evolved than men and ‘… physically, mentally and morally, woman is a kind of adult child … it is doubtful if women have contributed one profound original idea of the slightest permanent value to the world.’21 Carl Vogt, professor of natural history at the University of Geneva, also accepted many of ‘the conclusions of England’s great modern naturalist, Charles Darwin.’
Vogt argued that ‘the child, the female, and the senile White’ all had the intellectual features and personality of the ‘grown up Negro,’ and that in intellect and personality the female was similar to both infants and the ‘lower’ races.22 Vogt concluded that human females were closer to the lower animals than males and had ‘a greater’ resemblance to apes than men.23 He believed that the gap between males and females became greater as civilizations progressed, and was greatest in the advanced societies of Europe.24 Darwin was ‘impressed by Vogt’s work and proud to number him among his advocates.’25
Darwin taught that the differences between men and women were due partly, or even largely, to sexual selection. A male must prove himself physically and intellectually superior to other males in the competition for females to pass his genes on, whereas a woman must only be superior in sexual attraction. Darwin also concluded that ‘sexual selection depended on two different intraspecific activities: the male struggle with males for possession of females; and female choice of a mate.’26 In Darwin’s words, evolution depended on ‘a struggle of individuals of one sex, generally males, for the possession of the other sex.’27
To support this conclusion, Darwin used the example of Australian ‘savage’ women who were the ‘constant cause of war both between members of the same tribe and distinct tribes,’ producing sexual selection due to sexual competition.28 Darwin also cited the North American Indian custom, which required the men to wrestle male competitors in order to retain their wives, to support his conclusion that ‘the strongest party always carries off the prize.’29 Darwin concluded that as a result, a weaker man was ‘seldom permitted to keep a wife that a stronger man thinks worth his notice.’29
Darwin used other examples to illustrate the evolutionary forces which he believed produced men of superior physical and intellectual strength on the one hand, and sexually coy, docile women on the other. Since humans evolved from animals, and ‘no one disputes that the bull differs in disposition from the cow, the wild-boar from the sow, the stallion from the mare, and, as is well known to the keepers of menageries, the males of the larger apes from the females,’ Darwin argued similar differences existed among humans.30 Consequently, the result was that man is ‘more courageous, pugnacious and energetic than woman, and has more inventive genius.’31
Throughout his life, Darwin held these male supremacist views, which he believed were a critical expectation of evolution.32 Darwin stated shortly before his death that he agreed with Galton’s conclusion that ‘education and environment produce only a small effect’ on the mind of most women because ‘most of our qualities are innate.’33 In short, Darwin believed, as do some sociobiologists today, that biology rather than the environment was the primary source of behaviour, morals and all mental qualities.34 Obviously, Darwin almost totally ignored the critical influence of culture, family environment, constraining social roles, and the fact that, in Darwin’s day, relatively few occupational and intellectual opportunities existed for women.35
Darwin attributed most female traits to male sexual selection. Traits he concluded were due to sexual selection included human torso-shape, limb hairlessness and the numerous other secondary sexual characteristics that differentiate humans from all other animals. What remained unanswered was why males or females would select certain traits in a mate when they had been successfully mating with hair covered mates for aeons, and no non-human primate preferred these human traits? In this case Darwin ‘looked for a single cause to explain all the facts.’36 If sexual selection caused the development of a male beard and its lack on females, why do women often prefer clean-shaven males? Obviously, cultural norms were critical in determining what was considered sexually attractive, and these standards change, precluding the long-term sexual selection required to biologically develop them.37,38
Proponents of this argument for women’s inferiority used evidence such as the fact that a higher percentage of both the mentally deficient and mentally gifted were males. They reasoned that since selection operated to a greater degree on men, the weaker males would be more rigorously eliminated than weaker females, raising the level of males. The critics argued that sex-linked diseases, as well as social factors, were major influences in producing the higher number of males judged feebleminded. Furthermore, the weaker females would be preserved by the almost universal norms that protected them.
A major reason so few women were defined as eminent was because their social role often confined them to housekeeping and child rearing. Also, constraints on the education and employment of women, by both law and custom, rendered comparisons between males and females of little value in determining innate abilities. Consequently, measures of intelligence, feeblemindedness, eminence, and occupational success should not have been related to biology without factoring out these critical factors.
The arguments for women’s inferiority, which once seemed well supported (and consequently were accepted by most theorists), were later shown to be invalid as illustrated by the changes in western society that occurred in the last generation.39 Hollingworth’s 1914 work103 was especially important in discrediting the variability hypothesis. She found that the female role as homemaker enabled feebleminded women to better survive outside an institutional setting, and this is why institutional surveys located fewer female inmates.
The theory of the natural and sexual selection origin of both the body and mind had major consequences on society soon after Darwin completed his first major work on evolution in 1859. In Shields’ words, ‘the leitmotiv of evolutionary theory as it came to be applied to the social sciences was the evolutionary supremacy of the Caucasian male.’40
One of the then leading evolutionists, Joseph LeConte, even concluded that differences between male and female resulting from organic evolution must also apply to distinct societal roles for each sex.41 Consequently, LeConte opposed women’s suffrage because evolution made women ‘incapable of dealing rationally with political and other problems which required emotional detachment and clear logic.’42
Their innate belief in the inferiority of females was strongly supported by biological determinism and the primacy of nature over nurture doctrine. After reviewing the once widely accepted tabula rasa theory, in which the environment was taught to be responsible for personality, Fisher noted that Darwinism caused a radical change in society:
‘… the year in which Darwin finished the first unpublished version of his theory of natural selection, Herbert Spencer began to publish essays on human nature. Spencer was a British political philosopher and social scientist who believed that human social order was the result of evolution. The mechanism by which social order arose was “survival of the fittest,” a term he, not Darwin, introduced. In 1850, Spencer wrote “Social Statistics,” a treatise in which he … opposed welfare systems, compulsory sanitation, free public schools, mandatory vaccinations, and any form of “poor law.” Why? Because social order had evolved by survival of the fittest. The rich were rich because they were more fit; certain nations dominated others because these peoples were naturally superior; certain racial types subjugated others because they were smarter. Evolution, another word he popularized, had produced superior classes, nations, and races.’43
Fisher added that the early evolutionist’s teaching included not only ideas of superior race but also superior sex; conclusions that the male sex dominated and controlled females due to evolution. Darwin taught that a major reason for male superiority was that males fought and died to protect both themselves and their females.44 As a consequence, males were subjected to a greater selection pressure than females because they had to fight for survival in such dangerous, male-orientated activities as war and hunting.
In the late 1800’s, the inferiority-of-women doctrine was taken for granted by most scientists to be a major proof of evolution by natural selection. Gould claimed that ‘almost all scientists’ then believed that Blacks, women, and other groups were intellectually inferior, and biologically closer to the lower animals.45 Nor were these scientists simply repeating their cultural prejudices. They attempted to support their belief of female inferiority with supposedly empirical research as well as evolutionary speculation.
One approach seized upon, to scientifically demonstrate that females were generally inferior to males, was to prove that their brain capacity was smaller. Researchers first endeavoured to demonstrate smaller female cranial capacity by skull measurements, and then tried to prove that brain capacity was causally related to intelligence—a far more difficult task.46 Darwin justified this approach for proving female inferiority by explaining:
‘As the various mental faculties gradually developed themselves, the brain would almost certainly become larger. … the large proportion which the size of man’s brain bears to his body, compared to the same proportion in the gorilla or orang, is closely connected with his higher mental powers … . … that there exists in man some close relation between the size of the brain and the development of the intellectual faculties is supported by the comparison of the skulls of savage and civilized races, of ancient and modern people, and by the analogy of the whole vertebrate series.’47
One of the most eminent of the numerous early researchers who used craniology to ‘prove’ intellectual inferiority of women was Paul Broca (1824–1880), a professor of surgery at the Paris Faculty of Medicine. He was a leader in the development of physical anthropology as a science, and one of Europe’s most esteemed anthropologists. In 1859, he founded the prestigious Anthropological Society.48 A major preoccupation of this society was measuring various human traits, including skulls, to ‘delineate human groups and assess their relative worth.’49 Broca concluded that in humans, the brain is larger in
‘… men than in women, in eminent men than in men of mediocre talent, in superior races than in inferior races50 … Other things equal, there is a remarkable relationship between the development of intelligence and the volume of the brain.’51
In an extensive review of Broca’s work, Gould concluded that Broca’s conclusions only reflected ‘the shared assumptions of most successful white males during his time—themselves on top … and women, Blacks, and poor people below.’52 How did Broca arrive at these conclusions? Gould responded that ‘his facts were reliable … but they were gathered selectively and then manipulated unconsciously in the service of prior conclusions.’ One would have been that women were intellectually and otherwise demonstratively inferior to men as evolution predicted. Broca’s own further research and the changing social climate later caused him to modify his views, concluding that culture was more important than he had first assumed.53
A modern study by Van Valen, which Jensen concluded was the ‘most thorough and methodologically sophisticated recent review of all the evidence relative to human brain size and intelligence,’ found that the best estimate of the within-sex correlation between brain size and I.Q. ‘may be as high as 0.3.’54,55 A correlation of 0.3 accounts for only 9% of the variance between the sexes, a difference that may be more evidence for test bias and culture than biological inferiority. Schluter showed that claimed racial and sexual differences in brain size ‘are accounted for by a simple artifact of the statistical methods employed.’56
Although some contemporary critics of Darwin effectively argued against his conclusions, the inferiority-of-women doctrine and the subordinate position of women was long believed. Only in the 1970s was the doctrine increasingly scientifically investigated as never before.57,58 Modern critics of Darwinism were often motivated by the women’s movement to challenge especially Darwin’s conclusion that evolution has produced males and females who were considerably different, and men who ‘were superior to women both physically and mentally.’59 Their critiques demonstrated major flaws in the evidence used to prove female inferiority and, as a result, identified fallacies in major aspects of Darwinism itself.60 For example, Fisher argued that the whole theory of natural selection was questionable, and quoted Chomsky, who said that the process by which the human mind achieved its present state of complexity was
‘a total mystery … . It is perfectly safe to attribute this development to “natural selection,” so long as we realize that there is no substance to this assertion, that it amounts to nothing more than a belief that there is some naturalistic explanation for these phenomena.’ 61
She also argued that modern genetic research has undermined several major aspects of Darwin’s hypothesis—especially his sexual selection theory. In contrast to the requirement for Darwinism, in reality, even if natural selection were to operate differentially on males and females, males would pass on many of their superior genes to both their sons and daughters because most ‘genes are not inherited along sexual lines.’ Aside from the genes which are on the Y chromosome, ‘a male offspring receives genes from both mother and father.’62
Darwin and his contemporaries had little knowledge of genetics, but this did not prevent them from making sweeping conclusions about evolution. Darwin even made the claim that the characteristics acquired by sexual selection are usually confined to one sex.63 Yet, Darwin elsewhere recognized that women could ‘transmit most of their characteristics, including some beauty, to their offspring of both sexes,’ a fact he ignored in much of his writing.64 Darwin even claimed that many traits, including genius and the higher powers of imagination and reason, are ‘transmitted more fully to the male than to the female offspring.’65
Even though Darwin’s theory advanced biologically based racism and sexism, some argue that he would not approve of, and could not be faulted for, the results of his theory. Many researchers went far beyond Darwin. Darwin’s cousin, Galton, for instance, concluded from his life-long study on the topic, that ‘women tend in all their capacities to be inferior to men (emphasis mine).’66 Richards concluded that recent studies emphasized ‘the central role played by economic and political factors in the reception of evolutionary theory,’ but Darwinism also provided ‘the intellectual underpinnings of imperialism, war, monopoly, capitalism, militant eugenics, and racism and sexism,’ and therefore ‘Darwin’s own part in this was not insignificant, as has been so often asserted.’67
After noting that Darwin believed that the now infamous social-Darwinist, Spencer, was ‘by far the greatest living philosopher in England,’ Fisher concluded that the evidence for the negative effects of evolutionary teaching on history were unassailable:
‘Europeans were spreading out to Africa, Asia, and America, gobbling up land, subduing the natives and even massacring them. But any guilt they harbored now vanished. Spencer’s evolutionary theories vindicated them … . Darwin’s Origin of Species, published in 1859, delivered the coup de grace. Not only racial, class, and national differences but every single human emotion was the adaptive end product of evolution, selection, and survival of the fittest.’ 68
These Darwinian conclusions of biology about females
‘… squared with other mainstream scholarly conclusions of the day. From anthropology to neurology, science had demonstrated that the female Victorian virtues of passivity, domesticity, and greater morality ( … less sexual activity) were rooted in female biology.’ 69
Consequently, many people concluded that: ‘evolutionary history has endowed women with domestic and nurturing genes and men with professional ones.’70
The conclusion of the evolutionary inferiority of women is so ingrained in biology that Morgan concludes that researchers tended to avoid ‘the whole subject of biology and origins,’ hoping that this embarrassing history will be ignored and scientists can ‘concentrate on ensuring that in the future things will be different.’71 Even evolutionary women scientists largely ignore the Darwinian inferiority theory.72,73
Morgan stresses that we simply cannot ignore evolutionary biology because the belief of the ‘jungle heritage and the evolution of man as a hunting carnivore has taken root in man’s mind as firmly as Genesis ever did.’ Males have ‘built a beautiful theoretical construction, with himself on top of it, buttressed with a formidable array of scientifically authenticated facts.’ She argues that these ‘facts’ must be reevaluated because scientists have ‘sometimes gone astray’ due to prejudice and philosophical proscriptions.74 Morgan states that the prominent evolutionary view of women as biologically inferior to men must still be challenged, even though scores of researchers have adroitly overturned this Darwinian theory.
Culture was of major importance in shaping Darwin’s theory.75 Victorian middle-class views about men were blatant in The Descent of Man and other evolutionists’ writings. The Darwinian concept of male superiority served to increase the secularization of society, and made more palatable the acceptance of the evolutionary naturalist view that humans were created by natural law rather than by divine direction.76 Naturalism was also critically important in developing the women-inferiority doctrine, as emphasized by Richards:
‘Darwin’s consideration of human sexual differences in The Descent was not motivated by the contemporary wave of anti-feminism … but was central to his naturalistic explanation of human evolution. It was his theoretically directed contention that human mental and moral characteristics had arisen by natural evolutionary processes which predisposed him to ground these characteristics in nature rather than nurture—to insist on the biological basis of mental and moral differences … .’ 77
A major method used to attack the evolutionary conclusion of female inferiority was to critique the evidence for Darwinism itself. Fisher, for example, noted that it was difficult to postulate theories about human origins on the actual brain organization
‘… of our presumed fossil ancestors, with only a few limestone impregnated skulls—most of them bashed, shattered, and otherwise altered by the passage of millions of years … [and to arrive at any valid conclusions on the basis of this] evidence, would seem to be astronomical.’ 78
Hubbard added that ‘Darwin’s sexual stereotypes’ were still commonly found
‘… in the contemporary literature on human evolution. This is a field in which facts are few and specimens are separated by hundreds of thousands of years, so that maximum leeway exists for investigator bias.’ 79
She then discussed our ‘overwhelming ignorance’ about human evolution and the fact that much which is currently accepted is pure speculation. Many past attempts to disprove the evolutionary view that women were intellectually inferior, similarly attacked the core of evolutionary theory itself. A belief in female inferiority is inexorably bound up with human group inferiority, which must first exist for natural selection to operate. Evaluations of the female inferiority theory have produced incisive, well-reasoned critiques of both sexual and natural selection and also Darwinism as a whole.80
Evolution can be used to argue for male superiority, but it can also be used to build a case for the opposite. The evolutionary evidence leaves so many areas for ‘individual interpretation’ that some feminist authors, and others, have read the data as proving the evolutionary superiority of women by using ‘the same evolutionary story to draw precisely the opposite conclusion.’81 One notable, early example is Montagu’s classic 1952 book, The Natural Superiority of Women. Some female biologists have even argued for a gynaecocentric theory of evolution, concluding that women are the trunk of evolution history, and men are but a branch, a grafted scion.82 Others have tried to integrate reformed ‘Darwinist evolutionary “knowledge” with contemporary feminist ideals.’83
Hapgood even concludes that evolution demonstrates that males exist to serve females, arguing that ‘masculinity did not evolve in a vacuum’ but because it was selected. He notes many animal species live without males, and the fact that they do live genderlessly or sexlessly shows that ‘males are unnecessary’ in certain environments.84 It is the woman that reproduces, and evolution teaches that survival is important only to the degree that it promotes reproduction. So Hapgood argues that evolution theory should conclude that males evolved only to serve females in all aspects of child bearing and nurturing. This includes both to ensure that the female becomes pregnant and that her progeny are taken care of.
Another revisionist theory is that women are not only superior, but society was once primarily matriarchal. These revisionists argue that patriarchal domination was caused by factors that occurred relatively recently.85 Of course, the theories that postulate the evolutionary inferiority of males suffer from many of the same problems as those that postulate women’s inferiority.
Some argue that many of the views Darwin developed should be perpetuated again, to produce a moral system based on the theory of evolution.86 For example, Ford concluded that the idea of eliminating sexism is erroneous:
‘… the much-attacked gender differentiation we see in our societies is actually … a necessary consequence of the constraints exerted by our evolution. There are clear factors which really do make men the more aggressive sex, for instance … .’ 87
After concluding that natural selection resulted in female inferiority, it was often implied that what natural selection produced was natural, and thus proper. It at least gave a ‘certain dignity’ to behaviours that we might ‘otherwise consider aberrant or animalistic.’ 88 For example, evolutionary success was defined as leaving more offspring, and consequently promiscuity in human males was a selected trait.
This explanation is used to justify both male promiscuity and irresponsibility, and argues that trying to change ‘nature’s grand design’ is futile.89 Fox even argues that the high pregnancy rate among unmarried teenage girls today is due to our ‘evolutionary legacy,’ which ‘drives’ young girls to get pregnant.90 Consequently, the authors conclude that cultural and religious prohibitions against unmarried teen pregnancy are doomed to fail.
Eberhard notes that the physical aggressiveness of males is justified by sexual selection, and that: ‘males are more aggressive than females in the sexual activities preceding mating (discussed at length by Darwin 1871 and confirmed many times since …).’ 91 Further, the conclusion ‘now widely accepted … that males of most species are less selective and coy in courtship because they make smaller investments in offspring’ is used to justify male sexual promiscuity.92 Male promiscuity is, in other words, genetically determined and thus is natural or normal because ‘males profit, evolutionarily speaking, from frequent mating, and females do not.’ The more females a male mates with, the more offspring he produces, whereas a female needs to mate only with one male to become pregnant.93 Evolution can progress only if females select the fittest male as predicted by Darwin’s theory of sexual selection. Males for this reason have ‘an undiscriminating eagerness’ to mate whereas females have ‘a discriminating passivity.’93
The Darwinian conclusion that women are inferior has had major unfortunate social consequences. Darwin hypothesized that sexual selection was important in evolution, and along with the data he and his followers gathered to support their inferiority-of-women view, it provided a major support for natural selection.94 Therefore, the disproof of women’s inferiority means that a major mechanism that was originally hypothesized to account for evolutionary advancement is wrong. Today, radically different conclusions are accepted about the intelligence of women, despite using data more complete but similar to that used by Darwin to develop his theory. This vividly demonstrates how important both preconceived ideas and theory are in interpreting data. The women’s evolutionary inferiority conclusion developed partly because:
‘Measurement was glorified as the essential basis of science: both anatomists and psychologists wanted above everything else to be “scientific,” … . Earlier psychological theory had been concerned with those mental operations common to the human race: the men of the nineteenth century were more concerned to describe human differences.’ 95
These human differences were not researched to understand and help society but to justify a theory postulated to support both naturalism and a specific set of social beliefs. The implications of Darwinism cannot be ignored today because the results of this belief were tragic, especially in the area of racism:
‘… it makes for poor history of science to ignore the role of such baggage in Darwin’s science. The time-worn image of the detached and objective observer and theoretician of Down House, remote from the social and political concerns of his fellow Victorians who misappropriated his scientific concepts to rationalize their imperialism, laissez-faire economics, racism and sexism, must now give way before the emerging historical man, whose writings were in many ways so congruent with his social and cultural milieu.’ 96
Hubbard went further and charged Darwin guilty of ‘blatant sexism.’ She placed a major responsibility for scientific sexism, and its mate social Darwinism, squarely at Darwin’s door.97 Advancing knowledge has shown many of Darwin’s ideas were not only wrong but also harmful. Many still adversely affect society today. Hubbard concluded that Darwin ‘provided the theoretical framework within which anthropologists and biologists have ever since been able to endorse the social inequality of the sexes.’98 Consequently, ‘it is important to expose Darwin’s androcentricism, and not only for historical reasons, but because it remains an integral and unquestioned part of contemporary biological theories.’99
Male superiority is critical for evolution. George states that:
‘… the male rivalry component of sexual selection was the key, Darwin believed, to the evolution of man; of all the causes which have led to the differences in external appearance between the races of man, and to a certain extent between man and the lower animals, sexual selection has been the most efficient.’ 100
A critical reason for Darwin’s conclusion was his rejection of the biblical account, which taught that man and woman were specific creations of God, made not to dominate but to complement each other. Darwin believed the human races ‘were the equivalent of the varieties of plants and animals which formed the materials of evolution in the organic world generally,’ and the means that formed the sexes and races were the same struggles that Darwin concluded animals underwent to both survive and mate.101 Having disregarded the biblical view, Darwin needed to replace it with another one, and the one he selected—the struggle of males for possession of females and food—resulted in males competing against other males. He concluded that evolution favoured the most vigorous and sexually aggressive males and caused these traits to be selected because those with these traits usually left more progeny.102
Darwin’s theory of female inferiority was not the result of personal conflicts with women but from his efforts to explain evolution without an intelligent creator. In general, a person’s attitude towards the opposite sex results from poor experiences with that sex. From the available information, this does not appear to have been the situation in Darwin’s case. His marriage was exemplary. The only major difference between Darwin and his wife was in the area of religion, and this caused only minor problems: their devotion to each other is classic in the history of famous people. Further, as far as is known, he had an excellent relationship with all of the other women in his life: his mother and his daughters. Much of Darwin’s hostility to religion and God is attributed to the death of his mother when he was young and to the death of his oldest daughter in 1851, at the age of ten.
The Christian teaching of the equality of the sexes before God (Gal. 3:28), and the lack of support for the female biological inferiority position, is in considerable contrast to the conclusions derived by evolutionary biology in the middle and late 1800s. In my judgment, the history of these teachings is a clear illustration of the negative impact of social Darwinism.
| http://www.answersingenesis.org/articles/tj/v14/n1/females | 13
41 | ARISTOTLE’S SYLLOGISM AS SIMPLE AS ABC NOW
BY NEW RAVAL’S NOTATION
The syllogism was introduced by Aristotle (a form of reasoning consisting of two premises and a conclusion). Aristotle gives the following definition of the syllogism in his fundamental treatise, the Organon.
“A syllogism is discourse, in which, certain things being stated, something other than what is stated follows of necessity from their being so”. The things that have been stated are known as the premises, and the one that follows from the premises is known as the conclusion of the syllogism.
A categorical syllogism is a type of argument with two premises and one conclusion. Each of these three propositions is one of four forms of categorical proposition.
Type A (All S are P): All monkeys are mammals
Type E (No S is P): No monkeys are birds
Type I (Some S are P): Some philosophers are logicians
Type O (Some S are not P): Some logicians are not philosophers
These four types of proposition are called A, E, I, and O type propositions. The variables S and P are placeholders for terms which represent a class or category of things, hence the name “categorical” proposition.
A categorical syllogism contains precisely three terms: the major term, which is the predicate of the conclusion; the minor term, the subject of the conclusion; and the middle term, which appears in both premises but not in the conclusion.
Aristotle noted the following five basic rules governing the validity of categorical syllogisms:
1. The middle term must be distributed at least once (distributed term refers to all members of the denoted class, as in all S are P and no S is P)
2. A term distributed in the conclusion must be distributed in the premise in which it occurs
3. Two negative premises imply no valid conclusion
4. If one premise is negative, then the conclusion must be negative
5. Two affirmatives imply an affirmative.
John Venn, an English logician, introduced a method for analyzing categorical syllogisms in 1880, known as the Venn diagram. In a paper entitled “On the Diagrammatic and Mechanical Representation of Propositions and Reasonings” in the Philosophical Magazine and Journal of Science, Venn showed the different ways to represent propositions by diagrams. For a categorical syllogism, three overlapping circles are drawn to represent the classes denoted by the three terms. Universal propositions (all S are P, no S is P) are indicated by shading the sections of the circles representing the excluded classes. Particular propositions (some S are P, some S are not P) are indicated by placing some mark, usually an “x”, in the part of the circle representing the class whose members are specified. The conclusion may then be inferred from the diagram.
Venn diagrams are similar to Euler diagrams, invented by Leonhard Euler in the 18th century, but Venn diagrams are visually more complex than Euler diagrams.
Solving syllogism problems by traditional methods is usually time-consuming, and most students consider them difficult. The new RAVAL'S NOTATION solves syllogism problems quickly and accurately. This method solves any categorical syllogism problem with the same ease and is as simple as ABC.
In RAVAL'S NOTATION, each premise and conclusion is written in abbreviated form, and the conclusion is then reached simply by connecting the abbreviated premises.
NOTATION: Statements (both premises and conclusions) are represented as follows:
a) All S are P SS-P
b) Some S are P S-P
c) Some S are not P (S / P)
d) No S is P SS / PP
(- implies are and / implies are not)
All is represented by double letters; Some is represented by a single letter. Some S are not P is represented as (S / P) in statement notation. This statement is uniquely written in brackets because it cannot be used in deriving any conclusion (Some S are not P does not imply Some P are not S). No S is P implies No P is S, so its notation contains double letters on both sides.
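As an illustration only (my own sketch, not part of the published notation), the following Python helper builds these abbreviations mechanically; the comments work through one classical syllogism using the linking rules listed below.

def raval(form, s, p):
    # Return the RAVAL notation string for one categorical statement.
    if form == "A":                    # All S are P
        return s + s + "-" + p
    if form == "E":                    # No S is P
        return s + s + "/" + p + p
    if form == "I":                    # Some S are P
        return s + "-" + p
    if form == "O":                    # Some S are not P
        return "(" + s + "/" + p + ")"
    raise ValueError("form must be A, E, I or O")

print(raval("A", "S", "M"))            # SS-M  (All S are M)
print(raval("A", "M", "P"))            # MM-P  (All M are P)
# Linking SS-M with MM-P through the common term M: M goes from single to
# double, i.e. it "multiplies", so a conclusion follows between the terminal
# terms S and P. Both premises carry - signs, so the conclusion is SS-P,
# that is, "All S are P" (the classical syllogism known as Barbara).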
RULES: (1) Conclusions are reached by connecting notations. Two notations can be linked only through a common linking term. When the common linking term multiplies (becomes double from single), divides (becomes single from double) or remains double, a conclusion is reached between the terminal terms. (Aristotle's rule: the middle term must be distributed at least once)
(2) If both linked statements have – signs, the resulting conclusion carries a – sign. (Aristotle's rule: two affirmatives imply an affirmative)
(3) Whenever statements having – and / signs are linked, the resulting conclusion carries a / sign. (Aristotle's rule: if one premise is negative, then the conclusion must be negative)
(4) A statement having a / sign cannot be linked with another statement having a / sign to derive any conclusion. (Aristotle's rule: two negative premises imply no valid conclusion)
(5) Whenever a statement carrying a / sign is involved as the first statement in deducing a conclusion, the terminating term in the statement carrying a – sign should be in double letters for a valid conclusion to follow.
(When the terminating term is in double letters, it limits the terminating term to at most the common term. Hence, when a / sign is involved, a valid conclusion follows only in this case.) | http://www.absoluteastronomy.com/discussionpost/Aristotles_syllogism_as_simple_as_ABC_by_new_Ravals_notation_96499431 | 13
15 | Dynamic programming is a useful technique that can be used to optimize hard problems by breaking them up into smaller subproblems. By storing and re-using partial solutions, it manages to avoid the pitfalls of using a greedy algorithm. There are two kinds of dynamic programming, bottom-up and top-down.
In order for a problem to be solvable using dynamic programming, the problem must possess the property of what is called an optimal substructure. This means that, if the problem was broken up into a series of subproblems and the optimal solution for each subproblem was found, then the resulting solution would be realized through the solution to these subproblems. A problem that does not have this structure cannot be solved with dynamic programming.
Top-down is better known as memoization. It is the idea of storing past calculations in order to avoid re-calculating them each time.
Given a recursive function, say:
fib(n) = 0                          if n = 0
fib(n) = 1                          if n = 1
fib(n) = fib(n - 1) + fib(n - 2)    if n >= 2
We can easily write this recursively from its mathematical form as:
fib(n):
    if(n == 0 || n == 1)
        return n
    return fib(n-1) + fib(n-2)
Now, anyone who has been programming for a while or knows a thing or two about algorithmic efficiency will tell you that this is a terrible idea. The reason is that, at each step, you must re-calculate the value of fib(i), where i is 2..n-2.
A more efficient approach is to store these values as they are computed, creating a top-down dynamic programming algorithm:
m = map(int, int)
m[0] = 0
m[1] = 1
fib(n):
    if(m[n] does not exist)
        m[n] = fib(n-1) + fib(n-2)
    return m[n]
By doing this, we calculate fib(i) at most once.
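For readers who prefer something runnable, here is an equivalent Python version of the same top-down idea (my own sketch, not part of the original answer):

memo = {0: 0, 1: 1}

def fib(n):
    # Top-down: compute each fib(i) once, store it, and reuse it afterwards.
    if n not in memo:
        memo[n] = fib(n - 1) + fib(n - 2)
    return memo[n]

print(fib(40))   # 102334155, computed almost instantly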
Bottom-up uses the same technique of memoization that is used in top-down. The difference, however, is that bottom-up uses comparative sub-problems known as recurrences to optimize your final result.
In most bottom-up dynamic programming problems, you are often trying to either minimize or maximize a decision. You are given two (or more) options at any given point and you have to decide which is more optimal for the problem you're trying to solve. These decisions, however, are based on previous choices you made.
By making the optimal decision at each point (each subproblem), you are making sure that your overall result is optimal.
The most difficult part of these problems is finding the recurrence relationships for solving your problem.
To pay for a bunch of algorithm textbooks, you plan to rob a store that has n items. The problem is that your tiny knapsack can only hold at most W kg. Knowing the weight (w[i]) and value (v[i]) of each item, you want to maximize the value of your stolen goods, which altogether must weigh at most W. For each item, you must make a binary choice - take it or leave it.
Now, you need to find what the subproblem is. Being a very bright thief, you realize that the maximum value achievable with the first i items and a maximum weight w can be represented as m[i, w]. In addition, m[0, w] (0 items at most weight w) and m[i, 0] (i items with 0 max weight) will always have a value of 0.
m[i, w] = 0 if i = 0 or w = 0
With your thinking full-face mask on, you notice that if you have filled your bag with as much weight as it can carry, a new item can't be considered unless its weight is less than or equal to the difference between your max weight and the current weight of the bag. Another case where you might want to consider an item is if it has weight less than or equal to that of an item already in the bag but more value.
m[i, w] = 0                                              if i = 0 or w = 0
m[i, w] = m[i - 1, w]                                    if w[i] > w
m[i, w] = max(m[i - 1, w], m[i - 1, w - w[i]] + v[i])    if w[i] <= w
These are the recurrence relations described above. Once you have these relations, writing the algorithm is very easy (and short!).
v = values from item1..itemn
w = weights from item1..itemn
n = number of items
W = maximum weight of knapsack
m[0..n, 0..W] = array(int, int)
for w=0 to W
    m[0, w] = 0
for i=1 to n
    m[i, 0] = 0
for i=1 to n
    for w=1 to W
        if w[i] <= w
            if v[i] + m[i-1, w - w[i]] > m[i-1, w]
                m[i, w] = v[i] + m[i-1, w - w[i]]
            else
                m[i, w] = m[i-1, w]
        else
            m[i, w] = m[i-1, w]
return m[n, W]
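The same table can be filled in with a short runnable Python function (my own sketch; the sample weights, values and capacity below are invented for the demonstration):

def knapsack(values, weights, W):
    # Bottom-up 0/1 knapsack: m[i][w] is the best value achievable using the
    # first i items with remaining capacity w.
    n = len(values)
    m = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for w in range(1, W + 1):
            if weights[i - 1] <= w:
                m[i][w] = max(m[i - 1][w],
                              m[i - 1][w - weights[i - 1]] + values[i - 1])
            else:
                m[i][w] = m[i - 1][w]
    return m[n][W]

# Items worth 60, 100 and 120 with weights 10, 20 and 30, capacity 50:
print(knapsack([60, 100, 120], [10, 20, 30], 50))   # 220 (take the last two items)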
- Introduction to Algorithms
- Programming Challenges
- Algorithm Design Manual
Luckily, dynamic programming has become really in when it comes to competitive programming. Check out Dynamic Programming on UVAJudge for some practice problems that will test your ability to implement and find recurrences for dynamic programming problems. | http://stackoverflow.com/questions/4278188/good-examples-articles-books-for-understanding-dynamic-programming/4278796 | 13 |
15 | Figure: Quicksort in progress. The pivot is highlighted in red, the current sorting area is shaded, and the recently swapped values are highlighted in green. The sorting regions from the stack of the recursive calls are painted on the left. The current region is highlighted in red. The region that will also be partitioned recursively is highlighted in blue. Because the iteration is depth-first, not breadth-first, the blue region is not sorted immediately after the red region.
The Quicksort algorithm is a well-known sorting algorithm that, on average, makes O(n log n) comparisons to sort n items. In the worst case, it makes O(n²) comparisons, though in correct implementations this behavior is rare. The Java library provides a standard way of sorting arrays with Arrays.sort() that uses an algorithm closely derived from Quicksort.
- Quicksort is an in-place sort that needs no temporary memory
- Typically, quicksort is faster in practice than other O(n log n) algorithms, because its inner loop can be efficiently implemented on most architectures.
- In most real-world data, it is possible to make design choices which minimize the probability of requiring quadratic time.
- Quicksort tends to make excellent usage of the memory hierarchy like virtual memory or caches. It is well suited to modern computer architectures.
- Quicksort can be easily parallelized due to its divide-and-conquer nature.
- This algorithm may swap the elements with equal comparison keys (it is not a stable sort).
- Simpler algorithms like insertion sort perform better for small data sets (about 10 elements or fewer). Advanced implementations automatically switch to a simpler algorithm if the data set is small enough.
- Quicksort does not work very well on already mostly sorted lists or on lists with many similar values.
The quicksort algorithm was developed in 1960 by C. A. R. Hoare while in the Soviet Union, as a visiting student at Moscow University. At that time, Hoare worked in a project on machine translation and developed the algorithm in order to sort the words to be translated. This made them more easily matched to an already-sorted Russian-to-English dictionary that was stored on magnetic tape.
Quicksort uses "divide and conquer strategy" to divide a list into two sub-lists.
The steps are:
- Pick an element, called a pivot, from the list (somewhere from the middle).
- Reorder the list so that all elements with values less than the pivot come before the pivot, while all elements with values greater than the pivot come after it (equal values can go either way). After this partitioning, the pivot is in its final position. This is called the partition operation.
- Recursively sort the sub-list of lesser elements and the sub-list of greater elements.
The base cases of the recursion are lists of size zero or one, which are always sorted.
Stage 2, the reordering, can be implemented in various ways.
The simplest way is to create two initially empty lists, and then add all elements lower than the pivot to the first list and all elements greater than the pivot to the second list. After that, the two lists and the pivot can be merged, producing the required list. This approach uses additional memory for the transient lists and additional time to merge them, so it may not be very efficient.
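A small Python sketch of this simple, non-in-place variant (my own illustration; elements equal to the pivot are kept together with it):

def quicksort_simple(items):
    # Builds new lists for the two partitions instead of reordering in place.
    if len(items) <= 1:
        return items
    pivot = items[len(items) // 2]
    lesser = [x for x in items if x < pivot]
    equal = [x for x in items if x == pivot]
    greater = [x for x in items if x > pivot]
    return quicksort_simple(lesser) + equal + quicksort_simple(greater)

print(quicksort_simple([5, 2, 9, 1, 5, 6]))   # [1, 2, 5, 5, 6, 9]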
Dual loop with the meeting low and high values
The applet code demonstrates a more efficient in-place partitioning:
- Set the two positions: low (initially start of the interval to reorder) and high (initially end of the interval).
- Move from low towards high until finding a value that is greater than the pivot. This is the first candidate for swapping. Set low to the position of the found candidate.
- Move from high towards low until finding a value that is less than the pivot (do not cross the position of the first swapping candidate). This is the second candidate for swapping. Set high to the position of the second candidate.
- Swap both found candidates.
- Repeat from step 2 while low < high; low and high need not meet exactly at the pivot. A sketch of this scheme follows below.
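Here is a Python sketch of the dual-loop scheme (my own illustration, following the classical Hoare partition; unlike the strict comparisons in the steps above, it stops each scan at values equal to the pivot, which keeps the loops from running past the ends of the section):

def partition(a, lo, hi):
    # Dual loop: low scans up, high scans down, out-of-place values are swapped.
    pivot = a[(lo + hi) // 2]
    low, high = lo - 1, hi + 1
    while True:
        low += 1
        while a[low] < pivot:
            low += 1
        high -= 1
        while a[high] > pivot:
            high -= 1
        if low >= high:
            return high
        a[low], a[high] = a[high], a[low]

def quicksort(a, lo=0, hi=None):
    if hi is None:
        hi = len(a) - 1
    if lo < hi:
        p = partition(a, lo, hi)
        quicksort(a, lo, p)        # left part, including the split point
        quicksort(a, p + 1, hi)    # right part

data = [24, 9, 29, 14, 19, 1, 26]
quicksort(data)
print(data)   # [1, 9, 14, 19, 24, 26, 29]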
Single loop with storage index
This is the alternative algorithm, described in Wikipedia. It also works:
- Swap pivot with the last element in the section to partition (moving it out of the way).
- Set storage to the position of the first element.
- Move from the start to the end of the section. For every value that is less than the pivot, swap it with the value indexed by storage and then increment storage.
- After finishing the previous step, swap the element indexed by storage with the pivot value that is at the end of the section. This moves the pivot into its final place (see the sketch below).
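The storage-index scheme looks like this in Python (again my own sketch; the pivot is taken from the middle and moved to the end first, as described above):

def partition_storage(a, lo, hi):
    mid = (lo + hi) // 2
    a[mid], a[hi] = a[hi], a[mid]           # move the pivot out of the way
    pivot = a[hi]
    storage = lo
    for i in range(lo, hi):
        if a[i] < pivot:
            a[i], a[storage] = a[storage], a[i]
            storage += 1
    a[storage], a[hi] = a[hi], a[storage]   # move the pivot into place
    return storage

def quicksort_storage(a, lo=0, hi=None):
    if hi is None:
        hi = len(a) - 1
    if lo < hi:
        p = partition_storage(a, lo, hi)
        quicksort_storage(a, lo, p - 1)
        quicksort_storage(a, p + 1, hi)

data = [24, 9, 29, 14, 19, 1, 26]
quicksort_storage(data)
print(data)   # [1, 9, 14, 19, 24, 26, 29]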
Attempts to improve
Picking the value of a random element as the pivot (step 1) makes the algorithm fast in the best possible case, but it will still be very slow in the worst possible case. There are the following known attempts to improve the pivot selection:
- Choose 3 numbers uniformly at random and keep the median of the 3 numbers as pivot.
- Use some other algorithm to find the median of the data set and use this value as pivot.
Both suggestions improve the worst-case performance, but the additional steps of picking the pivot slow the algorithm in the best-case scenario. As a result, these extensions do not necessarily make the algorithm better for all possible cases.
A "dual pivot" Quicksort has been proposed that is claimed, in experimental measurements, to be faster (a simplified sketch follows the steps below):
- Pick two pivots from the array rather than one.
- Reorder the array into three parts: all less than the smaller pivot, all larger than the larger pivot, all in between (or equal to) the two pivots.
- Recursively sort the sub-arrays.
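A deliberately simplified, non-in-place Python sketch of the dual-pivot idea (my own illustration; the real implementation described in the paper partitions in place):

def dual_pivot_quicksort(items):
    # Split around two pivots into three parts, then sort each part recursively.
    if len(items) <= 1:
        return items
    p1, p2 = items[0], items[-1]
    if p1 > p2:
        p1, p2 = p2, p1
    rest = items[1:-1]
    smaller = [x for x in rest if x < p1]
    middle = [x for x in rest if p1 <= x <= p2]
    larger = [x for x in rest if x > p2]
    return (dual_pivot_quicksort(smaller) + [p1] +
            dual_pivot_quicksort(middle) + [p2] +
            dual_pivot_quicksort(larger))

print(dual_pivot_quicksort([24, 9, 29, 14, 19, 1, 26]))   # [1, 9, 14, 19, 24, 26, 29]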
Dual-pivot quicksort is used in Java's Arrays.sort since version 1.7. It is formally described in reference 6, which also contains the source code.
While individual in-place partition operations are difficult to parallelize, once the list is divided, different sections can be sorted in parallel. If we have p processors, we can divide a list of n elements into p sublists in O(n) average time, then sort each of these in O((n/p) log(n/p)) average time. Ignoring the O(n) preprocessing, this is linear speedup. Given n processors, only O(n) time is required overall.
One advantage of parallel quicksort over other parallel sort algorithms is that no synchronization is required. A new thread is started as soon as a sublist is available for it to work on and it does not communicate with other threads. When all threads complete, the sort is done.
- 1 Document, describing Quicksort algorithm at GateGuru
- 2 http://www.vogella.de/articles/JavaAlgorithms/article.html
- 3 An Interview with C.A.R. Hoare
- 4 Seminar topics at ETHZ
- 5 http://permalink.gmane.org/gmane.comp.java.openjdk.core-libs.devel/2628
- 6 V.Yaroslavskiy (2009). Dual-Pivot Quicksort
- Freely available java code of this algorithm.
- Proposal to replace Quicksort in OpenJDK by dual pivot sorting algorithm.
This web page reuses material from Wikipedia page http://en.wikipedia.org/wiki/Quicksort under the rights of CC-BY-SA license. As a result, the content of this page is and will stay available under the rights of this license regardless of restrictions that apply to other pages of this website. | http://ultrastudio.org/en/Quicksort | 13 |
16 | Scatter plots are similar to line graphs in that they use horizontal and vertical axes to plot data points. However, they have a very specific purpose. Scatter plots show how much one variable is affected by another. The relationship between two variables is called their correlation .
Scatter plots usually consist of a large body of data. The closer the data points come to forming a straight line when plotted, the higher the correlation between the two variables, or the stronger the relationship.
If the data points make a straight line going from the origin out to high x- and y-values, then the variables are said to have a positive correlation. If the line goes from a high value on the y-axis down to a high value on the x-axis, the variables have a negative correlation.
A perfect positive correlation is given the value of 1. A perfect negative correlation is given the value of -1. If there is absolutely no correlation present the value given is 0. The closer the number is to 1 or -1, the stronger the correlation, or the stronger the relationship between the variables. The closer the number is to 0, the weaker the correlation. So something that seems to kind of correlate in a positive direction might have a value of 0.67, whereas something with an extremely weak negative correlation might have the value -.21.
An example of a situation where you might find a perfect positive correlation, as we have in the graph on the left above, would be when you compare the total amount of money spent on tickets at the movie theater with the number of people who go. This means that every time that "x" number of people go, "y" amount of money is spent on tickets without variation.
An example of a situation where you might find a perfect negative correlation, as in the graph on the right above, would be if you were comparing the amount of time it takes to reach a destination with the distance of a car (traveling at constant speed) from that destination.
On the other hand, a situation where you might find a strong but not perfect positive correlation would be if you examined the number of hours students spent studying for an exam versus the grade received. This won't be a perfect correlation because two people could spend the same amount of time studying and get different grades. But in general the rule will hold true that as the amount of time studying increases so does the grade received.
Let's take a look at some examples. The graphs that were shown above each had a perfect correlation, so their values were 1 and -1. The graphs below obviously do not have perfect correlations. Which graph would have a correlation of 0? What about 0.7? -0.7? 0.3? -0.3? Click on Answers when you think that you have them all matched up.
| http://mste.illinois.edu/courses/ci330ms/youtsey/scatterinfo.html | 13
17 | Before dawn on 8 November 1942, American soldiers waded through the surf of North African beaches in three widely separated areas to begin the largest amphibious operations that had ever been attempted in the history of warfare. These troops were the vanguard for a series of operations that eventually involved more than a million of their compatriots in action in the Mediterranean area. One campaign led to another. Before the surrender in May 1945 put an end to hostilities in Europe, American units in the Mediterranean area had fought in North Africa, Sicily, Italy, Sardinia, Corsica, and southern France.
Footnote one is a list of suggested readings and has been moved to the end of this file to enhance readability.
The decision to take the initiative in the West with an Allied invasion of North Africa was made by Prime Minister Winston S. Churchill and President Franklin D. Roosevelt. It was one of the few strategic decisions of the war in which the President overrode the counsel of his military advisers.
The reasons for it were as much political as military. At first TORCH, as the operation was called, had no specific military objective other than to effect a lodgment in French North Africa and to open the Mediterranean to Allied shipping. It stemmed mainly from a demand for early action against the European members of the Axis, and ostensibly was designed to ease the pressure on the hard-pressed Soviet armies and check the threatened advance of German power into the Middle East.
A combined Anglo-American attack on North Africa might have come earlier had it not been for the pressing need to use the extremely limited resources of the Allies to defend the eastern Mediterranean and stem the Japanese tidal wave that ultimately engulfed Burma, Malaya, the East Indies, the Philippines, and large areas of the southwest Pacific. In fact the invasion of North Africa had been a main topic of discussion between President Roosevelt, Prime Minister Churchill, and their chief military advisers, known collectively as the Combined Chiefs of Staff (CCS), at the first of the Allied wartime conferences held in Washington (ARCADIA) during the week before Christmas 1941. The thought of a North African undertaking at that time was inspired by hope of winning the initiative at relatively small cost and "closing and tightening the ring" around Germany, preparatory to a direct attack upon the core of its military power.
American military leaders had long appreciated the fact that the occupation of North Africa held the promise of producing valuable results for the Allied cause. (See Map II, inside back cover.) It would prevent Axis penetration of the French dependencies in that region, help secure the British line of communication through the Mediterranean, and provide a potential base for future land operations in the Mediterranean and southern Europe. Nevertheless, they were opposed on
For a full discussion of the views presented at ARCADIA, see Matloff and Snell, Strategic Planning, 1941-1942. Memo, COS for CsofS, 22 Dec 41; sub: American-British Strategy, Operations Division (OPD) files ABC 337 ARCADIA (24 Dec 41). Joint Board (B) 35 Ser. 707, 11 Sep 41, Sub: Brief of Strategic Concept of Operations Required to Defeat Our Potential Enemies. Before TORCH there were a number of plans for the invasion of North Africa. As early as the spring of 1941 the U.S. Joint Board had begun work on plans to seize Dakar. The code name for this operation was BLACK later changed to BARRISTER. GYMNAST and SUPER-GYMNAST contemplated joint operations with the British in the Casablanca area. The British also had a plan for a landing in Tunisia. For additional details on GYMNAST and SUPER- GYMNAST see Matloff and Snell, Strategic Planning for Coalition Warfare, 1941-1942, Chapters XI and XII.
strategic grounds to the dissipation of Allied strength in secondary ventures. Confident that America's great resources eventually would prove the decisive factor in the war, they favored a concentration of force in the United Kingdom for a massive attack against western Europe at the earliest possible time.
The British accepted the American view that the main blow would eventually have to be delivered in western Europe, but they hesitated to commit themselves on when and where it should fall. Even at this early stage they showed a preference for peripheral campaigns to be followed by a direct attack on the enemy only after he had been seriously weakened by attrition. Such a "peripheral strategy" came naturally to British leaders. They had followed it so often in earlier wars against continental powers that it had become deeply imbedded in England's military tradition. But another factor that led them to shy away from an immediate encounter with the enemy on his home grounds was the vivid memory of earlier disasters on the Continent. About these the British said little at this time but that the fear of another debacle influenced their arguments can be taken for granted. Later it was to come more openly to the surface.
Churchill and Field Marshal Sir Alan Brooke, Chief of the Imperial General Staff, from the outset stressed the advantages of a North African operation. They made much of the tonnage that would be saved by opening the Mediterranean and the likelihood that the French in North Africa, despite the fact that they were torn by dissension, would co-operate with the Allies once they landed. Thus France would be brought back into the struggle against the Axis.
While the majority of American military leaders had their doubts about the value of a North African invasion and its chances of success, President Roosevelt was attracted to the idea largely because it afforded an early opportunity to carry the war to the Germans. In his opinion it was very important to give the people of the United States a feeling that they were at war and to impress upon the Germans that they would have to face American power on their side of the Atlantic. Because of the interest of the two political heads, who in many matters saw eye to eye, the Combined Chiefs of Staff, without committing themselves definitely to any operation, agreed at the ARCADIA Conference to go ahead with a plan to invade North Africa.
Memo, WPD for CofS, 28 Feb 42, sub: Strategic Conceptions and Their Applications to SWPA, OPD files, Exec 4, Envelope 35; Notation by Eisenhower, 22 Jan 42 entry, Item 3. OPD Hist Unit File. The date for such an assault as estimated in early 1942 was to be sometime in the spring of 1943. Notes, GCM [George C. Marshall], 23 Dec 41, sub: Notes on Mtg at White House With President and Prime Minister Presiding, War Plans Division (WPD) 4402-136.
The task of working out such a plan was given to General Headquarters (GHQ) in Washington. By combining the main features of GYMNAST and a British scheme to attack Tunisia, GHQ produced a plan in record time called SUPER-GYMNAST. This plan was first submitted for review to Maj. Gen. Joseph W. Stilwell, who had been working on plans to seize Dakar, and then to Maj. Gen. Lloyd R. Fredendall. On the basis of their comments a revised plan was drawn up and approved on 19 February 1942.
Soon thereafter, unforeseen developments arose that prevented immediate implementation of the revised plan. Among these were the heavy losses the British Navy suffered in the Mediterranean and the Japanese advances in southeastern Asia, the Philippines, and the Netherlands Indies which made it imperative to give the Pacific area first call on American resources, particularly in ships. The shipment of men and supplies to the threatened areas put so great a strain on the Allied shipping pool, already seriously depleted by the spectacular success of German U-boats, that little was available for an early venture into North Africa or anywhere else. Before the situation eased, preparations for meeting the German Army head on in Europe, known as BOLERO, had received the green light in priorities over SUPER-GYMNAST.
As in the case of SUPER-GYMNAST BOLERO had its roots in strategic thinking that antedated Pearl Harbor. Months before 7 December, basic Anglo-American strategy, in the event of America's entry into the war, called for the defeat of Germany, the strongest Axis Power, first. This grand strategic concept was discussed as a hypothetical matter in pre-Pearl Harbor British-American staff conversations held in Washington between 29 January and 27 March 1941 and later set forth in the Allied agreement (ABC-1) and in the joint Army-Navy plan, RAINBOW 5, which were submitted to the President in June 1941. While sympathetic toward the strategy in both ABC-1 and RAINBOW 5, Roosevelt refrained from approving either at the time, probably for political reasons. At the ARCADIA Conference in December 1941, the basic strategic concept was confirmed and a de-
The code name GYMNAST continued to be used loosely by many to apply to SUPER-GYMNAST as well as the original plan. Interv with Brig Gen Paul M. Robinett. USA (Rt.). 29 Jun 56. OCMH. Morison, Battle of the Atlantic, Chs. VI, VII. Ltr, Secy War and Secy Navy to President, 2 Jun 41, copy filed in JB 325. Ser.
cision was made to begin the establishment of an American force in the United Kingdom. This decision, however, "was not definitive" since it was essentially based on the need of protecting the British Isles and did not include their use as a base for future offensive operations against the Continent. The omission troubled many American leaders, including Secretary of War Henry L. Stimson, who in early March tried to persuade the President that "the proper and orthodox line of our help" was to send an overwhelming force to the British Isles which would threaten an attack on the Germans in France. In this he was supported by the Joint Chiefs of Staff who had accepted the detailed analysis of the military situation, worked out by the War Plans Division under Brig. Gen. Dwight D. Eisenhower in late February. As a result the President replied to the Prime Minister on 8 March that in general the British should assume responsibility for the Middle East, the United States for the Pacific, and both should operate jointly in the Atlantic area. At the same time, the American planners were assigned the task of preparing plans for an invasion of northwest Europe in the Spring of 1943.
The principal argument for selecting this area for the main British-American offensive was that it offered the shortest route to the heart of Germany and so was the most favorable place in the west where a vital blow could be struck. It was also the one area where the Allies could hope to gain the necessary air superiority, where the United States could "concentrate and maintain" the largest force, where the bulk of the British forces could be brought into action, and where the maximum support to the Soviet Union, whose continued participation in the war was considered essential to the defeat of Germany, could be given. By 1 April an outline draft, which came to be known first as the Marshall Memorandum and later as BOLERO, was far enough advanced to be submitted to the President who accepted it without reservation and immediately dispatched Mr. Harry Hopkins and General George C. Marshall, Army Chief of Staff, to London to obtain British approval.
As originally conceived, BOLERO contemplated a build-up of military power in the United Kingdom simultaneously with continuous raids against the Continent, to be followed by a full-scale attack on Hitler's "Festung Europa" in the spring of 1943. Later the code name ROUNDUP was applied to the operational part of the plan. Under this plan forty-eight divisions, 60 percent of which would be American, were to be placed on the continent of Europe by September
Stimson and Bundy, On Active Service, pp. 415-16. Ibid., pp. 418-19; Matloff and Snell, Strategic Planning, 1941- 1942, pp. 183-85; Bryant, Turn of the Tide, p. 280.
of that year. Included in BOLERO was a contingent alternate plan known as SLEDGEHAMMER, which provided for the establishment of a limited beachhead on the Continent in the fall of 1942 should Germany collapse or the situation on the Eastern Front become so desperate that quick action in the west would be needed to relieve German pressure on the Soviet Union.
In London Hopkins and Marshall outlined the American plan to the British. While stressing BOLERO as a means of maintaining the Soviet Army as a fighting force, they also emphasized the need of arriving at an early decision "in principle" on the location and timing of the main British-American effort so that production, allocation of resources, training, and troop movements could proceed without delay.
Churchill seemed to be warmly sympathetic to the American proposal to strike the main blow in northwestern Europe, and described it as a "momentous proposal" in accord with "the classic principle of war-namely concentration against the main enemy." But though the Prime Minister and his advisers agreed "in principle," Marshall was aware that most of them had "reservations regarding this and that" and stated that it would require "great firmness" to avoid further dispersions. That he was right is borne out by the fact that Churchill later wrote that he regarded SLEDGEHAMMER as impractical and accepted it merely as an additional project to be considered along with invasion of North Africa and perhaps Norway as a possible operation for 1942. At all events, BOLERO was approved by the British on 14 April with only one strongly implied reservation: it was not to interfere with Britain's determination to hold its vital positions in the Middle East and the Indian Ocean area.
While BOLERO-SLEDGEHAMMER was acceptable to the British in mid-April, it remained so for less than two months. By early May
Min of Mtg, U.S.-British Planning Staffs, London, 11 Apr 42, Tab N. ABC 381 BOLERO (3-16-42), 5. For a fuller treatment of these discussions see Gordon A. Harrison, Cross-Channel Attack (Washington, 1951), pp. 13 18, in UNITED STATES ARMY IN WORLD WAR II. Ltr atchd to Min of Mtg, U.S. Representatives-British War Cabinet, Def Com. 14 Apr 42, Chief of Staff 1942-43 files, WDCSA 381.1. Msg, Marshall to McNarney, 13 Apr 42, CM-IN 3457. Churchill, Hinge of Fate, pp. 323-24. Paper, COS, 13 Apr 42, title: Comments on Gen Marshall's Memo, COS (42)97(0) Tab F, ABC 381 BOLERO (3-16-42), 5; Churchill, Hinge of Fate, pp. 181-85; Bryant, Turn of the Tide, pp. 286-87. Stimson and Bundy, On Active Service, pp. 418-19.
they were expressing strong doubts that the resources to launch an early cross-Channel operation could be found. In part the uncertainty was due to the state of the American landing craft production program which was not only lagging far behind schedule but was indefinite as to type and number. What the full requirements in craft would be no one actually knew, for all estimates in regard to both number and type were impressionistic. In the original outline plan, the number needed had been placed at 7,000. This was soon raised to 8,100 by the Operations Division (OPD), still too conservative an estimate in the opinion of many. Lt. Gen. Joseph T. McNarney, Deputy Chief of Staff, for example, considered 20,000 a more realistic figure. As to type, the Army had placed orders with the Navy for some 2,300 craft, mostly small 36-foot vehicle and personnel carriers, for delivery in time for a limited operation in the fall. These, along with 50-foot WM boats (small tank lighters), were considered sufficiently seaworthy by the Navy to negotiate the waters of the English Channel. The rest of the 8,100 were expected to be ready for delivery in mid-April 1943, in time for ROUNDUP.
This construction program, seemingly firm in early April, soon ran into difficulties. Toward the end of April the Navy, after re-examining its own requirements for amphibious operations in the Pacific and elsewhere, concluded it needed about 4,000 craft. If its estimates were allowed to stand, only about half of the Army s needs for SLEDGEHAMMER could be met in the construction program. Some of the resulting deficit might possibly be made up by the British, but this seemed unlikely at the time for their production was also behind schedule.
The second obstacle arose when the British questioned the ability of the landing craft on which construction had begun to weather the severe storms that prevailed in the Channel during the fall and winter months. They convinced the President that their objections to the type of craft under construction in the United States were sound, as indeed they were. The result was that a new program, which shifted the emphasis to the production of larger craft, was drawn up and placed under British guidance. Like the earlier program this one also underwent a series of upward changes.
As the requirements rose, the prospects of meeting them declined. In late May it was still possible to expect delivery in time for ROUNDUP in the spring of 1943 but the hope of obtaining enough craft for SLEDGEHAMMER
Bryant, Turn of the Tide, pp. 300-301.
Leighton and Coakley, Global Logistics, 1940-1943, p. 377.
Ibid. pp. 379-80.
had dwindled. If the latter operation was to be undertaken at all, it would have to be executed with what craft and shipping could be scraped together. This, of course, would increase the danger that SLEDGEHAMMER would become a sacrificial offering launched not in the hope of establishing a permanent lodgment but solely to ease the pressure on the Soviet armies. For this the British, who would be required to make the largest contribution in victims and equipment, naturally had no stomach.
In late May when Vyacheslav M. Molotov, the Soviet Foreign Commissar, visited London to urge the early establishment of a second front in western Europe, he found Churchill noncommittal. The Prime Minister informed him that the British would not hesitate to execute a cross-Channel attack before the year was up provided it was "sound and sensible," but, he emphasized, "wars are not won by unsuccessful operations."
In Washington a few days later, Molotov found that a different view on SLEDGEHAMMER from the one he had encountered in London still prevailed. Roosevelt, much more optimistic than Churchill, told him that he "hoped" and "expected" the Allies to open a second front in 1942 and suggested that the Soviet Union might help its establishment by accepting a reduction in the shipment of lend-lease general supplies. The conversations ended with a declaration drafted by Molotov and accepted by the President which stated that a "full understanding was reached with regard to the urgent tasks of creating a Second Front in Europe in 1942." This statement, although not a definite assurance that a cross-Channel invasion would soon be launched, differed considerably from the noncommittal declarations of the Prime Minister. It clearly indicated that Washington and London were not in full accord on the strategy for 1942 and that further discussions between U.S. and British leaders were necessary to establish a firm agreement.
By the time of the second Washington conference in June 1942 the Prime Minister and his close military advisers, if they ever truly accepted the U.S. strategy proposed by Marshall, had definitely undergone a change of mind. They now contended that an emergency invasion in 1942 to aid Russia would preclude a second attempt for years to come and therefore no direct attack should be undertaken
Quoted in W. K. Hancock and M. M. Gowing, British War Economy, History of the Second World War, United Kingdom Civil Services, (London: H.M. Stationery Office, 1949), pp. 406-07. Matloff and Snell, Strategic Planning, 1941-1942, pp. 231-32; Sherwood, Roosevelt and Hopkins, pp. 568-70. Matloff and Snell, Strategic Planning, 1941-1942, pp. 231-32.
unless the German Army was "demoralized by failure against Russia."
Aware of the fact that the British had grown cool to SLEDGEHAMMER, if not to ROUNDUP, as the strategy for 1942 and 1943 and anxious to get American troops into action against the main enemy as quickly as possible, President Roosevelt in mid-June sounded out his military advisers on the resurrection of GYMNAST. The suggestion met with strong dissent from Secretary of War Stimson and General Marshall, both of whom now were convinced that the British were just as much opposed to ROUNDUP for 1943 as they were to SLEDGEHAMMER in 1942.
In deference to their views, Roosevelt refrained from openly supporting the British position during the June conference in Washington, with the result that the meetings ended with BOLERO and ROUNDUP-SLEDGEHAMMER ostensibly still intact as the basic Anglo-American strategy in the North Atlantic area. But Churchill's vigorous arguments against a 1942 cross-Channel invasion of the Continent and Roosevelt's lively and unconcealed interest in the Mediterranean basin as a possible alternative area of operations indicated that the opponents of diversionary projects were losing ground. The defeat of the British Eighth Army in a spectacular tank battle at Knightsbridge in Libya on 13 June, the subsequent fall of Tobruk on 21 June, followed by the rapid advance of Field Marshal Erwin Rommel's army toward Alexandria and the Suez Canal, further weakened the position of the U.S. military leaders, for as long as Commonwealth forces were fighting with their backs to the wall in Egypt no British Government could be expected to agree to a cross-Channel venture.
Churchill, who had hurriedly returned to England in the crisis created by Rommel's victories, soon made it unmistakably clear that he was adamant in his opposition to any plan to establish a bridgehead on the Continent in 1942. A premature invasion, he reiterated in a cable to Roosevelt, would be disastrous. Instead he recommended that the American military chiefs proceed with planning for GYMNAST while the British investigated the possibility of an attack on Norway (JUPITER) a pet project of his. To his representative in Washington, Field Marshal Sir John Dill, he sent a message making it clear that he wanted a North African operation. "GYMNAST," he stated,
Memo, COS for War Cabinet, 2 Jul 42, sub: Future Operations WP (42) 278, (COS 42)195(0), ABC 381 (7-25-42) Sec. 4-B, 19; Matloff and Snell, Strategic Planning, 1941-1942, p. 266.
Stimson and Bundy, On Active Service, p. 419.
Churchill, Hinge of Fate, pp. 334-35.
"affords the sole means by which the U.S. can strike at Hitler in 1942 .... However if the President decided against GYMNAST the matter is settled" and both countries would have to remain "motionless in 1942." But for the time being the impetuous Prime Minister was in no position to press strongly for the early implementation of the project, eager though he was to assume the offensive. For weeks to come the military situation would demand that every ton of available shipping in the depleted Allied shipping pool be used to move men, tanks, and other materials around southern Africa to hold Egypt and bolster the Middle East against Rommel's army and the even more potentially dangerous German forces in Russia that had conquered Crimea and were massing for an offensive that might carry them across the Caucasus into the vital oil-rich regions of Iraq and the Persian Gulf.
Strong support for the Prime Minister's objections to a premature invasion of the Continent had come from the British Chiefs of Staff. After considering the advantages and disadvantages of SLEDGEHAMMER, they stated in their report to the War Cabinet on 2 July: "If we were free agents we could not recommend that the operation should be mounted." In reaching this conclusion they were ostensibly persuaded by two reports, one from Lord Leathers, British Minister of War Transport, who had estimated that the operation would tie up about 250,000 tons of shipping at a time when shipping could ill be spared, and the other from Lord Louis Mountbatten, which pointed out that, in the absence of sufficient landing craft in the United Kingdom, all amphibious training for other operations, including cross-Channel in 1943, would have to be suspended if SLEDGEHAMMER were undertaken. The War Cabinet immediately accepted the views of the British Chiefs of Staff and on 8 July notified the Joint Staff Mission in Washington of its decision against an operation on the Continent even if confined to a "tip and run" attack.
In submitting its views on the strategy to be followed, the War Cabinet carefully refrained from openly opposing ROUNDUP as an operation for 1943. But the effect was the same since it was not possible to conduct both the African invasion and the cross-Channel attack with the means then at the disposal of the Allies.
See JCS 24th Mtg, 10 July 42; Msg, Churchill to Field Marshal Dill, 12 Jul 42, ABC 381 (7-25-42) Sec. 4-B; Bryant, Turn of the Tide, pp.
How serious the British considered this latter threat to their vital oil resources is clearly indicated in the many references to it in Field Marshal Brooke's diary. See Bryant, Turn of the Tide, Chs. 8, 9.
Memo, COS for War Cabinet, 2 Jul 42, sub: Future Opns WP (42) 278 (COS 42), ABC 381 (7-25-42) Sec. 4-B, 19.
Msg, War Cabinet Offs to Joint Staff Mission, 8 Jul 42; Leighton and Coakley, Global Logistics, 1940-1943, p. 384.
Because of the lag in landing craft construction, the Joint Chiefs of Staff realized that SLEDGEHAMMER was rapidly becoming a forlorn hope. By the end of June, out of a total of 2,698 LCP's, LCV's, and LCM's estimated as likely to be available, only 238 were in the United Kingdom or on the way. By mid-July General Hull informed Eisenhower, who had gone to London, "that all the craft available and en route could land less than 16,000 troops and 1,100 tanks and vehicles." This was 5,000 troops and 2,200 tanks less than the estimates made in mid-May. Despite these discouraging figures, Marshall and King stubbornly continued to object to dropping SLEDGEHAMMER from the books, not because they wanted it but because they clearly recognized that the fate of ROUNDUP was also at stake in the British Government's attitude toward the emergency operation. Whether in earnest or not they now went so far as to advocate that the United States should turn its back on Europe and strike decisively against Japan unless the British adhered "unswervingly" to the "full BOLERO plan." This attitude so impressed Field Marshal Dill that he seriously considered cabling his government that further pressure for GYMNAST at the expense of a cross-Channel operation would drive the Americans into saying, "We are finished off with the West and will go out in the Pacific." What Dill did not know was that Roosevelt was opposed to any action that amounted to an "abandonment of the British." Nor did the President openly agree with his Joint Chiefs of Staff that the British would be as unwilling to accept a large-scale cross-Channel attack in 1943 as in 1942, whatever their present views. He was still determined to commit the Western Allies to action against the Germans before the end of the year, somehow and somewhere. If an agreement with the British on a cross-Channel attack could not be reached he was quite willing to settle for some other operation. Unlike his chief military advisers, he was far from hostile to a campaign in the Mediterranean, the Middle East, or elsewhere in the Atlantic area, if circumstances ruled out SLEDGEHAMMER or ROUNDUP. In fact, Secretary Stimson believed he was weakening on BOLERO and considered him somewhat enamored of the idea of operations in the Mediterranean. The President's willingness to accept a substitute for an early invasion of Europe appears in the instructions he gave Harry Hopkins, General Marshall, and Admiral King
Leighton and Coakley, Global Logistics, 1940-1943, p. 382. Ibid. Memo, King and Marshall for President, 10 Jul 42, WDCSA file BOLERO. Draft Cable in CofS file ABC 381 (7-25-42) Sec. 1. Msg, Roosevelt to Marshall, 14 Jul 42, WDCSA file BOLERO; Sherwood, Roosevelt and Hopkins, p. 602. Stimson and Bundy, On Active Service, p. 425.
when he sent them to England on 18 July with large powers to make a final effort to secure agreement on a cross-Channel attack. Should they become convinced after exploring all its angles with the British that such an operation would not prevent "the annihilation of Russia" by drawing off enemy air power, they were to consider other military possibilities.
As might have been expected, the American delegates failed to convince Churchill or the British military chiefs that an early assault on the Continent was practical. The Prime Minister, after questioning both the urgency and feasibility of SLEDGEHAMMER, again emphasized the value of a North African operation and suggested that if the approaching battle for Egypt went well, it might be possible to carry the war to Sicily or Italy.
A realistic estimate of the military situation at the time indicated that launching a successful operation against the mainland of Europe in 1942 was far from bright. Allied war production potential was still comparatively undeveloped and battle-tested divisions were unavailable. Landing craft, despite a high production priority ordered by the Navy in May, were still scarce, shipping was woefully short, and modern tanks, capable of meeting those of the enemy on equal terms, were just beginning to roll off the assembly lines. Even if the production of materiel could be speeded up time was required to raise and organize a large force and train units in the difficult techniques of amphibious warfare. By according additional overriding priorities to BOLERO, the flow of men, equipment, and supplies to the United Kingdom could be increased, but this meant running the grave danger of crippling forces already engaged with the enemy. Should this risk be accepted, there still remained the problem of erecting a logistical organization that could feed men, equipment, and supplies into the battle area without interruption. Considerable progress had been made in building such an organization in the United Kingdom but it was still far from perfect. Taking all these matters into consideration, along with the likelihood that the Germans would have enough strength in France and the Lowlands to contain an invasion without weakening their eastern front, the Combined Chiefs of Staff concluded that, at best, the only landing that could be made on the Continent in 1942 would be a minor one, aimed at securing a foothold with a port and holding and consolidating it during the winter. But the hard facts mutely argued against pitting any force against a veteran
Memos, Roosevelt for Hopkins, Marshall, and King, 16 Jul 42, sub: Instructions for London Conf, Jul 42, signed original in WDCSA 381, Sec. 1; Sherwood, Roosevelt and Hopkins, pp. 603-05; Matloff and Snell, Strategic Planning, 1941-1942, p. 273. Combined Staff Conf, 20 Jul 42, WDCSA 319.1; Matloff and Snell, Strategic Planning, 1941-1942, p. 278.
army on the chance that it would be sustained during the stormy winter weather.
The Americans saw this as clearly as the British. As realists, they knew that an operation in execution would take priority over one in contemplation, and that it would generate pressures that could upset the basic strategy agreed upon for Europe. The weakness of their stand was that nearly a year would probably elapse during which few Americans other than those in the air force would be in action against the Germans. Such a situation the impatient President whose full support they needed could not bring himself to accept. Knowing this, Churchill and the British Chiefs of Staff reiterated time and again the advantages of a North African operation in conjunction with a counteroffensive in Libya. They stressed all the old arguments: it could lead to the liberation of Morocco, Algeria, and Tunisia, bring the French there back into the war against the Axis, open the Mediterranean to through traffic thus saving millions of tons of shipping, cause the withdrawal of German air power from Russia, and force the Germans and Italians to extend themselves beyond their capacity in reinforcing their trans-Mediterranean and southern front. They would not admit that a North African operation in 1942 would rule out ROUNDUP and contended instead that early action in the Mediterranean would lead to a quick victory which would still permit it to be launched in 1943.
The Americans, on the other hand, continued to hold out for SLEDGEHAMMER. They resisted the idea of dropping SLEDGEHAMMER, primarily in order to forestall a diversionary and indecisive operation which would syphon off resources and prevent a true second front from being established in 1943. Marshall and King, if not Hopkins, were certain that the fate of ROUNDUP was at stake and held as firmly as ever the belief that a direct attack against the Continent was the only way to assist the hard-pressed Soviet armies and seriously threaten the military power of Germany. But because of the President's instructions to agree to some military operations somewhere in 1942, it was impossible for them to hold their ground indefinitely. Their position was not strengthened by the course of events in Russia, in the Middle East, and in the Atlantic, or by the opinion expressed by General Eisenhower-recently appointed Commanding General, European Theater of Operations, United States Army (ETOUSA)-that SLEDGEHAMMER had less than a fair chance of success. Nor were they helped by the secret message from Roosevelt to
Memo, Conclusions as to Practicability of SLEDGEHAMMER, 17 Jul 42; Diary of Commander in Chief, OPD Hist Unit file. This memorandum was prepared by General Eisenhower after consultation with Maj. Gen. Mark W. Clark, Maj. Gen. John C. H. Lee, and Col. Ray W. Barker.
Churchill, saying that "a Western front in 1942 was off" and that he was in favor of an invasion of North Africa and "was influencing his Chiefs in that direction." Furthermore, since a cross-Channel operation to ease the pressure on the Soviet Union would have to be carried out primarily by British forces, because the shipping shortage precluded the flow of U.S. troops and aircraft to the United Kingdom in large proportions before the late fall of 1942, the American representatives could not insist on it. Marshall therefore refrained from pressing for the retention of SLEDGEHAMMER in the BOLERO plan after 23 July but continued to insist on ROUNDUP. This left the whole question of alternative action for 1942 undecided.
Informed of the deadlock by Marshall, Roosevelt sent additional instructions to his representatives in London, directing again that an agreement on an operation for 1942 be reached. This message specifically instructed the American delegation to settle with the British on one of five projects: (1) a combined British-American operation in North Africa (either Algeria or Morocco or both); (2) an entirely American operation against French Morocco (the original GYMNAST); (3) a combined operation against northern Norway (JUPITER); (4) the reinforcement of the British Eighth Army in Egypt; (5) the reinforcement of Iran.
The American military chiefs, Marshall and King, now knew that SLEDGEHAMMER was dead, for no cross-Channel attack was possible in the face of British objections and without the President's strong support. Preferring the occupation of French North Africa with all its shortcomings to a campaign in the Middle East or Norway, they reluctantly accepted GYMNAST. On 24 July a carefully worded agreement, drawn up by Marshall and known as CCS 94, was accepted by the Combined Chiefs of Staff. It contained the important condition that the CCS would postpone until mid-September final decision on whether or not the North African operations should be undertaken. (The date 15 September was chosen because it was considered the earliest possible day on which the outcome in Russia could be forecast.) If at that time the Russians clearly faced a collapse that
Quotation from Brooke's diary, 23 July entry, in Bryant, Turn of the Tide, p. 344. Msg, President to Hopkins, Marshall, and King, 23 Jul 42, WDCSA 381, Sec. I; Matloff and Snell, Strategic Planning, 1941-1942, p. 278; Howe, Northwest Africa, p. 13. For War Department views on Middle East operations see OPD study, 15 Jul 42, sub: Comparison of Opn GYMNAST With Opns Involving Reinforcements of Middle East. Exec 5, Item 1. CCS 34th Mtg, 30 Jul, ABC 381 (7-25-42) Sec. 1.
would release so many German troops that a cross-Channel attack in the spring of 1943 would be impractical, the North African invasion would be launched sometime before 1 December. Meanwhile, planning for ROUNDUP was to continue while a separate U.S. planning staff would work with the British on the North African project, now renamed TORCH.
The door to later reconsideration of the agreement, deliberately left open in CCS 94 by General Marshall in order to save the ROUNDUP concept, did not remain open long. In a message to the President on 25 July, Harry Hopkins urged an immediate decision on TORCH to avoid "procrastination and delays." Without further consulting his military advisers, Roosevelt chose to assume that a North African campaign in 1942 had been definitely decided upon and at once cabled his emissaries that he was delighted with the "decision." At the same time he urged that a target date not later than 30 October be set for the invasion. By ignoring the carefully framed conditions in CCS 94 and in suggesting a date for launching TORCH, the President actually made the decision. In so doing, he effectively jettisoned ROUNDUP for 1943, though he probably did not fully realize it at the time.
Although Marshall must have realized the fatal impact of Roosevelt's action on ROUNDUP he was reluctant to view it as one that eliminated the conditions stipulated in CCS 94. At the first meeting of the Combined Chiefs of Staff held after his return to Washington he therefore refrained from accepting the "decision" as final and pointed out that the mounting of TORCH did not mean the abandonment of ROUNDUP. At the same time, he recognized that a choice between the two operations would have to be made soon "because of the logistic consideration involved," particularly the conversion of vessels to combat loaders which, according to a "flash estimate" of the Navy, would require ninety-six days. Nor was Admiral King willing to admit that the President had fully decided to abandon ROUNDUP as well as SLEDGEHAMMER in favor of TORCH.
If Marshall and King entertained any hope of getting the President to reopen the issue and make a definite choice between ROUNDUP and TORCH they were doomed to disappointment. Instead, on 30
Memo by CCS, 24 Jul 42, sub: Opns in 42 43, circulated as CCS 94, ABC 381 (25 Jul 42). For details, see the treatment of CCS 94 and its interpretation in Matloff and Snell, Strategic Planning, 1941-1942. Sherwood, Roosevelt and Hopkins, p. 611. Msg, President to Hopkins Marshall, and King, 25 Jul 42, WDCSA 381, Sec. 1. This view is also expressed in a personal letter, Marshall to Eisenhower, 30 Jul 42, GCM file under Eisenhower, D. D. Min, 34th Mtg CCS, 30 Jul 42, ABC 381 (7-25-42) Sec. 1.
July, at a meeting at the White House with the Joint Chiefs of Staff, the President stated that "TORCH would be undertaken at the earliest possible date" but made no comment on its possible effect on ROUNDUP. The next day his decision on TORCH was forwarded to the British Chiefs of Staff and to General Eisenhower.
However loath the President's military advisers were to sidetrack plans for the direct invasion of the Continent and accept a secondary project in its place, an attack on French North Africa, alone among the operations considered, met strategic conditions for joint Anglo-American operations in 1942 on which both Churchill and Roosevelt could agree. Without the wholehearted support of the two top political leaders in the United States and Great Britain, no combined operation could be mounted. In short, TORCH from the beginning had support on the highest political level in both countries, an advantage never enjoyed by either ROUNDUP or SLEDGEHAMMER.
The decision to invade North Africa restored Anglo-American cooperative planning, which had been showing signs of serious strain. It was now on a sound working basis that permitted the establishment of rights and priorities with relentless determination. What was still needed was a final agreement between Washington and London on the size, direction, and timing of the contemplated operation. Such an agreement was not easy to reach. The big question to be decided was where the main effort of the Allies should be made and when. On this issue Washington and London were at first far apart.
The strategic planners in Washington, mindful of the dangers in French opposition, hostile Spanish reaction, and a German counterstroke against Gibraltar with or without the support of Spain, proposed making the main landings outside the Mediterranean on the Atlantic coast of French Morocco. Troops would take Casablanca and adjacent minor ports, seize and hold the railroad and highways to the east as an auxiliary line of communications, secure all the approaches to Gibraltar, and consolidate Allied positions in French Morocco before moving into the Mediterranean. This, the planners estimated, would take about three months. The plan was a cautious one,
Memo, Maj Gen Walter B. Smith for JCS, 1 Aug 42, sub: Notes of Conf Held at the White House at 8:30 PM, 30 Jul 42, OPD Exec 5, Item 1, Tab 14. Before leaving London, Marshall informed Eisenhower that he would be in command of the TORCH operation, if and when undertaken, in addition to being Commanding General ETOUSA. This appointment was later confirmed by the CCS. For an extended account of this subject see, Leighton and Coakley, Global Logistics 1940-1943, pp. 427-35.
dictated primarily by the fear that the Strait of Gibraltar might be closed by the Germans or the Spanish, acting singly or together.
The bold course, advocated by the strategic planners in London, including many Americans working with the British, was to strike deep into the Mediterranean with the main force at the outset and then, in co-ordination with the British Eighth Army moving west from Egypt, seize Tunisia before the Germans could reinforce the threatened area. They viewed with feelings approaching consternation the cautious American strategy that would waste precious months in taking ports and consolidating positions over a thousand miles distant from Tunisia, whose early occupation they believed to be vital to the success of TORCH. Should the Germans be permitted to establish themselves firmly in that province it was feared that they might, because of shorter lines of communications and land-based air power, be able to hold out indefinitely, thus preventing the extension of Allied control to the strategic central Mediterranean.
The proponents of the inside approach also stressed the relative softness of the Algerian coastal area as compared with that around Casablanca. In their view Algeria with its favorable weather and tide conditions, more numerous and better ports, and proximity to Tunisia seemed to have every advantage over western Morocco as the main initial objective. They believed that even in the matter of securing communications it would be safer to move swiftly and boldly through the Strait of Gibraltar and seize ports along the Algerian coast as far east as Philippeville and Bone. Strong determined action there would cow the Spanish and make them hesitate to permit German entry into Spain for a joint attack on Gibraltar. On the other hand they contended that an unsuccessful attack in the Casablanca area, where operations were extremely hazardous because of unfavorable surf conditions four days out of five, would almost certainly invite Spanish intervention.
For weeks arguments for and against both strategic concepts were tossed back and forth across the Atlantic in what has aptly been called a "transatlantic essay contest." Meanwhile preparations for the attack languished. A logical solution to the problem was to reconcile the conflicting views by combining both into a single plan. This, General Eisenhower, who had been designated to command the operation
Ltr, Prime Minister to Harry Hopkins, 4 Sep 42, as quoted in Churchill, Hinge of Fate, p. 539; Bryant, Turn of the Tide, pp. 401-02. For an extended account see Leighton and Coakley, Global Logistics, 1940-1943, pp. 417-24.
before Marshall left London, attempted to do in his first outline plan of 9 August when he proposed approximately simultaneous landings inside and outside the Mediterranean, the first strong and the latter relatively weak.
Almost immediately the plan struck snags in the form of insufficient naval air support and assault shipping. Shortly after it was submitted, both the American and the British Navies suffered severe losses in naval units, particularly in aircraft carriers. Since close land-based air support would be negligible, confined to a single airfield at Gibraltar under the domination of Spanish guns, carriers were necessary to protect assault and follow-up convoys for the operation. In view of the recent naval losses and needs elsewhere in the world, finding them would take time. The U.S. Navy quickly let it be known that it had no carriers immediately available to fill the void and was unwilling to commit itself on when they would be. This meant that the burden of supplying seaborne air protection would probably fall on the British.
Equally if not more important in determining the size and timing of the landings was the availability of assault shipping. Most of the American APA's (assault troop transports) were tied up in the Pacific where they were vitally needed. To transport the twelve regimental combat teams, envisioned as the force needed to make the three landings, would require 36 APA's and 9 to 12 AKA's (attack cargo transports); and as yet the program for converting conventional transports to assault transports had hardly begun. On 2 August the Navy estimated that sufficient assault shipping, trained crews, and rehearsed troops for an operation of the size originally contemplated would not be ready for landings before 7 November. The British were against postponing the operation and, to gain time, were willing to skimp on the training and rehearsals of assault units and boat crews. The President sided with them on an early attack and on 12 August directed Marshall to try for a 7 October landing date even if it meant the reduction of the assault forces by two thirds. It now fell to Eisenhower and his planning staff to rearrange their plan in the light of available resources and under the pressure for quick action.
In his second outline plan of 21 August Eisenhower set 15 October
Draft Outline Plan (Partial) Opn TORCH, Hq ETOUSA, 9 Aug 42, ABC 381 (7-25-42) 4A. The United States Navy lost a carrier and several cruisers in the Guadalcanal operation; the Royal Navy, one aircraft carrier sunk and one damaged in trying to reinforce Malta. Conversion had begun on ten small vessels taken off the BOLERO run. Bryant, Turn of the Tide, p. 400.
as a tentative date for the invasion and proposed dropping the Casablanca operation entirely and concentrating on the capture of Oran in Algeria. That having been accomplished, he would move in two directions, eastward into Tunisia and southwest across the mountains into French Morocco. This plan seemed to ignore the danger to the Allies' line of communications from the direction of both Gibraltar and Spanish Morocco should Spain join the Axis Powers. It also failed to take sufficiently into account the shortage in naval escorts and the logistical problems involved in funneling all the men, equipment, and supplies needed to seize Algiers, French Morocco, and Tunisia into the port of Oran, whose facilities might not be found intact. The complicated convoy arrangements for the assault, follow-up, and build-up phases of the operation that would have to be made were enough by themselves to doom the plan in the eyes of the military chiefs in Washington as too risky.
In response to continuous pressure from the President and the Prime Minister for an early assault, Eisenhower advanced D Day from 15 October to 7 October, when the moon would be in a phase that would facilitate surprise. This date he viewed as the earliest practical time for the beginning of the invasion. But few informed leaders believed that this date could be met. Admiral King considered 24 October more likely, and even the British planners, who were consistently more optimistic about an early D Day than their American colleagues, admitted that meeting the proposed date would require a "superhuman effort."
The most serious problem confronting planners on both sides of the Atlantic continued to be the scarcity of assault shipping. The Navy's original estimate of fourteen weeks as the time required to convert conventional ships to assault vessels, train crews, rehearse troops in embarkation and debarkation, load troops and cargo, and sail from ports of embarkation in the United States and the United Kingdom to destination remained unchanged. This meant that 7 November, the date given in the original estimate, would be the earliest possible day for the assault to begin. The Navy might also have pointed to the shortage of landing craft for transporting tanks and other assault vehicles as an argument against an early D Day. LST's were under construction at the time but none were expected to be available before October or November.
Msg, Eisenhower to AGWAR, 22 Aug 42, copy in ABC 381 (7-25-42), Sec. 4-B. Msg, King to Marshall, 22 Aug 42, sub: Sp Opns, OPD Exec 5, Item 1; Msg 236, COS to Jt Staff Mission, 4 Aug 42, Exec 5, Item 2. No LST's actually became available in time for the initial landings but three "Maracaibos," forerunners of the LST's, were.
Nevertheless Roosevelt and Churchill, impatient of delay, continued to insist on an early invasion date. It was such pressure in the face of shipping, equipment, and training deficiencies that was responsible for Eisenhower's 21 August proposal to limit drastically the size of the assault and confine it entirely to the Mediterranean.
The plan found few supporters even among those who made it. Eisenhower himself regarded it as tentative and the date of execution probably too early because as yet little progress had been made in planning the force to be organized in the United States and not enough was known about scheduling convoys, the availability of air and naval support, or the amount of resistance that could be expected.
So widely varying were the reactions to the plan in Washington and London that a reconciliation of views appeared impossible. Fortunately for the success of the operation, a spirit of compromise developed. By 24 August the British military chiefs were willing to moderate their stand for an early invasion somewhat and even to accept the idea of a Casablanca landing, provided the scope of TORCH was enlarged to include an attack on Philippeville, a port close to Tunisia. Their willingness to make concessions, however, was contingent on a greater naval contribution by the United States. The proposal was unacceptable to the American Joint Chiefs of Staff who now used the 21 August plan to bolster their original argument that the main blow should be struck in the west, outside the Mediterranean, at or near Casablanca. They would accept an assault on Oran along with one on Casablanca but none against ports farther to the east. They were also willing to adjust Eisenhower's directive as he had requested, bringing his mission more in line with his resources, but they stubbornly opposed any increase in the U.S. Navy's contribution which would weaken the fleet in critical areas elsewhere in the world.
Such was the status of TORCH planning when Churchill returned from Moscow where he had been subjected to Stalin's taunts because of the failure of the Western Allies to open up a second front on the Continent. Only by playing up the military advantages of TORCH and giving assurances that the invasion would begin no later than 30 October had he been able to win the Soviet leader's approval of the operation. Thus committed, it is no wonder that Churchill was alarmed at the turn matters had taken during his absence from London. With characteristic vigor he at once sprang into action to restore the strategic concept of TORCH to the shape he believed essential to success.
Matloff and Snell, Strategic Planning, 1941-1942, p. 289. Bryant, Turn of the Tide, p. 403. Churchill, Hinge of Fate, pp. 484-86; Bryant, Turn of the Tide, pp. 373-74.
In a series of messages to Roosevelt, he urged the establishment of a definite date for D Day, and argued eloquently for an invasion along the broadest possible front in order to get to Tunisia before the Germans. "The whole pith of the operation will be lost," he cabled, "if we do not take Algiers as well as Oran on the first day." At the same time he urged Eisenhower to consider additional landings at Bone and Philippeville. He was confident that a foothold in both places could be attained with comparative ease and expressed the opinion that a strong blow deep inside the Mediterranean would bring far more favorable political results vis-a-vis Spain and the French in North Africa than would an assault on Casablanca. He was not opposed to a feint on that port but he feared making it the main objective of the initial landings. Because of the dangerous surf conditions, he argued, "Casablanca might easily become an isolated failure and let loose upon us ... all the perils which have anyway to be faced." As to the time of the attack, he would launch it by mid-October at the latest. To meet that target date, he believed naval vessels and combat loaders could be found somewhere and outloading speeded up.
Roosevelt, equally unwilling to accept a delay, proposed in his reply two simultaneous landings of American troops, one near Casablanca, the other at Oran, to be followed by the seizure of the road and rail communications between the two ports and the consolidation of a supply base in French Morocco that would be free from dependence on the route through the Strait of Gibraltar. He appreciated the value of three landings but pointed out that there was not currently on hand or in sight enough combat shipping and naval and air cover for more than the two landings. He agreed however that both the Americans and the British should re-examine shipping resources "and strip everything to the bone to make the third landing possible." In his reply Roosevelt also conveyed his views on the national composition of the forces to be used in the initial landings within the Mediterranean. Recent intelligence reports from Vichy and North Africa had convinced him that this was a matter of such great political import that the success or failure of TORCH might well depend on the decision made. These reports indicated that in the breasts of most Frenchmen in North Africa an anti-British sentiment still rankled in consequence of the evacuation at Dunkerque, the destruction
Churchill, Hinge of Fate, p. 528. Ibid., p. 530. Msg 1511, London to AGWAR, 26 Aug 42, ABC 381 (7-25-42) Sec. 4 B. Churchill, Hinge of Fate, p. 531. Msg, Roosevelt to Churchill, 30 Aug 42, Exec 5, Item 1; Churchill, Hinge of Fate, p. 532.
struction visited on the French fleet at Mers-el-Kebir, British intervention in the French dependencies of Syria and Madagascar, and the abortive attack by British-sponsored de Gaulle forces on Dakar. Both the President and his advisers were convinced that the strength of this sentiment was such that the inclusion of British troops in the assault was extremely dangerous. Roosevelt therefore insisted on confining the initial landings to American troops.
Churchill did not share the view that Americans "were so beloved by Vichy" or the British "so hated" that it would "make the difference between fighting and submission." Nevertheless he was quite willing to go along with the President's contention that the British should come in after the political situation was favorable, provided the restriction did not compromise the size or employment of the assault forces. At the same time he appropriately pointed out that the American view on the composition of the assault would affect shipping arrangements and possibly subsequent operations. Since all the assault ships would be required to lift purely American units, British forces would have to be carried in conventional vessels that could enter and discharge at ports. This necessarily would delay follow-up help for some considerable time should the landings be stubbornly opposed or even held up.
As a result of the transatlantic messages between the two political leaders, a solution to the impasse of late August gradually but steadily began to emerge. On 3 September, Roosevelt, who had promised to restudy the feasibility of more than two landings, came up with a new plan in which he proposed three simultaneous landings: at Casablanca, Oran, and Algiers. For Casablanca he proposed a force of 34,000 in the assault and 24,000 in the immediate follow-up (all United States); for Oran, 25,000 in the assault and 20,000 in the immediate follow-up (all United States); for Algiers, 10,000 in the initial beach landing (all United States) to be followed within an hour by British forces. All British forces in the follow-up, the size of which would be left to Eisenhower, would debark at the port of Algiers from non-combat loaded vessels. All the American troops for the Casablanca landing were to come directly from the United States; all those for Oran and Algiers, from the American forces in the United Kingdom. As for shipping, the United States could furnish enough combat loaders, ready to sail on 20 October, to lift 34,000 men and sufficient transports and cargo vessels to lift and support 52,000 additional troops. Total available shipping under U.S. control, he estimated, was enough to move the first three convoys of the proposed Casablanca force. This did not include either the American transports, sufficient to lift 15,000 men, or the nine cargo vessels in the United Kingdom that had previously been earmarked for the TORCH operation. Under the President's proposal, the British would have to furnish (1) all the shipping (including combat loaders) for the American units assigned to take Oran and Algiers except the aforementioned American vessels in the United Kingdom, (2) the additional British troops required for the Algiers assault and follow-up, and (3) the naval forces for the entire operation, less those that the United States could furnish for the Casablanca expedition.
AFHQ Commander in Chief Despatch, North African Campaign, p. 4. These views of Churchill are not in accord with the reports from British intelligence agents that Churchill showed Harry Hopkins in July when he was urging the United States to accept a North African offensive. Nor are they the same as those expressed in his message of 12 July to Dill. Sherwood, Roosevelt and Hopkins, pp. 610-11; Msg, Churchill to Field Marshal Dill, 12 Jul 42, ABC 381 (7-25-42) Sec. 4. Churchill, Hinge of Fate, p. 534.
Churchill replied to the American proposal at once, suggesting only one modification of importance, a shift of ten or twelve thousand troops from the Casablanca force to that at Oran in order to give more strength to the inside landings. Unless this was done, he pointed out, the shortage in combat loaders and landing craft would rule out an assault on Algiers.
Roosevelt consented to a reduction of approximately 5,000 men in the Casablanca force and expressed the belief that this cut, along with a previous one made in the Oran force, would release enough combat loaders for use at Algiers. Whatever additional troops were needed for that landing the President believed could be found in the United Kingdom. To these proposals the Prime Minister agreed on 5 September.
The scope and direction of the landings were now decided; the "transatlantic essay contest" was over. Only the date of the invasion remained to be settled. The planning staffs in both Washington and London, after six weeks of frustrating uncertainty, could now breathe a sigh of relief and proceed with definite operational and logistical preparation without the harassing fear that the work of one day would be upset by a new development in strategy the next.
The final decision represented a compromise on the conflicting strategic concepts of Washington and London. It sought to minimize the risks to the line of communications involved in putting the full strength of the Allied effort inside the Mediterranean without giving up hope of gaining Tunisia quickly. The plan to make initial landings east of Algiers at Philippeville and Bone, advocated by the British, was abandoned but the assault on Algiers was retained at the expense of the forces operating against Casablanca and Oran. The political desirability of an all-American assault, though probably still valid, was compromised to the extent that British forces were to be used at Algiers in the immediate follow-up and for the eastward push into Tunisia after a lodgment had been attained.
Msg 144, Prime Minister to Roosevelt, 5 Sep 42, Exec 5, Item 1; Churchill, Hinge of Fate, Ch. VII; Bryant, Turn of the Tide, p. 403.
No date was set for the attack. The decision the Combined Chiefs left to Eisenhower who had a number of matters to consider in making it. Because of broad political and strategic reasons and the normal deterioration in weather conditions in the area of impending operations during the late fall, the earlier the landings, the better. The vital need for tactical surprise pointed to the desirability of a new-moon period. But in the final analysis D Day would be determined by the time needed to assemble and prepare necessary shipping, acquire naval escorts, equip American units in the United Kingdom, and train assault troops and landing craft crews in amphibious operations. By mid-September Eisenhower was sufficiently convinced that his logistical and training problems could be solved by late October and so he set 8 November for the attack.
His optimism that this date could be met was not shared by all his staff, particularly those acquainted with the tremendous logistical tasks that remained to be completed. More than the political leaders and strategic planners they realized that no task forces of the size contemplated could be fully equipped and shipped in the short time remaining, no matter how strongly imbued with a sense of urgency everyone concerned might be. If there was to be an invasion at all in November, they realized that the Allies would have to cut deeply into normal requirements and resort to considerable improvisation. Events were to prove that those who doubted the complete readiness to move on 8 November were correct.
Even in retrospect, it is debatable whether the decision to invade North Africa was the soundest strategic decision that could have been made at the time and under the existing circumstances. If there had to be an operation in the Atlantic area in 1942 that had a chance of success, few students of World War II will dispute today that TORCH was to be preferred over SLEDGEHAMMER. The shortage of landing craft and other resources necessary to attain a lodgment in northwest Europe and to sustain it afterward was sufficient reason for the rejection of SLEDGEHAMMER. There was little real doubt but that TORCH would siphon off the necessary men and equipment required for ROUNDUP in 1943. This the American military leaders saw clearly as did the British, although the latter never admitted it openly in conference. The real question therefore remains: Was it wise to embark on an operation in the northwest African area in 1942 at the expense of a possible direct attack against the Continent in 1943? The British as a group and some Americans, notably the President, believed it was; most of the American military leaders and strategic planners thought otherwise.
CCS 103/3, 26 Sep 42, Sub: Outline Plan Opn TORCH. Leighton and Coakley, Global Logistics, 1940-1943, p. 424. Memo, Col Hughes, DCAO AFHQ, for Gen Clark, 14 Sep 42, sub: Estimate of the Supply and Administrative Aspects of Proposed Operations, original in European Theater of Operations file, USFET AG 400, Supplies and Equipment, Vol. V.
The preference of the British for TORCH undoubtedly stemmed fundamentally from their opposition to an early frontal assault on Festung Europa. Their inclination for a peripheral strategy was based in part on tradition, in part on previous experience in the war, in part on the desirability of opening up the Mediterranean, and in part on the need of bolstering their bastions in the Middle East. More than the Americans they knew what it meant to try to maintain a force in western Europe in the face of an enemy who could move swiftly and powerfully along inner overland lines of communications. Having encountered the force of German arms on the Continent earlier in the war, they naturally shied away from the prospect of meeting it head on again until it had been thoroughly weakened by attrition.
The American military leaders, on the other hand, less bound by tradition and confident that productive capacity and organization would give the Allies overwhelming odds within a short time, believed the war could be brought to an end more quickly if a main thrust was directed toward the heart of the enemy. In their opinion the enemy, softened by heavy and sustained preliminary bombardment from the air, would become a ready subject for such a thrust by the summer of 1943. They also believed that an early cross-Channel attack was the best way to help the Russians whose continued participation in the war was a matter of paramount importance. They did not want SLEDGEHAMMER any more than the British, but fought against scrapping it before Russia's ability to hold out was certain. They opposed entry into North Africa because they did not consider it an area where a vital blow could be struck and because they wanted to save ROUNDUP. Churchill, Brooke, and others may assert, as they do, that no cross-Channel attack would have been feasible in 1942 or in 1943 because the Allies lacked the means and the experience in conducting amphibious warfare, and because the enemy was too strong in western Europe. Marshall and his supporters can contend with equal vigor that had not TORCH and the preparations for subsequent operations in the Mediterranean drained off men and resources, depleted the reserves laboriously built up in the United Kingdom under the BOLERO program, wrecked the logistical organization in process of being established there, and given the enemy an added year to prepare his defenses, a cross-Channel operation could have been carried out successfully in 1943 and the costly war brought to an end earlier. Whose strategy was the sounder will never be known. The decision that was made was a momentous one in which political and military considerations were so intermingled that it is difficult to determine which carried the greater weight. For that reason if for no other, it will be the subject of controversy as long as men debate the strategy of World War II.
George F. Howe, Northwest Africa: Seizing the Initiative in the West (Washington, 1957), in UNITED STATES ARMY IN WORLD WAR II, covers in detail the operations that led to victory in Tunisia in May 1943. The Navy story is related by Samuel Eliot Morison, The Battle of the Atlantic, September 1939-May 1943, Vol. I (Boston: Little, Brown and Company, 1950), and Operations in North African Waters, October 1942-June 1943, Vol. II (Boston: Little, Brown and Company, 1950), History of United States Naval Operations in World War II. Books that deal with the TORCH decision are: Maurice Matloff and Edwin M. Snell, Strategic Planning for Coalition Warfare 1941-1942 (Washington, 1953) and Richard M. Leighton and Robert W. Coakley, Global Logistics and Strategy, 1940-1943, in UNITED STATES ARMY IN WORLD WAR II; Robert E. Sherwood, Roosevelt and Hopkins: An Intimate History (New York: Harper & Brothers, 1948); Henry L. Stimson and McGeorge Bundy, On Active Service in Peace and War (New York: Harper & Brothers, 1948); Winston S. Churchill, The Hinge of Fate (Boston: Houghton Mifflin Company, 1950); Arthur Bryant, The Turn of the Tide (Garden City, N.Y.: Doubleday and Company, 1957).
LEO J. MEYER, Historian with OCMH since 1948. B.A., M.A., Wesleyan University; Ph.D., Clark University. Taught: Clark University, Worcester Polytechnical Institute, New York University. Deputy Chief Historian,
OCMH. Troop Movement Officer, New York Port of Embarkation; Chief of Movements, G-4, European theater; Commanding Officer, 14th Major Port, Southampton, England; Secretariat, Transportation Board. Legion of Merit, Bronze Star, O.B.E. Colonel, TC (Ret.). Author: Relations Between the United States and Cuba, 1898-1917 (Worcester, 1928); articles in Encyclopedias Americana and Britannica, Dictionary of American Biography, Dictionary of American History, and various professional journals. Co-author: The Strategic and Logistical History of the Mediterranean Theater of Operations, to be published in UNITED STATES ARMY IN WORLD WAR II. | http://www.history.army.mil/books/70-7_07.htm | 13 |
53 | [1.] Introduction: Arguments
Some of the material in this section of notes, as well as in the next few sections, should be familiar to you from Critical Thinking (PHIL 2020), which is a prerequisite for this class.
The word “logic” has more than one meaning:
1. the discipline of logic = the area of inquiry that studies reasoning and arguments; it is particularly concerned with what distinguishes good arguments from bad ones. (In this sense of the word “logic,” it refers to one of the four traditional branches of philosophy, along with metaphysics, epistemology and ethics.)
2. a symbolic logic = an artificial language (i.e., a language deliberately created by human beings, as opposed to “natural languages” like English, French, Japanese, Urdu, etc.) the purpose of which is to help us to reason in a clearer and more precise way. (The creation and study of symbolic logics is a part of the discipline of logic.)
This course deals with logic in both senses of the word. We will use a symbolic logic—an artificial language—in order to study reasoning and arguments, with the goal of distinguishing between good arguments and bad arguments.
[1.2.] Arguments, Validity and Symbolism.
The discipline of logic, including the study of symbolic logics, involves the evaluation of arguments, i.e., examining arguments to distinguish the good ones from the bad ones.
Logicians do not use the word “argument” to mean a heated, angry exchange of words. Rather, they mean:
argument (df.): a set of statements some of which (the argument’s premises) are intended to serve as evidence or reasons for thinking that another statement (the argument’s conclusion) is true. In the textbook’s words: “a series of statements, one of which is the conclusion (the thing argued for) and the others are the premises (reasons for accepting the conclusion).” (p.1)
Here is a simple example of an argument:
All men are mortal.
Socrates is a man.
Therefore, Socrates is mortal.
Here is another:
Some women are US Senators.
Olympia Snowe is a woman.
Therefore, Olympia Snowe is a US Senator.
The first argument has a desirable logical property that the second one lacks: validity.
validity (df.): A valid argument is one in which the truth of the premises would guarantee the truth of the conclusion [this is an incomplete definition; we'll get to a full definition in the next class.]
It is essential to understand that validity does not require true premises. An argument’s validity has nothing to do with whether its premises are actually true or false. As an illustration, consider the following valid argument:
All astronauts are cannibals. (false)
Lane is an astronaut. (false)
Therefore, Lane is a cannibal.
This argument is valid, even though it has two false premises. It is valid because, if both of its premises were true, then the conclusion would have to be true, as well.
One of the central goals of this class is to help you distinguish arguments that are valid from arguments that are not.
You will learn how to do this by translating arguments into symbolic logic. For example, the Socrates argument translated into the system of symbolic logic contained in your textbook looks like this:
(x)(Mx ⊃ Rx)
Ms /\ Rs
You will learn a system of rules for deriving conclusions from premises within this system of symbolic logic. If you can derive a given conclusion from given premises according to those rules, then the argument in question is valid. The conclusion of the Socrates argument can be derived from its premises by those rules:
1. (x)(Mx ⊃ Rx) p
2. Ms p / \Rs
3. Ms ⊃ Rs 1 UI
4. Rs 2, 3 MP
[the left-hand column shows the premises and conclusion; the right-hand column indicates which rule has been applied to generate each line.]
…but the conclusion of the Olympia Snowe argument cannot be.
You are not expected to understand any of this symbolism today. It is this language of symbolic logic that you will be learning in this course.
Here is another example of an argument, from p.1 of your textbook:
Identical twins often have different IQ test scores. Yet such twins inherit the same genes. So environment must play some part in determining IQ. (p.1)
The first two statements are the premises; the last is the conclusion. We can indicate the difference between premises and conclusion by exhibiting each statement of the argument on a different line, as your textbook does:
1. Identical twins often have different IQ test scores.
2. Identical twins inherit the same genes.
3. So environment must play some part in determining IQ. (p.1)
Notice that one and the same argument can be formatted in different ways. This IQ argument was first formatted as prose (like you might find it in, e.g., a magazine article) and then formatted in a more formal way. Sometimes this more formal way of exhibiting an argument, with each statement numbered and on a separate line, premises first and conclusion last, is called standard form.
[1.3.] Indicator Words.
Notice the word “so” in the IQ argument. This is one of many English language words that frequently (but not always) indicate what role a given statement plays in an argument. Some others are:
because, since, for, as, seeing that, inasmuch as, as indicated by, in that, may be inferred from, given that, for the reason that, owing to
therefore, wherefore, thus, so, consequently, accordingly, as a result, for this reason, it follows that, implies that, we may infer, we may conclude
The second argument featured on p.1 includes more of these “indicator words”:
Since it’s wrong to kill a human being, it follows that abortion is wrong, because abortion takes the life of (kills) a human being. (p.1)
Notice that the conclusion does not have to be the last statement to occur within the argument as the argument is stated or printed. When an argument appears in prose form (rather than in standard form), its conclusion can appear before, after, or among its premises. So do not assume that a statement appearing at the end of an argument in prose form is the conclusion.
Here is the abortion argument in standard form (p.2):
1. It’s wrong to kill a human being.
2. Abortion takes the life of (kills) a human being.
\ 3. Abortion is wrong.
Notice that \ is being used as a symbol for the word “therefore.”
[1.4.] Statements and Evidence.
The definition of “argument” given above specifies that all the sentences in an argument must express statements. In other words, they must be declarative sentences (not questions, commands, or exclamations) and thus eligible to be true or false.
The definition of “argument” also specifies that even if every sentence in a group of sentences expresses a statement, that group of sentences does not necessarily constitute an argument.
For an argument, something else is needed: the sentences have to be connected in a specific way, viz. some of them must be intended as evidence for, or reasons for accepting, another.
This is the difference between arguments, on the one hand, and exposition and explanation, on the other.
EXAMPLES 1-4 (pp.2-3)
· note that “because” indicates a premise in #2, but not in #1.
· note that “therefore” indicates a conclusion in #4, but not in #3.
Exercise 1-1 (pp.3-4)
· [do all of the even problems NOW, if we have time—if we finish those, go back through the odds]
· do the rest of these problems for the next class; if it is an argument, then write it out in standard form, indicating which statements are premises and which is a conclusion (paraphrasing is OK); if it is not, then just write “NO”
· check your answers to the even-numbered questions against the back of the book; we will go through the odd-numbered questions at the beginning of the next class.
Stopping point for Tuesday January 10. For next time, complete exercise 1-1 and read all of chapter 1.
I adapt this distinction from Wilfrid Hodges, “Classical Logic I: First-Order Logic,” in The Blackwell Guide to Philosophical Logic, ed. Lou Goble, Malden, MA: Blackwell, 2001, 9-32, p.9.
Your textbook uses the concept of standard form only in a narrower sense, in which it describes only syllogisms; see p.357.
These lists are from Hurley, Concise Introduction to Logic, p.3.
A more technical definition of “statement” is this: the content expressed by (i.e., what is said by) a declarative sentence. There has been a lot of debate among philosophers about what sorts of things are capable of truth and falsity: sentences, statements, or propositions. See for example Susan Haack, Philosophy of Logics (1978) ch.6.
This page last updated 1/10/2012.
Copyright © 2012 Robert Lane. All rights reserved. | http://www.westga.edu/~rlane/symbolic/lecture01_intro-to-arguments.html | 13 |
21 | In chemistry, chemical synthesis is a purposeful execution of chemical reactions to obtain a product, or several products. This happens by physical and chemical manipulations usually involving one or more reactions. In modern laboratory usage, this tends to imply that the process is reproducible, reliable, and established to work in multiple laboratories.
A chemical synthesis begins by selection of compounds that are known as reagents or reactants. Various reaction types can be applied to these to synthesize the product, or an intermediate product. This requires mixing the compounds in a reaction vessel such as a chemical reactor or a simple round-bottom flask. Many reactions require some form of work-up procedure before the final product is isolated. The amount of product in a chemical synthesis is the reaction yield. Typically, chemical yields are expressed as a weight in grams or as a percentage of the total theoretical quantity of product that could be produced. A side reaction is an unwanted chemical reaction taking place that diminishes the yield of the desired product.
The word synthesis in the present day meaning was first used by the chemist Hermann Kolbe.
Many strategies exist in chemical synthesis that go beyond converting reactant A to reaction product B. In cascade reactions multiple chemical transformations take place within a single reactant, in multi-component reactions up to 11 different reactants form a single reaction product and in a telescopic synthesis one reactant goes through multiple transformations without isolation of intermediates.
Organic synthesis
Organic synthesis is a special branch of chemical synthesis dealing with the synthesis of organic compounds. In the total synthesis of a complex product it may take multiple steps to synthesize the product of interest, and inordinate amounts of time. Skill in organic synthesis is prized among chemists and the synthesis of exceptionally valuable or difficult compounds has won chemists such as Robert Burns Woodward the Nobel Prize for Chemistry. If a chemical synthesis starts from basic laboratory compounds and yields something new, it is a purely synthetic process. If it starts from a product isolated from plants or animals and then proceeds to new compounds, the synthesis is described as a semisynthetic process.
Other meanings
The other meaning of chemical synthesis is narrow and restricted to a specific kind of chemical reaction, a direct combination reaction, in which two or more reactants combine to form a single product. The general form of a direct combination reaction is:
- A + B → AB
- 2Na + Cl2 → 2 NaCl (formation of table salt)
- S + O2 → SO2 (formation of sulfur dioxide)
- 4 Fe + 3 O2 → 2 Fe2O3 (iron rusting)
- CO2 + H2O → H2CO3 (carbon dioxide dissolving and reacting with water to form carbonic acid)
4 special synthesis rules:
- metal-oxide + H2O → metal(OH)
- non-metal-oxide + H2O → oxi-acid
- metal-chloride + O2 → metal-chlorate
- metal-oxide + CO2 → metal carbonate (CO3)
See also
- Beilstein database
- Chemical engineering
- Methods in Organic Synthesis
- Organic synthesis
- Peptide synthesis
- Total synthesis
| http://en.wikipedia.org/wiki/Chemical_synthesis | 13
40 | The functions in the printf family produce output according to a format as described below. The functions printf and vprintf write output to stdout, the standard output stream; fprintf and vfprintf write output to the given output stream; sprintf, snprintf, vsprintf and vsnprintf write to the character string str.
The functions vprintf, vfprintf, vsprintf, vsnprintf are equivalent to the functions printf, fprintf, sprintf, snprintf, respectively, except that they are called with a va_list instead of a variable number of arguments. These functions do not call the va_end macro. Consequently, the value of ap is undefined after the call. The application should call va_end(ap) itself afterwards.
These eight functions write the output under the control of a format string that specifies how subsequent arguments (or arguments accessed via the variable-length argument facilities of stdarg(3)) are converted for output.
(The glibc implementation of snprintf and vsnprintf returned -1 when the output was truncated, until glibc 2.0.6. Since glibc 2.1 these functions follow the C99 standard and return the number of characters (excluding the trailing '\0') which would have been written to the final string if enough space had been available.)
Format of the format string
The format string is a character string, beginning and ending in its initial shift state, if any. The format string is composed of zero or more directives: ordinary characters (not %), which are copied unchanged to the output stream; and conversion specifications, each of which results in fetching zero or more subsequent arguments. Each conversion specification is introduced by the character %, and ends with a conversion specifier. In between there may be (in this order) zero or more flags, an optional minimum field width, an optional precision and an optional length modifier.
The arguments must correspond properly (after type promotion) with the conversion specifier. By default, the arguments are used in the order given, where each `*' and each conversion specifier asks for the next argument (and it is an error if insufficiently many arguments are given). One can also specify explicitly which argument is taken, at each place where an argument is required, by writing `%m$' instead of `%' and `*m$' instead of `*', where the decimal integer m denotes the position in the argument list of the desired argument, indexed starting from 1. Thus,
printf("%*d", width, num); and printf("%2$*1$d", width, num); are equivalent. The second style allows repeated references to the same argument. The C99 standard does not include the style using `$', which comes from the Single Unix Specification. If the style using `$' is used, it must be used throughout for all conversions taking an argument and all width and precision arguments, but it may be mixed with `%%' formats which do not consume an argument. There may be no gaps in the numbers of arguments specified using `$'; for example, if arguments 1 and 3 are specified, argument 2 must also be specified somewhere in the format string.
For some numeric conversions a radix character (`decimal point') or thousands' grouping character is used. The actual character used depends on the LC_NUMERIC part of the locale. The POSIX locale uses `.' as radix character, and does not have a grouping character. Thus,
the conversion %'.2f applied to the value 1234567.89 results in `1234567.89' in the POSIX locale, in `1234567,89' in the nl_NL locale, and in `1.234.567,89' in the da_DK locale.
The flag characters
The character % is followed by zero or more of the following flags:
#  The value should be converted to an ``alternate form''. For o conversions, the first character of the output string is made zero (by prefixing a 0 if it was not zero already). For x and X conversions, a non-zero result has the string `0x' (or `0X' for X conversions) prepended to it. For a, A, e, E, f, F, g, and G conversions, the result will always contain a decimal point, even if no digits follow it (normally, a decimal point appears in the results of those conversions only if a digit follows). For g and G conversions, trailing zeros are not removed from the result as they would otherwise be. For other conversions, the result is undefined.
0  The value should be zero padded. For d, i, o, u, x, X, a, A, e, E, f, F, g, and G conversions, the converted value is padded on the left with zeros rather than blanks. If the 0 and - flags both appear, the 0 flag is ignored. If a precision is given with a numeric conversion (d, i, o, u, x, and X), the 0 flag is ignored. For other conversions, the behavior is undefined.
-  The converted value is to be left adjusted on the field boundary. (The default is right justification.) Except for n conversions, the converted value is padded on the right with blanks, rather than on the left with blanks or zeros. A - overrides a 0 if both are given.
(a space) A blank should be left before a positive number (or empty string) produced by a signed conversion.
+  A sign (+ or -) will always be placed before a number produced by a signed conversion. By default a sign is used only for negative numbers. A + overrides a space if both are used.
The five flag characters above are defined in the C standard. The SUSv2 specifies one further flag character.
'  For decimal conversion (i, d, u, f, F, g, G) the output is to be grouped with thousands' grouping characters if the locale information indicates any. Note that many versions of gcc cannot parse this option and will issue a warning. SUSv2 does not include %'F.
glibc 2.2 adds one further flag character.
I  For decimal integer conversion (i, d, u) the output uses the locale's alternative output digits, if any (for example, Arabic digits). However, it does not include any locale definitions with such outdigits defined.
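As an illustration (an added sketch, not part of the original manual text), a small program along the following lines exercises several of these flags; the output each call produces, assuming a C99-conforming printf, is noted in the comments:

    #include <stdio.h>

    int main(void)
    {
        printf("[%d]\n", 42);     /* [42]      default                  */
        printf("[%+d]\n", 42);    /* [+42]     + forces a sign          */
        printf("[% d]\n", 42);    /* [ 42]     space flag               */
        printf("[%06d]\n", 42);   /* [000042]  zero padding to width 6  */
        printf("[%-6d]\n", 42);   /* [42    ]  left adjusted in width 6 */
        printf("[%#x]\n", 42);    /* [0x2a]    alternate form           */
        return 0;
    }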
The field width
An optional decimal digit string (with nonzero first digit) specifying a minimum field width. If the converted value has fewer characters than the field width, it will be padded with spaces on the left (or right, if the left-adjustment flag has been given). Instead of a decimal digit string one may write `*' or `*m$' (for some decimal integer m) to specify that the field width is given in the next argument, or in the m-th argument, respectively, which must be of type int. A negative field width is taken as a `-' flag followed by a positive field width. In no case does a non-existent or small field width cause truncation of a field; if the result of a conversion is wider than the field width, the field is expanded to contain the conversion result.
The precision
An optional precision, in the form of a period (`.') followed by an optional decimal digit string. Instead of a decimal digit string one may write `*' or `*m$' (for some decimal integer m) to specify that the precision is given in the next argument, or in the m-th argument, respectively, which must be of type int. If the precision is given as just `.', or the precision is negative, the precision is taken to be zero. This gives the minimum number of digits to appear for d, i, o, u, x, and X conversions, the number of digits to appear after the radix character for a, A, e, E, f, and F conversions, the maximum number of significant digits for g and G conversions, or the maximum number of characters to be printed from a string for s and S conversions.
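A short sketch (again not from the original page) of how field width and precision combine; the expected output, assuming a C99-conforming printf, is shown in the comments:

    #include <stdio.h>

    int main(void)
    {
        printf("[%8.3f]\n", 3.14159);  /* [   3.142]  width 8, 3 digits after the point */
        printf("[%.2s]\n", "abcdef");  /* [ab]        precision caps string output      */
        printf("[%*d]\n", 5, 42);      /* [   42]     width taken from an int argument  */
        return 0;
    }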
The length modifier
Here, `integer conversion' stands for d, i, o, u, x, or X conversion.
hh  A following integer conversion corresponds to a signed char or unsigned char argument, or a following n conversion corresponds to a pointer to a signed char argument.
h  A following integer conversion corresponds to a short int or unsigned short int argument, or a following n conversion corresponds to a pointer to a short int argument.
l  (ell) A following integer conversion corresponds to a long int or unsigned long int argument, or a following n conversion corresponds to a pointer to a long int argument, or a following c conversion corresponds to a wint_t argument, or a following s conversion corresponds to a pointer to wchar_t argument.
ll  (ell-ell). A following integer conversion corresponds to a long long int or unsigned long long int argument, or a following n conversion corresponds to a pointer to a long long int argument.
L  A following a, A, e, E, f, F, g, or G conversion corresponds to a long double argument. (C99 allows %LF, but SUSv2 does not.)
q  (`quad'. BSD 4.4 and Linux libc5 only. Don't use.) This is a synonym for ll.
j  A following integer conversion corresponds to an intmax_t or uintmax_t argument.
z  A following integer conversion corresponds to a size_t or ssize_t argument. (Linux libc5 has Z with this meaning. Don't use it.)
t  A following integer conversion corresponds to a ptrdiff_t argument.
The SUSv2 only knows about the length modifiers h (in hd, hi, ho, hx, hX, hn) and l (in ld, li, lo, lx, lX, ln, lc, ls) and L (in Le, LE, Lf, Lg, LG).
The conversion specifier
A character that specifies the type of conversion to be applied. The conversion specifiers and their meanings are:
d, i  The int argument is converted to signed decimal notation. The precision, if any, gives the minimum number of digits that must appear; if the converted value requires fewer digits, it is padded on the left with zeros. The default precision is 1. When 0 is printed with an explicit precision 0, the output is empty.
o, u, x, X  The unsigned int argument is converted to unsigned octal (o), unsigned decimal (u), or unsigned hexadecimal (x and X) notation. The letters abcdef are used for x conversions; the letters ABCDEF are used for X conversions. The precision, if any, gives the minimum number of digits that must appear; if the converted value requires fewer digits, it is padded on the left with zeros. The default precision is 1. When 0 is printed with an explicit precision 0, the output is empty.
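For instance (an added sketch, not from the original page), the integer conversions and the # flag behave as follows; the expected output is in the comments:

    #include <stdio.h>

    int main(void)
    {
        unsigned int v = 255;
        printf("%o %u %x %X\n", v, v, v, v);  /* 377 255 ff FF  */
        printf("%#o %#x %#X\n", v, v, v);     /* 0377 0xff 0XFF */
        return 0;
    }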
e, E  The double argument is rounded and converted in the style [-]d.ddde±dd, where there is one digit before the decimal-point character and the number of digits after it is equal to the precision; if the precision is missing, it is taken as 6; if the precision is zero, no decimal-point character appears. An E conversion uses the letter E (rather than e) to introduce the exponent. The exponent always contains at least two digits; if the value is zero, the exponent is 00.
f, F  The double argument is rounded and converted to decimal notation in the style [-]ddd.ddd, where the number of digits after the decimal-point character is equal to the precision specification. If the precision is missing, it is taken as 6; if the precision is explicitly zero, no decimal-point character appears. If a decimal point appears, at least one digit appears before it.
(The SUSv2 does not know about F and says that character string representations for infinity and NaN may be made available. The C99 standard specifies `[-]inf' or `[-]infinity' for infinity, and a string starting with `nan' for NaN, in the case of f conversion, and `[-]INF' or `[-]INFINITY' or `NAN*' in the case of F conversion.)
g, G  The double argument is converted in style f or e (or F or E for G conversions). The precision specifies the number of significant digits. If the precision is missing, 6 digits are given; if the precision is zero, it is treated as 1. Style e is used if the exponent from its conversion is less than -4 or greater than or equal to the precision. Trailing zeros are removed from the fractional part of the result; a decimal point appears only if it is followed by at least one digit.
a, A  (C99; not in SUSv2) For a conversion, the double argument is converted to hexadecimal notation (using the letters abcdef) in the style [-]0xh.hhhhp±d; for A conversion the prefix 0X, the letters ABCDEF, and the exponent separator P is used. There is one hexadecimal digit before the decimal point, and the number of digits after it is equal to the precision. The default precision suffices for an exact representation of the value if an exact representation in base 2 exists and otherwise is sufficiently large to distinguish values of type double. The digit before the decimal point is unspecified for non-normalized numbers, and nonzero but otherwise unspecified for normalized numbers.
c  If no l modifier is present, the int argument is converted to an unsigned char, and the resulting character is written. If an l modifier is present, the wint_t (wide character) argument is converted to a multibyte sequence by a call to the wcrtomb function, with a conversion state starting in the initial state, and the resulting multibyte string is written.
s  If no l modifier is present: The const char * argument is expected to be a pointer to an array of character type (pointer to a string). Characters from the array are written up to (but not including) a terminating NUL character; if a precision is specified, no more than the number specified are written. If a precision is given, no null character need be present; if the precision is not specified, or is greater than the size of the array, the array must contain a terminating NUL character.
If an l modifier is present: The const wchar_t * argument is expected to be a pointer to an array of wide characters. Wide characters from the array are converted to multibyte characters (each by a call to the wcrtomb function, with a conversion state starting in the initial state before the first wide character), up to and including a terminating null wide character. The resulting multibyte characters are written up to (but not including) the terminating null byte. If a precision is specified, no more bytes than the number specified are written, but no partial multibyte characters are written. Note that the precision determines the number of bytes written, not the number of wide characters or screen positions. The array must contain a terminating null wide character, unless a precision is given and it is so small that the number of bytes written exceeds it before the end of the array is reached.
C  (Not in C99, but in SUSv2.) Synonym for lc. Don't use.
S  (Not in C99, but in SUSv2.) Synonym for ls. Don't use.
p  The void * pointer argument is printed in hexadecimal (as if by %#x or %#lx).
n  The number of characters written so far is stored into the integer indicated by the int * (or variant) pointer argument. No argument is converted.
To print pi to five decimal places:
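A minimal sketch of such a call (computing pi as 4*atan(1.0); link with -lm):

    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        fprintf(stdout, "pi = %.5f\n", 4 * atan(1.0));  /* prints "pi = 3.14159" */
        return 0;
    }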
To print a date and time in the form `Sunday, July 3, 10:02', where weekday and month are pointers to strings:
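One way such a call might look (the surrounding function and its parameter names are illustrative, not from the original):

    #include <stdio.h>

    void print_time(const char *weekday, const char *month,
                    int day, int hour, int min)
    {
        /* e.g. ("Sunday", "July", 3, 10, 2) prints "Sunday, July 3, 10:02" */
        fprintf(stdout, "%s, %s %d, %.2d:%.2d\n",
                weekday, month, day, hour, min);
    }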
Many countries use the day-month-year order. Hence, an internationalized version must be able to print the arguments in an order specified by the format:
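A sketch of such a call (the wrapper and variable names are illustrative; format is the locale-dependent string discussed next):

    #include <stdio.h>

    void print_time_i18n(const char *format, const char *weekday,
                         const char *month, int day, int hour, int min)
    {
        /* 'format' may use %m$ argument numbering to permute the arguments */
        fprintf(stdout, format, weekday, month, day, hour, min);
    }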
where format depends on locale, and may permute the arguments. With the value "%1$s, %3$d. %2$s, %4$d:%5$.2d\n" one might obtain `Sonntag, 3. Juli, 10:02'.
To allocate a sufficiently large string and print into it (code correct for both glibc 2.0 and glibc 2.1):
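A sketch of such a routine, in the spirit of the example the page refers to: it retries vsnprintf with a growing buffer, handling both the glibc 2.0 behavior (return -1 on truncation) and the glibc 2.1/C99 behavior (return the length that would have been written):

    #include <stdarg.h>
    #include <stdio.h>
    #include <stdlib.h>

    char *
    make_message(const char *fmt, ...)
    {
        /* Guess we need no more than 100 bytes. */
        int n, size = 100;
        char *p, *np;
        va_list ap;

        if ((p = malloc(size)) == NULL)
            return NULL;

        while (1) {
            /* Try to print in the allocated space. */
            va_start(ap, fmt);
            n = vsnprintf(p, size, fmt, ap);
            va_end(ap);
            /* If that worked, return the string. */
            if (n > -1 && n < size)
                return p;
            /* Else try again with more space. */
            if (n > -1)          /* glibc 2.1: */
                size = n + 1;    /* precisely what is needed */
            else                 /* glibc 2.0: */
                size *= 2;       /* twice the old size */
            if ((np = realloc(p, size)) == NULL) {
                free(p);
                return NULL;
            }
            p = np;
        }
    }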
The fprintf, printf, sprintf, vprintf, vfprintf, and vsprintf functions conform to ANSI X3.159-1989 (``ANSI C'') and ISO/IEC 9899:1999 (``ISO C99''). The snprintf and vsnprintf functions conform to ISO/IEC 9899:1999.
Concerning the return value of snprintf, the SUSv2 and the C99 standard contradict each other: when snprintf is called with size=0 then SUSv2 stipulates an unspecified return value less than 1, while C99 allows str to be NULL in this case, and gives the return value (as always) as the number of characters that would have been written in case the output string has been large enough.
Linux libc5 knows about the five C standard flags and the ' flag, locale, %m$ and *m$. It knows about the length modifiers h,l,L,Z,q, but accepts L and q both for long doubles and for long long integers (this is a bug). It no longer recognizes FDOU, but adds a new conversion character m, which outputs strerror(errno).
glibc 2.0 adds conversion characters C and S.
glibc 2.1 adds length modifiers hh,j,t,z and conversion characters a,A.
Because sprintf and vsprintf assume an arbitrarily long string, callers must be careful not to overflow the actual space; this is often impossible to assure. Note that the length of the strings produced is locale-dependent and difficult to predict. Use snprintf and vsnprintf instead (or asprintf and vasprintf).
| http://www.wlug.org.nz/asprintf(3)?action=PageInfo | 13
17 | A Brief history of Einstein’s special theory of relativity. The main conclusions of Einstein’s special theory of relativity are the Lorentz transformation equations. They are called the “Lorentz transformation equations,” because they had already been discovered, before Einstein’s first paper, by H. A. Lorentz, taking a Newtonian approach. That is where I will pick up the story about the Einsteinian revolution in physics, since spatiomaterialism is merely following in the footsteps of Lorentz. What I will call the four “Lorentz distortions” are sufficient to explain all of the predictions by which Einstein’s special theory of relativity has been confirmed.
Lorentz. By 1887, some eighteen years before Einstein’s paper, Michelson and Morley had made experiments that showed that light has the same velocity relative to any object, regardless of its own motion. What made their result puzzling was the Newtonian assumption that the medium in which light propagates is a “luminiferous ether,” a very subtle kind of material substance that was supposed to be at rest in absolute space. Given that the velocity of light is everywhere the same relative to absolute space, they expected that the velocity of light, as measured from a material object, to vary with that object’s own velocity in absolute space—just as the velocity of ripples propagating in a pond arrive faster (or slower), when a boat is moving toward them (or away from them).
Michelson and Morley used an interferometer, which compares the two-way velocities of light in perpendicular directions; that is, light is reflected back from mirrors in perpendicular directions and the signals are compared to see if one is lagging behind the other. They made measurements at various points in the Earth’s orbit around the sun, where the Earth should have different velocities in absolute space. On a moving object, the time it takes for light to travel both to and from a distant mirror in the direction of absolute motion should be different from the time it takes to travel an equal distance in the transverse direction. The margins of error were small enough, given the velocity of light and the velocity of the Earth in its orbit around the sun, that it should have been possible for their interferometer to detect absolute velocity. But Michelson and Morley failed to detect any difference at all in the time it took light to travel the same distance in perpendicular directions. Absolute motion could not be detected.
Length contraction. The Michelson-Morley result was surprising, but even before Einstein published his special theory in 1905, Lorentz had proposed a Newtonian explanation of it. Lorentz showed, in 1895, that their result could be explained physically, if the motion of such an apparatus in absolute space caused its length to shrink in the direction of motion as a function of its velocity by a factor of √(1 - v²/c²). Lorentz argued that this length contraction is a real physical change in the material object that depends on its motion relative to absolute space.
The equation was L = Lo√(1 - v²/c²), where Lo was the length at absolute rest. The shrinkage had been proposed independently by George F. Fitzgerald in 1889 and hence became known as the “Lorentz-Fitzgerald contraction”.
Lorentz tried to explain the length contraction physically, as an effect of motion through a stagnant ether on the electrostatic forces among its constituent, charged particles. But he could just as well have taken it to be a law of physics, making the Lorentz-Fitzgerald contraction the discovery of a new, basic physical law. (An ontological explanation of it will be suggested in the last section of this discussion of the special theory of relativity.)
Lorentz also described the length contraction as a mathematical transformation between the coordinates of a reference frame based on the moving material object and the coordinates of a reference frame at absolute rest. Lorentz started with the Galilean transformation by which Newtonians would obtain the spatial coordinates used on an object in uniform motion in the x-direction, or x’ = x - vt, and combining that with the length contraction he had discovered, he came up with the transformation equation x' = (x - vt)/√(1 - v²/c²) for obtaining the spatial coordinates on the moving material object.
Time dilation. There is, however, another distortion that material objects undergo as a function of their absolute motion. That is a slowing down of clocks (and physical processes generally) at the same rate as the length contractions, or the so-called "time dilation," which took somewhat longer for Lorentz to discover.
The Galilean transformation for time in Newtonian physics is simply t = t', because Newtonian physics assumes that time is the same everywhere. But by using transformation equations to describe the distortions in material objects, Lorentz found that he had to introduce a special equation for transforming time: t’ = t - vx/c² (Goldberg, p. 94). The new factor in the transformation equation, vx/c², implied that time on the moving frame varies with location in that frame. Lorentz called it "local time," but he did not attribute any physical significance to it. "Local time" is not compatible with the belief in absolute space and time, and Lorentz described it as “no more than an auxiliary mathematical quantity” (Torretti, p. 45, 85), insisting that his transformation equations were merely “an aid to calculation” (Goldberg, p. 96).
The slowing down of physical processes is called “time dilation.” Lorentz discovered this distortion by tinkering with various ways of calculating the coordinates used on inertial reference frames in relative motion. Thus, it is natural to describe time dilation as the slowing down of clocks on the moving reference frame. It was included in the final version of Lorentz's explanation, now called the “Lorentz transformation equations.” (Lorentz 1904) Those equations contained not only the length contraction and transformation for “local time”, but also the implication that clocks on moving frames are slowed down at the same rate as lengths are contracted (that is, by the factor √(1 - v²/c²)). The final Lorentz equation for time transformation included both the variation in local time and time dilation: t' = (t - vx/c²)/√(1 - v²/c²).
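Written out together (this is simply a restatement of the equations just described, for a frame moving with velocity v along the x-direction), the transformation reads:

    x' = (x - vt)/√(1 - v²/c²)
    t' = (t - vx/c²)/√(1 - v²/c²)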
Though Lorentz took the distortions that he discovered in fast-moving material objects to be laws of nature, he did not think that they were basic. He thought they were effects of motion on the interactions between electrons and the ether which could be explained by his electronic theory of matter, and he saw explaining this effect as the the main challenge to Newtonian physics. The transformation equations themselves never seemed puzzling to Lorentz, because he never took them to more than just a mathematical aid to calculation.
Poincaré. H. Poincaré thought he saw more clearly what Lorentz had discovered than Lorentz himself. As early as 1895, Poincaré had expressed dissatisfaction with Lorentz’s piecemeal approach, introducing one modification of the laws of Newtonian physics after another in order to account for different aspects of the phenomenon discovered by Michelson and Morley. Instead of such ad hoc modifications, he urged the recognition of what he called a “principle of relativity” to cover all the phenomena involved in fast-moving objects. As Poincaré put it in 1904, the principle of relativity requires that “the laws of physical phenomena should be the same for an observer at rest or for an observer carried along in uniform movement of translation, so that we do not and cannot have any means of determining whether we actually undergo a motion of this kind” (from Torretti, 83).
A principle of relativity like this had, in effect, been affirmed by Newton himself, when he admitted that his laws of motion depend, not on the absolute velocities of material objects, but only on their relative velocities. That is, Newton had already denied that absolute rest could be detected by mechanical experiments. It seemed that absolute motion could be detected only when Maxwell had discovered that light could be explained as an electromagnetic wave. Thus, Poincaré saw Lorentz's discovery of distortions in fast-moving material objects as a way of extending Newton’s principle of relativity to cover electromagnetic phenomena.
Understanding how the undetectability of absolute motion could be a result of the distortions that Lorentz had discovered, he referred to Lorentz theory as “Lorentz’s principle of relativity” even after Einstein had published his special theory and Lorentz himself was attributing the principle of relativity to Einstein (Torretti 85, Goldberg 212, and Holton 178). Indeed, Poincaré joined Lorentz in the attempt to explain the Lorentz distortions by the motion of material objects through absolute space, also expecting to find their cause in the dynamics of electrons; he also thought that motion through the ether caused material objects to shrink in the direction of motion and natural clocks to slow down by the exact amount required to mask their motion, as implied by Lorentz’s transformation equations (Goldberg 94-102, Torretti 38-47). Furthermore, Poincaré apparently thought that what Lorentz said about those equations in his 1904 work answered his own demand that it be a “demonstration of the principle of relativity with a single thrust” (Goldberg 214-15).
Lorentz's explanation of the distortions was not, however, a complete explanation of the principle of relativity. There are really two quite different aspects of the phenomenon described by the principle of relativity, and Lorentz had explicitly explained only one of them.
What Lorentz’s electron theory of matter (and Poincaré’s own refinements of it) explained physically were the Lorentz distortions in material objects with absolute velocity. That explained the negative outcome of the Michelson-Morley experiment: the contraction of lengths in the direction of motion and the slowing down of clocks as a function of motion through absolute space does make it physically impossible to detect absolute motion on a moving object by measuring the velocity of light relative to it. And that is one way in which inertial reference frames are empirically equivalent, because it holds of measurements made using any material object in uniform motion as one's reference frame, regardless of its motion through absolute space.
But there is more to the principle of relativity than explaining the null result of the Michelson-Morley experiment. The transformation equations that Lorentz constructed to describe the effects of absolute motion on material objects predict the outcomes of other experiments, such as attempts to measure directly the lengths of high-velocity measuring rods and the rate at which high-velocity clocks are ticking away. Though such experiments are more difficult to perform, they are conceivable, and Lorentz's equations do make predictions about them: moving measuring rods will be shrunken in the direction of motion and moving clocks will be slowed down. That suggests another way of detecting absolute motion. One might compare measuring rods or clocks that are moving at a whole range different velocities with one another and take the one with the longest measuring rods and quickest clocks to be closest to absolute rest. Hence, the principle of relativity would be false.
It is not possible, however, to detect absolute rest in this way, and as it happens, its impossibility is also predicted by Lorentz's theory, because he formulated his description of the Lorentz distortions in terms of transformation equations. Transformation equations are equations for transforming the coordinates obtained by using one material objects as a frame of reference into the coordinates obtained by using another material object as a frame of reference, and to be consistent, they must work both ways. That is, it must be possible to obtain the original coordinates by applying the transformation equations to the transformed coordinates. Thus, whatever distortions observers at absolute rest may find in material objects with a high absolute velocity will also be found by observers in absolute motion in material objects that are at absolute rest.
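For example, solving the two equations above for the unprimed coordinates gives back expressions of exactly the same form with v replaced by -v, which is the two-way symmetry at issue:

    x = (x' + vt')/√(1 - v²/c²)
    t = (t' + vx'/c²)/√(1 - v²/c²)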
The recognition that Lorentz's theory, being formulated in terms of transformation equations, implied that all such inertial reference frames are empirically equivalent is presumably what led Poincaré to proclaim that Lorentz had finally explained the truth of the principle of relativity. Absolute rest and motion cannot be detected from any inertial reference frame.
Lorentz's theory was not, however, an adequate explanation of the principle of relativity, for there is still something puzzling about the empirical equivalence entailed by the symmetry of the Lorentz transformation equations.
Lorentz meant his transformation equations to be a way of describing the length contraction and time dilation in material objects with absolute motion, for that would explain the Michelson-Morley experiment, that is, why absolute motion cannot be detected by measuring the velocity of light in different directions. But since the transformation equations describe a symmetry between the members of any pair of inertial reference frames, they imply that observers using a fast-moving material object as the basis of their reference frame would observe a length contraction in measuring rods that were at absolute rest and a time dilation in clocks at absolute rest. That makes it impossible to detect absolute rest or motion by comparing different inertial reference frames with one another. But it is puzzling, because it is hard to see how both views could be true at the same time, that is, how two measuring rods passing one another at high velocity could both be shorter than the other and how two clocks passing by one another could both be going slower than the other.
In other words, Lorentz's theory does not really give a physical explanation of what Poincaré called the "principle of relativity." What entails the truth of the principle of relativity is the description of the Lorentz distortions in terms of transformation equations; the inability to detect absolute rest and motion by comparing inertial frames with one another comes from the symmetrical relationship that transformation equations represent as holding between the members of any pair of inertial reference frames. That symmetry is not physically possible, at least, not in the sense of "physical" that Lorentz had in mind when he tried to explain the distortions as occurring to material objects because of their motion in absolute space. If inertial frames are material objects in absolute space, then their measuring rods cannot both be shorter than the other and their clocks cannot both be slower.
As we shall see, what enables Lorentz's transformation equations to predict the symmetry of distortions is the "local time" factor in the time equation, vx/c2, which Lorentz insisted was just an "aid to calculation." It represents the readings that would be given by clocks on a moving reference frame that have been synchronized by using light signals between them as if they were all at absolute rest, that is, on the assumption that the one-way velocity of light is the same both ways along the pathway between any two clocks (as required by Einstein's definition of simultaneity at a distance). That assumption is false, as Lorentz understood these phenomena, and clocks on the moving inertial frame would be mis-synchronized. It can be shown, as we shall see, that this way of mis-synchronizing clocks on a moving frame combines with the Lorentz distortions that the moving frame is actually suffering to make it appear that its own Lorentz distortions are occurring in the reference frame at absolute rest (or moving more slowly). This is a physical explanation, given how the other frame's measuring rods and clocks are measured. But it is an explanation of the principle of relativity that reveals it to be the description of a mere appearance. Though there is an empirical equivalence among inertial frames, a physicist who accepted Lorentz's Newtonian assumptions would insist that it has a deeper physical explanation.
It was not Lorentz, however, but Poincaré who declared that Lorentz had explained the truth of the principle of relativity, and Poincaré's acceptance of Lorentz's explanation as adequate may have been colored by his own philosophical commitment to conventionalism. Poincaré viewed the choice between Euclidean and non-Euclidean geometry as conventional, and he argued that convention is also what raised inertia and the conservation of energy to the status of principles that could not be empirically falsified. Poincaré's acceptance of the principle of relativity should probably be understood in the context of this more or less Kantian skepticism about knowing the real nature of what exists. Considering how the standard of simultaneity at a distance varies from one inertial reference frame to another (depending on the "local time" factor in the Lorentz transformation equations), the principle of relativity could also be seen as a conventional truth.
Poincaré's pronouncement that Lorentz's theory had explained the principle of relativity could not have sat well with Lorentz himself. Lorentz may have continued to call it "Einstein's principle of relativity" because he realized that it was not explained by his theory about how spatial and temporal distortions are caused in material objects by their absolute motion. What is responsible for the principle of relativity is the symmetry in pairs of inertial frames entailed by his equations being transformation equations. If the distortions didn't hold symmetrically in any pair of inertial frames, it would be possible to detect absolute rest and motion. But to my knowledge, Lorentz never argued explicitly that what he called "local time" on the moving material object (that is, vx/c² in the time equation) represents a mis-synchronization of clocks on the moving frame that causes the moving frame's own Lorentz distortions to appear to be occurring in the other inertial reference frame.
The Newtonian explanation of all the relevant phenomena did not, therefore, have an adequate defender. Lorentz was more concerned to find an adequate physical explanation of the distortions he had discovered in material objects, and Poincaré was more interested in defending conventionalism. That is the Newtonian context in which Einstein's special theory of relativity won the day.
Einstein. Einstein took a dramatically different approach from both Lorentz and Poincaré. Instead of taking the principle of relativity to be an empirical hypothesis that could be explained physically by deeper, Newtonian principles, or as a conventional truth, Einstein raised the principle of relativity to the status of a postulate, which was not to be explained at all, but rather accepted as basic and used to explain other phenomena (Zahar 90-2). The mathematical elegance of Einstein's explanation of these phenomena is stunning. From the premise that all inertial reference frames are empirically equivalent, he derived a description of how two different inertial reference frames would appear to each other; that is, he deduced the Lorentz transformation equations.
Einstein's new approach can be seen most clearly by considering the structure of his argument. It is represented below in a diagrammatic form.
| The Principle of Relativity | The laws of nature apply the same way on all inertial frames. | |
| The Light Postulate | The velocity of light is the same on all inertial frames. | |
| The Definition of Simultaneity at a Distance | The local event halfway through the period required for light to travel to the distant event and back is simultaneous with the distant event. | |
| | To obtain the second frame's coordinates from the first frame: | To obtain the first frame's coordinates from the second frame: |
| Lorentz transformation equations (kinematic phenomena) | x' = (x − vt)/√(1 − v²/c²),  t' = (t − vx/c²)/√(1 − v²/c²) | x = (x' + vt')/√(1 − v²/c²),  t = (t' + vx'/c²)/√(1 − v²/c²) |
| Relativistic increase in mass (dynamic phenomena) | m = m₀/√(1 − v²/c²) | m = m₀/√(1 − v²/c²) |
The assumption that inertial frames are all empirically equivalent takes the form of three premises in Einstein’s argument: the Principle of Relativity, the Light Postulate, and Einstein's Definition of Simultaneity at a Distance (see table). Einstein's principle of relativity holds, with Poincaré, that the laws of nature hold in the same way on every inertial reference frame. That allowed Einstein to assume that Maxwell's laws of electromagnetism hold universally, and he considered what would be true of two different inertial frames in the same world. But in order to deduce the Lorentz transformation equations, Einstein also had to assume that the velocity of light is the same relative to every inertial frame (the light postulate) and, accordingly, that simultaneity at a distance is defined on each reference frame as if the velocity of light is the same both to and back from a distant object.
What Einstein deduced from these premises are the “Lorentz transformation equations,” that is, equations for transforming the coordinates of any given inertial reference frame into those of any other.
The Lorentz transformation equations imply that any material object moving relative to any other inertial frame at a velocity approaching that of light will appear to suffer the Lorentz distortions: its clocks (and all physical processes) will be slowed down, and its measuring rods (and all material objects) will be shortened in the direction of its motion—both by the same factor, √(1 − v²/c²), which is a function of its velocity in the observer’s reference frame.
Einstein also inferred from these kinematic distortions and his principle of relativity that the mass of objects moving in an inertial frame increases at the same rate, making three distortions altogether. That dynamical implication is the source of Einstein's most famous equation, E = mc².
It should be emphasized that there are really two sets of transformation equations. It may not seem that way, because Einstein's conclusion is often stated as just one of the two sets of equations listed above, making it look mathematically simpler. But that formulation overlooks a mathematical detail and thereby obscures what Einstein's conclusion is about.
Though the Lorentz transformation is exactly the same both ways between the members of any pair of inertial reference frames, it requires two, non-identical sets of transformation equations, because their relative velocity has the opposite sign for each observer. That is, the two coordinate systems are set up so that their origins coincide when t = 0 and t' = 0, and since they are moving in opposite directions, the relative velocity is v for one of them and -v for the other. Thus, in order for the transformation to be symmetrical, one set of transformation equations has to have the opposite sign for the second factor in the numerator of the equations for space and time.
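A short sketch in Python (assuming the standard form of the equations, with c = 1 and arbitrary sample values) shows the two sets at work: the same function, applied with +v and then with −v, carries an event's coordinates from one frame to the other and back again.

```python
# Sketch of the two sets of equations (standard form assumed, units with c = 1):
# the same transformation applied with +v and then with -v returns the original
# coordinates, which is the symmetry described above.
import math

def lorentz(t, x, v, c=1.0):
    """Coordinates (t', x') of an event as measured in a frame moving at velocity v."""
    gamma = 1.0 / math.sqrt(1.0 - v**2 / c**2)
    return gamma * (t - v * x / c**2), gamma * (x - v * t)

v = 0.8
t, x = 3.0, 1.5
t_p, x_p = lorentz(t, x, v)             # first frame -> second frame (velocity +v)
t_back, x_back = lorentz(t_p, x_p, -v)  # second frame -> first frame (velocity -v)

print(t_p, x_p)          # 3.0 -1.5 (approximately)
print(t_back, x_back)    # recovers 3.0 1.5 up to rounding
```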
Since this seems to be a mere technicality, the conclusions of Einstein’s argument are usually represented as a single set of Lorentz transformation equations (the first set in the above table). Duplication is avoided by introducing a special mathematical symbol to make the single set of equations represent both transformations in any pair of inertial frames. Thus, Einstein's conclusion seems more like just another universal law of nature. But this is just homage to the Pythagorean ideal of mathematical simplicity, which obscures the fact that Einstein's theory is, in the first instance, about the symmetry that holds between the members of every pair of inertial frames.
It should also be emphasized that Einstein's theory is about how reference frames are related, and only indirectly about the material objects on which they are based. Though it does have implications concerning the relationship between material objects with a high relative velocity, that relationship is described by way of a mathematical transformation that holds between the reference frames based on them.
Inertial reference frames are based on material objects that are not being accelerated, and what makes the material object a reference frame is that it is used as the basis for a coordinate system by which the locations and times of events throughout the universe can be measured. (For this purpose, it is useful to think of an inertial reference frame as a grid of rigid bars extending wherever needed in space with synchronized clocks located everywhere.)
Notice that Einstein's three premises are all about reference frames based on material objects. Indeed, his definition of simultaneity prescribes how clocks must be synchronized to set up such a reference frame. The light postulate makes explicit the assumption about the velocity of light on which his definition of simultaneity is based. And the principle of relativity states that all the laws of physics will hold the same way within that reference frame as every other one, that is, will make correct predictions about what happens in that reference frame.
Einstein derives conclusions from his premises by assuming that there are two different inertial reference frames in the world and figuring out how they must appear to one another. Since his premises are about their reference frames, it is hardly surprising that his conclusion is about a mathematical transformation between their coordinates.
Indirectly, however, Einstein's conclusion is a description of how material objects with different constant velocities are related to one another as parts of the same world, since the reference frames in question are based on material objects. But to see Einstein's conclusion as a description of how material objects are related in space is to take Lorentz's approach. For Lorentz, these same transformation equations were just a mathematically convenient way of describing from the absolute frame the spatial and temporal distortions that occur in material objects with a high velocity in absolute space.
By calling his argument a theory of relativity, Einstein emphasized that his theory is about the empirical equivalence of all inertial reference frames, not the relationship between the material objects on which they are based. Observers on each inertial reference frame have their own view of the relationship between the material objects involved, but they are different views, and it is their views that are related by the Lorentz transformation equations. The symmetry of the relationship between their reference frames is what is crucial for Einstein, because that is what rules out any way of detecting absolute rest or motion by comparing inertial frames to one another and ensures that there is nothing to distinguish one inertial frame from another except their velocities relative to one another.
The Lorentz distortions in material objects are, however, a consequence of the Lorentz transformation equations that Einstein deduced. And if one does follow Lorentz, interpreting them as a way of describing the material objects on which the inertial reference frames are based, then the Lorentz transformation equations lead to paradoxes, as I have already suggested. Those equations imply that observers using any given inertial reference frame will find the Lorentz distortions occurring in the material objects on which the other inertial reference frame is based, and thus, the symmetry of the transformation for any pair of inertial frames leads to paradoxes.
Consider two inertial frames in motion relative to one another. From the first frame it appears that clocks on the second frame are slowed down. That would make sense, if from the second frame, it appeared that first-frame clocks were speeded up. But special relativity implies that it also appears from the second frame that clocks on the first frame are slowed down. That is, the distortions are symmetrical on Einstein’s theory, not the reverse of one another, as one might expect. And if the Lorentz distortions are really symmetrical, it is inconceivable that the two inertial frames are just material objects moving relative to one another in absolute space, because in absolute space, there can’t be two clocks next to one another both of which are actually going slower than the other. If one assumes that Einstein's theory is describing material objects, one must give up the assumption that those objects are located in absolute space. They are, of course, parts of the same world, but they must be related to one another in some other way.
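The symmetry can be checked numerically. The following Python sketch (standard transformation assumed, c = 1, illustrative velocity) shows that a clock at rest in either frame is judged from the other frame to take γ units of that frame's time per tick.

```python
# Sketch of the symmetry claim (standard transformation assumed, c = 1):
# each frame finds the other frame's clock taking gamma of its own time units per tick.
import math

def lorentz(t, x, v):
    gamma = 1.0 / math.sqrt(1.0 - v**2)
    return gamma * (t - v * x), gamma * (x - v * t)

v = 0.6
gamma = 1.0 / math.sqrt(1.0 - v**2)

# One tick (t' = 0 to t' = 1) of a clock at rest at the second frame's origin,
# transformed into the first frame (whose velocity relative to the second is -v):
t0, _ = lorentz(0.0, 0.0, -v)
t1, _ = lorentz(1.0, 0.0, -v)
print(t1 - t0, gamma)     # 1.25 1.25: the first frame says that clock runs slow

# One tick (t = 0 to t = 1) of a clock at rest at the first frame's origin,
# transformed into the second frame (velocity +v):
t0p, _ = lorentz(0.0, 0.0, v)
t1p, _ = lorentz(1.0, 0.0, v)
print(t1p - t0p, gamma)   # 1.25 1.25: and the second frame says the same of this clock
```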
The same problem arises from the symmetry of the length contraction and relativistic mass increase, for there cannot be two measuring rods passing one another in space that are both shorter than the other. Nor can there be two material objects each of which is more massive than the other. It is simply not possible for material objects located in absolute space.
None of this should be a surprise, however, because even the Light Postulate itself is incompatible with absolute space (or at least, with the assumption that light has a fixed velocity relative to absolute space). Though Newtonian physics had taken absolute space to contain the medium in which light propagates, Einstein assumed that the velocity of light relative to every object is the same, regardless of their own velocities relative to other objects in the world. Thus, Einstein held that the velocity of light would be the same in both members of any pair of inertial frames. This is not possible, if electromagnetic waves propagate through (an ether in) absolute space, like waves in water, for the motion of an object through waves propagating in space would change the velocity of those waves relative to the object—just as the motion of a row boat through ripples propagating in a pond changes the velocity of those ripples relative to the boat.
Taken as a description of the relationship between material objects in space, therefore, Einstein's special theory of relativity leads to paradoxes. But Einstein was not discouraged by these paradoxes. He was not thinking of inertial reference frames as material objects that are related in space, that is, in absolute space, or a space that is the same for both material objects. He was making a more abstract, mathematical argument and, in the process, giving physics a new standpoint from which to explain all physical processes.
That Einstein's basic approach is different from Lorentz's can be seen in what made Einstein curious about these phenomena in the first place. It was not the Michelson-Morley experiment, but rather something peculiar about the connection between classical mechanics and Maxwell’s theory of electromagnetism (Zahar 99-100). Einstein realized that even though Maxwell’s theory was standardly interpreted as referring to absolute space, absolute space was not needed in order to explain electromagnetic phenomena. For example, a conductor moving through a magnetic field at absolute rest moves electrons exactly the same way as if it were at absolute rest and the magnetic field were moving. That is what suggested the principle of relativity to Einstein, and though from it he derived the same transformation equations that Lorentz had proposed in 1904, Einstein claimed not to know about Lorentz's 1904 work.
By raising the principle of relativity to the status of a postulate, Einstein was assuming, in effect, that the deepest truth that can be known about the nature of space and time is that inertial frames are all empirically equivalent. And by relying on the predictions of measurements derived from that principle to justify his theory, Einstein had the support of the positivists, who dominated philosophy of science at that time. Indeed, Einstein admits to having been influenced by Ernst Mach at the time of his first paper on special relativity. To positivists, the paradoxes mentioned above about two clocks both going slower than the other and two measuring rods both shorter than the other are not real problems, but merely theoretical problems. Theoretical propositions that could not be spelled out in terms of observations were dismissed as "metaphysical," as if theories were mere instruments for making predictions. That attitude could be taken about the aforementioned paradoxes, because there is never any occasion in which two clocks can be directly observed both going slower than the other (or two measuring rods observed both shorter than the other). Observations are made from one inertial reference frame or another, and if both members of some pair of inertial frames are observed from a third reference frame, their clocks and measuring rods do not appear this way because of the Lorentz distortions that are introduced by its own velocity relative to them.
Though when taken as a description of material objects, the special theory of relativity is incompatible with the existence of absolute space, Einstein did not attempt to use its implications to show that absolute space does not exist. He was making a mathematical argument to show that accepted theories in Newtonian physics, which did assume the existence of absolute space, could all be replaced by theories that do not mention absolute rest or motion at all. All he explicitly claimed was that physics does not require an “absolutely stationary space” and that the notion of a “‘luminiferous ether’ will prove to be superfluous” because the “phenomena of electrodynamics as well as of mechanics possess no properties corresponding to the ideas of absolute rest” (Einstein, 1923 p. 37). It could be argued, therefore, that Einstein was merely imitating empiricist skepticism about theoretical entities generally by casting doubt on the reality of absolute space.
As it turned out, Einstein's theory proved to be remarkably successful in making surprising predictions of new experiments. For example, unstable particles have longer half-lives when moving at velocities approaching that of light. Clocks flown around the earth are indeed slowed down compared to clocks that stayed at home. The most famous new prediction of special relativity, E = mc², has been confirmed repeatedly. It is a consequence of the relativistic increase in mass, which Einstein first pointed out, and without it, high energy physics as we know it today would be inconceivable. Finally, the equations of special relativity have become (after Dirac) the foundation of quantum field theory as well as Einstein’s theory of gravitation. The Lorentz transformation is now so basic to physics that “covariance” (or “Lorentz covariance”) is taken as a constraint on all possible laws of physics.
To be sure, Newtonian physicists complained about the loss of intuitive understanding that came with the acceptance of Einstein's way of explaining these phenomena. It was no longer possible to construct in ordinary spatial imagination a picture of the nature of the world. But that objection did not detract from the predictive success of Einstein's theory, and the Einsteinian revolution made the capacity of mathematical arguments to make surprising predictions of precise measurements the establishment criterion for accepting theories in contemporary physics.
But physics is not just mathematics. A theory in physics is generally thought to be true when it corresponds to what exists, and if the special theory of relativity does not correspond to material objects in absolute space, we want to know what it does correspond to. The success in making surprising predictions of what happens by which Einstein's theory has been confirmed means that it corresponds to regularities that hold of change in the world, but it is natural to want to know the nature of what exists that makes those regularities true. The answer given by contemporary physics is spacetime, and it was Minkowski who made that answer possible.
Minkowski. In 1908, Minkowski offered a mathematically elegant way of representing what is true from all inertial frames, according to Einstein’s special theory of relativity, using only the coordinates of any single inertial frame. His was a “graphic method” which he said allows us to “visualize” what is going on. The key to his diagram was to represent time in the same way as space, and that is what has led to the belief that what exists is not space and time, but rather spacetime.
In Minkowski’s “spacetime diagrams”, time is represented as a fourth dimension perpendicular to the three dimensions of space (though when comparing two inertial frames, the spatial dimensions can be reduced to one by a suitable orientation of their coordinate frames). A material object at rest in space is represented, therefore, as a line running parallel to the time axis, and a material object with a constant, non-zero velocity is represented by a line inclined slightly in the direction of motion. Units for measuring time and space are usually chosen so that the path of light in spacetime (the “light-line”, t = x/c) bisects the time and space axes, making the “basic unit” of distance how far light travels in a unit of time.
Since the second frame of reference is based on a moving object, we can think of the tilted line representing its pathway as its time axis. From such a moving reference frame, the location of an object at rest in the first frame (such as one always located at its origin) would change relative to the moving frame. So far, this diagram of space and time would be acceptable in classical Newtonian physics, because it represents a so-called Galilean transformation for the coordinates of moving reference frames (in which distances in space would be related as x' = x − vt, where v is their relative velocity in the x-direction).
What Minkowski discovered was that the Lorentz transformation for moving reference frames could be represented by tilting the space line of the moving frame equally in the opposite direction and lengthening the units of time and space. That is, the time-line and the space-line of the moving frame are inclined symmetrically around the pathway of light. (See the comparison of the Newtonian Diagram of Space and Time and Minkowski's Spacetime Diagram.)
In either the Newtonian or Minkowski's diagram, every point represents the location of a possible event in space and time (called a “world-point”), and superimposing a second reference frame makes it possible to give such coordinates in either reference frame. From the coordinates for any event in the first reference frame, we can simply read off the coordinates for the same event in the moving reference frame, and vice versa. In the case of event E, for example, the coordinates in the first frame are (2,1), and in Minkowski's diagram, they are (1.3,0.3). All possible reference frames can be represented in this way, each with a different tilt to its time-axis representing its velocity relative to the first.
The two reference frames in the Newtonian diagram have a very simple relationship, because time coordinates are the same for both reference frames and there is no change in the units of either time or space. But Minkowski's spacetime diagram represents the Lorentz transformation, and not only are the units of time and space different, but the space-line of the moving reference frame is inclined relative to the first reference frame.
Minkowski’s spacetime diagram yields the same coordinates for the second reference frame that are obtained from the Lorentz transformation equations deduced by Einstein. Thus, it predicts that measurements of the second inertial frame will reveal its clocks to be slowed down and its measuring rods to be contracted in the x-direction.
But since the Lorentz transformation works both ways, it is possible to start with the second (tilted) reference frame and obtain coordinates for events in the first reference frame. Thus, it predicts that the moving observers will detect Lorentz distortions occurring in the first frame. This symmetry about the relationship between inertial reference frames makes it impossible to single out any particular frame as being at absolute rest by comparing reference frames with one another.
Minkowski's spacetime diagram may seem to mitigate the paradoxes resulting from the symmetry of the relationship between members of any pair of inertial reference frames, because it enables us to "picture" two clocks both ticking away slower than the other and two measuring rods both shorter than the other. It is just a result of how the inertial reference frames are related to one another.
But this wonderful power of Minkowski's spacetime diagram to represent these puzzling phenomena would not be possible, if the space-lines of different reference frames had the same slope. The inclined orientation of the space-line of the second inertial frame relative to the first frame is crucial to representing the Lorentz transformation, and it represents a disagreement between inertial observers about simultaneity at a distance. That is, observers using different inertial reference frames will disagree about which events at a distance are simultaneous with the origins of their systems when they pass by one another. That is the source of all the ontological problems with the belief in spacetime.
Though it is possible to interpret Minkowski's spacetime diagram as just a useful mathematical device for predicting the measurements that would be made on different inertial frames, that is what the Lorentz transformation equations already do. The historical significance of Minkowski's diagram is that it enables us to "picture" what exists in a world where Einstein's special theory of relativity is the deepest truth about the world. Thus, it leads to the belief in spacetime (that is, "spatiotemporalism," as I called it in Spatiomaterialism, or "substantivalism about spacetime," as it is called in the literature.)
The belief in spacetime comes from realism about special relativity. Scientific realism holds that theories in physics are true in the sense of corresponding to what exists, and spacetime is what must exist, if Einstein's special theory of relativity is the deepest truth about the real nature of what exists as far as space and time are concerned.
With regard to space and time, Newtonian realists would say that what their theories correspond to is absolute space and absolute time, that is, to a three dimensional space all of whose parts exist at the present moment and endure simultaneously through time. But that is not what Einstein's special theory of relativity corresponds to, because it implies that observers on all possible inertial reference frames are equally correct about the times and places of the events that occur in the world, even though they disagree about the simultaneity of events at a distance. What all the different inertial observers say about the times and places of events can, however, be true at the same time, only if what exists is represented by Minkowski's spacetime diagram. Thus, spacetime is the natural answer to the question about what corresponds to Einstein's special theory of relativity. According to realists about special relativity, what exists is spacetime, a four-dimensional entity that contains time as a dimension and, thus, is not itself in time.
Though Einstein may merely have been arguing in the spirit of the empiricist skepticism that prevailed in philosophy at that time, Minkowski made it possible to give a realist interpretation of Einstein’s special theory. His spacetime diagram showed how Einstein's theory could be interpreted as a description of what really exists in the case of space and time. Minkowski must have realized that he was giving a realist interpretation of Einstein's special theory of relativity when he introduced his spacetime diagrams; he said (Minkowski 75) that “space by itself, and time by itself, are doomed to fade away into mere shadows, and only a kind of union of the two will preserve an independent reality”. In any case, later in the twentieth century, when logical positivism gave way to scientific realism, Einstein’s skepticism about absolute space, if that is what it was, spawned the belief in the existence of spacetime. Indeed, regardless of what Einstein may have believed in 1905, he apparently came to agree that what he had discovered was spacetime. (See Einstein 1966, pp. 205-8).
Scientific realism is, however, a way of letting science determine one's ontology. That is not the best way to decide which ontological theory to accept, because the empirical method that science follows is to infer to the best efficient-cause explanation, and that may not be the best ontological-cause explanation. But we can see how realism led to an ontology based on spacetime.
Einstein's special theory of relativity was a better efficient-cause explanation of the relevant phenomena than Lorentz's way of defending his transformation equations, because it made all the same precise predictions of measurements, but in a mathematically simpler way. As an efficient-cause explanation, however, all that Einstein's special theory requires is an empirical equivalence of inertial reference frames. It assumes that inertial frames are experimentally indistinguishable from one another, and it derives a description about how they must appear to one another as parts of the same world (where Maxwell's laws of electromagnetism hold). That relationship is described by the Lorentz equations for transforming their coordinates into one another, and it is represented by Minkowski's spacetime diagram. But Einstein's was a mathematical argument, and no mechanism or cause of the empirical equivalence was given.
A realist interpretation of special relativity goes beyond mere empirical equivalence and holds that inertial frames are all ontologically equivalent. If special relativity is the literal and deepest truth about the world, then what observers on all possible inertial reference frames believe must be true at the same time. That is to hold, not merely that no experiment can distinguish any one inertial frame from all the others as the absolute frame, but that there is nothing about the nature of any inertial frame that makes it stand out from all the others. That means, among other things, that no assertion made by observers on one inertial frame can be true unless the same kind of assertion made by observers on every other inertial frame is also true. (Nor can any assertion made on one inertial frame be false unless the same kind of assertion made on every other inertial frame is also false.)
The virtue of Minkowski's spacetime diagram is that it enables us to "picture" what exists in a world where inertial reference frames are all ontologically equivalent. Though it may still be unclear what spacetime is, Minkowski's diagram does allow us to believe that all possible reference frames are related to what exists in the same way, for it accommodates all possible standards of simultaneity at a distance. But they can all correspond to what exists only if the world is a four-dimensional entity all of whose parts in both space and time exist in the same way.
It is clear that this ontological equivalence of inertial frames is incompatible with absolute space and time, because if space and time were absolute, one inertial frame would be singled out ontologically from all possible inertial frames. Only one of all possible inertial frames would have the correct standard of simultaneity. Its location in space and time could be shared by observers on many other inertial frames, but none of their claims about which distant events are simultaneous with their shared here and now would correspond to what exists.
Einsteinians do not use the term "ontological equivalence" to describe the relationship between different inertial reference frames, but that is what the belief in spacetime comes to. Most philosophers of space and time simply take it for granted that they must accept "substantivalism" about spacetime in order to interpret the special theory as a description of the real nature of what exists.
To believe in spacetime is to accept an ontology that is fundamentally different from Lorentz's Newtonian view, and the difference can be seen in what each implies about the nature of material objects.
Newtonian physicists assumed that material objects are substances that endure through time. They had to believe in absolute time, because the endurance theory of substances presupposes that only the present exists, or "presentism." (If the world is everything that exists, then objects that exist at only one moment in their histories must exist at the same time, for otherwise they would not be parts of the same world.) And since Newtonian physicists believed that material objects are all related to one another by (consistent) spatial relations, they were also forced to believe in absolute space. In a natural world, absolute time entails absolute space. Hence, the Newtonian world was made up of material objects in three dimensional space that endured through time.
Spacetime, on the other hand, is a four-dimensional entity. What exists is spacetime and all the events that are located in spacetime. Since time is an aspect of its essential structure, a spacetime world cannot endure through time. Thus, spacetime points and spacetime events must all exist in the same way independently of one another, if they exist at all. There are no material objects in a spacetime world, at least, not in the way that Lorentz believed. There are only the spacetime events that seem to make up the histories of so-called material objects. Thus, what is ordinarily called a "material object" is just a continuous series of spacetime events in spacetime. Its real nature is represented accurately by a “world line” in a spacetime diagram, because each spacetime event making up the history of a "material object" has an existence that is distinct from all the others, just as one point on a line exists distinctly from every other point on the line.
In short, whereas a material object in a Newtonian world exists only at each moment as it is present, but is identical across time, a so-called material object in a spacetime world is a continuous series of spacetime events, each of which exists eternally as a distinct part of the world. This is the difference between the endurance and perdurance theory of substances, and between the presentist and eternalist theory about time and existence.
Scientific realists sometimes assume that they can believe that Einstein's special theory of relativity corresponds to what exists without denying that they are themselves substances that endure through time by holding that only objects at a distance from themselves must exist the same way at all different moments in their histories. But that is not possible, if they believe that the truth of Einstein's special theory means that it corresponds to what exists for every observer. If Einstein's theory is universally true, then it must be true for inertial observers located elsewhere in the universe, and the only way that different inertial observers at a distance from us can all be correct about which moment in our local history is simultaneous with their passing by one another is if the moments in our local history all exist in the same way. We must perdure, rather than endure, because we are material objects at a distance for inertial observers elsewhere in the universe.
What Minkowski's “union” of space and time means ontologically is, therefore, that presentism is false. The denial of presentism is such a serious obstacle to an ontological explanation of the world that, in Spatiomaterialism, we were led to reject spacetime substantivalism (or "spatiotemporalism"), promising to justify it later by showing how it is possible for space and time to be absolute, despite the Einsteinian revolution. That is the argument we take up in the next section. But first, let us consider briefly why physics has ignored the ontological problems with eternalism.
What explains the ascendancy of the belief in spacetime is, once again, the empirical method of science and the physicists' addiction to mathematics as a means of practicing it. Behind Minkowski's spacetime diagram lies an elegant equation that has proved to be irresistibly attractive.
Minkowski provided a method of constructing in our own spacetime coordinate frame the spacetime coordinate frame that would be used by observers on an object moving relative to us. We may call their world-line the “moving timeline” (t = x/v), because it will be the time axis that moving observers use for their spacetime coordinate frame.
Minkowski formulated the conclusion of Einstein’s special theory as an equation that describes a hyperboloid in four-dimensional spacetime: 1² = c²t² − x² − y² − z². (When we orient our x-axis in the direction of the others’ motion, we can ignore the other two dimensions and it reduces to 1² = c²t² − x².) (It is the red curve in the diagram depicting how Minkowski's spacetime diagram is constructed.) The intersection of Minkowski’s hyperboloid curve with our time-axis is the unit of time in our frame (t = 1), and the unit of distance (in “basic units”) is the distance in our frame that light travels during that period of time (x = 1). The moving timeline (the time-axis of the moving spacetime frame) also intersects the curve described by Minkowski’s equation, and the distance of that point along our time-axis is the length of a unit of time on the moving coordinate frame according to our clocks.
As the diagram shows, moving clocks are slowed down in our frame. The other axis of the moving spacetime frame, the “moving space-line”, is also deduced from Minkowski’s equation. Moving space-lines all have the same slope as the tangent to Minkowski’s curve at the point of the moving timeline’s intersection with his curve. (Its slope is v/c²; the points on any line with this slope are simultaneous in the moving spacetime frame.) Finally, the unit of distance on the moving space-line is how far light travels in the moving frame during a unit of time on the moving frame.
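A brief Python check of this construction (standard formulas assumed, c = 1, illustrative velocity): the unit point of the moving time-axis does lie on Minkowski's unit hyperbola, and the tangent there has slope v/c².

```python
# Sketch of the construction just described (standard formulas assumed, c = 1,
# illustrative velocity): the unit point of the moving time-axis lies on the
# unit hyperbola, and the tangent there has slope v/c^2.
import math

v = 0.6
gamma = 1.0 / math.sqrt(1.0 - v**2)

# One unit of moving-frame time, plotted in our coordinates (t, x):
t_unit, x_unit = gamma, gamma * v
print(t_unit**2 - x_unit**2)   # 1.0: the point satisfies 1 = c^2*t^2 - x^2
print(t_unit)                  # 1.25: one moving unit takes gamma of our time units

# Tangent slope of the hyperbola t = sqrt(1 + x^2) at that point is dt/dx = x/t:
print(x_unit / t_unit, v)      # 0.6 0.6: equal, i.e. the moving space-line's slope is v/c^2
```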
Inertial frames are all equivalent on Minkowski’s theory, as on Einstein’s, since Minkowski’s equation determines precisely the same hyperbola in every moving inertial frame constructed this way in our own spacetime coordinate frame. That is, their hyperbolas all coincide. In particular, the same procedure on the moving coordinate frame, using the same equation (and taking the velocity to be -v along the x'-axis), produces the original coordinate frame. Or more abstractly, Minkowski’s equation can be generalized as a measure, s, of the separation between any two events that is the same in every inertial frame, despite variations in their coordinates for particular events: s² = c²t² − x² − y² − z².
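The invariance of the separation s is easy to verify numerically. This Python sketch (standard boost assumed, c = 1, arbitrary sample events) shows that c²t² − x² comes out the same before and after the transformation.

```python
# Sketch (standard boost assumed, c = 1, arbitrary sample events): the separation
# c^2*t^2 - x^2 is unchanged by the transformation.
import math

def boost(t, x, v):
    gamma = 1.0 / math.sqrt(1.0 - v**2)
    return gamma * (t - v * x), gamma * (x - v * t)

for t, x in [(1.0, 0.0), (2.0, 1.0), (0.5, 2.0)]:
    t_p, x_p = boost(t, x, 0.8)
    print(round(t**2 - x**2, 10), round(t_p**2 - x_p**2, 10))  # the pair agrees each time
```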
In Minkowski’s equation, the parallel between the representation of space and time is remarkable. Time would be just another spatial dimension, except that it lacks a minus sign (and needs the velocity of light, c, to make units of time commensurable with distance). Indeed, that is how Minkowski includes relativistic mass increase. His equation’s form can be used to state the laws of nature that hold true in every inertial frame. In “four vector physics”, or “covariant” formulations of laws of physics, the energy of an object, E, takes the place of time and the three dimensions of momentum, p, take the place of the three spatial dimensions, so that the object’s rest mass, m₀, rather than the separation, is what is the same about the object in all inertial frames: m₀²c⁴ = E² − px²c² − py²c² − pz²c². The mathematics of four vector physics is so elegant and suggestive about the relationship of energy and momentum that it is not surprising that physicists now find themselves committed to the belief in spacetime.
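The same invariance for energy and momentum can be checked in a few lines of Python (standard transformation of (E, p) assumed, c = 1, illustrative values): E² − p²c² is unchanged by a boost and equals m₀²c⁴.

```python
# Sketch (standard transformation of (E, p) assumed, c = 1, illustrative values):
# E and p transform like t and x, and E^2 - p^2 is the same in every frame,
# namely the squared rest mass (m0^2*c^4 in conventional units).
import math

def boost_energy_momentum(E, p, v):
    gamma = 1.0 / math.sqrt(1.0 - v**2)
    return gamma * (E - v * p), gamma * (p - v * E)

m0 = 2.0
u = 0.5                                  # particle's velocity in the first frame
gamma_u = 1.0 / math.sqrt(1.0 - u**2)
E, p = gamma_u * m0, gamma_u * m0 * u    # relativistic energy and momentum

E2, p2 = boost_energy_momentum(E, p, 0.9)
print(E**2 - p**2, E2**2 - p2**2, m0**2)  # all three agree (4.0) up to rounding
```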
By comparison with Lorentz’s ad hoc attempts to patch up classical physics in the wake of the Michelson-Morley experiment, Einstein’s argument was astonishingly simple and elegant, making it seem that Einstein had a deeper insight into these phenomena. And since Minkowski provided a diagram that made it possible to represent what special relativity implies about the world independently of particular reference frames, it is hardly surprising that the belief in spacetime has become the orthodox ontology in physics and the philosophy of science.
The acceptance of Einstein’s special theory of relativity involved, however, a remarkable change in the empirical method of physics, for it involved the abandonment of the requirement that explanations in physics be intuitively intelligible.
To follow the empirical method is to infer to the best efficient-cause explanation. Even in classical physics, theories were highly mathematical and confirmation was most convincing when they predicted surprising, quantitatively precise measurements. But since classical physicists still believed in absolute space and time, they also expected the best scientific theories to be intuitively intelligible, in the sense that it was possible to think coherently about what was happening in spatial imagination. But intuitive intelligibility was no longer possible when the best scientific theory required giving up the belief in absolute space and time. That was undeniably a loss, but physicists felt that they had to grow up and recognize that their deepest commitment was to judging the best theory to be the one that gives the simplest and most complete predictions of measurements. Since this came from mathematical theories, abandoning the requirement that physical explanations be intuitively intelligible left them addicted to mathematics.
This is because the velocity of light relative to the object in motion is different in opposite directions, and going one way the whole distance at the lower (relative) velocity costs more extra time than can be made up coming back over the same distance at the higher (relative) velocity. Though the path back and forth is spatially symmetric, the time lost on the slow leg exceeds the time gained on the fast leg, so the round trip takes longer than it would if the object were at absolute rest.
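A quick arithmetic check of this point, with illustrative values and c = 1:

```python
# Arithmetic check of this footnote (illustrative values, c = 1): over an
# out-and-back path of length L in the moving frame's direction of motion,
# the slow leg costs more time than the fast leg saves.
c, v, L = 1.0, 0.6, 1.0
t_slow = L / (c - v)     # leg at relative velocity c - v
t_fast = L / (c + v)     # leg at relative velocity c + v
print(t_slow + t_fast)   # about 3.125
print(2 * L / c)         # 2.0: the round trip at rest would be shorter
```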
The equation was L = L₀√(1 − v²/c²), where L₀ was the length at absolute rest. The shrinkage had been proposed independently by George F. Fitzgerald in 1889 and hence became known as the “Lorentz-Fitzgerald contraction”. Relevant portions of Lorentz’s 1895 monograph and 1904 theory are reprinted in Lorentz et al. (1923, pp. 3-84).
See Stanley Goldberg (1984, p. 98) and Roberto Torretti (1983, pp. 45-6). Hereafter, these works are referred to as “Goldberg” or “Torretti”, with page numbers. “Holton” refers to Holton (1973). “Zahar” refers to Zahar (1989).
The discovery of the Lorentz distortions was complicated by the fact that there are other effects of absolute motion on material objects, besides those that are directly related to the Michelson-Morley experiment. These are the “first-order” effects of motion in space (which vary as v/c, rather than as v²/c², or “second order” effects), such as the way telescopes must be inclined slightly in the direction of motion in order to intercept light from overhead stars (much as umbrellas must be inclined slightly forward in walking through rain to keep raindrops from hitting one’s body). First order effects (including the effects on the index of refraction) had previously been explained by the “ether drag” hypothesis (that the motion of material objects drags the ether along with them), but Lorentz abandoned it. Lorentz’s explanation of length contraction assumed that the ether is totally unaffected by the motion of material objects through it, and he had no explanation of such first order effects except to state transformation equations by which one could obtain the coordinates used on the moving object from those used at absolute rest. Goldberg, pp. 88-92; Torretti, pp. 41-45 | http://www.twow.net/ObjText/OtkCaLbStrB.htm | 13
35 | Logic is an indispensable part of our daily lives, just as it is an indispensable part of mathematics. Whenever we present a reasoned argument on some topic, and whenever we present a proof of a mathematical fact, we are using logic. In both cases, the logic being used follows exactly the same structures. When used in mathematics, the structure of a logical argument is relatively easy to discern, but when embedded in ordinary language, the logic may not be so readily identifiable. But nonetheless, an understanding of at least the basics of logic is essential both in mathematics and in everyday life.
The use of a symbolic language to represent logical arguments, mathematical or otherwise, can often clarify the logic involved, and help you to determine whether the argument is valid or not. In symbolic logic, a statement (also called a proposition) is a complete declarative sentence, which is either true or false. (Questions, commands and exclamations are not statements, by this meaning.) We use a single letter such as P, Q or R to represent a statement. The truth value of a statement is 1 if the statement is true, and 0 if it is false. For example, as I write this, it is a cloudy day, but it is not raining. So if P is the statement that it is raining, then the truth value of P would be 0. (Of course, there is no way for you to verify the truth value of my statement about the weather.) The truth values of statements used in mathematics, on the other hand, can often be verified. For instance, the truth value of the statement that 2 < 3 is indeed 1.
Compound statements can be built from simple statements by connecting them with various logical connectives. The most commonly used logical connectives are and, or, and implies. These connectives are often denoted by the symbols ∧ (and), ∨ (or) and → (implies). For instance, if P and Q are statements, then the compound statement "P implies Q" would be denoted P → Q, while the statement "P and Q" would be denoted P ∧ Q.
The connectives above are used to connect one statement to another, to form a compound statement. Another important "connective" is the negation or not connective. Instead of joining two statements, this applies to a single statement, to create the negation of that statement. The negation of a statement P is denoted by ~ P. For example, if P is the statement that it is a cloudy day, then ~ P is the statement that it is not a cloudy day. If Q is the statement that 2 < 3, then ~ Q is the statement that 2 is not less than 3 (that is, that 2 ≥ 3).
The truth value of a compound statement depends on the truth values of its components. For example, the statement P ∧ Q is true only if both P and Q are true; P ∧ Q is false in all other cases. This idea can be summarized in a truth table:

| P | Q | P ∧ Q |
| 1 | 1 | 1 |
| 1 | 0 | 0 |
| 0 | 1 | 0 |
| 0 | 0 | 0 |
The truth table for an implication is of special interest, because implications are used so frequently in both everyday argumentation and in mathematics. The truth table for P → Q is:

| P | Q | P → Q |
| 1 | 1 | 1 |
| 1 | 0 | 0 |
| 0 | 1 | 1 |
| 0 | 0 | 1 |

The only case in which P → Q is false is when P is true and Q is false. In particular, the implication is counted as true whenever P is false, whatever the truth value of Q may be, which can seem puzzling at first.
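The tables above can also be generated mechanically. The following Python sketch (my illustration; the 1/0 convention follows the text) enumerates the truth values for P ∧ Q and P → Q.

```python
# Sketch: the truth tables above generated by brute force, using the text's
# convention that 1 means true and 0 means false.
from itertools import product

def conj(p, q):
    return int(p and q)              # P and Q

def implies(p, q):
    return int((not p) or q)         # P implies Q: false only when P = 1 and Q = 0

print("P Q  P^Q  P->Q")
for p, q in product([1, 0], repeat=2):
    print(p, q, "", conj(p, q), " ", implies(p, q))
```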
Thankfully, the situation just described does not arise very frequently, either in common language or in mathematics. Although some logical arguments can be very complicated, the most common use of logic boils down to the fact that the compound statement ((P → Q) ∧ P) → Q is always true, whatever the truth values of P and Q. This fact is known as modus ponens.
Modus ponens can be interpreted as follows. If we know that P → Q is true and we know that P is true, then it follows that Q must be true. This type of argument is also referred to as a syllogism, and can be expressed in symbolic form as:

P → Q
P
Therefore, Q
In this type of argument, there are two premises, P → Q (called the major premise) and P (called the minor premise) and one conclusion, namely Q. Because of modus ponens, if you agree that both of the premises are true, then you have no choice but to accept the fact that the conclusion is also true. As an example taken from common language, consider the argument "If you get your feet wet, then you will catch a cold. You got your feet wet. Therefore, you will catch a cold." If you believe the major premise to be a true statement (if you get your feet wet, then you will catch a cold), and if you observe that the minor premise is also true (you did, in fact, get your feet wet), then you must also believe the conclusion -- that you will, in fact, catch a cold.
Now you may take issue with the major premise in the argument above, in which case you are not by any means obligated to accept the conclusion. However, the argument -- the use of logic -- is nonetheless perfectly valid. The validity of an argument has to do only with the logical structure of the argument, and not with the truth of any of the premises. Convincing an individual that the premises are, indeed, true is persuasion rather than logic.
As an example of an invalid argument, consider the following: "If you were born in Cleveland, then you are a U.S. citizen. You are a U.S. citizen. Therefore, you were born in Cleveland." The logical structure of this argument is:

P → Q
Q
Therefore, P

This form of argument (sometimes called affirming the consequent) is invalid: the premises can both be true while the conclusion is false, since a U.S. citizen need not have been born in Cleveland.
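The contrast between the valid and the invalid form can be checked by brute force. This Python sketch (my illustration) enumerates every truth assignment and tests whether any of them makes all the premises true and the conclusion false.

```python
# Sketch: testing the two argument forms by enumerating every truth assignment.
# A form is valid when no assignment makes all the premises true and the conclusion false.
from itertools import product

def implies(p, q):
    return (not p) or q

def valid(premises, conclusion):
    return all(conclusion(p, q)
               for p, q in product([True, False], repeat=2)
               if all(prem(p, q) for prem in premises))

# Modus ponens: premises P -> Q and P, conclusion Q.
print(valid([lambda p, q: implies(p, q), lambda p, q: p],
            lambda p, q: q))     # True: valid

# Affirming the consequent: premises P -> Q and Q, conclusion P.
print(valid([lambda p, q: implies(p, q), lambda p, q: q],
            lambda p, q: p))     # False: invalid (P false, Q true is a counterexample)
```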
Many of the theorems in mathematics are of the form P → Q -- for example, "If f is differentiable, then f is continuous." In order to prove such a theorem, we must prove that the one situation in which the implication has a truth value of 0 never occurs. (Remember that in the truth table for P → Q, there is only one line where the truth value is 0.) That is, we must prove that if P is true (has a truth value of 1), then Q does not have a truth value of 0 (that is, that Q is also true). Consequently, the beginning of the proof of a mathematical theorem of the form P → Q is always something like: "Suppose that P is true." The remainder of the proof is then a logical argument (embedded in language, of course) that Q is also true.
The Converse and the Contrapositive
Every statement of the form P → Q has a converse, namely the implication Q → P, and a contrapositive, namely the implication ~ Q → ~ P. The truth tables for P → Q and Q → P do not match up; that is, there are instances of truth values for P and Q in which P → Q is true and Q → P is false, and vice versa. As a result, an implication and its converse are not logically equivalent; that is, they have different meanings. For instance, the statements "If you get your feet wet, then you will catch a cold" and "If you catch a cold, you must have gotten your feet wet" are not equivalent statements.
However, the statement P → Q and its contrapositive, ~ Q → ~ P, are logically equivalent; the truth tables for these two statements will show identical truth values under all four combinations of P and Q being true or false. As a result, an implication and its contrapositive are interchangeable as far as logic is concerned. This fact is often used in mathematics, in cases where a proof of the contrapositive ~ Q → ~ P seems a little easier than a direct proof of the statement that P → Q.
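This equivalence, and the non-equivalence with the converse, can likewise be checked by enumeration (Python sketch, my illustration):

```python
# Sketch: brute-force confirmation that an implication matches its contrapositive
# on every truth assignment but not its converse.
from itertools import product

def implies(p, q):
    return (not p) or q

assignments = list(product([True, False], repeat=2))
print(all(implies(p, q) == implies(not q, not p) for p, q in assignments))  # True
print(all(implies(p, q) == implies(q, p) for p, q in assignments))          # False
```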
The Study of Logic
As an academic discipline, logic lies on the boundary between mathematics and philosophy. In philosophy, logic is used whenever a philosopher argues a certain position. (Caution: Such an argument may be perfectly valid, but you may still take issue with the conclusion, depending on whether or not you believe the premises to be true.) In mathematics, of course, logic is used in the proof of every theorem. One important difference between a mathematical argument and a philosophical argument, however, is that -- unlike philosophy -- in mathematics the goal is not to convince you that a certain conclusion is true, but that the entire proposition is true. For example, in the theorem that if f is a differentiable function, then f is also continuous, the goal is not to convince you that f is continuous (the conclusion), but rather that every differentiable function is continuous (the proposition). As a result, any debate or uncertainty about mathematical theorems centers around the validity of the argument itself (which may be extremely complex and lengthy), whereas debate about philosophical arguments often centers around the truth of the premises. | http://www.jcu.edu/math/Vignettes/logic.htm | 13 |
45 | Plato was an Athenian Greek of aristocratic family, active as a philosopher in the first half of the fourth century bc. He was a devoted follower of Socrates, as his writings make abundantly plain. Nearly all are philosophical dialogues - often works of dazzling literary sophistication - in which Socrates takes centre stage. Socrates is usually a charismatic figure who outshines a whole succession of lesser interlocutors, from sophists, politicians and generals to docile teenagers. The most powerfully realistic fictions among the dialogues, such as Protagoras and Symposium, recreate a lost world of exuberant intellectual self-confidence in an Athens not yet torn apart by civil strife or reduced by defeat in the Peloponnesian War.
Some of Plato's earliest writings were evidently composed in an attempt to defend Socrates and his philosophical mission against the misunderstanding and prejudice which - in the view of his friends - had brought about his prosecution and death. Most notable of these are Apology, which purports to reproduce the speeches Socrates gave at his trial, and Gorgias, a long and impassioned debate over the choice between a philosophical and a political life. Several early dialogues pit Socrates against practitioners of rival disciplines, whether rhetoric (as in Gorgias) or sophistic education (Protagoras) or expertise in religion (Euthyphro), and were clearly designed as invitations to philosophy as well as warnings against the pretensions of the alternatives. Apologetic and protreptic concerns are seldom entirely absent from any Platonic dialogue in which Socrates is protagonist, but in others among the early works the emphasis falls more heavily upon his ethical philosophy in its own right. For example, Laches (on courage) and Charmides (on moderation) explore these topics in characteristic Socratic style, relying mostly on his method of elenchus (refutation), although Plato seems by no means committed to a Socratic intellectualist analysis of the virtues as forms of knowledge. That analysis is in fact examined in these dialogues (as also, for example, in Hippias Minor).
In dialogues of Plato's middle period like Meno, Symposium and Phaedo a rather different Socrates is presented. He gives voice to positive positions on a much wider range of topics: not just ethics, but metaphysics and epistemology and psychology too. And he is portrayed as recommending a new and constructive instrument of inquiry borrowed from mathematics, the method of hypothesis. While there are continuities between Plato's early and middle period versions of Socrates, it is clear that an evolution has occurred. Plato is no longer a Socratic, not even a critical and original Socratic: he has turned Socrates into a Platonist.
The two major theories that make up Platonism are the theory of Forms and the doctrine of the immortality of the soul. The notion of a Form is articulated with the aid of conceptual resources drawn from Eleatic philosophy. The ultimate object of a philosopher's search for knowledge is a kind of being that is quite unlike the familiar objects of the phenomenal world: something eternal and changeless, eminently and exclusively whatever - beautiful or just or equal - it is, not qualified in time or place or relation or respect. An account of the Form of Beautiful will explain what it is for something to be beautiful, and indeed other things are caused to be beautiful by their participation in the Beautiful. The middle period dialogues never put forward any proof of the existence of Forms. The theory is usually presented as a basic assumption to which the interlocutors agree to subscribe. Plato seems to treat it as a very general high-level hypothesis which provides the framework within which other questions can be explored, including the immortality of the soul. According to Phaedo, such a hypothesis will only stand if its consequences are consistent with other relevant truths; according to Republic its validity must ultimately be assured by its coherence with the unhypothetical first principle constituted by specification of the Good.
The Pythagorean doctrine of the immortality of the soul, by contrast, is something for which Plato presents explicit proofs whenever he introduces it into discussion. It presupposes the dualist idea that soul and body are intrinsically distinct substances, which coexist during our life, but separate again at death. Its first appearance is in Meno, where it is invoked in explanation of how we acquire a priori knowledge of mathematical truths. Socrates is represented as insisting that nobody imparts such truths to us as information: we work them out for ourselves, by recollecting them from within, where they must have lain untapped as latent memory throughout our lives. But innate forgotten knowledge presupposes a time before the soul entered the body, when it was in full conscious possession of truth. Phaedo holds out the promise that the souls of philosophers who devote their lives to the pursuit of wisdom will upon death be wholly freed from the constraints and contaminations of the body, and achieve pure knowledge of the Forms once again.
Republic, Plato's greatest work, also belongs to this major constructive period of his philosophizing. It gives the epistemology and metaphysics of Forms a key role in political philosophy. The ideally just city (or some approximation to it), and the communist institutions which control the life of its elite governing class, could only become a practical possibility if philosophers were to acquire political power or rulers to engage sincerely and adequately in philosophy. This is because a philosopher-ruler whose emotions have been properly trained and disciplined by Plato's reforming educational programme, and whose mind has been prepared for abstract thought about Forms by rigorous and comprehensive study of mathematics, is the only person with the knowledge and virtue necessary for producing harmony in society. Understanding of Forms, and above all of the Good, keystone of the system of Forms, is thus the essential prerequisite of political order.
It remains disputed how far Plato's vision of a good society ruled by philosopher-statesmen (of both sexes) was ever really conceived as a blueprint for practical implementation. Much of his writing suggests a deep pessimism about the prospects for human happiness. The most potent image in Republic is the analogy of the cave, which depicts ordinary humanity as so shackled by illusions several times removed from the illumination of truth that only radical moral and intellectual conversion could redeem us. And its theory of the human psyche is no less dark: the opposing desires of reason, emotion and appetite render it all too liable to the internal conflict which constitutes moral disease.
While Republic is for modern readers the central text in Plato's oeuvre, throughout much of antiquity and the medieval period Timaeus was the dialogue by which he was best known. In this late work Plato offers an account of the creation of an ordered universe by a divine craftsman, who invests pre-existing matter with every form of life and intelligence by the application of harmonious mathematical ratios. This is claimed to be only a 'likely story', the best explanation we can infer for phenomena which have none of the unchangeable permanence of the Forms. None the less Timaeus is the only work among post-Republic dialogues, apart from a highly-charged myth in Phaedrus, in which Plato was again to communicate the comprehensive vision expressed in the Platonism of the middle period dialogues.
Many of these dialogues are however remarkable contributions to philosophy, and none more so than the self-critical Parmenides. Here the mature Parmenides is represented as mounting a powerful set of challenges to the logical coherence of the theory of Forms. He urges not abandonment of the theory, but much harder work in the practice of dialectical argument if the challenges are to be met. Other pioneering explorations were in epistemology (Theaetetus) and philosophical logic (Sophist). Theaetetus mounts a powerful attack on Protagoras' relativist theory of truth, before grappling with puzzles about false belief and problems with the perennially attractive idea that knowledge is a complex built out of unknowable simples. Sophist engages with the Parmenidean paradox that what is not cannot be spoken or thought about. It forges fundamental distinctions between identity and predication and between subject and predicate in its attempt to rescue meaningful discourse from the absurdities of the paradox.
In his sixties Plato made two visits to the court of Dionysius II in Sicily, apparently with some hopes of exercising a beneficial influence on the young despot. Both attempts were abysmal failures. But they did not deter Plato from writing extensively on politics in his last years. Statesman explores the practical knowledge the expert statesman must command. It was followed by the longest, even if not the liveliest, work he ever wrote, the twelve books of Laws, perhaps still unfinished at his death.
Evidence about Plato's life is prima facie plentiful. As well as several ancient biographies, notably that contained in book III of Diogenes Laertius' Lives of the Philosophers, we possess a collection of thirteen letters which purport to have been written by Plato. Unfortunately the biographies present what has been aptly characterized as 'a medley of anecdotes, reverential, malicious, or frivolous, but always piquant'. As for the letters, no scholar thinks them all authentic, and some judge that none are.
From the biographies it is safe enough to accept some salient points. Plato was born of an aristocratic Athenian family. He was brother to Glaucon and Adimantus, Socrates' main interlocutors in the Republic; his relatives included Critias and Charmides, members of the bloody junta which seized power in Athens at the end of the Peloponnesian War. He became one of the followers of Socrates, after whose execution he withdrew with others among them to the neighbouring city of Megara. His travels included a visit to the court of Dionysius in Sicily. On returning from Sicily to Athens he began teaching in a gymnasium outside the city, called the Academy.
The Seventh Letter, longest and most interesting of the collection of letters, gives a good deal of probably trustworthy information, whether or not it was written by Plato himself. It begins with an account of his growing disenchantment with Athenian politics in early manhood and of his decision against a political career. This is prefatory to a sketch of the visit to Dionysius in Syracuse, which is followed by an elaborate self-justifying explanation of why and how, despite his decision, Plato later became entangled in political intrigue in Sicily, once the young Dionysius II had succeeded to his father's throne. There were two separate visits to the younger Dionysius: one (c.366 bc) is represented as undertaken at the behest of Dion, nephew of Dionysius I, in the hope of converting him into a philosopher-ruler; the other (c.360 bc) was according to the author an attempt to mediate between Dionysius and Dion, now in exile and out of favour. Both ventures were humiliating failures.
Of more interest for the history of philosophy is Plato's activity in the Academy. We should not conceive, as scholars once did, that he established a formal philosophical school, with its own property and institutional structures. Although he acquired a house and garden in the vicinity, where communal meals were probably taken, much of his philosophical teaching and conversation may well have been conducted in the public space of the gymnasium itself. Some sense of the Academy's distinctive style may be gleaned from evidence of the contemporaneous writings of the philosophical associates he attracted, notably his nephew Speusippus, Xenocrates, Aristotle and the mathematician Eudoxus. Discussion of Plato's metaphysical ideas figured prominently in these; but orthodoxy was not expected, to judge from their philosophical disagreements with him and with each other. Aristotle's early Topics suggests that an important role was played by formal disputation about philosophical theses.
From the educational programme of the Republic one might have guessed that Plato would have attached importance to the teaching of mathematics as a preparation for philosophy, but we have better evidence for his encouraging research in it. While he was not an original mathematician himself, good sources tell us that he formulated problems for others to solve: for example, what uniform motions will account for the apparent behaviour of the planets. Otherwise there is little reliable information on what was taught in the Academy: not much can be inferred from the burlesque of comic playwrights. Since almost certainly no fees were charged, most of those who came to listen to Plato (from all over the Greek world) must have been aristocrats. Some are known to have entered politics or to have advised princes, particularly on constitutional reform. But the Academy had no political mission of its own. Indeed the rhetorician Isocrates, head of a rival school and admittedly not an unbiased witness, dismissed the abstract disciplines favoured by the Academy for their uselessness in the real world.
Thrasyllus, astrologer to the emperor Tiberius, is the unlikely source of the arrangement of Platonic writings adopted in the manuscript tradition which preserves them. For his edition of Plato he grouped them into tetralogies, reminiscent of the trilogies produced in Athenian tragic theatre. These were organized according to an architectonic scheme constructed on principles that are now only partially apparent, but certainly had nothing to do with chronology of composition. His arrangement began with a quartet 'designed to show what the life of the philosopher is like' (Diogenes Laertius, III 57): Euthyphro, or 'On Piety', classified as a 'peirastic' or elenctic dialogue (see Socrates §§3-4), which is a species of one of his two main genres, the dialogue of inquiry; Apology, Crito and Phaedo are all regarded as specimens of exposition, his other main genre, or more specifically as specimens of ethics. These four works are all concerned in one way or another with the trial and death of Socrates.
There followed a group consisting of Cratylus, or 'On the Correctness of Names', Theaetetus, or 'On Knowledge', Sophist and Politicus (often Anglicized as Statesman). Plato himself indicates that the last three of this set are to be read together. They contain some of his most mature and challenging work in epistemology, metaphysics and philosophical methodology. In this they resemble Parmenides, with its famous critique of the theory of Forms, the first of the next tetralogy, which was completed by three major dialogues all reckoned 'ethical' by Thrasyllus: Philebus, an examination of pleasure, Symposium and Phaedrus, both brilliant literary divertissements which explore the nature of love.
A much slighter quartet came next: two dialogues entitled Alcibiades, plus Hipparchus and Rivals. None of these, with the disputed exception of the first Alcibiades, is thought by modern scholarship to be authentic Plato. They were followed by Theages, a short piece now generally reckoned spurious, Charmides, Laches, Lysis. These three works are generally regarded by modern scholars as Socratic dialogues: that is, designed to exhibit the distinctive method and ethical preoccupations of the historical Socrates, at least as Plato understood him, not to develop Plato's own philosophy. Thrasyllus would agree with the latter point, since he made them dialogues of inquiry: Laches and Lysis 'maieutic', in which the character 'Socrates' attempts as intellectual midwife to assist his interlocutors to articulate and work out their own ideas on courage and friendship respectively; Charmides elenctic, with the interlocutors Charmides and Critias and their attempts to say what moderation is put to the test of cross-examination, something Thrasyllus interestingly distinguished from philosophical midwifery.
The next group consisted of Euthydemus, Protagoras, Gorgias, Meno, important works in which modern scholarship finds analysis and further elaboration by Plato of the Socratic conception of virtue. The first three present a Socrates in argumentative conflict with sophists of different sorts (see Sophists), so it is understandable that under the general heading 'competitive' Thrasyllus characterized Euthydemus and Gorgias as dialogues of refutation, and Protagoras as a dialogue of display - presumably because Protagoras and Socrates are each portrayed as intent on showing off their debating skills. Meno, on the other hand, is labelled an elenctic work. It was followed by the seventh tetralogy: Hippias Major and Hippias Minor, two very different dialogues (of refutation, according to Thrasyllus), both featuring the sophist of that name; Ion, a curious piece on poetic performance; and Menexenus, a still more curious parody of a funeral oration, put in the mouth of Pericles' mistress Aspasia.
For the last two tetralogies Thrasyllus reserved some of Plato's major writings. The eighth contained the very brief (and conceivably spurious) Clitophon, in which a minor character from the Republic plays variations on themes in the Republic, the second dialogue in the group, and generally regarded nowadays as Plato's greatest work. This quartet was completed by Timaeus and its unfinished sequel Critias, no doubt because these dialogues represent themselves as pursuing further the discussions of the Republic. The pre-Copernican mathematical cosmology of Timaeus no longer attracts readers as it did throughout antiquity, and particularly in the Middle Ages, when the dialogue was for a period the only part of Plato's oeuvre known to the Latin West. Finally, the ninth tetralogy began with the short Minos, a spurious dialogue taking up issues in the massive Laws, Plato's longest and probably latest work, which was put next in the group. Then followed Epinomis, an appendix to Laws already attributed to one of Plato's pupils in antiquity (Philip of Opous, according to a report in Diogenes Laertius, III 37). Last were placed the Letters, briefly discussed above.
Thrasyllus rejected from the canon a variety of minor pieces, some of which still survive through the manuscript tradition. Modern judgment concurs with the ancient verdict against them. It also questions or rejects some he thought genuinely Platonic. But we can be fairly sure that we still possess everything Plato wrote for publication.
Attempting to determine the authenticity or inauthenticity of ancient writings is a hazardous business. Egregious historical errors or anachronisms suffice to condemn a work, but except perhaps for the Eighth Letter, this criterion gets no purchase on the Platonic corpus. Stylistic analysis of various kinds can show a piece of writing to be untypical of an author's uvre, without thereby demonstrating its inauthenticity: Parmenides is a notable example of this. Most of Plato's major dialogues are in fact attested as his by Aristotle. The difficult cases are short pieces such as Theages and Clitophon, and, most interestingly, three more extended works: the Seventh Letter, Alcibiades I and Hippias Major. Opinion remains divided on them. Some scholars detect crude or sometimes brilliant pastiche of Plato's style; a parasitic relationship with undoubtedly genuine dialogues; a philosophical crassness or a misunderstanding of Platonic positions which betrays the forger's hand. Yet why should Plato not for some particular purpose recapitulate or elaborate things he has said elsewhere? And perhaps he did sometimes write more coarsely or didactically or long-windedly than usual. Such assessments are inevitably matters of judgment, on which intelligent and informed readers will legitimately differ.
Prospects for an absolute chronology of Plato's writings are dim. There are no more than two or three references to datable contemporaneous events in the entire corpus (leaving aside the Letters). Relative chronology is another matter. Some dialogues refer back to others. A number of instances have been mentioned already, but we can add a clear reminiscence of Meno in Phaedo (72e-73b), and of Parmenides in both Theaetetus (183e-184a) and Sophist (217c). According to one ancient tradition Laws was unfinished at Plato's death, and Aristotle informs us that it was written after Republic (Politics 1264b24-7), to which it appears to allude (see, for example, Laws 739a-e). Attempts have sometimes been made to find evidence, whether internal or external, for the existence of early versions of works we possess in different form (see for example Thesleff 1982). One example is the suggestion that Aristophanes' comedy Ecclesiazousae or Assembly of Women (388 bc) was parodying an early version of book V of Republic. But while the idea that Plato may have revised some of his writings is plausible, concrete instances in which such revision is plainly the best explanation of the phenomena are hard to find. Even if they were not, it is unlikely that the consequences for relative chronology would be clear.
For over a century hopes for a general relative chronology of Plato's writings have been pinned on the practice of stylistic analysis. This was pioneered by Lewis Campbell in his edition of Sophist and Politicus, published in 1867. His great achievement was to isolate a group of dialogues which have in common a number of features (added to by subsequent investigators) that set them apart from all the rest. Timaeus, Critias, Sophist, Politicus, Philebus and Laws turn out to share among other things a common technical vocabulary; a preference for certain particles, conjunctions, adverbs and other qualifiers over alternatives favoured in other dialogues; distinctive prose rhythms; and the deliberate attempt to avoid the combination of a vowel at the end of one word followed by another vowel at the beginning of the next. Since there are good independent reasons for taking Laws to be Plato's last work, Campbell's sextet is very likely the product of his latest phase of philosophical activity. Application of the same stylistic tests to the Platonic corpus as a whole, notably by Constantin Ritter (1888), established Republic, Theaetetus and Phaedrus as dialogues which show significantly more of the features most strongly represented in the late sextet than any others. There is general agreement that they must be among the works whose composition immediately precedes that of the Laws group, always allowing that Republic must have taken several years to finish, and that parts of it may have been written earlier and subsequently revised. Parmenides is ordinarily included with these three, although mostly on non-stylistic grounds.
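The kind of counting on which such analysis rests can be illustrated with a brief sketch, offered purely as an illustration: the two measures (frequency of hiatus between words, and the rate of a chosen particle) correspond to features mentioned above, but the function names, the transliterated sample strings and the particle chosen are invented for the example and reproduce nothing of Campbell's or Ritter's actual data or procedure.

# Toy illustration (Python) of two stylistic measures: hiatus between
# words (a word-final vowel followed by a word-initial vowel) and the
# relative frequency of a chosen particle. All inputs are invented.

VOWELS = set("aeiou")   # crude transliterated vowels; real work uses Greek

def hiatus_rate(text):
    # Fraction of adjacent word pairs exhibiting hiatus.
    words = text.lower().split()
    if len(words) < 2:
        return 0.0
    hits = sum(1 for w1, w2 in zip(words, words[1:])
               if w1[-1] in VOWELS and w2[0] in VOWELS)
    return hits / (len(words) - 1)

def particle_rate(text, particle):
    # Occurrences of the particle per 1,000 words.
    words = text.lower().split()
    return 1000.0 * words.count(particle) / max(len(words), 1)

# Hypothetical usage with invented transliterated snippets.
passage_a = "ara oun to kalon esti agathon e ou"
passage_b = "ti de to kalon phamen einai kata ton logon"
for label, passage in (("A", passage_a), ("B", passage_b)):
    print(label, round(hiatus_rate(passage), 2), round(particle_rate(passage, "de"), 1))

Genuine stylometric studies of course work on the Greek text itself and combine many such measurements across whole dialogues before proposing any grouping.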
Since Campbell's time there have been repeated attempts by stylometrists to divide the remaining dialogues into groups, and to establish sequences within groups. The heyday of this activity was in the late nineteenth and early twentieth centuries. Since the 1950s there has been a revival in stylistic study, with the use of increasingly sophisticated statistical techniques and the resources of the computer and the database. Secure results have proved elusive. Most scholars would be happy to date Phaedo, Symposium and Cratylus to a middle period of Plato's literary and philosophical work which may be regarded as achieving its culmination in Republic. But while this dating is sometimes supported by appeal to stylistic evidence, that evidence is in truth indecisive: the hypothesis of a middle period group of dialogues really rests on their philosophical affinities with Republic and their general literary character. The same can be said mutatis mutandis of attempts to identify a group assigned to Plato's early period.
The cohesiveness of Campbell's late group has not gone unchallenged. For example, in 1953 G.E.L. Owen mounted what for a while seemed to some a successful attack on his dating of Timaeus and Critias, on the ground that these dialogues belong philosophically in Plato's middle period. Broadly speaking, however, stylistic studies have helped to establish an agreed chronological framework within which most debates about philosophical interpretation now take place. This is not to say however that there is unanimity either about the way Plato's thought developed or about the importance of the notion of development for understanding his philosophical project or projects in the dialogues.
Who invented the philosophical dialogue, and what literary models might have inspired the invention, are not matters on which we have solid information. We do know that several of Socrates' followers composed what Aristotle calls Sokratikoi logoi, discourses portraying Socrates in fictitious conversations (see Socratic dialogues). The only examples which survive intact besides Plato's are by Xenophon, probably not one of the earliest practitioners of the genre.
One major reason for the production of this literature was the desire to defend Socrates against the charges of irreligion and corrupting young people made at his trial and subsequently in Athenian pamphleteering, as well as the implicit charge of guilt by association with a succession of oligarchic politicians. Thus his devotion to the unstable and treacherous Alcibiades was variously portrayed in, for example, the first of the Alcibiades dialogues ascribed to Plato and the now fragmentary Alcibiades of Aeschines of Sphettos, but both emphasized the gulf between Alcibiades' self-conceit and resistance to education and Socrates' disinterested concern for his moral wellbeing. The same general purpose informed the publication of versions of Socrates' speech (his 'apology') before the court by Plato, Xenophon and perhaps others. Writing designed to clear Socrates' name was doubtless a particular feature of the decade or so following 399 bc, although it clearly went on long after that, as in Xenophon's Memorabilia (see Xenophon §2). After starting in a rather different vein Gorgias turns into Plato's longest and angriest dialogue of this kind. Socrates is made to present himself as the only true politician in Athens, since he is the one person who can give a truly rational account of his conduct towards others and accordingly command the requisite political skill, which is to make the citizens good. But he foresees no chance of acquittal by a court of jurors seeking only gratification from their leaders.
Placing Socrates in opposition to Alcibiades is a way of defending him. Arranging a confrontation between a sophist (Protagoras or Hippias) or a rhetorician (Gorgias) or a religious expert (Euthyphro) or a Homeric recitalist (Ion) and Socrates is a way of exposing their intellectual pretensions, and in most cases their moral shallowness, while celebrating his wit, irony and penetration and permitting his distinctive ethical positions and ethical method to unfold before the reader's eyes. The elenchus (see Socrates §§3-4) is by no means the only mode of argument Socrates is represented as using in these fictional encounters. Plato particularly enjoys allowing him to exploit the various rhetorical forms favoured by his interlocutors. But it is easy to see why the dialogue must have seemed to Plato the ideal instrument not only for commemorating like Xenophon Socrates' style of conversation, but more importantly for exhibiting the logical structure and dynamic of the elenchus, and its power in Socrates' hands to demolish the characteristic intellectual postures of those against whom it is deployed.
In these dialogues of confrontation Socrates seldom succeeds in humbling his interlocutors into a frank recognition that they do not know what they thought they knew: the official purpose - simultaneously intellectual and moral - of the elenchus. It would not have been convincing to have him begin to convert historical figures with well-known intellectual positions. The main thing registered by their fictional counterparts is a sense of being manipulated into self-contradiction. In any case, the constructive response to the extraordinary figure of Socrates which Plato really wants to elicit is that of the reader. We have to suppose that, as conversion to philosophy was for Plato scarcely distinguishable from his response to Socrates (devotion to the man, surrender to the spell of his charisma, strenuous intellectual engagement with his thought and the questions he was constantly pursuing), so he conceived that the point of writing philosophy must be to make Socrates charismatic for his readers - to move us to similar devotion and enterprise. In short, the dialogues constitute simultaneously an invitation to philosophy and a critique of its intellectual rivals.
Whatever Plato's other accomplishments or failures as a writer and thinker, one project in which he unquestionably succeeds is in creating a Socrates who gets under the reader's skin (see Socrates §7). Plato has a genius for portrayal of character: the 'arrogant self-effacement' of Socrates' persona; the irony at once sincere and insincere; the intellectual slipperiness in service of moral paradox; the nobility of the martyr who loses everything but saves his own soul, and of the hero who stands firm on the battlefield or in face of threats by the authorities; relentless rationality and almost impregnable self-control somehow cohabiting with susceptibility to beautiful young men and their erotic charm. Also important is the ingenious variety of perspectives from which we see Socrates talking and interacting with others. Sometimes he is made to speak to us direct (for example, Apology, Gorgias). Sometimes Plato invites us to share complicity in a knowing narrative Socrates tells of his own performance (as in Charmides, Protagoras). Sometimes someone else is represented as recalling an unforgettably emotional occasion when Socrates dominated a whole roomful of people, as in the most powerfully dramatic dialogues of all, Phaedo and Symposium. Here we have the illusion that Socrates somehow remains himself even though the ideas advanced in them must go beyond anything that the historical Socrates (or at any rate the agnostic Socrates of Apology) would have claimed about the soul and its immortality or about the good and the beautiful.
It might seem strange that an original philosopher of Plato's power and stature should be content, outside the Letters if some of them are by him, never to talk directly to the reader, but only through the medium of narrative or dramatic fiction, even granted the pleasure he plainly takes in exhibiting his mastery of that medium. This will become less mysterious if we reflect further on Socrates and Socratic questioning. At any rate by the time of the Meno, Plato was wanting to suggest that the elenchus presupposes that understanding is not something one person can transmit in any straightforward way to another, but something which has to be worked out for oneself and recovered from within by recollection. The suggestion is made by means of an example from mathematics, where it is transparently true that seeing the answer to a problem is something that nobody else can do for us, even if Socrates' questions can prompt us to it. The moral we are to draw is that in pressing his interlocutors on what they say they believe, Socrates is merely an intellectual midwife assisting them to articulate for themselves a more coherent and deeply considered set of views, which will ideally constitute the truth.
The Platonic dialogue can be interpreted as an attempt to create a relationship between author and reader analogous to that between Socrates and his interlocutors. Given that that relationship is to be construed in the way indicated in Meno, the point of a dialogue will be like that of the elenchus: not to teach readers the truth (it is strictly speaking unteachable), but to provoke and guide them into working at discovering it for themselves. Most of the dialogues of Campbell's late sextet are admittedly more didactic than one would expect on this view of the dialogue, and it is significant that except in Philebus Socrates is no longer the main speaker. Yet even here use of the dialogue form can be taken as symbolizing that responsibility for an active philosophical engagement with what Plato has written rests with the reader, as the difficulty and in some cases the methodological preoccupations of most of these works confirms.
In a much discussed passage at the end of Phaedrus (275-8), Socrates is made to speak of the limitations of the written word. It can answer no questions, it cannot choose its readers, it gets misunderstood with no means of correcting misunderstanding. Its one worthwhile function is to remind those who know of what they know. By contrast with this dead discourse live speech can defend itself, and will be uttered or not as appropriate to the potential audience. The only serious use of words is achieved when speech, not writing, is employed by dialecticians to sow seeds of knowledge in the soul of the learner. If they commit their thoughts to writing they do so as play (paidia). The Seventh Letter (341-2) makes related remarks about the writing of philosophy; and at various points in, for example, Republic, Timaeus and Laws, the discussions in which the interlocutors are engaged are described as play, not to be taken seriously.
Interpreters have often taken these written remarks about writing with the utmost seriousness. In particular the Tübingen school of Platonic scholarship has connected them with references, especially in Aristotle, to unwritten doctrines of Plato. They have proposed that the fundamental principles of his philosophy are not worked out in the dialogues at all, but were reserved for oral discussions in the Academy, and have to be reconstructed by us from evidence about the unwritten doctrines. But this evidence is suspect where it is voluble, and elusive where it is apparently more reliable. There are two star exhibits. First, according to the fourth century bc music theorist Aristoxenus, Aristotle used to tell of how when Plato lectured on the good he surprised and disappointed his listeners by talking mostly about mathematics (Harmonics II, 30.16-31.3). Second, at one point in the Physics (209b13-6) Aristotle refers to Plato's 'so-called unwritten teachings'; and the Aristotelian commentators report that Aristotle and other members of the Academy elsewhere wrote more about them. Plato's key idea was taken to be the postulation of the One and the great and the small, or 'indefinite dyad', as principles of all things, including Forms. In his Metaphysics (I.6) Aristotle seems to imply that in this theory the Forms were construed in some sense as numbers. It remains obscure and a subject of inconclusive scholarly debate how far the theory was worked out, and what weight we should attach to it in comparison to the metaphysical explorations of the dialogues of Plato's middle and late periods (see for example Ross 1951, Gaiser 1968, Guthrie 1978, Gaiser 1980, Burnyeat 1987).
The general issue of how far we can ascribe to Plato things said by interlocutors (principally Socrates) in his dialogues is something which exercises many readers. The position taken in this entry will be that no single or simple view of the matter is tenable: sometimes, for example, Plato uses the dialogue form to work through a problem which is vexing him; sometimes to recommend a set of ideas to us; sometimes to play teasingly with ideas or positions or methodologies without implying much in the way of commitment; and frequently to suggest to us ways we should or should not ourselves try to philosophize. As for the Tübingen school, we may agree with them that when it introduces the Form of the Good the Republic itself indicates that readers are being offered only conjectures and images, not the thorough dialectical discussion necessary for proper understanding. But the notions of seriousness and play are less straightforward than they allow. Playing with ideas - that is, trying them out and developing them to see what might work and what will not - is the way new insights in philosophy and science are often discovered. When we meet it in Plato's dialogues it usually seems fun without being frivolous. Nor should we forget that the Platonic dialogue represents itself as a spoken conversation. It seems hard to resist the thought that we are thereby invited to treat his dialogues not as writing so much as an attempt to transcend the limitations of writing. Perhaps the idea is that they can achieve the success of living speech if treated not as texts to be interpreted (despite Plato's irresistible urge to produce texts devised precisely to elicit attempts at interpretation), but as stimuli to questions we must put principally to ourselves, or as seeds which may one day grow into philosophy in our souls.
There is widespread scholarly agreement that the following are among Plato's earliest writings: Apology, Crito, Ion, Hippias Minor, Laches and Charmides. Apology, as we have noted, best fits into the context of the decade following Socrates' death, and so does Crito, which explores the question why he did not try to escape from the condemned cell; the others are all short treatments of questions to do with virtue and knowledge, or in the case of Ion, with expertise (technē), and all are relatively simple in literary structure. The brief Euthyphro and the much longer Protagoras and Gorgias (with which Menexenus is often associated) are usually seen as having many affinities with these, and so are put at least fairly early, although here anticipations of the style or content of the mature middle-period dialogues have also been detected. The connections in thought between Lysis, Euthydemus and Hippias Major and middle-period Plato may be argued to be stronger still, even though there remain clear similarities with the dialogues generally accepted as early. We do not know whether Plato wrote or published anything before Socrates' death; Menexenus cannot be earlier than 386 bc, Ion might be datable to around 394-391 bc, but otherwise we can only guess.
All those listed above fall under the commonly used description 'Socratic dialogues', because they are seen as preoccupied with the thought of the historical Socrates as Plato understood him, in contrast with writings of the middle period, where 'Socrates' often seems to become a vehicle for exploring a more wide-ranging set of ideas (see Socrates §2). In the Socratic dialogues discussion is confined almost exclusively to ethical questions, or problems about the scope and credentials of expertise: metaphysics and epistemology and speculation about the nature and powers of the soul are for the most part notable by their absence. Use of the elenchus is prominent in them as it is not, for example, in Republic (apart from book I, sometimes regarded as an early work subsequently reused as a preface to the main body of the dialogue). The hypothesis that philosophizing in this style was the hallmark of the historical Socrates is broadly consistent with what we are given to understand about him by Xenophon, Aristotle and Plato's Apology - which is usually thought to be particularly authoritative evidence, whether or not it is a faithful representation of what Socrates really said at his trial.
How historical the historical Socrates of the hypothesis actually is we shall never know. The conjecture that many of the Socratic dialogues are early works is likewise only a guess, which gets no secure support from stylometric evidence. None the less the story of Plato's literary and philosophical development to which it points makes such excellent sense that it has effectively driven all rival theories from the field. The placing of individual dialogues within that story remains a matter for controversy; and doubts persist over how far interpretation of Plato is illuminated or obstructed by acceptance of any developmental pattern. With these provisos, the account which follows assumes the existence of a group of early Socratic dialogues in the sense explained.
The convenience of the description 'Socratic dialogues' should not generate the expectation of a single literary or philosophical enterprise in these writings. It conceals considerable variety, for example as between works devoted to articulating and defending the philosophical life and works which problematize Socratic thought as much as they exhibit its attractions. This distinction is not an exhaustive one, but provides useful categories for thinking about some of the key productions of Plato's early period.
Moral, or indeed existential, choice, to use an anachronistic expression, is the insistent focus of Apology. God has appointed Socrates, as he represents it to his judges, to live the philosophical life, putting himself and others under constant examination. The consistency of his commitment to this mission requires him now to face death rather than abandon his practice of philosophy, as he supposes for the sake of argument the court might require him to do. For confronted with the choice between disobeying God (that is, giving up philosophy) and disobeying human dictate (that is, refusing to do so), he can only take the latter option. What governs his choice is justice:
It is a mistake to think that a man worth anything at all should make petty calculations about the risk of living or dying. There is only one thing for him to consider when he acts: whether he is doing right or wrong, whether he is doing what a good man or a bad man would do.
Whether death is or is not a bad thing Socrates says he does not know. He does know that behaving wrongly and disobeying one's moral superior - whether divine or human - is bad and shameful. The demands of justice, as his conscience (or 'divine sign') interpreted them, had earlier led him to choose the life of a private citizen, conversing only with individuals, rather than the political life: for justice and survival in politics are incompatible. When he did carry out the public obligations of a citizen and temporarily held office, justice again compelled him to choose the dangerous and unpopular course of resisting a proposal that was politically expedient but contrary to the law. As for those with whom he talked philosophy, they too faced a choice: whether to make their main concern possessions and the body, or virtue and the soul; that is, what belongs to oneself, or oneself. And now the judges too must choose and determine what is just as their oath requires of them.
Crito and Gorgias continue the theme in different ways. Crito has often been found difficult to reconcile with Apology when it argues on various grounds (paternalistic and quasi-contractual) that citizens must always obey the law, unless they can persuade it that it is in the wrong. Hence, since the law requires that Socrates submit to the punishment prescribed by the court, he must accept the sentence of death pronounced on him. The higher authority of divine command stressed in Apology seems to have been forgotten. Once again, however, the whole argument turns on appeal to justice and to the choices it dictates: we must heed the truth about it, not what popular opinion says; we must decide whether or not we believe the radical Socratic proposition that retaliation against injury or injustice is never right (see Socrates §4). Gorgias, one of the longest of all the dialogues, ranges over a wide territory, but at its heart is the presentation of a choice. Socrates addresses Callicles, in whose rhetoric Nietzsche saw an anticipation of his ideal of the superman:
You see that the subject of our arguments - and on what subject should a person of even small intelligence be more serious? - is this: what kind of life should we live? The life which you are now urging upon me, behaving as a man should: speaking in the assembly and practising rhetoric and engaging in politics in your present style? Or the life of philosophy?
The dialogue devotes enormous energy to arguing that only philosophy, not rhetoric, can equip us with a true expertise which will give us real power, that is power to achieve what we want: the real not the apparent good. Only philosophy can articulate a rational and reliable conception of happiness - which turns out to depend on justice.
Contrast the works outlined in §7 with Laches and Charmides, which were very likely conceived as a pair, the one an inquiry into courage, the other into sophrosynē or moderation. Both engage in fairly elaborate scene setting quite absent from Crito and Gorgias. In both there is concern with the relation between theory and practice, which is worked out more emphatically in Laches, more elusively in Charmides. For example, in Laches Socrates is portrayed both as master of argument about courage, and as an exemplar of the virtue in action - literally by reference to his conduct in the retreat from Delium early in the Peloponnesian War, metaphorically by his persistence in dialectic, to which his observations on the need for perseverance in inquiry draw attention.
A particularly interesting feature of these dialogues is their play with duality. Socrates confronts a pair of main interlocutors who clearly fulfil complementary roles. We hear first the views of the more sympathetic members of the two pairs: the general Laches, whom Socrates identifies as his partner in argument, and the young aristocrat Charmides, to whom he is attracted. Each displays behavioural traits associated with the virtue under discussion, and each initially offers a definition in behavioural terms, later revised in favour of a dispositional analysis: courage is construed as a sort of endurance of soul, sophrosynē as modesty. After these accounts are subjected to elenchus and refuted, the other members of the pairs propose intellectualist definitions: according to Nicias (also a general), courage is knowledge of what inspires fear or confidence, while Critias identifies sophrosynē with self-knowledge.
Broad hints are given that the real author of these latter definitions is Socrates himself; and in Protagoras he is made to press Protagoras into accepting the same definition of courage. There are also hints that, as understood by their proponents here, this intellectualism is no more than sophistic cleverness, and that neither possesses the virtue he claims to understand. Both are refuted by further Socratic elenchus, and in each case the argument points to the difficulty of achieving an intellectualist account which is not effectively a definition of virtue in general as the simple knowledge of good and bad. Laches explicitly raises the methodological issue of whether one should try to investigate the parts of virtue in order to understand the whole or vice versa (here there are clear connections with the main argument of Protagoras).
Aristotle was in no doubt that Socrates 'thought all the virtues were forms of knowledge' (Eudemian Ethics 1216b6); and many moves in the early dialogues depend on the assumption that if you know what is good you will be good (see Socrates §5). But Laches and Charmides present this Socratic belief as problematical. Not only is there the problem of specifying a unique content for the knowledge with which any particular virtue is to be identified. There is also the difficulty that any purely intellectual specification of what a virtue is makes no reference to the dispositions Charmides and Laches mention and (like Socrates) exemplify. In raising this difficulty Plato is already adumbrating the need for a more complex moral psychology than Socrates', if only to do justice to how Socrates lived. If the viewpoints of Laches and Nicias are combined we are not far from the account of courage in Republic, as the virtue of the spirited part of the soul, which 'preserves through pains and pleasures the injunctions of reason concerning what is and is not fearful' (442b).
In Protagoras it is Socrates himself who works out and defends the theory that knowledge is sufficient for virtuous action and that different virtues are different forms of that knowledge (see Aretē). He does not here play the role of critic of the theory, nor are there other interlocutors who might suggest alternative perceptions: indeed Protagoras, as partner not adversary in the key argument, is represented as accepting the key premise that (as he puts it) 'wisdom and knowledge are the most powerful forces governing human affairs' (352c-d). It would be a mistake to think that Plato found one and the same view problematic when he wrote Laches and Charmides but unproblematic when he wrote Protagoras, and to construct a chronological hypothesis to cope with the contradiction. Protagoras is simply a different sort of dialogue: it displays Socratic dialectic at work from a stance of some detachment, without raising questions about it. Protagoras is an entirely different kind of work from Gorgias, too: the one all urbane sparring, the other a deadly serious confrontation between philosophy and political ambition. Gorgias unquestionably attacks hedonism; Protagoras argues for it, to obtain a suitable premise for defending the intellectualist paradox that nobody does wrong willingly, but leaves Socrates' own commitment to the premise at best ambiguous (see Socrates §6). Incommensurabilities of this kind make it unwise to attempt a relative chronology of the two dialogues on the basis of apparent incompatibilities in the positions of their two Socrates.
Space does not permit discussion of Ion, or of Hippias Minor, in which Socrates is made to tease us with the paradox - derived from his equation of virtue and knowledge - that someone who did do wrong knowingly and intentionally would be better than someone who did it unintentionally through ignorance. Interpretation of Euthyphro remains irredeemably controversial. Its logical ingenuity is admired, and the dialogue is celebrated for its invention of one of the great philosophical questions about religion: either we should do right because god tells us to do so, which robs us of moral autonomy, or god tells us to do it because it is right, which makes the will of god morally redundant.
Something more needs to be said about Lysis and Euthydemus (which share a key minor character in Ctesippus, and are heavy with the same highly charged erotic atmosphere) and Hippias Major. They all present Socrates engaging in extended question and answer sessions, although only in Hippias is this an elenchus with real bite: in the other dialogues his principal interlocutors are boys with no considered positions of their own inviting refutation. All end in total failure to achieve positive results. All make great formal play with dualities of various kinds. Unusually ingenious literary devices characterize the three works, ranging from the introduction of an alter ego for Socrates in Hippias to disruption of the argument of the main dialogue by its 'framing' dialogue in Euthydemus, at a point where the discussion is clearly either anticipating or recalling the central books of Republic. All seem to be principally preoccupied with dialectical method (admittedly a concern in every dialogue). Thus Hippias is a study in definitional procedure, applied to the case of the fine or beautiful, Lysis a study in thesis and antithesis paralleled in Plato's oeuvre only by Parmenides, and Euthydemus an exhibition of the contrast between 'eristic', that is, purely combative sophistical argument, demonstrated by the brothers Euthydemus and Dionysodorus, and no less playful philosophical questioning that similarly but differently ties itself in knots. It is the sole member of the trio which could be said with much conviction to engage - once more quizzically - with the thought of the historical Socrates about knowledge and virtue. But its introduction of ideas from Republic makes it hard to rank among the early writings of Plato. Similarly, in Lysis and Hippias Major there are echoes or pre-echoes of the theory of Forms and some of the causal questions associated with it. We may conclude that these ingenious philosophical exercises - 'gymnastic' pieces, to use the vocabulary of Parmenides - might well belong to Plato's middle period.
Needless to say, no explicit Platonic directive survives encouraging us to read Meno, Symposium and Phaedo together. But there are compelling reasons for believing that Plato conceived them as a group in which Meno and Symposium prepare the way for Phaedo. In brief, in Meno Plato introduces his readers to the non-Socratic theory of the immortality of the soul and a new hypothetical method of inquiry, while Symposium presents for the first time the non-Socratic idea of a Platonic Form, in the context of a notion of philosophy as desire for wisdom. It is only in Phaedo that all these new ideas are welded together into a single complex theory incorporating epistemology, psychology, metaphysics and methodology, and constituting the distinctive philosophical position known to the world as Platonism.
Meno and Symposium share two features which indicate Plato's intention that they should be seen as a pair, performing the same kind of introductory functions, despite enormous differences for example in dialogue form, scale and literary complexity. First, both are heavily and specifically foreshadowed in Protagoras, which should accordingly be reckoned one of the latest of Plato's early writings. At the end of Protagoras (361c) Socrates is made to say that he would like to follow up the inconclusive conversation of the dialogue with another attempt to define what virtue is, and to consider again whether or not it can be taught. This is exactly the task undertaken in Meno. Similarly, not only are all the dramatis personae of Symposium except Aristophanes already assembled in Protagoras, but at one point Socrates is represented as offering the company some marginally relevant advice on how to conduct a drinking party - which corresponds exactly to what happens at the party in Symposium (347c-348a).
Second, both Meno and Symposium are exceedingly careful not to make Socrates himself a committed proponent either of the immortality of the soul or of the theory of Forms. These doctrines are ascribed respectively to 'priests and priestesses' (Meno) and to one priestess, Diotima, in particular (Symposium); in Meno Socrates says he will not vouch for the truth of the doctrine of immortality, in Symposium he records Diotima's doubts as to whether he is capable of initiation into the mysteries (a metaphor also used of mathematics in Meno) which culminate in a vision of the Form of the Beautiful. In Symposium these warning signs are reinforced by the extraordinary form of the dialogue: the sequence of conversations and speeches it purports to record are nested inside a Chinese box of framing conversations, represented as occurring some years later and with participants who confess to inexact memory of what they heard.
Phaedo for its part presupposes Meno and Symposium. At 72e-73b Meno's argument for the immortality of the soul is explicitly recalled, while the Form of Beauty is regularly mentioned at the head of the lists of the 'much talked about' Forms which Phaedo introduces from time to time (for example, 75c, 77a, 100b). It is as though Plato relies upon our memory of the much fuller characterization of what it is to be a Form supplied in Symposium. Unlike Meno and Symposium, Phaedo represents Socrates himself as committed to Platonist positions, but takes advantage of the dramatic context - a discussion with friends as he waits for the hemlock to take effect - and makes him claim prophetic knowledge for himself like a dying swan (84e-85b). The suggestion is presumably that Platonism is a natural development of Socrates' philosophy even if it goes far beyond ideas about knowledge and virtue and the imperatives of the philosophical life to which he is restricted in the early dialogues.
Meno is a dialogue of the simplest form and structure. It consists of a conversation between Socrates and Meno, a young Thessalian nobleman under the spell of the rhetorician Gorgias, interrupted only by a passage in which Socrates quizzes Meno's slave, and then later by a brief intervention in the proceedings on the part of Anytus, Meno's host and one of Socrates' accusers at his trial. The dialogue divides into three sections: an unsuccessful attempt to define what virtue is, which makes the formal requirements of a good definition its chief focus; a demonstration in the face of Meno's doubts that successful inquiry is none the less possible in principle; and an investigation into the secondary question of whether virtue can be taught, pursued initially by use of a method of hypothesis borrowed from mathematics. Although the ethical subject matter of the discussion is thoroughly Socratic, the character and extent of its preoccupation with methodology and (in the second section) epistemology and psychology are not. Nor is Meno's use of mathematical procedures to cast light on philosophical method; this is not confined to the third section. Definitions of the mathematical notion of shape are used in the first section to illustrate for example the principle that a definition should be couched in terms that the interlocutor agrees are already known. And the demonstration of an elenchus with a positive outcome which occupies the second is achieved with a geometrical example.
It looks as though Plato has come to see in the analogy with mathematics hope for more constructive results in philosophy than the Socratic elenchus generally achieved in earlier dialogues. This is a moral which the second and third sections of Meno make particularly inviting to draw. In the second Socrates is represented as setting Meno's untutored slave boy a geometrical problem (to determine the length of the side of a square twice the size of a given square) and scrutinizing his answers by the usual elenctic method. The boy begins by thinking he has the answer. After a couple of mistaken attempts at it he is persuaded of his ignorance. So far so Socratic. But then with the help of a further construction he works out the right answer, and so achieves true opinion, which it is suggested could be converted into knowledge if he were to go through the exercise often. The tacit implication is that if elenchus can reach a successful outcome in mathematics, it ought to be capable of it in ethics too.
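For reference, the geometry of the boy's problem can be put in modern notation (the symbol a is introduced here only for this restatement and is of course not Plato's, who proceeds by diagram and question; the answer the boy is eventually led to is the square constructed on the diagonal of the given square):

\[
\text{side } a \;\Longrightarrow\; \text{area } a^{2}, \qquad \text{diagonal} = a\sqrt{2}, \qquad \bigl(a\sqrt{2}\bigr)^{2} = 2a^{2}.
\]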
None the less direct engagement with the original problem - what virtue is - is abandoned, and the discussion turns to the issue of its teachability, and to the method of hypothesis. Here the idea is that instead of investigating the truth of proposition p directly 'you hit upon another proposition h ("the hypothesis"), such that p is true if and only if h is true, and then investigate the truth of h, undertaking to determine what would follow (quite apart from p) if h were true and, alternatively, if it were false' (Gregory Vlastos' formulation, 1991). After illustrating this procedure with an exceedingly obscure geometrical example, Socrates makes a lucid application of it to the ethical problem before them, and offers the Socratic thesis that virtue is knowledge as the hypothesis from which the teachability of virtue can be derived. The subsequent examination of this hypothesis comes to conclusions commentators have found frustratingly ambiguous. But the survival and development of the hypothetical method in Phaedo and Republic are enough to show Plato's conviction of its philosophical potential.
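Restated schematically, the formulation just quoted and the application Socrates makes of it come to this (the letters p and h are those of the quotation; the arrow merely abbreviates 'if and only if'):

\[
\text{to assess } p, \text{ find } h \text{ with } p \leftrightarrow h, \text{ then ask what follows if } h \text{ is true and if it is false;}
\]
\[
\text{in Meno: } p = \text{`virtue is teachable'}, \qquad h = \text{`virtue is knowledge'.}
\]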
The slave boy episode is originally introduced by Socrates as a proof of something much more than the possibility of successful inquiry. The suggestion is that the best explanation of that possibility is provided by the doctrine of the immortality of the soul, a Pythagorean belief which makes the first of its many appearances in Plato's dialogues in Meno (see Psychē; Pythagoras §2; Pythagoreanism §3). More specifically, the idea as Socrates presents it is that the soul pre-exists the body, in a condition involving conscious possession of knowledge. On entry into the body it forgets what it knows, although it retains it as latent memory. Discovery of the sort of a priori knowledge characteristic of mathematics and (as Plato supposes) ethics is a matter of recollecting latent memory. This is just what happens to the slave boy: Socrates does not impart knowledge to him; he works it out for himself by recovering it from within. Once again, although the Socrates of Meno does not in the end subscribe to belief in learning as recollection of innate knowledge, it is embraced without equivocation in Phaedo, as also in the later Phaedrus. But what exactly is recollected? Phaedo will say: knowledge of Forms. Meno by contrast offers no clues. The introduction of the theory of Forms is reserved for Symposium.
Symposium has the widest appeal of all Plato's writings. No work of ancient Greek prose fiction can match its compulsive readability. Plato moves through a rich variety of registers, from knockabout comedy and literary parody to passages of disturbing fantasy or visionary elevation, culminating in a multiply paradoxical declaration of love for Socrates put in the mouth of a drunken Alcibiades. Love (eros) is the theme of the succession of encomia or eulogies delivered at the drinking party (symposion) hosted by the playwright Agathon: not sublimated 'Platonic' love between the sexes, but the homoerotic passion of a mature man for a younger man or indeed a teenager. This continues until Aristophanes (one of the guests) and Socrates broaden and transform the discussion. Socrates' speech, which is a sort of anti-eulogy, develops a general theory of desire and its relation to beauty, and it is in this context that the idea of an eternal and changeless Form makes its first unequivocal appearance in Plato's oeuvre. Thus Plato first declares himself a metaphysician not in a work devoted to philosophical argument, but in a highly rhetorical piece of writing, albeit one in which fashionable models of rhetoric are subverted.
Love and beauty are first connected in some of the earlier encomia, and notably in Agathon's claim that among the gods 'Love is the happiest of them all, for he is the most beautiful and best' (195a). This thesis is subjected to elenchus by Socrates in the one argumentative section of the dialogue. Agathon is obliged to accept that love and desire are necessarily love and desire for something, namely, something they are in need of. Following his concession Socrates argues that beauty is not what love possesses but precisely the thing it is in need of. This argument constitutes the key move in the philosophy of the dialogue, which Plato elaborates in various ways through the medium of Diotima, the probably fictitious priestess from whom Socrates is made to claim he learned the art of love in which he has earlier (177d) claimed expertise. First she tells a myth representing Love as the offspring of poverty and resource, and so - according to her interpretation - occupying the dissatisfied intermediate position between ignorance and wisdom which characterizes philosophy: hence presumably the explanation of Socrates' claim to be an expert in love, since the pursuit of wisdom turns out to be the truest expression of love. Then she spells out the theoretical basis for this intellectualist construction of what love is. The theory has rightly been said to combine 'a psychology that is strictly or loosely Socratic with a metaphysics that is wholly Platonic' (Price 1995).
This psychology holds that a person who desires something wants not so much the beautiful as the good, or more precisely happiness conceived as permanent possession of the good. Love is a particular species of desire, which occurs when perception of beauty makes us want to reproduce. (Socrates is made to express bafflement at this point: presumably an authorial device for indicating that Diotima's line of thought is now moving beyond anything Plato considered strictly Socratic.) Diotima goes on to explain that reproduction is the way mortal animals pursue immortality, interpreted in its turn in terms of the longing for permanent possession of good with which she has just identified desire. Other animals and many humans are content with physical reproduction, but humans are capable of mental creation when inspired by a beautiful body, and still more by a beautiful soul or personality. This is how the activities of poets and legislators and the virtuous are to be understood.
Perhaps Plato thought these ideas, although no longer Socratic, provided a convincing explanation of the drive which powered Socrates' philosophical activity in general, and made him spend so much time with beautiful young men in particular. However that may be, in what follows he has Diotima speak of greater mysteries which 'I do not know whether you [that is, Socrates] would be able to approach'. These are the subject of a lyrical account of how a true lover moves step by step from preoccupation with the beauty of a single beloved, to appreciating that there is one and the same beauty in all bodies and so loving them all, and then to seeing and loving beauty in souls or personalities and all manner of mental creations, until he 'turns to the great sea of beauty, and gazing upon this gives birth to many gloriously beautiful ideas and theories, in unstinting love of wisdom [that is, philosophy]' (210d). The final moment of illumination arrives when the philosopher-lover grasps the Beautiful itself, an experience described as the fulfilment of all earlier exertions. Unlike other manifestations of beauty the Form of the Beautiful is something eternal, whose beauty is not qualified in place or time or relation or respect. It is just the one sort of thing it is, all on its own, whereas other things that are subject to change and decay are beautiful by participation in the Form. Only someone who has looked upon it will be capable of giving birth not to images of virtue (presumably the ideas and theories mentioned a little earlier), but to virtue itself, and so achieving immortality so far as any human can.
It is striking that the doctrine of the immortality of the soul forms no part of Diotima's argument. If we assume the scholarly consensus that Symposium postdates Meno, this poses something of a puzzle. One solution might be to suppose that, although Meno presents the doctrine, Plato is himself not yet wholly convinced of its truth, and so gives it no role in his account of the desire for immortality in Symposium. This solution might claim support from the fact that Phaedo takes upon itself the task of arguing the case for the immortality of the soul much more strenuously than in Meno, and in particular offers a much more careful and elaborate version of the argument from recollection. Additionally or alternatively, we may note that when Plato presents the doctrine of the immortality of the soul in the dialogues, he always treats it as something requiring explicit proof, unlike the theory of Forms, which generally figures as a hypothesis recommending itself by its explanatory power or its ability to meet the requirements of Plato's epistemology. Since Diotima's discourse is not constructed as argument but as the explication of an idea, it is not the sort of context which would readily accommodate the kind of demonstration Plato apparently thought imperative for discussion of the immortality of the soul.
The departure point for Phaedo's consideration of the fate of the soul after death is very close to that idea of love as desire for wisdom which Diotima offers at the start of her speech in Symposium. For Socrates starts with the pursuit of wisdom, which he claims is really a preparation for death. This is because it consists of an attempt to escape the restrictions of the body so far as is possible, and to purify the soul from preoccupation with the senses and physical desires so that it can think about truth, and in particular about the Forms, which are accessible not to sense perception but only to thought. Pure knowledge of anything would actually require complete freedom from the body. So given that death is the separation of soul from body, the wisdom philosophers desire will be attainable in full only when they are dead. Hence for a philosopher death is no evil to be feared, but something for which the whole of life has been a training. The unbearably powerful death scene at the end of the dialogue presents Socrates as someone whose serenity and cheerfulness at the end bear witness to the truth of this valuation.
Symposium implied that a long process of intellectual and emotional reorientation was required if someone was to achieve a grasp of the Form of Beauty. Phaedo has sometimes been thought to take a different view: interpreters may read its argument about recollecting Forms as concerned with the general activity of concept formation in which we all engage early in life. In fact the passage restricts recollection of Forms to philosophers, and suggests that the knowledge they recover is not the basic ability to deploy concepts (which Plato seems in this period to think a function of sense experience), but hard-won philosophical understanding of what it is to be beautiful or good or just. The interlocutors voice the fear that once Socrates is dead there will be nobody left in possession of that knowledge; and the claim that pure knowledge of Forms is possible only after death coheres with the Symposium account very well, implying as it does that the path to philosophical enlightenment is not just long but a journey which cannot be completed in this life.
The proposal that the soul continues to exist apart from the body after death is immediately challenged by Socrates' interlocutors. Much of the rest of Phaedo is taken up with a sequence of arguments defending that proposal and the further contention that the soul is immortal, pre-existing the body and surviving its demise for ever. The longest and most ambitious of these arguments is the last of the set. It consists in an application of the method of hypothesis, which is explained again in a more elaborate version than that presented in Meno. The hypothesis chosen is the theory of Forms, or rather the idea that Forms function as explanations or causes of phenomena: beautiful things are beautiful by virtue of the Beautiful, large things large by virtue of the Large, and so on. Socrates is made to represent his reliance on this apparently uninformative or 'safe and simple' notion of causation as a position he has arrived at only after earlier intellectual disappointments: first with the inadequacies of Presocratic material causes, then with the failure of Anaxagoras' promise of a teleological explanation of why things are as they are (see Anaxagoras §4).
He soon goes on to argue however that the hypothesis can be used to generate a more sophisticated model of causation. Instead of proposing merely that (for example) hot things are hot by virtue of the Hot, we may legitimately venture the more specific explanation: 'Hot things are hot by virtue of fire', provided that it is true that wherever fire exists, it always heats things in its vicinity, being itself hot and never cold. After elaborating this point Socrates is ready to apply the model to the case of life and soul. By parity of reasoning, we may assert that living things are alive not just in virtue of life, but in virtue of soul, given that wherever soul exists it makes things it occupies alive, being itself alive and never dead. From this assertion there appears to follow the conclusion whose derivation is the object of the exercise: if soul is always alive and never dead, it must be immortal (that is, incapable of death) and so imperishable.
Phaedo, like Republic, ends with a sombre myth of last judgment and reincarnation, designed primarily to drive home the moral implications of Plato's distinctive version of soul-body dualism. It reminds us of the Pythagorean origins of the doctrine of the immortality of the soul. Yet the Platonism of Phaedo owes a great deal also to the metaphysics of Parmenides. Both here and in Symposium the characterization of Forms as simple eternal beings, accessible only to thought, not the senses, and the contrast both dialogues make with the changing and contradictory world of phenomena, are couched in terms borrowed from Parmenides and the Eleatic tradition which he inaugurated. Platonism can accordingly be seen as the product of an attempt to understand a fundamentally Socratic conception of philosophy and the philosophical life in the light of reflection on these two powerful Presocratic traditions of thought, using the new methodological resources made available by geometry.
Republic is misleadingly titled. The Greek name of the dialogue is Politeia, which is the standard word for constitution or ordering of the political structure: 'political order' would give a better sense of what Plato has in mind. There is a further and deeper complication. Once you start reading the dialogue you find that it is primarily an inquiry into justice, conceived as a virtue or moral excellence of individual persons. The philosophical task it undertakes is the project of showing that justice so conceived is in the best interests of the just person, even if it brings nothing ordinarily recognizable as happiness or success, or indeed (as with the sentence of death passed on Socrates) quite the opposite. Thus Republic carries forward the thinking about justice begun in early dialogues such as Apology, Crito and Gorgias. Why, then, the title's suggestion that it is a work of political rather than moral philosophy?
One way of answering this question is to attend to the formal structure of Republic. After book I, an inconclusive Socratic dialogue which none the less introduces, particularly in the conversation with Thrasymachus, many of the themes pursued in the rest of the work, the interlocutors agree to take an indirect approach to the problem of individual justice: they will consider the nature of justice and injustice in the polis, that is the (city-)state, in the hope that it will provide an illuminating analogy. Books II-IV spell out the class structure required in a 'good city'. It is suggested that in such a state political justice consists in the social harmony achieved when each class (economic, military, governing) performs its own and only its own function. This model is then applied to the individual soul (see Psychē). Justice and happiness for an individual are secured when each of the parts of the soul (appetite, emotion, reason) performs the role it should in mutual harmony. In working out the idea of psychic harmony, Plato formulates a conception of the complexity of psychological motivation, and of the structure of mental conflict, which leaves the simplicities of Socratic intellectualism far behind, and one which has reminded interpreters of Freudian theory, particularly in books VIII-IX. Here he examines different forms of unjust political order (notably oligarchy, democracy and tyranny) and corresponding conditions of order, or rather increasing disorder, in the soul.
Political theory therefore plays a large part in the argument of the dialogue, even though the ultimate focus is the moral health of the soul, as is confirmed by the conclusion of book IX. Socrates suggests that it may not matter whether we can actually establish a truly just political order, provided we use the idea of it as a paradigm for founding a just city within our own selves.
This account of Republic omits the central books V-VII. These explore the notion of political order much further than is necessary for the purposes of inquiry into individual justice. This is where Plato develops the notion of a communistic governing class, involving the recruitment of talented women as well as men, the abolition of the family, and institution of a centrally controlled eugenic breeding programme. And it is where, in order to meet the problem of how the idea of the just city he has been elaborating might ever be put into practice, he has Socrates introduce philosopher-rulers:
Unless either philosophers rule in our cities or those whom we now call rulers and potentates engage genuinely and adequately in philosophy, and political power and philosophy coincide, there is no end, my dear Glaucon, to troubles for our cities, nor I think for the human race.
What Plato perhaps has most in mind when he makes Socrates speak of 'troubles' is, as well as civil war, the corruption he sees in all existing societies. As he acknowledges, this makes the emergence of an upright philosopher-ruler an improbability - and incidentally leaves highly questionable the prospects of anyone but a Socrates developing moral order within the soul when society without is infected with moral disorder.
Here we touch on another broadly political preoccupation of Republic, worked out at various places in the dialogue. It offers among other things a radical critique of Greek cultural norms. This is highlighted in the censorship of Homer proposed in books II and III, and in the onslaught on the poets, particularly the dramatists, in book X, and in their expulsion from the ideal city. But these are only the more memorable episodes in a systematic attack on Greek beliefs about gods, heroes and the departed, on the ethical assumptions underlying music, dance and gymnastics (see Mimēsis), and again erotic courtship, and on medical and judicial practice. Republic substitutes its own austere state educational programme, initially focused on the training of the emotions, but subsequently (in books VI and VII) on mathematics and philosophy. Plato sees no hope for society or the human race without a wholesale reorientation, fostered by an absolute political authority, of all the ideals on which we set our hearts and minds.
Republic itself is written in such a way as to require the reader to be continually broadening perspectives on the huge range of concerns it embraces, from the banalities of its opening conversation between Socrates and the aged Cephalus to its Platonist explication of the very notion of philosophy in the epistemology and metaphysics of books V-VII. At the apex of the whole work Plato sets his presentation of the Form of the Good, as the ultimate goal of the understanding that philosophy pursues by use of the hypothetical method. The dialogue offers a symbol of its own progress in the potent symbol of the cave. We are like prisoners chained underground, who can see only shadows of images flickering on the wall. What we need is release from our mental shackles, and a conversion which will enable us gradually to clamber out into the world above and the sunlight. For then, by a sequence of painful reorientations, we may be able to grasp the Good and understand how it explains all that there is.
Parmenides is that rare phenomenon in philosophy: a self-critique. Plato here makes his own theory of Forms the subject of a penetrating scrutiny which today continues to command admiration for its ingenuity and insight. Theaetetus (datable to soon after 369 bc) also reverts to Plato's critical manner. It applies an enriched variant of the Socratic elenchus to a sequence of attempts to define knowledge. The confidence of Phaedo and Republic that Platonist philosophers are in possession of knowledge and can articulate what it consists in is nowhere in evidence, except in a rhetorical digression from the main argument. Methodological preoccupations are dominant in both works. Parmenides suggests that to defend the Forms against its critique, one would need to be much more practised in argument than is their proponent in this dialogue (a young Socrates fictively encountering a 65-year old Parmenides and a middle-aged Zeno). And it sets out a specimen of the sort of exercise required, running to many pages of purely abstract reasoning modelled partly on the paradoxes of Zeno of Elea, partly on Parmenides' deductions in the Way of Truth (see Parmenides §§3-8). Theaetetus likewise presents itself, initially more or less explicitly, later implicitly, as a model of how to go about testing a theory without sophistry and with due sympathy. While the conclusions achieved by this 'midwifery' - as Socrates here calls it - are as devastatingly negative as in the early dialogues, we learn much more philosophy along the way. Many readers find Theaetetus the most consistently rewarding of all the dialogues.
A sketch of the principal concerns of the two dialogues will bring out their radical character. Parmenides raises two main questions about Forms. First, are there Forms corresponding to every kind of predicate? Not just one and large, or beautiful and just, familiar from the middle period dialogues, but man and fire, or even hair and dirt? Socrates is represented as unclear about the issue. Second, the idea that other things we call for example 'large' or 'just' are related to the Form in question by participation is examined in a succession of arguments which seek to show that, however Forms or the participation relation are construed, logical absurdities of one kind or another result. The most intriguing of these has been known since Aristotle as the Third Man: if large things are large in virtue of something distinct from them, namely the Form of Large, then the Large itself and the other large things will be large in virtue of another Form of Large - and so ad infinitum.
Theaetetus devotes much of its space to considering the proposal that knowledge is nothing but sense perception, or rather to developing and examining two theories with which that proposal is taken to be equivalent: the view of Protagoras (§3) that truth is relative, since 'man is the measure of all things', and that of Heraclitus that everything is in flux, here considered primarily in application to the nature of sense perception. The dialogue is home to some of Plato's most memorable arguments and analogies. For example, Protagoreanism is attacked by the brilliant (although perhaps flawed) self-refutation argument: if man is the measure of all things, then the doctrine of the relativity of truth is itself true only in so far as it is believed to be true; but since people in general believe it to be false, it must be false. The next section of Theaetetus worries about the coherence of the concept of false belief. Here the soul is compared to a wax tablet, with false belief construed as a mismatch between current perceptions and those inscribed on the tablet, or again to an aviary, where false belief is an unsuccessful attempt to catch the right bird (that is, piece of knowledge). In the final section the interlocutors explore the suggestion that knowledge must involve the sort of complexity that can be expressed in a logos or statement. Socrates' 'dream' that such knowledge must be built out of unknowable simples fascinated Wittgenstein (§5), who saw in it an anticipation of the theory of his Tractatus.
Are we to infer that in opening or reopening questions of this kind Plato indicates that he is himself in a real quandary about knowledge and the Forms? Or is his main target philosophical complacency in his readers, as needing to be reminded that no position is worth much if it cannot be defended in strenuous argument? Certainly in the other two dialogues grouped here with Parmenides and Theaetetus the theory of Forms is again in evidence, presented as a view the author is commending to the reader's intellectual sympathies. Cratylus is a work whose closest philosophical connections are with Theaetetus, although its relative date among the dialogues is disputed. It is a pioneering debate between rival theories of what makes a word for a thing the right word for it: convention, or as Cratylus holds, a natural appropriateness - sound somehow mirroring essence (see Language, ancient philosophy of §2). Underlying Cratylus' position is an obscurely motivated commitment to the truth of Heracliteanism (see Cratylus). For present purposes what is of interest is the final page of the dialogue, which takes the theory of Forms as premise for an argument showing that the idea of an absolutely universal Heraclitean flux is unsustainable. As for Phaedrus, it contains one of the most elevated passages of prose about the Forms that Plato ever wrote.
The context is an exemplary rhetorical exercise in which Symposium's treatment of the philosophical lover's attraction to beauty is reworked in the light of Republic's tripartition of the soul. Subsequently Plato has Socrates dismiss the speech as 'play', useful only for the methodological morals about rhetorical procedure we happen to be able to derive from it - together with a preceding denunciation of love by Socrates, capping one by his interlocutor Phaedrus - if we are dialecticians. This comment has led some readers to conjecture that Phaedrus accordingly marks Plato's formal leave-taking of the theory of Forms: in retrospect he sees it more as rhetoric than as philosophy or dialectic, which will henceforward confine itself to something apparently less inspiring - the patient, thorough, comprehensive study of similarities and differences. Yet Phaedrus is pre-eminently a dialogue written not to disclose its author's mind, but to make demands on the sophisticated reader's. Perhaps Socrates' great speech on the philosophical lover is 'play' not absolutely, but only relative to the controlling and unifying preoccupation of the dialogue, which is to work through a fresh examination of rhetoric, going beyond Gorgias in explaining how it can be a genuine form of expertise, based on knowledge of truth and variously geared to the various psychological types to which oratory addresses itself. We might speculate that Plato writes the speech as he does precisely because he thinks or hopes many of his readers will be of a type persuadable to the philosophical life by its vision of the soul's desire for the Beautiful.
The theory of Forms also figures prominently in Timaeus. Timaeus is Plato's one venture into physical theory, and appropriately has in the Italian Greek Timaeus someone other than Socrates as main speaker. It is presented as an introduction to the story of Atlantis, allegedly an island power defeated by the prehistoric Athenians, and mentioned only by Plato among classical Greek authors. The conflict between Atlantis and Athens was to be the subject of Critias, conceived as a dialogue that would demonstrate the political philosophy of Republic in practice. But Critias was never completed, so Timaeus stands as an independent work.
The argument of Timaeus is based on the premise that the universe is not eternal but created - although debate has raged from antiquity onwards whether this means created in time, or timelessly dependent on a first cause. From the order and beauty of the universe Plato infers a good creator or craftsman (dēmiourgos), working on pre-existing materials (with their own random but necessary motions) from an eternal blueprint encoding life and intelligence: namely, the Form of Animal. The greater part of Timaeus consists in an account of how first the universe (conceived of as a living creature), then humans are designed from the blueprint for the best. Much use is made of mathematical models, for example for the movements of the heavenly bodies and the atomistic construction of the four elements. The account is presented as inevitably only a 'likely story', incapable of the irrefutable truths of metaphysics.
There is no more austere or profound work of metaphysics in Plato's oeuvre than Sophist. Like many of the post-Republic dialogues it is 'professional' philosophy, probably written primarily for Plato's students and associates in the Academy. The style of Sophist and the remaining works to be discussed is syntactically tortuous and overloaded with abstraction and periphrasis; they are altogether lacking in literary graces or dramatic properties which might commend them to a wider readership. Sophist's main speaker is a stranger from Elea, symbolizing the Parmenidean provenance of the problem at the heart of the long central section of the dialogue: how is it possible to speak of what is not (see Parmenides §2)? This puzzle is applied for example both to the unreality of images and to falsehood, understood as what is not the case. The solution Plato offers required some revolutionary moves in philosophical logic, such as the explicit differentiation of identity from predication, and the idea that subject and predicate play different roles in the syntax of the sentence. These innovations and their bearing on analysis of the verb 'to be' have made Sophist the subject of some of the most challenging writing on Plato in the twentieth century.
The companion dialogue Politicus or Statesman addresses more squarely than Republic did the practical as distinct from the theoretical knowledge of the ideal statesman. Its contribution to this topic consists of three major claims. First is the rejection of the sovereignty of law. Plato has nothing against law as a convenient but imprecise rule of thumb in the hands of an expert statesman, provided it does not prevent him using his expertise. Making law sovereign, on the other hand, would be like preferring strict adherence to a handbook of navigation or a medical textbook to the judgment of the expert seafarer or doctor. If you have no such expert available, a constitution based on adherence to law is better than lawlessness, but that is not saying much. What law cannot do that expert rulers can and must is judge the kairos: discern the right and the wrong 'moment' to undertake a great enterprise of state. This proposition follows from the second of Plato's key claims, which is represented as one true of all practical arts: real expertise consists not of measuring larger and smaller, but in determining the norm between excess and defect - a notion which we ordinarily think more Aristotelian than Platonic (see Aristotle §22), although it recurs in a different guise in Philebus. Finally, Plato thinks we shall only get our thinking straight on this as on any matter if we find the right - usually homely - model. Statesman makes the statesman a sort of weaver. There are two strands to the analogy. First, like weaving statesmanship calls upon many subordinate skills. Its job is not to be doing things itself, but to control all the subordinate functions of government, and by its concern for the laws and every other aspect of the city weave all together. Second, the opposing temperaments of the citizens are what most need weaving together if civil strife is to be avoided, and (as in Republic) expert rulers will use education and eugenics to that end.
Statesman shares themes with both Philebus and Laws. Philebus is the one late dialogue in which Socrates is principal speaker, as befits its ethical topic: the question whether pleasure or understanding is the good, or at least the more important ingredient in the good life. After so much insistence in middle-period dialogues on the Form as a unity distinct from the plurality of the phenomena, it comes as a shock to find Socrates stressing at the outset that there is no merit in reiterating that pleasure or understanding is a unity. The skill resides in being able to determine what and how many forms of understanding and pleasure there are. What Philebus goes on to offer next is a model for thinking about how any complex structure is produced, whether a piece of music or the universe itself. It requires an intelligent cause creating a mixture by imposing limit and proportion on something indeterminate. This requirement already indicates the main lines of the answer to our problem, at any rate, if it is accepted that pleasure is intrinsically indeterminate. Clearly intelligence and understanding will be shaping forces in the good life, but pleasures are only admissible if suitably controlled. At the adjudication at the end of the dialogue, this is just the result we get. The majority of the many forms of pleasure defined and examined in the course of the dialogue are rejected. They do not satisfy the criteria of measure and proportion which are the marks of the good.
The vast Laws is in its way the most extraordinary of all Plato's later writings, not for its inspiration (which flags) but for its evidence of tireless fascination with things political. Its relation to Republic and Statesman has been much debated. What is clear is that Plato is legislating - through the last eight of its twelve long books - for a second best to the ideal state and ideal statesman of Republic, with greater zeal than Statesman might have led one to expect. Is this because he has lost faith in those ideals, which still seemed alive in Statesman at least as ideals? That view is in danger of overlooking Republic's own indication that it would be wrong to expect in practice anything but an approximation of the ideal.
Philosophers do not often read Laws. But book X presents Plato's natural theology, as the background to laws dealing with atheists. And perhaps the most interesting proposal in the dialogue concerns the very idea of legislation. It is the notion of a 'prelude' to a law, which is the attempt the legislator should make to persuade citizens of the necessity of the prescriptions of the law itself. Here is a theme which relates interestingly to conceptions of reason, necessity and persuasion found in several other dialogues, notably Republic and Timaeus.
Plato's influence pervades much of subsequent Western literature and thought. Aristotle was among those who came to listen to him in the 'school' he founded in the Academy; and a great deal of Aristotle's work is conceived in explicit or implicit response to Plato. Other philosophical traditions flourished after Aristotle's time in the last centuries bc, and the Academy of the period read Plato through sceptical spectacles (see Arcesilaus). But from the first century ad onwards Platonism in various forms, often syncretistic, became the dominant philosophy of the Roman Empire (see Platonism, Early and Middle), especially with the rise of Neoplatonism in late antiquity (see Neoplatonism). Some of the Fathers of the early Greek Church articulated their theologies in Platonist terms; and through Augustine in particular Plato shaped, for example, the Western Church's conception of time and eternity (see Patristic philosophy). A Neoplatonist version of him prevailed among the Arabs (see Platonism in Islamic philosophy).
With the translation of Plato into Latin in the high Middle Ages (see Platonism, medieval) and the revival of Greek studies in the Renaissance, Platonism (again in a Neoplatonic guise) once more gripped the minds of learned thinkers in the West, for example at the Medici court in fifteenth century Florence (see Platonism, Renaissance). But none of the great philosophers of the modern era has been a Platonist, even if Plato was an important presence in the thought of a Leibniz or a Hegel or a Russell. Probably he has never been studied more intensively than in the late twentieth century. Thanks to the availability of cheap translations in every major language and to his position as the first great philosopher in the Western canon, he figures in most introductory courses offered every year to tens of thousands of students throughout the developed world. MALCOLM SCHOFIELD | http://www.muslimphilosophy.com/ip/rep/A088 | 13
29 | One of the most interesting aspects of Shwa is trivalent logic. The only natural language with trivalent logic, as far as I know, is Aymará, a relative of Quechua spoken on the altiplano of Bolivia (see www.aymara.org), whose trivalent logic was elucidated by Iván Guzmán de Rojas. However, Aymará only has trivalent modal logic, while Shwa logic is completely trivalent.
This chapter will present trivalent logic in more depth than you need to use it, because it's interesting. In preparation, we'll recap ternary numbers and bivalent logic for you in the first three sections.
Many people are familiar with binary, or base 2 numbers, which are the basis for both digital computing and classical (Boolean or Aristotelian) propositional logic. In binary notation, only the digits 0 and 1 are used, and each successive place represents another power of two. Binary is also the justification for octal (base 8) and hexadecimal (base 16) arithmetic, since each octal or hexadecimal digit stands for a group of three or four binary digits.
This section will introduce you to another notation: ternary numbers. It's called ternary because there are three digits and each successive place represents another power of three, but instead of using the digits 0, 1, and 2 (which I would call trinary or base 3), ternary uses the digits 0 and 1 and the minus sign −, which represents −1. Since the biggest benefits of this approach result from the symmetry of 1 and −1, this notation is also called balanced ternary. Just like a single binary digit is called a bit, a single ternary digit should be called a tert (not trit or tit).
In ternary, the numbers zero and one are represented by 0 and 1, as in binary and decimal (base 10) notation. It first becomes interesting at the number two, which is represented in ternary as 1−. The digit 1 is in the 3s place, representing the value 3, and the digit − is in the 1s place, representing the value −1. Adding 3 and −1 together gives us 2. Likewise, three is written 10, and four is written 11. Here are the first few numbers in ternary: one 1, two 1−, three 10, four 11, five 1−−, six 1−0, seven 1−1, eight 10−, nine 100, ten 101.
In decimal notation, negative numbers are preceded by a minus sign, which is kind of an 11th digit, since you can't write all the integers without it. (It's too bad that we don't use a leading 0 to represent negative numbers instead, since the two uses are disjoint.) In binary computers, negative integers are represented in two's-complement notation, which requires an upper limit on the size of integers. If 16-bit integers are used, the highest bit, which would normally represent 32,768 (2 to the 15th power), instead represents −32,768! This makes the 16-bit binary representation of −1 a series of sixteen 1s: 1111111111111111, which is to be interpreted as 32767−32768. Arcane!
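As a quick illustration (a Python one-liner of my own, not from the original page), you can check that pattern directly:

# The 16-bit two's-complement pattern for -1 is sixteen 1s.
print(format(-1 & 0xFFFF, "016b"))              # 1111111111111111
print(0b0111111111111111 - 0b1000000000000000)  # 32767 - 32768 = -1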
In contrast, −1 in ternary notation is simply −, while −2 is −1 and −3 is −0. In general in ternary notation, negative numbers start with −, and the negation of any number just replaces 1s with −s and vice versa. The number eight is 10−, and negative eight is −01. As you saw in Shwa's Reverse notation for numbers, a balanced notation makes negative integers much cleaner.
We won't go much deeper into ternary numbers here, but the single-digit addition, subtraction and multiplication tables (with A on the left) follow directly from the digit values: for instance, 1 + 1 = 1− (two), 1 + − = 0, 1 × − = −, and − × − = 1. Here are some worked examples, sketched in code below.
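A minimal Python sketch (the function names are my own, purely illustrative) that converts integers to and from balanced ternary, negates by swapping digits, and prints a few sums and products:

# Balanced ternary with digits '1', '0' and '-' (for -1).  Illustrative only.
def to_balanced_ternary(n):
    if n == 0:
        return "0"
    digits = []
    while n != 0:
        r = n % 3
        if r == 2:                  # treat remainder 2 as digit -1 plus a carry
            digits.append("-")
            n = (n + 1) // 3
        else:
            digits.append(str(r))
            n = n // 3
    return "".join(reversed(digits))

def from_balanced_ternary(s):
    value = 0
    for ch in s:
        value = 3 * value + {"1": 1, "0": 0, "-": -1}[ch]
    return value

def negate(s):
    # Negation just swaps 1s and -s, as noted above.
    return s.translate(str.maketrans("1-", "-1"))

for n in (1, 2, 3, 4, 8, -1, -2, -8):
    print(n, "->", to_balanced_ternary(n))           # 2 -> 1-, 8 -> 10-, -8 -> -01
for a, b in ((2, 3), (8, -5)):
    print(to_balanced_ternary(a), "+", to_balanced_ternary(b), "=", to_balanced_ternary(a + b))
    print(to_balanced_ternary(a), "x", to_balanced_ternary(b), "=", to_balanced_ternary(a * b))
assert from_balanced_ternary(negate("10-")) == -8    # negating eight gives negative eight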
While many of you are familiar with binary numbers, most readers will still benefit from a quick recap of classical bivalent propositional logic before proceeding to the trivalent case. This logic is also called Boolean logic after the logician George Boole, although many thinkers from Aristotle to Bertrand Russell made major contributions. The word bivalent means two-valued: in this system, propositions are either true or false.
We aren't going to concern ourselves with the formal aspects of logic, the notions of proof or the development of theorems from axioms. Instead, we're going to offer a whirlwind tour of bivalent notation to prepare you for what follows.
The relationship of bivalent logic to binary arithmetic is one of homology, meaning that entities and relationships in one field often correspond to similar ones in the other. For example, consider the bivalent operations of disjunction ∨ and conjunction ∧, the logical operations corresponding to our words or (in its inclusive sense, where both choices might be true) and and.
The way to interpret the table on the left is that if proposition A is false and proposition B is false, then proposition A∨B (A or B) is false; otherwise, it's true. In other words, if either of two propositions is true, then their disjunction is, too. Likewise, the interpretation of the table on the right is that if proposition A is true and proposition B is true, then proposition A∧B (A and B) is true; otherwise, it's false. In other words, if either of two propositions is false, then their conjunction is, too.
There are actually two homologies with binary arithmetic. The first matches the two operations above with the binary max and min operations, which return the larger and smaller of two numbers, respectively. The number one is assigned to the truth value true, and zero is assigned to the truth value false.
The other homology with arithmetic matches the two logical operations with binary addition and multiplication, but involves the signs of the numbers, not the numbers themselves. In this case, all the positive numbers are assigned to the truth value true.
There is even a third leg to this homology: set theory. In elementary set theory, we are concerned with whether an element e is in set A, symbolized as e∈A, or not, symbolized as e∉A. The set of all elements which are in either of two sets is called their union ∪, while the set of all elements which are in both of two sets is called their intersection ∩. Their membership tables mirror the logical ones: e∈A∪B exactly when e∈A or e∈B, and e∈A∩B exactly when e∈A and e∈B.
These two operators, disjunction/addition/union and conjunction/multiplication/intersection, are called binary or two-place operators because they depend on two arguments, or inputs. They're also called connectives because they connect two values. An operator that depends on only one input is called unary or one-place, and an operator that doesn't depend on any inputs is called a constant or zero-place operator.
There are five other two-place operators in bivalent logic that are interesting to us, plus a single one-place operator. Here are their truth tables:
Negation changes 1s to 0s and vice versa. Surprisingly, its homologue in binary arithmetic is not negation, but complementation: 1-A, which is also its homologue in set theory. We'll be talking much more about negation below.
Subtraction is homologous with the same operations in arithmetic and set theory. A-B is synonymous with A∧¬B, just as it is with A×-B and A∩-B. (Note that in ordinary arithmetic, A-B means A+-B.)
Alternation is also called exclusive-or or XOR. It is the equivalent of the usual English meaning of the word or, which excludes both. If you say "Would you like red wine or white?", you are usually offering a choice, not both. In bivalent logic, this is synonymous with A∨B - A∧B.
Implication, also called the conditional, is the operation most fraught with profound meaning, which unfortunately we can't explore here. Its homologues are arithmetic less-than-or-equal-to ≤ and set inclusion ⊆. In bivalent logic, this is synonymous with ¬A∨B: either A is false or B is true. I didn't bother showing the reversed version <=/≥/⊇, which just points in the other direction.
Equality, properly called equivalence, coimplication, or the biconditional, is a synonym for implication in both directions: A<=>B means A<=B and A=>B. The homologues are equality = in both arithmetic and set theory.
Inequality is the negation of equality, homologous with inequality ≠. Note that its truth table is identical with that of alternation, but I have shown them both since they differ in the trivalent case.
There's one more, degenerate, unary operator, Assertion, which is the Identity operator: the assertion of a true proposition is true, and the assertion of a false proposition is false. I mention it to be complete, and also because it's interesting to know that asserting a proposition is equivalent to asserting that it's true.
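To make the homologies concrete, here is a short Python sketch (the names and layout are my own) that defines these connectives arithmetically on the values 1 for true and 0 for false, and prints their combined truth table:

# Bivalent connectives over 1 (true) and 0 (false), via the arithmetic homologies.
def not_(a):        return 1 - a                    # complementation
def and_(a, b):     return min(a, b)                # conjunction
def or_(a, b):      return max(a, b)                # inclusive disjunction
def minus(a, b):    return and_(a, not_(b))         # subtraction: A and not B
def xor(a, b):      return or_(a, b) - and_(a, b)   # alternation (exclusive or)
def implies(a, b):  return int(a <= b)              # conditional, homologue of <=
def iff(a, b):      return int(a == b)              # biconditional, homologue of =
def neq(a, b):      return int(a != b)              # inequality (same table as xor)

print("A B | and or A-B xor => <=> /=")
for a in (1, 0):
    for b in (1, 0):
        row = (and_(a, b), or_(a, b), minus(a, b), xor(a, b),
               implies(a, b), iff(a, b), neq(a, b))
        print(a, b, "|", " ".join(str(v) for v in row))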
These truth tables can also be expressed in columnar form, where each entry has its own row, but we don't need to do that here. They can also be expressed as rules of inference, along the lines of "If A is true and B is true, then A∧B is true". Rules of inference might also work backwards, like "If A∧B is true, then A is true (and B is true)". Or they could do both, like "If A is true and A→B is true, then B is true". This last one is called modus ponens, and is the fundamental rule of inference.
Using either truth tables or rules, we could construct proofs of propositions, where each step derives logically from previous steps. Often, these proofs start with certain assumptions and try to deduce the consequences, but sometimes there are no assumptions, and so the proposition is universally true.
One universal truth in classical logic is ¬(A ∧ ¬A), which is called the law of non-contradiction. It states that a proposition can't be true and false at the same time: that would be a contradiction.
Another universal truth is A ∨ ¬A, called the law of the excluded middle or the law of bivalence, which asserts that every proposition is either true or false - there's no other choice. It could be expressed as A | ¬A, since it's never true that both A and ¬A are true at the same time. In fact, if you assume B and are able to derive a contradiction such as A∧¬A , there must be something wrong with B - this type of reasoning is called reductio ad absurdum.
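Both laws are easy to confirm mechanically; a throwaway Python check (mine, not the page's):

# Verify non-contradiction and excluded middle over both truth values.
not_ = lambda a: 1 - a
and_ = min
or_  = max
assert all(not_(and_(a, not_(a))) == 1 for a in (1, 0))   # not(A and not A)
assert all(or_(a, not_(a)) == 1 for a in (1, 0))          # A or not A
print("both laws hold for every bivalent assignment")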
Oddly enough, this type of thing isn't what logicians do! Instead, they abstract a level or two, and make assertions about whole logical systems at once. The bivalent logic presented in this section is one such system, and in fact is used as a basis for many systems, such as predicate logic, which introduces quantifiers like for all ∀ and there exists ∃, and modal logic, which introduces modal operators like ◻ Necessary and ⋄ Possible.
In the 1930s, a logician named Kurt Gödel proved that any classical logical system powerful enough to express ordinary arithmetic would either be inconsistent (meaning it could prove contradictions) or incomplete (meaning it would leave some propositions unprovable). This result, which is called the Incompleteness Theorem, was quite a blow to philosophy, as it seems to state that some things must always remain unknowable.
Among the alternatives that have been explored are bivalent logics that reject the law of bivalence. At least one of these systems (Fitch) can be proven to be both consistent and complete, and it's powerful enough to serve as the basis for arithmetic. It's not that there's any alternative to a statement being either true or false - the system is still bivalent - but that bivalence isn't a law, or axiom, and thus reductio ad absurdum doesn't work.
That's too bad, because if you had a system where reductio ad absurdum worked, and you showed (using the Incompleteness Theorem) that the law of bivalence led to inconsistency, then you would have proven that the universe isn't bivalent!
The logic above can be extended to cover cases when we don't know whether a proposition is true or not, for example because it refers to the future. This is called modal logic, and the two traditional modal operators are Necessity and Possibility, represented by ◻ and ⋄, respectively. We also use the word Impossible as a shorthand for "not Possible". By definition, if a proposition is Necessary, then it must also be Possible.
For example, ◻Barça will win the Champions League means it is necessary that Barça win the Champions League, or Barça will necessarily win the Champions League, or simply Barça must win the Champions League. ⋄ Barça will win the Champions League means it is possible that Barça will win the Champions League, or Barça will possibly win the Champions League, or simply Barça might win the Champions League.
There is a set of relationships between these two operators, mediated by negation: ◻p is equivalent to ¬⋄¬p, ⋄p to ¬◻¬p, ◻¬p to ¬⋄p, and ⋄¬p to ¬◻p.
For instance, the first one says that if p must be true, then -p can't be true (it's not possible that -p be true).
There is a strong homology here with quantification, and in fact modal logic can be seen as quantification over a set of possible future worlds, or possible unknown facts:
So Necessary, Possible and Impossible correspond to English All/Every/Each, Some/A and No/None/Not Any, and ◻ ⋄ correspond to ∀ ∃.
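A tiny Python sketch of that reading, with ◻ as "in all possible worlds" and ⋄ as "in at least one" (the example worlds are invented purely for illustration):

# Box = true in all possible worlds; diamond = true in some possible world.
worlds = [
    {"barca_win": True,  "rain": False},
    {"barca_win": True,  "rain": True},
    {"barca_win": False, "rain": True},
]

def necessary(prop):            # box
    return all(w[prop] for w in worlds)

def possible(prop):             # diamond
    return any(w[prop] for w in worlds)

print("necessarily barca_win:", necessary("barca_win"))   # False
print("possibly    barca_win:", possible("barca_win"))    # True
print("necessarily rain:",      necessary("rain"))        # False
print("possibly    rain:",      possible("rain"))         # True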
I'm going to take the bivalent case one step further, so you'll recognize it when you see it in the trivalent case.
The proposition p, whose truth value is unknown to us, can be described as Necessary, Possible or Impossible. If it's Necessary, it's also Possible, so I'm going to introduce a new term, Potential, to mean "Possible but not Necessary". Then a proposition must be Necessary, Potential or Impossible, but only one of the three. Likewise, the negation of a proposition could be Necessary, Potential or Impossible, but only one of the three.
On the face of it, that gives us nine combinations. But given the laws of bivalence and non-contradiction, it turns out that there are only three viable combinations: p Necessary with ¬p Impossible, p Potential with ¬p Potential, and p Impossible with ¬p Necessary.
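The same count can be derived mechanically; a short Python sketch (my own reconstruction of the reasoning) classifies p and ¬p over every way of assigning p a truth value across a handful of possible worlds:

# Classify p and not-p as Necessary, Potential or Impossible over possible worlds,
# and collect which pairings can actually occur.
from itertools import product

def status(values):
    if all(values):
        return "Necessary"
    if not any(values):
        return "Impossible"
    return "Potential"          # possible but not necessary

attainable = set()
for n_worlds in (1, 2, 3):
    for assignment in product((True, False), repeat=n_worlds):
        attainable.add((status(assignment), status([not v for v in assignment])))

for p_status, notp_status in sorted(attainable):
    print("p is", p_status, "and not-p is", notp_status)
print(len(attainable), "viable combinations out of 9")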
I'm going to call these three combinations Modalities.
In Shwa logic, there are three truth values: true, false, and wrong. True means the same in Shwa as it does in classical logic, but false means something different. In classical logic, false means not true, but in Shwa A is false means the negation of A is true.
That's a subtle difference, but consider the proposition "The King of France is Japanese". That statement is clearly not true, so in classical logic it's false. But its negation, "The King of France is not Japanese", is also not true, since there is no King of France (not since 1789, although France has had a few Emperors since then). So in Shwa, both the original statement and its negation are wrong. That's what wrong means: that neither the statement nor its negation are true.
[To be fair, some classical logicians would say that "The King of France is Japanese" is not a proposition, since it has no referent. Others would say that the statement is false, and that its negation is "It's not true that the King of France is Japanese", which is true. Yet others would say that it means "There is a King of France, and he's Japanese", which is false and has the negation "There is no King of France or he's not Japanese", which is true. Still others would say that it means that anybody who is the King of France is also Japanese, in other words that being the King of France implies being Japanese, and since the premise of the conditional is always false - there is no King of France - then the proposition as it reads is true, and so is its apparent negation!]
If a proposition is Wrong, that doesn't mean it's meaningless, like "Colorless green ideas sleep furiously". The sentence "The King of France is Japanese" isn't meaningless; there just happens not to be a King of France right now. This is a case of missing referent, but not all Wrong propositions lack referents. For instance, "Shakespeare left Alaska by boat" isn't missing any referents: both Shakespeare and Alaska existed, and so do boats. But Shakespeare never went to Alaska, so he could never have left it by boat or any other way. You could say the proposition is false, but it's a funny kind of false, since its negation, "Shakespeare didn't leave Alaska by boat", is also false. That's called presupposition failure.
But the best examples of wrong statements are in the middle between true and false. Imagine that it's not really raining, but it's drizzling, so it seems wrong to say "It's not raining". Or the proposition that zero is a natural number, or that i ≥ -i (where i represents √-1): they're neither true nor untrue. Finally, you can use Wrong to respond to a query where neither yes nor no seems to be truthful, for instance if I ask you whether the stock market went up after the Great Crash of 1929. Well, yes it did, but first it fell, and it remained below its previous levels for many years afterwards. It doesn't really matter how a Wrong statement is untrue, as long as its negation is also untrue.
But a proposition isn't Wrong just because you don't know whether it's true or not, for instance because it's in the future. Both future events and simple ignorance are examples of modal statements, which we'll discuss below.
Bivalent logic is deeply embedded in English, which makes it difficult to express trivalent statements. To compensate, I'll use the English words false, negation, and not only to indicate falseness, and the words wrong, objection and neither to indicate wrongness, as in "It's neither raining (nor not raining)".
By the way, the three truth values of Shwa ternary logic - True, False and Wrong - are homologous with the three digits of ternary numbers 1 − 0 , and also with the three signs (plus, minus, and zero) of real arithmetic. Because of that, from now on I'll put Wrong before False, so the normal order will be True - Wrong - False, with Wrong in the middle.
Now that you know what Wrong means, and how it's different from False, let's consider how trivalent logic works.
The most important operators are ¬ and ~: ¬p means "the negation of p", and ~p means "the objection of p". Here are their truth tables:
I added a third unary operator, Assertion, to the end of the chart. It's not very important, except to note that saying something is equivalent to saying it's true. For example, "Roses are red" is equivalent to "It's true that roses are red".
Let's also discuss some two-place connectives. The first two, disjunction and conjunction, are straightforward extensions of the bivalent forms.
These two connectives are homologous with ternary maximum and minimum, respectively.
The implication connective is homologous with ≤, and coimplication with =. As in the bivalent case, coimplication is the homologue of equality. In other words, if A<=B and A=>B, then A<=>B.
As you were warned, trivalent inequality is not equivalent to alternation.
The trivalent alternation operator I show here has an advantage over the bivalent. Bivalent alternation cannot be chained as can conjunction and disjunction: A | B | C will be true if they're all true, not just one (no matter how you associate it). But trivalent alternation can be chained: A | B | C will be true if and only if just one of the three is true, and it will only be false if all of them are false.
There are two other connectives in trivalent logic that have no bivalent homologues, although they do in ternary arithmetic: addition (ignoring carries) and multiplication.
As in the bivalent logic described above, Shwa also has a law of trivalence, which states that a proposition must be either true or false or wrong, and a law of non-contradiction which states that it can be only one of the three. And we can derive the Shwa equivalents of DeMorgan's Laws: for example, ¬(A∧B) is equivalent to ¬A∨¬B, and ¬(A∨B) to ¬A∧¬B.
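A small Python check of those laws, encoding true, wrong and false as 1, 0 and -1, with conjunction and disjunction as ternary minimum and maximum; taking negation to be arithmetic negation (swap 1 and -1, leave 0 alone) is my inference from the homology above rather than something stated explicitly here:

# Trivalent truth values: 1 = true, 0 = wrong, -1 = false.
VALUES = (1, 0, -1)

def neg(a):      return -a          # assumed: swaps true and false, fixes wrong
def conj(a, b):  return min(a, b)   # homologue of ternary minimum
def disj(a, b):  return max(a, b)   # homologue of ternary maximum

for a in VALUES:
    for b in VALUES:
        assert neg(conj(a, b)) == disj(neg(a), neg(b))   # not(A and B) = not A or not B
        assert neg(disj(a, b)) == conj(neg(a), neg(b))   # not(A or B)  = not A and not B
print("DeMorgan's laws hold for all nine value pairs")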
As I mentioned above, wrong has nothing to do with the question of whether you know a truth value. Propositions are wrong because they are neither true nor false, not because you don't know whether they're true or false. Instead, there is a whole set of modalities to specify precisely how much you know about a proposition.
In the section above on bivalent modal logic, I introduced two modal operators, ◻ and ⋄. We use the same two in trivalent logic, with the same meanings. However, the relationships linking them via negation are weaker: ◻p still rules out ⋄¬p, for example, but ¬⋄p no longer guarantees ◻¬p.
In particular, as the last line shows, if you can eliminate one of the truth values, you can't assume it's the other, as you can in the bivalent case.
In fact, there are a total of seven possible modalities.
For example, consider the sentence "Santa Claus likes milk and cookies". Well, if he exists, it's true, but we don't know whether he really exists (I've seen him many times, but I'm still skeptical). If he doesn't exist, the sentence is wrong. Since we don't know which it is, the sentence is Certainly Not False. But if it turns out that he does exist but he doesn't like milk and cookies, then the sentence was false, and it was also false to say that it was Certainly Not False, but it wasn't wrong to say so!
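One natural way to make the count of seven concrete - consistent with the "Certainly Not False" example, though the original list of names is not reproduced here - is to read each modality as the set of truth values a proposition might still turn out to have. A Python sketch along those lines (the descriptive labels are mine, not Shwa's):

# Seven modalities = the seven non-empty subsets of {true, wrong, false}
# that a proposition might still turn out to fall in.  Labels are descriptive only.
from itertools import combinations

TRUTH_VALUES = ("true", "wrong", "false")
modalities = [frozenset(c) for size in (1, 2, 3)
              for c in combinations(TRUTH_VALUES, size)]

for m in modalities:
    label = " or ".join(v for v in TRUTH_VALUES if v in m)
    is_necessary = (m == frozenset(["true"]))
    is_possible = "true" in m
    certainly_not_false = "false" not in m
    print(f"might be {label:22s} necessary={is_necessary}  "
          f"possible={is_possible}  certainly-not-false={certainly_not_false}")
print(len(modalities), "modalities in total")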
You may be wondering what the benefit is of all this complication. First of all, it brings sentences like "I did not have sex with that woman" into the purview of logic, as opposed to dismissing them as aberrant.
More interestingly, trivalent logic can actually draw sure conclusions from unsure premises. In bivalent modal logic, there is no middle ground between knowing nothing about the truth of a proposition and knowing everything about it. But in trivalent modal logic, we can know something about the truth of a proposition.
| http://www.shwa.org/logic.htm | 13
31 | Using a figure published in 1960 of 14,300,000 tons per year as the meteoritic dust influx rate to the earth, creationists have argued that the thin dust layer on the moon’s surface indicates that the moon, and therefore the earth and solar system, are young. Furthermore, it is also often claimed that before the moon landings there was considerable fear that astronauts would sink into a very thick dust layer, but subsequently scientists have remained silent as to why the anticipated dust wasn’t there. An attempt is made here to thoroughly examine these arguments, and the counter arguments made by detractors, in the light of a sizable cross-section of the available literature on the subject.
Of the techniques that have been used to measure the meteoritic dust influx rate, chemical analyses (of deep sea sediments and dust in polar ice), and satellite-borne detector measurements appear to be the most reliable. However, upon close examination the dust particles range in size from fractions of a micron in diameter and fractions of a microgram in mass up to millimetres and grams, whence they become part of the size and mass range of meteorites. Thus the different measurement techniques cover different size and mass ranges of particles, so that to obtain the most reliable estimate requires an integration of results from different techniques over the full range of particle masses and sizes. When this is done, most current estimates of the meteoritic dust influx rate to the earth fall in the range of 10,000-20,000 tons per year, although some suggest this rate could still be as much as 100,000 tons per year.
Apart from the same satellite measurements, with a focusing factor of two applied so as to take into account differences in size and gravity between the earth and moon, two main techniques for estimating the lunar meteoritic dust influx have been trace element analyses of lunar soils, and the measuring and counting of microcraters produced by impacting micrometeorites on rock surfaces exposed on the lunar surface. Both these techniques rely on uniformitarian assumptions and dating techniques. Furthermore, there are serious discrepancies between the microcrater data and the satellite data that remain unexplained, and that require the meteoritic dust influx rate to be higher today than in the past. But the crater-saturated lunar highlands are evidence of a higher meteorite and meteoritic dust influx in the past. Nevertheless the estimates of the current meteoritic dust influx rate to the moon’s surface group around a figure of about 10,000 tons per year.
Prior to direct investigations, there was much debate amongst scientists about the thickness of dust on the moon. Some speculated that there would be very thick dust into which astronauts and their spacecraft might "disappear", while the majority of scientists believed that there was minimal dust cover. Then NASA sent up rockets and satellites and used earth-bound radar to make measurements of the meteoritic dust influx, results suggesting there was only sufficient dust for a thin layer on the moon. Beginning in mid-1966 the Americans successively soft-landed five Surveyor spacecraft on the lunar surface, and so three years before the Apollo astronauts set foot on the moon NASA knew that they would only find a thin dust layer on the lunar surface into which neither the astronauts nor their spacecraft would "disappear". This was confirmed by the Apollo astronauts, who only found up to a few inches of loose dust.
The Apollo investigations revealed a regolith at least several metres thick beneath the loose dust on the lunar surface. This regolith consists of lunar rock debris produced by impacting meteorites mixed with dust, some of which is of meteoritic origin. Apart from impacting meteorites and micrometeorites it is likely that there are no other lunar surface processes capable of both producing more dust and transporting it. It thus appears that the amount of meteoritic dust and meteorite debris in the lunar regolith and surface dust layer, even taking into account the postulated early intense meteorite and meteoritic dust bombardment, does not contradict the evolutionists’ multi-billion year timescale (while not proving it). Unfortunately, attempted counter-responses by creationists have so far failed because of spurious arguments or faulty calculations. Thus, until new evidence is forthcoming, creationists should not continue to use the dust on the moon as evidence against an old age for the moon and the solar system.
One of the evidences for a young earth that creationists have been using now for more than two decades is the argument about the influx of meteoritic material from space and the so-called “dust on the moon” problem. The argument goes as follows:
“It is known that there is essentially a constant rate of cosmic dust particles entering the earth’s atmosphere from space and then gradually settling to the earth’s surface. The best measurements of this influx have been made by Hans Pettersson, who obtained the figure of 14 million tons per year.1 This amounts to 14 x 10^19 pounds in 5 billion years. If we assume the density of compacted dust is, say, 140 pounds per cubic foot, this corresponds to a volume of 10^18 cubic feet. Since the earth has a surface area of approximately 5.5 x 10^15 square feet, this seems to mean that there should have accumulated during the 5-billion-year age of the earth, a layer of meteoritic dust approximately 182 feet thick all over the world!
There is not the slightest sign of such a dust layer anywhere of course. On the moon’s surface it should be at least as thick, but the astronauts found no sign of it (before the moon landings, there was considerable fear that the men would sink into the dust when they arrived on the moon, but no comment has apparently ever been made by the authorities as to why it wasn’t there as anticipated).
Even if the earth is only 5,000,000 years old, a dust layer of over 2 inches should have accumulated.
Lest anyone say that erosional and mixing processes account for the absence of the 182-foot meteoritic dust layer, it should be noted that the composition of such material is quite distinctive, especially in its content of nickel and iron. Nickel, for example, is a very rare element in the earth’s crust and especially in the ocean. Pettersson estimated the average nickel content of meteoritic dust to be 2.5 per cent, approximately 300 times as great as in the earth’s crust. Thus, if all the meteoritic dust layer had been dispersed by uniform mixing through the earth’s crust, the thickness of crust involved (assuming no original nickel in the crust at all) would be 182 x 300 feet, or about 10 miles!
Since the earth’s crust (down to the mantle) averages only about 12 miles thick, this tells us that practically all the nickel in the crust of the earth would have been derived from meteoritic dust influx in the supposed (5 x 10^9 year) age of the earth!”2
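For readers who want to check the arithmetic, the following short Python sketch simply reproduces the figures used in the quotation (14 million tons per year, a 5-billion-year age, 140 lb per cubic foot, 5.5 x 10^15 square feet, and a 300-fold nickel enrichment); it stands or falls with those assumed inputs.

```python
# A quick check of the arithmetic in the quoted argument, using only the
# figures given there (the inputs themselves are the quoted author's assumptions).

influx_tons_per_year = 14e6          # Pettersson's figure, tons per year
years = 5e9                          # assumed age of the earth
lbs_per_ton = 2000.0                 # short tons are implied by the quote
dust_density = 140.0                 # lb per cubic foot (assumed in the quote)
earth_area_ft2 = 5.5e15              # square feet (figure used in the quote)

total_mass_lb = influx_tons_per_year * years * lbs_per_ton   # ~1.4e20 lb
total_volume_ft3 = total_mass_lb / dust_density              # ~1e18 cubic feet
layer_thickness_ft = total_volume_ft3 / earth_area_ft2       # ~182 ft

# Nickel-mixing version of the argument: meteoritic dust is taken to be
# ~300 times richer in nickel than crustal rock, so dispersing the layer's
# nickel through the crust would require ~300 times the layer thickness.
enrichment_factor = 300.0
mixing_depth_miles = layer_thickness_ft * enrichment_factor / 5280.0

print(f"dust layer = {layer_thickness_ft:.0f} ft")                      # about 182 ft
print(f"equivalent crustal mixing depth = {mixing_depth_miles:.0f} miles")  # about 10 miles
```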
This is indeed a powerful argument, so powerful that it has upset the evolutionist camp. Consequently, a number of concerted efforts have been recently made to refute this evidence.3-9 After all, in order to be a credible theory, evolution needs plenty of time (that is, billions of years) to occur because the postulated process of transforming one species into another certainly can’t be observed in the lifetime of a single observer. So no evolutionist could ever be happy with evidence that the earth and the solar system are less than 10,000 years old.
But do evolutionists have any valid criticisms of this argument? And if so, can they be answered?
Criticisms of this argument made by evolutionists fall into three categories:-
The man whose work is at the centre of this controversy is Hans Pettersson of the Swedish Oceanographic Institute. In 1957, Pettersson (who then held the Chair of Geophysics at the University of Hawaii) set up dust-collecting units at 11,000 feet near the summit of Mauna Loa on the island of Hawaii and at 10,000 feet on Mt Haleakala on the island of Maui. He chose these mountains because
“occasionally winds stir up lava dust from the slopes of these extinct volcanoes, but normally the air is of an almost ideal transparency, remarkably free of contamination by terrestrial dust.”10
With his dust-collecting units, Pettersson filtered measured quantities of air and analysed the particles he found. Despite his description of the lack of contamination in the air at his chosen sampling sites, Pettersson was very aware and concerned that terrestrial (atmospheric) dust would still swamp the meteoritic (space) dust he collected, for he says: “It was nonetheless apparent that the dust collected in the filters would come preponderantly from terrestrial sources.”11 Consequently he adopted the procedure of having his dust samples analysed for nickel and cobalt, since he reasoned that both nickel and cobalt were rare elements in terrestrial dust compared with the high nickel and cobalt contents of meteorites, and therefore by implication of meteoritic dust also.
Based on the nickel analysis of his collected dust, Pettersson finally estimated that about 14 million tons of dust land on the earth annually. To quote Pettersson again:
“Most of the samples contained small but measurable quantities of nickel along with the large amount of iron. The average for 30 filters was 14.3 micrograms of nickel from each 1,000 cubic metres of air. This would mean that each 1,000 cubic metres of air contains .6 milligram of meteoritic dust. If meteoritic dust descends at the same rate as the dust created by the explosion of the Indonesian volcano Krakatoa in 1883, then my data indicate that the amount of meteoritic dust landing on the earth every year is 14 million tons. From the observed frequency of meteors and from other data Watson (F.G. Watson of Harvard University) calculates the total weight of meteoritic matter reaching the earth to be between 365,000 and 3,650,000 tons a year. His higher estimate is thus about a fourth of my estimate, based upon the Hawaiian studies. To be on the safe side, especially in view of the uncertainty as to how long it takes meteoritic dust to descend, I am inclined to find five million tons per year plausible.”12
Now several evolutionists have latched onto Pettersson’s conservatism with his suggestion that a figure of 5 million tons per year is more plausible and have thus promulgated the idea that Pettersson’s estimate was “high”,13 “very speculative”,14 and “tentative”.15 One of these critics has even gone so far as to suggest that “Pettersson’s dust-collections were so swamped with atmospheric dust that his estimates were completely wrong”16 (emphasis mine). Others have said that “Pettersson’s samples were apparently contaminated with far more terrestrial dust than he had accounted for.”17 So what does Pettersson say about his 5 million tons per year figure?:
“The five-million-ton estimate also squares nicely with the nickel content of deep-ocean sediments. In 1950 Henri Rotschi of Paris and I analysed 77 samples of cores raised from the Pacific during the Swedish expedition. They held an average of .044 per cent nickel. The highest nickel content in any sample was .07 per cent. This, compared to the average .008-per-cent nickel content of continental igneous rocks, clearly indicates a substantial contribution of nickel from meteoritic dust and spherules.
If five million tons of meteoritic dust fall to the earth each year, of which 2.5 per cent is nickel, the amount of nickel added to each square centimetre of ocean bottom would be .000000025 gram per year, or .017 per cent of the total red-clay sediment deposited in a year. This is well within the .044-per-cent nickel content of the deep-sea sediments and makes the five-million-ton figure seem conservative.”18
In other words, as a reputable scientist who presented his assumptions and warned of the unknowns, Pettersson was happy with his results.
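Pettersson’s deep-sea nickel figure can likewise be checked from the numbers he quotes. In the sketch below the 5 million tons per year and 2.5% nickel come from the quotation, while the earth’s surface area of about 5.1 x 10^18 square centimetres is a standard value supplied here; the red-clay sedimentation rate needed for his .017 per cent figure is not given in the quotation, so only the grams-per-square-centimetre figure is reproduced.

```python
# Rough check of Pettersson's deep-sea nickel figure, using the numbers in
# the quote (5 million tons/yr of dust, 2.5% nickel) and a standard value
# for the earth's surface area (an assumption supplied here, not quoted).

dust_influx_g_per_yr = 5e6 * 1e6      # 5 million (metric) tons/yr expressed in grams
ni_fraction = 0.025                   # 2.5% nickel assumed for meteoritic dust
earth_area_cm2 = 5.1e18               # ~5.1e18 square centimetres

ni_per_cm2_per_yr = dust_influx_g_per_yr * ni_fraction / earth_area_cm2
print(f"{ni_per_cm2_per_yr:.1e} g Ni per cm^2 per year")   # ~2.5e-8 g, i.e. the quoted .000000025 gram
```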
But what about other scientists who were aware of Pettersson and his work at the time he did it? Dr Isaac Asimov’s comments,19 for instance, confirm that other scientists of the time were also happy with Pettersson’s results. Of Pettersson’s experiment Asimov wrote:-
“At a 2-mile height in the middle of the Pacific Ocean one can expect the air to be pretty free of terrestrial dust. Furthermore, Pettersson paid particular attention to the cobalt content of the dust, since meteor dust is high in cobalt whereas earthly dust is low in it.”20
Indeed, Asimov was so confident in Pettersson’s work that he used Pettersson’s figure of 14,300,000 tons of meteoritic dust falling to the earth’s surface each year to do his own calculations. Thus Asimov suggested:
“Of course, this goes on year after year, and the earth has been in existence as a solid body for a good long time: for perhaps as long as 5 billion years. If, through all that time, meteor dust has settled to the earth at the same rate as it does today, then by now, if it were undisturbed, it would form a layer 54 feet thick over all of the earth.”21
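Asimov’s 54-foot figure is noticeably thinner than the 182 feet quoted earlier, even though both start from essentially the same influx rate. He does not state the dust density he assumed, but a back-calculation (sketched below, and offered only as an inference, not as anything from Asimov’s article) suggests he treated the accumulated material as compacted nickel-iron of roughly 8 g/cm^3 rather than as loose dust at 140 lb per cubic foot.

```python
# Asimov does not state the dust density behind his 54-foot figure; this
# back-calculation simply asks what density his numbers imply. Treat the
# result as an inference, not as a figure from his article.

influx_tons_per_year = 14.3e6        # Asimov's adopted influx (short tons/yr)
years = 5e9                          # assumed age of the earth
earth_area_ft2 = 5.5e15              # earth's surface area in square feet
layer_ft = 54.0                      # Asimov's quoted layer thickness

total_mass_lb = influx_tons_per_year * years * 2000.0
implied_density_lb_ft3 = total_mass_lb / (earth_area_ft2 * layer_ft)
print(f"implied density = {implied_density_lb_ft3:.0f} lb/ft^3")  # ~480 lb/ft^3, about 7.7 g/cm^3
```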
This sounds like very convincing confirmation of the creationist case, but of course, the year that Asimov wrote those words was 1959, and a lot of other meteoritic dust influx measurements have since been made. The critics are also quick to point this out -
“...we now have access to dust collection techniques using aircraft, high-altitude balloons and spacecraft. These enable researchers to avoid the problems of atmospheric dust which plagued Pettersson.”22
However, the problem is to decide which technique for estimating the meteoritic dust influx gives the “true” figure. Even Phillips admits this when he says:
“Techniques vary from the use of high altitude rockets with collecting grids to deep-sea core samples. Accretion rates obtained by different methods vary from 10^2 to 10^9 tons/year. Results from identical methods also differ because of the range of sizes of the measured particles.”23
One is tempted to ask why it is that Pettersson’s 5-14 million tons per year figure is slammed as being “tentative”, “very speculative” and “completely wrong”, when one of the same critics openly admits the results from the different, more modern methods vary from 100 to 1 billion tons per year, and that even results from identical methods differ? Furthermore, it should be noted that Phillips wrote this in 1978, some two decades and many moon landings after Pettersson’s work!
(a) Small size in space (<0.1 cm): penetration satellites; Al-26 (sea sediment); 36,500-182,500 tons/yr
(b) Cometary meteors (10^-4-10^2 g) in space: 73,000 tons/yr
(c) “Any” size in space: Barbados meshes (total winter, total annual); dust counter; Ni (Antarctic ice); Ni (sea sediment); Os (sea sediment); Cl-36 (sea sediment); sea-sediment spherules; reported values include <110 tons/yr and <91,500 tons/yr
(d) Large size in space: 36,500 tons/yr
Table 1. Measurements and estimates of the meteoritic dust influx to the earth. (The data are adapted from Parkin and Tilles,24 who have fully referenced all their data sources.) (All figures have been rounded off.)
In 1968, Parkin and Tilles summarised all the measurement data then available on the question of influx of meteoritic (interplanetary) material (dust) and tabulated it.24 Their table is reproduced here as Table 1, but whereas they quoted influx rates in tons per day, their figures have been converted to tons per year for ease of comparison with Pettersson’s figures.
Even a quick glance at Table 1 confirms that most of these experimentally-derived measurements are well below Pettersson’s 5-14 million tons per year figure, but Phillips’ statement (quoted above) that results vary widely, even from identical methods, is amply verified by noting the range of results listed under some of the techniques. Indeed, it also depends on the experimenter doing the measurements (or estimates, in some cases). For instance, one of the astronomical methods used to estimate the influx rate depends on calculation of the density of the very fine dust in space that causes the zodiacal light. In Table 1, two estimates by different investigators are listed because they differ by 2-3 orders of magnitude.
On the other hand, Parkin and Tilles’ review of influx measurements, while comprehensive, was not exhaustive, there being other estimates that they did not report. For example, Pettersson25 also mentions an influx estimate based on meteorite data of 365,000-3,650,000 tons/year made by F. G. Watson of Harvard University (quoted earlier), an estimate which is also 2-3 orders of magnitude different from the estimate listed by Parkin and Tilles and reproduced in Table 1. So with such a large array of competing data that give such conflicting orders-of-magnitude different estimates, how do we decide which is the best estimate that somehow might approach the “true” value?
Another significant research paper was also published in 1968. Scientists Barker and Anders were reporting on their measurements of iridium and osmium concentration in dated deep-sea sediments (red clays) of the central Pacific Ocean Basin, which they believed set limits to the influx rate of cosmic matter, including dust.26 Like Pettersson before them, Barker and Anders relied upon the observation that whereas iridium and osmium are very rare elements in the earth’s crustal rocks, those same two elements are present in significant amounts in meteorites.
Table 2. Estimates of the accretion rate of cosmic matter by chemical methods, normalized to the composition of C1 carbonaceous chondrites (one class of meteorites) (after Barker and Anders,26 who have fully referenced all their data sources).
Their results are included in Table 2 (last four estimates), along with earlier reported estimates from other investigators using similar and other chemical methods. They concluded that their analyses, when compared with iridium concentrations in meteorites (C1 carbonaceous chondrites), corresponded to a meteoritic influx rate for the entire earth of between 30,000 and 90,000 tons per year. Furthermore, they maintained that a firm upper limit on the influx rate could be obtained by assuming that all the iridium and osmium in deep-sea sediments is of cosmic origin. The value thus obtained is between 50,000 and 150,000 tons per year. Notice, however, that these scientists were careful to allow for error margins by using a range of influx values rather than a definitive figure. Some recent authors though have quoted Barker and Anders’ result as 100,000 tons, instead of 100,000 ± 50,000 tons. This may not seem a critical distinction, until we realise that we are talking about a 50% error margin either way, and that’s quite a large error margin in anyone’s language regardless of the magnitude of the result being quoted.
Even though Barker and Anders’ results were published in 1968, most authors, even fifteen years later, still quote their influx figure of 100,000 ± 50,000 tons per year as the most reliable estimate that we have via chemical methods. However, Ganapathy’s research on the iridium content of the ice layers at the South Pole27 suggests that Barker and Anders’ figure underestimates the annual global meteoritic influx.
Ganapathy took ice samples from ice cores recovered by drilling through the ice layers at the US Amundsen-Scott base at the South Pole in 1974, and analysed them for iridium. The rate of ice accumulation at the South Pole over the last century or so is now particularly well established, because two very reliable precision time markers exist in the ice layers for the years 1884 (when debris from the August 26, 1883 Krakatoa volcanic eruption was deposited in the ice) and 1953 (when nuclear explosions began depositing fission products in the ice). With such an accurately known time reference framework to put his iridium results into, Ganapathy came up with a global meteoritic influx figure of 400,000 tons per year, four times higher than Barker and Anders’ estimate from mid-Pacific Ocean sediments.
In support of his estimate, Ganapathy also pointed out that Barker and Anders had suggested that their estimate could be stretched up to three times its value (that is, to 300,000 tons per year) by compounding several unfavorable assumptions. Furthermore, more recent measurements by Kyte and Wasson of iridium in deep-sea sediment samples obtained by drilling have yielded estimates of 330,000-340,000 tons per year.28 So Ganapathy’s influx estimate of 400,000 tons of meteoritic material per year seems to represent a fairly reliable figure, particularly because it is based on an accurately known time reference framework.
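The logic of Ganapathy’s ice-core method is straightforward, even though his raw numbers are not reproduced here. The sketch below is only an illustration: the iridium concentration and ice accumulation rate are hypothetical placeholders, and the figure of roughly 500 parts per billion iridium for C1 carbonaceous chondrites is an approximate literature value that is itself one of the method’s assumptions.

```python
# Sketch of the logic behind an ice-core influx estimate of Ganapathy's kind.
# The measured concentration and accumulation rate below are hypothetical
# placeholders (his actual values are not given in the text); the chondritic
# iridium abundance is an approximate, assumed value.

ir_in_ice_g_per_g = 5e-15        # hypothetical measured Ir concentration in the ice
ice_accum_g_cm2_yr = 8.0         # hypothetical ice accumulation rate (g per cm^2 per yr)
ir_fraction_chondrite = 500e-9   # ~500 ppb Ir assumed for cosmic material (C1 chondrites)
earth_area_cm2 = 5.1e18          # earth's surface area

ir_flux = ir_in_ice_g_per_g * ice_accum_g_cm2_yr      # g of Ir deposited per cm^2 per year
dust_flux = ir_flux / ir_fraction_chondrite           # g of cosmic dust per cm^2 per year
global_tons_per_yr = dust_flux * earth_area_cm2 / 1e6 # scale up to the whole earth, in tonnes
print(f"global influx = {global_tons_per_yr:,.0f} tons/yr")
```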
So much for chemical methods of determining the rate of annual meteoritic influx to the earth’s surface. But what about the data collected by high-flying aircraft and spacecraft, which some critics29,30 are adamant give the most reliable influx estimates because of the elimination of the likelihood of terrestrial dust contamination? Indeed, on the basis of the dust collected by the high-flying U-2 aircraft, Bridgstock dogmatically asserts that the influx figure is only 10,000 tonnes per year.31,32 To justify his claim Bridgstock refers to the reports by Bradley, Brownlee and Veblen,33 and Dixon, McDonnell and Carey34 who state a figure of 10,000 tons for the annual influx of interplanetary dust particles. To be sure, as Bridgstock says,35 Dixon, McDonnell and Carey do report that “...researchers estimate that some 10,000 tonnes of them fall to Earth every year.”36 However, such was Bridgstock’s haste to prove his point, even if it meant quoting out of context, that he evidently did not carefully read or fully comprehend, or else deliberately ignored, the whole of Dixon, McDonnell and Carey’s report, otherwise he would have noticed that the figure “some 10,000 tonnes of them fall to Earth every year” refers only to a special type of particle called Brownlee particles, not to all cosmic dust particles. To clarify this, let’s quote Dixon, McDonnell and Carey:
“Over the past 10 years, this technique has landed a haul of small fluffy cosmic dust grains known as ‘Brownlee particles’ after Don Brownlee, an American researcher who pioneered the routine collection of particles by aircraft, and has led in their classification. Their structure and composition indicate that the Brownlee particles are indeed extra-terrestrial in origin (see Box 2), and researchers estimate that some 10,000 tonnes of them fall to Earth every year. But Brownlee particles represent only part of the total range of cosmic dust particles”37 (emphasis mine).
And further, speaking of these “fluffy” Brownlee particles:
“The lightest and fluffiest dust grains, however, may enter the atmosphere on a trajectory which subjects them to little or no destructive effects, and they eventually drift to the ground. There these particles are mixed up with greater quantities of debris from the larger bodies that burn up as meteors, and it is very difficult to distinguish the two”38 (emphasis ours).
What Bridgstock has done, of course, is to say that the total quantity of cosmic dust that hits the earth each year according to Dixon, McDonnell and Carey is 10,000 tonnes, when these scientists quite clearly stated they were only referring to a part of the total cosmic dust influx, and a lesser part at that. A number of writers on this topic have unwittingly made similar mistakes.
But this brings us to a very crucial aspect of this whole issue, namely, that there is in fact a complete range of sizes of meteoritic material that reaches the earth, and moon for that matter, all the way from large meteorites metres in diameter that produce large craters upon impact, right down to the microscopic-sized “fluffy” dust known as Brownlee particles, as they are referred to above by Dixon, McDonnell and Carey. Furthermore, each of the various techniques used to detect this meteoritic material does not necessarily give the complete picture of all the sizes of particles that come to earth, so researchers need to be careful not to equate influx measurements made with a technique covering a particular particle size range with the total influx of meteoritic particles. This is of course why the more experienced researchers in this field are always careful to stipulate the particle size range over which their measurements were made.
Figure 1. The mass ranges of interplanetary (meteoritic) dust particles as detected by various techniques (adapted from Millman39). The particle penetration, impact and collection techniques make use of satellites and rockets. The techniques shown in italics are based on lunar surface measurements.
Millman39 discusses this question of the particle size ranges over which the various measurement techniques are operative. Figure 1 is an adaptation of Millman’s diagram. Notice that the chemical techniques, such as analyses for iridium in South Pole ice or Pacific Ocean deep-sea sediments, span nearly the full range of meteoritic particle sizes, leading to the conclusion that these chemical techniques are the most likely to give us an estimate closest to the “true” influx figure. However, Millman40 and Dohnanyi41 adopt a different approach to obtain an influx estimate. Recognising that most of the measurement techniques only measure the influx of particles of particular size ranges, they combine the results of all the techniques so as to get a total influx estimate that represents all the particle size ranges. Because of overlap between techniques, as is obvious from Figure 1, they plot the relation between the cumulative number of particles measured (or cumulative flux) and the mass of the particles being measured, as derived from the various measurement techniques. Such a plot can be seen in Figure 2. The curve in Figure 2 is the weighted mean flux curve obtained by comparing, adding together and taking the mean at any one mass range of all the results obtained by the various measurement techniques. A total influx estimate is then obtained by integrating mathematically the total mass under the weighted mean flux curve over a given mass range.
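The integration Millman and Dohnanyi perform can be illustrated with a minimal numerical sketch. The tabulated cumulative flux below is made up for the purpose of the illustration and is not their data; the point is only to show how a cumulative flux N(>m) is converted into a total annual mass influx.

```python
# Minimal sketch of a cumulative-flux integration of the kind Millman and
# Dohnanyi describe. The tabulated curve is a rough, made-up stand-in for
# the weighted mean flux curve, not their actual data.

import math

# particle mass (g) and cumulative flux N(>m) (particles per m^2 per second),
# hypothetical values spanning part of the 10^-12 g to 10^3 g range
masses = [1e-12, 1e-9, 1e-6, 1e-3, 1e0, 1e3]
cum_flux = [1e-4, 1e-6, 1e-8, 1e-11, 1e-14, 1e-17]

earth_area_m2 = 5.1e14
sec_per_yr = 3.156e7

mass_flux = 0.0   # grams per m^2 per second
for i in range(len(masses) - 1):
    n_in_bin = cum_flux[i] - cum_flux[i + 1]        # particles falling in this mass bin
    m_mid = math.sqrt(masses[i] * masses[i + 1])    # geometric mean mass of the bin
    mass_flux += n_in_bin * m_mid

tons_per_year = mass_flux * earth_area_m2 * sec_per_yr / 1e6
print(f"total influx = {tons_per_year:,.0f} tons/yr (for these made-up numbers)")
```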
Figure 2. The relation between the cumulative number of particles and the lower limit of mass to which they are counted, as derived from various types of recording - rockets, satellites, lunar rocks, lunar seismographs (adapted from Millman39). The crosses represent the Pegasus and Explorer penetration data.
By this means Millman42 estimated that in the mass range 10^-12 to 10^3 g only a mere 30 tons of meteoritic material reach the earth each day, equivalent to an influx of 10,950 tons per year. Not surprisingly, the same critic (Bridgstock) who erroneously latched onto the 10,000 tonnes per year figure of Dixon, McDonnell and Carey to defend his (Bridgstock’s) belief that the moon and the earth are billions of years old, also latched onto Millman’s 10,950 tons per year figure.43 But what Bridgstock has failed to grasp is that Dixon, McDonnell and Carey’s figure refers only to the so-called Brownlee particles in the mass range of 10^-12 to 10^-6 g, whereas Millman’s figure, as he stipulates himself, covers the mass range of 10^-12 to 10^3 g. The two figures cannot be compared as if they somehow supported each other, because they are in fact talking about different particle mass ranges.
Furthermore, the close correspondence between these two figures when they refer to different mass ranges, the 10,000 tonnes per year figure of Dixon, McDonnell and Carey representing only 40% of the mass range of Millman’s 10,950 tons per year figure, suggests something has to be wrong with the techniques used to derive these figures. Even from a glance at the curve in Figure 2, it is obvious that the total mass represented by the area under the curve in the mass range 10^-6 to 10^3 g can hardly be 950 or so tons per year (that is, the difference between Millman’s and Dixon, McDonnell and Carey’s figures and mass ranges), particularly if the total mass represented by the area under the curve in the mass range 10^-12 to 10^-6 g is supposed to be 10,000 tonnes per year (Dixon, McDonnell and Carey’s figure and mass range). And Millman even maintains that the evidence indicates that two-thirds of the total mass of the dust complex encountered by the earth is in the form of particles with masses between 10^-6.5 and 10^-3.5 g, or in the three orders of magnitude 10^-6, 10^-5 and 10^-4 g, respectively,44 outside the mass range for the so-called Brownlee particles. So if Dixon, McDonnell and Carey are closer to the truth with their 1985 figure of 10,000 tonnes per year of Brownlee particles (mass range 10^-12 to 10^-6 g), and if two-thirds of the total particle influx mass lies outside the Brownlee particle size range, then Millman’s 1975 figure of 10,950 tons per year must be drastically short of the “real” influx figure, which thus has to be at least 30,000 tons per year.
Millman admits that if some of the finer dust particles do not register by either penetrating or cratering satellite or aircraft collection panels, it could well be that we should allow for this by raising the flux estimate. Furthermore, he states that it should also be noted that the Prairie Network fireballs (McCrosky45), which are outside his (Millman’s) mathematical integration calculations because they are outside the mass range of his mean weighted influx curve, could add appreciably to his flux estimate.46 In other words, Millman is admitting that his influx estimate would be greatly increased if the mass range used in his calculations took into account both particles finer than 10^-12 g and particularly particles greater than 10^3 g.
Figure 3. Cumulative flux of meteoroids and related objects into the earth’s atmosphere having a mass of M(kg) (adapted from Dohnanyi41). His data sources used to derive this plot are listed in his bibliography.
Unlike Millman, Dohnanyi47 did take into account a much wider mass range and smaller cumulative fluxes, as can be seen in his cumulative flux plot in Figure 3, and so he did obtain a much higher total influx estimate of some 20,900 tons of dust per year coming to the earth. Once again, if McCrosky’s data on the Prairie Network fireballs were included by Dohnanyi, then his influx estimate would have been greater. Furthermore, Dohnanyi’s estimate is primarily based on supposedly more reliable direct measurements obtained using collection plates and panels on satellites, but Millman maintains that such satellite penetration methods may not be registering the finer dust particles because they neither penetrate nor crater the collection panels, and so any influx estimate based on such data could be underestimating the “true” figure. This is particularly significant since Millman also highlights the evidence that there is another concentration peak in the mass range 10^-13 to 10^-14 g at the lower end of the theoretical effectiveness of satellite penetration data collection (see Figure 1 again). Thus even Dohnanyi’s influx estimate is probably well below the “true” figure.
This leads us to a consideration of the representativeness both physically and statistically of each of the influx measurement dust collection techniques and the influx estimates derived from them. For instance, how representative is a sample of dust collected on the small plates mounted on a small satellite or U-2 aircraft compared with the enormous volume of space that the sample is meant to represent? We have already seen how Millman admits that some dust particles probably do not penetrate or crater the plates as they are expected to and so the final particle count is thereby reduced by an unknown amount. And how representative is a drill core or grab sample from the ocean floor? After all, aren’t we analysing a split from a 1-2 kilogram sample and suggesting this represents the tonnes of sediments draped over thousands of square kilometres of ocean floor to arrive at an influx estimate for the whole earth?! To be sure, careful repeat samplings and analyses over several areas of the ocean floor may have been done, but how representative both physically and statistically are the results and the derived influx estimate?
Of course, Pettersson’s estimate from dust collected atop Mauna Loa also suffers from the same question of representativeness. In many of their reports, the researchers involved have failed to discuss such questions. Admittedly there are so many potential unknowns that any statistical quantification is well-nigh impossible, but some discussion of sample representativeness should be attempted and should translate into some “guesstimate” of error margins in their final reported dust influx estimate. Some like Barker and Anders with their deep-sea sediments48 have indicated error margins as high as ±50%, but even then such error margins only refer to the within and between sample variations of element concentrations that they calculated from their data set, and not to any statistical “guesstimate” of the physical representativeness of the samples collected and analysed. Yet the latter is vital if we are trying to determine what the “true” figure might be.
But there is another consideration that can be even more important, namely, any assumptions that were used to derive the dust influx estimate from the raw measurements or analytical data. The most glaring example of this is with respect to the interpretation of deep-sea sediment analyses to derive an influx estimate. In common with all the chemical methods, it is assumed that all the nickel, iridium and osmium in the samples, over and above the average respective contents of appropriate crustal rocks, is present in the cosmic dust in the deep-sea sediment samples. Although this seems to be a reasonable assumption, there is no guarantee that it is completely correct or reliable. Furthermore, in order to calculate how much cosmic dust is represented by the extra nickel, iridium and osmium concentrations in the deep-sea sediment samples, it is assumed that the cosmic dust has nickel, iridium and osmium concentrations equivalent to the average respective concentrations in Type I carbonaceous chondrites (one of the major types of meteorites). But is that type of meteorite representative of all the cosmic matter arriving at the earth’s surface? Researchers like Barker and Anders assume so because everyone else does! To be sure there are good reasons for making that assumption, but it is by no means certain that Type I carbonaceous chondrites are representative of all the cosmic material arriving at the earth’s surface, since it has been almost impossible so far to exclusively collect such material for analysis. (Some has been collected by spacecraft and U-2 aircraft, but these samples still do not represent the total composition of cosmic material arriving at the earth’s surface since they only represent a specific particle mass range in a particular path in space or the upper atmosphere.)
However, the most significant assumption is yet to come. In order to calculate an influx estimate from the assumed cosmic component of the nickel, iridium and osmium concentrations in the deep-sea sediments it is necessary to determine what time span is represented by the deep-sea sediments analysed. In other words, what is the sedimentation rate in that part of the ocean floor sampled and how old therefore are our sediment samples? Based on the uniformitarian and evolutionary assumptions, isotopic dating and fossil contents are used to assign long time spans and old ages to the sediments. This is seen not only in Barker and Anders’ research, but in the work of Kyte and Wasson who calculated influx estimates from iridium measurements in so-called Pliocene and Eocene-Oligocene deep-sea sediments.49 Unfortunately for these researchers, their influx estimates depend absolutely on the validity of their dating and age assumptions. And this is extremely crucial, for if they obtained influx estimates of 100,000 tons per year and 330,000-340,000 tons per year respectively on the basis of uniformitarian and evolutionary assumptions (slow sedimentation and old ages), then what would these influx estimates become if rapid sedimentation has taken place over a radically shorter time span? On that basis, Pettersson’s figure of 5-14 million tons per year is not far-fetched!
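The point can be made explicit with a trivial calculation: for a fixed measured inventory of cosmic material in a sediment column, the derived influx rate is simply that inventory divided by the assumed deposition time, so it scales inversely with the assumed timescale. The inventory figure below is hypothetical and chosen only to illustrate the scaling, not taken from Barker and Anders’ or Kyte and Wasson’s data.

```python
# The deep-sea sediment estimates depend directly on the time span assumed
# for the analysed sediment. This sketch only illustrates the inverse
# scaling; the inventory figure is a hypothetical placeholder.

cosmic_inventory_tons = 100_000 * 1_000_000   # hypothetical total meteoritic component
                                              # deposited globally, sized so that a
                                              # 1-million-year time span gives 100,000 tons/yr

for assumed_years in (1_000_000, 100_000, 10_000, 1_000):
    influx = cosmic_inventory_tons / assumed_years
    print(f"assumed deposition time {assumed_years:>9,} yr -> {influx:>12,.0f} tons/yr")
```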
On the other hand, however, Ganapathy’s work on ice cores from the South Pole doesn’t suffer from any assumptions as to the age of the analysed ice samples because he was able to correlate his analytical results with two time-marker events of recent recorded history. Consequently his influx estimate of 400,000 tons per year has to be taken seriously. Furthermore, one of the advantages of the chemical methods of influx estimating, such as Ganapathy’s analyses of iridium in ice cores, is that the technique in theory, and probably in practice, spans the complete mass range of cosmic material (unlike the other techniques - see Figure 1 again) and so should give a better estimate. Of course, in practice this is difficult to verify, since statistically the likelihood of sampling a macroscopic cosmic particle in, for example, an ice core is virtually nonexistent. In other words, there is the question of representativeness again, since the ice core is taken to represent a much larger area of ice sheet, and it may well be that the cross sectional area intersected by the ice core has an anomalously high or low concentration of cosmic dust particles, or in fact an average concentration - who knows which?
Finally, an added problem not appreciated by many working in the field is that there is an apparent variation in the dust influx rate according to the latitude. Schmidt and Cohen reported50 that this apparent variation is most closely related to geomagnetic latitude, so that at the poles the resultant influx is higher than in equatorial regions. They suggested that electromagnetic interactions could cause certain charged particles to impinge preferentially at high latitudes. This may well explain the difference between Ganapathy’s influx estimate of 400,000 tons per year from the study of the dust in Antarctic ice and, for example, Kyte and Wasson’s estimate of 330,000-340,000 tons per year based on iridium measurements in deep-sea sediment samples from the mid-Pacific Ocean.
A number of other workers have made estimates of the meteoritic dust influx to the earth that are often quoted with some finality. Estimates have continued to be made up until the present time, so it is important to contrast these in order to arrive at the general consensus.
In reviewing the various estimates by the different methods up until that time, Singer and Bandermann51 argued in 1967 that the most accurate method for determining the meteoritic dust influx to the earth was by radiochemical measurements of radioactive Al-26 in deep-sea sediments. Their confidence in this method rested on the fact that it can be shown that the only source of this radioactive nuclide is interplanetary dust, and that therefore its presence in deep-sea sediments was a more certain indicator of dust than any other chemical evidence. From measurements made by others they concluded that the influx rate is 1250 tons per day, the error margins being such that they indicated the influx rate could be as low as 250 tons per day or as high as 2,500 tons per day. These figures equate to an influx rate of over 450,000 tons per year, ranging from 91,300 tons per year to 913,000 tons per year.
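The unit conversions behind these and several of the following figures are elementary, as the sketch below shows (the bracketed attributions refer to figures quoted in the surrounding text).

```python
# The influx figures in this section are reported variously in tons per day
# and grams per year; these one-line conversions are all that is involved.

def tons_per_day_to_per_year(tons_per_day):
    return tons_per_day * 365

def grams_per_year_to_tons(g_per_year):
    return g_per_year / 1e6          # metric tons

print(tons_per_day_to_per_year(1250))    # ~456,000 tons/yr (Singer and Bandermann's central figure)
print(tons_per_day_to_per_year(250))     # ~91,300 tons/yr (their lower limit)
print(grams_per_year_to_tons(1.62e10))   # ~16,200 tons/yr (Hughes' revised figure, quoted below)
```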
They also defended this estimate, and the method behind it, as opposed to other methods. For example, satellite experiments, they said, never measured a concentration, nor even a simple flux of particles, but rather a flux of particles having a particular momentum or energy greater than some minimum threshold which depended on the detector being used. Furthermore, they argued that the impact rate near the earth should increase by a factor of about 1,000 compared with the value far away from the earth. And whereas dust influx can also be measured in the upper atmosphere, by then the particles have already begun slowing down so that any vertical mass motions of the atmosphere may result in an increase in concentration of the dust particles thus producing a spurious result. For these and other reasons, therefore, Singer and Bandermann were adamant that their estimate based on radioactive Al-26 in ocean sediments is a reliable determination of the mass influx rate to the earth and thus the mass concentration of dust in interplanetary space.
Other investigators continued to rely upon a combination of satellite, radio and visual measurements of the “different particle masses to arrive at a cumulative flux rate. Thus in 1974 Hughes reported52 that
“from the latest cumulative influx rate data the influx of interplanetary dust to the earth’s surface in the mass range 10^-13 - 10^6 g is found to be 5.7 x 10^9 g yr^-1”,
or 5,700 tons per year, drastically lower than the Singer and Bandermann estimate from Al-26 in ocean sediments. Yet within a year Hughes had revised his estimate upwards to 1.62 x 10^10 g yr^-1, with error calculations indicating that the upper and lower limits are about 3.0 and 0.8 x 10^10 g yr^-1 respectively.53 Again this was for the particle mass range between 10^-13 g and 10^6 g, and this estimate translates to 16,200 tons per year, with lower and upper limits of 8,000-30,000 tons per year. So confident now was Hughes in the data he had used for his calculations that he submitted an easier-to-read account of his work in the widely-read, popular science magazine, New Scientist.54 Here he again argued that
“as the earth orbits the sun it picks up about 16,000 tonnes of interplanetary material each year. The particles vary in size from huge meteorites weighing tonnes to small microparticles less than 0.2 micron in diameter. The majority originate from decaying comets.”
Figure 4. Plot of the cumulative flux of interplanetary matter (meteorites, meteors, and meteoritic dust, etc.) into the earth’s atmosphere (adapted from Hughes54). Note that he has subdivided the debris into two modes of origin - cometary and asteroidal - based on mass, with the former category being further subdivided according to detection techniques. From this plot Hughes calculated a flux of 16,000 tonnes per year.
Figure 4 shows the cumulative flux curve built from the various sources of data that he used to derive his calculated influx of about 16,000 tons per year. However, it should be noted here that, using the same methodology with similar data, Millman55 in 1975 and Dohnanyi56 in 1972 had produced influx estimates of 10,950 tons per year and 20,900 tons per year respectively (Figures 2 and 3 can be compared with Figure 4). Nevertheless, it could be argued that these two estimates still fall within the range of 8,000-30,000 tons per year suggested by Hughes. In any case, Hughes’ confidence in his estimate is further illustrated by his again quoting the same 16,000 tons per year influx figure in a paper published in an authoritative book on the subject of cosmic dust.58
Meanwhile, in a somewhat novel approach to the problem, Wetherill in 1976 derived a meteoritic dust influx estimate by looking at the possible dust production rate at its source.59 He argued that whereas the present sources of meteorites are probably multiple, it being plausible that both comets and asteroidal bodies of several kinds contribute to the flux of meteorites on the earth, the immediate source of meteorites is those asteroids, known as Apollo objects, that in their orbits around the sun cross the earth’s orbit. He then went on to calculate the mass yield of meteoritic dust (meteoroids) and meteorites from the fragmentation and cratering of these Apollo asteroids. He found the combined yield from both cratering and complete fragmentation to be 7.6 x 10^10 g yr^-1, which translates into a figure of 76,000 tonnes per year. Of this figure he calculated that 190 tons per year would represent meteorites in the mass range of 10^2 - 10^6 g, a figure which compared well with terrestrial meteorite mass impact rates obtained by various other calculation methods, and also with other direct measurement data, including observation of the actual meteorite flux. This figure of 76,000 tons per year is of course much higher than those estimates based on cumulative flux calculations such as those of Hughes,60 but still below the range of results gained from various chemical analyses of deep-sea sediments, such as those of Barker and Anders,61 Kyte and Wasson,62 and Singer and Bandermann,63 and of the Antarctic ice by Ganapathy.64 No wonder a textbook in astronomy compiled by a worker in the field and published in 1983 gave a figure for the total meteoroid flux of about 10,000 - 1,000,000 tons per year.65
In an oft-quoted paper published in 1985, Grün and his colleagues66 reported on yet another cumulative flux calculation, but this time based primarily on satellite measurement data. Because these satellite measurements had been made in interplanetary space, the figure derived from them would be regarded as a measure of the interplanetary dust flux. Consequently, to calculate from that figure the total meteoritic mass influx on the earth, both the gravitational increase at the earth and the surface area of the earth had to be taken into account. The result was an influx figure of about 40 tons per day, which translates to approximately 14,600 tons per year. This of course still equates fairly closely to the influx estimate made by Hughes.67
As well as satellite measurements, one of the other major sources of data for cumulative flux calculations has been measurements made using ground-based radars. In 1988 Olsson-Steel68 reported that previous radar meteor observations made in the VHF band had rendered a flux of particles in the 10^-6 - 10^-2 g mass range that was anomalously low when compared to the fluxes derived from optical meteor observations or satellite measurements. He therefore found that HF radars were necessary in order to detect the total flux into the earth’s atmosphere. Consequently he used radar units near Adelaide and Alice Springs in Australia to make measurements at a number of different frequencies in the HF band. Indeed, Olsson-Steel believed that the radar near Alice Springs was at that time the most powerful device ever used for meteor detection, and because of its sensitivity the meteor count rates were extremely high. From this data he calculated a total influx of particles in the range 10^-6 - 10^-2 g of 12,000 tons per year, which as he points out is almost identical to the flux in the same mass range calculated by Hughes.69,70 He concluded that this implies that, neglecting the occasional asteroid or comet impact, meteoroids in this mass range dominate the total flux to the atmosphere, which he says amounts to about 16,000 tons per year as calculated by Thomas et al.71
In a different approach to the use of ice as a meteoritic dust collector, in 1987 Maurette and his colleagues72 reported on their analyses of meteoritic dust grains extracted from samples of black dust collected from the melt zone of the Greenland ice cap. The reasoning behind this technique was that the ice now melting at the edge of the ice cap had, during the time since it formed inland and flowed outwards to the melt zone, been collecting cosmic dust of all sizes and masses. The quantity thus found by analysis represents the total flux over that time period, which can then be converted into an annual influx rate. While their analyses of the collected dust particles were based on size fractions, they relied on the mass-to-size relationship established by Grün et al.73 to convert their results to flux estimates. They calculated that each kilogram of black dust they collected for extraction and analysis of its contained meteoritic dust corresponded to a collector surface of approximately 0.5 square metres which had been exposed for approximately 3,000 years to meteoritic dust infall. Adding together their tabulated flux estimates for each size fraction below 300 microns yields a total meteoritic dust influx estimate of approximately 4,500 tons per year, well below that calculated from satellite and radar measurements, and drastically lower than that calculated by chemical analyses of ice.
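The arithmetic of this ice-cap “collector” approach is easy to sketch. In the illustration below, the 0.5 square metres and 3,000 years are the figures Maurette and his colleagues quote, while the mass of meteoritic grains per kilogram of black dust is a hypothetical placeholder chosen so that the result lands near their 4,500 tons per year; their actual size-fraction data are not reproduced here.

```python
# Sketch of the ice-cap "collector" arithmetic described for Maurette et al.
# The meteoritic mass per kilogram of black dust is a hypothetical placeholder;
# the 0.5 m^2 and 3,000 yr figures are the ones quoted in the text.

meteoritic_g_per_kg_dust = 0.013   # hypothetical grams of meteoritic grains per kg of black dust
collector_area_m2 = 0.5            # effective collecting area represented by 1 kg of black dust
exposure_years = 3000.0            # exposure time represented by that kilogram
earth_area_m2 = 5.1e14             # earth's surface area

flux_g_m2_yr = meteoritic_g_per_kg_dust / (collector_area_m2 * exposure_years)
global_tons_per_yr = flux_g_m2_yr * earth_area_m2 / 1e6
print(f"global influx = {global_tons_per_yr:,.0f} tons/yr")   # lands near 4,500 by construction
```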
However, in their defense it can at least be said that in comparison to the chemical method this technique is based on actual identification of the meteoritic dust grains, rather than expecting the chemical analyses to represent the meteoritic dust component in the total samples of dust analysed. Nevertheless, an independent study in another polar region at about the same time came up with a higher influx rate more in keeping with that calculated from satellite and radar measurements. In that study, Tuncel and Zoller74 measured the iridium content in atmospheric samples collected at the South Pole. During each 10-day sampling period, approximately 20,000-30,000 cubic metres of air was passed through a 25-centimetre-diameter cellulose filter, which was then submitted for a wide range of analyses. Thirty such atmospheric particulate samples were collected over an 11 month period, which ensured that seasonal variations were accounted for. Based on their analyses they discounted any contribution of iridium to their samples from volcanic emissions, and concluded that iridium concentrations in their samples could be used to estimate the meteoritic dust component in their atmospheric particulate samples and thus the global meteoritic dust influx rate. Thus they calculated a global flux of 6,000-11,000 tons per year.
In evaluating their result they tabulated other estimates from the literature via a wide range of methods, including the chemical analyses of ice and sediments. In defending their estimate against the higher estimates produced by those chemical methods, they suggested that samples (particularly sediment samples) that integrate large time intervals include, in addition to background dust particles, the fragmentation products from large bodies. They reasoned that this meant the chemical methods do not discriminate between background dust particles and fragmentation products from large bodies, and so a significant fraction of the flux estimated from sediment samples may be due to such large body impacts. On the other hand, their estimate of 6,000-11,000 tons per year for particles smaller than 10^6 g, they argued, is in reasonable agreement with estimates from satellite and radar studies.
Finally, in a follow-up study, Maurette with another group of colleagues75 investigated a large sample of micrometeorites collected by the melting and filtering of approximately 100 tons of ice from the Antarctic ice sheet. The grains in the sample were first characterised by visual techniques to sort them into their basic meteoritic types, and then selected particles were submitted for a wide range of chemical and isotopic analyses. Neon isotopic analyses, for example, were used to confirm which particles were of extraterrestrial origin. Drawing also on their previous work they concluded that a rough estimate of the meteoritic dust flux, for particles in the size range 50-300 microns, as recovered from either the Greenland or the Antarctic ice sheets, represents about a third of the total mass influx on the earth at approximately 20,000 tons per year.
| Investigator | Method | Estimate (tons/year) |
| Pettersson | Ni in atmospheric dust | 14,300,000 |
| Barker and Anders | Ir and Os in deep-sea sediments | 100,000 (50,000 - 150,000) |
| Ganapathy | Ir in Antarctic ice | 400,000 |
| Kyte and Wasson | Ir in deep-sea sediments | 330,000 - 340,000 |
| Millman | Satellite, radar, visual | 10,950 |
| Dohnanyi | Satellite, radar, visual | 20,900 |
| Singer and Bandermann | Al-26 in deep-sea sediments | 456,000 (91,300 - 913,000) |
| Hughes (1975 - 1978) | Satellite, radar, visual | 16,200 (8,000 - 30,000) |
| Wetherill | Fragmentation of Apollo asteroids | 76,000 |
| Grün et al. | Satellite data particularly | 14,600 |
| Olsson-Steel | Radar data primarily | 16,000 |
| Maurette et al. | Dust from melting Greenland ice | 4,500 |
| Tuncel and Zoller | Ir in Antarctic atmospheric particulates | 6,000 - 11,000 |
| Maurette et al. | Dust from melting Antarctic ice | 20,000 |
Table 3. Summary of the earth’s meteoritic dust influx estimates via the different measurement techniques.
Over the last three decades numerous attempts have been made using a variety of methods to estimate the meteoritic dust influx to the earth. Table 3 is the summary of the estimates discussed here, most of which are repeatedly referred to in the literature.
Clearly, there is no consensus in the literature as to what the annual influx rate is. Admittedly, no authority today would agree with Pettersson’s 1960 figure of 14,000,000 tons per year. However, there appear to be two major groupings - those chemical methods which give results in the 100,000-400,000 tons per year range or thereabouts, and those methods, particularly cumulative flux calculations based on satellite and radar data, that give results in the range 10,000-20,000 tons per year or thereabouts. There are those who would claim the satellite measurements give results that are too low because of the sensitivities of the techniques involved, whereas there are those on the other hand who would claim that the chemical methods give results that are too high because they include fragmentation products from large bodies along with the background dust particles.
Perhaps the “safest” option is to quote the meteoritic dust influx rate as within a range. This is exactly what several authorities on this subject have done when producing textbooks. For example, Dodd76 has suggested a daily rate of between 100 and 1,000 tons, which translates into 36,500-365,000 tons per year, while Hartmann,77 who refers to Dodd, quotes an influx figure of 10,000-1 million tons per year. Hartmann’s quoted influx range certainly covers the range of estimates in Table 3, but is perhaps a little generous with the upper limit. Probably to avoid this problem and yet still cover the wide range of estimates, Henbest writing in New Scientist in 199178 declares:
“Even though the grains are individually small, they are so numerous in interplanetary space that the Earth sweeps up some 100,000 tons of cosmic dust every year.”79
Perhaps this is a “safe” compromise!
However, on balance we would have to say that the chemical methods when reapplied to polar ice, as they were by Maurette and his colleagues, gave a flux estimate similar to that derived from satellite and radar data, but much lower than Ganapathy’s earlier chemical analysis of polar ice. Thus it would seem more realistic to conclude that the majority of the data points to an influx rate within the range 10,000-20,000 tons per year, with the outside possibility that the figure may reach 100,000 tons per year.
Van Till et al. suggest:
“To compute a reasonable estimate for the accumulation of meteoritic dust on the moon we divide the earth’s accumulation rate of 16,000 tons per year by 16 for the moon’s smaller surface area, divide again by 2 for the moon’s smaller gravitational force, yielding an accumulation rate of about 500 tons per year on the moon.”80
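Their scaling is simple enough to state in two lines:

```python
# The Van Till et al. scaling, exactly as quoted: the earth's rate divided by 16
# for the moon's smaller surface area and by 2 for its weaker gravity.

earth_influx_tons_per_yr = 16_000
moon_influx = earth_influx_tons_per_yr / 16 / 2
print(moon_influx)   # 500 tons per year
```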
However, Hartmann81 suggests a figure of 4,000 tons per year from his own published work,82 although this estimate is again calculated from the terrestrial influx rate taking into account the smaller surface area of the moon.
These estimates are of course based on the assumption that the density of meteoritic dust in the area of space around the earth-moon system is fairly uniform, an assumption verified by satellite measurements. However, with the US Apollo lunar exploration missions of 1969-1972 came the opportunities to sample the lunar rocks and soils, and to make more direct measurements of the lunar meteoritic dust influx.
One of the earliest estimates based on actual moon samples was that made by Keays and his colleagues,83 who analysed for trace elements twelve lunar rock and soil samples brought back by the Apollo 11 mission. From their results they concluded that there was a meteoritic or cometary component to the samples, and that component equated to an influx rate of 2.9 x 10^-9 g cm^-2 yr^-1 of carbonaceous-chondrite-like material. This equates to an influx rate of over 15,200 tons per year. However, it should be kept in mind that this estimate is based on the assumption that the meteoritic component represents an accumulation over a period of more than 1 billion years, the figure given being the anomalous quantity averaged over that time span. These workers also cautioned about making too much of this estimate because the samples were only derived from one lunar location.
Within a matter of weeks, four of the six investigators published a complete review of their earlier work along with some new data.84 To obtain their new meteoritic dust influx estimate they compared the trace element contents of their lunar soil and breccia samples with the trace element contents of their lunar rock samples. The assumption then was that the soil and breccia is made up of the broken-down rocks, so that any trace element differences between the rocks and soils/breccias would represent material that had been added to the soils/breccias as the rocks were mechanically broken down. Having determined the trace element content of this “extraneous component” in their soil samples, they sought to identify its source. They then assumed that the exposure time of the region (the Apollo 11 landing site or Tranquillity Base) was 3.65 billion years, so in that time the proton flux from the solar wind would account for some 2% of this extraneous trace element component in the soils, leaving the remaining 98% or so to be of meteoritic (to be exact, ‘particulate’) origin. Upon further calculation, this approximate 98% portion of the extraneous component seemed to be due to an approximate 1.9% admixture of carbonaceous-chondrite-like material (in other words, meteoritic dust of a particular type), and the quantity involved thus represented, over a 3.65 billion year history of soil formation, an average influx rate of 3.8 x 10^-9 g cm^-2 yr^-1, which translates to over 19,900 tons per year. However, they again added a note of caution because this estimate was only based on a few samples from one location.
Nevertheless, within six months the principal investigators of this group were again in print publishing further results and an updated meteoritic dust influx estimate.85 By now they had obtained seven samples from the Apollo 12 landing site, which included two crystalline rock samples, four samples from core “drilled” from the lunar regolith, and a soil sample. Again, all the samples were submitted for analyses of a suite of trace elements, and by again following the procedure outlined above they estimated that for this site the extraneous component represented an admixture of about 1.7% meteoritic dust material, very similar to the soils at the Apollo 11 site. Since the trace element content of the rocks at the Apollo 12 site was similar to that at the Apollo 11 site, even though the two sites are separated by 1,400 kilometres, other considerations aside, they concluded that this
“spatial constancy of the meteoritic component suggests that the influx rate derived from our Apollo 11 data, 3.8 x 10^-9 g cm^-2 yr^-1, is a meaningful average for the entire moon.”86
So in the abstract to their paper they reported that
“an average meteoritic influx rate of about 4 x 10^-9 g per square centimetre per year thus seems to be valid for the entire moon.”87
This latter figure translates into an influx rate of approximately 20,900 tons per year.
Ironically, this is the same dust influx rate estimate as for the earth made by Dohnanyi using satellite and radar measurement data via a cumulative flux calculation.88 As for the moon’s meteoritic dust influx, Dohnanyi estimated that using “an appropriate focusing factor of 2,” it is thus half of the earth’s influx, that is, 10,450 tons per year.89 Dohnanyi defended his estimate, even though in his words it “is slightly lower than the independent estimates” of Keays, Ganapathy and their colleagues. He suggested that in view of the uncertainties involved, his estimate and theirs were “surprisingly close”.
While to Dohnanyi these meteoritic dust influx estimates based on chemical studies of the lunar rocks seem very close to his estimate based primarily on satellite measurements, in reality the former are between 50% and 100% greater than the latter. This difference is significant, reasons already having been given for the higher influx estimates for the earth based on chemical analyses of deep-sea sediments compared with the same cumulative flux estimates based on satellite and radar measurements. Many of the satellite measurements were in fact made from satellites in earth orbit, and it has consequently been assumed that these measurements are automatically applicable to the moon. Fortunately, this assumption has been verified by measurements made by the Russians from their moon-orbiting satellite Luna 19, as reported by Nazarova and his colleagues.90 Those measurements plot within the field of near-earth satellite data as depicted by, for example, Hughes.91 Thus there seems no reason to doubt that the satellite measurements in general are applicable to the meteoritic dust influx to the moon. And since Nazarova et al.’s Luna 19 measurements are compatible with Hughes’ cumulative flux plot of near-earth satellite data, then Hughes’ meteoritic dust influx estimate for the earth is likewise applicable to the moon, except that when the relevant focusing factor, as outlined and used by Dohnanyi,92 is taken into account we obtain a meteoritic dust influx to the moon estimate from this satellite data (via the standard cumulative flux calculation method) of half the earth’s figure, that is, about 8,000-9,000 tons per year.
Apart from satellite measurements using various techniques and detectors to actually measure the meteoritic dust influx to the earth-moon system, the other major direct detection technique used to estimate the meteoritic dust influx to the moon has been the study of the microcraters that are found in the rocks exposed at the lunar surface. It is readily apparent that the moon’s surface has been impacted by large meteorites, given the sizes of the craters that have resulted, but craters of all sizes are found on the lunar surface right down to the micro-scale. The key factors are the impact velocities of the particles, whatever their size, and the lack of an atmosphere on the moon to slow down (or burn up) the meteorites. Consequently, provided their mass is sufficient, even the tiniest dust particles will produce microcraters on exposed rock surfaces upon impact, just as they do when impacting the windows on spacecraft (the study of microcraters on satellite windows being one of the satellite measurement techniques). Additionally, the absence of an atmosphere on the moon, combined with the absence of water on the lunar surface, has meant that chemical weathering as we experience it on the earth just does not happen on the moon. There is of course still physical erosion, again due to impacting meteorites of all sizes and masses, and due to the particles of the solar wind, but these processes have also been studied as a result of the Apollo moon landings. However, it is the microcraters in the lunar rocks that have been used to estimate the dust influx to the moon.
Perhaps one of the first attempts to try and use microcraters on the moon’s surface as a means of determining the meteoritic dust influx to the moon was that of Jaffe,93 who compared pictures of the lunar surface taken by Surveyor 3 and then 31 months later by the Apollo 12 crew. The Surveyor 3 spacecraft sent thousands of television pictures of the lunar surface back to the earth between April 20 and May 3, 1967, and subsequently on November 20, 1969 the Apollo 12 astronauts visited the same site and took pictures with a hand camera. Apart from the obvious signs of disturbance of the surface dust by the astronauts, Jaffe found only one definite change in the surface. On the bottom of an imprint made by one of the Surveyor footpads when it bounced on landing, all of the pertinent Apollo pictures showed a particle about 2mm in diameter that did not appear in any of the Surveyor pictures. After careful analysis he concluded that the particle was in place subsequent to the Surveyor picture-taking. Furthermore, because of the resolution of the pictures any crater as large as 1.5mm in diameter should have been visible in the Apollo pictures. Two pits were noted along with other particles, but as they appeared on both photographs they must have been produced at the time of the Surveyor landing. Thus Jaffe concluded that no meteorite craters as large as 1.5 mm in diameter appeared on the bottom of the imprint, 20cm in diameter, during those 31 months, so therefore the rate of meteorite impact was less than 1 particle per square metre per month. This corresponds to a flux of 4 x 10-7 particles m-2sec-1 of particles with a mass of 3 x 10-8g, a rate near the lower limit of meteoritic dust influx derived from spacecraft measurements, and many orders of magnitude lower than some previous estimates. He concluded that the absence of detectable craters in the imprint of the Surveyor 3 footpad implied a very low meteoritic dust influx onto the lunar surface.
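The arithmetic behind Jaffe’s upper limit is easily reproduced. In the sketch below the footpad imprint is treated as a circle 20 cm in diameter and an average month length is used; both are assumptions made here for illustration rather than details taken from Jaffe’s paper.

```python
import math

# Upper limit implied by finding no new craters as large as 1.5 mm on the bottom of a
# 20 cm diameter footpad imprint over 31 months. Treating the imprint as a circle and
# using an average month length are assumptions made here for illustration.

IMPRINT_DIAMETER_M = 0.20
MONTHS = 31
SECONDS_PER_MONTH = 365.25 * 24 * 3600 / 12.0

area_m2 = math.pi * (IMPRINT_DIAMETER_M / 2.0) ** 2        # about 0.031 m^2

limit_per_m2_per_month = 1.0 / (area_m2 * MONTHS)          # about 1 particle per m^2 per month
limit_per_m2_per_sec = limit_per_m2_per_month / SECONDS_PER_MONTH  # about 4 x 10^-7 m^-2 s^-1

print(f"< {limit_per_m2_per_month:.2f} particles per m^2 per month")
print(f"< {limit_per_m2_per_sec:.1e} particles per m^2 per second")
```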
With the sampling of the lunar surface carried out by the Apollo astronauts and the return of rock samples to the earth, much attention focused on the presence of numerous microcraters on exposed rock surfaces as another means of calculating the meteoritic dust influx. These microcraters range in diameter from less than 1 micron to more than 1 cm, and their ubiquitous presence on exposed lunar rock surfaces suggests that microcratering has affected literally every square centimetre of the lunar surface. However, in order to translate quantified descriptive data on microcraters into data on interplanetary dust particles and their influx rate, a calibration has to be made between the lunar microcrater diameters and the masses of the particles that must have impacted to form the craters. Hartung et al.94 suggest that several approaches using the results of laboratory cratering experiments are possible, but narrowed their choice to two of these approaches based on microparticle accelerator experiments. Because the crater diameter for any given particle diameter increases proportionally with increasing impact velocity, the calibration procedure employs a constant impact velocity which is chosen as 20km/sec. Furthermore, that figure is chosen because the velocity distribution of interplanetary dust or meteoroids based on visual and radar meteors is bounded by the earth and the solar system escape velocities, and has a maximum at about 20km/sec, which thus conventionally is considered to be the mean velocity for meteoroids. Particles impacting the moon may have a minimum velocity of 2.4km/sec, the lunar escape velocity, but the mean is expected to remain near 20km/sec because of the relatively low effective cross-section of the moon for slower particles. Inflight velocity measurements of micron-sized meteoroids are generally consistent with this distribution. So using a constant impact velocity of 20km/sec gives a calibration relationship between the diameters of the impacting particles and the diameters of the microcraters. Assuming a density of 3g/cm3 allows this calibration relationship to be between the diameters of the microcraters and the masses of the impacting particles.
After determining the relative masses of micrometeoroids, their flux on the lunar surface may then be obtained by correlating the areal density of microcraters on rock surfaces with surface exposure times for those sample rocks. In other words, in order to convert crater populations on a given sample into the interplanetary dust flux the sample’s residence time at the lunar surface must be known.95 These residence times at the lunar surface, or surface exposure times, have been determined either by cosmogenic Al26 radioactivity measurements or by cosmic ray track density measurements,96 or more often by solar-flare particle track density measurements.97
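The two steps just described, converting crater diameters into impactor masses via a calibration and converting areal crater densities plus exposure ages into a flux, can be sketched schematically as follows. The fixed crater-to-particle diameter ratio of two (a figure discussed below in connection with Hughes) and the particle density of 3 g/cm3 are simplifying assumptions; the actual calibrations are empirical curves, and the sample numbers used are hypothetical.

```python
import math

# Schematic version of the microcrater reduction described above. The fixed
# crater-to-particle diameter ratio and the assumed particle density are the
# simplifications referred to in the text; the real calibrations are empirical
# curves derived from microparticle accelerator experiments at about 20 km/sec.

CRATER_TO_PARTICLE_DIAMETER_RATIO = 2.0   # assumed constant ratio (see discussion of Hughes below)
PARTICLE_DENSITY_G_PER_CM3 = 3.0          # conventional assumed density

def impactor_mass_from_crater(crater_diameter_microns: float) -> float:
    """Estimate the impacting particle's mass (g) from a crater diameter (microns),
    treating the particle as a sphere."""
    particle_diameter_cm = (crater_diameter_microns / CRATER_TO_PARTICLE_DIAMETER_RATIO) * 1.0e-4
    return PARTICLE_DENSITY_G_PER_CM3 * math.pi / 6.0 * particle_diameter_cm ** 3

def flux_from_crater_counts(n_craters: float, area_cm2: float, exposure_years: float) -> float:
    """Cumulative flux (particles per cm^2 per year) from an areal crater count and a
    surface exposure age (itself derived from solar-flare track densities)."""
    return n_craters / (area_cm2 * exposure_years)

# Purely hypothetical numbers, for illustration only:
print(impactor_mass_from_crater(100.0))         # mass implied by a 100 micron crater
print(flux_from_crater_counts(50, 2.0, 1.0e4))  # 50 craters on 2 cm^2 exposed for 10,000 years
```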
On this basis Hartung et al.98 concluded that an average minimum flux of particles 25 micrograms and larger is 2.5 x 10-6 particles per cm2 per year on the lunar surface supposedly over the last 1 million years, and that a minimum cumulative flux curve over the range of masses 10-12 - 10-4g based on lunar data alone is about an order of magnitude less than independently derived present-day flux data from satellite-borne detector experiments. Furthermore, they found that particles of masses 10-7 - 10-4g are the dominant contributors to the cross-sectional area of interplanetary dust particles, and that these particles are largely responsible for the exposure of fresh lunar rock surfaces by superposition of microcraters. Also, they suggested that the overwhelming majority of all energy deposited at the surface of the moon by impact is delivered by particles 10-6 - 10-2g in mass.
A large number of other studies have been done on microcraters on lunar surface rock samples, and from them calculations have been made to estimate the meteoritic dust (micrometeoroid) influx to the moon. For example, Fechtig et al. investigated in detail a 2cm2 portion of a particular sample using optical and scanning electron microscope (SEM) techniques. Microcraters were measured and counted optically, the results being plotted to show the relationship between microcrater diameters and the cumulative crater frequency. Like other investigators, they found that in all large microcraters 100-200 microns in diameter there were on average one or two “small” microcraters about 1 micron in diameter within them, while in all “larger” microcraters (200-1,000 microns in diameter), of which there are many on almost all lunar rocks, there are large numbers of these “smaller” microcraters. The counting of these “small” microcraters within the “larger” microcraters was found to be statistically significant in estimating the overall microcratering rate and the distribution of particle sizes and masses that have produced the microcraters, because, assuming an unchanging impacting particle size or energy distribution with time, they argued that an equal probability exists for the case when a large crater superimposes itself upon a small crater, thus making its observation impossible, and the case when a small crater superimposes itself upon a larger crater, thus enabling the observation of the small crater. In other words, during the random cratering process, on the average, for each small crater observable within a larger microcrater, there must have existed one small microcrater rendered unobservable by the subsequent formation of the larger microcrater. Thus they reasoned it is necessary to correct the number of observed small craters upwards to account for this effect. Using a correction factor of two they found that their resultant microcrater size distribution plot agreed satisfactorily with that found in another sample by Schneider et al.100 Their measuring and counting of microcraters on other samples also yielded size distributions similar to those reported by other investigators on other samples.
Fechtig et al. also conducted their own laboratory simulation experiments to calibrate microcrater size with impacting particle size, mass and energy. Once the cumulative microcrater number for a given area was calculated from this information, the cumulative meteoroid flux per second for this given area was easily calculated by again dividing the cumulative microcrater number by the exposure ages of the samples, previously determined by means of solar-flare track density measurements. Thus they calculated a cumulative meteoroid flux on the moon of 4 (±3) x 10-5 particles m-2 sec-1, which they suggested is fairly consistent with in situ satellite measurements. Their plot comparing micrometeoroid fluxes derived from lunar microcrater measurements with those attained from various satellite experiments (that is, the cumulative number of particles per square metre per second across the range of particle masses) is reproduced in Figure 5.
Mandeville101 followed a similar procedure in studying the microcraters in a breccia sample collected at the Apollo 15 landing site. Crater numbers were counted and diameters measured. Calibration curves were experimentally derived to relate impact velocity and microcrater diameter, plus impacting particle mass and microcrater diameter. The low solar-flare track density suggested a short and recent exposure time, as did the low density of microcraters. Consequently, in calculating the cumulative micrometeoroid flux they assumed a 3,000-year exposure time because of this measured solar-flare track density and the assumed solar-track production rate. The resultant cumulative particle flux was 1.4 x 10-5 particles per square metre per second for particles greater than 2.5 x 10-10g at an impact velocity of 20km/sec, a value which again appears to be in close agreement with flux values obtained by satellite measurements, but at the lower end of the cumulative flux curve calculated from microcraters by Fechtig et al.
Figure 5. Comparison of micrometeoroid fluxes derived from lunar microcrater measurements (cross-hatched and labelled “MOON”) with those obtained in various satellite in situ experiments (adapted from Fechtig et al.99). The range of masses/sizes has been subdivided into dust and meteors.
Schneider et al.102 also followed the same procedure in looking at microcraters on Apollo 15 and 16, and Luna 16 samples. After counting and measuring microcraters and calibration experiments, they used both optical and scanning electron microscopy to determine solar-flare track densities and derive solar-flare exposure ages. They plotted their resultant cumulative meteoritic dust flux on a flux versus mass diagram, such as Figure 5, rather than quantifying it. However, their cumulative flux curve is close to the results of other investigators, such as Hartung et al.103 Nevertheless, they did raise some serious questions about the microcrater data and the derivation of it, because they found that flux values based on lunar microcrater studies are generally less than those based on direct measurements made by satellite-borne detectors, which is evident on Figure 5 also. They found that this discrepancy is not readily resolved but may be due to one or more factors. First on their list of factors was a possible systematic error existing in the solar-flare track method, perhaps related to our present-day knowledge of the solar-flare particle flux. Indeed, because of uncertainties in applying the solar-flare flux derived from solar-flare track records in time-controlled situations such as the Surveyor 3 spacecraft, they concluded that these implied their solar-flare exposure ages were systematically too low by a factor of between two and three. Ironically, this would imply that the calculated cumulative dust flux from the microcraters is systematically too high by the same factor, which would mean that there would then be an even greater discrepancy between flux values from lunar microcrater studies and the direct measurements made by the satellite-borne detectors. However, they suggested that part of this systematic difference may be because the satellite-borne detectors record an enhanced flux due to particles ejected from the lunar surface by impacting meteorites of all sizes. In any case, they argued that some of this systematic difference may be related to the calibration of the lunar microcraters and the satellite-borne detectors. Furthermore, because we can only measure the present flux, for example by satellite detectors, it may in fact be higher than the long-term average, which they suggest is what is being derived from the lunar microcrater data.
Morrison and Zinner104 also raised questions regarding solar-flare track density measurements and derived exposure ages. They were studying samples from the Apollo 17 landing area and counted and measured microcraters on rock sample surfaces whose original orientation on the lunar surface was known, so that their exposure histories could be determined to test any directional variations in both the micrometeoroid flux and solar-flare particles. Once measured, they compared their solar-flare track density versus depth profiles against those determined by other investigators on other samples and found differences in the steepnesses of the curves, as well as their relative positions with respect to the track density and depth values. They found that differences in the steepnesses of the curves did not correlate with differences in supposed exposure ages, and thus although they couldn’t exclude these real differences in slopes reflecting variations in the activity of the sun, it was more probable that these differences arose from variations in observational techniques, uncertainties in depth measurements, erosion, dust cover on the samples, and/or the precise lunar surface exposure geometry of the different samples measured. They then suggested that the weight of the evidence appeared to favour those curves (track density versus depth profiles) with the flatter slopes, although such a conclusion could be seriously questioned as those profiles with the flatter slopes do not match the Surveyor 3 profile data even by their own admission.
Rather than calculating a single cumulative flux figure, Morrison and Zinner treated the smaller microcraters separately from the larger microcraters, quoting flux rates of approximately 900 craters of 0.1 micron diameter per square centimetre per year and approximately 10-15 x 10-6 craters of 500 micron diameter or greater per square centimetre per year. They found that these rates were independent of the pointing direction of the exposed rock surface relative to the lunar sky and thus reflected no directional variation in the micrometeorite flux. These rates also appeared to be independent of the supposed exposure times of the samples. They also suggested that the ratio of microcrater numbers to solar-flare particle track densities would make a convenient measure for comparing flux results of different laboratories/investigators and varying sampling situations. Comparing such ratios from their data with those of other investigations showed that some other investigators had ratios lower than theirs by a factor of as much as 50, which can only raise serious questions about whether the microcrater data are really an accurate measure of meteoritic dust influx to the moon. However, it can’t be the microcraters themselves that are the problem, but rather the underlying assumptions involved in the determination/estimation of the supposed ages of the rocks and their exposure times.
Another relevant study is that made by Cour-Palais,105 who examined the heat-shield windows of the command modules of the Apollo 7 - 17 (excluding Apollo 11) spacecraft for meteoroid impacts as a means of estimating the interplanetary dust flux. As part of the study he also compared his results with data obtained from the Surveyor 3 lunar-lander’s TV shroud. In each case, the length of exposure time was known, which removed the uncertainty and assumptions that are inherent in estimation of exposure times in the study of microcraters on lunar rock samples. Furthermore, results from the Apollo spacecraft represented interplanetary space measurements very similar to the satellite-borne detector techniques, whereas the Surveyor 3 TV shroud represented a lunar surface detector. In all, Cour-Palais found a total of 10 micrometeoroid craters of various diameters on the windows of the Apollo spacecraft. Calibration tests were conducted by impacting these windows with microparticles of various diameters and masses, and the results were used to plot a calibration curve between the diameters of the micrometeoroid craters and the estimated masses of the impacting micrometeoroids. Because the Apollo spacecraft had variously spent time in earth orbit, and some in lunar orbit also, as well as transit time in interplanetary space between the earth and the moon, correction factors had to be applied so that the Apollo window data could be taken as a whole to represent measurements in interplanetary space. He likewise applied a modification factor to the Surveyor 3 TV shroud results so that with the Apollo data the resultant cumulative mass flux distribution could be compared to results obtained from satellite-borne detector systems, with which they proved to be in good agreement.
He concluded that the results represent an average micrometeoroid flux as it exists at the present time away from the earth’s gravitational sphere of influence for masses < 10-7 g. However, he noted that the satellite-borne detector measurements which represent the current flux of dust are an order of magnitude higher than the flux supposedly recorded by the lunar microcraters, a record which is interpreted as the “prehistoric” flux. On the other hand, he corrected the Surveyor 3 results to discount the moon’s gravitational effect and bring them into line with the interplanetary dust flux measurements made by satellite-borne detectors. But if the Surveyor 3 results are taken to represent the flux at the lunar surface then that flux is currently an order of magnitude lower than the flux recorded by the Apollo spacecraft in interplanetary space. In any case, the number of impact craters measured on these respective spacecraft is so small that one wonders how statistically representative these results are. Indeed, given the size of the satellite-borne detector systems, one could argue likewise as to how representative of the vastness of interplanetary space are these detector results.
Figure 6. Cumulative fluxes (numbers of micrometeoroids with mass greater than the given mass which will impact every second on a square metre of exposed surface one astronomical unit from the sun) derived from satellite and lunar microcrater data (adapted from Hughes106).
Others had been noticing this disparity between the lunar microcrater data and the satellite data. For example, Hughes reported that this disparity had been known “for many years”.106 His diagram to illustrate this disparity is shown here as Figure 6. He highlighted a number of areas where he saw there were problems in these techniques for measuring micrometeoroid influx. For example, he reported that new evidence suggested that the meteoroid impact velocity was about 5km/sec rather than the 20km/sec that had hitherto been assumed. He suggested that taking this into account would only move the curves in Figure 6 to the right by factors varying with the velocity dependence of microphone response and penetration hole size (for the satellite-borne detectors) and crater diameter (the lunar microcraters), but because these effects are only functions of meteoroid momentum or kinetic energy their use in adjusting the data is still not sufficient to bring the curves in Figure 6 together (that is, to overcome this disparity between the two sets of data). Furthermore, with respect to the lunar microcrater data, Hughes pointed out that two other assumptions, namely, the ratio of the diameter of the microcrater to the diameter of the impacting particle being fairly constant at two, and the density of the particle being 3g per cm3, needed to be reconsidered in the light of laboratory experiments which had shown the ratio decreases with particle density and particle density varies with mass. He suggested that both these factors make the interpretation of microcraters more difficult, but that “the main problem” lies in estimating the time the rocks under consideration have remained exposed on the lunar surface. Indeed, he pointed to the assumption that solar activity has remained constant in the past, the key assumption required for calculation of an exposure age, as “the real stumbling block” - the particle flux could have been lower in the past or the solar-flare flux could have been higher. He suggested that, because laboratory simulation indicates that solar-wind sputter erosion is the dominant factor determining microcrater lifetimes, knowing this enables the micrometeoroid influx to be derived by only considering rock surfaces with an equilibrium distribution of microcraters. He concluded that this line of research indicated that the micrometeoroid influx had supposedly increased by a factor of four in the last 100,000 years and that this would account for the disparity between the lunar microcrater data and the satellite data as shown by the separation of the two curves in Figure 6. However, this “solution”, according to Hughes, “creates the question of why this flux has increased”, a problem which appears to remain unsolved.
In a paper reviewing the lunar microcrater data and the lunar micrometeoroid flux estimates, Hörz et al.107 discuss some key issues that arise from their detailed summary of micrometeoroid fluxes derived by various investigators from lunar sample analyses. First, the directional distribution of micrometeoroids is extremely non-uniform, the meteoroid flux differing by about three orders of magnitude between the direction of the earth’s apex and anti-apex. Since the moon may only collect particles greater than 10-12g predominantly from only the apex direction, fluxes derived from lunar microcrater statistics, they suggest, may have to be increased by as much as a factor of π for comparison with satellite data that were taken in the apex direction. On the other hand, apex-pointing satellite data generally have been corrected upward because of an assumed isotropic flux, so the actual anisotropy has led to an overestimation of the flux, thus making the satellite results seem to represent an upper limit for the flux. Second, the micrometeoroids coming in at the apex direction appear to have an average impact velocity of only 8km/sec, whereas the fluxes calculated from lunar microcraters assume a standard impact velocity of 20km/sec. If as a result corrections are made, then the projectile mass necessary to produce any given microcrater will increase, and thus the moon-based flux for masses greater than 10-10g will effectively be enhanced by a factor of approximately 5. Third, particles of mass less than 10-12g generally appear to have relative velocities of at least 50km/sec, whereas lunar flux curves for these masses are based again on a 20km/sec impact velocity. So again, if appropriate corrections are made the lunar cumulative micrometeoroid flux curve would shift towards smaller masses by a factor of possibly as much as 10. Nevertheless, Hörz et al. conclude that
“as a consequence the fluxes derived from lunar crater statistics agree within the order of magnitude with direct satellite results if the above uncertainties in velocity and directional distribution are considered.”
Although these comments appeared in a review paper published in 1975, the footnote on the first page signifies that the paper was presented at a scientific meeting in 1973, the same meeting at which three of those investigators also presented another paper in which they made some further pertinent comments. Both there and in a previous paper, Gault, Hörz and Hartung108,109 had presented what they considered was a “best” estimate of the cumulative meteoritic dust flux based on their own interpretation of the most reliable satellite measurements. This “best” estimate they expressed mathematically in the form
N = 9.14 x 10^-6 m^-1.213 (10^-7 < m < 10^3).
Figure 7. The micrometeoroid flux measurements from spacecraft experiments which were selected to define the mass-flux distribution (adapted from Gault et al.109). Also shown is the incremental mass flux contained within each decade of m; these increments sum to approximately 10,000 tonnes per year. The data sources used are listed in their bibliography.
They commented that the use of two such exponential expressions with the resultant discontinuity is an artificial representation for the flux and not intended to represent a real discontinuity, being used for mathematical simplicity and for convenience in computational procedures. They also plotted this cumulative flux represented by these two exponential expressions, together with the incremental mass flux in each decade of particle mass, and that plot is reproduced here as Figure 7. Note that their flux curve is based on what they regard as the most reliable satellite measurements. Note also, as they did, that the fluxes derived from lunar rocks (the microcrater data) “are not necessarily directly comparable with the current satellite or photographic meteor data.”110 However, using their cumulative flux curve as depicted in Figure 7, and their histogram plot of incremental mass flux, it is possible to estimate (for example, by adding up each incremental mass flux) the cumulative mass flux, which comes to approximately 2 x 10-9 g cm-2 yr-1 or about 10,000 tons per year. This is the same estimate that they noted in their concluding remarks:-
“We note that the mass of material contributing to any enhancement, which the earth-moon system is currently sweeping up, is of the order of 1010g per year.”111
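The “adding up each incremental mass flux” step referred to above can be illustrated generically. The constants in the sketch below are placeholders rather than Gault et al.’s values (their full set of expressions, the units of N and the exact mass range are not reproduced here), so no attempt is made to recover the 10,000 tons per year figure; the sketch simply shows how the per-decade mass contributions of a cumulative power-law flux are summed and checked against the analytic integral.

```python
# Generic illustration of summing per-decade mass contributions of a cumulative
# power-law flux N(>m) = A * m**(-ALPHA). The constants below are placeholders
# (NOT Gault et al.'s values or units); only the summation technique is shown.

A = 1.0e-6                      # hypothetical amplitude of the cumulative flux
ALPHA = 1.213                   # hypothetical slope, of the order quoted in the text
M_MIN, M_MAX = 1.0e-7, 1.0e3    # hypothetical mass range, grams

def mass_flux(a: float, b: float) -> float:
    """Mass delivered by particles with masses between a and b, obtained by
    integrating m * n(m) dm, where n(m) = A * ALPHA * m**(-ALPHA - 1)."""
    return A * ALPHA * (a ** (1.0 - ALPHA) - b ** (1.0 - ALPHA)) / (ALPHA - 1.0)

# Add up the contribution of each decade of mass, as described in the text.
total = 0.0
m = M_MIN
while m < M_MAX:
    total += mass_flux(m, min(10.0 * m, M_MAX))
    m *= 10.0

# The decade-by-decade sum agrees with the single integral over the whole range.
assert abs(total - mass_flux(M_MIN, M_MAX)) < 1.0e-9 * mass_flux(M_MIN, M_MAX)
print(total)
```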
Having derived this “best” estimate flux from their mathematical modelling of the “most reliable satellite measurements”, their later comments in the same paper seem rather contradictory:-
“If we follow this line of reasoning, the basic problem then reduces to consideration of the validity of the ‘best’ estimate flux, a question not unfamiliar to the subject of micrometeoroids and a question not without considerable historical controversy. We will note here only that whereas it is plausible to believe that a given set of data from a given satellite may be in error for any number of reasons, we find the degree of correlation between the various spacecraft experiments used to define the ‘best’ flux very convincing, especially when consideration is given to the different techniques employed to detect and measure the flux. Moreover, it must be remembered that the abrasion rates, affected primarily by microgram masses, depend almost exclusively on the satellite data while the rupture times, affected only by milligram masses, depend exclusively on the photographic meteor determinations of masses. It is extremely awkward to explain how these fluxes from two totally different and independent techniques could be so similarly in error. But if, in fact, they are in error then they err by being too high, and the fluxes derived from lunar rocks are a more accurate description of the current near-earth micrometeoroid flux.” (emphasis theirs)112
One is left wondering how they can on the one hand emphasise the lunar microcrater data as being a more accurate description of the current micrometeoroid flux, when they based their “best” estimate of that flux on the “most reliable satellite measurements”. However, their concluding remarks are rather telling. The reason, of course, why the lunar microcrater data is given such emphasis is because it is believed to represent a record of the integrated cumulative flux over the moon’s billions-of-years history, which would at face value appear to be a more statistically reliable estimate than brief point-in-space satellite-borne detector measurements. Nevertheless, they are left with this unresolved discrepancy between the microcrater data and the satellite measurements, as has already been noted. So they explain the microcrater data as presenting the “prehistoric” flux, the fluxes derived from the lunar rocks being based on exposure ages derived from solar-flare track density measurements and assumptions regarding solar-flare activity in the past. As for the lunar microcrater data used by Gault et al., they state that the derived fluxes are based on exposure ages in the range 2,500 - 700,000 years, which leaves them with a rather telling enigma. If the current flux as indicated by the satellite measurements is an order of magnitude higher than the microcrater data representing a “prehistoric” flux, then the flux of meteoritic dust has had to have increased or been enhanced in the recent past. But they have to admit that
“if these ages are accepted at face value, a factor of 10 enhancement integrated into the long term average limits the onset and duration of enhancement to the past few tens of years.”
They note that of course there are uncertainties in both the exposure ages and the magnitude of an enhancement, but the real question is the source of this enhanced flux of particles, a question they leave unanswered and a problem they pose as the subject for future investigation. On the other hand, if the exposure ages were not accepted, being too long, then the microcrater data could easily be reconciled with the “more reliable satellite measurements”.
Only two other micrometeoroid and meteor influx measuring techniques appear to have been tried. One of these was the Apollo 17 Lunar Ejecta and Micrometeorite Experiment, a device deployed by the Apollo 17 crew which was specifically designed to detect micrometeorites.113 It consisted of a box containing monitoring equipment with its outside cover being sensitive to impacting dust particles. Evidently, it was capable not only of counting dust particles, but also of measuring their masses and velocities, the objective being to establish some firm limits on the numbers of microparticles in a given size range which strike the lunar surface every year. However, the results do not seem to have added to the large database already established by microcrater investigations.
The other direct measurement technique used was the Passive Seismic Experiment in which a seismograph was deployed by the Apollo astronauts and left to register subsequent impact events.114 In this case, however, the particle sizes and masses were in the gram to kilogram range of meteorites that impacted the moon’s surface with sufficient force to cause the vibrations to be recorded by the seismograph. Between 70 and 150 meteorite impacts per year were recorded, with masses in the range 100g to 1,000 kg, implying a flux rate of
log N = -1.62 -1.16 log m,
where N is the number of bodies that impact the lunar surface per square kilometre per year, with masses greater than m grams.115 This flux works out to be about one order of magnitude less than the average integrated flux from microcrater data. However, the data collected by this experiment have been used to cover that particle mass range in the development of cumulative flux curves (for example, see Figure 2 again) and the resultant cumulative mass flux estimates.
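To see what this seismically derived relation implies numerically, the sketch below simply evaluates it at the two ends of the quoted mass range. It is purely illustrative: it makes no allowance for what fraction of the lunar surface the seismometers effectively sampled, and no whole-moon total is claimed.

```python
import math

# Evaluate the seismically derived relation quoted above, log N = -1.62 - 1.16 log m,
# where N is the number of impacts per square kilometre per year with masses
# greater than m grams. Purely illustrative; no whole-moon total is attempted.

def impacts_per_km2_per_year(mass_g: float) -> float:
    return 10.0 ** (-1.62 - 1.16 * math.log10(mass_g))

print(f"{impacts_per_km2_per_year(100.0):.1e}")   # m = 100 g:    about 1 x 10^-4 per km^2 per year
print(f"{impacts_per_km2_per_year(1.0e6):.1e}")   # m = 1,000 kg: about 3 x 10^-9 per km^2 per year
```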
Figure 8. Constraints on the flux of micrometeoroids and larger objects according to a variety of independent lunar studies (adapted from Hörz et al.107)
Hörz et al. summarised some of the basic constraints derived from a variety of independent lunar studies on the lunar flux of micrometeoroids and larger objects.116 They also plotted the broad range of cumulative flux curves that were bounded by these constraints (see Figure 8). Included are the results of the Passive Seismic Experiment and the direct measurements of micrometeoroids encountered by spacecraft windows. They suggested that an upper limit on the flux can be derived from the mare cratering rate and from erosion rates on lunar rocks and other cratering data. Likewise, the negative findings on the Surveyor 3 camera lens and the perfect preservation of the footpad print of the Surveyor 3 landing gear (both referred to above) also define an upper limit. On the other hand, the lower limit results from the study of solar and galactic radiation tracks in lunar soils, where it is believed the regolith has been reworked only by micrometeoroids, so because of presumed old undisturbed residence times the flux could not have been significantly lower than that indicated. The “geochemical” evidence is also based on studies of the lunar soils where the abundances of trace elements are indicative of the type and amount of meteoritic contamination. Hörz et al. suggest that strictly, only the passive seismometer, the Apollo windows and the mare craters yield a cumulative mass distribution. All other parameters are either a bulk measure of a meteoroid mass or energy, the corresponding “flux” being calculated via the differential mass-distribution obtained from lunar microcrater investigations (‘lunar rocks’ on Figure 8). Thus the corresponding arrows on Figure 8 may be shifted anywhere along the lines defining the “upper” and “lower” limits. On the other hand, they point out that the Surveyor 3 camera lens and footpad analyses define points only.
| Estimator | Basis of estimate | Tons per year |
| | Calculated from estimates of influx to the earth | 4,000 |
| Keays et al. | Geochemistry of lunar soil and rocks | 15,200 |
| Ganapathy et al. | Geochemistry of lunar soil and rocks | 19,900 |
| Dohnanyi | Calculated from satellite, radar data | 10,450 |
| Nazarova et al. | Lunar orbit satellite data | 8,000 - 9,000 |
| by comparison with Hughes | Calculated from satellite, radar data | (4,000 - 15,000) |
| Gault et al. | Combination of lunar microcrater and satellite data | 10,000 |
Table 4. Summary of the lunar meteoritic dust influx estimates.
Table 4 summarises the different lunar meteoritic dust estimates. It is difficult to estimate a cumulative mass flux from Hörz et al.’s diagram showing the basic constraints for the flux of micrometeoroids and larger objects derived from independent lunar studies (see Figure 8), because the units on the cumulative flux axis are markedly different to the units on the same axis of the cumulative flux and cumulative mass diagram of Gault et al. from which they estimated a lunar meteoritic dust influx of about 10,000 tons per year. The Hörz et al. basic constraints diagram seems to have been partly constructed from the previous figure in their paper, which however includes some of the microcrater data used by Gault et al. in their diagram (Figure 7 here) and from which the cumulative mass flux calculation gave a flux estimate of 10,000 tons per year. Assuming then that the basic differences in the units used on the two cumulative flux diagrams (Figures 7 and 8 here) are merely a matter of the relative numbers in the two log scales, then the Gault et al. cumulative flux curve should fall within a band between the upper and lower limits, that is, within the basic constraints, of Hörz et al.’s lunar cumulative flux summary plot (Figure 8 here). Thus a flux estimate from Hörz et al.’s broad lunar cumulative flux curve would still probably centre around the 10,000 tons per year estimate of Gault et al.
In conclusion, therefore, on balance the evidence points to a lunar meteoritic dust influx figure of around 10,000 tons per year. This seems to be a reasonable, approximate estimate that can be derived from the work of Hörz et al., who place constraints on the lunar cumulative flux by carefully drawing on a wide range of data from various techniques. Even so, as we have seen, Gault et al. question some of the underlying assumptions of the major measurement techniques from which they drew their data - in particular, the lunar microcrater data and the satellite measurement data. Like the “geochemical” estimates, the microcrater data depends on uniformitarian age assumptions, including the solar-flare rate, and in common with the satellite data, uniformitarian assumptions regarding the continuing level of dust in interplanetary space and as influx to the moon. Claims are made about variations in the cumulative dust influx in the past, but these also depend upon uniformitarian age assumptions and thus the argument could be deemed circular. Nevertheless, questions of sampling statistics and representativeness aside, the figure of approximately 10,000 tons per year has been stoutly defended in the literature based primarily on present-day satellite-borne detector measurements.
Finally, one is left rather perplexed by the estimate of the moon’s accumulation rate of about 500 tons per year made by Van Till et al.117 In their treatment of the “moon dust controversy”, they are rather scathing in their comments about creationists and their handling of the available data in the literature. For example, they state:
“The failure to take into account the published data pertinent to the topic being discussed is a clear failure to live up to the codes of thoroughness and integrity that ought to characterize professional science.”118
“The continuing publication of those claims by young- earth advocates constitutes an intolerable violation of the standards of professional integrity that should characterize the work of natural scientists.”119
Having been prepared to make such scathing comments, one would have expected that Van Till and his colleagues would have been more careful with their own handling of the scientific literature that they purport to have carefully scanned. Not so, because they failed to check their own calculation of 500 tons per year for lunar dust influx with those estimates that we have seen in the same literature which were based on some of the same satellite measurements that Van Till et al. did consult, plus the microcrater data which they didn’t. But that is not all - they failed to check the factors they used for calculating their lunar accumulation rate from the terrestrial figure they had established from the literature. If they had consulted, for example, Dohnanyi, as we have already seen, they would have realised that they only needed to use a focusing factor of two, the moon’s smaller surface area apparently being largely irrelevant. So much for lack of thoroughness! Had they surveyed the literature thoroughly, then they would have to agree with the conclusion here that the dust influx to the moon is approximately 10,000 tons per year.
The second major question to be addressed is whether NASA really expected to find a thick dust layer on the moon when their astronauts landed on July 20, 1969. Many have asserted that, because of meteoritic dust influx estimates made by Pettersson and others prior to the Apollo moon landings, NASA was cautious in case there really was a thick dust layer into which their lunar lander and astronauts might sink.
Asimov was certainly one authority at the time who is often quoted. Using the 14,300,000 tons of dust per year estimate of Pettersson, Asimov made his own dust-on-the-moon calculation and commented:
“But what about the moon? It travels through space with us and although it is smaller and has a weaker gravity, it, too, should sweep up a respectable quantity of micrometeors.
To be sure, the moon has no atmosphere to friction the micrometeors to dust, but the act of striking the moon’s surface should develop a large enough amount of heat to do the job.
Now it is already known, from a variety of evidence, that the moon (or at least the level lowlands) is covered with a layer of dust. No one, however, knows for sure how thick this dust may be.
It strikes me that if this dust is the dust of falling micrometeors, the thickness may be great. On the moon there are no oceans to swallow the dust, or winds to disturb it, or life forms to mess it up generally one way or another. The dust that forms must just lie there, and if the moon gets anything like the earth’s supply, it could be dozens of feet thick.
In fact, the dust that strikes craters quite probably rolls down hill and collects at the bottom, forming ‘drifts’ that could be fifty feet deep, or more. Why not?
I get a picture, therefore, of the first spaceship, picking out a nice level place for landing purposes coming slowly downward tail-first … and sinking majestically out of sight.”120
Asimov certainly wasn’t the first to speculate about the thickness of dust on the moon. As early as 1897 Peal121 was speculating on how thick the dust might be on the moon given that “it is well known that on our earth there is a considerable fall of meteoric dust.” Nevertheless, he clearly expected only “an exceedingly thin coating” of dust. Several estimates of the rate at which meteorites fall to earth were published between 1930 and 1950, all based on visual observations of meteors and meteorite falls. Those estimates ranged from 26 metric tons per year to 45,000 tons per year.122 In 1956 Öpik123 estimated 25,000 tons per year of dust falling to the earth, the same year Watson124 estimated a total accumulation rate of between 300,000 and 3 million tons per year, and in 1959 Whipple125 estimated 700,000 tons per year.
However, it wasn’t just the matter of meteoritic dust falling to the lunar surface that concerned astronomers in their efforts to estimate the thickness of dust on the lunar surface, since the second source of pulverised material on the moon is the erosion of exposed rocks by various processes. The lunar craters are of course one of the most striking features of the moon and initially astronomers thought that volcanic activity was responsible for them, but by about 1950 most investigators were convinced that meteorite impact was the major mechanism involved.126 Such impacts pulverise large amounts of rock and scatter fragments over the lunar surface. Astronomers in the 1950s agreed that the moon’s surface was probably covered with a layer of pulverised material via this process, because radar studies were consistent with the conclusion that the lunar surface was made of fine particles, but there were no good ways to estimate its actual thickness.
Yet another contributing source to the dust layer on the moon was suggested by Lyttleton in 1956.127 He proposed that since there is no atmosphere on the moon, the moon’s surface is exposed to direct radiation, so that ultraviolet light and x-rays from the sun could slowly erode the surface of exposed lunar rocks and reduce them to dust. Once formed, he envisaged that the dust particles might be kept in motion and so slowly “flow” to lower elevations on the lunar surface where they would accumulate to form a layer of dust which he suggested might be “several miles deep”. Lyttleton wasn’t alone, since the main proponent of the thick dust view in British scientific circles was Royal Greenwich astronomer Thomas Gold, who also suggested that this loose dust covering the lunar surface could present a serious hazard to any spacecraft landing on the moon.128 Whipple, on the other hand, argued that the dust layer would be firm and compact so that humans and vehicles would have no trouble landing on and moving across the moon’s surface.129 Another British astronomer, Moore, took note of Gold’s theory that the lunar seas “were covered with layers of dust many kilometres deep” but flatly rejected this. He commented:
“The disagreements are certainly very marked. At one end of the scale we have Gold and his supporters, who believe in a dusty Moon covered in places to a great depth; at the other, people such as myself, who incline to the view that the dust can be no more than a few centimetres deep at most. The only way to clear the matter up once and for all is to send a rocket to find out.”150
So it is true that some astronomers expected to find a thick dust layer, but this was by no means universally supported in the astronomical community. The Russians too were naturally interested at this time because of their involvement in the “space race”, but they also had not reached a consensus on the lunar dust question. Sharonov,131 for example, discussed Gold’s theory and arguments for and against a thick dust layer, admitting that “this theory has become the object of animated discussion.” Nevertheless, he noted that the “majority of selenologists” favoured the plains of the lunar “seas” (maria) being layers of solidified lavas with minimal dust cover.
The lunar dust question was also on the agenda of the December 1960 Symposium number 14 of the International Astronomical Union held at the Pulkovo Observatory near Leningrad. Green summed up the arguments as follows:
“Polarization studies by Wright verified that the surface of the lunar maria is covered with dust. However, various estimates of the depth of this dust layer have been proposed. In a model based on the radioastronomy techniques of Dicke and Beringer and others, a thin dust layer is assumed, Whipple assumes the covering to be less than a few meters’ thick.
On the other hand, Gold, Gilvarry, and Wesselink favor a very thick dust layer. … Because no polar homogenization of lunar surface details can be demonstrated, however, the concept of a thin dust layer appears more reasonable. … Thin dust layers, thickening in topographic basins near post-mare craters, are predicted for mare areas.”132
In a 1961 monograph on the lunar surface, Fielder discussed the dust question in some detail, citing many of those who had been involved in the controversy. Having discussed the lunar mountains where he said “there may be frequent pockets of dust trapped in declivities”, he concluded that the mean dust cover over the mountains would only be a millimetre or so.133 But then he went on to say,
“No measurements made so far refer purely to marebase materials. Thus, no estimates of the composition of maria have direct experimental backing. This is unfortunate, because the interesting question ‘How deep is the dust in the lunar seas?’ remains unanswered.”
In 1964 a collection of research papers was published in a monograph entitled The Lunar Surface Layer, and the consensus therein amongst the contributing authors was that there was not a thick dust layer on the moon’s surface. For example, in the introduction, Kopal stated that
“this layer of loose dust must extend down to a depth of at least several centimeters, and probably a foot or so; but how much deeper it may be in certain places remains largely conjectural.”134
In a paper on “Dust Bombardment on the Lunar Surface”, McCracken and Dubin undertook a comprehensive review of the subject, including the work of Öpik and Whipple, plus many others who had since been investigating the meteoritic dust influx to the earth and moon, but concluded that
“The available data on the fluxes of interplanetary dust particles with masses less than 104gm show that the material accreted by the moon during the past 4.5 billion years amounts to approximately 1 gm/cm2 if the flux has remained fairly constant.”135
(Note that this statement is based on the uniformitarian age constraints for the moon.) Thus they went on to say that
“The lunar surface layer thus formed would, therefore, consist of a mixture of lunar material and interplanetary material (primarily of cometary origin) from 10cm to 1m thick. The low value for the accretion rate for the small particles is not adequate to produce large-scale dust erosion or to form deep layers of dust on the moon. …”.136
In another paper, Salisbury and Smalley state in their abstract:
“It is concluded that the lunar surface is covered with a layer of rubble of highly variable thickness and block size. The rubble in turn is mantled with a layer of highly porous dust which is thin over topographic highs, but thick in depressions. The dust has a complex surface and significant, but not strong, coherence.”137
In their conclusions they made a number of predictions.
“Thus, the relief of the coarse rubble layer expected in the highlands should be largely obliterated by a mantle of fine dust, no more than a few centimeters thick over near-level areas, but meters thick in steep-walled depressions. …The lunar dust layer should provide no significant difficulty for the design of vehicles and space suits. …”138
Expressing the opposing view was Hapke, who stated that
“recent analyses of the thermal component of the lunar radiation indicate that large areas of the moon may be covered to depths of many meters by a substance which is ten times less dense than rock. …Such deep layers of dust would be in accord with the suggestion of Gold.”139
He went on:
“Thus, if the radio-thermal analyses are correct, the possibility of large areas of the lunar surface being covered with thick deposits of dust must be given serious consideration.”140
However, the following year Hapke reported on research that had been sponsored by NASA, at a symposium on the nature of the lunar surface, and appeared to be more cautious on the dust question. In the proceedings he wrote:
“I believe that the optical evidence gives very strong indications that the lunar surface is covered with a layer of fine dust of unknown thickness.”141
There is no question that NASA was concerned about the presence of dust on the moon’s surface and its thickness. That is why they sponsored intensive research efforts in the 1960s on the questions of the lunar surface and the rate of meteoritic dust influx to the earth and the moon. In order to answer the latter question, NASA had begun sending up rockets and satellites to collect dust particles and to measure their flux in near-earth space. Results were reported at symposia, such as that which was held in August 1965 at Cambridge, Massachusetts, jointly sponsored by NASA and the Smithsonian Institution, the proceedings of which were published in 1967.142
A number of creationist authors have referred to this proceedings volume in support of the standard creationist argument that NASA scientists had found a lot of dust in space which confirmed the earlier suggestions of a high dust influx rate to the moon and thus a thick lunar surface layer of dust that would be a danger to any landing spacecraft. Slusher, for example, reported that he had been involved in an intensive review of NASA data on the matter and found
“that radar, rocket, and satellite data published in 1976 by NASA and the Smithsonian Institution show that a tremendous amount of cosmic dust is present in the space around the earth and moon.”143
(Note that the date of publication was incorrectly reported as 1976, when it in fact is the 1967 volume just referred to above.) Similarly, Calais references this same 1967 proceedings volume and says of it,
“NASA has published data collected by orbiting satellites which confirm a vast amount of cosmic dust reaching the vicinity of the earth-moon system.”144,145
Both these assertions, however, are far from correct, since the reports published in that proceedings volume contain results of measurements taken by detectors on board spacecraft such as Explorer XVI, Explorer XXIII, Pegasus I and Pegasus II, as well as references to the work on radio meteors by Elford and cumulative flux curves incorporating the work of people like Hawkins, Upton and Elsässer. These same satellite results and same investigators’ contributions to cumulative flux curves appear in the 1970s papers of investigators whose cumulative flux curves have been reproduced here as Figures 3, 5 and 7, all of which support the 10,000 - 20,000 tons per year and approximately 10,000 tons per year estimates for the meteoritic dust influx to the earth and moon respectively - not the “tremendous” and “vast” amounts of dust incorrectly inferred from this proceedings volume by Slusher and Calais.
The next stage in the NASA effort was to begin to directly investigate the lunar surface as a prelude to an actual manned landing. So seven Ranger spacecraft were sent up to transmit television pictures back to earth as they plummeted toward crash landings on selected flat regions near the lunar equator.146 The last three succeeded spectacularly, in 1964 and 1965, sending back thousands of detailed lunar scenes, thus increasing a thousand-fold our ability to see detail. After the first high-resolution pictures of the lunar surface were transmitted by television from the Ranger VII spacecraft in 1964, Shoemaker147 concluded that the entire lunar surface was blanketed by a layer of pulverised ejecta caused by repeated impacts and that this ejecta would range from boulder-sized rocks to finely-ground dust. After the remaining Ranger crash-landings, the Ranger investigators were agreed that a debris layer existed, although interpretations varied from virtually bare rock with only a few centimetres of debris (Kuiper, Strom and Le Poole) through to estimates of a layer from a few to tens of metres deep (Shoemaker).148 However, it can’t be implied, as some have done,149 that Shoemaker was referring to a dust layer so thick and unstable that it would swallow up a landing spacecraft. After all, the consolidation of dust and boulders sufficient to support a load has nothing to do with a layer’s thickness. In any case, Shoemaker was describing a surface layer composed of debris from meteorite impacts, the dust produced being from lunar rocks and not from falling meteoritic dust.
But still the NASA planners wanted to dispel any lingering doubts before committing astronauts to a manned spacecraft landing on the lunar surface, so the soft-landing Surveyor series of spacecraft were designed and built. However, the Russians just beat the Americans when they achieved the first lunar soft-landing with their Luna 9 spacecraft. Nevertheless, the first American Surveyor spacecraft successfully achieved a soft-landing in mid-1966 and returned over 11,000 splendid photographs, which showed the moon’s surface in much greater detail than ever before.150 Between then and January 1968 four other Surveyor spacecraft were successfully landed on the lunar surface and the pictures obtained were quite remarkable in their detail and high resolution, the last in the series (Surveyor 7) returning 21,000 photographs as well as a vast amount of scientific data. But more importantly,
“as each spindly, spraddle-legged craft dropped gingerly to the surface, its speed largely negated by retrorockets, its three footpads sank no more than an inch or two into the soft lunar soil. The bearing strength of the surface measured as much as five to ten pounds per square inch, ample for either astronaut or landing spacecraft.”151
Two of the Surveyors carried a soil mechanics surface sampler which was used to test the soil and any rock fragments within reach. All these tests and observations gave a consistent picture of the lunar soil. As Pasachoff noted:
“It was only the soft landing of the Soviet Luna and American Surveyor spacecraft on the lunar surface in 1966 and the photographs they sent back that settled the argument over the strength of the lunar surface; the Surveyor perched on the surface without sinking in more than a few centimeters.”152
Moore concurred, with the statement that
“up to 1966 the theory of deep dust-drifts was still taken seriously in the United States and there was considerable relief when the soft-landing of Luna 9 showed it to be wrong.”153
Referring to Gold’s deep-dust theory of 1955, Moore went on to say that although this theory had gained a considerable degree of respectability, with the successful soft-landing of Luna 9 in 1966 “it was finally discarded.”154 So it was in mid-1966, when Surveyor 1 landed on the moon three years before Apollo 11, that the long debate over the lunar surface dust layer was finally settled, and NASA officials then knew exactly how much dust there was on the surface and that it was capable of supporting spacecraft and men.
Since this is the case, creationists cannot say or imply, as some have,155-160 that most astronomers and scientists expected a deep dust layer. Some of course did, but it is unfair if creationists only selectively refer to those few scientists who predicted a deep dust layer and ignore the majority of scientists who on equally scientific grounds had predicted only a thin dust layer. The fact that astronomy textbooks and monographs acknowledge that there was a theory about deep dust on the moon,161,162 as they should if they intend to reflect the history of the development of thought in lunar science, cannot be used to bolster a lop-sided presentation of the debate amongst scientists at the time over the dust question, particularly as these same textbooks and monographs also indicate, as has already been quoted, that the dust question was settled by the Luna and Surveyor soft-landings in 1966. Nor should creationists refer to papers like that of Whipple,163 who wrote of a “dust cloud” around the earth, as if that were representative of the views at the time of all astronomers. Whipple’s views were easily dismissed by his colleagues because of subsequent evidence. Indeed, Whipple did not continue promoting his claim in subsequent papers, a clear indication that he had either withdrawn it or been silenced by the overwhelming response of the scientific community with evidence against it, or both.
Two further matters also need to be dealt with. First, there is the assertion that NASA built the Apollo lunar lander with large footpads because they were unsure about the dust and the safety of their spacecraft. Such a claim is inappropriate given the success of the Surveyor soft-landings, the Apollo lunar lander having footpads which were proportionally similar to the relative sizes of the respective spacecraft. After all, it stands to reason that since the design of the Surveyor spacecraft worked so well and survived landing on the lunar surface, the same basic design should be followed in the Apollo lunar lander.
As for what Armstrong and Aldrin found on the lunar surface, all are agreed that they found a thin dust layer. The transcript of Armstrong’s words as he stepped onto the moon is instructive:
“I am at the foot of the ladder. The LM [lunar module] footpads are only depressed in the surface about one or two inches, although the surface appears to be very, very fine grained, as you get close to it. It is almost like a powder. Now and then it is very fine. I am going to step off the LM now. That is one small step for man, one giant leap for mankind.”164
Moments later while taking his first steps on the lunar surface, he noted:
“The surface is fine and powdery. I can - I can pick it up loosely with my toe. It does adhere in fine layers like powdered charcoal to the sole and sides of my boots. I only go in a small fraction of an inch, maybe an eighth of an inch, but I can see the footprints of my boots and the treads in the fine sandy particles.”
And a little later, while picking up samples of rocks and fine material, he said:
“This is very interesting. It is a very soft surface, but here and there where I plug with the contingency sample collector, I run into a very hard surface, but it appears to be very cohesive material of the same sort. I will try to get a rock in here. Here’s a couple.”165
So firm was the ground that Armstrong and Aldrin had great difficulty planting the American flag into the rocky and virtually dust-free lunar surface.
The fact that no further comments were made about the lunar dust by NASA or other scientists has been taken by some166-168 to represent some conspiracy of silence, hoping that some supposed unexplained problem will go away. There is a perfectly good reason why there was silence - three years earlier the dust issue had been settled and Armstrong and Aldrin only confirmed what scientists already knew about the thin dust layer on the moon. So because it wasn’t a problem just before the Apollo 11 landing, there was no need for any talk about it to continue after the successful exploration of the lunar surface. Armstrong himself may have been a little concerned about the consistency and strength of the lunar surface as he was about to step onto it, as he appears to have admitted in subsequent interviews,169 but then he was the one on the spot and about to do it, so why wouldn’t he be concerned about the dust, along with lots of other related issues?
Finally, there is the testimony of Dr William Overn.170,171 Because he was working at the time for the Univac Division of Sperry Rand on the television sub-system for the Mariner IV spacecraft he sometimes had exchanges with the men at the Jet Propulsion Laboratory (JPL) who were working on the Apollo program. Evidently those he spoke to were assigned to the Ranger spacecraft missions which, as we have seen, were designed to find out what the lunar surface really was like; in other words, to investigate among other things whether there was a thin or thick dust layer on the lunar surface. In Bill’s own words:
“I simply told them that they should expect to find less than 10,000 years’ worth of dust when they got there. This was based on my creationist belief that the moon is young. The situation got so tense it was suggested I bet them a large amount of money about the dust. … However, when the Surveyor spacecraft later landed on the moon and discovered there was virtually no dust, that wasn’t good enough for these people to pay off their bet. They said the first landing might have been a fluke in a low dust area! So we waited until … astronauts actually landed on the moon. …”172
Neither the validity of this story nor Overn’s integrity is in question. However, it should be noted that the bet Overn made with the JPL scientists was entered into at a time when there was still much speculation about the lunar surface, the Ranger spacecraft just having been crash-landed on the moon and the Surveyor soft-landings yet to settle the dust issue. Furthermore, since these scientists involved with Overn were still apparently hesitant after the Surveyor missions, it suggests that they may not have been well acquainted with NASA’s other efforts, particularly via satellite measurements, to resolve the dust question, and that they were not “rubbing shoulders with” those scientists who were at the forefront of these investigations which culminated in the Surveyor soft-landings settling the speculations over the dust. Had they been more informed, they would not have entered into the wager with Overn, nor for that matter would they have seemingly felt embarrassed by the small amount of dust found by Armstrong and Aldrin, and thus conceded defeat in the wager. The fact remains that the perceived problem of what astronauts might face on the lunar surface was settled by NASA in 1966 by the Surveyor soft-landings.
The final question to be resolved is, now that we know how much meteoritic dust falls to the moon’s surface each year, then what does our current knowledge of the lunar surface layer tell us about the moon’s age? For example, what period of time is represented by the actual layer of dust found on the moon? On the one hand creationists have been using the earlier large dust influx figures to support a young age of the moon, and on the other hand evolutionists are satisfied that the small amount of dust on the moon supports their billions-of-years moon age.
To begin with, what makes up the lunar surface and how thick is it? The surface layer of pulverised material on the moon is now, after on-site investigations by the Apollo astronauts, not called moon dust, but lunar regolith, and the fine materials in it are sometimes referred to as the lunar soil. The regolith is usually several metres thick and extends as a continuous layer of debris draped over the entire lunar bedrock surface. The average thickness of the regolith on the maria is 4-5m, while the highlands regolith is about twice as thick, averaging about 10m.173 The seismic properties of the regolith appear to be uniform on the highlands and maria alike, but the seismic signals indicate that the regolith consists of discrete layers, rather than being simply “compacted dust”. The top surface is very loose due to stirring by micrometeorites, but the lower depths below about 20cm are strongly compacted, probably due to shaking during impacts.
The complex layered nature of the regolith has been studied in drill-core samples brought back by the Apollo missions. These have clearly revealed that the regolith is not a homogeneous pile of rubble. Rather, it is a layered succession of ejecta blankets.174 An apparent paradox is that the regolith is both well mixed on a small scale and also displays a layered structure. The Apollo 15 deep core tube, for example, was 2.42 metres long, but contained 42 major textural units from a few millimetres to 13cm in thickness. It has been found that there is usually no correlation between layers in adjacent core tubes, but the individual layers are well mixed. This paradox has been resolved by recognising that the regolith is continuously “gardened” by large and small meteorites and micrometeorites. Each impact inverts much of the microstratigraphy and produces layers of ejecta, some new and some remnants of older layers. The new surface layers are stirred by micrometeorites, but deeper stirring is rarer. The result is that a complex layered regolith is built up, but is in a continual state of flux, particles now at the surface potentially being buried deeply by future impacts. In this way, the regolith is turned over, like a heavily bombarded battlefield. However, it appears to only be the upper 0.5-1 mm of the lunar surface that is subjected to intense churning and mixing by the meteoritic influx at the present time. Nevertheless, as a whole, the regolith is a primary mixing layer of lunar materials from all points on the moon with the incoming meteoritic influx, both meteorites proper and dust.
Figure 9. Processes of erosion on the lunar surface today appear to be extremely slow compared with the processes on the earth. Bombardment by micrometeorites is believed to be the main cause. A large meteorite strikes the surface very rarely, excavating bedrock and ejecting it over thousands of square kilometres, sometimes as long rays of material radiating from the resulting crater. Much of the meteorite itself is vaporized on impact, and larger fragments of the debris produce secondary craters. Such an event at a mare site pulverizes and churns the rubble and dust that form the regolith. Accompanying base surges of hot clouds of dust, gas and shock waves might compact the dust into breccias. Cosmic rays continually bombard the surface. During the lunar day ions from the solar wind and unshielded solar radiation impinge on the surface. (Adapted from Eglinton et al.176)
So apart from the influx of the meteoritic dust, what other processes are active on the moon’s surface, particularly as there is no atmosphere or water on the moon to weather and erode rocks in the same way as they do on earth? According to Ashworth and McDonnell,
“Three major processes continuously affecting the surface of the moon are meteor impact, solar wind sputtering, and thermal erosion.”175
The relative contributions of these processes towards the erosion of the lunar surface depend upon various factors, such as the dimensions and composition of impacting bodies and the rate of meteoritic impacts and dust influx. These processes of erosion on the lunar surface are of course extremely slow compared with erosion processes on the earth. Figure 9, after Eglinton et al.,176 attempts to illustrate these lunar surface erosion processes.
Of these erosion processes the most important is obviously impact erosion. Since there is no atmosphere on the moon, the incoming meteoritic dust does not just gently drift down to the lunar surface, but instead strikes at an average velocity that has been estimated to be between 13 and 18 km/sec,177 or more recently as 20 km/sec,178 with a maximum reported velocity of 100 km/sec.179 Depending not only on the velocity but also on the mass of the impacting dust particles, more dust is produced as debris.
A number of attempts have been made to quantify the amount of dust-caused erosion of bare lunar rock on the lunar surface. Hörz et al.180 suggested a rate of 0.2-0.4 mm/10^6 yr (or 20-40 x 10^-9 cm/yr) after examination of micrometeorite craters on the surfaces of lunar rock samples brought back by the Apollo astronauts. McDonnell and Ashworth181 discussed the range of erosion rates over the range of particle diameters and the surface area exposed. They thus suggested a rate of 1-3 x 10^-7 cm/yr (or 100-300 x 10^-9 cm/yr), basing this estimate on Apollo moon rocks also, plus studies of the Surveyor 3 camera. They later revised this estimate, concluding that on the scale of tens of metres impact erosion accounts for the removal of some 10^-7 cm/yr (or 100 x 10^-9 cm/yr) of lunar material.182 However, in another paper, Gault et al.183 tabulated calculated abrasion rates for rocks exposed on the lunar surface compared with observed erosion rates as determined from solar-flare particle tracks. Discounting the early satellite data and just averaging the values calculated from the best, more recent satellite data and from lunar rocks, gave an erosion rate estimate of 0.28 cm/10^6 yr (or 280 x 10^-9 cm/yr), while the average of the observed erosion rates they found from the literature was 0.03 cm/10^6 yr (or 30 x 10^-9 cm/yr). However, they naturally favoured their own “best” estimate from the satellite data of both the flux and the consequent abrasion rate, the latter being 0.1 cm/10^6 yr (or 100 x 10^-9 cm/yr), a figure identical with that of McDonnell and Ashworth. Gault et al. noted that this was higher, by a factor approaching an order of magnitude, than the “consensus” of the observed values, a discrepancy which mirrors the difference between the meteoritic dust influx estimates derived from the lunar rocks compared with the satellite data.
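Because the erosion rates just quoted are reported in several different units, it may help to see the conversion spelled out. The short Python sketch below (the function name and the sample values are purely illustrative) simply re-expresses rates given per million years in the 10^-9 cm/yr form used in the text.

    # Re-express erosion rates quoted in cm per million years as multiples of 10^-9 cm/yr.
    def per_myr_in_units_of_1e9_cm_per_yr(rate_cm_per_myr):
        return rate_cm_per_myr / 1e6 / 1e-9

    print(per_myr_in_units_of_1e9_cm_per_yr(0.02))   # 0.2 mm/10^6 yr  -> about 20
    print(per_myr_in_units_of_1e9_cm_per_yr(0.28))   # 0.28 cm/10^6 yr -> about 280
    print(per_myr_in_units_of_1e9_cm_per_yr(0.1))    # 0.1 cm/10^6 yr  -> about 100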
These estimates obviously vary from one to another, but 30-100 x 10^-9 cm/yr would seem to represent a “middle of the range” figure. However, this impact erosion rate only applies to bare, exposed rock. As McCracken and Dubin have stated, once a surface dust layer is built up initially from the dust influx and impact erosion, this initial surface dust layer would protect the underlying bedrock surface against continued erosion by dust particle bombardment.184 If continued impact erosion is going to add to the dust and rock fragments in the surface layer and regolith, then what is needed is some mechanism to continually transport dust away from the rock surfaces as it is produced, so as to keep exposing bare rock again for continued impact erosion. Without some active transporting process, exposed rock surfaces on peaks and ridges would be worn away to give a somewhat rounded moonscape (which is what the Apollo astronauts found), and the dust would thus collect in thicker accumulations at the bottoms of slopes. This is illustrated in Figure 9.
So bombardment of the lunar surface by micrometeorites is believed to be the main cause of surface erosion. At the current rate of removal, however, it would take a million years to remove an approximately 1 mm thick skin of rock from the whole lunar surface and convert it to dust. Occasionally a large meteorite strikes the surface (see Figure 9 again), excavating through the dust down into the bedrock and ejecting debris over thousands of square kilometres, sometimes as long rays of material radiating from the resulting crater. Much of the meteorite itself is vaporised on impact, and larger fragments of the debris create secondary craters. Such an event at a mare site pulverises and churns the rubble and dust that forms the regolith.
The solar wind is the next major contributor to lunar surface erosion. The solar wind consists primarily of protons, electrons, and some alpha particles, that are continuously being ejected by the sun. Once again, since the moon has virtually no atmosphere or magnetic field, these particles of the solar wind strike the lunar surface unimpeded at velocities averaging 600 km/sec, knocking individual atoms from rock and dust mineral lattices. Since the major components of the solar wind are H+ (hydrogen) ions, and some He (helium) and other elements, the damage upon impact to the crystalline structure of the rock silicates creates defects and voids that accommodate the gases and other elements which are simultaneously implanted in the rock surface. But individual atoms are also knocked out of the rock surface, and this is called sputtering or sputter erosion. Since the particles in the solar wind strike the lunar surface with such high velocities,
“one can safely conclude that most of the sputtered atoms have ejection velocities higher than the escape velocity of the moon.”185
There would thus appear to be a net erosional mass loss from the moon to space via this sputter erosion.
As for the rate of this erosional loss, Wehner186 suggested a value for the sputter rate of the order of 0.4 angstrom (Å)/yr. However, with the actual measurement of the density of the solar wind particles on the surface of the moon, and lunar rock samples available for analysis, the intensity of the solar wind used in sputter rate calculations was downgraded, and consequently the estimates of the sputter rate itself were revised downwards by an order of magnitude. McDonnell and Ashworth187 estimated an average sputter rate of lunar rocks of about 0.02 Å/yr, which they later revised to 0.02-0.04 Å/yr.188 Further experimental work refined their estimate to 0.043 Å/yr,189 which was reported in Nature by Hughes.190 This figure of 0.043 Å/yr continued to be used and confirmed in subsequent experimental work,191 although Zook192 suggested that the rate may be higher, even as high as 0.08 Å/yr.193 Even so, if this sputter erosion rate continued at this pace in the past then it equates to less than one centimetre of lunar surface lowering in one billion years. This not only applies to solid rock, but to the dust layer itself, which would in fact decrease in thickness in that time, in opposition to the increase in thickness caused by meteoritic dust influx. Thus sputter erosion doesn’t help by adding dust to the lunar surface, and in any case it is such a slow process that the overall effect is minimal.
Yet another potential form of erosion process on the lunar surface is thermal erosion, that is, the breakdown of the lunar surface around impact/crater areas due to the marked temperature changes that result from the lunar diurnal cycle. Ashworth and McDonnell194 carried out tests on lunar rocks, submitting them to cycles of changing temperature, but found it “impossible to detect any surface changes”. They therefore suggested that thermal erosion is probably “not a major force.” Similarly, McDonnell and Flavill195 conducted further experiments and found that their samples showed no sign of “degradation or enhancement” due to the temperature cycle that they had been subjected to. They reported that
“the conditions were thermally equivalent to the lunar day-night cycle and we must conclude that on this scale thermal cycling is a very weak erosion mechanism.”
The only other possible erosion process that has ever been mentioned in the literature was that proposed by Lyttleton196 and Gold.197 They suggested that high-energy ultraviolet and x-rays from the sun would slowly pulverise lunar rock to dust, and over millions of years this would create an enormous thickness of dust on the lunar surface. This was proposed in the 1950s and debated at the time, but since the direct investigations of the moon from the mid-1960s onwards, no further mention of this potential process has appeared in the technical literature, either for the idea or against it. One can only assume that either the idea has been ignored or forgotten, or is simply ineffective in producing any significant erosion, contrary to the suggestions of the original proposers. The latter is probably true, since just as with impact erosion the effect of this radiation erosion would be subject to the critical necessity of a mechanism to clean rock surfaces of the dust produced by the radiation erosion. In any case, even a thin dust layer will more than likely simply absorb the incoming rays, while the fact that there are still exposed rock surfaces on the moon clearly suggests that Lyttleton and Gold’s radiation erosion process has not been effective over the presumed millions of years, else all rock surfaces should long since have been pulverised to dust. Alternatively, of course, the fact that there are still exposed rock surfaces on the moon could instead mean that if this radiation erosion process does occur then the moon is quite young.
So how much dust is there on the lunar surface? Because of their apparent negligible or non-existent contribution, it may be safe to ignore thermal, sputter and radiation erosion. This leaves the meteoritic dust influx itself and the dust it generates when it hits bare rock on the lunar surface (impact erosion). However, our primary objective is to determine whether the amount of meteoritic dust in the lunar regolith and surface dust layer, when compared to the current meteoritic dust influx rate, is an accurate indication of the age of the moon itself, and by implication the earth and the solar system also.
Now we concluded earlier that the consensus from all the available evidence, and estimation techniques employed by different scientists, is that the meteoritic dust influx to the lunar surface is about 10,000 tons per year or 2 x 10^-9 g cm^-2 yr^-1. Estimates of the density of micrometeorites vary widely, but an average value of 1 g/cm^3 is commonly used. Thus at this apparent rate of dust influx it would take about a billion years for a dust layer a mere 2 cm thick to accumulate over the lunar surface. Now the Apollo astronauts apparently reported a surface dust layer of between less than 1/8 inch (3 mm) and 3 inches (7.6 cm). Thus, if this surface dust layer were composed only of meteoritic dust, then at the current rate of dust influx this surface dust layer would have accumulated over a period of between 150 million years (3 mm) and 3.8 billion years (7.6 cm). Obviously, this line of reasoning cannot be used as an argument for a young age for the moon and therefore the solar system.
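As a quick arithmetic check on the figures in the preceding paragraph, the Python sketch below converts the quoted areal mass flux (about 2 x 10^-9 g cm^-2 yr^-1, the stated equivalent of roughly 10,000 tons per year) into a thickness accumulation rate, assuming the dust simply piles up at a density of 1 g/cm^3 with no compaction or mixing.

    # Dust-layer thickness implied by the quoted influx figure.
    flux_g_per_cm2_yr = 2e-9        # areal mass flux quoted in the text
    density_g_per_cm3 = 1.0         # assumed average density of the dust

    thickness_cm_per_yr = flux_g_per_cm2_yr / density_g_per_cm3

    print(thickness_cm_per_yr * 1e9)         # about 2 cm accumulated per billion years
    print(0.3 / thickness_cm_per_yr / 1e6)   # about 150 million years for a 3 mm (0.3 cm) layer
    print(7.6 / thickness_cm_per_yr / 1e9)   # about 3.8 billion years for a 7.6 cm layer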
However, as we have already seen, below the thin surface dust layer is the lunar regolith, which is up to 5 metres thick across the lunar maria and averages 10 metres thick in the lunar highlands. Evidently, the thin surface dust layer is very loose due to stirring by impacting meteoritic dust (micrometeorites), but the regolith beneath which consists of rock rubble of all sizes down to fines (that are referred to as lunar soil) is strongly compacted. Nevertheless, the regolith appears to be continuously “gardened” by large and small meteorites and micrometeorites, particles now at the surface potentially being buried deeply by future impacts. This of course means then that as the regolith is turned over meteoritic dust particles in the thin surface layer will after some time end up being mixed into the lunar soil in the regolith below. Therefore, also, it cannot be assumed that the thin loose surface layer is entirely composed of meteoritic dust, since lunar soil is also brought up into this loose surface layer by impacts.
However, attempts have been made to estimate the proportion of meteoritic material mixed into the regolith. Taylor198 reported that the meteoritic compositions recognised in the maria soils turn out to be surprisingly uniform at about 1.5% and that the abundance patterns are close to those for primitive unfractionated Type I carbonaceous chondrites. As described earlier, this meteoritic component was identified by analysing for trace elements in the broken-down rocks and soils in the regolith and then assuming that any trace element differences represented the meteoritic material added to the soils. Taylor also adds that the compositions of other meteorites, the ordinary chondrites, the iron meteorites and the stony-irons, do not appear to be present in the lunar regolith, which may have some significance as to the origin of this meteoritic material, most of which is attributed to the influx of micrometeorites. It is unknown what the large crater-forming meteorites contribute to the regolith, but Taylor suggests possibly as much as 10% of the total regolith. Additionally, a further source of exotic elements is the solar wind, which is estimated to contribute between 3% and 4% to the soil. This means that the total contribution to the regolith from extra-lunar sources is around 15%. Thus in a five metre thick regolith over the maria, the thickness of the meteoritic component would be close to 60cm, which at the current estimated meteoritic influx rate would have taken almost 30 billion years to accumulate, a timespan six times the claimed evolutionary age of the moon.
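The almost 30-billion-year figure in the preceding paragraph can be reproduced with the rough arithmetic below. The percentages and the five metre regolith thickness are those quoted in the text; note that the figure of close to 60 cm is matched here by counting only the micrometeoritic and large-crater components (about 12% in total) and leaving out the solar-wind gases - an assumption made purely to reproduce the stated number.

    # Meteoritic component of the maria regolith and its accumulation time.
    regolith_cm = 500.0                   # roughly 5 m of maria regolith
    meteoritic_fraction = 0.015 + 0.10    # micrometeoritic dust plus large-crater debris

    meteoritic_thickness_cm = regolith_cm * meteoritic_fraction
    accumulation_cm_per_gyr = 2.0         # from the present influx figure above

    print(meteoritic_thickness_cm)                             # about 58 cm, i.e. close to 60 cm
    print(meteoritic_thickness_cm / accumulation_cm_per_gyr)   # about 29, i.e. almost 30 billion years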
The lunar surface is heavily cratered, the largest crater having a diameter of 295kms. The highland areas are much more heavily cratered than the maria, which suggested to early investigators that the lunar highland areas might represent the oldest exposed rocks on the lunar surface. This has been confirmed by radiometric dating of rock samples brought back by the Apollo astronauts, so that a detailed lunar stratigraphy and evolutionary geochronological framework has been constructed. This has led to the conclusion that early in its history the moon suffered intense bombardment from scores of meteorites, so that all highland areas presumed to be older than 3.9 billion years have been found to be saturated with craters 50-100 km in diameter, and beneath the 10 metre-thick regolith is a zone of breccia and fractured bedrock estimated in places to be more than 1 km thick.199
Figure 10. Cratering history of the moon (adapted from Taylor200). An aeon represents a billion years on the evolutionists’ time scale, while the vertical bar represents the error margin in the estimation of the cratering rate at each data point on the curve.
Following suitable calibration, a relative crater chronology has been established, which then allows for the cratering rate through lunar history to be estimated and then plotted, as it is in Figure 10.200 There thus appears to be a general correlation between crater densities across the lunar surface and radioactive “age” dates. However, the crater densities at the various sites cannot be fitted to a straightforward exponential decay curve of meteorites or asteroid populations.201 Instead, at least two separate groups of objects seem to be required. The first is believed to be approximated by the present-day meteoritic flux, while the second is believed to be that responsible for the intense early bombardment claimed to be about four billion years ago. This intense early bombardment recorded by the crater-saturated surface of the lunar highland areas could thus explain the presence of the thicker regolith (up to 10 metres) in those areas.
It follows that this period of intense early bombardment resulted from a very high influx of meteorites and thus meteoritic dust, which should now be recognisable in the regolith. Indeed, Taylor202 lists three types of meteoritic debris in the highlands regolith - the micrometeoritic component, the debris from the large-crater-producing bodies, and the material added during the intense early bombardment. However, the latter has proven difficult to quantify. Again, the use of trace element ratios has enabled six classes of ancient meteoritic components to be identified, but these do not correspond to any of the currently known meteorite classes, both iron and chondritic. It would appear that this material represents the debris from the large projectiles responsible for the saturation cratering in the lunar highlands during the intense bombardment early in the moon’s history. It is this early intense bombardment with its associated higher influx rate of meteoritic material that would account for not only the thicker regolith in the lunar highlands, but also the 12% of meteoritic component in the thinner regolith of the maria that we have calculated (above) would take up to 30 billion years to accumulate at the current meteoritic influx rate. Even though the maria are believed to be younger than the lunar highlands and haven’t suffered the same saturation cratering, the cratering rate curve of Figure 10 suggests that the meteoritic influx rate soon after formation of the maria was still almost 10 times the current influx rate, so that much of the meteoritic component in the regolith could thus have more rapidly accumulated in the early years after the maria’s formation. This then removes the apparent accumulation timespan anomaly for the evolutionists’ timescale, and suggests that the meteoritic component in the maria regolith is still consistent with its presumed 3 billion year age if uniformitarian assumptions are used. This of course is still far from satisfactory for those young earth creationists who believed that uniformitarian assumptions applied to moon dust could be used to deny the evolutionists’ vast age for the moon.
Given that as much as 10% of the maria regolith may have been contributed by the large crater-forming meteorites,203 impact erosion by these large crater-producing meteorites may well have had a significant part in the development of the regolith, including the generation of dust, particularly if the meteorites strike bare lunar rock. Furthermore, any incoming meteorite, or micrometeorite for that matter, creates a crater much bigger than itself,204 and since most impacts are at an oblique angle the resulting secondary cratering may in fact be more important205 in generating even more dust. However, to do so the impacting meteorite or micrometeorite must strike bare exposed rock on the lunar surface. Therefore, if bare rock is to continue to be available at the lunar surface, then there must be some mechanism to move the dust off the rock as quickly as it is generated, coupled with some transport mechanism to carry it and accumulate it in lower areas, such as the maria.
Various suggestions have been made apart from the obvious effect of steep gradients, which in any case would only produce local accumulation. Gold, for example, listed five possibilities,206 but all were highly speculative and remain unverified. More recently, McDonnell207 has proposed that electrostatic charging on dust particle surfaces may cause those particles to levitate across the lunar surface up to 10 or more metres. As they lose their charge they float back to the surface, where they are more likely to settle in a lower area. McDonnell gives no estimate as to how much dust might be moved by this process, and it remains somewhat tentative. In any case, if such transport mechanisms were in operation on the lunar surface, then we would expect the regolith to be thicker over the maria because of their lower elevation. However, the fact is that the regolith is thicker in the highland areas where the presumed early intense bombardment occurred, the impact-generated dust just accumulating locally and not being transported any significant distance.
Having considered the available data, it is inescapably clear that the amount of meteoritic dust on the lunar surface and in the regolith is not at all inconsistent with the present meteoritic dust influx rate to the lunar surface operating over the multi-billion year time framework proposed by evolutionists, but including a higher influx rate in the early history of the moon when intense bombardment occurred producing many of the craters on the lunar surface. Thus, for the purpose of “proving” a young moon, the meteoritic dust influx as it appears to be currently known is at least two orders of magnitude too low. On the other hand, the dust influx rate has, appropriately enough, not been used by evolutionists to somehow “prove” their multi-billion year timespan for lunar history. (They have recognised some of the problems and uncertainties and so have relied more on their radiometric dating of lunar rocks, coupled with wide-ranging geochemical analyses of rock and soil samples, all within the broad picture of the lunar stratigraphic succession.) The present rate of dust influx does not, of course, disprove a young moon.
Some creationists have tentatively recognised that the moon dust argument has lost its original apparent force. For example, Taylor (Paul)208 follows the usual line of argument employed by other creationists, stating that based on published estimates of the dust influx rate and the evolutionary timescale, many evolutionists expected the astronauts to find a very thick layer of loose dust on the moon, so when they only found a thin layer this implied a young moon. However, Taylor then admits that the case appears not to be as clear cut as some originally thought, particularly because evolutionists can now point to what appear to be more accurate measurements of a smaller dust influx rate compatible with their timescale. Indeed, he says that the evidence for disproving an old age using this particular process is weakened, but that furthermore, the case has been blunted by the discovery of what is said to be meteoritic dust within the regolith. However, like Calais,209,210 Taylor points to the NASA report211 that supposedly indicated a very large amount of cosmic dust in the vicinity of the earth and moon (a claim which cannot be substantiated by a careful reading of the papers published in that report, as we have already seen). He also takes up DeYoung’s comment212 that because all evolutionary theories about the origin of the moon and the solar system predict a much larger amount of incoming dust in the moon’s early years, then a very thick layer of dust would be expected, so it is still missing. Such an argument cannot be sustained by creationists because, as we have seen above, the amount of meteoritic dust that appears to be in the regolith seems to be compatible with the evolutionists’ view that there was a much higher influx rate of meteoritic dust early in the moon’s history at the same time as the so-called “early intense bombardment”.
Indeed, from Figure 10 it could be argued that since the cratering rate very early in the moon’s history was more than 300 times today’s cratering rate, then the meteoritic dust influx early in the moon’s history was likewise more than 300 times today’s influx rate. That would then amount to more than 3 million tons of dust per year, but even at that rate it would take a billion years to accumulate more than six metres thickness of meteoritic dust across the lunar surface, no doubt mixed in with a lesser amount of dust and rock debris generated by the large-crater-producing meteorite impacts. However, in that one billion years, Figure 10 shows that the rate of meteoritic dust influx is postulated to have rapidly declined, so that in fact a considerably lesser amount of meteoritic dust and impact debris would have accumulated in that supposed billion years. In other words, the dust in the regolith and the surface layer is still compatible with the evolutionists’ view that there was a higher influx rate early in the moon’s history, so creationists cannot use that to shore up this considerably blunted argument.
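The scaling argument in the preceding paragraph amounts to the simple calculation below; the only assumption is that the dust influx scales in direct proportion to the cratering rate, here taken as roughly 300 times the present value.

    # Early-bombardment scaling of the dust influx and the thickness it implies.
    present_influx_tons_per_yr = 10000
    present_accumulation_cm_per_gyr = 2.0   # from the current influx figure
    scale_factor = 300                      # early cratering rate relative to today

    print(present_influx_tons_per_yr * scale_factor)                # about 3,000,000 tons per year
    print(present_accumulation_cm_per_gyr * scale_factor / 100.0)   # about 6 metres per billion years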
Coupled with this, it is irrelevant for both Taylor and DeYoung to imply that, because evolutionists say the sun and the planets were formed from an immense cloud of dust which was thus obviously much thicker in the past, their theory would predict a very thick layer of dust. On the contrary, all that is relevant is the postulated dust influx after the moon’s formation, since it is only then that there is a lunar surface available to collect the dust, which we can now investigate along with that lunar surface. So unless there was a substantially greater dust influx after the moon formed than that postulated by the evolutionists (see Figure 10 and our calculations above), then this objection also cannot be used by creationists.
De Young also adds a second objection in order to counter the evolutionists’ case. He maintains that the revised value of a much smaller dust accumulation from space is open to question, and that scientists continue to make major adjustments in estimates of meteors and space dust that fall upon the earth and moon.213 If this is meant to imply that the current dust influx estimate is open to question amongst evolutionists, then it is simply not the case, because there is general agreement that the earlier estimates were gross overestimates. As we have seen, there is much support for the current figure, which is two orders of magnitude lower than many of the earlier estimates. There may be minor adjustments to the current estimate, but certainly not anything major.
While De Young hints at it, Taylor (Ian)214 is quite open in suggesting that a drastic revision of the estimated meteoritic dust influx rate to the moon occurred straight after the Apollo moon landings, when the astronauts’ observations supposedly debunked the earlier gross over-estimates, and that this was done quietly but methodically in some sort of deliberate way. This is simply not so. Taylor insinuates that the Committee on Space Research (COSPAR) was formed to work on drastically downgrading the meteoritic dust influx estimate, and that they did this only based on measurements from indirect techniques such as satellite-borne detectors, visual meteor counts and observations of zodiacal light, rather than dealing directly with the dust itself. That claim does not take into account that these different measurement techniques are all necessary to cover the full range of particle sizes involved, and that much of the data they employed in their work was collected in the 1960s before the Apollo moon landings. Furthermore, that same data had been used in the 1960s to produce dust influx estimates, which were then found to be in agreement with the minor dust layer found by the astronauts subsequently. In other words, the data had already convinced most scientists before the Apollo moon landings that very little dust would be found on the moon, so there is nothing “fishy” about COSPAR’s dust influx estimates just happening to yield the exact amount of dust actually found on the moon’s surface. Furthermore, the COSPAR scientists did not ignore the dust on the moon’s surface, but used lunar rock and soil samples in their work, for example, with the study of lunar microcraters that they regarded as representing a record of the historic meteoritic dust influx. Attempts were also made using trace element geochemistry to identify the quantity of meteoritic dust in the lunar surface layer and the regolith below.
A final suggestion from De Young is that perhaps there actually is a thick lunar dust layer present, but it has been welded into rock by meteorite impacts.215 This is similar and related to an earlier comment about efforts being made to re-evaluate dust accumulation rates and to find a mechanism for lunar dust compaction in order to explain the supposed absence of dust on the lunar surface that would be needed by the evolutionists’ timescale.216 For support, Mutch217 is referred to, but in the cited pages Mutch only talks about the thickness of the regolith and the debris from cratering, the details of which are similar to what has previously been discussed here. As for the view that the thick lunar dust is actually present but has been welded into rock by meteorite impacts, no reference is cited, nor can one be found. Taylor describes a “mega-regolith” in the highland areas218 which is a zone of brecciation, fracturing and rubble more than a kilometre thick that is presumed to have resulted from the intense early bombardment, quite the opposite to the suggestion of meteorite impacts welding dust into rock. Indeed, Mutch,219 Ashworth and McDonnell220 and Taylor221 all refer to turning over of the soil and rubble in the lunar regolith by meteorite and micrometeorite impacts, making the regolith a primary mixing layer of lunar materials that have not been welded into rock. Strong compaction has occurred in the regolith, but this is virtually irrelevant to the issue of the quantity of meteoritic dust on the lunar surface, since that has been estimated using trace element analyses.
Parks222 has likewise argued that the disintegration of meteorites impacting the lunar surface over the evolutionists’ timescale should have produced copious amounts of dust as they fragmented, which should, when added to calculations of the meteoritic dust influx over time, account for dust in the regolith in only a short period of time. However, it has already been pointed out that this debris component in the maria regolith only amounts to 10%, which quantity is also consistent with the evolutionists’ postulated cratering rate over their timescale. He then repeats the argument that there should have been a greater rate of dust influx in the past, given the evolutionary theories for the formation of the bodies in the solar system from dust accretion, but that argument is likewise negated by the evolutionists having postulated an intense early bombardment of the lunar surface with a cratering rate, and thus a dust influx rate, over two orders of magnitude higher than the present (as already discussed above). Finally, he infers that even if the dust influx rate is far less than investigators had originally supposed, it should have contributed much more than the 1.5%’s worth of the 1-2 inch thick layer of loose dust on the lunar surface. The reference cited for this percentage of meteoritic dust in the thin loose dust layer on the lunar surface is Ganapathy et al.223 However, when that paper is checked carefully to see where they obtained their samples from for their analytical work, we find that the four soil samples that were enriched in a number of trace elements of meteoritic origin came from depths of 13-38 cm below the surface, from where they were extracted by a core tube. In other words, they came from the regolith below the 1-2 inch thick layer of loose dust on the surface, and so Parks’ application of this analytical work is not even relevant to his claim. In any case, if one uses the current estimated meteoritic dust influx rate to calculate how much meteoritic dust should be within the lunar surface over the evolutionists’ timescale one finds the results to be consistent, as has already been shown above.
Parks may have been influenced by Brown, whose personal correspondence he cites. Brown, in his own publication,224 has stated that
“if the influx of meteoritic dust on the moon has been at just its present rate for the last 4.6 billion years, then the layer of dust should be over 2,000 feet thick.”
Furthermore, he indicates that he made these computations based on the data contained in Hughes225 and Taylor.226 This is rather baffling, since Taylor does not commit himself to a meteoritic dust influx rate, but merely refers to the work of others, while Hughes concentrates on lunar microcraters and only indirectly refers to the meteoritic dust influx rate. In any case, as we have already seen, at the currently estimated influx rate of approximately 10,000 tons per year a mere 2 cm thickness of meteoritic dust would accumulate on the lunar surface every billion years, so that in 4.6 billion years there would be a grand total of 9.2 cm thickness. One is left wondering where Brown’s figure of 2,000 feet (approximately 610 metres) actually came from. If he is taking into account Taylor’s reference to the intense early bombardment, then we have already seen that, even with a meteoritic dust influx rate of 300 times the present figure, we can still comfortably account for the quantity of meteoritic dust found in the lunar regolith and the loose surface layer over the evolutionists’ timescale. While defence of the creationist position is totally in order, baffling calculations are not. Creation science should always be good science; it is better served by thorough use of the technical literature and by facing up to the real data with sincerity, as our detractors have often been quick to point out.
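The comparison drawn in the preceding paragraph is easily verified; the sketch below uses the same 2 cm per billion years accumulation rate assumed throughout this section and converts the 2,000 feet figure to metres for comparison.

    # Accumulation over 4.6 billion years versus the 2,000-foot claim.
    accumulation_cm_per_gyr = 2.0
    age_gyr = 4.6

    print(accumulation_cm_per_gyr * age_gyr)   # about 9.2 cm of meteoritic dust
    print(2000 * 0.3048)                       # 2,000 feet is about 610 metres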
So are there any loopholes in the evolutionists’ case that the current apparent meteoritic dust influx to the lunar surface and the quantity of dust found in the thin lunar surface dust layer and the regolith below do not contradict their multi-billion year timescale for the moon’s history? Based on the evidence we currently have, the answer has to be that it doesn’t look like it. The uncertainties involved in the possible erosion process postulated by Lyttleton and Gold (that is, radiation erosion) still potentially leave that process as just one possible explanation for the amount of dust in a young moon model, but the dust should no longer be used as if it were a major problem for evolutionists. Both the lunar surface and the lunar meteoritic influx rate seem to be fairly well characterised, even though it could be argued that direct geological investigations of the lunar surface have only been undertaken briefly at 13 sites (six by astronauts and seven by unmanned spacecraft) scattered across a portion of only one side of the moon.
Furthermore, there are some unresolved questions regarding the techniques and measurements of the meteoritic dust influx rate. For example, the surface exposure times for the rocks on whose surfaces microcraters were measured and counted are dependent on uniformitarian age assumptions. If the exposure times were in fact much shorter, then the dust influx estimates based on the lunar microcraters would need to be drastically revised, perhaps upwards by several orders of magnitude. As it is, we have seen that there is a recognised discrepancy between the lunar microcrater data and the satellite-borne detector data, the former being an order of magnitude lower than the latter. Hughes227 explains this in terms of the meteoritic dust influx having supposedly increased by a factor of four in the last 100,000 years, whereas Gault et al.228 admit that if the ages are accepted at face value then there had to be an increase in the meteoritic dust influx rate by a factor of 10 in the past few tens of years! How this could happen we are not told, yet according to estimates of the past cratering rate there was in fact a higher influx of meteorites, and by inference meteoritic dust, in the past. This is of course contradictory to the claims based on lunar microcrater data. This seems to leave the satellite-borne detector measurements as apparently the more reliable set of data, but it could still be argued that the dust collection areas on the satellites are tiny, and the dust collection timespans far too short, to be representative of the quantity of dust in the space around the earth-moon system.
Should creationists then continue to use the moon dust as apparent evidence for a young moon, earth and solar system? Clearly, the answer is no. The weight of the evidence as it currently exists shows no inconsistency within the evolutionists’ case, so the burden of proof is squarely on creationists if they want to argue that based on the meteoritic dust the moon is young. Thus it is inexcusable for one creationist writer to recently repeat verbatim an article of his published five years earlier,229,230 maintaining that the meteoritic dust is proof that the moon is young in the face of the overwhelming evidence against his arguments. Perhaps any hope of resolving this issue in the creationists’ favour may have to wait for further direct geological investigations and direct measurements to be made by those manning a future lunar surface laboratory, from where scientists could actually collect and measure the dust influx, and investigate the characteristics of the dust in place and its interaction with the regolith and any lunar surface processes.
Over the last three decades numerous attempts have been made using a variety of methods to estimate the meteoritic dust influx to both the earth and the moon. On the earth, chemical methods give results in the range of 100,000-400,000 tons per year, whereas cumulative flux calculations based on satellite and radar data give results in the range 10,000-20,000 tons per year. Most authorities on the subject now favour the satellite data, although there is an outside possibility that the influx rate may reach 100,000 tons per year. On the moon, after assessment of the various techniques employed, on balance the evidence points to a meteoritic dust influx figure of around 10,000 tons per year.
Although some scientists had speculated prior to spacecraft landing on the moon that there would be a thick dust layer there, there were many scientists who disagreed and who predicted that the dust would be thin and firm enough for a manned landing. Then in 1966 the Russians with their Luna 9 spacecraft and the Americans with their five successful Surveyor spacecraft accomplished soft-landings on the lunar surface, the footpads of the latter sinking no more than an inch or two into the soft lunar soil and the photographs sent back settling the argument over the thickness of the dust and its strength. Consequently, before the Apollo astronauts landed on the moon in 1969 the moon dust issue had been settled, and their lunar exploration only confirmed the prediction of the majority, plus the meteoritic dust influx measurements that had been made by satellite-borne detector systems which had indicated only a minor amount.
Calculations show that the amount of meteoritic dust in the surface dust layer, and that which trace element analyses have shown to be in the regolith, is consistent with the current meteoritic dust influx rate operating over the evolutionists’ timescale. While there are some unresolved problems with the evolutionists’ case, the moon dust argument, using uniformitarian assumptions to argue against an old age for the moon and the solar system, should for the present not be used by creationists.
Research on this topic was undertaken spasmodically over a period of more than seven years by Dr Andrew Snelling. A number of people helped with the literature search and obtaining copies of papers, in particular, Tony Purcell and Paul Nethercott. Their help is acknowledged. Dave Rush undertook research independently on this topic while studying and working at the Institute for Creation Research, before we met and combined our efforts. We, of course, take responsibility for the conclusions, which unfortunately are not as encouraging or complimentary for us young earth creationists as we would have liked.
The term race or racial group usually refers to the concept of categorizing humans into populations or groups on the basis of various sets of characteristics. The most widely used human racial categories are based on visible traits (especially skin color, cranial or facial features and hair texture), and self-identification.
Conceptions of race, as well as specific ways of grouping races, vary by culture and over time, and are often controversial for scientific as well as social and political reasons. The controversy ultimately revolves around whether or not races are natural types or socially constructed, and the degree to which perceived differences in ability and achievement, categorized on the basis of race, are a product of inherited (i.e. genetic) traits or environmental, social and cultural factors.
Some argue that although race is a valid taxonomic concept in other species, it cannot be applied to humans. Many scientists have argued that race definitions are imprecise, arbitrary, derived from custom, have many exceptions, have many gradations, and that the numbers of races delineated vary according to the culture making the racial distinctions; thus they reject the notion that any definition of race pertaining to humans can have taxonomic rigour and validity. Today most scientists study human genotypic and phenotypic variation using concepts such as "population" and "clinal gradation". Many contend that while racial categorizations may be marked by phenotypic or genotypic traits, the idea of race itself, and actual divisions of persons into races or racial groups, are social constructs.
Given visually complex social relationships, humans presumably have always observed and speculated about the physical differences among individuals and groups. But different societies have attributed markedly different meanings to these distinctions. For example, the Ancient Egyptian sacred text called Book of Gates identifies four categories that are now conventionally labeled "Egyptians", "Asiatics", "Libyans", and "Nubians", but such distinctions tended to conflate differences defined by physical features, such as skin tone, with tribal and national identity. Classical civilizations from Rome to China tended to attach much more importance to familial or tribal affiliation than to physical appearance (Dikötter 1992; Goldenberg 2003). Ancient Greek and Roman authors also attempted to explain and categorize visible biological differences among peoples known to them. Such categories often also included fantastical human-like beings that were supposed to exist in far-away lands. Some Roman writers adhered to an environmental determinism in which climate could affect the appearance and character of groups (Isaac 2004). In many ancient civilizations, individuals with widely varying physical appearances became full members of a society by growing up within that society or by adopting that society's cultural norms (Snowden 1983; Lewis 1990).
Julian the Apostate was an early observer of the differences in humans, based upon ethnic, cultural, and geographic traits, but as the ideology of "race" had not yet been constructed, he believed that they were the result of "Providence":
Come, tell me why it is that the Celts and the Germans are fierce, while the Hellenes and Romans are, generally speaking, inclined to political life and humane, though at the same time unyielding and warlike? Why the Egyptians are more intelligent and more given to crafts, and the Syrians unwarlike and effeminate, but at the same time intelligent, hot-tempered, vain and quick to learn? For if there is anyone who does not discern a reason for these differences among the nations, but rather declaims that all this so befell spontaneously, how, I ask, can he still believe that the universe is administered by a providence? — Julian, the Apostate.
Medieval models of "race" mixed Classical ideas with the notion that humanity as a whole was descended from Shem, Ham and Japheth, the three sons of Noah, producing distinct Semitic (Asian), Hamitic (African), and Japhetic (European) peoples.
The first scientific attempts to classify humans by categories of race date from the 17th century, along with the development of European imperialism and colonization around the world. The first post-Classical published classification of humans into distinct races seems to be François Bernier's Nouvelle division de la terre par les différentes espèces ou races qui l'habitent ("New division of Earth by the different species or races which inhabit it"), published in 1684.
The scientists who developed these early racial classifications made three claims about race: first, that races are objective, naturally occurring divisions of humanity; second, that there is a strong relationship between biological races and other human phenomena (such as forms of activity and interpersonal relations and culture, and by extension the relative material success of cultures), thus biologizing the notion of "race", as Foucault demonstrated in his historical analysis; third, that race is therefore a valid scientific category that can be used to explain and predict individual and group behavior. Races were distinguished by skin color, facial type, cranial profile and size, and the texture and color of hair. Moreover, races were almost universally considered to reflect group differences in moral character and intelligence.
The eugenics movement of the late 19th and early 20th centuries, inspired by Arthur de Gobineau's An Essay on the Inequality of the Human Races (1853–1855) and Vacher de Lapouge's "anthroposociology", asserted as self-evident the biological inferiority of particular groups (Kevles 1985). In many parts of the world, the idea of race became a way of rigidly dividing groups by culture as well as by physical appearances (Hannaford 1996). Campaigns of oppression and genocide were often motivated by supposed racial differences (Horowitz 2001).
In his most controversial book, The Descent of Man, Charles Darwin made strong suggestions of racial differences and European superiority. In Darwin's view, stronger tribes of humans always replaced weaker tribes. As savage tribes came in conflict with civilized nations, such as England, the less advanced people were destroyed. Nevertheless, he also noted the great difficulty naturalists had in trying to decide how many "races" there actually were (Darwin was himself a monogenist on the question of race, believing that all humans were of the same species and finding "race" to be a somewhat arbitrary distinction among some groups):
Man has been studied more carefully than any other animal, and yet there is the greatest possible diversity amongst capable judges whether he should be classed as a single species or race, or as two (Virey), as three (Jacquinot), as four (Kant), five (Blumenbach), six (Buffon), seven (Hunter), eight (Agassiz), eleven (Pickering), fifteen (Bory St. Vincent), sixteen (Desmoulins), twenty-two (Morton), sixty (Crawfurd), or as sixty-three, according to Burke. This diversity of judgment does not prove that the races ought not to be ranked as species, but it shews that they graduate into each other, and that it is hardly possible to discover clear distinctive characters between them.
In a recent article, Leonard Lieberman and Fatimah Jackson have suggested that any new support for a biological concept of race will likely come from another source, namely, the study of human evolution. They therefore ask what, if any, implications current models of human evolution may have for any biological conception of race.
Today, all humans are classified as belonging to the species Homo sapiens and sub-species Homo sapiens sapiens. However, this is not the first species of hominids: the first species of genus Homo, Homo habilis, evolved in East Africa at least 2 million years ago, and members of this species populated different parts of Africa in a relatively short time. Homo erectus evolved more than 1.8 million years ago, and by 1.5 million years ago had spread throughout the Old World. Virtually all physical anthropologists agree that Homo sapiens evolved out of Homo erectus. Anthropologists have been divided as to whether Homo sapiens evolved as one interconnected species from H. erectus (called the Multiregional Model, or the Regional Continuity Model), or evolved only in East Africa, and then migrated out of Africa and replaced H. erectus populations throughout the Old World (called the Out of Africa Model or the Complete Replacement Model). Anthropologists continue to debate both possibilities, and the evidence is technically ambiguous as to which model is correct, although most anthropologists currently favor the Out of Africa model.
Lieberman and Jackson have argued that while advocates of both the Multiregional Model and the Out of Africa Model use the word race and make racial assumptions, none define the term. They conclude that "Each model has implications that both magnify and minimize the differences between races. Yet each model seems to take race and races as a conceptual reality. The net result is that those anthropologists who prefer to view races as a reality are encouraged to do so" and conclude that students of human evolution would be better off avoiding the word race, and instead describe genetic differences in terms of populations and clinal gradations.
With the advent of the modern synthesis in the early 20th century, many biologists sought to use evolutionary models and population genetics in an attempt to formalise taxonomy. The Biological Species Concept (BSC) is the most widely used system for describing species; it defines a species as a group of organisms that interbreed in their natural environment and produce viable offspring. In practice, species are not classified according to the BSC but typologically, by the use of a holotype, because of the difficulty of determining whether all members of a group of organisms do, or could in practice, interbreed. BSC species are routinely classified at a subspecific level, though this classification is conducted differently for different taxa; for mammals the normal taxonomic unit below the species level is usually the subspecies. More recently the Phylogenetic Species Concept (PSC) has gained a substantial following. The PSC is based on the idea of a least-inclusive taxonomic unit (LITU): in phylogenetic classification no subspecies can exist, because any monophyletic group would automatically constitute a LITU. Technically, species cease to exist, as do all hierarchical taxa; a LITU is effectively defined as any monophyletic taxon. Phylogenetics is strongly influenced by cladistics, which classifies organisms based on evolutionary descent rather than on overall similarity between groups of organisms. In biology the term "race" is very rarely used because it is ambiguous: "'Race' is not being defined or used consistently; its referents are varied and shift depending on context. The term is often used colloquially to refer to a range of human groupings. Religious, cultural, social, national, ethnic, linguistic, genetic, geographical and anatomical groups have been and sometimes still are called 'races'". When the term is used, it is generally synonymous with subspecies. One of the main obstacles to identifying subspecies is that, while subspecies is a recognised taxonomic term, it has no precise definition.
A monotypic species has no distinct subspecies: variation among its members is either slight or does not fall into biologically significant, geographically structured groups. A polytypic species, by contrast, has two or more subspecies: separate populations that are more genetically differentiated from one another and more reproductively isolated; the reduced gene flow between such populations leads to genetic differentiation.
In 1978, Sewall Wright suggested that human populations that have long inhabited separated parts of the world should, in general, be considered to be of different subspecies by the usual criterion that most individuals of such populations can be allocated correctly by inspection. It does not require a trained anthropologist to classify an array of Englishmen, West Africans, and Chinese with 100% accuracy by features, skin color, and type of hair in spite of so much variability within each of these groups that every individual can easily be distinguished from every other. However, it is customary to use the term race rather than subspecies for the major subdivisions of the human species as well as for minor ones.
On the other hand, subspecies are in practice often defined by easily observable physical appearance, even though there is not necessarily any evolutionary significance to these observed differences, so this form of classification has become less acceptable to evolutionary biologists. Likewise, this typological approach to "race" is generally regarded as discredited by biologists and anthropologists.
Because of the difficulty of classifying subspecies morphologically, many biologists reject the concept altogether.
In their 2003 paper "Human Genetic Diversity and the Nonexistence of Biological Races", Jeffrey Long and Rick Kittles give a long critique of the application of FST to human populations. They find that the figure of 85% is misleading because it implies that all human populations contain on average 85% of all genetic diversity. This does not correctly reflect human population history, they claim, because it treats all human groups as independent. A more realistic portrayal of the way human groups are related is to understand that some human groups are parental to other groups and that these groups represent paraphyletic groups to their descent groups. For example, under the recent African origin theory the human population in Africa is paraphyletic to all other human groups because it represents the ancestral group from which all non-African populations derive; more than that, non-African groups derive from only a small, non-representative sample of this African population. This means that all non-African groups are more closely related to each other and to some African groups (probably east Africans) than they are to others, and further that the migration out of Africa represented a genetic bottleneck, with much of the diversity that existed in Africa not being carried out of Africa by the emigrating groups. On this view, human population movements did not result in independent populations, but rather in a series of dilutions of diversity the further from Africa a population lives, each founding event representing a genetic subset of its parental population. Long and Kittles find that rather than 85% of human genetic diversity existing in all human populations, about 100% of human diversity exists in a single African population, whereas only about 70% of human genetic diversity exists in a population derived from New Guinea. Long and Kittles observe that this still produces a global human population that is genetically homogeneous compared to other mammalian populations.
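The "series of dilutions" picture described above can be illustrated with a toy simulation (a minimal sketch, not Long and Kittles' actual analysis): an ancestral set of allele frequencies is repeatedly re-sampled through small founding groups, and expected heterozygosity, a common measure of genetic diversity, tends to fall with each successive bottleneck. The population labels, number of loci, founder sizes and drift durations below are invented for illustration.

import numpy as np

rng = np.random.default_rng(0)

n_loci = 5000                                  # biallelic loci (illustrative)
ancestral_p = rng.uniform(0.05, 0.95, n_loci)  # ancestral allele frequencies

def heterozygosity(p):
    """Mean expected heterozygosity, 2p(1-p), averaged across loci."""
    return np.mean(2 * p * (1 - p))

def founder_event(p, n_founders, generations=10):
    """Resample allele frequencies through a small founding group,
    then let them drift for a few generations at that small size."""
    for _ in range(generations):
        p = rng.binomial(2 * n_founders, p) / (2 * n_founders)
    return p

populations = {"ancestral (African)": ancestral_p}
p = ancestral_p
# Each successive population is founded by a small subset of the previous one.
for name in ["out-of-Africa", "West Eurasia", "East Asia", "Oceania"]:
    p = founder_event(p, n_founders=25)
    populations[name] = p

for name, freqs in populations.items():
    print(f"{name:20s} expected heterozygosity = {heterozygosity(freqs):.3f}")

Run repeatedly with different seeds, the printed heterozygosities decline from the ancestral population outward, mirroring the nested, diluted structure Long and Kittles describe.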
Wright's F statistics are not used to determine whether a group can be described as a subspecies. Although the statistic is used to measure the degree of differentiation between populations, the degree of genetic differentiation is not by itself a marker of subspecies status. Taxonomists generally prefer to use phylogenetic analysis to determine whether a population can be considered a subspecies. Phylogenetic analysis relies on the concept of derived characteristics that are not shared between groups; this means that such populations are usually allopatric and therefore discretely bounded, which makes subspecies, evolutionarily speaking, monophyletic groups. The clinality of human genetic variation in general rules out the idea that human population groups can be considered monophyletic, as there appears always to have been a great deal of gene flow between human populations.
The first to challenge the concept of race on empirical grounds were anthropologists Franz Boas, who demonstrated phenotypic plasticity due to environmental factors (Boas 1912), and Ashley Montagu (1941, 1942), who relied on evidence from genetics. Zoologists Edward O. Wilson and W. Brown then challenged the concept from the perspective of general animal systematics, and further rejected the claim that "races" were equivalent to "subspecies" (Wilson and Brown 1953).
In a response to the anthropologist Frank Livingstone, who had argued in 1962 that there are no races, only clines, Theodosius Dobzhansky argued that when talking about "race" one must be attentive to how the term is being used: "I agree with Dr. Livingstone that if races have to be 'discrete units,' then there are no races, and if 'race' is used as an 'explanation' of the human variability, rather than vice versa, then the explanation is invalid." He further argued that one could use the term race if one distinguished between "race differences" and "the race concept." The former refers to any distinction in gene frequencies between populations; the latter is "a matter of judgment." He further observed that even when there is clinal variation, "Race differences are objectively ascertainable biological phenomena ... but it does not follow that racially distinct populations must be given racial (or subspecific) labels." In short, Livingstone and Dobzhansky agree that there are genetic differences among human beings; they also agree that the use of the race concept to classify people, and how the race concept is used, is a matter of social convention. They differ on whether the race concept remains a meaningful and useful social convention.
In 1964, the biologists Paul Ehrlich and Richard Holm pointed out cases where two or more clines are distributed discordantly. For example, melanin is distributed in a decreasing pattern from the equator north and south, while frequencies of the haplotype for beta-S hemoglobin radiate out of specific geographical points in Africa (Ehrlich and Holm 1964). As the anthropologists Leonard Lieberman and Fatimah Linda Jackson observe, "Discordant patterns of heterogeneity falsify any description of a population as if it were genotypically or even phenotypically homogeneous" (Lieberman and Jackson 1995).
Patterns such as those seen in human physical and genetic variation as described above have meant that the number and geographic location of any described races is highly dependent on the importance attributed to, and the quantity of, the traits considered. For example, if only skin colour and a "two race" system of classification were used, then one might classify Indigenous Australians in the same "race" as Black people and Caucasians in the same "race" as East Asian people, but biologists and anthropologists would dispute that these classifications have any scientific validity. On the other hand, the greater the number of traits (or alleles) considered, the more subdivisions of humanity are detected, because traits and gene frequencies do not always correspond to the same geographical location, a point made by Ossorio and Duster (2005).
Richard Lewontin, claiming that 85 percent of human variation occurs within populations and not among populations, argued that neither "race" nor "subspecies" was an appropriate or useful way to describe populations (Lewontin 1973). Nevertheless, barriers, which may be cultural or physical, between populations can limit gene flow and increase genetic differences. Recent work by population geneticists conducting research in Europe suggests that ethnic identity can be a barrier to gene flow. Others, such as Ernst Mayr, have argued for a notion of "geographic race". Some researchers report that the variation between racial groups (measured by Sewall Wright's population structure statistic FST) accounts for as little as 5% of human genetic variation. Sewall Wright himself commented that if differences this large were seen in another species, they would be called subspecies. In 2003 A. W. F. Edwards argued that cluster analysis supersedes Lewontin's arguments (see below).
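For a single biallelic locus, Wright's FST can be written as FST = (HT - HS)/HT, where HT is the expected heterozygosity of the pooled population and HS is the average expected heterozygosity within subpopulations. The sketch below shows the calculation under the simplifying assumption of equally sized subpopulations; the allele frequencies are invented for illustration.

import numpy as np

def fst_biallelic(subpop_freqs):
    """Wright's FST for one biallelic locus, assuming equally sized subpopulations.

    subpop_freqs: frequency of one allele in each subpopulation.
    """
    p = np.asarray(subpop_freqs, dtype=float)
    h_s = np.mean(2 * p * (1 - p))   # mean within-subpopulation heterozygosity
    p_bar = np.mean(p)               # pooled allele frequency
    h_t = 2 * p_bar * (1 - p_bar)    # heterozygosity of the pooled population
    return (h_t - h_s) / h_t

# Invented frequencies: small differences give a small FST, large ones a large FST.
print(fst_biallelic([0.45, 0.50, 0.55]))  # ~0.007: almost all variation within groups
print(fst_biallelic([0.10, 0.50, 0.90]))  # ~0.43: strong differentiation

Reported human values averaged over many loci fall toward the low end of this scale, which is the empirical basis of Lewontin's argument and of Wright's contrasting remark quoted above.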
These empirical challenges to the concept of race forced evolutionary scientists to reconsider their definition of race. Mid-century, for example, William Boyd proposed defining human races in terms of differences in the frequencies of genes between populations rather than in terms of visible physical type.
The distribution of many physical traits resembles the distribution of genetic variation within and between human populations (American Association of Physical Anthropologists 1996; Keita and Kittles 1997). For example, ~90% of the variation in human head shapes occurs within every human group, and ~10% separates groups, with a greater variability of head shape among individuals with recent African ancestors (Relethford 2002).
With the recent availability of large amounts of human genetic data from many geographically distant human groups, scientists have again started to investigate the relationships between people from various parts of the world. One method is to investigate DNA molecules that are passed down from mother to child (mtDNA) or from father to son (Y chromosomes); these form molecular lineages and can be informative regarding prehistoric population migrations. Alternatively, autosomal alleles are investigated in an attempt to understand how much genetic material groups of people share. This work has led to a debate amongst geneticists, molecular anthropologists and medical doctors as to the validity of concepts such as "race". Some researchers insist that classifying people into groups based on ancestry may be important from medical and social policy points of view, and claim to be able to do so accurately. Others claim that individuals from different groups share far too much of their genetic material for group membership to have any medical implications. This has reignited the scientific debate over the validity of human classification and concepts of "race".
Mitochondrial DNA and Y chromosome research has produced three reproducible observations relevant to race and human evolution.
First, all mtDNA and Y chromosome lineages derive from a common ancestral molecule. For mtDNA this ancestor is estimated to have lived about 140,000-290,000 years ago (Mitochondrial Eve), while for Y chromosomes the ancestor is estimated to have lived about 70,000 years ago (Y chromosome Adam). These observations are robust, and the individuals who originally carried these ancestral molecules are the direct female-line and male-line most recent common ancestors of all extant anatomically modern humans. The observation that these are the direct female-line and male-line ancestors of all living humans should not be interpreted as meaning that either was the first anatomically modern human, nor should we assume that there were no other modern humans living concurrently with mitochondrial Eve or Y chromosome Adam. A more reasonable explanation is that other humans who lived at the same time did indeed reproduce and pass their genes down to extant humans, but that their mitochondrial and Y chromosomal lineages have been lost over time, probably due to random events (such as producing only male or only female children). It is impossible to know to what extent these non-extant lineages have been lost, or how much they differed from the mtDNA or Y chromosome of our maternal- and paternal-line MRCA. The difference in dates between Y chromosome Adam and mitochondrial Eve is usually attributed to a higher extinction rate for Y chromosomes, probably because a few very successful men produce a great many children while a larger number of less successful men produce far fewer.
Second, mtDNA and Y chromosome work supports a recent African origin for anatomically modern humans, with the ancestors of all extant modern humans leaving Africa somewhere between 100,000 and 50,000 years ago.
Third, studies show that specific types (haplogroups) of mtDNA or Y chromosomes do not always cluster by geography, ethnicity or race, implying that multiple lineages were involved in founding modern human populations, with many closely related lineages spread over large geographic areas and many populations containing distantly related lineages. Keita et al. (2004) make a similar point with reference to Y chromosome and mtDNA studies and their relevance to concepts of "race".
Human genetic variation is not distributed uniformly throughout the global population. The global range of human habitation means that there are great distances between some human populations (for example, between South America and Southern Africa), and this reduces gene flow between them. Environmental selection is also likely to play a role in differences between human populations, although it is now believed that the majority of genetic differences between populations are selectively neutral. The existence of differences between peoples from different regions of the world is relevant to discussions about the concept of "race": some biologists believe that the language of "race" is relevant in describing human genetic variation. It is now possible to reasonably estimate the continents of origin of an individual's ancestors based on genetic data.
Richard Lewontin has claimed that "race" is a meaningless classification because the majority of human variation (~85%) is found within groups, and therefore two individuals from different "races" are almost as likely to be as similar to each other as either is to someone from their own "race". In 2003 A. W. F. Edwards rebutted this argument, claiming that Lewontin's conclusion ignores the fact that most of the information that distinguishes populations is hidden in the correlation structure of the data and not simply in the variation of the individual factors. Edwards concludes that "It is not true that 'racial classification is ... of virtually no genetic or taxonomic significance' or that 'you can't predict someone's race by their genes'". Researchers such as Neil Risch and Noah Rosenberg have argued that a person's biological and cultural background may have important implications for medical treatment decisions, both for genetic and non-genetic reasons.
The results obtained by clustering analyses are dependent on several criteria, including how populations are sampled, how the boundaries between clusters are set, and the level of resolution (number of clusters) used.
Rosenberg et al.'s (2002) paper "Genetic Structure of Human Populations" in particular was taken up by Nicholas Wade in the New York Times as evidence that genetic studies supported the "popular conception" of race. However, Rosenberg's work used samples from the Human Genome Diversity Project (HGDP), a project that has collected samples from individuals from 52 ethnic groups from various locations around the world. The HGDP has itself been criticised for collecting samples on an "ethnic group" basis, on the grounds that ethnic groups represent constructed categories rather than categories which are solely natural or biological. Scientists such as the molecular anthropologist Jonathan Marks, the geneticists David Serre, Svante Pääbo and Mary-Claire King, and the medical doctor Arno G. Motulsky argue that this is a biased sampling strategy, and that human samples should have been collected geographically, i.e. from points on a grid overlaying a map of the world. They maintain that human genetic variation is not partitioned into discrete racial groups (clustered), but is spread in a clinal manner (isolation by distance) that is masked by this biased sampling strategy. The existence of allelic clines and the observation that the bulk of human variation is continuously distributed have led scientists such as Kittles and Weiss (2003) to conclude that any categorization scheme attempting to partition that variation meaningfully will necessarily create artificial truncations. It is for this reason, Reanne Frank argues, that attempts to allocate individuals into ancestry groupings based on genetic information have yielded varying results that are highly dependent on methodological design.
In a follow-up paper, "Clines, Clusters, and the Effect of Study Design on the Inference of Human Population Structure" (2005), Rosenberg et al. maintain that their clustering analysis is robust, but they also agree that there is evidence for clinality (isolation by distance). They furthermore distance themselves from the language of race and do not use the term "race" in any of their publications: "The arguments about the existence or nonexistence of 'biological races' in the absence of a specific context are largely orthogonal to the question of scientific utility, and they should not obscure the fact that, ultimately, the primary goals for studies of genetic variation in humans are to make inferences about human evolutionary history, human biology, and the genetic causes of disease."
One of the underlying questions regarding the distribution of human genetic diversity is the degree to which genes are shared between the observed clusters, and therefore the extent to which membership of a cluster can accurately predict an individual's genetic makeup or susceptibility to disease. This is at the core of Lewontin's argument. Lewontin used Sewall Wright's fixation index (FST) to estimate that on average 85% of human genetic diversity is contained within groups. Are members of the same cluster always more genetically similar to each other than they are to members of a different cluster? Lewontin's argument is that within-group differences are almost as large as between-group differences, and therefore two individuals from different groups are almost as likely to be more similar to each other than they are to members of their own group. Can clustering analyses overcome this finding? In 2004 Bamshad et al. used the data from Rosenberg et al. (2002) to investigate the extent of genetic differences between individuals within continental groups relative to genetic differences between individuals from different continental groups. They found that though these individuals could be classified very accurately into continental clusters, there was a significant degree of genetic overlap at the individual level.
This question was addressed in more detail in a 2007 paper by Witherspoon et al., "Genetic Similarities Within and Between Human Populations", which examines three claims commonly made about human genetic variation and population membership. The paper states that "All three of the claims listed above appear in disputes over the significance of human population variation and 'race'" and asks, "If multilocus statistics are so powerful, then how are we to understand this [last] finding?"
Witherspoon et al. (2007) attempt to reconcile these apparently contradictory findings, and show that the observed clustering of human populations into relatively discrete groups is a product of using what they call "population trait values". This means that each individual is compared to the "typical" trait for several populations, and assigned to a population based on the individual's overall similarity to one of the populations as a whole. They therefore argue that clustering analyses cannot necessarily be used to make inferences regarding the similarity or dissimilarity of individuals between or within clusters, but only regarding the similarity or dissimilarity of individuals to the "trait values" of any given cluster. The paper measures the rate of misclassification using these trait values, calling it the "population trait value misclassification rate" (CT). It also investigates the similarities between individuals using what they term the "dissimilarity fraction" (ω): "the probability that a pair of individuals randomly chosen from different populations is genetically more similar than an independent pair chosen from any single population." Witherspoon et al. show that two individuals can be more genetically similar to each other than to the typical genetic type of their own respective populations, and yet be correctly assigned to their respective populations. An important observation is that the likelihood that two individuals from different populations will be more similar to each other genetically than two individuals from the same population depends on several criteria, most importantly the number of loci studied and the distinctiveness of the populations under investigation. For example, when 10 loci are used to compare three geographically disparate populations (sub-Saharan African, East Asian and European), individuals are more similar to members of a different group about 30% of the time. If the number of loci is increased to 100, individuals are more genetically similar to members of a different population ~20% of the time, and even using 1,000 loci, ω ~ 10%. They do state that for these very geographically separated populations it is possible to reduce this statistic to 0% when tens of thousands of loci are used, meaning that individuals will always be more similar to members of their own population. But the paper notes that humans are not distributed into discrete, geographically separated populations, and that omitting intermediate regions may produce a false distinctiveness for human diversity. The paper supports the observation that "highly accurate classification of individuals from continuously sampled (and therefore closely related) populations may be impossible". Furthermore, the results indicate that clustering analyses and self-reported ethnicity may not be good estimates of genetic susceptibility to disease risk. Witherspoon et al. conclude that accurate classification of individuals into populations is compatible with the finding that, even for the most distinct populations and large numbers of loci, individuals are frequently more similar to members of other populations than to members of their own.
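The distinction Witherspoon et al. draw between classifying individuals and comparing pairs of individuals can be illustrated with a toy simulation. This is a sketch in the spirit of their argument, not a reproduction of their method: the two populations, their degree of allele-frequency divergence, and the sample sizes are all invented assumptions. As the number of loci grows, assignment of individuals to the nearer population profile becomes very accurate, while a randomly chosen between-population pair can still be more genetically similar than a within-population pair a non-trivial fraction of the time.

import numpy as np

rng = np.random.default_rng(1)

def simulate(n_loci, n_per_pop=50, divergence=0.1):
    """Two populations whose allele frequencies differ by a modest random amount."""
    p_a = rng.uniform(0.2, 0.8, n_loci)
    p_b = np.clip(p_a + rng.normal(0, divergence, n_loci), 0.01, 0.99)
    geno_a = rng.binomial(2, p_a, (n_per_pop, n_loci))  # genotypes: 0, 1 or 2 copies
    geno_b = rng.binomial(2, p_b, (n_per_pop, n_loci))
    return p_a, p_b, geno_a, geno_b

def classification_accuracy(p_a, p_b, geno_a, geno_b):
    """Assign each individual to the population whose mean genotype profile is closer."""
    correct = 0
    for genos, own, other in [(geno_a, p_a, p_b), (geno_b, p_b, p_a)]:
        d_own = np.sum((genos - 2 * own) ** 2, axis=1)
        d_other = np.sum((genos - 2 * other) ** 2, axis=1)
        correct += np.sum(d_own < d_other)
    return correct / (len(geno_a) + len(geno_b))

def cross_pop_more_similar(geno_a, geno_b, n_trials=2000):
    """Fraction of random between-population pairs that are more similar than an
    independently drawn within-population pair (an omega-like statistic)."""
    hits = 0
    for _ in range(n_trials):
        i, j = rng.integers(len(geno_a)), rng.integers(len(geno_b))
        k, l = rng.choice(len(geno_a), 2, replace=False)
        d_between = np.sum((geno_a[i] - geno_b[j]) ** 2)
        d_within = np.sum((geno_a[k] - geno_a[l]) ** 2)
        hits += d_between < d_within
    return hits / n_trials

for n_loci in [10, 100, 1000]:
    p_a, p_b, geno_a, geno_b = simulate(n_loci)
    print(f"{n_loci:5d} loci: classification accuracy "
          f"{classification_accuracy(p_a, p_b, geno_a, geno_b):.2f}, "
          f"between-population pair more similar "
          f"{cross_pop_more_similar(geno_a, geno_b):.2f} of the time")

The two statistics behave differently as loci are added, which is the point of the CT versus ω distinction: classification converges toward certainty much faster than pairwise similarity between groups becomes rare.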
Summary of different biological definitions of race:

Essentialist (Hooton 1926): "A great division of mankind, characterized as a group by the sharing of a certain combination of features, which have been derived from their common descent, and constitute a vague physical background, usually more or less obscured by individual variations, and realized best in a composite picture."

Taxonomic (Mayr 1969): "An aggregate of phenotypically similar populations of a species, inhabiting a geographic subdivision of the range of a species, and differing taxonomically from other populations of the species."

Population (Dobzhansky 1970): "Races are genetically distinct Mendelian populations. They are neither individuals nor particular genotypes, they consist of individuals who differ genetically among themselves."

Lineage (Templeton 1998): "A subspecies (race) is a distinct evolutionary lineage within a species. This definition requires that a subspecies be genetically differentiated due to barriers to genetic exchange that have persisted for long periods of time; that is, the subspecies must have historical continuity in addition to current genetic differentiation."
Since 1932, college textbooks introducing physical anthropology have increasingly come to reject race as a valid concept: from 1932 to 1976, only seven out of thirty-two rejected race; from 1975 to 1984, thirteen out of thirty-three rejected race; from 1985 to 1993, thirteen out of nineteen rejected race. According to one academic journal entry, whereas 78 percent of the articles in the 1931 Journal of Physical Anthropology employed the term "race" or near synonyms reflecting a bio-race paradigm, only 36 percent did so in 1965, and just 28 percent did in 1996. The American Anthropological Association, drawing on biological research, currently holds that "The concept of race is a social and cultural construction... . Race simply cannot be tested or proven scientifically," and that "It is clear that human populations are not unambiguous, clearly demarcated, biologically distinct groups. The concept of 'race' has no validity ... in the human species".
In an ongoing debate, some geneticists argue that race is neither a meaningful concept nor a useful heuristic device, and even that genetic differences among groups are biologically meaningless, on the grounds that more genetic variation exists within such races than among them, and that racial traits overlap without discrete boundaries. Other geneticists, in contrast, argue that categories of self-identified race/ethnicity or biogeographic ancestry are both valid and useful, that these categories correspond with clusters inferred from multilocus genetic data, and that this correspondence implies that genetic factors might contribute to unexplained phenotypic variation between groups.
In February 2001, the editors of the medical journal Archives of Pediatrics and Adolescent Medicine asked authors to no longer use "race" as an explanatory variable and not to use obsolescent terms. Some other peer-reviewed journals, such as the New England Journal of Medicine and the American Journal of Public Health, have made similar moves. Furthermore, the National Institutes of Health issued a program announcement for grant applications through February 1, 2006, specifically seeking researchers who could investigate, and publicize among primary care physicians, the detrimental effects on the nation's health of the practice of medical racial profiling using such terms. The program announcement quoted the editors of one journal as saying that "analysis by race and ethnicity has become an analytical knee-jerk reflex".
A survey taken in 1985 (Lieberman et al. 1992) asked 1,200 American anthropologists whether they agreed or disagreed with the following proposition: "There are biological races in the species Homo sapiens."
The figure for physical anthropologists at PhD-granting departments was slightly higher, rising from 41% to 42%, with 50% agreeing. This survey, however, did not specify any particular definition of race (although it did clearly specify biological race within the species Homo sapiens); it is difficult to say whether those who supported the statement thought of race in taxonomic or population terms.
The same survey, repeated in 1999, showed that the proportion of anthropologists rejecting the proposition had continued to grow.
In Poland, the race concept was rejected by only 25 percent of anthropologists in 2001, although "Unlike the U.S. anthropologists, Polish anthropologists tend to regard race as a term without taxonomic value, often as a substitute for population".
In the face of these issues, some evolutionary scientists have simply abandoned the concept of race in favor of "population." What distinguishes population from previous groupings of humans by race is that it refers to a breeding population (essential to genetic calculations) and not to a biological taxon. Other evolutionary scientists have abandoned the concept of race in favor of cline (meaning, how the frequency of a trait changes along a geographic gradient). (The concepts of population and cline are not, however, mutually exclusive and both are used by many evolutionary scientists.)
The anthropologist Jonathan Marks has argued along similar lines.
In the face of this rejection of race by evolutionary scientists, many social scientists have replaced the word race with the word "ethnicity" to refer to self-identifying groups based on beliefs concerning shared culture, ancestry and history. Alongside empirical and conceptual problems with "race", following the Second World War evolutionary and social scientists were acutely aware of how beliefs about race had been used to justify discrimination, apartheid, slavery, and genocide. This questioning gained momentum in the 1960s during the U.S. civil rights movement and the emergence of numerous anti-colonial movements worldwide. They thus came to understand that these justifications, even when expressed in language that sought to appear objective, were social constructs.
Even as the idea of "race" was becoming a powerful organizing principle in many societies, the shortcomings of the concept were apparent. In the Old World, the gradual transition in appearances from one group to adjacent groups emphasized that "one variety of mankind does so sensibly pass into the other, that you cannot mark out the limits between them," as Blumenbach observed in his writings on human variation (Marks 1995, p. 54). As anthropologists and other evolutionary scientists have shifted away from the language of race to the term population to talk about genetic differences, historians, anthropologists and social scientists have re-conceptualized the term "race" as a cultural category or social construct, in other words, as a particular way that some people have of talking about themselves and others. As Stephan Palmie has recently summarized, race "is not a thing but a social relation"; or, in the words of Katya Gibel Mevorach, "a metonym," "a human invention whose criteria for differentiation are neither universal nor fixed but have always been used to manage difference." As such, they argue, it cannot be a useful analytical concept; rather, the use of the term "race" itself must be analyzed. Moreover, they argue that biology will not explain why or how people use the idea of race: history and social relationships will. For example, the fact that in many parts of the United States categories such as Hispanic or Latino are viewed as constituting a race (instead of an ethnic group) reflects this idea of race as a social construct. It may, however, be in the interest of dominant groups, especially in the context of the debate over immigration, to cluster Spanish speakers into a single, isolated population rather than classifying them according to race as the rest of U.S. racial groups are: "According to the 2000 census, two-thirds [of Hispanics] are of Mexican heritage . . . So, for practical purposes, when we speak of Hispanics and Latinos in the U.S., we're really talking about Native Americans . . . [therefore] if being Hispanic carries any societal consequences that justify inclusion in the pantheon of great American racial minorities, they're the result of having Native American blood. [But imagine the] impact this would have on the illegal-immigration debate. It's one thing to blame the fall of western civilization on illegal Mexican immigration, but quite thornier to blame it on illegal Amerindian immigration from Mexico."
In the United States since its early history, Native Americans, African-Americans and European-Americans were classified as belonging to different races. For nearly three centuries, the criteria for membership in these groups were similar, comprising a person's appearance, his fraction of known non-White ancestry, and his social circle. But the criteria for membership in these races diverged in the late 19th century. During Reconstruction, increasing numbers of Americans began to consider anyone with "one drop" of known "Black blood" to be Black, regardless of appearance. By the early 20th century, this notion of invisible blackness was made statutory in many states and widely adopted nationwide. In contrast, Amerindians continue to be defined by a certain percentage of "Indian blood" (called blood quantum), due in large part to American slavery ethics. Finally, for the past century or so, to be White one had to have perceived "pure" White ancestry.
Efforts to sort the increasingly mixed population of the United States into discrete categories generated many difficulties (Spickard 1992). By the standards used in past censuses, many millions of children born in the United States have belonged to a different race than have one of their biological parents. Efforts to track mixing between groups led to a proliferation of categories (such as "mulatto" and "octoroon") and "blood quantum" distinctions that became increasingly untethered from self-reported ancestry. A person's racial identity can change over time, and self-ascribed race can differ from assigned race (Kressin et al. 2003). Until the 2000 census, Latinos were required to identify with a single race despite the long history of mixing in Latin America; partly as a result of the confusion generated by the distinction, 32.9% (U.S. census records) of Latino respondents in the 2000 census ignored the specified racial categories and checked "some other race". (Mays et al. 2003 claim a figure of 42%)
The difference between how Native American and Black identities are defined today (blood quantum versus one-drop) has demanded explanation. According to anthropologists such as Gerald Sider, the goal of such racial designations was to concentrate power, wealth, privilege and land in the hands of Whites in a society of White hegemony and privilege (Sider 1996; see also Fields 1990). The differences have little to do with biology and far more to do with the history of racism and specific forms of White supremacy (the social, geopolitical and economic agendas of dominant Whites vis-à-vis subordinate Blacks and Native Americans), especially the different roles Blacks and Amerindians occupied in White-dominated 19th-century America. The theory suggests that the blood quantum definition of Native American identity enabled Whites to acquire Amerindian lands, while the one-drop rule of Black identity enabled Whites to preserve their agricultural labor force. The contrast presumably emerged because, as a people transported far from their land and kinship ties on another continent, Black labor was relatively easy to control, reducing Blacks to valuable commodities as agricultural laborers. In contrast, Amerindian labor was more difficult to control; moreover, Amerindians occupied large territories that became valuable as agricultural lands, especially with the invention of new technologies such as railroads. Thus, the blood quantum definition enhanced White acquisition of Amerindian lands in a doctrine of Manifest Destiny that subjected them to marginalization and repeated, localized campaigns of extermination.
The political economy of race had different consequences for the descendants of aboriginal Americans and African slaves. The 19th century blood quantum rule meant that it was relatively easier for a person of mixed Euro-Amerindian ancestry to be accepted as White. The offspring of only a few generations of intermarriage between Amerindians and Whites likely would not have been considered Amerindian at all (at least not in a legal sense). Amerindians could have treaty rights to land, but because an individual with one Amerindian great-grandparent no longer was classified as Amerindian, they lost any legal claim to Amerindian land. According to the theory, this enabled Whites to acquire Amerindian lands. The irony is that the same individuals who could be denied legal standing because they were "too White" to claim property rights, might still be Amerindian enough to be considered as "breeds", stigmatized for their Native American ancestry.
The 20th century one-drop rule, on the other hand, made it relatively difficult for anyone of known Black ancestry to be accepted as White. The child of a Black sharecropper and a White person was considered Black. And, significant in terms of the economics of sharecropping, such a person also would likely be a sharecropper as well, thus adding to the employer's labor force.
In short, this theory suggests that in a 20th century economy that benefited from sharecropping, it was useful to have as many Blacks as possible. Conversely, in a 19th century nation bent on westward expansion, it was advantageous to diminish the numbers of those who could claim title to Amerindian lands by simply defining them out of existence.
It must be mentioned, however, that although some scholars of the Jim Crow period agree that the 20th century notion of invisible Blackness shifted the color line in the direction of paleness, thereby swelling the labor force in response to Southern Blacks' great migration northwards, others (Joel Williamson, C. Vann Woodward, George M. Fredrickson, Stetson Kennedy) see the one-drop rule as a simple consequence of the need to define Whiteness as being pure, thus justifying White-on-Black oppression. In any event, over the centuries when Whites wielded power over both Blacks and Amerindians and widely believed in their inherent superiority over people of color, it is no coincidence that the hardest racial group in which to prove membership was the White one.
In the United States, social and legal conventions developed over time that forced individuals of mixed ancestry into simplified racial categories (Gossett 1997). An example is the "one-drop rule" implemented in some state laws that treated anyone with a single known African American ancestor as black (Davis 2001). The decennial censuses conducted since 1790 in the United States also created an incentive to establish racial categories and fit people into those categories (Nobles 2000). In other countries in the Americas where mixing among groups was overtly more extensive, social categories have tended to be more numerous and fluid, with people moving into or out of categories on the basis of a combination of socioeconomic status, social class, ancestry, and appearance (Mörner 1967).
The term "Hispanic" as an ethnonym emerged in the 20th century with the rise of migration of laborers from American Spanish-speaking countries to the United States. It includes people who had been considered racially distinct (Black, White, Amerindian, Asian, and mixed groups) in their home countries. Today, the word "Latino" is often used as a synonym for "Hispanic". In contrast to "Latino"´or "Hispanic" "Anglo" is now used to refer to non-Hispanic White Americans or non-Hispanic European Americans, most of whom speak the English language but are not necessarily of English descent.
Typically, a consumer of a commercial personalized genetic history (PGH) service sends in a sample of DNA, which is analyzed by molecular biologists, and receives a report on his or her inferred ancestry in return.
Through these kinds of reports, new advances in molecular genetics are being used to create or confirm stories people have about their social identities. Although these identities are not racial in the biological sense, they are in the cultural sense, in that they link biological and cultural identities. Nadia Abu el-Haj has argued that the significance of genetic lineages in popular conceptions of race owes to the perception that, while genetic lineages, like older notions of race, suggest some idea of biological relatedness, unlike older notions of race they are not directly connected to claims about human behaviour or character. Abu el-Haj has thus argued that "postgenomics does seem to be giving race a new lease on life." Nevertheless, she argues that in order to understand what it means to think of race in terms of genetic lineages or clusters, one must understand how genomics and the mapping of lineages and clusters liberates "the new racial science from the older one by disentangling ancestry from culture and capacity." As an example, she refers to recent work by Hammer et al., which aimed to test the claim that present-day Jews are more closely related to one another than to neighbouring non-Jewish populations. Hammer et al. found that the degree of genetic similarity among Jews shifted depending on the locus investigated, and suggested that this was the result of natural selection acting on particular loci. They therefore focused on the non-recombining Y chromosome to "circumvent some of the complications associated with selection". As another example she points to work by Thomas et al., who sought to distinguish between the Y chromosomes of Jewish priests (in Judaism, membership in the priesthood is passed on through the father's line) and the Y chromosomes of non-Jews. Abu el-Haj concluded that this new "race science" calls attention to the importance of "ancestry" (narrowly defined, as it does not include all ancestors) in some religions and in popular culture, and to people's desire to use science to confirm their claims about ancestry; this "race science", she argues, is fundamentally different from older notions of race that were used to explain differences in human behaviour or social status.
On the other hand, there are tests that do not rely on molecular lineages, but rather on correlations between allele frequencies; when allele frequencies correlate in this way the resulting groupings are often called clusters. Clustering analyses are less powerful than lineages because they cannot tell a historical story; they can only estimate the proportion of a person's ancestry deriving from any given large geographical region. These tests use informative alleles called ancestry-informative markers (AIMs), which, although shared across all human populations, vary a great deal in frequency between groups of people living in geographically distant parts of the world. The tests use contemporary people sampled from certain parts of the world as references to determine the likely proportion of ancestry for any given individual. In a Public Broadcasting Service (PBS) programme on the subject of genetic ancestry testing, the academic Henry Louis Gates "wasn't thrilled with the results (it turns out that 50 percent of his ancestors are likely European)". Charles Rotimi, of Howard University's National Human Genome Center, is one of many who have highlighted the methodological flaws in such research - that "the nature or appearance of genetic clustering (grouping) of people is a function of how populations are sampled, of how criteria for boundaries between clusters are set, and of the level of resolution used" all bias the results - and concluded that people should be very cautious about relating genetic lineages or clusters to their own sense of identity (see also the discussion of clustering analyses above).
Thus, in analyses that assign individuals to groups it becomes less apparent that self-described racial groups are reliable indicators of ancestry. One cause of the reduced power of the assignment of individuals to groups is admixture. For example, self-described African Americans tend to have a mix of West African and European ancestry. Shriver et al. (2003) found that on average African Americans have ~80% African ancestry. Also, in a survey of college students who self-identified as “white” in a northeastern U.S. university, ~30% of whites had less than 90% European ancestry.
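One simple way ancestry proportions of the kind described above can be estimated is by maximum likelihood: given reference allele frequencies at a panel of AIMs for two source populations, the admixture proportion m that best explains an individual's genotypes maximizes the binomial likelihood of those genotypes under the mixed frequencies m*p1 + (1 - m)*p2. The sketch below is only illustrative; the marker frequencies and the genotype vector are invented, and commercial tests use far larger panels and more sophisticated models.

import numpy as np

def admixture_mle(genotypes, p_pop1, p_pop2, grid=np.linspace(0, 1, 1001)):
    """Maximum-likelihood admixture proportion from source population 1.

    genotypes: copies (0, 1 or 2) of the reference allele at each AIM.
    p_pop1, p_pop2: reference allele frequencies in the two source populations.
    """
    g = np.asarray(genotypes)
    p1, p2 = np.asarray(p_pop1), np.asarray(p_pop2)
    best_m, best_ll = 0.0, -np.inf
    for m in grid:
        p_mix = np.clip(m * p1 + (1 - m) * p2, 1e-6, 1 - 1e-6)
        # Binomial(2, p_mix) log-likelihood summed over loci; the binomial
        # coefficient is constant in m and can be omitted.
        ll = np.sum(g * np.log(p_mix) + (2 - g) * np.log(1 - p_mix))
        if ll > best_ll:
            best_m, best_ll = m, ll
    return best_m

# Invented example: 8 AIMs with strongly differing reference frequencies,
# and a genotype vector for one hypothetical individual.
p_pop1 = np.array([0.90, 0.80, 0.85, 0.10, 0.95, 0.20, 0.90, 0.15])
p_pop2 = np.array([0.10, 0.20, 0.15, 0.90, 0.05, 0.80, 0.10, 0.85])
genotypes = np.array([2, 1, 2, 1, 2, 0, 2, 0])

print(f"estimated ancestry from population 1: "
      f"{admixture_mle(genotypes, p_pop1, p_pop2):.2f}")

With only a handful of markers the estimate carries wide uncertainty, which is one reason the results of such tests, as noted above, depend heavily on panel size and on which reference populations are chosen.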
Stephan Palmie has responded to Abu el-Haj's claim that genetic lineages make possible a new, politically, economically, and socially benign notion of race and racial difference by suggesting that efforts to link genetic history and personal identity will inevitably "ground present social arrangements in a time-hallowed past," that is, use biology to explain cultural differences and social inequalities.
Researchers have reported differences in the average IQ test scores of various ethnic groups. The interpretation, causes, accuracy and reliability of these differences are highly controversial. Some researchers, such as Arthur Jensen, Richard Herrnstein, and Richard Lynn, have argued that such differences are at least partially genetic. Others, for example Thomas Sowell, argue that the differences largely owe to social and economic inequalities. Still others, such as Stephen Jay Gould and Richard Lewontin, have argued that categories such as "race" and "intelligence" are cultural constructs that render any attempt to explain such differences (whether genetically or sociologically) meaningless.
The Flynn effect is the rise of average Intelligence Quotient (IQ) test scores over time, an effect seen in most parts of the world, although at varying rates. Many scholars believe that the rapid increases in average IQ seen in many places are much too fast to result from changes in brain physiology and are more likely the result of environmental changes. That environment has such a significant effect on IQ undermines the case for using IQ data as a source of genetic information.
There is an active debate among biomedical researchers about the meaning and importance of race in their research. The primary impetus for considering race in biomedical research is the possibility of improving the prevention and treatment of diseases by predicting hard-to-ascertain factors on the basis of more easily ascertained characteristics. Some have argued that in the absence of cheap and widespread genetic tests, racial identification is the best way to predict risk for certain diseases, such as cystic fibrosis, lactose intolerance, Tay-Sachs disease and sickle cell anemia, which are genetically linked and more prevalent in some populations than others. The best-known examples of genetically determined disorders that vary in incidence among populations are sickle cell disease, thalassaemia, and Tay-Sachs disease.
There has been criticism of associating disorders with race. For example, in the United States sickle cell is typically associated with black people, but the trait is also found in people of Mediterranean, Middle Eastern or Indian ancestry. The sickle cell trait offers some resistance to malaria, so in regions where malaria is present sickle cell has been positively selected and consequently the proportion of people carrying it is greater. Therefore, it has been argued that sickle cell should not be associated with a particular race, but rather with having ancestors who lived in a malaria-prone region. Africans living in areas where there is no malaria, such as the East African highlands, have a prevalence of sickle cell as low as that of parts of Northern Europe.
Another example of the use of race in medicine is the recent U.S. FDA approval of BiDil, a medication for congestive heart failure targeted at black people in the United States. Several researchers have questioned the scientific basis for arguing the merits of a medication based on race, however. As Stephan Palmie has recently pointed out, black Americans were disproportionately affected by Hurricane Katrina, but for social and not climatological reasons; similarly, certain diseases may disproportionately affect different races, but not for biological reasons. Several researchers have suggested that BiDil was re-designated as a medicine for a race-specific illness because its manufacturer, Nitromed, needed to propose a new use for an existing medication in order to justify an extension of its patent and thus monopoly on the medication, not for pharmacological reasons.
Gene flow and intermixture also have an effect on predicting a relationship between race and "race linked disorders". Multiple sclerosis is typically associated with people of European descent and is of low risk to people of African descent. However, due to gene flow between the populations, African Americans have elevated levels of MS relative to Africans. Notable African Americans affected by MS include Richard Pryor and Montel Williams. As populations continue to mix, the role of socially constructed races may diminish in identifying diseases.
In an attempt to provide general descriptions that may facilitate the job of law enforcement officers seeking to apprehend suspects, the United States FBI employs the term "race" to summarize the general appearance (skin color, hair texture, eye shape, and other such easily noticed characteristics) of individuals whom they are attempting to apprehend. From the perspective of law enforcement officers, it is generally more important to arrive at a description that will readily suggest the general appearance of an individual than to make a scientifically valid categorization by DNA or other such means. Thus, in addition to assigning a wanted individual to a racial category, such a description will include height, weight, eye color, scars and other distinguishing characteristics. Scotland Yard uses a classification based on the ethnic background of British society: W1 (White-British), W2 (White-Irish), W9 (Any other white background); M1 (White and black Caribbean), M2 (White and black African), M3 (White and Asian), M9 (Any other mixed background); A1 (Asian-Indian), A2 (Asian-Pakistani), A3 (Asian-Bangladeshi), A9 (Any other Asian background); B1 (Black Caribbean), B2 (Black African), B3 (Any other black background); O1 (Chinese), O9 (Any other). Some of the characteristics that constitute these groupings are biological and some are learned (cultural, linguistic, etc.) traits that are easy to notice.
In many countries, such as France, the state is legally banned from maintaining data based on race, which often means that the police issue wanted notices to the public that include labels like "dark skin complexion". One factor that encourages this kind of circuitous wording is the controversy over the actual relationship between crimes, their assigned punishments, and the division of people into so-called "races", which leads officials to de-emphasize the alleged race of suspects. In the United States, the practice of racial profiling has been ruled to be both unconstitutional and a violation of civil rights. There is active debate regarding the cause of the marked correlation between recorded crimes, punishments meted out, and the country's "racially divided" people. Many consider de facto racial profiling an example of institutional racism in law enforcement. The history of misuse of racial categories to adversely impact one or more groups and/or to offer protection and advantage to another has a clear impact on the debate over the legitimate use of known phenotypical or genotypical characteristics tied to the presumed race of both victims and perpetrators by the government.
More recent work in racial taxonomy based on DNA cluster analysis (see Lewontin's Fallacy) has led law enforcement to narrow their search for individuals based on a range of phenotypical characteristics found consistent with DNA evidence.
While controversial, DNA analysis has been successful in helping police identify both victims and perpetrators by giving an indication of what phenotypical characteristics to look for and what community the individual may have lived in. For example, in one case phenotypical characteristics suggested that the friends and family of an unidentified victim would be found among the Asian community, but the DNA evidence directed official attention to missing Native Americans, where her true identity was eventually confirmed. In an attempt to avoid potentially misleading associations suggested by the word "race," this classification is called "biogeographical ancestry" (BGA), but the terms for the BGA categories are similar to those used for race. The difference is that ancestry-informative DNA markers identify continent-of-ancestry admixture, not ethnic self-identity, and provide a wide range of phenotypical characteristics such that some people in a biogeographical category will not match the stereotypical image of an individual belonging to the corresponding race. To facilitate the work of officials trying to find individuals based on the evidence of their DNA traces, firms providing the genetic analyses also provide photographs showing a full range of phenotypical characteristics of people in each biogeographical group. Of special interest to officials trying to find individuals on the basis of DNA samples that indicate a diverse genetic background is what range of phenotypical characteristics people with that general mixture of genotypical characteristics may display.
Similarly, forensic anthropologists draw on highly heritable morphological features of human remains (e.g. cranial measurements) in order to aid in the identification of the body, including in terms of race. In a recent article, anthropologist Norman Sauer asked, "if races don't exist, why are forensic anthropologists so good at identifying them?" Sauer observed that the use of 19th century racial categories is widespread among forensic anthropologists.
According to Sauer, "The assessment of these categories is based upon copious amounts of research on the relationship between biological characteristics of the living and their skeletons." Nevertheless, he agrees with other anthropologists that race is not a valid biological taxonomic category, and that races are socially constructed. He argued there is nevertheless a strong relationship between the phenotypic features forensic anthropologists base their identifications on, and popular racial categories. Thus, he argued, forensic anthropologists apply a racial label to human remains because their analysis of physical morphology enables them to predict that when the person was alive, that particular racial label would have been applied to them.
| http://www.reference.com/browse/human%20beings | 13
22 | “Computer simulation is defined as having the following two key features:
- There is a computer model of a real or theoretical system that contains information on how the system behaves.
- Experimentation can take place, i.e. changing the input to the model affects the output.
As a numerical model of a system, presented for a learner to manipulate and explore, simulations can provide a rich learning experience for the student. They can be a powerful resource for teaching: providing access to environments which may otherwise be too dangerous, or impractical due to size or time constraints; and facilitating visualisation of dynamic or complex behaviour.” (Thomas and Milligan, 2004)
See also simulation (list of other types)
Simulation in education
Simulations can be considered a variant of cognitive tools, i.e. they allow students to test hypotheses and, more generally, to explore "what-if" scenarios. In addition, they can enable learners to ground cognitive understanding of their actions in a situation (Thomas and Milligan, 2004; Laurillard, 1993). In that respect simulations are compatible with a constructivist view of education.
Most authors seem to agree that the use of simulations needs to be pedagogically scaffolded. “Research shows that the educational benefits of simulations are not automatically gained and that care must be taken in many aspects of simulation design and presentation. It is not sufficient to provide learners with simulations and expect them to engage with the subject matter and build their own understanding by exploring, devising and testing hypotheses.” (Thomas and Milligan, 2004: 2). The principal caveat of simulations is that students engage with the interface rather than with the underlying model (Davies, 2002). This is also called the video gaming effect.
Various methods can be used, e.g.:
- the simulation itself can provide feedback and guidance in the form of hints
- Human experts (teachers, coaches, guides), peers or electronic help can provide assistance using the system.
- Simulation activities can be strongly scaffolded, e.g. by providing built-in mechanisms for hypothesis formulation (e.g. as in guided discovery learning simulation)
- Simulation activities can be coached by humans
The inquiry learning perspective
“Inquiry learning is defined as "an approach to learning that involves a process of exploring the natural or material world, and that leads to asking questions, making discoveries, and rigorously testing those discoveries in the search for new understanding" (National Science Foundation, 2000). This means that students adopt a scientific approach and make their own discoveries; they generate knowledge by activating and restructuring knowledge schemata (Mayer, 2004)). Inquiry learning environments also ask students to take initiative in the learning process and can be offered in a naturally collaborative setting with realistic material.” (De Jong, 2006).
According to the brochure What do we know about computer simulations?, common characteristics of educational computer simulations are:
- Model Based: Simulations are based on a model. This means that the calculations and rules operating the simulation are programmed. These calculations and rules are collectively called "the model", and it determines the behavior of the simulation depending on user actions.
- Interactive: Learners work interactively with a simulation's model, providing input and then observing how the simulation's variables change in response.
- Interface driven: The changes that learners make to the input variables, and the resulting changes they observe in the output, are all made through the simulation's interface.
- Scaffolded: Simulations designed for education should have supports or scaffolds to assist students in making the learning experience effective. Examples include step-by-step directions or small assignments that break the task down for students while they work with a simulation.
- JeLSIM - Java eLearning SIMulations.Jelsim Builder is a tool for the rapid production of interactive simulations (Jelsims).
- some multi-purpose cognitive/classroom tools like Freestyler may have embedded simulations tools.
(needs additions !)
Introductions and Overviews
- Computer simulation (Wikipedia)
- Kaleidoscope Network of Excellence for Technology Enhanced Learning (2007). What do we know about computer simulations?, PDF (based on a Dutch brochure written by Ton de Jong and Wouter van Joolingen).
- Davies, C. H. J. (2002). "Student engagement with simulations." Computers and Education 39: 271-282.
- De Jong, Ton (2006) Computer Simulations: Technological Advances in Inquiry Learning, Science 28 April 2006 312: 532-533 DOI: 10.1126/science.1127750
- De Jong, T. (2006b). Scaffolds for computer simulation based scientific discovery learning. In J. Elen & R. E. Clark (Eds.), Dealing with complexity in learning environments (pp. 107-128). London: Elsevier Science Publishers.
- de Jong, Ton; van Joolingen, Wouter R. (1998). Scientific Discovery Learning with Computer Simulations of Conceptual Domains, Review of Educational Research, Vol. 68, pp. 179-201.
- Gijlers, H. (2005). Confrontation and co-construction; exploring and supporting collaborative scientific discovery learning with computer simulations. University of Twente, Enschede.
- David Guralnick, Christine Levy, Putting the Education into Educational Simulations: Pedagogical Structures, Guidance and Feedback, International Journal of Advanced Corporate Learning (iJAC), Vol 2, No 1 (2009) Abstract/PDF (Open access journal).
- Hickey, D. T., & Zuiker, S. (2003). A new perspective for evaluating innovative science learning environments. Science Education, 87, 539-563.
- Jackson, S., Stratford, S., Krajcik, J., & Soloway, E. (1996). Making dynamic modeling accessible to pre-college science students. Interactive Learning Environments, 4, 233-257.
- Ketelhut, D. J., Dede, C., Clarke, J., & Soloway, E. (1996). A multiuser virtual environment for building higher order inquiry skills in science. Paper presented at the American Educational Research Association, San Francisco.
- Lee, J. (1999). "Effectiveness of computer-based instructional simulation: a meta analysis." International Journal of Instructional Media 26(1): 71-85
- Laurillard, D. (1993). Rethinking University Education: a framework for effective use of educational technology, Routledge.
- Mayer, R. E. (2004), Should there be a three strikes rule against pure discovery? The case for guided methods of instruction. Am. Psych. 59 (14).
- National Science Foundation, in Foundations: Inquiry: Thoughts, Views, and Strategies for the K-5 Classroom (NSF, Arlington, VA, 2000), vol. 2, pp. 1-5 HTML.
- Parush, A., Hamm, H. & Shtub, A. (2002). "Learning histories in simulation-based teaching: the effects on self learning and transfer." Computers and Education 39: 319-332.
- Reigeluth, C. & Schwartz, E. (1989). "An instructional theory for the design of computer-based simulation." Journal of Computer-Based Instruction 16(1): 1-10.
- Swaak, J. (1998). What-if: Discovery simulations and assessment of intuitive knowledge. Unpublished PhD, University of Twente, Enschede.
- Swaak, J., Van Joolingen, W. R., & De Jong, T. (1998). Supporting simulation-based learning; the effects of model progression and assignments on definitional and intuitive knowledge. Learning and Instructions, 8, 235-253.
- Thomas, R.C. and Milligan, C.D. (2004). Putting Teachers in the Loop: Tools for Creating and Customising Simulations. Journal of Interactive Media in Education (Designing and Developing for the Disciplines Special Issue), 2004 (15). ISSN:1365-893X http://www-jime.open.ac.uk/2004/15
- Van Joolingen, W. R., & De Jong, T. (1991). Characteristics of simulations for instructional settings. Education & Computing, 6, 241-262.
- Van Joolingen, W. R., & De Jong, T. (2003). Simquest: Authoring educational simulations. In T. Murray, S. Blessing & S. Ainsworth (Eds.), Authoring tools for advanced technology educational software: Toward cost-effective production of adaptive, interactive, and intelligent educational software (pp. 1-31). Dordrecht: Kluwer Academic Publishers.
- Van Joolingen, W. R., De Jong, T., Lazonder, A. W., Savelsbergh, E. R., & Manlove, S. (2005). Co-lab: Research and development of an online learning environment for collaborative scientific discovery learning. Computers in Human Behavior, 21, 671-688.
- Van Joolingen, W.R. and King, S. and Jong de, T. (1997) The SimQuest authoring system for simulation-based discovery learning. In: B. du Boulay & R. Mizoguchi (Eds.), Artificial intelligence and education: Knowledge and media in learning systems. IOS Press, Amsterdam, pp. 79-86. PDF
- White, B., & Frederiksen, J. (1998). Inquiry, modeling, and metacognition: Making science accessible to all students. Cognition and Instruction, 16, 3-118. | http://edutechwiki.unige.ch/en/Computer_simulation | 13 |
16 | Computers are often used to automate repetitive tasks. Repeating identical or similar tasks without making errors is something that computers do well and people do poorly.
Repeated execution of a sequence of statements is called iteration. Because iteration is so common, Python provides several language features to make it easier. We’ve already seen the for statement in Chapter 3. This is a very common form of iteration in Python. In this chapter we are also going to look at the while statement — another way to have your program do iteration.
Recall that the for loop processes each item in a list. Each item in turn is (re-)assigned to the loop variable, and the body of the loop is executed. We saw this example in an earlier chapter.
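The interactive listing from the original page is not reproduced in this text. The short loop below is a stand-in with an invented guest list, in the spirit of the party-invitation example referred to later in this chapter.

for name in ["Joe", "Amy", "Brad", "Zuki"]:
    print("Hi", name, "- please come to my party on Saturday!")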
We have also seen iteration paired with the update idea to form the accumulator pattern. For example, to compute the sum of the first n integers, we could create a for loop using the range to produce the numbers 1 thru n. Using the accumulator pattern, we can start with a running total and on each iteration, add the current value of the loop variable. A function to compute this sum is shown below.
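The listing itself is missing here, so the following sketch reconstructs it from the description in the next two paragraphs; the names theSum, aNumber, and aBound come from that description, while the function name sumTo is only an illustrative choice.

def sumTo(aBound):
    """Return the sum of the integers 1 through aBound."""
    theSum = 0
    for aNumber in range(1, aBound + 1):
        theSum = theSum + aNumber
    return theSum

print(sumTo(4))      # 10
print(sumTo(1000))   # 500500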
To review, the variable theSum is called the accumulator. It is initialized to zero before we start the loop. The loop variable, aNumber will take on the values produced by the range(1,aBound+1) function call. Note that this produces all the integers starting from 1 up to the value of aBound. If we had not added 1 to aBound, the range would have stopped one value short since range does not include the upper bound.
The assignment statement, theSum = theSum + aNumber, updates theSum each time thru the loop. This accumulates the running total. Finally, we return the value of the accumulator.
There is another Python statement that can also be used to build an iteration. It is called the while statement. The while statement provides a much more general mechanism for iterating. Similar to the if statement, it uses a boolean expression to control the flow of execution. The body of while will be repeated as long as the controlling boolean expression evaluates to True.
The following figure shows the flow of control.
We can use the while loop to create any type of iteration we wish, including anything that we have previously done with a for loop. For example, the program in the previous section could be rewritten using while. Instead of relying on the range function to produce the numbers for our summation, we will need to produce them ourselves. To do this, we will create a variable called aNumber and initialize it to 1, the first number in the summation. Every iteration will add aNumber to the running total until all the values have been used. In order to control the iteration, we must create a boolean expression that evaluates to True as long as we want to keep adding values to our running total. In this case, as long as aNumber is less than or equal to the bound, we should keep going.
Here is a new version of the summation program that uses a while statement.
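Again the interactive listing does not survive in this text; this sketch follows the description in the next paragraph, keeping the same variable names as the for version above.

def sumTo(aBound):
    """Return the sum of the integers 1 through aBound, using a while loop."""
    theSum = 0
    aNumber = 1
    while aNumber <= aBound:
        theSum = theSum + aNumber
        aNumber = aNumber + 1
    return theSum

print(sumTo(4))      # 10
print(sumTo(1000))   # 500500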
You can almost read the while statement as if it were in natural language. It means, while aNumber is less than or equal to aBound, continue executing the body of the loop. Within the body, each time, update theSum using the accumulator pattern and increment aNumber. After the body of the loop, we go back up to the condition of the while and reevaluate it. When aNumber becomes greater than aBound, the condition fails and flow of control continues to the return statement.
The same program in codelens will allow you to observe the flow of execution.
The names of the variables have been chosen to help readability.
More formally, here is the flow of execution for a while statement:
1. Evaluate the condition, yielding False or True.
2. If the condition is False, exit the while statement and continue execution at the next statement after the body.
3. If the condition is True, execute each of the statements in the body and then go back to step 1.
The body consists of all of the statements below the header with the same indentation.
This type of flow is called a loop because the third step loops back around to the top. Notice that if the condition is False the first time through the loop, the statements inside the loop are never executed.
The body of the loop should change the value of one or more variables so that eventually the condition becomes False and the loop terminates. Otherwise the loop will repeat forever. This is called an infinite loop. An endless source of amusement for computer scientists is the observation that the directions on shampoo, lather, rinse, repeat, are an infinite loop.
In the case shown above, we can prove that the loop terminates because we know that the value of aBound is finite, and we can see that the value of aNumber increments each time through the loop, so eventually it will have to exceed aBound. In other cases, it is not so easy to tell.
Introduction of the while statement causes us to think about the types of iteration we have seen. The for statement will always iterate through a sequence of values like the list of names for the party or the list of numbers created by range. Since we know that it will iterate once for each value in the collection, it is often said that a for loop creates a definite iteration because we definitely know how many times we are going to iterate. On the other hand, the while statement is dependent on a condition that needs to evaluate to False in order for the loop to terminate. Since we do not necessarily know when this will happen, it creates what we call indefinite iteration. Indefinite iteration simply means that we don’t know how many times we will repeat but eventually the condition controlling the iteration will fail and the iteration will stop. (Unless we have an infinite loop which is of course a problem)
What you will notice here is that the while loop is more work for you — the programmer — than the equivalent for loop. When using a while loop you have to control the loop variable yourself. You give it an initial value, test for completion, and then make sure you change something in the body so that the loop terminates.
So why have two kinds of loop if for looks easier? This next example shows an indefinite iteration where we need the extra power that we get from the while loop.
Check your understanding
7.2.1: True or False: You can rewrite any for-loop as a while-loop.
7.2.2: The following code contains an infinite loop. Which is the best explanation for why the loop does not terminate?
n = 10
answer = 1
while ( n > 0 ):
    answer = answer + n
    n = n + 1
print(answer)
Suppose we want to entertain ourselves by watching a turtle wander around randomly inside the screen. When we run the program we want the turtle and program to behave in the following way:
Notice that we cannot predict how many times the turtle will need to flip the coin before it wanders out of the screen, so we can’t use a for loop in this case. In fact, although very unlikely, this program might never end, that is why we call this indefinite iteration.
So based on the problem description above, we can outline a program as follows:
create a window and a turtle
while the turtle is still in the window:
    generate a random number between 0 and 1
    if the number == 0 (heads):
        turn left
    else:
        turn right
    move the turtle forward 50
Now, probably the only thing that seems a bit confusing to you is the part about whether or not the turtle is still in the screen. But this is the nice thing about programming: we can delay the tough stuff and get something in our program working right away. The way we are going to do this is to delegate the work of deciding whether the turtle is still in the screen or not to a boolean function. Let's call this boolean function isInScreen. We can write a very simple version of this boolean function by having it always return True, or by having it decide randomly; the point is to have it do something simple so that we can focus on the parts we already know how to do well and get them working. Since having it always return True would not be a good idea, we will write our version to decide randomly. Let's say that there is a 90% chance the turtle is still in the window and a 10% chance that the turtle has escaped.
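A sketch of that first, "guessing" version is shown below. It assumes the standard turtle and random modules; the 90% figure is implemented by comparing random.random() against 0.1.

import random
import turtle

def isInScreen(wn, t):
    # Placeholder: report "still in the window" about 90% of the time.
    if random.random() > 0.1:
        return True
    else:
        return False

t = turtle.Turtle()
wn = turtle.Screen()
t.shape('turtle')

while isInScreen(wn, t):
    coin = random.randrange(0, 2)   # flip a coin: 0 or 1
    if coin == 0:                   # heads
        t.left(90)
    else:                           # tails
        t.right(90)
    t.forward(50)

wn.exitonclick()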
Now we have a working program that draws a random walk of our turtle that has a 90% chance of staying on the screen. We are in a good position, because a large part of our program is working and we can focus on the next bit of work – deciding whether the turtle is inside the screen boundaries or not.
We can find out the width and the height of the screen using the window_width and window_height methods of the screen object. However, remember that the turtle starts at position 0,0 in the middle of the screen. So we never want the turtle to go farther right than width/2 or farther left than negative width/2. We never want the turtle to go further up than height/2 or further down than negative height/2. Once we know what the boundaries are we can use some conditionals to check the turtle position against the boundaries and return False if the turtle is outside or True if the turtle is inside.
Once we have computed our boundaries we can get the current position of the turtle and then use conditionals to decide. Here is one implementation:
def isInScreen(wn, t):
    leftBound = -wn.window_width() / 2
    rightBound = wn.window_width() / 2
    topBound = wn.window_height() / 2
    bottomBound = -wn.window_height() / 2

    turtleX = t.xcor()
    turtleY = t.ycor()

    stillIn = True
    if turtleX > rightBound or turtleX < leftBound:
        stillIn = False
    if turtleY > topBound or turtleY < bottomBound:
        stillIn = False

    return stillIn
There are lots of ways that the conditional could be written. In this case we have given stillIn the default value of True and use two if statements to set the value to False. You could rewrite this to use nested conditionals or elif statements and set stillIn to True in an else clause.
Here is the full version of our random walk program.
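The original interactive program is not included in this text; the sketch below combines the isInScreen function above with the main loop from the earlier outline, again assuming the standard turtle and random modules.

import random
import turtle

def isInScreen(wn, t):
    leftBound = -wn.window_width() / 2
    rightBound = wn.window_width() / 2
    topBound = wn.window_height() / 2
    bottomBound = -wn.window_height() / 2

    turtleX = t.xcor()
    turtleY = t.ycor()

    stillIn = True
    if turtleX > rightBound or turtleX < leftBound:
        stillIn = False
    if turtleY > topBound or turtleY < bottomBound:
        stillIn = False

    return stillIn

t = turtle.Turtle()
wn = turtle.Screen()
t.shape('turtle')

while isInScreen(wn, t):
    coin = random.randrange(0, 2)   # flip a coin: 0 or 1
    if coin == 0:                   # heads
        t.left(90)
    else:                           # tails
        t.right(90)
    t.forward(50)

wn.exitonclick()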
We could have written this program without using a boolean function. You might try to rewrite it using a complex condition on the while statement, but using a boolean function makes the program much more readable and easier to understand. It also gives us another tool to use if this were a larger program and we needed to check whether the turtle was still in the screen in another part of the program. Another advantage is that if you ever need to write a similar program, you can reuse this function with confidence the next time you need it. Breaking up this program into a couple of parts is another example of functional decomposition.
Check your understanding
7.3.1: Which type of loop can be used to perform the following iteration: You choose a positive integer at random and then print the numbers from 1 up to and including the selected integer.
7.3.2: In the random walk program in this section, what does the isInScreen function do?
As another example of indefinite iteration, let’s look at a sequence that has fascinated mathematicians for many years. The rule for creating the sequence is to start from some given n, and to generate the next term of the sequence from n, either by halving n, whenever n is even, or else by multiplying it by three and adding 1 when it is odd. The sequence terminates when n reaches 1.
This Python function captures that algorithm. Try running this program several times supplying different values for n.
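The function is missing from this text; the following reconstruction matches the description in the next two paragraphs, and the name seq3np1 is only an illustrative choice.

def seq3np1(n):
    """Print the 3n+1 sequence starting from n, stopping when it reaches 1."""
    while n != 1:
        print(n)
        if n % 2 == 0:        # n is even
            n = n // 2
        else:                 # n is odd
            n = n * 3 + 1
    print(n)                  # the last value printed is 1

seq3np1(3)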
The condition for this loop is n != 1. The loop will continue running until n == 1 (which will make the condition false).
Each time through the loop, the program prints the value of n and then checks whether it is even or odd using the remainder operator. If it is even, the value of n is divided by 2 using integer division. If it is odd, the value is replaced by n * 3 + 1. Try some other examples.
Since n sometimes increases and sometimes decreases, there is no obvious proof that n will ever reach 1, or that the program terminates. For some particular values of n, we can prove termination. For example, if the starting value is a power of two, then the value of n will be even each time through the loop until it reaches 1.
You might like to have some fun and see if you can find a small starting number that needs more than a hundred steps before it terminates.
Particular values aside, the interesting question is whether we can prove that this sequence terminates for all values of n. So far, no one has been able to prove it or disprove it!
Think carefully about what would be needed for a proof or disproof of the hypothesis “All positive integers will eventually converge to 1”. With fast computers we have been able to test every integer up to very large values, and so far, they all eventually end up at 1. But this doesn’t mean that there might not be some as-yet untested number which does not reduce to 1.
You’ll notice that if you don’t stop when you reach one, the sequence gets into its own loop: 1, 4, 2, 1, 4, 2, 1, 4, and so on. One possibility is that there might be other cycles that we just haven’t found.
Choosing between for and while
Use a for loop if you know the maximum number of times that you’ll need to execute the body. For example, if you’re traversing a list of elements, or can formulate a suitable call to range, then choose the for loop.
So any problem like “iterate this weather model run for 1000 cycles”, or “search this list of words”, “check all integers up to 10000 to see which are prime” suggest that a for loop is best.
By contrast, if you are required to repeat some computation until some condition is met, as we did in this 3n + 1 problem, you’ll need a while loop.
As we noted before, the first case is called definite iteration — we have some definite bounds for what is needed. The latter case is called indefinite iteration — we are not sure how many iterations we’ll need — we cannot even establish an upper bound!
Check your understanding
7.4.1: Consider the code that prints the 3n+1 sequence in ActiveCode box 6. Will the while loop in this code always terminate for any value of n?
Loops are often used in programs that compute numerical results by starting with an approximate answer and iteratively improving it.
For example, one way of computing square roots is Newton’s method. Suppose that you want to know the square root of n. If you start with almost any approximation, you can compute a better approximation with the following formula:
better = 1/2 * (approx + n/approx)
Execute this algorithm a few times using your calculator. Can you see why each iteration brings your estimate a little closer? One of the amazing properties of this particular algorithm is how quickly it converges to an accurate answer.
The following implementation of Newton’s method requires two parameters. The first is the value whose square root will be approximated. The second is the number of times to iterate the calculation yielding a better result.
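The listing does not survive here; this sketch follows the description above and the discussion in the next paragraph. The starting guess of n/2 and the iteration counts in the example calls are illustrative assumptions.

def newtonSqrt(n, howmany):
    """Approximate the square root of n using howmany iterations of Newton's method."""
    approx = 0.5 * n                          # initial guess (an assumption of this sketch)
    for i in range(howmany):
        approx = 0.5 * (approx + n / approx)  # the "better" formula shown above
    return approx

print(newtonSqrt(10, 3))
print(newtonSqrt(10, 5))
print(newtonSqrt(10, 10))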
You may have noticed that the second and third calls to newtonSqrt in the previous example both returned the same value for the square root of 10. Using 10 iterations instead of 5 did not improve the value. In general, Newton's algorithm will eventually reach a point where the new approximation is no better than the previous. At that point, we could simply stop. In other words, by repeatedly applying this formula until the better approximation gets close enough to the previous one, we can write a function for computing the square root that uses the number of iterations necessary and no more.
This implementation, shown in codelens, uses a while condition to execute until the approximation is no longer changing. Each time thru the loop we compute a “better” approximation using the formula described earlier. As long as the “better” is different, we try again. Step thru the program and watch the approximations get closer and closer.
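A sketch of that while version, reconstructed from the description above (and subject to the floating point caveat discussed in the next paragraph):

def newtonSqrt(n):
    """Approximate the square root of n, stopping when the estimate stops changing."""
    approx = 0.5 * n
    better = 0.5 * (approx + n / approx)
    while better != approx:                   # compare floats directly; see the caveat below
        approx = better
        better = 0.5 * (approx + n / approx)
    return better

print(newtonSqrt(10))
print(newtonSqrt(25))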
The while statement shown above uses comparison of two floating point numbers in the condition. Since floating point numbers are themselves approximation of real numbers in mathematics, it is often better to compare for a result that is within some small threshold of the value you are looking for.
Newton’s method is an example of an algorithm: it is a mechanical process for solving a category of problems (in this case, computing square roots).
It is not easy to define an algorithm. It might help to start with something that is not an algorithm. When you learned to multiply single-digit numbers, you probably memorized the multiplication table. In effect, you memorized 100 specific solutions. That kind of knowledge is not algorithmic.
But if you were lazy, you probably cheated by learning a few tricks. For example, to find the product of n and 9, you can write n - 1 as the first digit and 10 - n as the second digit. This trick is a general solution for multiplying any single-digit number by 9. That’s an algorithm!
Similarly, the techniques you learned for addition with carrying, subtraction with borrowing, and long division are all algorithms. One of the characteristics of algorithms is that they do not require any intelligence to carry out. They are mechanical processes in which each step follows from the last according to a simple set of rules.
On the other hand, understanding that hard problems can be solved by step-by-step algorithmic processes is one of the major simplifying breakthroughs that has had enormous benefits. So while the execution of the algorithm may be boring and may require no intelligence, algorithmic or computational thinking is having a vast impact. It is the process of designing algorithms that is interesting, intellectually challenging, and a central part of what we call programming.
Some of the things that people do naturally, without difficulty or conscious thought, are the hardest to express algorithmically. Understanding natural language is a good example. We all do it, but so far no one has been able to explain how we do it, at least not in the form of a step-by-step mechanical algorithm.
One of the things loops are good for is generating tabular data. Before computers were readily available, people had to calculate logarithms, sines and cosines, and other mathematical functions by hand. To make that easier, mathematics books contained long tables listing the values of these functions. Creating the tables was slow and boring, and they tended to be full of errors.
When computers appeared on the scene, one of the initial reactions was, “This is great! We can use the computers to generate the tables, so there will be no errors.” That turned out to be true (mostly) but shortsighted. Soon thereafter, computers and calculators were so pervasive that the tables became obsolete.
Well, almost. For some operations, computers use tables of values to get an approximate answer and then perform computations to improve the approximation. In some cases, there have been errors in the underlying tables, most famously in the table the Intel Pentium processor chip used to perform floating-point division.
Although a power of 2 table is not as useful as it once was, it still makes a good example of iteration. The following program outputs a sequence of values in the left column and 2 raised to the power of that value in the right column:
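The program itself is missing from this text; a minimal reconstruction (the upper bound of 12 is an arbitrary choice) looks like this:

for x in range(13):              # x takes the values 0 through 12
    print(x, '\t', 2 ** x)       # the tab character lines up the two columns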
The string '\t' represents a tab character. The backslash character in '\t' indicates the beginning of an escape sequence. Escape sequences are used to represent invisible characters like tabs and newlines. The sequence \n represents a newline.
An escape sequence can appear anywhere in a string. In this example, the tab escape sequence is the only thing in the string. How do you think you represent a backslash in a string?
As characters and strings are displayed on the screen, an invisible marker called the cursor keeps track of where the next character will go. After a print function, the cursor normally goes to the beginning of the next line.
The tab character shifts the cursor to the right until it reaches one of the tab stops. Tabs are useful for making columns of text line up, as in the output of the previous program. Because of the tab characters between the columns, the position of the second column does not depend on the number of digits in the first column.
Check your understanding
7.7.1: What is the difference between a tab (\t) and a sequence of spaces?
Two dimensional tables have both rows and columns. You have probably seen many tables like this if you have used a spreadsheet program. Another object that is organized in rows and columns is a digital image. In this section we will explore how iteration allows us to manipulate these images.
A digital image is a finite collection of small, discrete picture elements called pixels. These pixels are organized in a two-dimensional grid. Each pixel represents the smallest amount of picture information that is available. Sometimes these pixels appear as small “dots”.
Each image (grid of pixels) has its own width and its own height. The width is the number of columns and the height is the number of rows. We can name the pixels in the grid by using the column number and row number. However, it is very important to remember that computer scientists like to start counting with 0! This means that if there are 20 rows, they will be named 0,1,2, and so on thru 19. This will be very useful later when we iterate using range.
In the figure below, the pixel of interest is found at column c and row r.
Each pixel of the image will represent a single color. The specific color depends on a formula that mixes various amounts of three basic colors: red, green, and blue. This technique for creating color is known as the RGB Color Model. The amount of each color, sometimes called the intensity of the color, allows us to have very fine control over the resulting color.
The minimum intensity value for a basic color is 0. For example if the red intensity is 0, then there is no red in the pixel. The maximum intensity is 255. This means that there are actually 256 different amounts of intensity for each basic color. Since there are three basic colors, that means that you can create 256**3 (16,777,216) distinct colors using the RGB Color Model.
Here are the red, green and blue intensities for some common colors. Note that “Black” is represented by a pixel having no basic color. On the other hand, “White” has maximum values for all three basic color components.
Color     Red    Green   Blue
Red       255    0       0
Green     0      255     0
Blue      0      0       255
White     255    255     255
Black     0      0       0
Yellow    255    255     0
Magenta   255    0       255
In order to manipulate an image, we need to be able to access individual pixels. This capability is provided by a module called image. The image module defines two classes: Image and Pixel.
Each Pixel object has three attributes: the red intensity, the green intensity, and the blue intensity. A pixel provides three methods that allow us to ask for the intensity values. They are called getRed, getGreen, and getBlue. In addition, we can ask a pixel to change an intensity value using its setRed, setGreen, and setBlue methods.
Method Name     Example               Explanation
Pixel(r,g,b)    Pixel(20,100,50)      Create a new pixel with 20 red, 100 green, and 50 blue.
getRed()        r = p.getRed()        Return the red component intensity.
getGreen()      r = p.getGreen()      Return the green component intensity.
getBlue()       r = p.getBlue()       Return the blue component intensity.
setRed()        p.setRed(100)         Set the red component intensity to 100.
setGreen()      p.setGreen(45)        Set the green component intensity to 45.
setBlue()       p.setBlue(156)        Set the blue component intensity to 156.
In the example below, we first create a pixel with 45 units of red, 76 units of green, and 200 units of blue. We then print the current amount of red, change the amount of red, and finally, set the amount of blue to be the same as the current amount of green.
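The listing is not reproduced here; this sketch follows that description, using the Pixel methods from the table above.

import image                      # the textbook's image module

p = image.Pixel(45, 76, 200)      # 45 red, 76 green, 200 blue
print(p.getRed())                 # 45

p.setRed(66)                      # change the amount of red
print(p.getRed())                 # 66

p.setBlue(p.getGreen())           # set blue to the current amount of green
print(p.getBlue())                # 76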
Check your understanding
If you have a pixel whose RGB value is (20, 0, 0), what color will this pixel appear to be?
To access the pixels in a real image, we need to first create an Image object. Image objects can be created in two ways. First, an Image object can be made from the files that store digital images. The image object has an attribute corresponding to the width, the height, and the collection of pixels in the image.
It is also possible to create an Image object that is “empty”. An EmptyImage has a width and a height. However, the pixel collection consists of only “White” pixels.
We can ask an image object to return its size using the getWidth and getHeight methods. We can also get a pixel from a particular location in the image using getPixel and change the pixel at a particular location using setPixel.
The Image class is shown below. Note that the first two entries show how to create image objects. The parameters are different depending on whether you are using an image file or creating an empty image.
Method Name          Example                            Explanation
Image(filename)      img = image.Image("cy.png")        Create an Image object from the file cy.png.
EmptyImage()         img = image.EmptyImage(100,200)    Create an Image object that has all "White" pixels.
getWidth()           w = img.getWidth()                 Return the width of the image in pixels.
getHeight()          h = img.getHeight()                Return the height of the image in pixels.
getPixel(col,row)    p = img.getPixel(35,86)            Return the pixel at column 35, row 86.
setPixel(col,row,p)  img.setPixel(100,50,mp)            Set the pixel at column 100, row 50 to be mp.
Consider the image shown below. Assume that the image is stored in a file called “luther.jpg”. Line 2 opens the file and uses the contents to create an image object that is referred to by img. Once we have an image object, we can use the methods described above to access information about the image or to get a specific pixel and check on its basic color intensities.
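The photo and the interactive listing are not reproduced here; the following sketch matches the description (the Image call is the second line, as referenced above) and the values quoted in the next paragraph.

import image
img = image.Image("luther.jpg")   # open the file and create an Image object

print(img.getWidth())             # 400, as reported in the text
print(img.getHeight())            # 244

p = img.getPixel(45, 55)
print(p.getRed(), p.getGreen(), p.getBlue())   # 165 161 158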
When you run the program you can see that the image has a width of 400 pixels and a height of 244 pixels. Also, the pixel at column 45, row 55, has RGB values of 165, 161, and 158. Try a few other pixel locations by changing the getPixel arguments and rerunning the program.
Check your understanding
In the example in ActiveCode box 10, what are the RGB values of the pixel at row 100, column 30?
Image processing refers to the ability to manipulate the individual pixels in a digital image. In order to process all of the pixels, we need to be able to systematically visit all of the rows and columns in the image. The best way to do this is to use nested iteration.
Nested iteration simply means that we will place one iteration construct inside of another. We will call these two iterations the outer iteration and the inner iteration. To see how this works, consider the simple iteration below.
for i in range(5):
    print(i)
We have seen this enough times to know that the value of i will be 0, then 1, then 2, and so on up to 4. The print will be performed once for each pass. However, the body of the loop can contain any statements including another iteration (another for statement). For example,
for i in range(5):
    for j in range(3):
        print(i,j)
The for i iteration is the outer iteration and the for j iteration is the inner iteration. Each pass thru the outer iteration will result in the complete processing of the inner iteration from beginning to end. This means that the output from this nested iteration will show that for each value of i, all values of j will occur.
Here is the same example in activecode. Try it. Note that the value of i stays the same while the value of j changes. The inner iteration, in effect, is moving faster than the outer iteration.
Another way to see this in more detail is to examine the behavior with codelens. Step thru the iterations to see the flow of control as it occurs with the nested iteration. Again, for every value of i, all of the values of j will occur. You can see that the inner iteration completes before going on to the next pass of the outer iteration.
Our goal with image processing is to visit each pixel. We will use an iteration to process each row. Within that iteration, we will use a nested iteration to process each column. The result is a nested iteration, similar to the one seen above, where the outer for loop processes the rows, from 0 up to but not including the height of the image. The inner for loop will process each column of a row, again from 0 up to but not including the width of the image.
The resulting code will look like the following. We are now free to do anything we wish to each pixel in the image.
for row in range(img.getHeight()):
    for col in range(img.getWidth()):
        # do something with the pixel at position (col,row)
One of the easiest image processing algorithms will create what is known as a negative image. A negative image simply means that each pixel will be the opposite of what it was originally. But what does opposite mean?
In the RGB color model, we can consider the opposite of the red component as the difference between the original red and 255. For example, if the original red component was 50, then the opposite, or negative, red value would be 255-50 or 205. In other words, pixels with a lot of red will have negatives with little red and pixels with little red will have negatives with a lot. We do the same for the blue and green as well.
The program below implements this algorithm using the previous image. Run it to see the resulting negative image. Note that there is a lot of processing taking place and this may take a few seconds to complete. In addition, here are two other images that you can use: cy.png and goldygopher.png. Change the name of the file in the image.Image() call to see how these images look as negatives. Also, note that there is an exitonclick method call at the very end which will close the window when you click on it. This will allow you to "clear the screen" before drawing the next negative.
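The interactive listing is not included in this text. The sketch below reconstructs it from the discussion that follows and is laid out so that the line references in the next few paragraphs (the nested loops, getPixel, the three negative intensities, and the new pixel) fall in roughly the positions described. ImageWin and draw are assumptions about the textbook's image module; the other calls appear in the tables above.

import image

img = image.Image("luther.jpg")
newimg = image.EmptyImage(img.getWidth(), img.getHeight())
win = image.ImageWin()            # assumed window class from the textbook's module

# visit every pixel of the original, row by row
for row in range(img.getHeight()):
    for col in range(img.getWidth()):
        p = img.getPixel(col, row)

        newred = 255 - p.getRed()
        newgreen = 255 - p.getGreen()
        newblue = 255 - p.getBlue()

        newpixel = image.Pixel(newred, newgreen, newblue)

        newimg.setPixel(col, row, newpixel)

newimg.draw(win)                  # assumed draw method: show the result
win.exitonclick()                 # close the window on a click, as noted above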
Let's take a closer look at the code. After importing the image module, we create two image objects. The first, img, represents a typical digital photo. The second, newimg, is an empty image that will be "filled in" as we process the original pixel by pixel. Note that the width and height of the empty image are set to be the same as the width and height of the original.
Lines 8 and 9 create the nested iteration that we discussed earlier. This allows us to process each pixel in the image. Line 10 gets an individual pixel.
Lines 12-14 create the negative intensity values by extracting the original intensity from the pixel and subtracting it from 255. Once we have the newred, newgreen, and newblue values, we can create a new pixel (Line 16).
Finally, we need to insert the new pixel into the empty image in the same location as the original pixel that it came from in the digital photo.
Other pixel manipulation
There are a number of different image processing algorithms that follow the same pattern as shown above. Namely, take the original pixel, extract the red, green, and blue intensities, and then create a new pixel from them. The new pixel is inserted into an empty image at the same location as the original.
For example, you can create a gray scale pixel by averaging the red, green and blue intensities and then using that value for all intensities.
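For instance, the body of the nested loop from the negative-image sketch above could be replaced with something like the following (the variable names are the same illustrative ones used there):

p = img.getPixel(col, row)
avg = (p.getRed() + p.getGreen() + p.getBlue()) // 3   # integer average of the three intensities
newpixel = image.Pixel(avg, avg, avg)                  # use the average for all three components
newimg.setPixel(col, row, newpixel)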
From the gray scale you can create a black and white image by setting a threshold and choosing to insert either a white pixel or a black pixel into the empty image.
You can also do some complex arithmetic and create interesting effects, such as Sepia Tone.
You have just passed a very important point in your study of Python programming. Even though there is much more that we will do, you have learned all of the basic building blocks that are necessary to solve many interesting problems. From an algorithmic point of view, you can now implement selection and iteration. You can also solve problems by breaking them down into smaller parts, writing functions for those parts, and then calling the functions to complete the implementation. What remains is to focus on ways that we can better represent our problems in terms of the data that we manipulate. We will now turn our attention to studying the main data collections provided by Python.
Check your understanding
What will the following nested for-loop print? (Note, if you are having trouble with this question, review CodeLens 3).

for i in range(3):
    for j in range(2):
        print(i,j)

a.
0 0
0 1
1 0
1 1
2 0
2 1

b.
0 0
1 0
2 0
0 1
1 1
2 1

c.
0 0
0 1
0 2
1 0
1 1
1 2

d.
0 1
0 1
0 1
What would the image produced from ActiveCode box 12 look like if you replaced the lines:

newred = 255-p.getRed()
newgreen = 255-p.getGreen()
newblue = 255-p.getBlue()

with the lines:

newred = p.getRed()
newgreen = 0
newblue = 0
If you want to try some image processing on your own, outside of the textbook, you can do so using the cImage module. You can download cImage.py from the project's GitHub page. If you put cImage.py in the same folder as your program, you can then do the following to be fully compatible with the code in this book.
import cImage as image
img = image.Image("myfile.gif")
One important caveat about using cImage.py is that it will only work with GIF files unless you also install the Python Image Library. The easiest version to install is called Pillow. If you have the pip command installed on your computer this is really easy to install with pip install pillow; otherwise you will need to follow the instructions on the Python Package Index page. With Pillow installed you will be able to use almost any kind of image that you download.
This chapter showed us how to sum a list of items, and how to count items. The counting example also had an if statement that let us only count some selected items. In the previous chapter we also showed a function find_first_2_letter_word that allowed us an "early exit" from inside a loop by using return when some condition occurred. We now also have break to exit a loop (but not the enclosing function), and continue to abandon the current iteration of the loop without ending the loop.
Composition of list traversal, summing, counting, testing conditions and early exit is a rich collection of building blocks that can be combined in powerful ways to create many functions that are all slightly different.
The first six questions are typical functions you should be able to write using only these building blocks.
Add a print function to Newton’s sqrt function that prints out better each time it is calculated. Call your modified function with 25 as an argument and record the results.
Write a function print_triangular_numbers(n) that prints out the first n triangular numbers. A call to print_triangular_numbers(5) would produce the following output:
1       1
2       3
3       6
4       10
5       15
(hint: use a web search to find out what a triangular number is.)
Write a function, is_prime, which takes a single integer argument and returns True when the argument is a prime number and False otherwise.
Modify the Random turtle walk program so that the turtle turns around when it hits the wall and goes the other direction. This bouncing off the walls should continue until the turtle has hit the wall 4 times.
Modify the previous program so that you have two turtles each with a random starting location. Keep the turtles moving and bouncing off the walls until they collide with each other.
Modify the previous program so that rather than a left or right turn the angle of the turn is determined randomly at each step. When the turtle hits the wall you must calculate the correct angle for the bounce.
Write a function to remove all the red from an image.
Write a function to convert the image to grayscale.
Write a function to convert an image to black and white.
Sepia Tone images are those brownish colored images that may remind you of times past. The formula for creating a sepia tone is as follows:
newR = (R × 0.393 + G × 0.769 + B × 0.189)
newG = (R × 0.349 + G × 0.686 + B × 0.168)
newB = (R × 0.272 + G × 0.534 + B × 0.131)
Write a function to convert an image to sepia tone. Hint: Remember that rgb values must be integers between 0 and 255.
Write a function to uniformly shrink or enlarge an image. Your function should take an image along with a scaling factor. To shrink the image, the scaling factor should be between 0 and 1; to enlarge the image, the scaling factor should be greater than 1.
Write a function to rotate an image. Your function should take an image object along with the number of degrees to rotate. The rotational degrees can be positive or negative, and should be multiples of 90.
After you have scaled an image too much it looks blocky. One way of reducing the blockiness of the image is to replace each pixel with the average values of the pixels around it. This has the effect of smoothing out the changes in color. Write a function that takes an image as a parameter and smooths the image. Your function should return a new image that is the same as the old but smoothed.
When you scan in images using a scanner they may have lots of noise due to dust particles on the image itself or the scanner itself, or the images may even be damaged. One way of eliminating this noise is to replace each pixel by the median value of the pixels surrounding it.
Research the Sobel edge detection algorithm and implement it. | http://interactivepython.org/courselib/static/thinkcspy/MoreAboutIteration/moreiteration.html | 13 |
22 | In a routing protocol, routing is the process of selecting the paths in a network along which traffic is sent. In network operating systems, the network layer performs the routing function. In TCP/IP, the IP protocol provides the ability to form connections between different physical networks with the help of a routing protocol.
A system that performs this function is called an IP router. This type of device attaches to two or more physical networks and forwards packets between the networks. When sending data to a remote destination, a host passes the packet to a local router.
The router forwards the packet toward the final destination. Packets travel from one router to another until they reach a router connected to the destination's LAN segment. Each router along the end-to-end path selects the next hop device used to reach the destination. The next hop represents the next device along the path to reach the destination.
The next hop is located on a physical network connected to the current intermediate system. Because this physical network differs from the one on which the system originally received the datagram, the intermediate host has forwarded (that is, routed) the packets from one physical network to another.
There are two types of routing:
Static Routing: Static routing uses preprogrammed definitions representing paths through the network. Static routing is manually performed by the network administrator. The administrator is responsible for discovering and propagating routes through the network.
These definitions are manually programmed in every routing device in the environment. After a device has been configured, it simply forwards packets out the predetermined ports. There is no communication between routers regarding the current topology of the network. In small networks with minimal redundancy, this process is relatively simple to administer.
Dynamic Routing: Dynamic routing algorithms allow routers to automatically discover and maintain awareness of the paths through the network. This automatic discovery can use a number of currently available dynamic routing protocols.
Following are the routing algorithms for networks:
- Distance Vector Algorithm
- Link State Algorithm
- Path Vector Algorithm
- Hybrid Algorithm
Distance Vector Routing: Distance vector algorithms use the Bellman-Ford algorithm and are examples of dynamic routing protocols. These algorithms allow each device in the network to automatically build and maintain a local routing table or matrix.
The routing table contains a list of destinations, the total cost to each, and the next hop to use to get there. This approach assigns a number, the cost, to each of the links between the nodes in the network. Nodes send traffic from point A to point B via the path that results in the lowest total cost, i.e. the sum of the costs of the links between the nodes used.
The algorithm operates in a very simple manner. When a node first starts, it only knows of its immediate neighbours, and the direct cost involved in reaching them. The routing table from the each node, on a regular basis, sends its own information to each neighbouring node with current idea of the total cost to get to all the destinations it knows of.
The neighbouring node(s) examine this information, and compare it to what they already 'know'; anything which represents an improvement on what they already have, they insert in their own routing table(s). Over time, all the nodes in the network will discover the best next hop for all destinations, and the best total cost. The main advantage of distance vector algorithms is that they are typically easy to implement and debug. They are very useful in small networks with limited redundancy. When one of the nodes involved goes down, those nodes which used it as their next hop for certain destinations discard those entries, and create new routing-table information.
They then pass this information to all adjacent nodes, which then repeat the process. Eventually all the nodes in the network receive the updated information, and will then discover new paths to all the destinations which they can still "reach".
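As an illustration only (no router implements it exactly this way, and the data structures are invented for the sketch), one round of these table exchanges can be modelled in Python as follows:

def distance_vector_round(tables, links):
    """One exchange round.

    tables maps each node to {destination: (total_cost, next_hop)}.
    links maps each node to {neighbour: link_cost}.
    Returns True if any table changed, so callers can loop until convergence.
    """
    updated = False
    for node, neighbours in links.items():
        for neighbour, link_cost in neighbours.items():
            # Look at every destination the neighbour says it can reach.
            for dest, (cost_via_neighbour, _) in list(tables[neighbour].items()):
                candidate = link_cost + cost_via_neighbour
                best = tables[node].get(dest, (float('inf'), None))[0]
                if candidate < best:
                    tables[node][dest] = (candidate, neighbour)
                    updated = True
    return updated

links = {'A': {'B': 1, 'C': 4}, 'B': {'A': 1, 'C': 2}, 'C': {'A': 4, 'B': 2}}
tables = {node: {node: (0, node)} for node in links}
while distance_vector_round(tables, links):
    pass                        # repeat until no table changes

After convergence, tables['A']['C'] holds (3, 'B'): the cheapest route from A to C costs 3 and uses B as the next hop.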
Link State Routing: A link state is the description of an interface on a router and its relationship to neighboring routers. When applying link-state algorithms, each node uses as its fundamental data a map of the network in the form of a graph.
To produce this, each node floods the entire network with information about what other nodes it can connect to, and each node then independently assembles this information into a map. Using this map, each router then independently determines the least-cost path from itself to every other node using a standard shortest paths algorithm such as Dijkstra's algorithm.
The result is a tree rooted at the current node such that the path through the tree from the root to any other node is the least-cost path to that node. This tree then serves to construct the routing table, which specifies the best next hop to get from the current node to any other node.
Shortest-Path First (SPF) Algorithm: The SPF algorithm is used to process the information in the topology database. It provides a tree representation of the network. The device running the SPF algorithm is the root of the tree.
The output of the algorithm is the list of shortest paths to each destination network. Because each router processes the same set of LSAs (link-state advertisements), each router creates an identical link state database. However, because each device occupies a different place in the network topology, the application of the SPF algorithm produces a different tree for each router.
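For illustration, here is a minimal Dijkstra-style shortest-path-first computation over an invented cost map, assuming positive link costs; it sketches the textbook algorithm, not any router's actual code.

import heapq

def shortest_path_tree(graph, source):
    """graph maps each node to {neighbour: link_cost}.
    Returns {node: (total_cost, previous_hop)} rooted at source."""
    tree = {}
    heap = [(0, source, None)]
    while heap:
        cost, node, prev = heapq.heappop(heap)
        if node in tree:               # already settled at a cheaper or equal cost
            continue
        tree[node] = (cost, prev)
        for neighbour, link_cost in graph.get(node, {}).items():
            if neighbour not in tree:
                heapq.heappush(heap, (cost + link_cost, neighbour, node))
    return tree

graph = {'A': {'B': 1, 'C': 4},
         'B': {'A': 1, 'C': 2, 'D': 5},
         'C': {'A': 4, 'B': 2, 'D': 1},
         'D': {'B': 5, 'C': 1}}
print(shortest_path_tree(graph, 'A'))   # e.g. D is reached for total cost 4 via C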
Path Vector Routing: Distance vector and link state routing are both intra-domain routing protocols. They are used inside an autonomous system, but not between autonomous systems. Both of these routing protocols become intractable in large networks and cannot be used in inter-domain routing. Distance vector routing is subject to instability if there are more than a few hops in the domain. Link state routing needs a huge amount of resources to calculate routing tables. It also creates heavy traffic because of flooding. Path vector routing is used for inter-domain routing. It is similar to distance vector routing.
In path vector routing we assume there is one node (there can be many) in each autonomous system which acts on behalf of the entire autonomous system. This node is called the speaker node. The speaker node creates a routing table and sends information to its neighboring speaker nodes in neighboring autonomous systems. The idea is the same as Distance vector routing except that only speaker nodes in each autonomous system can communicate with each other.
The speaker node advertises the path, not the metrics of the nodes, in its autonomous system or other autonomous systems. The path vector routing algorithm is somewhat similar to the distance vector algorithm in the sense that each border router advertises the destinations it can reach to its neighbouring router. However, instead of advertising networks in terms of a destination and the distance to that destination, networks are advertised as destination addresses together with path descriptions used to reach those destinations.
A route is defined as a pairing between a destination and the attributes of the path to that destination, thus the name, path vector routing, where the routers receive a vector that contains paths to a set of destinations. The path, expressed in terms of the domains traversed so far, is carried in a special path attribute that records the sequence of routing domains through which the reachability information has passed. The path represented by the smallest number of domains becomes the preferred path to reach the destination. The main advantage of a path vector protocol is its flexibility.
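The loop-avoidance and path-comparison ideas can be sketched in a few lines of Python. The AS numbers and the prefer-the-fewest-domains tie-break below are illustrative assumptions, not the rules of any specific protocol.

# A minimal sketch of a path-vector update between speaker nodes.

def merge_path_advertisement(my_as, my_table, neighbor_as, advertised):
    """my_table maps destination -> list of AS numbers (the path).
    advertised maps destination -> the neighbor's path to that destination."""
    changed = False
    for dest, path in advertised.items():
        if my_as in path:
            continue                      # our own AS already appears: loop, reject
        candidate = [neighbor_as] + path
        current = my_table.get(dest)
        if current is None or len(candidate) < len(current):
            my_table[dest] = candidate    # prefer the path through fewer domains
            changed = True
    return changed

table = {"net1": [200, 300]}              # reach net1 via AS 200 then AS 300
merge_path_advertisement(100, table, 400, {"net1": [], "net2": [500, 100]})
print(table)   # net1 now directly via AS 400; net2 rejected since AS 100 is in its path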
Hybrid Routing : These algorithms attempt to combine the positive attributes of both distance vector and link state protocols. Like distance vector protocols, hybrid algorithms use metrics to assign a preference to a route. However, the metrics are more accurate than in conventional distance vector algorithms.
Like link state algorithms, routing updates in hybrid algorithms are event driven rather than periodic. Networks using hybrid algorithms tend to converge more quickly than networks using distance vector protocols. Finally, hybrid algorithms potentially reduce the costs of link state updates and distance vector advertisements. | http://ecomputernotes.com/computernetworkingnotes/routing/routing-protocols | 13 |
23 | This discussion addresses several different aspects of proof and includes many links to additional readings. You may want to jump to the activities, try some out, and then double back to the readings once you have had a chance to reflect on how you approach proofs.
In everyday life, we frequently reach conclusions based on anecdotal evidence. This habit also guides our work in the more abstract realm of mathematics, but mathematics requires us to adopt a greater level of skepticism. Examples, no matter how many, are never a proof of a claim that covers an infinite number of instances.
A proof is a logical argument that establishes the truth of a statement. The argument derives its conclusions from the premises of the statement, other theorems, definitions, and, ultimately, the postulates of the mathematical system in which the claim is based. By logical, we mean that each step in the argument is justified by earlier steps. That is, that all of the premises of each deduction are already established or given. In practice, proofs may involve diagrams that clarify, words that narrate and explain, symbolic statements, or even a computer program (as was the case for the Four Color Theorem (MacTutor)). The level of detail in a proof varies with the author and the audience. Many proofs leave out calculations or explanations that are considered obvious, manageable for the reader to supply, or which are cut to save space or to make the main thread of a proof more readable. In other words, often the overarching objective is the presentation of a convincing narrative.
Postulates are a necessary part of mathematics. We cannot prove any statement if we do not have a starting point. Since we base each claim on other claims, we need a property, stated as a postulate, that we agree to leave unproven. The absence of such starting points would force us into an endless circle of justifications. Similarly, we need to accept certain terms (e.g., "point" or "set") as undefined in order to avoid circularity (see Writing Definitions). In general, however, proofs use justifications many steps removed from the postulates.
Before the nineteenth century, postulates (or axioms) were accepted as true but regarded as self-evidently so. Mathematicians tried to choose statements that seemed irrefutably true: an obvious consequence of our physical world or number system. Now, when mathematicians create new axiomatic systems, they are more concerned that their choices be interesting (in terms of the mathematics to which they lead), logically independent (not redundant or derivable from one another), and internally consistent (theorems which can be proven from the postulates do not contradict each other). (Download Axiomatic Systems (Lee) and see sections 6.1, 8.1, and 8.4 in book 3b of Math Connections (Berlinghoff) for further explanations, activities, and problem sets on axiomatic systems, consistency, and independence). For example, non-Euclidean geometries have been shown to be as consistent as their Euclidean cousin. The equivalence between these systems does not mean that they are free of contradictions, only that each is as dependable as the other. This modern approach to axiomatic systems means that we consider statements to be true only in the context of a particular set of postulates.
To Establish a Fact with Certainty
There are many possible motives for trying to prove a conjecture. The most basic one is to find out if what one thinks is true is actually true. Students are used to us asking them to prove claims that we already know to be true. When students investigate their own research questions, their efforts do not come with a similar guarantee. Their conjecture may not be true or the methods needed may not be accessible. However, the only way that they can be sure that their conjecture is valid, that they have in fact solved a problem, is to come up with a proof.
Students' confidence in a fact comes from many sources. At times, they appeal to an authoritative source as evidence for a claim: "it was in the text" or "Ms. Noether told us this last year." It has been my experience that such justifications carry little practical persuasive value. For example, a class discussed the irrationality of π and proofs of that fact, yet an essay assignment on a proposal to obtain the complete decimal expansion of π still generated student comments such as, "if π eventually turns out not to be irrational then that project would be interesting." Thus, an authoritative claim of proof is only good until some other authority shows otherwise. Mathematical truths do tend to stand the test of time. When students create a proof themselves, they are less likely to think of the result as ephemeral. A proof convinces the prover herself more effectively than it might if generated by someone else.
To Gain Understanding
"I would be grateful if anyone who has understood this demonstration would explain it to me."
Fields Medal winner Pierre Deligne, regarding a theorem that he proved using methods that did not provide insight into the question.
There are proofs that simply prove and those that also illuminate. As in the case of the Deligne quote above, certain proofs may leave one unclear about why a result is true but still confident that it is. Proofs with some explanatory value tend to be more satisfying and appealing. Beyond our interest in understanding a given problem, our work on a proof may produce techniques and understandings that we can apply to broader questions. Even if a proof of a theorem already exists, an alternative proof may reveal new relationships between mathematical ideas. Thus, proof is not just a source of validation, but an essential research technique in mathematics.
If our primary consideration for attempting a proof is to gain insight, we may choose methods and types of representations that are more likely to support that objective. For example, the theorem that the midpoints of the sides of any quadrilateral are the vertices of a parallelogram can be proven algebraically using coordinates or synthetically (figure 1).
Figure 1. The diagrams for coordinate and synthetic proofs
A synthetic proof rests on the fact that the segment connecting the midpoints of two sides of a triangle, the midline, is parallel to the third side. In quadrilateral ABCD (right side of figure 1), the midlines of triangles ABD and CBD are both parallel to the quadrilateral diagonal BD and, therefore, to each other. It is clear that if point C were to move, the midline for triangle BCD would remain parallel to both BD and the midline of triangle ABD. To complete the proof, one would consider the midlines of triangle ADC and triangle ABC as well. The coordinate proof uses the coordinates of the midpoints to show that the slopes of opposite midlines are equal.
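A sceptical reader can check the coordinate argument symbolically. The sketch below uses the sympy library and compares direction vectors rather than slopes (a small substitution of my own, to avoid dividing by zero for vertical sides); it illustrates the coordinate proof rather than replacing it.

# The coordinate argument done symbolically, so it covers every quadrilateral
# at once. The coordinate labels are my own choices.
import sympy as sp

ax, ay, bx, by, cx, cy, dx, dy = sp.symbols("a_x a_y b_x b_y c_x c_y d_x d_y")

def midpoint(p, q):
    return ((p[0] + q[0]) / 2, (p[1] + q[1]) / 2)

A, B, C, D = (ax, ay), (bx, by), (cx, cy), (dx, dy)
P, Q, R, S = midpoint(A, B), midpoint(B, C), midpoint(C, D), midpoint(D, A)

def direction(p, q):
    # direction vector of segment pq
    return (q[0] - p[0], q[1] - p[1])

# Opposite midlines PQ and SR have identical direction vectors, so they are parallel.
print(sp.simplify(direction(P, Q)[0] - direction(S, R)[0]))   # 0
print(sp.simplify(direction(P, Q)[1] - direction(S, R)[1]))   # 0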
For many people, the synthetic proof is more revealing about why any asymmetries of the original quadrilateral do not alter the properties of the inner parallelogram. It also illustrates how a proof can be a research tool by answering other questions, such as "when will the inner quadrilateral be a rhombus?" Because midlines are one half the length of the parallel side, the inner parallelogram will have equal sides only when the diagonals of the original quadrilateral are congruent.
Sometimes our inability to develop a proof is revealing and leads us to reconsider our examples or intuitions. After countless attempts to prove that Euclid's fifth postulate (the parallel postulate) was dependent on the other four, mathematicians in the nineteenth century finally asked what the consequences would be if the postulate were independent. The doubts that arose from the failure to obtain a proof led to the creation of non-Euclidean geometries.
To Communicate an Idea to Others
Often, mathematicians (of both the student and adult variety) have a strong conviction that a conjecture is true. Their belief may stem from an informal explanation or some convincing cases. They do not harbor any internal doubt, but there is a broader audience that retains some skepticism. A proof allows the mathematician to convince others of the correctness of their idea. A Making Mathematics teacher, in the midst of doing research with colleagues, shared his feelings about proof:
Just so I can get it off of my chest, I hate doing proofs with a passion. It's the part of mathematics that I grew to hate when I was an undergraduate, and it's what so many of my former students come back and tell me turned them off to continuing on as a math major. I remember having a professor who held us responsible for every proof he did in class. We'd probably have a dozen or more to know for each exam, in addition to understanding the material itself. I can remember just memorizing the steps, because the approaches were so bizarre that no "normal person" would ever think of them in a million years (yes, I know I'm stereotyping).
This teacher's frustrations with proofs involved having to memorize arguments that were neither revealing (and therefore, not entirely convincing) nor sufficiently transparent about the process by which they were created. Yet, this same teacher, on encountering collegial doubts about his conjecture concerning Pascal's triangle wrote, "Well, I decided to try and convince you all that the percentage of odds does in fact approach zero as the triangle grows by proving it." His efforts over several days produced a compelling proof. His conflicting attitudes and actions highlight the distinction between proofs as exercises and proofs as tools for communication and validation. A genuine audience can make an odious task palatable.
For the Challenge
Difficult tasks can be enjoyable. Many mathematical problems are not of profound significance, yet their resolution provides the person who solves them with considerable gratification. Such success can provide a boost in self-esteem and mathematical confidence. The process of surmounting hurdles to a proof can have all of the thrill of a good mystery. Students (and adults) are justifiably excited when they solve a problem unlike any they have previously encountered and which no one else may have ever unraveled.
To Create Something Beautiful
The more students engage in mathematics research, the more they develop their own aesthetic for mathematical problems and methods. The development of a proof that possesses elegance, surprises us, or provides new insight is a creative act. It is rewarding to work hard to make a discovery or develop a proof that is appealing. The mathematician Paul Erdös spoke of proofs that were "straight from the Book," the Book being God's collection of all the perfect proofs for every theorem. Although Erdös did not actually believe in God, he did believe that there were beautiful truths waiting to be uncovered (Hoffman).
To Construct a Larger Mathematical Theory
We rarely consider mathematical ideas in a vacuum. Our desire to advance a broader mathematical problem is often a source of motivation when we attempt a proof. For example, a number of mathematicians spent many years attempting to characterize a class of objects known as simple groups (Horgan). Their cumulative efforts resulted in thousands of pages of proofs that together accomplished the task. Many of these proofs, significant in their own right, were of even greater value because of their contribution to the larger understanding that the mathematics community sought.
For a further discussion of the role of proof in school curricula, see Do We Need Proof in School Mathematics? (Schoenfeld, 1994).
We can prove many different types of claims.
In general, students should attempt a proof in response to one of the motivations listed in the Why Do We Prove? section. If students only attempt proofs as exercises, they come to see proof as an after-the-fact verification of what someone else already knows; it becomes disconnected from the process of acquiring new knowledge. However, students derive considerable satisfaction from proving a claim that has arisen from their own investigations.
If students in a class disagree about a conjecture, then that is a good time for the individuals who support it to look for a proof in order to convince the doubters. If a student seems particularly taken with a problem and starts to feel some sense of ownership for the idea, then she should attempt a proof in response to her own mathematical tastes. If two student claims have a connection, the students may want to prove the one that is a prerequisite for proving the other.
A focus on formal proof should grow gradually. When we emphasize formal proof too soon and too often, before students have developed a rich repertoire of proof techniques and understanding, their frustration with, and subsequent dislike of, the challenge can become an obstacle to further progress. It is always appropriate to ask students what led them to their conjectures and why they think they are true. We begin by asking for reasons, not formal proofs, and establish the expectation that explanations should be possible and are important. Note that we ask "why" regardless of the correctness of a claim and not just for false propositions. As we highlight that they always should be interested in why an idea is true, students begin to develop the habit of asking "why?" themselves.
A good time to ask a student to write out a proof is when you think that she has already grasped the connections within a problem that are essential to the development of a more formal argument. This timing will not only lead to an appreciation for how proofs can arise organically during research, it will also lead to some confidence regarding the creation of proofs.
It is not necessary for students to prove all of their claims just for the sake of thoroughness. Published articles often prove the hard parts and leave the easier steps "for the reader." In contrast, a student should begin by trying to prove her simpler assertions (although it may be difficult to figure out how hard a problem will be in advance). When students have conjectures, label them with the students' names and post them in the class as a list of open problems. Then, as students grow in the rigor and complexity of their proofs, they can return to questions that have become accessible.
When a student does create a proof, have her describe it to a peer, give an oral presentation to the class, or write up her thinking and hand it out for peer review. The students should come to see themselves as each other's editorial board, as a group of collaborating mathematicians. They should not be satisfied if their classmates do not understand their argument. It is a long struggle getting to the point where we can write intelligible yet efficient mathematics. One of my students once presented proofs of a theorem four times before the class gave him the "official Q.E.D". Each of the first three presentations generated questions that helped him to refine his thinking, his definitions, and his use of symbols.
Learning to prove conjectures is a lifelong process, but there are some basic considerations and methods that students should focus on as they begin to develop rigorous arguments. The first concern is that they be clear about what they are trying to prove: that they unambiguously identify the premises and the conclusions of their claim (see Conditional Statements in Conjectures).
The next goal should be to try to understand some of the connections that explain why the conjecture might be true. As we study examples or manipulate symbolic representations, we gain understanding that may lead to a proof. Because understanding and proof often evolve together, if a student wants to prove a conjecture that a classmate or teacher has presented, she should consider undertaking an investigation that will help her recreate the discovery of the result. This process may provide insight into how a proof might be produced. (See Schoenfeld (1992) for more discussion of problem solving and proof.)
Often, a proof involves a large number of steps that, in our thinking about the problem, we organize into a smaller number of sequences of related steps (similar to when computer programmers turn a number of commands into a single procedure). This "chunking" of many steps into one line of reasoning makes it possible to grasp the logic of a complicated proof. It also helps us to create an outline of a potential proof before we have managed to fill in all of the needed connections (see Proof Pending a Lemma below).
When we create a proof, we seek to build a bridge between our conjecture's premise and its conclusion. The information in the premise will have a number of possible consequences that we can use. Similarly, we try to identify the many conditions that would suffice to prove our conclusion. For example, if we know that a number is prime, there are numerous properties of prime numbers that we might bring into play. If we seek to show that two segments are congruent, we might first show that they are corresponding sides of congruent figures, that they are both congruent to some third segment, or that it is impossible for one to be either shorter or longer than the other. Once we have considered the possibilities that stem from our premises and lead to our conclusions, we have shortened the length of our proof from "if premise, then conclusion" to "if consequence-of-premise, then conditions-leading-to-conclusion" (figure 2). A main task comes in trying to determine if any of these new statements (one for each combination of consequence and condition) is likely to be easier to prove than the original.
Figure 2. Searching for a path to a proof
Some conjectures' conclusions involve more than one claim. Recognizing all of these requirements can be a challenge. For example, to show that the formula (n² − m², 2mn, n² + m²) is a complete solution to the problem of identifying Pythagorean triples, we need to show both that it always generates such triples and that no triples are missed by the formula. Cases such as this, in which we need to demonstrate both a claim and its converse, are common.
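For the first half of that claim, a short algebraic check suffices (the second half, that no triples are missed, needs a separate argument). In LaTeX:

% Verifying that the formula always yields a Pythagorean triple
% (this is only the "generates triples" direction of the claim).
\[
(n^2 - m^2)^2 + (2mn)^2
  = n^4 - 2n^2m^2 + m^4 + 4n^2m^2
  = n^4 + 2n^2m^2 + m^4
  = (n^2 + m^2)^2 .
\]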
Sometimes, two approaches to proving a result will differ in both their method and what they teach us. A student working on the Amida-kuji project defined a minimal configuration of horizontal rungs as one that results in a particular rearrangement of the numbers using the fewest rungs possible. He then conjectured that the number of distinct minimal configurations would always be greater for the reversal of n items (1 2 3 … n goes to n … 3 2 1) than for any other permutation of the n values. Does this student need to find and prove a formula for the number of minimal configurations for each permutation? Can he somehow compare the number of minimal configurations without actually counting them explicitly and show that one set is larger? These two approaches might both prove his claim, but they require distinctly different findings along the way.
Just as we make decisions about the sequencing of ideas that we use to construct a proof, so, too, do we choose from among an array of different technical tools. In the quadrilateral proof above, we represented the same setting using coordinates as well as synthetically. We transform our mathematical ideas into diagrams, numeric examples, symbolic statements, and words. Within those broad categories, there are numerous ways of representing information and relationships and each representation offers the possibility of new understandings.
We may further our understanding of a problem by looking at a simpler version of it. We can apply this same approach to proof: prove a special case or subset of cases before taking on the entire problem. For example, a student working on the Raw Recruits project first proved theorems about the cases with one or two misaligned recruits and then worked up to the general solution. Choosing the right simplification of a problem is important. Had the student focused on a fixed number of total recruits rather than of misaligned ones, she might not have been as successful finding patterns.
The list of proof techniques is endless. Providing students with a repertoire of a few powerful, general methods can give them the tools that they need to get started proving their conjectures. These first techniques also whet students' appetites to learn more. Each student's own research and reading of mathematics articles (see Reading Technical Literature in Getting Information) will provide additional models to consider when constructing a proof. When students begin work within a new mathematical domain, they will need to learn about the tools (representations, techniques, powerful theorems) common to the problems that they are studying.
It is not possible to give ironclad rules for when a given approach to proof will prove fruitful. Therefore, in addition to providing guidance ("It might be worthwhile holding one of your variables constant"), our job mentoring students engaged in proof is to ask questions that will help them reflect on their thinking. Is planning a part of their process (are they considering alternative strategies or just plowing ahead with the first approach that occurs to them)? Are they connecting the steps that they are exploring with the goal that they are trying to reach (can they explain how their current course of action might produce a useful result)? Are they periodically revisiting the terms of their conjecture to see that they have not drifted off course in their thinking? See Getting Stuck, Getting Unstuck - Coaching and Questioning for further questions.
The most basic approach that students can use to develop understanding and then a proof is to study specific cases and seek to generalize them. For example, a student was exploring recursive functions built from two constants a and b and a starting value f(1) (for instance, rules of the form f(n) = a·f(n − 1) + b). She wanted to find an explicit formula for f and began by computing the first few values for one particular choice of a, b, and f(1).
Those first values revealed some patterns, but no breakthrough. She then took an algebraic perspective on the problem by looking at the form and not the value of the results. She decided to keep her examples general by not doing the arithmetic at each step, leaving every term written in terms of a, b, and f(1).
This form revealed an explicit formula, which pointed the way to a general rule for all a, b, and f(1). This example demonstrates why it is sometimes advantageous not to simplify an expression.
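A short symbolic computation in the same spirit can be run with sympy. The linear form f(n) = a·f(n − 1) + b is used here purely for illustration (the student's actual example is not recorded above); the point is that leaving each step unsimplified lets the structure show through.

# Iterating a recurrence symbolically, without collapsing the arithmetic,
# so the structure of the result stays visible. The linear form
# f(n) = a*f(n-1) + b is an assumed illustration.
import sympy as sp

a, b, f1 = sp.symbols("a b f_1")

term = f1
for n in range(2, 6):
    term = a * term + b        # one step of the recurrence
    print(f"f({n}) =", sp.expand(term))
# f(2) expands to a*f_1 + b, f(3) to a**2*f_1 + a*b + b,
# f(4) to a**3*f_1 + a**2*b + a*b + b, and so on; the pattern suggests
# f(n) = a**(n-1)*f_1 + b*(a**(n-1) - 1)/(a - 1) whenever a != 1.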
Algebra is a familiar, all-purpose tool that we should encourage students to use more often. Many students primarily think of variables as specific unknowns and not as placeholders for an infinite number of examples (see the practice proofs and their solutions for examples of algebraic expressions used in this manner).
Examples as Disproof and Proof
An example cannot prove an affirmative statement about an infinite class of objects. However, a single example, called a counterexample, is sufficient to disprove a conjecture and prove the alternative possibility. For example, we know of many even perfect numbers (Weisstein). The discovery of a single odd perfect number would be an important proof that such numbers, conjectured not to exist, are possible.
When a conjecture involves a finite set of objects, we can prove the conjecture true by showing that it is true for every one of those objects. This exhaustive analysis is sometimes the only known means for answering a question. It may not be elegant, but it can get the job done if the number of instances to test is not overwhelmingly large. The mathematicians who proved the Four Color Theorem (MacTutor) broke the problem into 1476 cases and then programmed a computer to verify each one. Such proofs are not entirely satisfying because they are less likely than a proof that covers all cases simultaneously to have explanatory value.
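A few lines of Python can make the distinction concrete: an exhaustive search over a finite range either turns up a counterexample or settles the claim for that range only. The bound below is an arbitrary choice for illustration.

# Exhaustively testing every odd number below a bound for perfection.
# Finding one would be a counterexample to the conjecture that none exist;
# finding none proves nothing beyond the range that was checked.

def is_perfect(n):
    divisor_sum = sum(d for d in range(1, n // 2 + 1) if n % d == 0)
    return divisor_sum == n

odd_perfect = [n for n in range(3, 10_000, 2) if is_perfect(n)]
print(odd_perfect)   # [] -- no counterexample in this range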
We often break a problem down into categories of instances or cases and not all the way down to individual instances. For example, a theorem about triangles may require separate analyses for acute, right, and obtuse triangles. One challenge when proving via a case-by-case analysis is to have a rigorous means of showing that you have identified all of the different possible cases.
One of the more exciting experiences in mathematics is the recognition that two ideas are connected and that the truth of one is dependent on the truth of the other. Often a student will be working on a proof and discover that they have a line of reasoning that will work if some other claim is true. Encourage the student to develop their main argument and then return to see if they can fill in the missing link. A claim that is not a focus of your interest, but which you need for a larger proof, is called a lemma. As students working on a common problem share their discoveries through oral and written reports, they may recognize that a fellow researcher has already proven a needed lemma. Alternatively, they may realize that their conjecture is a straightforward consequence of a general result that another classmate has proven. We call a theorem that readily follows from an important result a corollary. These events contribute enormously to students' understanding of mathematics as a communal activity.
There are many well-known cases of theorems that mathematicians have proven pending some other result. Of course, that means that they are not actually theorems until the lemma has been established. What is a theorem in these situations is the connection between two unproven results. For example, Gerhard Frey proved that if a long-standing problem known as the Taniyama-Shimura conjecture were true, then Fermat's Last Theorem (MacTutor) must be as well. This connection inspired Andrew Wiles to look for a proof of the Taniyama-Shimura conjecture.
How do we know that we have proven our conjecture? For starters, we should check the logic of each claim in our proof. Are the premises already established? Do we use the conclusions to support a later claim? Do we have a rigorous demonstration that we have covered all cases?
We next need to consider our audience. Is our writing clear enough for someone else to understand it? Have we taken any details for granted that our readers might need clarified? Ultimately, the acceptance of a proof is a social process. Do our mathematical peers agree that we have a successful proof? Although we may be confident in our work, unless others agree, no one will build upon or disseminate our proof. Our theorem may even be right while our proof is not. Only when our peers review our reasoning can we be assured that it is clear and does not suffer from logical gaps or flaws.
If a proof is unclear, mathematical colleagues may not accept it. Their clarifying questions can help us improve our explanations and repair any errors. On the other hand, mathematical truth is not democratically determined. We have seen many classes unanimously agree that a false assertion was true because the students failed to test cases that yielded counterexamples. Likewise, there have been classes with one voice of reason trying to convince an entire class of non-believers. The validity of a proof is determined over time; readers need time to think, ask questions, and judge the thoroughness of an exposition. Students should expect to put their proofs through the peer review process.
When do peers accept a proof? When they have understood it, tested its claims, and found no logical errors. When there are no intuitive reasons for doubting the result and it does not contradict any established theorems. When time has passed and no counterexamples have emerged. When the author is regarded as capable ("I don't understand this, but Marge is really good at math"). Some of these reasons are more important than others, but all have a role in practice.
See Davis and Hersh's (1981) The Mathematical Experience for a fine collection of essays on the nature of proof, on methods of proof, and on important mathematical conjectures and theorems.
Since one reason we tackle proofs is for the challenge, we are entitled to a modest "celebration" when a proof is completed. The nicest honor is to name a theorem after the student or students who proved it. If you dub proofs after their creators (e.g., Laura's Lemma or the Esme-Reinhard Rhombus Theorem) and have them posted with their titles, students will be justifiably proud. Give conjectures titles, as well, in order to highlight their importance and as a way to promote them so that others will try to work on a proof.
Introduce students to the traditional celebration: ending a proof with "Q.E.D." Q.E.D. is an acronym for "quod erat demonstrandum," Latin for "that which was to be demonstrated." At the end of a proof by contradiction, students can use "Q.E.A.," which stands for "quod est absurdum" and means "that which is absurd" or "we have a contradiction here." These endings are the understated mathematical versions of "TaDa!" or "Eureka!" Modern, informal equivalents include AWD ("and we're done") and W5 ("which was what we wanted") (Zeitz, p. 45). We have also seen "" and "MATH is PHAT!" at the end of student proofs. Professional publications are now more likely to end a proof with a rectangle (∎) or to indent the proof to distinguish it from the rest of a discussion, but these are no fun at all.
Do remind students that once their celebration is over, their work is not necessarily done. They may still need to explore their theorem further to understand why it is true and not just that it is true, to come up with a clearer or more illuminating proof, or to extend their result in new directions. Additionally, proofs sometimes introduce new techniques that we can productively apply to other problems. In other words, the completion of a proof is a good time to take stock and figure out where to go next in one's research. Like movies that leave a loose strand on which to build a sequel, most math problems have natural next steps that we can follow.
We are not very pleased when we are forced to accept a mathematical truth by virtue of a complicated chain of formal conclusions and computations, which we traverse blindly, link by link, feeling our way by touch. We want first an overview of the aim and of the road; we want to understand the idea of the proof, the deeper context. - Hermann Weyl (1932)
The standard form for a mathematical proof is prose interwoven with symbolic demonstrations and diagrams. Students who write paragraph explanations will often comment that they do not yet have a "real proof." However, the two-column style that they believe to be the only acceptable format is often not as clear or informative as a proof with more English in it. Encourage them to add narrative to their proofs and to use whatever form seems most effective at communicating their ideas. Let them know that written language is a part of mathematics.
Weyl encourages us to tell the story of our proof at the start so that each step in the presentation can be located on that roadmap. We should be able to say to ourselves "Oh, I see why she did that. She is setting up for this next stage" rather than "Where on Earth did that come from? Why did she introduce that variable?" Our goal is not to build suspense and mystery, but to provide the motivation for the important steps in our proofs. As noted earlier, we improve a lengthy proof's story by considering how the pieces of the proof fit together into connected chunks that we can present as separate theorems or lemmas. These chapters in the story reduce the number of arguments that our readers have to manage at any given stage in their effort to understand our proof.
Published proofs are often overly refined and hide from the reader the process by which the mathematician made her discoveries. As teachers, we want to encourage students to share the important details of that process. What methods did they consider? Why did some work and others not? What were the examples or special cases that informed their thinking? What dead ends did they run into? The teacher mentioned above, who disliked proof, was frustrated because the proofs that he had read were too polished to be a guide for how to develop a proof. The more we include our data, insights, experimentation, and derivations in our proofs, the more they will help others with their own mathematics.
We want to find a balance between the desire to convey the process of discovery, which is often circuitous, and the need to present a coherent argument. Students should develop an outline for each proof that reflects which ideas are dependent on which others. They should punctuate their narrative with clearly labeled definitions, conjectures, and theorems. Proofs should include examples that reveal both the general characteristics of the problem as well as interesting special cases. Examples are particularly helpful, not as a justification, but because they provide some context for understanding the more abstract portions of a proof. Examples may also help clarify imprecise notation or definitions.
Some additional recommendations for making proofs more readable:
If any parts of your research were carried out collaboratively or based on someone else's thinking, be sure to acknowledge their work and how you built upon it. For a full discussion on how to write up your results, see Writing a Report in Presenting Your Research.
We evaluate proofs at several levels. First, we need to see if we can understand what the proof says. If our mathematical background is sufficient to understand the proof, then, with effort, we should be able to make sense of it (see Reading Technical Literature in Getting Information). Next, we want to decide whether the proof is actually a successful proof. Do all of the pieces fit together? Are the explanations clear? Convincing? A good proof does not over-generalize. If a proof does not work in all cases, is it salvageable for some meaningful subset of cases?
Students should be given time to read each other's proofs. They should be skeptical readers who are trying to help their classmate improve their work. They should be supportive by offering helpful questions about claims that are unclear or steps that would improve the proof. The writer of a proof should expect to address any concerns and to work through several drafts before the class declares her work completed. Although we are tempted to believe in our own discoveries, we are also obliged to look for exceptions and holes in our reasoning and not leave the doubting just to our peers.
Once a proof passes the first hurdle and we believe it is correct, we come to a different set of criteria for judging proofs. These criteria are both aesthetic and functional and help us to understand why we would want to find different ways to prove a particular theorem. Here are some considerations that students might apply to proofs that they study (see Evaluating Conjectures for further considerations):
Each of us has our own aesthetic for which areas of mathematics and ways of solving problems are most appealing. Mathematicians will often call a proof "elegant" or "kludgy" based on their standards of mathematical beauty. Is a substitution, offered without motivation, that quickly resolves a problem (e.g., let f(x) = cotan(1 − x/2)) magical, concise, or annoying? Whichever of the standards above move us to call a proof beautiful, it is an important recognition that judgments of beauty are part of mathematics. Share your own aesthetics with students and encourage them to develop their own. It is perfectly reasonable simply to enjoy geometric or number theoretic problems and solutions more than some other area of mathematics. Some students may love problems that start out complicated but then sift down to simple results. Help them to recognize and celebrate these interests while broadening their aesthetics through the sharing of ideas with each other.
Class Activity: One way to highlight the different characteristics of proofs is to ask students to study and compare alternative proofs of the same theorem. Hand out Three Proofs that √2 is irrational (table 1, below) and give students time to read all three slowly (note: students should be familiar with proof by contradiction). Ask them to write down questions that they have about the different steps in the proofs. Next have them work in small groups trying to answer the questions that they recorded and clarify how each proof achieves its goal. Have each student then write an evaluation of the proofs: Does each proof seem to be valid? If not, where do they identify a problem? Which proof appealed to them the most? Why? Ask them to consider the other criteria above and choose one or more to address in evaluating the proofs.
Students may note that there are similarities among the proofs. All three proofs are indirect and all three begin by eliminating the root and converting the problem to one of disproving the possibility that a² = 2b² for counting numbers a and b. These first steps reduce the problem to one involving only counting numbers instead of roots and remove the likelihood that any not-yet-proven assumptions about roots or irrational numbers will creep into the reasoning.
Students are drawn to different parts of the three proofs. Some prefer Proof B because it does not rely on the assumption (kids may call it a gimmick) that a and b have no common factors. This objection is a good occasion to discuss the "story" of how that assumption comes into proofs A and C. It is essential to establishing the contradiction later on in the proof, but how did the prover know it was needed? The answer is that they didn't and that it was put in place once the need was discovered (we have watched students develop this proof themselves and then stick in the condition in order to force the contradiction). If the authors of these proofs included details of their derivations, the story of how they thought up the proofs, they would avoid the discomfort that the austere versions create.
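For readers who do not have the handout, here is a standard reconstruction, in the spirit of Proof A, of how the no-common-factors assumption forces the contradiction (this is the classic textbook argument, not the handout's exact wording):

% A standard reconstruction of the parity argument.
Suppose $\sqrt{2} = a/b$ where $a$ and $b$ are counting numbers with no
common factor. Then $a^2 = 2b^2$, so $a^2$ is even and therefore $a$ is
even, say $a = 2k$. Substituting gives $4k^2 = 2b^2$, so $b^2 = 2k^2$ and
$b$ is even as well. Now $a$ and $b$ share the factor $2$, contradicting
the assumption that they had no common factor. Q.E.A.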
Proof C relies on a case-by-case analysis that the different ending digits cannot match. Again, despite the bluntness of its means, it seems to explain why a²/b² cannot reduce to 2. This method becomes more elegant with fewer cases when we look at the final digit in a smaller base such as base 3.
The point of the above discussion is not to have your students choose one "best" proof, but to have them weigh the pros and cons of each. We want them to discover that not everyone in the class has the same mathematical tastes. However, some criteria are more objective than others. For example, one important criterion is how easily a proof may be generalized to related problems. In the case of the three proofs in table 1, you may ask students to decide which extend readily to show that the roots of other integers (or all non-perfect squares) are irrational. We might also inquire why the same proof methods do not show that √4 is irrational.
Another objective criterion is the sophistication of the mathematics needed to support a proof. Proof A requires fewer lemmas than the other two. Despite students' frequent preference for proof B, it relies on the comparatively "heavy machinery" of the fundamental theorem of arithmetic (positive integers have a unique prime factorization). Mathematicians often applaud a proof that uses more elementary methods, but, in this case, the elementary approach is not necessarily easier to understand.
You can introduce the activity described above with other accessible theorems. Pythagorean Theorem and its Many Proofs (Bogomolny) and Pythagorean Theorem (Weisstein) provide several dozen different proofs of the Pythagorean Theorem. Make handouts of a variety of these proofs and have each student pick three to study. Which did they like best? Why? Do they prefer those that involved geometric dissections or algebraic calculations? Those that were shorter and skipped steps or those that explained each step carefully? Can the class provide the missing arguments for the less rigorous "proof without words" diagrams? Encourage them to see the particular appeal of each proof.
Earlier in this section, we suggested that students' proof experiences are most effective when they emerge organically from student investigations. Nevertheless, for a number of reasons, there is value to students practicing creating proofs as well. For example, practice helps students hone techniques and instincts that they can use in work that is more open-ended. Additionally, some of the reasons given in Why Do We Prove? remain relevant even if we are told what to prove. When students share their proofs with each other, they get further practice reading proofs and comparing the different types of reasoning used to justify theorems.
The transfer of understandings derived from practice problems is particularly likely if the practice is not overly structured. Proof exercises not connected to the study of a particular content area (e.g., triangle congruence or induction) force students to think about which of their many skills might help solve the problem. For each one, they might ask, "Should we introduce a variable? Will an indirect proof work?" This way, they are practicing methods and making thoughtful choices. If students do not have a clear reason for choosing one approach over another, point out to them that they do not have to be paralyzed in the face of this uncertainty. They can just start experimenting with different representations of the information and different proof methods until one of them works.
Students' first proofs are rarely polished or precise. They may over-emphasize one point while omitting an important consideration (see, for example, the student proof below). Without experience devising symbolic representations of their ideas, students' representations are often inefficient or unhelpful. For example, a student working on the Amida Kuji project was asked by her teacher to clarify and strengthen an English argument using symbols. She devised substitutes for her words ("h_i is a horizontal rung"), but the symbols had no value facilitating her computations and led to an argument that was more difficult to read. The proof had that "mathy" look to it, but, until the student had a better grasp of the underlying structures of the problem and their properties, she was in no position to develop a useful system of symbols.
When we respond to students' early proofs, our emphasis should be on the proofs' clarity and persuasiveness. Their arguments may take many forms: paragraphs, calculations, diagrams, lists of claims. Any of these may be appropriate. We want to help them identify any assumptions or connections that they have left unstated, but we also have to judge how convincing and complete a line of reasoning has to be. Can steps that are obvious be skipped? To whom must they be obvious? Does a proof have to persuade a peer, a teacher, or a less knowledgeable mathematics student? We want to help younger students develop rigor without bludgeoning them on specifics that they may not be ready to attend to. Can students adopt the attitude of the textbook favorite, "we will leave it as an exercise for the reader to verify that…"? Fine readings on this topic include "I would consider the following to be a proof…" and "Types of Students' Justifications" in the NCTM Focus Issue on the Concept of Proof (1998).
One answer to the above questions is that a student's classmates should be able to understand and explain their proofs. If classmates are confused, they should explain where they lose the thread of an argument or what they think a sentence means so that the author can rewrite her proof to address these confusions. Once a proof has passed the peer test, we can note additional possible refinements that will help our students develop greater sophistication in their thinking and presentation over time. Try to focus on certain areas at a time and expand students' rigor and use of symbols incrementally. We try to emphasize proper vocabulary first (see Definitions). The development of original and effective symbolic representations tends to take more time to appear.
Be aware of "hand-waving" in proofs. Hand-waving is what a magician does to distract his audience from a maneuver that he does not want them to notice. For mathematicians, hand-waving is a, perhaps unintentional, misdirection during a questionable part of an argument. The written equivalent often involves the words "must" or "could" (e.g., "the point must be in the circle " ) without justification of the claimed imperative. Sometimes we need to note, but accept, a bit of hand-waving because a gap is beyond a students ability to fill.
Many of the proof exercises provided here are more suitable for high school than middle school students. The whole class settings described below as well as practice problems 1, 4, 6, 7, 15, and 16 are likely to work with middle school students (although others may also be useful depending on the students' background). Particularly with younger students, doing proof within explorations that help them see how a proof evolves naturally from questions and observations is more valuable than exercises that ask them to prove someone else's claims. When we are given a "to prove", we have to go back and explore the setting anyway in order to develop some intuition about the problem. Older students, who have a broader array of techniques from which to choose, are more likely to benefit from proof exercises.
Once a class has proven theorems in the context of longer research explorations, you can use the practice problems as a shorter activity. Choose a few problems to put on a handout and distribute them to each student. Give the students a few days to work on the problems and then discuss and compare their discoveries and proofs. Based on these discussions and peer responses, each student can then rewrite one of their proofs to produce a polished solution.
Kids need more experience trying to prove or disprove claims without knowing the outcome ahead of time. In genuine mathematical work, we pose a conjecture, but we are not sure that it is true until we have a proof or false until we have a counterexample. The practice problems below sometimes call attention to this indeterminate status by asking students to "prove or disprove" the claim. Some of them actually ask for a proof even though the claim is false. We include these red herrings because students are often overly confident about their own conjectures and need to develop greater skepticism. Students should not consider this feature foul play, but good training in skeptical thinking. We are often taught to see texts as unerring authorities, but even the most prestigious journals of mathematics and science occasionally publish results that turn out to be false or incomplete. We have found that students are delighted when, and will put great effort into proving that, a textbook or teacher is wrong. We are simply building in that opportunity.
Once a false statement has captured students' attention, challenge them to turn it into a true claim. Can they identify a significant set of cases for which the claim is true (e.g., by changing the domain to remove the counterexamples, see problem 10)? Can they generalize the claim (e.g., problem 7 is false, but the more general claim for two relatively prime divisors is true)?
The related games Yucky Chocolate and Chomp are good settings for early work with proof. These games are effective with both middle and high school classes. Both games begin with an n-by-m array of chocolate squares (n not necessarily different from m) in which the top left square of chocolate has become moldy.
Rules for the game of Yucky Chocolate: On each turn in the game of Yucky Chocolate, a player chooses to break the bar of chocolate along a horizontal or vertical line. These breaks must be between the rows of squares (figure 3). The rectangle that is broken off is "eaten" by that player. The game continues with the rectangle that includes the yucky square. You can introduce this game with real chocolate, but the incentive to break off large pieces for consumption may overwhelm any other strategic thinking. Players take turns until one player, the loser, is left with just the yucky piece to eat.
Figure 3. A horizontal break in the game of Yucky Chocolate leaves a 2 by 4 board
Introduce your class to the rules of the game and then have them pair off to play several rounds starting with a 4 by 6 board. They can play the game on graph paper, mark off the starting size of the chocolate bar, and then shade in eaten portions each turn. After a few rounds of play, students will start to notice winning end-game strategies. In one fifth-grade class, the students observed that when a player faced a 2-by-2 board, they always lost. Given that observation, additional play led them to see why a 3-by-3 board was also a losing position. They were able to turn these conjectures into theorems with simple case-by-case analyses. For the 3-by-3 board, the symmetry of the situation meant that there were really only two distinct moves possible (leaving a 2-by-3 or 1-by-3 board). Each of these moves gave the other player a winning move (reducing the board to a 1-by-1 or 2-by-2 case).
After the class realized that the smaller square positions were losers, some students took the inductive leap to conjecture that all n-by-n boards represented losing positions. One girl, who had never studied proof by induction, excitedly began explaining how each larger square array could be turned into the next smaller one and that she could always force the game down to the proven losing square positions. She had an intuitive understanding of the validity of an inductive argument. She then stopped and realized that her opponent might not oblige her by carving off just one column and that she did not know how big the next board might be. She had cast doubt on the reasoning of her own argument. She was facing another form of inductive proof in which one builds not just from the next smallest case but all smaller cases. After a while, the class was able to show that regardless of the move that an opponent facing an n-by-n board takes, there was always a symmetrical move that made a smaller square board. Therefore, they could inexorably force a win. This argument made possible a full analysis of the games that led to a win for the first player (n ≠ m) and those that should always be won by the second player.
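The class's conjecture is also easy to check by brute force for small boards. The sketch below simply tries every legal break and records which positions are wins for the player to move; the function names and the bound of six are my own choices.

# A brute-force check of the conjecture: with the moldy square in a corner,
# an n-by-m board is a loss for the player to move exactly when n == m.
from functools import lru_cache

@lru_cache(maxsize=None)
def first_player_wins(n, m):
    """True if the player about to move on an n-by-m board can force a win."""
    if n == 1 and m == 1:
        return False                      # only the yucky square is left: you lose
    # Break off columns (leaving k columns) or rows (leaving k rows).
    for k in range(1, m):
        if not first_player_wins(n, k):
            return True
    for k in range(1, n):
        if not first_player_wins(k, m):
            return True
    return False

for n in range(1, 7):
    print([first_player_wins(n, m) for m in range(1, 7)])
# Each row shows False exactly on the diagonal (square boards).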
Once students have a complete understanding of Yucky Chocolate, the game provides a nice opportunity for practicing problem posing. Ask the students to each develop one or more variations of the game. What characteristics can they change? Does the game remain interesting? Does it become more complicated? Do they have to change any rules to make it still make sense? Some of the changes that students have explored include moving the location of the moldy square, making the problem three-dimensional, changing the number of players, or playing with a triangular grid of chocolate.
Rules for the game of Chomp: The game of Chomp starts with the same slightly moldy chocolate bar, only the players take turns biting the chocolate bar with a right-angled mouth. These bites remove a chosen square and all remaining squares below and/or to the right of that square (figure 4. See Joyce for further examples).
Figure 4. Two turns in a game of Chomp
These bites can leave behind boards with complicated shapes that make it difficult to analyze which player should win for a given starting board. Student investigations can identify many sets of initial configurations (e.g., the 2-by-n or n-by-n cases) where a winning strategy can be determined and a proof produced (see Keeley and Zeilberger). Zeilberger's Three-Rowed Chomp provides an elegant existence proof that the first player in a game must always have a winning strategy. Being an existence proof, it provides no hint at how the winning strategy might be found. See Gardner, Joyce, Keeley, and Stewart for more on the game of Chomp. The article by Stewart also discusses Yucky Chocolate. The Keeley article provides a lovely discussion of one class's definitions, conjectures, and theorems about the game of Chomp.
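A similar brute-force solver for Chomp, storing each position as a tuple of row lengths (the staircase shapes that bites leave behind), lets students test small boards against the existence argument; the representation and names below are illustrative choices.

# A brute-force Chomp solver for small boards.
from functools import lru_cache

def bite(rows, r, c):
    """Remove square (r, c) and everything below and to its right."""
    new = []
    for i, length in enumerate(rows):
        new_len = length if i < r else min(length, c)
        if new_len > 0:
            new.append(new_len)
    return tuple(new)

@lru_cache(maxsize=None)
def winning(rows):
    """True if the player to move can avoid being left with the moldy square."""
    if rows == (1,):
        return False                      # forced to eat the moldy square
    for r, length in enumerate(rows):
        for c in range(length):
            if (r, c) == (0, 0):
                continue                  # biting the moldy square itself loses outright
            if not winning(bite(rows, r, c)):
                return True
    return False

for n in range(1, 5):
    print([("1st" if winning((m,) * n) else "2nd") for m in range(1, 5)])
# Every board except 1-by-1 comes out as a first-player win, consistent with
# the existence argument above; the table does not say how the win is achieved.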
Rather than work through an exploration of each quadrilateral type sequentially, provide the class with standard definitions of each and have them draw (or construct) examples of each. Point out that each shape has a number of properties that are a consequence of their definition (e.g., reflection symmetry) that are not explicitly part of their definition. The handout Quadrilateral Properties will encourage a systematic exploration of these properties, each of which can be turned into a conjecture (e.g., "if the diagonals of a quadrilateral are congruent and bisect each other, then the figure is a rectangle" or "if a figure is a rhombus then it is a parallelogram") that students can try to prove (see writing conjectures for more on this topic). For each proof, they should produce a labeled diagram and a statement of the given information in terms of those labels. The given information should be strictly limited to that provided in the definitions of the terms in the premise of the conjecture.
Once students have generated a number of proofs using the above activity, they can move on to explore the properties of the perpendicular bisectors or midpoints of the sides or the bisectors of the angles of the different quadrilaterals. They might even explore dividing the angles or sides into multiple equal parts (n-secting them). Dynamic geometry programs, such as Geometer's Sketchpad, are particularly helpful in creating clear diagrams and taking accurate measurements that aid students in making discoveries with these settings.
Diagrams play a complex role in mathematics. Many mathematicians think about even quite abstract ideas using visual images. Algebraic ideas often have natural geometric representations. We will often try to draw a picture of a problem that we are exploring because the image conveys a great deal of information organized according to a set of meaningful relationships. However, pictures do have limitations that students need to appreciate. In trying to gain insight from a diagram, we are restricted by its static nature. It shows us just one instance. The appeal of dynamic programs such as Geometer's Sketchpad is, in part, that they allow us to quickly view multiple examples. Diagrams can mislead us if they are not created with precision and even accurate pictures may possess properties that are not typical of all cases. While diagrams may persuade and inform us, they do not constitute proofs. As with other types of examples, a picture may look convincing simply because we have not yet imagined how to construct a counterexample.
We want to help our students learn how to use diagrams as tools for furthering their investigations and how to extract information from them. As they work on problems, we can prompt them to consider whether a graph or other visual representation can be generated and studied. When they are reading other people's proofs, encourage them to study all labels and features and to connect those details to the text and symbolic statements in the discussion, to see how they illuminate that discussion and whether they serve as an effective visual counterpart.
Students can also get practice interpreting diagrams by studying "proofs without words." These "proofs" are pictures that their author considers so enlightening that they readily convince us that we can dependably generalize the pattern to all cases. Depending on how wordless a proof without words is (and some do have the occasional accompanying text), the pictures can take some effort to analyze. Effective pictures can be the inspiration for a more formal proof. Winicki-Landman (1998, p. 724) cautions that some students may respond negatively to proofs without words if they feel that they will have to come up with such elegant diagrams themselves. Be sure to emphasize the value of working with diagrams and the purpose of these activities. When "proof pictures" do not even have variable labels, encourage students to choose variables for the different quantities in the picture and to see what the pictures tell them about those variables.
See Proof without words (Bogomolny) for further discussion and additional examples.
Solutions to these problems are provided below as a way for you to gauge the difficulty of the problems and to determine their appropriateness for your class. Do not expect or require the students' solutions to match the ones provided here. Better yet, work on the problems yourself and with your students so that you can model how you think about analyzing problems and constructing proofs. After you and the students have your own results, you can use the solutions to make interesting comparisons. As the class discusses the different solutions to the problems, be sure to highlight the different methods (e.g., induction, proof by contradiction, case-by-case analysis) that they used. This emphasis will reinforce the message that there are common techniques that are often effective.
Once students have worked through some initial proofs, it is good to anticipate the frustrations and barriers that they will face as they attempt longer and harder problems. The NOVA (1997) video The Proof, which details Andrew Wiles' work on Fermat's Last Theorem, provides a motivational lesson that also tells students about one of the great mathematical accomplishments of the past century. Although Wiles' proof is intimidating in its inaccessibility, his personal struggle and emotional attachment to the task are inspiring. After watching the video about his seven-year journey, students have a greater appreciation for the role that persistence plays in successful endeavors. The article Ten lessons from the proof of Fermat's Last Theorem (Kahan) can be used as a teacher's guide for a follow-up discussion. See Student and Teacher Affect for a further discussion of motivational considerations.
Note: Some of these problems may ask you to prove claims that are not true. Be sure to approach each with some skepticism: test the claims and make sure that a proof attempt is called for. If you disprove a statement, try to salvage some part of the claim by changing a condition.
Bogomolny, Alexander (2001). Pythagorean triples. Cut-the-knot. Available online at http://www.cut-the-knot.com/pythagoras/pythTriple.html.
Bogomolny, Alexander (2001). Infinitude of primes. Cut-the-knot. Available online at http://www.cut-the-knot.com/proofs/primes.html.
Bogomolny, Alexander (2001). Non-Euclidean geometries, models. Cut-the-knot. Available online at http://www.cut-the-knot.com/triangle/pythpar/Model.html.
Bogomolny, Alexander (2001). Integer iterations on a circle. Cut-the-knot. Available online at http://www.cut-the-knot.com/SimpleGames/IntIter.html.
Bogomolny, Alexander (2001). Proof without words. Cut-the-knot. Available online at http://www.cut-the-knot.com/ctk/pww.shtml.
Bogomolny, Alexander (2001). Proofs in Mathematics. Cut-the-knot. Available online at http://www.cut-the-knot.com/proofs/index.html.
Berlinghoff, William, Clifford Sloyer, & Eric Wood (1998). Math connections. Armonk, N.Y.: It's About Time, Inc.
Brown, Stephen & Walter, Marion (1983). The art of problem posing. Hillsdale, NJ: Lawrence Erlbaum Associates.
Carmony, Lowell (1979, January). Odd pie fights. Mathematics Teacher, 61-64.
Chaitin, G. J. The Berry paradox. Available online at http://www.cs.auckland.ac.nz/CDMTCS/chaitin/unm2.html.
Davis, Philip and Reuben Hersh (1981). The mathematical experience. Boston, Massachusetts: Houghton Mifflin Company.
Deligne, Pierre (1977, 305(3)). Lecture notes in mathematics, 584. Springer Verlag.
Erickson, Martin & Joe Flowers (1999). Principles of mathematical problem solving. New Jersey, USA: Prentice Hall.
Flores, Alfinio (2000, March). Mathematics without words. The College Mathematics Journal, 106.
Focus Issue on The Concept of Proof (1998, November). Mathematics Teacher. Available from NCTM at http://poweredge.nctm.org/nctm/itempg.icl?secid=1&subsecid=12&orderidentifier=
Gardner, Martin (1986). Knotted doughnuts and other mathematical recreations. New York, N.Y.: W. H. Freeman and Company, 109-122.
Hoffman, Paul (1998). The man who loved only numbers. New York, New York: Hyperion.
Horgan, John (1996, April). The not so enormous theorem. Scientific American.
Joyce, Helen (2001, March). Chomp. Available on-line at http://plus.maths.org/issue14/xfile/.
Kahan, Jeremy (1999, September). Ten lessons from the proof of Fermats Last Theorem. Mathematics Teacher, 530-531.
Keeley, Robert J. (1986, October). Chomp: an introduction to definitions, conjectures, and theorems. Mathematics Teacher, 516-519.
Knott, Ron (2000). Easier Fibonacci puzzles. Available online at http://www.mcs.surrey.ac.uk/Personal/R.Knott/Fibonacci/fibpuzzles.html.
Lee, Carl (2002). Axiomatic Systems. Available for download at ../../../handbook/teacher/Proof/AxiomaticSystems.pdf.
MacTutor History of Mathematics Archive (1996). The four colour theorem. Available online at http://www-history.mcs.st-andrews.ac.uk/history/HistTopics/The_four_colour_theorem.html.
MacTutor History of Mathematics Archive (1996). Fermats last theorem. Available online at http://www-groups.dcs.st-and.ac.uk/~history/HistTopics/Fermat's_last_theorem.html.
MegaMathematics (2002). Algorithms and ice cream for all. Available online at http://www.cs.uidaho.edu/~casey931/mega-math/workbk/dom/dom.html.
Nelsen, Roger (1993). Proofs without words. Washington, D.C.: The Mathematical Association of America.
NOVA (1997). The proof. WGBH/PBS. See http://www.pbs.org/wgbh/nova/proof/ for more information and http://www.pbs.org/wgbh/shop/novavidedu06detect.html#proof to order.
Peterson, Ivars (1996, December 23). Prime theorem of the century. MAA online: MathTrek. Available online at http://www.maa.org/mathland/mathland_12_23.html.
Peterson, Ivars (1998, February 23). The limits of mathematics. MAA online: MathTrek. Available online at http://www.maa.org/mathland/mathtrek_2_23_98.html.
Platonic Realms Interactive Mathematics Encyclopedia (PRIME). Gödel's theorems. Available online at http://www.mathacademy.com/pr/prime/articles/godel/index.asp.
Schoenfeld, Alan (1992). Learning to think mathematically: problem solving, metacognition, and sense-making in mathematics. Available online at http://www-gse.berkeley.edu/faculty/aschoenfeld/LearningToThink/Learning_to_think_Math06.html#Heading18. Read from this point in the essay to the end.
Schoenfeld, Alan (1994, 13(1)). Do we need proof in school mathematics? In What do we know about mathematics curricula? Journal of Mathematical Behavior, 55-80. Available online at http://www-gse.berkeley.edu/Faculty/aschoenfeld/WhatDoWeKnow/What_Do_we_know_02.html#Heading4.
Stewart, Ian (1998, October). Mathematical recreations: playing with chocolate. Scientific American, 122-124.
Weisstein, Eric (2002). Perfect Number. Eric Weisstein's World of Mathematics. Available online at http://mathworld.wolfram.com/PerfectNumber.html.
Weisstein, Eric (2002). Pythagorean Theorem. Eric Weisstein's World of Mathematics. Available online at http://mathworld.wolfram.com/PythagoreanTheorem.html.
Weyl, Hermann (1932). Unterrichtsblätter für Mathematik und Naturwissenschaften, 38, 177-188. Translation by Abe Shenitzer (1995, August-September) appeared in The American Mathematical Monthly, 102:7, 646. Quote available online at http://www-groups.dcs.st-and.ac.uk/~history/Quotations/Weyl.html.
Winicki-Landman, Greisy (1998, November). On proofs and their performance as works of art. Mathematics Teacher, 722-725.
Zeilberger, Doron (2002). Three-rowed Chomp. Available online at http://www.math.rutgers.edu/~zeilberg/mamarim/mamarimPDF/chomp.pdf.
Zeitz, Paul (1999). The art and craft of problem solving. New York: John Wiley and Sons.